Front cover

IBM TotalStorage SAN Volume Controller

Install, use, and troubleshoot SAN Volume Controller

Learn how to implement block virtualization

Perform backup and restore on a cluster

Jon Tate
Steve Cody
Bent Lerager
Fred Scholten
Marcus Thordal

ibm.com/redbooks
International Technical Support Organization

IBM TotalStorage SAN Volume Controller

April 2005

SG24-6423-02
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.

Third Edition (April 2005)

This edition applies to the SAN Volume Controller Version 2 Release 1. Testing to produce this book was
carried out with pre-GA code. Some of the window examples may differ slightly from the code that you use.
However, the general principles of operation are consistent.

© Copyright International Business Machines Corporation 2003, 2004, 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Chapter 1. Introduction to storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 The need for storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 In-band virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Out-of-band virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Chapter 2. IBM TotalStorage SAN Volume Controller overview . . . . . . . . . . . . . . . . . . 11


2.1 Maximum supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Glossary of commonly used terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Virtualization overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Compass architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.1 SAN Volume Controller clustering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.2 SAN Volume Controller virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.3 SAN Volume Controller multipathing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 SAN Volume Controller logical configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 SAN Volume Controller compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Software licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Chapter 3. Planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.1 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.2 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 SAN planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3.1 SAN definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.2 General design guidelines with SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.3 Boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.4 Configuration saving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.5 High availability SAN design and configuration rules with SVC . . . . . . . . . . . . . . 35
3.4 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5 Naming convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5.1 Dual room high availability configuration with the SVC. . . . . . . . . . . . . . . . . . . . . 40
3.5.2 Local and remote SAN fabrics with SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.6 SVC disk subsystem planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.6.1 Block virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.6.2 MDGs, I/O groups, virtual disks, and managed disks . . . . . . . . . . . . . . . . . . . . . . 44
3.6.3 Extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.4 Image mode virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.5 Managed mode virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.6.6 Allocation of free extents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.6.7 Selecting MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6.8 I/O handling and offline conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6.9 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.6.10 Virtualization operations on virtual disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6.11 Creating an MDisk group (extent size rules) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6.12 Creating a managed disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6.13 Creating a virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6.14 Quality of service on VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.6.15 Creating a host (LUN masking). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.6.16 Expanding an SVC cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.6.17 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.7 SVC supported capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.7.1 Adding ESS storage to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.7.2 Adding DS4000 storage to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.7.3 LUN layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Chapter 4. Performance and capacity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67


4.1 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.2 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.3 SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.4 Maximum capability of an I/O group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Planning guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2.1 I/O queue depth handling in large SANs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.2.2 SVC managed and virtual disk layout planning. . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Chapter 5. Initial installation and configuration of the SVC . . . . . . . . . . . . . . . . . . . . . 77


5.1 Preparing for installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 Secure Shell overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2.1 Generating public and private SSH key pair using PuTTY . . . . . . . . . . . . . . . . . . 78
5.3 Basic installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.3.1 Creating the cluster (first time) using the service panel . . . . . . . . . . . . . . . . . . . . 82
5.4 Completing the initial cluster setup using the Web interface. . . . . . . . . . . . . . . . . . . . . 84
5.4.1 Uploading the SSH public key to the SVC cluster. . . . . . . . . . . . . . . . . . . . . . . . . 90
5.4.2 Configuring the PuTTY session for the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.4.3 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.4.4 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Chapter 6. Quickstart configuration using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . 107


6.1 Adding nodes to the cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.2 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3 Creating host definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.4 Displaying Managed Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.5 Creating Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.6 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.7 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Chapter 7. Quickstart configuration using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . 119


7.1 Adding nodes to the cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
7.1.1 Installing certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.2 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.3 Creating host definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

7.4 Displaying managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.5 Creating Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.6 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.7 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

Chapter 8. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153


8.1 SAN configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.2 SVC setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.3 Switch and zoning configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.4 AIX-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.4.1 Configuring the AIX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.4.2 Support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.4.3 Host adapter configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.4.4 SDD installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.4.5 Discovering the assigned VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8.4.6 Using SDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.4.7 Creating and preparing volumes for use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.4.8 Expanding an AIX volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.4.9 Removing an SVC volume on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8.4.10 Running commands on an AIX host system . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.5 Windows-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.5.1 Configuring Windows 2000 and Windows 2003 hosts . . . . . . . . . . . . . . . . . . . . 169
8.5.2 Support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.5.3 Host adapter installation and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.5.4 SDD installation on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.5.5 Windows 2003 and MPIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.5.6 Discovering the assigned VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.5.7 Expanding a Windows 2000/2003 volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
8.5.8 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
8.5.9 Using SDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
8.6 Microsoft Volume Shadow Copy Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
8.6.1 How it works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
8.6.2 Installing the IBM TotalStorage hardware provider . . . . . . . . . . . . . . . . . . . . . . . 186
8.6.3 Using VSS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
8.6.4 Running a command line (CLI) on a Windows host system . . . . . . . . . . . . . . . . 197
8.7 Linux (on Intel) specific information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
8.7.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
8.7.2 Support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8.7.3 Host adapter configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8.7.4 Discovering the assigned VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8.7.5 Using SDD on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
8.7.6 Creating and preparing volumes for use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
8.8 Sun Solaris support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
8.8.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 205
8.8.2 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
8.8.3 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
8.9 HP-UX support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
8.10 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

Chapter 9. SVC configuration and administration using the CLI . . . . . . . . . . . . . . . . 207


9.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
9.1.1 Organizing on-screen content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
9.1.2 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

9.1.3 Maintaining passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
9.1.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
9.1.5 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
9.1.6 Starting a statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
9.1.7 Stopping a statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
9.1.8 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.2 Working with nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
9.2.1 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
9.2.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
9.3 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
9.3.1 Disk controller systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
9.3.2 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
9.3.3 Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
9.4 Working with virtual disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
9.4.1 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
9.4.2 Virtual disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
9.4.3 Tracing a host disk back to its source physical disk . . . . . . . . . . . . . . . . . . . . . . 245
9.5 Managing copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9.6 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9.6.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9.6.2 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
9.6.3 Setting up error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
9.6.4 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
9.6.5 Setting features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
9.6.6 Viewing the feature log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
9.6.7 Dump the configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
9.6.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
9.7 SVC cluster configuration backup and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
9.7.1 Backing up the SVC cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
9.7.2 Restoring the SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
9.7.3 Deleting configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
9.8 Cookie crumbs recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
9.9 Scripting and its usage under CLI for SVC task automation . . . . . . . . . . . . . . . . . . . . 263

Chapter 10. SVC configuration and administration using the GUI. . . . . . . . . . . . . . . 265
10.1 Managing the cluster using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
10.1.1 Organizing on-screen content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
10.1.2 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
10.1.3 Maintaining passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
10.1.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
10.1.5 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
10.1.6 Starting the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
10.1.7 Stopping the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
10.1.8 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
10.2 Working with nodes using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
10.2.1 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
10.2.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
10.3 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
10.3.1 Disk controller systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
10.3.2 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
10.3.3 Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
10.4 Working with Virtual Disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
10.4.1 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314

10.4.2 Virtual disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
10.4.3 Showing VDisks mapped to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
10.5 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10.6 Service and maintenance using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10.6.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
10.6.2 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10.6.3 Setting error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10.6.4 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10.6.5 Setting features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.6.6 Viewing the feature log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.6.7 Dump configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10.6.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10.7 Backing up the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10.7.1 Backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10.7.2 Restoring the SVC configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
10.7.3 Deleting configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362

Chapter 11. Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365


11.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
11.1.1 How it works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
11.1.2 Practical uses for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
11.1.3 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
11.1.4 Consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
11.1.5 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
11.1.6 FlashCopy rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
11.1.7 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
11.1.8 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
11.1.9 Background copy rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
11.1.10 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
11.1.11 Metadata management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
11.1.12 I/O handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
11.1.13 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
11.1.14 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
11.1.15 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.2 FlashCopy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.2.1 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.2.2 Modifying the mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
11.2.3 Deleting the mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
11.2.4 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . 385
11.2.5 Preparing (pre-triggering) the FlashCopy consistency group . . . . . . . . . . . . . . 386
11.2.6 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
11.2.7 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.2.8 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.2.9 Creating the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.2.10 Modifying the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.2.11 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.3 FlashCopy scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
11.4 FlashCopy scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

Chapter 12. Copy services: Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413


12.1 Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
12.1.1 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
12.1.2 The importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415

12.1.3 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
12.1.4 SVC Metro Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
12.1.5 Metro Mirror states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
12.1.6 Metro Mirror configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
12.2 Metro Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
12.2.1 Listing available SVC cluster partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
12.2.2 Creating SVC cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
12.2.3 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
12.2.4 Creating a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
12.2.5 Changing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
12.2.6 Changing a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
12.2.7 Starting a Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
12.2.8 Stopping a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
12.2.9 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
12.2.10 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 429
12.2.11 Deleting a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
12.2.12 Deleting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
12.2.13 Reversing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
12.2.14 Reversing a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 430
12.2.15 Detailed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
12.2.16 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
12.3 Metro Mirror scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
12.4 Metro Mirror scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444

Chapter 13. Migration to the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . 493


13.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
13.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
13.2.1 Migrating multiple extents (within an MDG) . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
13.2.2 Migrating extents off an MDisk which is being deleted . . . . . . . . . . . . . . . . . . . 495
13.2.3 Migrating a VDisk between MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
13.2.4 Migrating the VDisk to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
13.2.5 Migrating a VDisk between I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
13.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
13.3 Functional overview of migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
13.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
13.3.2 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
13.4 Migrating data from an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
13.4.1 Image mode VDisk migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
13.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
13.5 Data migration from DS storage to SVC using the CLI . . . . . . . . . . . . . . . . . . . . . . . 503
13.5.1 AIX host connected directly to the DS storage . . . . . . . . . . . . . . . . . . . . . . . . . 504
13.5.2 SVC is added between the AIX host and the DS storage . . . . . . . . . . . . . . . . . 506
13.5.3 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . 510
13.5.4 Migrating the VDisk from managed mode to image mode . . . . . . . . . . . . . . . . 512
13.5.5 Migrating the VDisk from image mode to image mode . . . . . . . . . . . . . . . . . . . 514
13.6 Data migration from ESS to SVC using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
13.6.1 AIX host connected directly to the ESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
13.6.2 SVC added between the AIX host and the ESS . . . . . . . . . . . . . . . . . . . . . . . . 520
13.6.3 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . 526
13.7 Data migration from ESS to SVC using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
13.7.1 Windows 2000 host system connected directly to the ESS . . . . . . . . . . . . . . . 531
13.7.2 SVC added between the Windows 2000 host system and the ESS . . . . . . . . . 533
13.7.3 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . 539

13.7.4 Migrating the VDisk from managed mode to image mode . . . . . . . . . . . . . . . . 543
13.7.5 Migrating the VDisk from image mode to image mode . . . . . . . . . . . . . . . . . . . 546

Chapter 14. Master console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549


14.1 Hardware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
14.1.1 Example hardware configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
14.2 Management console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
14.3 IBM Tivoli SAN Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
14.4 Installation planning information for the master console . . . . . . . . . . . . . . . . . . . . . . 575
14.5 Installing the master console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
14.6 Configuring the master console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
14.7 Managing the master console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
14.8 Secure Shell overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
14.8.1 Uploading SSH public key(s) sample scenarios . . . . . . . . . . . . . . . . . . . . . . . . 578
14.9 Call Home (service alert). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
14.10 Master console summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584

Appendix A. Copy services and open systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585


AIX specifics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
AIX and FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
AIX and Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Making updates to the LVM information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
Windows NT and 2000/2003 specifics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Windows NT and Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Copy Services with Windows Volume Sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
Windows 2000/2003 and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597

Appendix B. DS4000 migration scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611


Initial considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
Scenario 1: Total number of LUNs is less than the maximum LUNs per partition . . . . . . . . 613
Scenario 2: Migrating storage (where the total number of LUNs in all partitions is greater than
what is supported in one partition) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618

Appendix C. Scripting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625


Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627

Appendix D. Standby node and cloning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629


Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
Replacing a failed SVC node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Referenced Web sites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
IBM Redbooks collections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,
marketing, or distributing application programs conforming to IBM's application programming interfaces.

Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
3090™, AFS®, AIX®, AIX 5L™, BladeCenter™, DB2®, developerWorks®,
Enterprise Storage Server®, Eserver®, FlashCopy®, HACMP™, IBM®, ibm.com®,
Netfinity®, NetView®, pSeries®, Redbooks™, Redbooks (logo)™, RETAIN®, S/390®,
Sequent®, Storage Tank™, Tivoli®, TotalStorage®, xSeries®

The following terms are trademarks of other companies:

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other
countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.

Foreword

Businesses today are facing unprecedented growth in their storage infrastructure. Managing
the cost of this growth adds another dimension to the challenge. As a result, IT managers are
expected to provide storage on demand, in ever-increasing amounts, without increasing their
staff or budgets. Technologies have emerged to help IT managers meet these challenges.
One of the most important of these technologies is storage virtualization.

Storage virtualization technology promises to help IT managers manage this inevitable
growth in their storage by simplifying the infrastructure and reducing the cost of managing the
infrastructure. Combining the storage capacity of multiple disk arrays into a single pool of
storage managed from a single point simplifies the infrastructure. IT administrators can also
be more productive since they can allocate storage to applications from a single point.
Exposing unused storage capacity for use by applications saves money by increasing the
utilization of existing assets.

IBM® introduced the IBM TotalStorage® SAN Volume Controller in July 2003 to deliver the
benefits of storage virtualization to our customers. IBM's approach to delivering this
technology ensures that we deliver the most customer value at the best possible price. Our
success in the marketplace over the past two years bears witness to the wisdom of our
approach. We have delivered the benefits of storage virtualization with SAN Volume
Controller to hundreds of customers in many different industries around the world.

The third edition of this IBM Redbook covers all aspects of the SAN Volume Controller,
including its architecture, planning and configuring solutions, and how to tailor these benefits
to your situation. Some of IBM's best technologists have combined their talents and
knowledge to provide you with the best explanations and procedures for realizing the benefits
of this exciting new technology.

Roger Wofford, IBM Product Marketing Manager

Preface

This IBM Redbook is a detailed technical guide to the IBM TotalStorage SAN Volume
Controller (SVC), a virtualization appliance solution that maps virtualized volumes visible to
hosts and applications to physical volumes on storage devices. Each server within the SAN
has its own set of virtual storage addresses, which are mapped to physical addresses. If the
physical addresses change, the server continues running using the same virtual addresses it
had before. This means that volumes or storage can be added or moved while the server is
still running. IBM's virtualization technology improves management of information at the
“block” level in a network, enabling applications and servers to share storage devices on a
network.
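
To make this mapping concrete, the following is a minimal, hypothetical sketch (plain Python,
not SVC code; the extent size, disk names, and resolve() helper are illustrative assumptions
rather than the product's actual data structures) of how a virtualization layer can keep
host-visible addresses stable while the physical placement of the data changes:

   # Hypothetical illustration of block virtualization: hosts address a stable
   # virtual disk, while a mapping table resolves each virtual extent to
   # whichever physical extent currently backs it.
   EXTENT_SIZE = 16 * 1024 * 1024  # 16 MB extents (illustrative value only)

   # Virtual disk map: virtual extent index -> (physical disk, physical extent).
   vdisk0 = {0: ("mdisk1", 42), 1: ("mdisk2", 7), 2: ("mdisk1", 43)}

   def resolve(vdisk, virtual_byte):
       """Translate a virtual byte address to a (disk, physical byte) pair."""
       extent, offset = divmod(virtual_byte, EXTENT_SIZE)
       disk, phys_extent = vdisk[extent]
       return disk, phys_extent * EXTENT_SIZE + offset

   # Migrating extent 1 to another disk changes only the map entry; the host
   # keeps issuing I/O to the same virtual addresses throughout.
   vdisk0[1] = ("mdisk3", 0)
   print(resolve(vdisk0, EXTENT_SIZE + 512))  # -> ('mdisk3', 512)

Chapter 13 describes how the product performs this kind of extent migration in practice while
host I/O continues.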

Successful businesses require real-time responsiveness to change, whether because of new
customer needs, changes in the supply chain, unexpected competitive moves, external
threats, or changes in the economic climate. Rapid response to change requires an IT
infrastructure that can turn information into a competitive advantage; the IT infrastructure
must provide the maximum benefit at an affordable cost, and must have the flexibility to
support changes in business processes. An on demand operating environment provides a
cost-effective and flexible IT environment. With information at the heart of competitiveness,
storage becomes an ever more critical component of an on demand operating environment.

The IBM TotalStorage strategy addresses some of the most pressing needs currently facing
Chief Information Officers (CIO) and IT managers. As part of its strategy, IBM intends to
deliver industry leading technologies that will help dramatically reduce the total cost of
ownership (TCO) for storage, and help turn fixed costs into variable costs that scale with
business volume.

Success in the on demand world will depend on the ability to leverage information technology.
A greater dependence on information means a greater dependence on storage. What
distinguishes an on demand business is the ability to quickly sense and rapidly respond to a
dynamic marketplace; to do this, there are challenges that an on demand business must
overcome.

At the business level, customers are faced with three major storage challenges:
• Managing storage growth: Storage needs continue to grow at over 50% per year.
  Managing storage is becoming more complex than ever, because we now have to deal
  with multiple server platforms and different operating systems, which may be connected to
  a storage area network (SAN) with multiple and diverse storage platforms.
• Increasing complexity: Although the declining cost of storage per megabyte makes it
  attractive to add disks, the increasing complexity of managing this storage results in
  over-utilized staff and under-utilized IT resources. Combined with the shortage of skilled
  storage administrators, this can add significant cost and introduce risk to storage
  management.
• Maintaining availability: The added complexity of 24x7 environments significantly
  reduces, for example, the efficiency of conducting routine maintenance, scheduling
  backups, data migration, and introducing new software and hardware. This problem is
  compounded by the fact that as availability increases, so does the cost inherent in
  making it so.

These challenges still exist, although large SANs do offer desirable and tangible benefits, for
example, better connectivity, improved performance, distance flexibility, and scalability.

However, even these benefits may be outweighed by the added complexity that they
introduce.

As an example, large enterprise SANs often contain different types of storage devices. These
differences could be in the types of disk deployed, their level of performance, or the
functionality provided, such as RAID or mirroring. Often, customers have different vendor
storage devices as the result of mergers or consolidations. The result, however, is that
storage and SAN administrators need to configure storage to servers, and then keep track of
which servers own or have access to that storage. The storage administrative tasks can
become daunting as the SAN grows and as the storage administrators manually attempt to
manage the SAN.

Furthermore, the complexity of having different file systems in the same SAN requires that
storage administrators know how to administer each client operating system (OS) platform.
The management interfaces for each may be different, since there is no common standard
that all vendors adhere to. Lastly, since the file systems are tied to each of the servers,
storage management functions potentially have to be run on hundreds of servers. It is easy to
see why manageability and interoperability are the top areas for concern, especially in a SAN
where the number of possible storage and OS platform permutations is considerable.

These challenges are at odds with the commonly held belief that storage is decreasing in cost
per megabyte. It is clear that the cost of managing storage is greater than the initial purchase
price. A strategy is needed to address storage manageability, while at the same time
addressing the need for interoperability. This strategy is the IBM TotalStorage Open Software
Family.

This strategy represents the next stage in the evolution of storage networking. It affords you
the opportunity to fundamentally improve your company’s effectiveness and efficiency in
managing its storage resources. With the IBM SAN virtualization products, you are witnessing
IBM deliver on its continued promise to provide superior on demand solutions that will assist
in driving down costs, and reduce TCO.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the
ITSO in the Almaden Research Center, San Jose, California.

Jon Tate is a Project Manager for IBM TotalStorage SAN Solutions at the International
Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level 2 support for IBM storage
products. Jon has 19 years of experience in storage software and management, services,
and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist.

Steve Cody is a Level 2 Support Software Engineer for IBM TotalStorage software products
in Tucson, AZ, USA. Before joining this group in 2000, Steve built up seven years of
experience in hospital clinical systems software support. He has over 24 years of IT
experience and is an IBM certified AIX® system administrator.

Bent Lerager is a Senior IT Specialist with 13 years of experience with IBM products, and is
currently working as a Field Technical Sales Specialist within IBM Storage Systems in
Denmark. He is involved throughout the pre-sales process, from the design of end-to-end,
high-end storage solutions to competitive positioning and all the way through to
implementation and problem determination. He is certified in high-end storage products,
storage virtualization, SANs, AIX and HACMP™.

Fred Scholten is a hardware SAN Specialist and has been working for EMEA Storage
Solution Support (ESSS) in Mainz, Germany since 2001, providing level 2 support for SAN
and virtualization products. Before joining the ESSS, he worked as an ESS country specialist
in the Netherlands, and before that he was an EMEA S/390® hardware specialist in
Montpellier, France. Fred has worked for more than 25 years in IBM services and support. He
holds the following certifications: IBM Certified Specialist TotalStorage Networking Solutions,
SNIA Certified Specialist FC-SAN and Brocade Certified Fabric Professional; he has also
co-authored an IBM Redbook.

Marcus Thordal is a Senior IT Specialist in IBM Global Services, Denmark and has been
with IBM for seven years. He is MCSE certified, and his areas of expertise include Open
Systems storage solutions and Microsoft® Windows® Server Systems, in particular MS
Cluster and MS SQL Server. He holds a Bachelor of Science in Engineering from The
Technical University of Denmark, and has co-authored four previous Redbooks™ on storage
products and solutions.

From left to right: Bent, Fred, Steve, Jon, and Marcus

Thanks to the following people for their contributions to this project:

There are many people who contributed to this book. In particular, we thank the development
and PFE teams in Hursley, the SSG Advanced Technical Support team, and the Solution
Central team in Raleigh, North Carolina. Bill Wiegand of ATS and Rob Nicholson of STG
development, although not authors, contributed many of the ideas and answers that are
documented in this redbook.

The previous authors of this redbook:
Matt Amanat
Sean Crawford
Amarnath Hiriyannappa
Philippe Jachimczyk
Craig McKenna
Joao Marcos Leite
Barry Mellish
Robert Symons

Omar Amin
Carlos Fuente
Tim Graham
Rob Nicholson
Howard Rankin
IBM Hursley

Bill Wiegand
IBM Advanced Technical Support

Huang Chen
Parker Grannis
Margaret Ticknor
Doug Vassello
Tom Wishart
Roger Wofford
IBM Raleigh

John Gressett
IBM Rochester

Sameera Memon
Jeannie Ostdiek
Chris Saul
Will Scott
IBM San Jose

Sharon Wang
IBM Chicago

Jim Morrison
IBM Menlo Park

Charlotte Brooks
Tom Cady
Emma Jacobs
Deanna Polm
Sangam Racherla
IBM ITSO Center San Jose

Tom and Jenny Chang


Garden Inn Hotel, Los Gatos, California



Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with
specific products or solutions, while getting hands-on experience with leading-edge
technologies. You'll team with IBM technical professionals, Business Partners and/or clients.

Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll
develop a network of contacts in IBM development labs, and increase your productivity and
marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our Redbooks to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
򐂰 Use the online Contact us review redbook form found at:
ibm.com/redbooks
򐂰 Send your comments in an Internet note to:
redbook@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099

Chapter 1. Introduction to storage virtualization
This chapter describes the need for storage virtualization and the IBM approach to both
in-band and out-of-band storage virtualization. The fundamental differences between the two
architectures are articulated to explain why IBM has chosen to use in-band virtualization for
the IBM TotalStorage SAN Volume Controller (the focus of the remainder of this redbook).



1.1 The need for storage virtualization
At the business level, clients are faced with three major storage challenges:
򐂰 Managing storage growth: Storage needs continue to grow at over 50% per year.
Managing storage is becoming more complex than ever. Customers now have to deal with
multiple server platforms and different operating systems, which may be connected to a
storage area network (SAN) with multiple and diverse storage platforms.
򐂰 Increasing complexity: Although the declining cost of storage per megabyte (MB) makes
it attractive to add additional disks, the increasing complexity of managing this storage
results in overworked staff and under-used information technology (IT) resources.
Combined with the shortage of skilled storage administrators, this can add significant cost
and introduce risk to storage management.
򐂰 Maintaining availability: The added complexity of 24x7 environments significantly
reduces the efficiency of, for example, conducting routine maintenance, scheduling
backups, migrating data, and introducing new software and hardware. This problem is
compounded by the fact that, as availability increases, so do the inherent costs.

Storage needs are rising, and the challenge of managing disparate storage systems is
growing. IBM TotalStorage SAN Volume Controller brings devices together in a virtual pool to
make all storage appear as:
򐂰 One “logical” device to centrally manage and to allocate capacity as needed
򐂰 One solution to help achieve the most effective use of key storage resources on demand

Virtualization solutions can be implemented in the storage network, in the server, or in the
storage device itself. The IBM storage virtualization solution is SAN-based, which helps allow
for a more open virtualization implementation. Locating virtualization in the SAN, and
therefore in the path of input/output (I/O) activity, helps to provide a solid basis for
policy-based management. The focus of IBM on open standards means its virtualization
solution supports freedom of choice in storage-device vendor selection.

The IBM TotalStorage SAN Volume Controller solution is designed to:


򐂰 Simplify storage management
򐂰 Reduce IT data storage complexity and costs while enhancing scalability
򐂰 Extend on-demand flexibility and resiliency to the IT infrastructure

1.2 In-band virtualization


In a conventional SAN, the logical unit numbers (LUNs) that are defined within the storage
subsystem are directly presented to the host or hosts. In-band virtualization, otherwise
known as block aggregation, essentially means having an appliance in the data path that can
take physical storage from one or more storage subsystems and offer it to hosts in the form of
a virtual disk (VDisk).

The SNIA Block Aggregation Model (Figure 1-1 on page 3) specifies that block aggregation
can be performed within hosts (servers), in the storage network (storage routers, storage
controllers), or in storage devices (intelligent disk arrays).



[Figure 1-1 depicts the SNIA shared storage model: the application sits above a file/record
subsystem (database and file system); below that, block aggregation can be host-based,
storage network (SN)-based, or device-based; at the bottom is the block subsystem of
storage devices (disks). A services subsystem alongside the storage domain covers
discovery and monitoring, resource management and configuration, redundancy
management (backup), high availability (failover), capacity planning, and security and
billing. © 2000, Storage Networking Industry Association.]

Figure 1-1 SNIA Block Aggregation Model

While each of these approaches has pros and cons and all are available in various forms
from various vendors, IBM chose to develop its latest block aggregation product (IBM
TotalStorage SAN Volume Controller) within the storage network.

Block aggregation within the storage network provides three significant benefits to clients:
򐂰 Increased storage administrator productivity
Administrators can manage, add, and migrate physical disks non-disruptively from an
application server point of view. This is accomplished by providing insulation between the
server’s view of the logical disks and the disks as presented by the storage subsystem.
Productivity is improved by allowing administrators to perform management functions
when convenient rather than waiting for ever decreasing maintenance windows.
Downtime requirements are all but eliminated.
򐂰 Providing a common platform for advanced functions
By providing a logical view of physical storage, advanced functions like disaster recovery
can be done at a single point in the SAN in a consistent way regardless of the underlying
physical storage. FlashCopy®, Metro Mirror (formerly referred to as Peer-to-Peer Remote
Copy (PPRC)), and data migration can also be performed in a consistent way. This
common platform is used to provide other advanced functions over time such as
advanced security and quality of service (QoS) capabilities.
򐂰 Improved capacity utilization
Spare capacity on underlying physical disks can be reallocated non-disruptively from an
application server point of view irrespective of the server operating system or platform
type. Logical disks can be created from any of the physical disks being managed by the
virtualization device (that is, vendor agnostic).

Figure 1-2 on page 4 shows the IBM approach to block aggregation.



Figure 1-2 IBM plan for block aggregation

In addition to the three major benefits outlined above, abstracting the hosts from directly
accessing the storage subsystem or subsystems has many other benefits over other methods
of block aggregation including:
򐂰 It provides the ability to add advanced functions and apply them to the entire storage
infrastructure. The first release of the product offered:
– Copy Services (“Metro Mirror” (formerly referred to as PPRC) and FlashCopy)
– Data migration
– Read and Write Caching
򐂰 Later releases of the product offer such functions as:
– Quality of Service
– Performance based data migration
– Performance optimization in the data path
– Advanced security
򐂰 It does not lock a client into a particular storage hardware vendor.
򐂰 It is not intrusive on the hosts.
򐂰 It can offload function from the hosts.
򐂰 It can support storage management from multiple ISVs.
򐂰 It offers superior scalability.

IBM’s virtualization product provides a redundant, modular, and scalable solution. It is based
on a clustered IBM SAN appliance running a Linux kernel to support high availability and
performance. Additional nodes can be added non-disruptively, providing enterprise-class
scalability. IBM’s long history of storage controller development has enabled us to develop
systems where, in the exceptionally rare case that a failure occurs, the virtualization device
fails and recovers gracefully. Figure 1-3 on page 5 shows a representation of the IBM
TotalStorage SAN Volume Controller.



Figure 1-3 Conceptual diagram of the IBM SAN Volume Controller

In summary, enterprise class block aggregation functionality is added to the storage network.
IBM’s solution improves storage administrator productivity, provides a common base for
advanced functions, and provides for more efficient use of storage. The product is delivered
as a horizontally scalable, integrated solution based on an IBM SAN appliance and Linux,
using a fault-tolerant clustered architecture.

1.3 Out-of-band virtualization


Out-of-band virtualization, otherwise known as file aggregation, is when the virtualization
appliance is not in the data path. Out-of-band virtualization is typically geared toward file
sharing across the SAN, and to this end it usually involves a single file system in a single
name space.

File aggregation is a technique similar to block aggregation. However, rather than dealing
with blocks of data, file aggregation addresses the needs of accessing and sharing files in a
storage network. In the SNIA model, hosts get file metadata from file system or Network
Attached Storage (NAS) controllers, and then access the data directly. File aggregation can
be used in conjunction with or independent from block aggregation. Figure 1-4 on page 6
shows the SNIA file aggregation model.



Figure 1-4 SNIA file aggregation model

The IBM approach is through the use of a common file system based on the IBM Storage
Tank™ technology initiative. Initially, this file system covers all SAN-based files; later, it is
expanded to cover all files in an enterprise. IBM provides a metadata server cluster for
managing information about files, and has designed its file system clients to access disks
directly. For clients with both SANs and NAS, IBM provides a converged SAN and NAS
solution based on the Storage Tank technology.

The IBM solution, the IBM TotalStorage SAN file system, is designed to provide a common
file system specifically designed for storage networks. By managing file details (metadata) on
the storage network instead of in individual servers, IBM can make a single file system
available to all application servers on that network. Doing so provides immediate benefits: a
single point of management and a single name space, and common management for all files
in the network, eliminating management of files on a server by server basis.

The SAN file system’s design automates routine and error-prone tasks using policy-based
automation, initially to manage file placement and handle “out of space” conditions. The
design also allows the first true heterogeneous file sharing, where the reader and writer of
the exact same data can run different operating systems. Initially, the SAN file system
supports a range of the most commonly used operating systems in enterprise SANs (see
Figure 1-5 on page 7).



Figure 1-5 The IBM plan for file aggregation

The SAN file system metadata servers are based on clustered IBM SAN appliances running
Linux to support high availability and performance. The metadata servers provide file locks
and all other file information (such as location) to authorized application servers, which are
running the SAN file system client code (no application changes necessary).

After file information is passed from the metadata server to the application server, the
application server can access the blocks that comprise that file directly through the SAN (not
through the metadata server). By providing direct access to data from the application server
to the underlying storage (virtualized or not), IBM’s solution provides the benefits of
heterogeneous file sharing with local file system performance.

Since the metadata servers have a complete understanding of all files on the SAN, including
the essential metadata needed to make important decisions, the metadata server is a logical
point from which to manage the storage in the network through policy-based controls. For
example, when new files are created, the metadata server can decide where to place each
file based on specified criteria such as file type.

The SAN file system metadata server provides the ability to group storage devices according
to their characteristics, such as latency and throughput. These groupings, called storage
pools, allow administrators to manage data according to the characteristics that matter to
them. For example, an administrator can define a storage pool for mission critical applications
using highly reliable storage arrays that are backed up nightly and have full disaster recovery
capabilities. The administrator can also define a storage pool for less critical applications with
weekly tape backups and minimal disaster recovery capabilities. Using this level of storage
classification, an administrator can set up automated policies that determine which files are
placed in which storage pools based on the required service levels.

Because the SAN file system metadata is separate from the application data, files can be
manipulated while remaining active. For example, files being processed by a mission-critical



application can be non-disruptively moved within or across storage pools without stopping the
application. Similarly, data migration from one storage system to another can be handled
non-disruptively by having the metadata server move the pools to new physical disks, and
then disconnecting the old disks, all done without quiescing applications.

The SAN file system offers a logical extension to current NAS and SAN environments. While
NAS has proven successful in the marketplace, it does not take advantage of a SAN
infrastructure. NAS filers become the keepers of their own file metadata and must be
managed separately from the files on SAN attached application servers. IBM’s approach is to
add NAS capabilities to the SAN file system, thereby allowing storage administrators to
manage the NAS file data with the same tools as for their application servers. This approach
of SAN and NAS convergence helps lower TCO in these environments.

To facilitate the adoption of the SAN file system, the client code and the client reference
implementation source code are licensed at no cost. In addition, the metadata server
protocols are made publicly available. IBM will work with the industry to encourage
convergence of the SAN file system protocols with other standards.

In summary, the IBM TotalStorage SAN file system is a common SAN-wide file system that
permits centralization of management and improved storage utilization at the file level. The
SAN file system is delivered in a highly available configuration based on an IBM SAN
appliance with active-active failover and clustering for the metadata servers, providing high
availability and fault tolerance. The SAN file system is also being designed to provide policy
based storage automation capabilities for provisioning and data placement, nondisruptive
data migration, and a single point of management for files on a storage network. The use of
the SAN file system can greatly simplify the management of files on SANs and result in a
significant reduction in TCO.

Figure 1-6 shows a diagram of the IBM SAN file system architecture.

Figure 1-6 Conceptual diagram of the IBM SAN file system architecture



1.4 Conclusion
In conclusion, the IBM TotalStorage SAN Volume Controller enables storage virtualization.
This allows clients to reap the benefits of better application responsiveness, maximized
storage utilization, dynamic resource allocation, improved storage administrator productivity,
and reduced storage outages.

In-band and out-of-band virtualization provide two very distinct yet complementary
approaches to virtualization. IBM delivers each in a separate product. The two products fulfill
different requirements and, therefore, use different approaches to virtualization.

The rest of this redbook is dedicated to the IBM TotalStorage SAN Volume Controller and its
method of in-band virtualization. For greater detail about the technology and implementation
of the IBM TotalStorage SAN file system, see IBM TotalStorage: Introducing the SAN File
System, SG24-7057.

Chapter 2. IBM TotalStorage SAN Volume Controller overview
This chapter describes the major concepts behind the IBM TotalStorage SAN Volume
Controller to provide the framework for discussion for the remainder of this redbook.



2.1 Maximum supported configurations
At the time of writing, the maximum supported configurations for the SVC are as shown in
Table 2-1.

Table 2-1 Maximum supported configurations

Configuration    Maximum number
Hosts            64
Host ports       128
Ports per SAN    256

2.2 Glossary of commonly used terms


Before providing an overview of the IBM TotalStorage SAN Volume Controller, we begin this
chapter with a short glossary of terms (in alphabetical order) most commonly used throughout
the remainder of this redbook.

Boss node
A single node acts as the boss node for overall management of the cluster. If the boss node
fails, another node in the cluster will take over the responsibilities.

Configuration node
At any one time, a single node in the cluster is used to manage configuration activity. This
configuration node manages a cache of the configuration information that describes the
cluster configuration and provides a focal point for configuration commands. Similarly, at any
one time, a single node acts as the boss node for overall management of the cluster.

Extent
An extent is a fixed size unit of data that is used to manage the mapping of data between
MDisks and VDisks.

Front-end and back-end


SAN Volume Controller takes managed disks and presents these to application servers
(hosts). The managed disks are looked after by the “back-end” application of the SAN Volume
Controller. The virtual disks presented to hosts are looked after by the “front-end” application
in the SAN Volume Controller.

Grain
A grain is the unit of data represented by a single bit in a FlashCopy bitmap; in the SAN
Volume Controller, a grain is 256 KB.

I/O group
An input/output (I/O) group contains two SAN Volume Controller nodes defined by the
configuration process. Each SAN Volume Controller node is associated with exactly one I/O
group. The nodes in the I/O group provide access to the VDisks in the I/O group.

LU and LUN
Strictly speaking, there is a difference between a logical unit (LU) and a logical unit number
(LUN). A LUN is a unique identifier used on a SCSI bus that enables it to differentiate



between up to eight separate devices (each of which is a logical unit). In practice, the two
terms are used interchangeably. In this book, when we refer to a LUN, we refer to the unit of
storage that is defined in a storage subsystem such as an IBM TotalStorage Enterprise
Storage Server® (ESS), IBM TotalStorage DS4000, DS6000 and DS8000 series Storage
Server or storage servers from other vendors.

Managed disk
Managed disk (MDisk) is a SCSI disk presented by a RAID controller and managed by the
SAN Volume Controller. The MDisk must not be configured to be visible to host systems on
the SAN.

Managed disk group


The managed disk group (MDG) is a collection of MDisks that jointly contain all the data for a
specified set of VDisks.

Master console
The master console is the platform on which the software used to manage the SAN Volume
Controller runs.

Node
Node is the name given to the individual servers in a SAN Volume Controller cluster on
which the SAN Volume Controller software runs.

SAN Volume Controller


The SAN Volume Controller is a SAN appliance designed for attachment to a variety of host
computer systems, which carries out block level virtualization of disk storage.

Virtual disk
Virtual disk (VDisk) is a SAN Volume Controller device that appears to host systems attached
to the SAN as a SCSI disk. Each VDisk is associated with exactly one I/O group.

2.3 Virtualization overview


The IBM TotalStorage SAN Volume Controller nodes are the hardware elements of the IBM
TotalStorage SAN Volume Controller, a member of the IBM TotalStorage virtualization family
of solutions. The SAN Volume Controller combines servers into a high availability cluster.
Each of the servers in the cluster is populated with 4 GB of high-speed memory, which serves
as the cluster cache. A management card is installed in each server to monitor various
parameters which the cluster uses to determine the optimum and continuous data path. The
cluster is protected against data loss by uninterruptible power supplies. The SAN Volume
Controller nodes are always installed in pairs for high availability.

Storage virtualization addresses the increasing cost and complexity in data storage
management. It addresses this increased complexity by shifting storage management
intelligence from individual SAN disk subsystem controllers into the network via a
virtualization cluster of nodes.

The SAN Volume Controller solution is designed to reduce both the complexity and costs of
managing your SAN-based storage. With the SAN Volume Controller, you can:
򐂰 Simplify management and increase administrator productivity by consolidating storage
management intelligence from disparate disk subsystem controllers into a single view.



򐂰 Improve application availability by enabling data migration between disparate disk storage
devices non-disruptively.
򐂰 Improve disaster recovery and business continuance needs by applying and managing
copy services across disparate disk storage devices within the Storage Area Network
(SAN). These solutions include a Common Information Model (CIM) Agent, enabling
unified storage management based on open standards for units that comply with CIM
Agent standards.
򐂰 Provide advanced features and functions to the entire SAN, such as:
– Large scalable cache
– Copy Services
– Space management (later releases to include Policy Based Management)
– Mapping based on desired performance characteristics
– Quality of Service (QoS) metering and reporting

Note: The SAN Volume Controller is not a RAID controller. The disk subsystems attached
to SANs that have the SAN Volume Controller provide the basic RAID setup. The SAN
Volume Controller uses what is presented to it as a managed disk to create virtual disks.

2.4 Compass architecture


The IBM TotalStorage SAN Volume Controller is based on the COMmodity PArts Storage
System (Compass) architecture developed at the IBM Almaden Research Center.

The overall goal of the Compass architecture is to create storage subsystem software
applications that require minimal porting effort to leverage a new hardware platform. To meet
this goal:
򐂰 Compass, although currently deployed on the Intel® hardware platform, can be ported to
other hardware platforms.
򐂰 Compass, although currently deployed on a Linux kernel, can be ported to other Portable
Operating System Interface (POSIX)-compliant operating systems.
򐂰 Compass uses commodity adapters and parts wherever possible. To the highest extent
possible, it only uses functions in the commodity hardware that are commonly exercised
by the other users of the parts. This is not to say that Compass software could not be
ported to a platform with specialized adapters. However, the advantage in specialized
function must be weighed against the disadvantage of future difficulty in porting and in
linking special hardware development plans to the release plans for applications based on
the Compass architecture.
򐂰 Compass is developed in such a way that it is as easy as possible to troubleshoot and
correct software defects.
򐂰 Compass is designed as a scalable, distributed software application that can run in
increasing sets of Compass nodes with near linear gain in performance while using a
shared data model that provides a single pool of storage for all nodes.
򐂰 Compass is designed so that there is a single configuration and management view of the
entire environment regardless of the number of Compass nodes in use.

The approach is to minimize the dependency on unique hardware, and to allow exploitation of
or migration to new SAN interfaces simply by plugging in new commodity adapters.
Performance growth over time is assured by the ability to port Compass to just about any
platform and remain current with the latest processor and chipset technologies on each. The
SAN Volume Controller implementation of the Compass architecture has exploited Linux as a



convenient development platform to deploy this function. This has enhanced, and will
continue to enhance, the ability of IBM to deploy robust function in a timely way.

2.4.1 SAN Volume Controller clustering


In simple terms, a cluster is a collection of servers that, together, provide a set of resources to
a client. The key point is that the client has no knowledge of the underlying physical hardware
of the cluster. This means that the client is isolated and protected from changes to the
physical hardware, which brings a number of benefits. Perhaps the most important of these
benefits is high availability. Resources on clustered servers act as highly available versions of
unclustered resources. If a node (an individual computer) in the cluster is unavailable, or too
busy to respond to a request for a resource, the request is transparently passed to another
node capable of processing it. Clients are, therefore, unaware of the exact locations of the
resources they are using. For example, a client can request the use of an application without
being concerned about either where the application resides or which physical server is
processing the request. The user simply gains access to the application in a timely and
reliable manner. Another benefit is scalability. If you need to add users or applications to your
system and want performance to be maintained at existing levels, additional systems can be
incorporated into the cluster.

The IBM TotalStorage SAN Volume Controller is a collection of up to eight cluster nodes,
added in pairs. In future releases, the cluster size will be increased to permit further
performance scalability. These nodes are managed as a set (cluster) and present a single
point of control to the administrator for configuration and service activity.

Note: Although the SAN Volume Controller code is based on a Linux kernel, the clustering
feature is not based on Linux clustering code. The clustering failover and failback feature is
part of the SAN Volume Controller application software.

Within each cluster, one node is defined as the configuration node. This node is assigned the
cluster IP address and is responsible for transitioning additional nodes into the cluster.

During normal operation of the cluster, the nodes communicate with each other. If a node is
idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster.
Should a node fail for any reason, the workload intended for it is taken over by another node
until the failed node has been restarted and re-admitted to the cluster (which happens
automatically). In the event that the microcode on a node becomes corrupted, resulting in a
failure, the workload is transferred to another node. The code on the failed node is repaired,
and the node is re-admitted to the cluster (again, all automatically).

For I/O purposes, SAN Volume Controller nodes within the cluster are grouped into pairs,
called I/O groups, with a single pair being responsible for serving I/O on a given VDisk. One
node within the I/O group represents the preferred path for I/O to a given VDisk. The other
node represents the non-preferred path. This preference alternates between nodes as each
VDisk is created within an I/O group to balance the workload evenly between the two nodes.

Note: The preferred node by no means signifies absolute ownership. The data is still
accessed by the partner node in the I/O group in the event of a failure or if the preferred
node workload becomes too high.

Beyond automatic configuration and cluster administration, the data transmitted from
attached application servers is also treated in the most reliable manner. When data is written
by the host, the preferred node within the I/O group stores a write in its own write cache and
the write cache of its partner (non-preferred) node before sending an “I/O complete” status



back to the host application. New with SVC version 2.1, the write cache is automatically
destaged to disk after two minutes of no write activity to a VDisk. To ensure that data is
preserved in the event of a node failure, the surviving node empties all of its remaining write
cache and proceeds in write-through mode until the cluster is returned to a fully operational
state.

Note: Write-through mode is where the data is not cached in the nodes, but written directly
to the disk subsystem instead. While operating in this mode, performance is somewhat
degraded. More importantly, it ensures that the data makes it to its destination without the
risk of data loss. A single copy of data in cache would constitute exposure to data loss.
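To make this fast-write behavior concrete, the following Python sketch models it (an
illustration of the behavior described above, not SVC internals; the node labels and the
dictionary-based "disk" are invented for the example):

# Minimal model of the fast-write behavior described above (not SVC code).
# A write is acknowledged once it is held in both nodes' caches; if one node
# fails, the survivor destages its cache and continues in write-through mode.

class IOGroup:
    def __init__(self):
        self.cache = {"node1": {}, "node2": {}}   # per-node write cache
        self.partner_alive = True

    def write(self, block, data, disk):
        if self.partner_alive:
            self.cache["node1"][block] = data     # preferred node's cache
            self.cache["node2"][block] = data     # mirrored to the partner
            return "I/O complete"                 # acknowledged before destage
        disk[block] = data                        # write-through: direct to disk
        return "I/O complete (write-through)"

    def node_failed(self, failed, disk):
        survivor = "node2" if failed == "node1" else "node1"
        disk.update(self.cache[survivor])         # destage remaining writes
        self.cache = {"node1": {}, "node2": {}}
        self.partner_alive = False                # cache disabled until recovery

disk = {}
iogrp = IOGroup()
iogrp.write(0, "data", disk)       # held in two caches before "I/O complete"
iogrp.node_failed("node1", disk)   # survivor destages its cache
iogrp.write(1, "more", disk)       # subsequent writes go straight to disk

The essential point the sketch captures is that an acknowledged write always exists in two
places (two caches, or cache plus disk) until it is safely destaged.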

As yet another data protection feature, the SAN Volume Controller is supplied with
uninterruptible power supply units. In addition to voltage regulation to protect valuable
electronic components within the SAN Volume Controller configuration, in the event of a main
power outage, the uninterruptible power supply provides enough power to destage data to the
SAN Volume Controller internal disk and shut down the nodes within the SAN Volume
Controller cluster gracefully. This is a feature found in most high-end disk subsystems.

2.4.2 SAN Volume Controller virtualization


The SAN Volume Controller provides block aggregation and volume management for disk
storage within the SAN. In simpler terms, this means that the SAN Volume Controller
manages a number of back-end disk subsystem controllers and maps the physical storage
within those controllers to logical disk images that can be seen by application servers and
workstations in the SAN. The SAN must be zoned in such a way that the application servers
cannot see the same back-end LUNs seen by the SAN Volume Controller, preventing any
possible conflict between the SAN Volume Controller and the application servers both trying
to manage the same back-end LUNs.

As described earlier, when an application server performs I/O to a VDisk assigned to it by the
SAN Volume Controller, it can access that VDisk via either of the nodes in the I/O group. Each
node may only be in one I/O group and since each I/O group only has two nodes, the
distributed redundant cache design in the SAN Volume Controller only needs to be two-way.

The SAN Volume Controller I/O groups are connected to the SAN in such a way that all
back-end storage and all application servers are visible to all of the I/O groups. The SAN
Volume Controller I/O groups see the storage presented to the SAN by the back-end
controllers as a number of disks, known as managed disks. Because the SAN Volume
Controller does not attempt to provide recovery from physical disk failures within the
back-end controllers, MDisks are recommended, but not strictly required, to be RAID
arrays. The application servers should not see the MDisks at all. Instead, they should see a
number of logical disks, known as virtual disks or VDisks, which are presented to the SAN by
the SAN Volume Controller.

MDisks are collected into groups, known as managed disk groups (MDGs). The MDisks that
are used in the creation of a particular VDisk must all come from the same MDG. Each MDisk
is divided into a number of extents. The default, and minimum, extent size is 16 MB, and the
maximum extent size is 512 MB, based on the definition of its MDG. These extents are
numbered sequentially from the start to the end of each MDisk. Conceptually, this is
represented as shown in Figure 2-1 on page 17.



Figure 2-1 Extents being used to create a virtual disk

The virtualization function in the SAN Volume Controller maps the VDisks seen by the
application servers to the MDisks presented by the back-end controllers. I/O traffic for a
particular VDisk is, at any one time, handled exclusively by the nodes in a single I/O group.
Although a cluster can have many nodes within it, the nodes handle I/O in independent pairs.
This means that the I/O capability of the SAN Volume Controller scales well (almost linearly),
since additional throughput can be obtained by simply adding additional I/O groups.
Figure 2-2 summarizes the various relationships that bridge the physical disks through to the
virtual disks within the SAN Volume Controller architecture.

[Figure 2-2 depicts the mapping from virtual disks to managed disks in three modes:
(1) striped, with extents striped across multiple MDisks; (2) sequential, with extents grouped
sequentially on an MDisk; and (3) image mode, a one-to-one mapping.]

Figure 2-2 The relationship between physical and virtual disks

Virtualization mappings
Several different mapping functions are provided by the SAN Volume Controller:



򐂰 Striped: Here a VDisk is mapped to a number of MDisks in an MDG. The extents on the
VDisk are striped over the MDisks. Therefore, if the VDisk is mapped to five MDisks, the
first, sixth, eleventh (and so on) extent comes from the first MDisk; the second, seventh,
twelfth (and so on) extent comes from the second MDisk; and so on (see the sketch after
this list). This is the default mapping.
򐂰 Sequential: Here a VDisk is mapped to a single MDisk in a MDG. There is no guarantee
that sequential extents on the MDisk map to sequential extents on the VDisk, although this
may be the case when the VDisk is created.

Note: There are no ordering requirements in the MDisk to VDisk extent mapping
function for either striped or sequential VDisks. This means that if you examine the
extents on an MDisk, it is quite possible for adjacent extents to be mapped to different
VDisks. It is also quite possible for adjacent extents on the MDisk to be mapped to
widely separated extents on the same VDisk, or to adjacent extents on the VDisk. In
addition, the position of the extents on the MDisks is not fixed by the initial mapping,
and can be varied by the user performing data migration operations.

򐂰 Image: Image mode sets up a one-to-one mapping of extents on an MDisk to the extents
on the VDisk. Because the VDisk has exactly the same extent mapping as the underlying
MDisk, any data already on the disk is still accessible when migrated to a SAN Volume
Controller environment. Within the SAN Volume Controller environment, the data can
(optionally) be seamlessly migrated off the image mode VDisk to a striped or sequential
VDisk within a MDG.
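The extent-mapping policies lend themselves to a simple model. The following Python sketch
(an illustration only; the extent size shown is the 16 MB default, and the MDisk names are
invented) reproduces the striped policy described above:

# Illustrates the striped VDisk mapping described above (not SVC code).

EXTENT_MB = 16  # the default (and minimum) extent size

def striped_mapping(vdisk_mb, mdisks):
    """Assign each VDisk extent round-robin across the MDisks in the MDG."""
    extents = vdisk_mb // EXTENT_MB
    return [(i, mdisks[i % len(mdisks)]) for i in range(extents)]

# A 160 MB VDisk striped over five MDisks: extents 0 and 5 land on mdisk0,
# extents 1 and 6 on mdisk1, and so on -- the first/sixth/eleventh rule above.
for extent, mdisk in striped_mapping(160, ["mdisk0", "mdisk1", "mdisk2",
                                           "mdisk3", "mdisk4"]):
    print(f"VDisk extent {extent:2d} -> {mdisk}")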

2.4.3 SAN Volume Controller multipathing


Each SAN Volume Controller node presents a VDisk to the SAN via multiple paths, usually
four. In normal operation, two nodes provide redundant paths to the same storage. This
means that, depending on zoning, a single host bus adapter (HBA) sees up to eight paths to
each LUN presented by the SAN Volume Controller. Because most operating systems cannot
resolve multiple paths back to a single physical device, IBM provides a multipathing device
driver.

The multipathing driver supported by the SAN Volume Controller is the IBM Subsystem
Device Driver (SDD). It manages the multiple paths from the host to the SAN Volume
Controller making use of the preferred paths in a round robin manner before using any
non-preferred path. SDD performs data path failover in the event of a failure within the SAN
Volume Controller or the host path, while masking out the additional disks that are otherwise
seen by the hosts due to the redundant paths through the SAN fabric.
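A rough Python sketch of the path-selection behavior just described follows (the port names
are hypothetical, and this is not SDD code): with four ports per node and two nodes in an I/O
group, a host HBA can see up to eight paths, and live preferred paths are used round-robin
before any non-preferred path.

# Sketch of preferred-path round-robin with failover (not SDD code).
# Port names are invented; 2 nodes x 4 ports = up to 8 paths to one VDisk.

paths = [{"port": f"{node}_p{p}", "preferred": node == "nodeA", "alive": True}
         for node in ("nodeA", "nodeB") for p in range(4)]

class PathSelector:
    def __init__(self, paths):
        self.paths, self.rr = paths, 0

    def next_path(self):
        live = [p for p in self.paths if p["alive"]]
        # Use live preferred paths first; fall back to non-preferred paths.
        candidates = [p for p in live if p["preferred"]] or live
        if not candidates:
            raise IOError("no surviving paths to the VDisk")
        self.rr = (self.rr + 1) % len(candidates)
        return candidates[self.rr]["port"]

selector = PathSelector(paths)
print([selector.next_path() for _ in range(4)])  # cycles the preferred ports
for p in paths[:4]:
    p["alive"] = False                           # the preferred node fails...
print(selector.next_path())                      # ...failover to the partner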

FYI: The SDD code is updated to support both the SAN Volume Controller and the ESS.
Provided that the latest version is used, IBM supports the concurrent connections of a host
to both a SAN Volume Controller and native ESS environment as long as each ESS LUN is
only seen by either the SVC or the host but not by both.

2.5 SAN Volume Controller logical configuration


Figure 2-3 on page 19 shows an example of a SAN Volume Controller configuration.



Figure 2-3 SAN Volume Controller logical view

Configuration notes
򐂰 The Fibre Channel SAN connections between the SAN Volume Controller and the
switches are optical fiber, preferably running at 2 Gb/s. However, the SAN Volume
Controller is also supported in 1 Gb/s Fibre Channel fabrics.
򐂰 To provide high availability, the SAN Volume Controller nodes should be configured in
redundant SAN fabrics.
򐂰 The Fibre Channel switches need to be zoned to permit the hosts to see the SAN Volume
Controller nodes and the SAN Volume Controller nodes to see the RAID Controllers. The
SAN Volume Controller nodes within a cluster must be able to see each other and the
master console. In addition, if there are two SAN Volume Controller clusters with
MetroMirror (formerly referred to as Peer-to-Peer Remote Copy (PPRC)) services
between them, zoning must be set so that all the nodes in both clusters see all the other
nodes in both clusters.
򐂰 In addition to a Fibre Channel connection or connections, each device has an Ethernet
connection for configuration and error reporting, though only one of the nodes, the
configuration node, binds an IP address to its Ethernet connection.

2.6 SAN Volume Controller compatibility


The SAN Volume Controller is capable of supporting Windows NT®, 2000 and 2003, AIX,
SUSE SLES 8 and Red Hat EL AS Linux, Novell Netware 6.5 with clustering, VMWare ESX,
Sun Solaris, and HP-UX hosts. The SAN switch support is also broad and includes the IBM
TotalStorage SAN Switches and members of the McDATA, CNT, and Cisco families of SAN
Switches and Directors.



Support of disk subsystems includes the IBM TotalStorage DS4000, DS6000 and DS8000
series servers and the IBM TotalStorage Enterprise Storage Server. It also supports several
models of Hitachi Thunder and Lightning, EMC Clariion and Symmetrix, and Hewlett-Packard
StorageWorks Modular Arrays. Future releases of the SAN Volume Controller will include
support for other third-party disk subsystems from manufacturers such as HDS, HPQ and
EMC.

For the SAN Volume Controller supported operating systems, hosts, HBAs, SAN switches
and storage subsystems, see the links for supported hardware list and recommended
software levels for the appropriate SVC code level under the Install and Use Tab on the Web
at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/installing.html

2.7 Software licensing


There are three parameters to consider when licensing the SAN Volume Controller software:
򐂰 The total amount of storage that is managed by the SAN Volume Controller cluster
This may be greater than the amount of storage that is virtualized by the cluster. See
Figure 2-4.

[Figure 2-4 shows SAN Volume Controller storage software usage: usage is charged based
on the amount of managed disk, not the amount of storage virtualized by a cluster. In the
example, three 1 TB VDisks (3 TB of virtualized storage) are served by an I/O group from
four 2 TB MDisks, giving 8 TB of managed storage, which is what determines SVC Storage
Software usage.]

Figure 2-4 Base software license diagram



򐂰 The amount of virtualized storage that you want to participate in simultaneous FlashCopy
relationships
You may want to manage 8 TB of storage, but FlashCopy only 1 TB of storage. In this
case, you need a 2 TB FlashCopy license since there is 1 TB of source and 1 TB of target
FlashCopy volumes. See Figure 2-5.

[Figure 2-5 shows FlashCopy usage: pricing is based on the amount of storage using
FlashCopy in a cluster, derived from the total size of all source and target virtual disks in
active FlashCopy relationships. In the example, a 1 TB source plus a 1 TB target gives 2 TB
in a FlashCopy relationship, against 8 TB of managed storage.]

Figure 2-5 FlashCopy storage license



򐂰 The amount of virtualized storage that you want to MetroMirror to another system
You may be managing 8 TB of storage, with FlashCopy copying 1 TB (a 2 TB FlashCopy
license is required) and MetroMirror copying 1 TB. In this case, both clusters that are taking
part in the MetroMirror relationship require a 1 TB MetroMirror license. In the case of
intracluster MetroMirror, where both primary and secondary volumes are in the same
cluster, then the license must be large enough to cover both.
See Figure 2-6 for the intracluster scenario. For intercluster MetroMirror, only the amount
of virtualized storage that is in a MetroMirror relationship at that site needs to be covered
by the MetroMirror license.

[Figure 2-6 shows intracluster MetroMirror usage: primary and secondary exist in the same
SVC cluster, and usage tracking is similar to FlashCopy. Pricing is based on the total size of
all primary and secondary virtual disks in active MetroMirror relationships; in the example, a
1 TB primary plus a 1 TB secondary gives 2 TB in a MetroMirror relationship, against 8 TB of
managed storage.]

Figure 2-6 MetroMirror intracluster

Figure 2-7 on page 23 shows the intercluster MetroMirror relationship and how this affects
licensing.



[Figure 2-7 shows cross-cluster MetroMirror usage: primary and secondary exist in two
separate SVC clusters, and pricing is based on the amount of storage using MetroMirror in
each cluster, derived from the total size of all primary and secondary virtual disks in active
MetroMirror relationships. Because two SVC clusters are involved, the customer must
purchase an SVC Storage Software license with MetroMirror for both clusters.]

Figure 2-7 MetroMirror intercluster relationship

You can increase any one of these three licenses independently of the others. That is, you
can increase the total amount of managed storage without increasing the other licenses if
the amount of storage being copied remains unchanged. Similarly, you can change the copy
licenses independently of each other.
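Because each quantity is just a sum of capacities, the license sizes can be totaled
mechanically. The following Python sketch (example figures only, mirroring the scenarios in
Figure 2-4 through Figure 2-6; not an official pricing tool) derives all three:

# Totals the three SVC license quantities described above (illustration only).

managed_tb        = 8          # total MDisk capacity behind the cluster
flashcopy_pairs   = [(1, 1)]   # (source TB, target TB) per active relationship
metromirror_pairs = [(1, 1)]   # (primary TB, secondary TB); for intercluster
                               # MetroMirror, each site counts only its own side

base_license = managed_tb
fc_license   = sum(src + tgt for src, tgt in flashcopy_pairs)
mm_license   = sum(pri + sec for pri, sec in metromirror_pairs)

print(f"Base (managed storage) license: {base_license} TB")  # 8 TB
print(f"FlashCopy license:              {fc_license} TB")    # 1 + 1 = 2 TB
print(f"MetroMirror license:            {mm_license} TB")    # 2 TB intracluster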

Chapter 3. Planning and configuration


This chapter discusses the steps required when planning to install an IBM TotalStorage SAN
Volume Controller (SVC) in your storage network. It looks at the implications for your storage
network.



3.1 General planning rules
To achieve the most benefit from SAN Volume Controller (SVC), pre-installation planning
should include several important steps. These steps ensure that SVC provides the best
possible performance, reliability, and ease of management for your application needs. Proper
configuration also helps minimize downtime by reducing the need for later changes to the
SVC and the storage area network (SAN) environment as your requirements grow.

Planning the SVC requires the following steps:


1. Document the number of hosts (application servers) to attach to the SVC, the traffic profile
activity (read or write, sequential or random), the performance requirements (input/output
(I/O) per second), the total storage capacity, the storage capacity for point-in-time copy
(FlashCopy), the storage capacity for remote copy (Metro Mirror), the storage capacity per
host, and the host logical unit number (LUN) sizes.
2. Define the local and remote SAN fabrics and clusters if a remote copy or a secondary site
are needed.
3. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for
each site. Each pair of nodes (an I/O group) is the container for the virtual disks. How
many I/O groups are needed depends on the overall performance requirements.
4. Design the SAN according to the requirement for high availability. Consider the total
number of ports and the bandwidth needed between the hosts and the SVC, between the
SVC and the disk subsystems, between the SVC nodes, and for the ISLs between the
local and remote fabrics (a rough port-count sketch follows at the end of this section).
5. Define the managed disks (MDisks) in the disk subsystem.
6. Define the managed disk groups (MDGs). This depends on the disk subsystem in place
and the data migration needs.
7. Create and distribute the VDisks across the different I/O groups and the different MDGs
in such a way as to optimize the I/O load between the hosts and the SVC. This can be an
equal distribution of all the VDisks across the different nodes, or a distribution that takes
into account the expected load from the different hosts.
8. Plan for the physical location of the equipment in the rack.

Note: See IBM TotalStorage Virtualization Family SAN Volume Controller: Planning
Guide, GA22-1052, for hardware location and connection charts.

You need IP addresses for the SVC cluster, the SVC service interface, the master console,
and the switches. See IBM TotalStorage Virtualization Family SAN Volume Controller:
Planning Guide, GA22-1052 for more information.
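As a rough aid for step 4 above, the following Python sketch tallies the switch ports needed
per counterpart fabric (all counts are illustrative assumptions, including the assumption that
each node's four Fibre Channel ports are split two per fabric; it is not a sizing tool):

# Rough switch-port tally per counterpart SAN fabric (assumptions only).

def ports_per_fabric(hosts, host_ports_each, svc_nodes, storage_ports, isls):
    svc_ports = svc_nodes * 2   # assumes 4 FC ports per node, 2 per fabric
    return hosts * host_ports_each + svc_ports + storage_ports + isls

# Example: 10 hosts with one port per fabric, a four-node cluster,
# 4 back-end storage ports, and 2 ISLs in this fabric.
print(ports_per_fabric(10, 1, 4, 4, 2), "switch ports needed in this fabric")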

3.2 Physical planning


There are several main factors to take into account when carrying out the physical planning
of an SVC installation. The physical site must have the following characteristics:
򐂰 Power, cooling, and location requirements must be met for the SVC and the
uninterruptible power supplies.
򐂰 An SVC node is one EIA unit high.



򐂰 Each of the uninterruptible power supplies (UPS) that comes with the SVC version 2.1 is
one EIA unit high; the UPS shipped with the earlier version of the SVC is two EIA units
high.
򐂰 The master console is two EIA units high: one for the server and one for the keyboard and
monitor.
򐂰 Other hardware devices may be in the rack, such as IBM TotalStorage DS4000, IBM
TotalStorage DS6000, SAN switches, Ethernet switch, and others.
򐂰 The maximum power rating of the rack and input power supply must not be exceeded.

3.2.1 Physical rules


SVC nodes must be installed in pairs to provide high availability, and each node in an I/O
group must be connected to a different UPS, as shown in Figure 3-1.

[Figure 3-1 shows each node's power and serial connections: SVC node 1 connects (power
and serial) to UPS 1, and SVC node 2 connects (power and serial) to UPS 2.]

Figure 3-1 Node uninterruptible power supply setup

In SVC versions prior to SVC 2.1, the Powerware 5125 UPS was shipped with the SVC; in
SVC 2.1, Powerware 5115 UPS is shipped with the SVC. You can upgrade an existing SVC
cluster to 2.1 and still use the UPS Powerware 5125 that was delivered with the SVC prior to
2.1.
򐂰 Each SVC node of an I/O group must be connected to a different UPS.
򐂰 Each UPS shipped with SVC 2.1 supports one node only, but each UPS in earlier versions
of SVC supports up to two SVC nodes (in distinct I/O groups).
򐂰 Each UPS pair that supports a pair of nodes must be connected to a different power
domain (if possible) to reduce the chances of input power loss.
򐂰 The UPSs must be installed in the lowest available position in the rack. If necessary, move
lighter units toward the top.
򐂰 A cluster can contain no more than 8 SVC nodes (this increases in later releases of the
product).
򐂰 The power and serial connection from a node must be connected to the same UPS,
otherwise the node will not boot.
򐂰 The UPS in SVC 2.1 can be mixed with UPSs from earlier SVC versions, but the UPS
rules above have to be followed, and SVC nodes in the same I/O group must be attached
to the same type of UPSs, though not the same UPS.
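These cabling rules can be checked mechanically. A minimal Python sketch follows (the
node and UPS labels are invented), encoding two of the rules above: the nodes of an I/O
group must use different UPSs, and each node's power and serial connections must go to
the same UPS:

# Checks the UPS cabling rules listed above (labels are invented examples).

nodes = {
    "node1": {"iogrp": 0, "power_ups": "UPS1", "serial_ups": "UPS1"},
    "node2": {"iogrp": 0, "power_ups": "UPS2", "serial_ups": "UPS2"},
}

def check_ups_rules(nodes):
    ups_by_iogrp = {}
    for name, node in nodes.items():
        if node["power_ups"] != node["serial_ups"]:
            raise ValueError(f"{name}: power and serial must use the same UPS,"
                             " or the node will not boot")
        ups_by_iogrp.setdefault(node["iogrp"], set()).add(node["power_ups"])
    for iogrp, upss in ups_by_iogrp.items():
        if len(upss) < 2:
            raise ValueError(f"I/O group {iogrp}: nodes share a UPS")

check_ups_rules(nodes)
print("UPS cabling rules satisfied")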



Figure 3-2 shows a layout sample within a rack.

EIA 36:    Blank
EIA 35:    Ethernet Hub 1
EIA 34-29: Blank
EIA 28-27: FC Switch 1
EIA 26-23: Blank
EIA 22:    SAN Volume Controller node 4
EIA 21:    SAN Volume Controller node 3
EIA 20:    SAN Volume Controller node 2
EIA 19:    SAN Volume Controller node 1
EIA 18:    Master console
EIA 17:    Master console keyboard and monitor
EIA 16-14: Disk subsystem 4
EIA 13-11: Disk subsystem 3
EIA 10-8:  Disk subsystem 2
EIA 7-5:   Disk subsystem 1
EIA 4-3:   UPS 2
EIA 2-1:   UPS 1

Figure 3-2 Sample rack layout

3.2.2 Cable connections


Complete a cable connection table to document all of the connections required for the setup:
򐂰 Nodes
򐂰 UPS
򐂰 Ethernet
򐂰 Fibre Channel ports
򐂰 Master console

A typical planning chart is shown in Figure 3-3 on page 29.



Figure 3-3 Cable connection table

3.3 SAN planning and configuration


SAN storage systems using the SVC may be configured with two to eight SVC nodes,
arranged in an SVC cluster. These are attached to the SAN fabric, along with disk
subsystems and host systems. The SAN fabric is zoned to allow the SVCs to “see” the disk
subsystems, and for the hosts to “see” the SVCs. The hosts are not able to directly “see” or
operate LUNs on the disk subsystems that are assigned to the SVC cluster. The SVC nodes
within an SVC cluster must be able to see each other and all the storage assigned to the SVC
cluster.

The zoning capabilities of the SAN switch are used to create these distinct zones. The SVC in
Release 2 supports 1 Gb/s or 2 Gb/s Fibre Channel fabric only. In later versions, this will
change to support new standards.

All SVC nodes in the SVC cluster are connected to the same SANs, and present virtual disks
to the hosts. These virtual disks are created from managed disks presented by the disk
subsystems. There are two distinct zones in the fabric:
򐂰 One host zone in which the hosts can see and address the SVC nodes
򐂰 One disk zone in which the SVC nodes can see and address the LUNs presented by the
disk subsystems

Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are
assigned to the SVC. All data transfer happens through the SVC nodes. Under some
circumstances, a disk subsystem may present LUNs to both the SVC (as managed disks,
which it then virtualizes to hosts) and to other hosts in the SAN. There are some configuration
limitations as to how this may be achieved.
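The zone discipline described above can be expressed as a simple invariant: no zone may
contain both a host port and a back-end storage port that is assigned to the SVC. A hedged
Python sketch follows (the port and zone names are invented; actual zoning is configured
with the switch vendor's tools):

# Checks that hosts never share a zone with SVC-managed back-end ports
# (port and zone names are invented for illustration).

zones = {
    "host_zone": {"host1_hba0", "svc_node1_p1", "svc_node2_p1"},
    "disk_zone": {"svc_node1_p2", "svc_node2_p2", "raid_ctrl_p1"},
}
host_ports    = {"host1_hba0"}
storage_ports = {"raid_ctrl_p1"}   # back-end LUN ports assigned to the SVC

for name, members in zones.items():
    if members & host_ports and members & storage_ports:
        raise ValueError(f"zone {name}: hosts can reach back-end LUNs directly")
print("zoning OK: hosts reach storage only through the SVC")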

Figure 3-4 on page 30 shows the data flow across the physical topology.



[Figure 3-4 shows the physical topology: hosts connect to the SVC nodes through the host
zone, and the SVC nodes reach the managed disks on the RAID controllers through the disk
zone; all data transfer passes through the SVC nodes.]

Figure 3-4 SVC physical topology

Logically, the two zones can be thought of as two separate logical SANs, leading to the
diagram shown in Figure 3-5.

[Figure 3-5 shows the logical view: the host zone and the disk zone behave as two separate
logical SANs, with the SVC sitting between the hosts and the managed disks on the RAID
controllers.]

Figure 3-5 SVC logical topology



3.3.1 SAN definitions
The following definitions are used in this section.

ISL hop
An interswitch link (ISL) is a connection between two switches, and is counted as an “ISL
hop.” The number of “hops” is always counted on the shortest route between two N-ports
(device connections). In an SVC environment, the number of ISL hops is counted on the
shortest route between the pair of nodes farthest apart. It measures distance only in terms of
ISLs in the fabric.

Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connections to the
traffic on the most heavily loaded ISL (or ISLs, where more than one is used between two
switches). This assumes a symmetrical network and a specific workload applied evenly from
all initiators and directed evenly to all targets. A symmetrical network means that all the
initiators are connected at the same level, and all the controllers are connected at the same
level.

As an example, on a 16-port switch where there are 14 host connections going through two
ISL connections, the oversubscription is 14:2, or 7:1 (14/2), with seven hosts “sharing” one
ISL. In the SVC environment, the oversubscription on each ISL must not exceed six.
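The ratio is simple arithmetic, as this small Python helper shows (illustrative only):

# Oversubscription = initiator connections per ISL (symmetrical network).

def oversubscription(initiator_ports, isls):
    return initiator_ports / isls

ratio = oversubscription(14, 2)   # the 16-port switch example above
print(f"oversubscription is {ratio:.0f}:1")
if ratio > 6:
    print("exceeds the SVC limit of six; add ISLs or move host connections")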

Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so
no matter what component fails, data traffic will continue. Connectivity between the devices
within the SAN is maintained, albeit possibly with degraded performance, when an error has
occurred. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics), so even if one counterpart SAN is
destroyed, the other counterpart SAN keeps functioning.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all the connectivity of the redundant SAN, but without the 100% redundancy. An
SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A
counterpart SAN is often called a SAN fabric. For example, if you have only one switch, this is
one fabric, or one counterpart SAN, but if you have two switches but they are not connected
together, you have two fabrics, or two counterpart SANs, and one redundant SAN if devices
are connected to both SANs.

Local fabric
Since the SVC supports remote copy, there may be significant distances between the
components in the local cluster and those in the remote cluster. The local fabric comprises the SAN components (switches, cables, and so on) that connect the components (nodes, hosts, switches) of the local cluster together.

Remote fabric
Since the SVC supports remote copy, there may be significant distances between the
components in the local cluster and those in the remote cluster. The remote fabric comprises the SAN components (switches, cables, and so on) that connect the components (nodes, hosts, switches) of the remote cluster together.



Local and remote fabric interconnect
These are the SAN components that are used to connect the local and remote fabrics. They may simply be single-mode optical fibers driven by high-power GBICs or SFPs, or they may be more sophisticated components, such as channel extenders or special SFP modules, including the CNT Ultranet Edge Storage Router. These can be used to extend the distance to thousands of kilometers. Performance degrades as distance increases.

Fibre Channel port logins


This is the number of hosts that can see any one SVC node port. Some disk subsystems,
such as the IBM TotalStorage Enterprise Storage Server (ESS) for example, recommend
limiting the number of hosts that use each port, to prevent excessive queuing at that port.
Clearly, if the port fails or the path to that port fails, the host may fail over to another port and
the fan-in criteria may be exceeded in this degraded mode.

Channel extender
A channel extender is a device for long distance communication connecting other SAN fabric
components. Generally, these may involve protocol conversion to asynchronous transfer
mode (ATM) or Internet Protocol (IP) or some other long distance communication protocol.

3.3.2 General design guidelines with SVC

Note: The SVC is not a RAID controller, so data integrity must be provided by disk subsystems that use RAID to protect the data.

To ensure high availability in SVC installations, the following rules apply when you design a
SAN with the SVC.

For the SVC intracluster


The following general guidelines apply:
򐂰 An SVC node always contains two host bus adapters (HBAs), each of which has two Fibre
Channel (FC) ports. If an HBA fails, this remains a valid configuration, and the node
operates in degraded mode. If an HBA is physically removed from an SVC node, then the
configuration is unsupported.
򐂰 To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but this is still a valid configuration. The remaining node operates in write-through mode (the write cache is disabled).
򐂰 The UPS must be in the same rack as the nodes, and a maximum of two nodes (in distinct
I/O groups) is allowed to be connected to each UPS. With UPSs shipped with SVC 2.1,
the UPS can only have one node connected.
򐂰 The Fibre Channel SAN connections between the SVC node and the switches are optical
fiber. The connections between the switches and the SVC nodes run at either 1 Gb/s or 2
Gb/s. However, all of the FC ports on SVC nodes in a single cluster must run at one
speed. Operation with different speeds running on the node to switch connections in a
single cluster is illegal.
򐂰 It is recommended that the two nodes within an I/O group be co-located, and co-location is also recommended for the nodes of an SVC cluster. An SVC cluster may be connected, via the SAN fabric switches, to application hosts, disk subsystems, or other SVC clusters using short wave or long wave optical FC connections, with a distance of up to 300 m (short wave 2 Gb/s) or 10 km (long wave) between the cluster and the host, and between the cluster and the disk subsystem. Longer distances are supported between SVC clusters when using intercluster Metro Mirror.
򐂰 A cluster should be regarded as a single entity for disaster recovery purposes. This
includes the disk subsystem that is providing the quorum disks for that cluster. This means
that the cluster and the quorum disks should be co-located. Locating the components of a
single cluster in different physical locations for the purpose of disaster recovery is not
recommended, as this may lead to issues over maintenance, service and quorum disk
management.

For the SVC intercluster


Two SVC clusters must not share the same disk subsystem; sharing can result in data loss. If the same MDisk becomes visible on two different SVC clusters, this is an error that can cause data corruption.

For the SAN fabric


The following guidelines apply:
򐂰 On the SAN, the SVC nodes are always connected to SAN switches and nothing else.
Each node must be connected to each of the counterpart SANs within the redundant SAN
fabric. Operation with direct connections between host and node, or disk subsystem and
node, is unsupported.
򐂰 The Fibre Channel switch must be zoned to permit the hosts to see the SVC nodes, and
the SVC nodes to see the disk subsystems. The SVC nodes within a cluster must be able
to see each other, the disk subsystems, and the front-end host HBAs.
򐂰 Mixed speeds are permitted within the fabric, but not for intracluster communication. The
user may use lower speeds to extend distance or to make use of 1 Gb/s legacy
components.
򐂰 Each of the local and remote fabrics should not contain more than three ISL hops within each fabric. Operation with more ISLs is unsupported. When a local and a remote fabric are connected together for remote copy purposes, there should be only one ISL hop between the two SVC clusters. This means that some ISLs can be used in a cascaded switch link between the local and remote clusters, provided that each of the local and remote fabrics contains no more than three internal ISL hops. This gives a maximum of seven ISL hops in an SVC environment with both local and remote fabrics.
򐂰 The switch configuration in an SVC fabric must comply with the switch manufacturer’s configuration rules. This may impose restrictions on the switch configuration. For example, a switch manufacturer may require that no other manufacturer’s switches be present in the SAN. Operation outside the switch manufacturer’s rules is not supported.
򐂰 The SAN contains only supported switches as listed on the Web at:
http://www.ibm.com/storage/support/2145
Operation with other switches is unsupported.
򐂰 Host HBAs in dissimilar hosts or dissimilar HBAs in the same host need to be in separate
zones. For example, if you have AIX and Microsoft hosts, they need to be in separate
zones. Here dissimilar means that the hosts are running different operating systems or
use different hardware platforms. Therefore, different levels of the same operating system
are regarded as similar. This is a SAN interoperability issue rather than an SVC
requirement.
򐂰 Switches from different vendors must not be intermixed in the same counterpart SAN. A redundant SAN made up of two counterpart SANs may contain switches from different vendors, as long as each counterpart SAN contains switches from only one vendor. For example, you can have one counterpart SAN containing only Brocade switches and the other counterpart SAN containing only McDATA switches. As an exception, when running in vendor interoperability mode in the SAN environment, it is supported to connect the IBM Eserver® BladeCenter™, using its integrated SAN switch, to a SAN containing switches from another switch vendor.

Disk subsystem guidelines


The following guidelines apply:
򐂰 In the SAN, disk subsystems are always connected to SAN switches and nothing else.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN. For example, in a
DS4000 configuration in which the DS4000 contains two redundant controllers, only two
controller minihubs are normally used. This means that Controller A in the DS4000 is
connected to counterpart SAN A, and controller B in the DS4000 is connected to
counterpart SAN B. Operation with direct connections between host and controller is
unsupported.
򐂰 The SVC is configured to manage LUNs exported only by disk subsystems as listed on the
Web at:
http://www.ibm.com/storage/support/2145
Operation with other disk subsystems is unsupported.
򐂰 All SVC nodes in an SVC cluster must be able to see the same set of disk subsystem
ports on each disk subsystem controller. Operation in a mode where two nodes see a
different set of ports on the same controller becomes degraded. The system logs errors
requesting a repair action. This can occur if inappropriate zoning was applied to the fabric.
It can also occur if inappropriate LUN masking is used. This has important implications for disk subsystems such as the DS4000, which imposes exclusivity rules on which HBA worldwide names (WWNs) a storage partition can be mapped to. It is up to the user to check
that the planned configuration is supported. You can find the supported hardware list at:
http://www.ibm.com/storage/support/2145

Host and application servers guidelines


The following guidelines apply:
򐂰 Each SVC node presents a virtual disk (VDisk) to the SAN through four paths. Since in
normal operation two nodes are used to provide redundant paths to the same storage, this
means that a host HBA can see eight paths to each LUN presented by the SVC. We use
zoning to limit the pathing to two paths. The hosts must run a multipathing device driver to
resolve this back to a single device. The multipathing driver supported and delivered by
SVC is the IBM Subsystem Device Driver (SDD). Native Multi-path I/O (MPIO) drivers on
selected hosts are supported. For operating system specific information about MPIO
support, see:
http://www.ibm.com/storage/support/2145
򐂰 The number of paths from the SVC nodes to a host must not exceed eight. The maximum
number of host HBA ports must not exceed four. Without any switch zoning, note the
following calculation:
Number of SVC ports x number of HBA ports on host = number of paths
8 SVC ports x 4 (max) host ports = 32 paths
To restrict the number of paths to a host, the switches should be zoned so that each host HBA port is zoned with one port from each node in the I/O groups from which the host is assigned VDisks.



򐂰 If a host has multiple HBA ports, then each port should be zoned to a different set of SVC ports to maximize high availability and performance.

For management
The following guidelines apply:
򐂰 In addition to a Fibre Channel connection, each device has an Ethernet connection for
configuration and error reporting. These connections are aggregated together through an
Ethernet switch.
򐂰 All nodes in an SVC cluster must be in the same IP subnet. This is because the node in
the SVC cluster must be able to assume the same SVC cluster IP address or SVC service
IP address.
򐂰 If IBM Tivoli® SAN Manager on the master console is used to watch the device status on the SAN, the master console must be included only in the zones that the IBM Tivoli SAN Manager is to monitor. If you include the master console in the storage zone, this may increase the boot time of the master console. The master console is not intended to be used for any other purpose and therefore does not need storage from the SAN/SVC.

3.3.3 Boot support


The SVC supports SAN boot for Windows 2003 using MPIO, for HP-UX using PVLinks as the multipathing software for the boot device, and for Solaris 9 running Veritas Volume Manager/DMP.

3.3.4 Configuration saving


The configuration data for the SVC cluster is hardened in each node by the advanced reliability, availability, and serviceability (RAS) design and the integration with the uninterruptible power supply. We recommend saving the configuration externally whenever changes, such as adding new nodes or disk subsystems, have been made to the cluster.
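
As a sketch only (verify against the documentation for your code level exactly which files are produced), the configuration can be saved from the CLI with the svcconfig utility:

IBM_2145:admin>svcconfig backup

The backup files that this command creates can be copied off the cluster and used with svcconfig restore if the configuration ever has to be recreated.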

3.3.5 High availability SAN design and configuration rules with SVC
Figure 3-6 on page 36 shows a basic two node configuration. To provide high availability, the
SVC should be configured in redundant SAN fabrics. Our configuration, as shown in
Figure 3-6 on page 36, is a redundant fabric made up of two 16-port switches.



Figure 3-6 Simple two-node SVC high availability configuration (four hosts, the master console, two SVC nodes, and two DS4300 controllers attached across switches SW11 and SW21)

3.4 Zoning
To manage the zoning in the fabric, we recommend that you use port zoning. Port zoning protects the zoning configuration when a component in the fabric, such as a Fibre Channel adapter, is replaced.

All SVC nodes ports in an SVC cluster must be able to see the same set of disk subsystem
ports on each disk subsystem controller. Operation in a mode where two nodes see a
different set of ports on the same disk subsystem is degraded, and the system will log errors
requesting a repair action. There is a storage zone that contains the storage and all the SVC
ports. If additional storage is attached, the port used must be added to the storage zone.

For the latest information, regularly check whether your configuration is supported at the
following Web site:
http://www.ibm.com/storage/support/2145

Figure 3-7 on page 37 shows an example of a host zone where each host adapter is zoned to
two SVC I/O group HBA ports, one from each node in the I/O group. The numbers 11, 12, 13
and 14 are for node1 FC port 1, 2, 3 and 4. The blue zone consists of A1, 13 and 22 meaning
host adapter FC port1, node1 FC port 3 and node2 FC port 2. The position in the switch
where the FC port is connected is used when making the zone definitions. For example, 13 is
connected to the switch with domain ID 11 port 3, (11,3); remember that the first port in the
switch starts numbering at zero.



Figure 3-7 Host zoning example (zone HOST-A1 contains 21,1; 11,2; 11,3 and zone HOST-A2 contains 22,1; 12,2; 12,3; no storage subsystem ports are included in the host zones)
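
As an illustration of how the zones shown in Figure 3-7 might be defined on a Brocade fabric (the zone and configuration names here are examples, and each set of commands is run in its own fabric; other switch vendors have equivalent commands):

zonecreate "HOST_A1_SVC1", "21,1; 11,2; 11,3"
cfgcreate "SVC_CFG1", "HOST_A1_SVC1"
cfgenable "SVC_CFG1"

The commands for the second fabric are analogous, using the members 22,1; 12,2; and 12,3. Use cfgadd instead of cfgcreate if a zoning configuration is already active in the fabric.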

3.5 Naming convention


Naming conventions in the open systems environment have always been a challenge. The challenge is to find naming conventions that remain stable as the environment changes. Everyone has their own way of naming equipment in an IT infrastructure. When working in a SAN environment where an SVC cluster is installed, we recommend assigning names that help in locating and identifying equipment, and that provide information about connections, so that changes and troubleshooting are easier.

One way to do this is to include site name, equipment name and adapter information in the
naming convention. As an example, in a two-site solution, site A and site B, all equipment in
site A is identified by odd numbers, while all equipment at site B is identified by even
numbers. Such an example could look like this: SVC1N1P1, where SVC1 is the name of the
equipment, number 1 indicates that it is located in site A, N1 is the node name for the SVC
cluster and the P1 is the SVC FC port number. On site B, the name would have been
SVC2N1P1. Names stemming from popular culture may be amusing, but they rarely give meaningful information about what the equipment is or where it is located, which has several disadvantages.

Figure 3-8 on page 38 shows an example of a naming convention.



Figure 3-8 An example of a naming convention in a dual SVC setup (switches SW11, SW12, SW21, and SW22; SVC nodes SVC1N1 through SVC2N2; disk controllers DS4301_A through DS4302_B)

Below, we list examples of names in an SVC cluster setup and document those that we use in this redbook.

SVC naming convention examples:


򐂰 SVC1N1 = SVC cluster 1, node 1
򐂰 SVC2N1 = SVC cluster 2, node 1
򐂰 SVC2N2P3 = SVC cluster 2, node 2, FC port 3

Disk subsystem name convention examples:


򐂰 DS4301_A
򐂰 DS4302_B
򐂰 EVA301_A
򐂰 ESS01B3A1

An explanation of the names for the disk subsystem:


򐂰 DS4301_A_1, where DS43 tells you the type of storage back-end, here a DS4300.
򐂰 DS4301_A_1, where 01 is the number of this DS4300 in your installation; because 1 is an odd number, it also tells you that the unit is located at site A.
򐂰 DS4301_A_1, where _A is the name of the controller in the DS4300.
򐂰 DS4301_A_1, where _1 is the FC port number on controller A. The port number is only used in SAN zoning information, not on the SVC; on the SVC we would recommend only DS4301 as the name.
򐂰 DS4302_B then means a DS4300, located at site B, controller B.
򐂰 EVA301_A then means an EVA3000, located at site A, controller A.
򐂰 ESS01B3A1 then means ESS01, located at site A, bay 3, adapter 1, following the ESS storage bay and adapter locations.



SVC cluster 1, Master console FC ports 1 and 2:
򐂰 SVC1MCP1
򐂰 SVC1MCP2

Host Fibre Channel ports FC0 and FC1:


򐂰 HOSTNAMExx_FC0
򐂰 HOSTNAMExx_FC1

Where xx is a number used to identify the server and give information about its location. For example, AIX01_fc0 and AIX01_fc1 tell you the type of server, which server it is, and where it is located, in this example at site A.

SAN switch names:


򐂰 SW11, where SW indicates a switch, the first number gives the fabric ID, and the last number gives the location; the two numbers combined are also used as the domain ID of the switch.
򐂰 SW12 is then SAN switch 2 in fabric 1, located at site B, domain id 12.
򐂰 SW21 is then SAN switch 1 in fabric 2, located at site A, domain id 21.

When using these kinds of names, we have a limit of 10 switches in a fabric, so if we need
more switches we will use two digits for the switch number, for example, SW101 and so on.

SVC zone names:


򐂰 SVC1 which includes SVC1N1P1, SVC1N1P2, SVC1N1P3, SVC1N1P4, SVC1N2P1 and
so on for all SVC1 nodes.
򐂰 SVC2 which includes all SVC2 node ports.

Storage zone names:


򐂰 SVC1_DS4301
򐂰 SVC2_DS4302

Host zone names:


򐂰 HOSTNAME_FC0_SVC1
򐂰 HOSTNAME_FC1_SVC1

Master console zone name:


򐂰 SVC1MCP1_SVC1

Metro Mirror zone name:


򐂰 SVC1_SVC2

Changing domain IDs can affect your zoning setup; therefore, you must change them before you change your zoning information. A standard SAN address is made up of the switch domain ID, the port number, and the address of the port:
xxppaa

where xx is the Domain ID, pp is the port number and aa is the address of the port.

Brocade did not have SAN hubs or switches with more than 16 ports in the beginning, and used the first p to specify the type of the product, hub or switch. Because switches today need to support more than 16 ports, Brocade created a function called core PID, so that you can change the addressing in the switch to support addresses using both ‘p’ digits as the port number.



A change of the domain ID or core PID disrupts some UNIX® operating systems and also affects how the UNIX operating system derives the device SCSI address. Here is an example of a Brocade port address:
xxypaa

where xx is the domain, y is the PID, p is the port, and aa is the address, 00 for N ports or
loop address for NL ports. This is only for Brocade/IBM switches with 16 or fewer ports; in
switches with more ports, the start address is always 00. Consider this example for a 16 port
Brocade/IBM switch:
010100 domain=01, PID = 0, port=1

Some older Brocade/IBM switches have the core PID set to 1, and these will not merge with switches that have it set to 0. This is a legacy from the early days of Fibre Channel; switches now need to use both digits of the port field for port addressing. See the IBM SAN Survival Guide, SG24-6143, for more information regarding switches.

Note: A change of the domain ID or core pid disrupts some UNIX operating systems. Make
sure you check first before you attempt this when storage is already defined and in use.

3.5.1 Dual room high availability configuration with the SVC


Figure 3-9 shows a high availability configuration of the SVC when two SVC clusters are in
two different rooms/locations. This is the configuration that we recommend for maximum
availability.

Figure 3-9 High availability SAN Volume Controller cluster in a two-site configuration (up to 64 hosts per SVC cluster, up to 256 ports in each fabric, and up to 64 RAID controllers)



3.5.2 Local and remote SAN fabrics with SVC
The SVC supports both intracluster and intercluster Metro Mirror. From the intracluster point of view, the only reasonable candidate for a Metro Mirror operation is the other node in the same I/O group. Intercluster operation needs a pair of clusters separated by a number of moderately high bandwidth links. The bandwidth must be large enough to handle the writes generated by the Metro Mirror process; no reads are done over the links. This describes the configuration that is shown in Figure 3-9 on page 40:
򐂰 In the SVC, the local and remote fabric interconnect supported is a single ISL hop
between a switch in the local fabric and a switch in the remote fabric, if it is a single mode
fibre, up to 10 km in length. Check the support Web site for any changes to this at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/installing.html
򐂰 SVC 2.1 supports operation with Fibre Channel DWDM extenders and Dark Fibre using
the CNT Ultranet Edge Storage Router. The supported distances depend on the SAN
fabric vendor. For example, in a Brocade SAN, the latency supported is 10 ms, and in a
McDATA SAN it is 34 ms. When using an approximation of 150 km per ms of latency, this
gives a distance of 1500 km in a Brocade SAN and 5100 km in a McDATA SAN. These
numbers are not guaranteed and are only a rule of thumb. For accuracy, the connections
need to be measured to get a more correct estimate of the distance that can be supported.
򐂰 In Metro Mirror configurations, additional zones are required that contain only the local nodes and the remote nodes. This also applies where local hosts need to see the remote nodes, or remote hosts need to see the local nodes. It is unsupported to create a zone that contains both the local and remote disk subsystems together with either the local or remote nodes, or both.

3.6 SVC disk subsystem planning


This section describes the various types of disk groups and their relationships.

In the configuration shown in Figure 3-10 on page 42, the disk subsystem is presenting a
LUN to the SVC and another LUN to host B. The SVC presents the VDisks created from the
MDisk to the host A. Since the disk subsystem is a DS4000, host B would have RDAC
installed to support the direct attachment, and SDD is installed on host A to support the
attachment to the SVC. This is a supported configuration.



Figure 3-10 Disk subsystem shared (host A uses SDD to access a VDisk through the SVC, while host B uses RDAC to access a LUN directly)

With the ESS, you can attach directly to an ESS LUN and to a VDisk from the SVC that
comes from the ESS, as long as the same LUN is not assigned to both the host and the SVC.



The host uses SDD to access the LUNs presented from both the ESS and the SVC. This is a
supported configuration and is shown in Figure 3-11.

Figure 3-11 Host connected to ESS and SVC (host A uses SDD to access both an ESS LUN directly and a VDisk through the SVC)

In the configuration shown in Figure 3-12 on page 44, the host needs to have both the RDAC
driver installed for access to the DS4000 and the SDD installed to access the SVC.

The SVC supports the use of the IBM SDD, native Multi-path I/O (MPIO) drivers on selected
operating systems, and some other operating system specific software.

Check for supported configurations on the Web at:


http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002443



Figure 3-12 DS4000 supported configuration (host A runs both SDD, for access through the SVC, and RDAC, for direct access to the DS4000)

3.6.1 Block virtualization


The managed disk group (MDG) is at the center of the many-to-many relationship between
managed disks and virtual disks. It acts as a container into which managed disks contribute
chunks of disk blocks, known as extents, and from which virtual disks consume these extents
of storage. MDisks in the SVC are LUNs assigned from the disk subsystem to the SVC, and
can be either managed or unmanaged. A managed MDisk is an MDisk assigned to an MDG.
򐂰 MDGs are collections of managed disks. A managed disk is contained within exactly one
MDG.
򐂰 An SVC supports up to 128 MDGs.
򐂰 There is no limit to the number of virtual disks that can be in an MDG other than the per
cluster limit.
򐂰 MDGs are collections of virtual disks. Under normal circumstances, a virtual disk is
associated with exactly one MDG. The exception to this is when a virtual disk is migrated
between MDGs.
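
You can inspect the mode of the MDisks from the CLI. This is a sketch; the exact output columns depend on your code level:

IBM_2145:admin>svcinfo lsmdisk

The mode column of the output shows whether each MDisk is unmanaged, managed, or image.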

3.6.2 MDGs, I/O groups, virtual disks, and managed disk


Figure 3-13 on page 45 shows three disk subsystems that were configured to provide a
number of LUNs. In SVC terminology, each of these logical units or LUNs is a managed disk.
Disk subsystem A contains two managed disks, known as M1 and M2. Disk subsystem B
contains managed disks M3 and M4. Disk subsystem C contains managed disks M5 and M6.



Figure 3-13 Disk relationships (I/O groups PQ and RS; virtual disks V1 to V7; MDGs X, Y, and Z; managed disks M1 to M6 on RAID controllers A, B, and C)

Figure 3-13 also shows three MDGs: X, Y, and Z. Each managed disk is contained within a single MDG, and one MDG can span controllers. The SVC supports an arbitrary relationship between disk subsystems and MDGs. The MDG simply contains a collection of managed disks from the set of available controllers.

Note: We recommend that only LUNs from one disk subsystem form part of an MDG.

Additionally, Figure 3-13 shows virtual disks numbered V1 to V7. Each virtual disk is
contained entirely within an MDG. This is the normal situation. The only exception to this is
during migration.

Virtual disks are also members of another collection, namely I/O groups. Figure 3-13 shows
two I/O groups named PQ and RS. An I/O group contains an arbitrary set of virtual disks and
exactly two SVC nodes (unless one has failed). The I/O group defines which nodes support
I/O access from hosts.

There is no fixed relationship between I/O groups and MDGs. An individual virtual disk is
normally a member of one MDG and one I/O group:
򐂰 The MDG defines which managed disks from the disk subsystem that makes up the virtual
disk.
򐂰 The I/O group defines which SVC nodes provide I/O access to the virtual disk.



3.6.3 Extents
A virtual disk occupies an integer number of extents. Its length does not need to be an integer
multiple of the extent size, but must be an integer multiple of the block size. Any space left
over between the last logical block in the virtual disk and the end of the last extent in the
virtual disk is unused.

You can define a VDisk with the smallest granularity of 512 bytes (a block). However, an entire extent is reserved even if it is only partially used. An extent is never shared between virtual disks. Each extent on a managed disk is contained in at most one virtual disk. Free extents are associated with no virtual disk.

SVC supports extent sizes of 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, and 512 MB. The
extent size is a property of the MDG, which is set when the MDG is created. It cannot be
changed and all managed disks, which are contained in the MDG, have the same extent size.
It follows that all virtual disks associated with the MDG must also have the same extent size.
Table 3-1 shows the relationship between the extent size and the maximum capacity of the
cluster.

Table 3-1 Extent size and maximum cluster capacities


Extent size    Maximum cluster capacity

16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB

In each case, the maximum cluster capacity equals the extent size multiplied by the roughly four million extents that a cluster can manage, so doubling the extent size doubles the maximum capacity.

3.6.4 Image mode virtual disk


Image mode provides a direct block-for-block translation from the managed disk to the virtual disk, with no virtualization. This mode is intended to allow virtualization of managed disks that already contain data written directly, not through an SVC node, by a pre-virtualization disk subsystem. When an image mode virtual disk is created, it directly corresponds to the managed disk it is created from.

This allows you to insert an SVC into the data path of an existing storage configuration with
minimal downtime. After the SVC is inserted into the data path using image mode, you can
use the migration facilities to migrate the data to managed mode and rearrange the data while
an application is accessing the data.

When you create an image mode virtual disk, the managed disk specified must not be a
member of an MDG. The managed disk is made a member of the specified MDG as a result
of the creation of the image mode virtual disk.

Image mode provides direct mapping from managed disk to virtual disk. You can think of it as
a property of both virtual disks and managed disks.

The capacity specified must be less than or equal to the size of the managed disk. If it is less
than the size of the managed disk, then the unused space in the managed disk is not
available for use in any other virtual disk. There is no facility to specify an offset. Therefore,
logical block address (LBA) “N” on the resulting image mode virtual disk maps directly to LBA “N” on the image mode managed disk. Image mode virtual disks have a minimum size of one block (512 bytes) and always occupy at least one extent.

Image mode managed disks are members of an MDG, but do not contribute free extents to
the pool of free extents. Therefore, an image mode managed disk can have at most one
virtual disk associated with it.

3.6.5 Managed mode virtual disk


Disks operating in managed mode provide a full set of virtualization functions.

Within an MDG, the SVC supports an arbitrary relationship between extents on (managed mode) virtual disks and extents on managed disks. Subject to the constraint that each managed disk extent is contained in at most one virtual disk, each virtual disk extent maps to exactly one managed disk extent (except while a migration is in progress).

Figure 3-14 shows virtual disk V, which is made up of a number of extents. Each of these extents is mapped to an extent on one of the managed disks A, B, or C. The mapping table stores the details of this indirection. Note that some of the managed disk extents are unused; that is, there is no virtual disk extent that maps to them. These unused extents are available for use in creating new virtual disks, migration, expansion, and so on.

Figure 3-14 Simple view of block virtualization

Creating managed mode virtual disks


When a virtual disk is created, the SVC needs to know the policy to apply to create the initial
assignment of managed disk extents to virtual disk extents. The supported policies are listed
in the following sections. These policies are only used for the creation of a new virtual disk.



After the virtual disk is created, the policy has no effect and is not considered when making
decisions during migration operations.

Striped
When a virtual disk is created using a striped policy, its extents are allocated from the specified ordered list of managed disks. The allocation algorithm starts with the first managed disk in the ordered list and attempts to allocate an extent from it; it then moves to the next disk, and so on, for each managed disk in turn. If a managed disk has no free extents, it misses its turn and the turn passes to the next managed disk in the list. When the end of the list is reached, the algorithm loops back to the first disk in the list. Allocation proceeds until all the required extents have been allocated.

When selecting which extent to allocate from the chosen managed disk, the policy followed is
as described in 3.6.6, “Allocation of free extents” on page 48. This allocation policy leads to a
coarse grained striping. The granularity of the striping is at the extent level. This coarse
grained striping is unlikely to result in large bandwidth for sequential transfers but is likely to
spread the workload caused by random small transactions across the managed disks from
which the extents are allocated.

Wide striping increases the probability that the data on the virtual disk will be lost due to the
failure of one of the managed disks across which the virtual disk is striped. It is acceptable for
the list to contain only one disk. In this case, extents are allocated from a single disk as
described in Allocation of free extents. Contrast this with the allocation scheme for the
sequential policy.

Sequential
When a virtual disk is created using a sequential policy, its extents are allocated from a single specified managed disk. The SVC searches the target managed disk for regions of sequential free extents large enough to allocate the virtual disk from completely sequential extents. If it finds more than one such region, it chooses the smallest region that satisfies this condition. If it does not find any suitable region, creation of the virtual disk fails.

3.6.6 Allocation of free extents


Migration operations and some of the virtualization operations require the allocation of a
specific number of extents from a specific set of managed disks. The algorithm used to
achieve this is described in the following sections.

Choosing the managed disk to allocate from


Where the set of managed disks to allocate extents from contains more than one disk,
extents are allocated from managed disks in a round robin fashion. If a managed disk has no
free extents when its turn arrives, then its turn is missed and the round robin moves to the
next managed disk in the set which has a free extent.

As the algorithm progresses, disks with no free extents on the previous pass of the round
robin are queried for free extents on each turn of the algorithm in case extents become free.

Choosing the extent to allocate from a specific managed disk


When an extent is to be allocated from a specific managed disk, the allocation policy is to
allocate the next free extent from a list of free extents held by the SVC cluster for the specific
managed disk.



3.6.7 Selecting MDGs
There is only one question relating to the selection of MDGs:
򐂰 From which MDisk group should I create my VDisk?

The answer to this question is that you need to keep in mind that an individual virtual disk is a
member of one MDG and one I/O group:
򐂰 The MDG defines which managed disks provided by the disk subsystem make up the
virtual disk.
򐂰 The I/O group defines which SVC nodes provide I/O access to the virtual disk.

Note: There is no fixed relationship between I/O groups and MDGs.

Therefore, you could define the virtual disks using the following:
򐂰 Optimize the performance between the hosts and SVC by repartitioning the VDisks
between the different nodes of the SVC cluster. This means spreading the load equally on
the nodes in the SVC cluster.
򐂰 Get the level of performance, reliability, and capacity you require by using the MDG that corresponds to your needs (you can access any MDG from any node). That is, choose the MDG that fulfils the demands for your VDisk with respect to performance, reliability, and capacity.

Note: In an MDG where the VDisks are all striped, we recommend that the MDisks be the
same size, if possible. If not, you might not be able to create a VDisk using the last extents
in the MDG. For example, if you have two MDisks available, one with 10 GB capacity and
another with 20 GB capacity, you cannot create a new VDisk between 20-30 GB, because
the extents cannot be striped equally.

3.6.8 I/O handling and offline conditions


For a virtual disk to be online, all managed disks in the MDG or MDGs associated with the
virtual disk must be online. This applies to image mode virtual disks and managed mode
virtual disks. A virtual disk is offline if any managed disk in the MDG is offline, even if that
managed disk does not contribute any extents to the virtual disk in question, or the managed
disk has no allocated extents.

Note: Normally, a virtual disk is associated with just one MDG. However, for the duration of
a migration between MDGs, the virtual disk is associated with two MDGs. In this case, the
offline rules apply to both MDGs for the duration of the migration only.

Referring back to Figure 3-13 on page 45, this means that if managed disk M1 is taken offline
by disk subsystem A, virtual disks V1 and V2 are taken offline by the SVC.

This notion of offline and online is managed on a node basis. Therefore, if a condition arises
that causes one SVC node to see a managed disk offline, then the affected virtual disks are
taken offline on that node only, but will still be online at the other node.

For example, refer again to Figure 3-13 on page 45. If the SAN connection between disk subsystem B and SVC node P were to fail, node P would lose contact with managed disks M3 and M4. Since M3 is in MDG X and M4 is in MDG Y, all the virtual disks in MDGs X and Y are taken offline on node P. Therefore, hosts accessing node P see virtual disk V2 go offline. Hosts accessing V2 via node Q continue to see the virtual disk as online. When using SDD, the paths to node P show offline while the paths to node Q show online, and the host still has access to the virtual disks.

3.6.9 Quorum disks


A quorum disk is used to resolve tie-break situations, when the “voting set” of nodes disagree
on the current cluster state. The voting set is an overview of the SVC cluster configuration
running at a given point in time, and is the set of nodes and quorum disk which are
responsible for the integrity of the SVC cluster. On cluster creation, the voting set consists of
a single node with a unique ID of 1, which was used to create the cluster. When nodes are
integrated into the SVC cluster, they get added to the voting set, and when a node is removed
from the SVC cluster it will also be removed from the voting set. A failed node is considered
as a removed node, and is removed from the voting set.

When MDisks are added to the SVC cluster, the SVC checks each MDisk to see whether it can be used as a quorum disk. If the MDisks fulfil the requirements, the SVC assigns the first three suitable MDisks as quorum candidates, and one of them is selected as the active quorum disk. If possible, the SVC places the quorum candidates on different disk subsystems. Once the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented via different disk subsystems. When the set of quorum disk candidates has been chosen, it is fixed. A new quorum disk candidate will only be chosen if:
򐂰 The administrator requests that a specific MDisk becomes a quorum disk using the
svctask setquorum command.
򐂰 An MDisk which is a quorum disk is deleted from an MDG.
򐂰 An MDisk which is a quorum disk changes to image mode.

An MDisk will not be replaced as a quorum disk candidate simply because it is offline.
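
As a sketch of the administrator override mentioned in the first bullet (the MDisk name and quorum index here are examples), the following command makes mdisk9 quorum candidate 1:

IBM_2145:admin>svctask setquorum -quorum 1 mdisk9

The -quorum parameter takes the index (0, 1, or 2) of the quorum candidate to be replaced.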

The cluster must contain at least half of its nodes in the voting set to function. A tie-break
situation can occur if exactly half the nodes in the cluster fail at the same time, or if the cluster
is divided so that exactly half the nodes in the cluster cannot communicate with the other half.

For example, in a cluster of four nodes, if any two nodes fail at the same time, or any two
cannot communicate with the other two, a tie-break condition exists and must be resolved. To
resolve the tie-break condition a quorum disk is used. The cluster automatically chooses
three managed disks to be quorum disks. One of these disks is used to settle a tie-break
condition. If a tie-break condition occurs, the first half of the cluster to access the quorum disk after the split has occurred locks the disk and continues to operate. The other side stops.
This action prevents both sides from becoming inconsistent with each other.

In a two-site solution, we recommend using two SVC clusters, one at each site, and mirroring
the data by using either host mirror software/functions or by using SVC Metro Mirror. In a
two-site solution with only one SVC cluster, you can have a situation where you will lose
access to the data. For example, in a four node SVC cluster, with two nodes at each location,
the quorum will only be located at one of the sites, and if that site “dies,” the remaining two
nodes cannot get access to the quorum disk, and will also shut down. As a result, the entire
SVC cluster is shut down, even though only one site is out. The same applies in a two-node
SVC cluster, if you put the two nodes in different locations or rooms.

Important: A cluster should be regarded as a single entity for disaster recovery purposes.
This means that the cluster and the quorum disk should be co-located.



3.6.10 Virtualization operations on virtual disks
You can perform several operations on virtual disks as explained in the following sections.

Expanding a virtual disk


A virtual disk can be expanded. The granularity of expansion is one block (512 bytes). If the
expansion requires the allocation of additional extents then these are allocated to the virtual
disk from the managed disks specified using the algorithm described in “Allocation of free
extents” on page 48. Expanding a virtual disk using the sequential policy forces the
virtualization policy to be changed to striped.

A virtual disk can also be expanded by a single extent. This gives the user the ability to
expand the virtual disk by selecting individual managed disk extents, allowing any desired
mapping from virtual to managed extents to be created.

A security clear feature is provided to allow the resulting additional space to be overwritten
with zeros.

Image mode virtual disks cannot be expanded. They must first be migrated to managed
mode.

Note: Not all operating systems can tolerate the expansion of a virtual disk. A reboot or
remount of the disk may be needed to use the additional space.
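
A sketch of an expansion from the CLI, where VD1 is an example VDisk name and -size specifies the amount by which the VDisk grows:

IBM_2145:admin>svctask expandvdisksize -size 5 -unit gb VD1

This expands VD1 by 5 GB; the new extents are allocated as described in “Allocation of free extents” on page 48.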

Reducing a virtual disk


A virtual disk can be shrunk. The granularity of shrinking is one block (512 bytes). If the shrink
operation allows extents to be freed, then these are returned to the pool of free extents for
allocation by later virtualization and migration operations.

Image mode virtual disks cannot be reduced in size. They must first be migrated to managed
mode.

Warning: Not all operating systems can tolerate a virtual disk being reduced in size. You
must be cautious and know where data resides or data loss can occur.
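
A sketch of a shrink operation (VD1 is an example name; -size is the amount by which the VDisk is reduced, so be certain that the space being removed holds no data):

IBM_2145:admin>svctask shrinkvdisksize -size 5 -unit gb VD1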

Deleting a virtual disk


A virtual disk can be deleted. When a virtual disk is deleted, all host mappings are deleted and any cached read or write data is discarded. Any FlashCopy mappings or Metro Mirror relationships in which the disk is participating are also deleted.

If the virtual disk was operating in managed mode, then the extents are returned to the pool of
free extents for allocation by later virtualization operations. If the virtual disk was an image
mode virtual disk, deleting the virtual disk causes the managed disk to be ejected from the
MDG. The mode of the managed disk is returned to “unmanaged” mode. This makes the
delete operation the inverse of the create operation for image mode disks.
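
A minimal sketch of the delete operation (VD1 is an example name):

IBM_2145:admin>svctask rmvdisk VD1

A -force flag is available to override checks, for example, when host mappings still exist; given the consequences described above, use it with care.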

3.6.11 Creating an MDisk group (extent size rules)


There are several guidelines or rules that you must follow when creating an MDisk group.

Number of MDGs
The number of MDisk groups depends on the following factors:



򐂰 The need for image mode virtual disks (data migration)
򐂰 The need for managed mode virtual disks with sequential policy
򐂰 The models of disk subsystem controller (with or without cache, DS4000, ESS, and so on), which have different properties with respect to performance, availability, response time, and so on.

It is possible to have a common MDG for the SVC cluster. However, a virtual disk (VDisk) is offline if any managed disk in the MDG is offline, even if that managed disk does not contribute any extents to the virtual disk in question, or the managed disk has no allocated extents. The more managed disks there are in an MDG, the more the VDisk (host LUN) is striped and the better the performance is.

We recommend that you:


򐂰 Create at least one separate MDG for all the image mode virtual disks.
򐂰 Create one separate MDG for each array (or RAID) type presented from a disk subsystem, so that the MDGs are characterized by performance, RAID level, reliability, vendor, and so on.

Note: It may be wise to keep each disk subsystem in a separate MDisk group. This
prevents a failure in storage subsystem A from affecting VDisks in an MDisk group
from storage subsystem B. If a VDisk is composed of MDisks from both A and B, then
a failure in either A or B causes the VDisk to be unavailable.

򐂰 Name the MDGs in a way that makes it easy, when you create a virtual disk, to associate it with an MDG that has the required level of performance and reliability. Consider these examples: pool1_high_perf_high_rela, pool2_low_perf_low_rela, mDisk_grp_ESS1, mDisk_grp_DS40002, and mDisk_grp_raid10. We use the following in our SVC environment for this redbook: MDG1_DS43, MDG2_DS43, and so on.

Size of extent
If you want to migrate a VDisk from one MDisk group to another MDisk group, the extent size must be the same in the two MDisk groups. Because of this, it can be useful to set a common extent size for all the MDisk groups. A value of 32 MB (128 TB maximum cluster capacity) or 64 MB (256 TB) can be the best trade-off between performance and capacity.

If you need to migrate a VDisk to an MDisk group that has a different extent size, you need to use Metro Mirror instead; if the source and target VDisks are in the same SVC cluster, they must belong to the same I/O group.

Configuration steps
The parameters needed to create an MDG are:
򐂰 The name you want to assign to the MDG
򐂰 List of the managed disk you want to include
򐂰 The extent size you want to use

The operations are:


1. Create an MDG.
2. Add managed disks to the MDG.
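
A sketch of these two steps on the CLI, assuming MDisk names as reported by the cluster and an example group name:

IBM_2145:admin>svctask mkmdiskgrp -name MDG1_DS43 -ext 32 -mdisk mdisk0:mdisk1
IBM_2145:admin>svctask addmdisk -mdisk mdisk2 MDG1_DS43

The first command creates the MDG with a 32 MB extent size from mdisk0 and mdisk1; the second adds mdisk2 to the group later.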



3.6.12 Creating a managed disk
First, you need to create the logical disks (LUNs) in your disk subsystem. We recommend that you use the maximum LUN size that can be presented as an MDisk. The discovery of the managed disks is done automatically by the SVC. A managed disk remains in unmanaged mode until you include it in an MDG.

You need at least one managed disk for the support of the quorum disk used in the cluster. All
SVC nodes must have access at any time to all the managed disks.

The size of a managed disk can be up to 2 TB. Using common sizes for all the managed disks (for example, 16 GB or 32 GB) helps with simplicity and ensures that, as far as possible, all the MDisks are used in the striping process for a virtual disk with the striped policy. If you have three managed disks, two of 4 GB and one of 210 GB, then very quickly only the 210 GB disk is used in the striping process.

For image mode and managed mode with sequential policy virtual disks, you must create managed disks (LUNs in the disk subsystem) at least the same size as the origin disk you want to migrate (for image mode) or that you want to copy (for managed mode with sequential policy). If no extents are available, the creation of the virtual disk fails.

Configuration steps
Define the logical or physical disks (logical units) in the disk subsystem.

When you include a managed disk into an MDG, the mode changes from unmanaged to
managed.
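
If newly defined LUNs do not appear automatically, the cluster can be told to rescan the fabric. This is a sketch; the filter syntax may vary by code level:

IBM_2145:admin>svctask detectmdisk
IBM_2145:admin>svcinfo lsmdisk -filtervalue mode=unmanaged

detectmdisk rediscovers the LUNs presented to the cluster, and the lsmdisk filter lists the new, still unmanaged, MDisks.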

3.6.13 Creating a virtual disk


An individual virtual disk is a member of one MDG and one I/O group. The MDG defines which managed disks provided by the disk subsystem make up the virtual disk. The I/O group defines which SVC nodes provide I/O access to the virtual disk.

Note: There is no fixed relationship between I/O groups and MDGs.

Define all the virtual disks to:


򐂰 Optimize the performance between the hosts and SVC by repartitioning the VDisks
between the different nodes of the SVC cluster.
򐂰 Get the level of performance, reliability, and capacity that you require by using the MDG
that corresponds to your needs. You can access any MDG from any node.

When you create a VDisk, it is associated to one node of an I/O group. By default, every time
you create a new VDisk, it is associated to the next node using a round robin algorithm. For
example, you may have four hosts (host1, host2, host3, host4) with 100 VDisks for each host
of the same size with the same level of I/O activity, and a four node (two I/O groups) cluster.
The result is 100 VDisks on each node (25 VDisks from host1, 25 VDisks from host2, and so
on).

You can specify a preferred access node. This is the node through which to send I/O to the VDisk, instead of using the round robin algorithm. For example, consider one host with four VDisks (VD1, VD2, VD3, and VD4). VD1 and VD3 have a high level of I/O activity; VD2 and VD4 have a low level of I/O activity. If you use the round robin algorithm, VD1 and VD3 end up on the same node 1 of the I/O group, and VD2 and VD4 on the same node 2 of the I/O group. To avoid this, use the preferred node feature to specify VD1 on node 1 and VD3 on node 2. We recommend that you use the preferred node for I/O option when you create your VDisks.

A virtual disk is defined for an I/O group which provides the following benefits:
򐂰 The VDisk is “exported” by the two nodes of the I/O group to the host via eight paths (four
paths for each node). We use zoning to limit it to two paths from each node.
򐂰 Each write is copied into the cache of the two nodes before acknowledgment is sent to the
host.

Even if you have eight paths for each virtual disk, all I/O traffic flows only towards one node
(the preferred node). Therefore, only four paths are really used by SDD. The other four are
used only in case of a failure of the preferred node.

Before you create a virtual disk, you can check the amount of space that is available in the
MDG. You can determine the free capacity for an MDisk or an MDisk group as shown in
Example 3-1.

Example 3-1 lsmdiskgrp command


IBM_2145:admin>svcinfo lsmdiskgrp ARR36P5N
id 0
name ARR36P5N
status online
mDisk_count 2
vDisk_count 0
capacity 333.9GB
extent_size 32
free_capacity 333.9GB

Creating image mode virtual disks


Use image mode virtual disks when a managed disk already has data on it from a pre-virtualization disk subsystem. When an image mode virtual disk is created, it directly corresponds to the managed disk from which it is created. Therefore, virtual disk LBA x = managed disk LBA x.

When you create an image mode disk, the managed disk must have a mode of unmanaged
and therefore does not belong to any MDG. A capacity of 0 is not allowed. Image mode virtual
disks may be created in sizes with a minimum granularity of 512 bytes, and must be at least
one block (512 bytes) in size.

The SVC may reserve an integer number of extents on which to hold the image mode disk. It
effectively rounds up its size to the nearest whole number of extents.
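
A sketch of creating an image mode virtual disk (the MDG, MDisk, and VDisk names are examples; mdisk7 must be unmanaged and already contain the data):

IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG_IMG -iogrp io_grp0 -vtype image -mdisk mdisk7 -name IMG_VD1

We assume here that when -size is omitted the VDisk takes the full capacity of the MDisk; if you specify -size, it must not exceed the size of the managed disk, as noted above.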

Creating managed mode virtual disks with sequential policy


When you create a managed mode virtual disk with sequential policy, you must use a
managed disk containing free extents which are sequential and in a size that is equal or
greater than the size of the virtual disk you want to create. This is due to the fact that, even if
you know the number of free extents in a managed disk, you do not know if they are
sequential.
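
A sketch of creating a sequential-policy virtual disk (the names and size are examples); mdisk3 must hold enough sequential free extents for the full 20 GB:

IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG1_DS43 -iogrp io_grp0 -vtype seq -mdisk mdisk3 -size 20 -unit gb -name SEQ_VD1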

Modifying a virtual disk


You can change the I/O group with which a virtual disk is associated. This requires a flush of
the cache within the nodes in the current I/O group to ensure that all data is written to disk. I/O
should be suspended at the host level before you perform this operation.
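
Assuming your code level exposes the change through the svctask chvdisk command (check the command reference for your release), a sketch of moving VD1 to another I/O group after host I/O has been quiesced:

IBM_2145:admin>svctask chvdisk -iogrp io_grp1 VD1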



Configuration steps
The parameters that you need to create a virtual disk are:
򐂰 The name you want to assign to the virtual disk
򐂰 The I/O group you want to use
򐂰 The MDG you want to use
򐂰 The capacity you want to define

The operations are:


1. Create a managed mode VDisk.
2. Create an image mode VDisk.
3. Modify the VDisk.
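
A sketch of step 1, creating a striped VDisk with a preferred node (all names are examples following our conventions):

IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG1_DS43 -iogrp io_grp0 -node SVC1N1 -size 10 -unit gb -vtype striped -name VD1

The -node parameter sets the preferred access node discussed earlier; omit it to let the cluster assign nodes round robin.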

3.6.14 Quality of service on VDisk


You can set the I/O governing rate, which is a cap on the amount of I/O that is accepted for a virtual disk. You can set it in terms of I/Os per second or MBs per second. By default, no I/O governing is set when a virtual disk is created.

An I/O threshold is expressed as a number of I/Os, or a number of MBs, over a minute. The
threshold is evenly divided between all SVC nodes that service the VDisk, that is to say,
between the nodes which form the I/O Group of which the VDisk is a member.

The algorithm operates two levels of policing: one while I/O is under the threshold, and one when the threshold has been reached. While the VDisk on each SVC node is receiving I/O at a rate below the governed level, no governing is performed. A check is made every minute that the VDisk on each node is continuing to receive I/O below the threshold level. If this check shows that the host has exceeded its threshold on one or more nodes, then policing begins for new I/Os.

While policing is in force:


򐂰 A threshold quantity is calculated for a one second period.
򐂰 I/Os are counted over a period of a second.
򐂰 If I/Os are received in excess of the one second threshold quantity on any node in the I/O group, those and later I/Os are pended.
򐂰 When the second expires, a new threshold quantity is established, and any pended I/Os
are re-driven under the new threshold quantity.

If a host stays within its one second threshold quantity on all nodes in the I/O Group for a
period of 1 minute, then the policing is relaxed, and monitoring takes place over the 1 minute
period as it was before the threshold was reached.
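
A sketch of setting the governing rate on an existing VDisk (VD1 and the values are examples):

IBM_2145:admin>svctask chvdisk -rate 10000 VD1
IBM_2145:admin>svctask chvdisk -rate 200 -unitmb VD1

The first form caps VD1 at 10,000 I/Os per second (the default unit); the second form, with -unitmb, caps it at 200 MBs per second instead.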

3.6.15 Creating a host (LUN masking)


The maximum number of FC HBA ports is four per host.

Configuration steps
The parameters needed to create a host and virtual disk to host mapping are:
򐂰 The name you want to assign to the host
򐂰 The list of the WWPN of the FC HBAs of the host
򐂰 The name of the VDisks you want to assign to the host



The operations are:
1. Create a host.
2. Create a VDisk to host mapping.
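
A sketch of these two steps (the host name, WWPNs, and VDisk name are examples; use the real WWPNs of the host HBAs):

IBM_2145:admin>svctask mkhost -name AIX01 -hbawwpn 210000E08B892BCD:210100E08BA92BCD
IBM_2145:admin>svctask mkvdiskhostmap -host AIX01 VD1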

3.6.16 Expanding an SVC cluster configuration


You can expand an SVC cluster configuration as explained in the following sections.

Adding a node to a cluster


If you defined a two-node SVC cluster, you can easily add new nodes to support additional hosts and disk subsystems, or to redistribute workload.
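
A sketch of adding a node (the panel name, I/O group, and node name are examples; the panel name is the ID shown on the node front panel):

IBM_2145:admin>svctask addnode -panelname 000682 -iogrp io_grp1 -name SVC1N3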

Adding a new disk controller to an MDisk group


MDisk groups can span disk subsystems. We recommend that you do not do this. Each
MDisk group should, in normal circumstances, comprise disks from one disk subsystem. See
3.6.2, “MDGs, I/O groups, virtual disks, and managed disk” on page 44.

VDisk size increase and decrease


The SVC allows you to increase and decrease the size of VDisks. Not all operating systems
allow this. See “Expanding a virtual disk” on page 51.
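
For example, a minimal CLI sketch with placeholder names (the host operating system and file system must support the size change, and flags may differ between code levels):

   svctask expandvdisksize -size 5 -unit gb VD1
   svctask shrinkvdisksize -size 5 -unit gb VD1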

3.6.17 Migration
This facility allows the mapping of virtual disk extents to managed disk extents to be changed,
without interrupting a host’s access to that virtual disk. You can perform this for any virtual
disk managed by the SVC. You may use this for:
򐂰 Redistributing workload within a cluster across the disk subsystem
򐂰 Moving workload onto newly installed storage
򐂰 Moving workload off old or failing storage, ahead of decommissioning it
򐂰 Moving workload to rebalance a changed workload
򐂰 Migrating data from a legacy disk subsystem to SVC managed storage
򐂰 Migrating data from one disk subsystem to another

For further details about the migration facility, see Chapter 13, “Migration to the SAN Volume
Controller” on page 493.
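
As a minimal sketch, a VDisk can be migrated to another MDG with the CLI (VD1 and MDG2 are placeholders; the -threads value controls the number of parallel migration threads):

   svctask migratevdisk -vdisk VD1 -mdiskgrp MDG2 -threads 4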

3.7 SVC supported capabilities


Table 3-2 lists the maximum values of parameters to use when you plan an installation.

Table 3-2 IBM TotalStorage SAN Volume Controller maximum values

Element                                          Maximum    Notes

Cluster properties
Nodes                                            8          Configured as four I/O groups
I/O groups                                       4          Each containing two nodes
VDisks per cluster                               4096
VDisk size                                       2 TB
VDisk paths                                      8          Number of paths to a VDisk on a host
VDisks per I/O group                             1024
SCSI mappings per cluster                        20 K
MDisk groups per cluster                         128
MDisks                                           4096       Represents an average of 64 per controller
MDisks per MDisk group                           128
MDisk size                                       2 TB       Defined by a 32-bit LBA limit

Host properties
VDisks per cluster                               4096       Includes virtualized LUNs and image mode LUNs; a maximum of 1024 VDisks per I/O group
VDisks per host ID                               512        The maximum number of VDisks mapped to a single host ID
Paths per FC port                                2          Each FC port on an HBA in a host may have a maximum of two paths to a VDisk
FC ports per host                                4
Host ports per cluster                           128        Number of unique configured host ports
Host IDs per cluster                             64         A host ID is associated with one or more host ports; a host ID is also used to associate SCSI LUNs with VDisks
Virtual disk to SCSI LUN/host ID mappings        20000      The total number of map table entries
Host LUN size                                    2 TB       Defined by a 32-bit LBA limit

Disk subsystem properties
Disk subsystems                                  64         Maximum number of device (fabric) WWNNs
Worldwide node names (WWNN) per subsystem        4
Disk subsystem WWPNs                             256        16 ports per controller
LUs per controller                               4096

SAN properties
SAN ports                                        256        Maximum SAN ports per fabric; this is the maximum number of ports that can be zoned to an SVC, the fabric itself may be larger
Fabrics                                          2          Dual fabric configurations
Disk subsystem ports                             64         A maximum of 128 ports
ISL hops in a fabric                             7          Local or remote fabric should not contain more than three ISL hops
ISL hops from host to SVC node                   3
ISL hops from disk subsystem to SVC node         3
ISL hops from local cluster to remote cluster    1

3.7.1 Adding ESS storage to the SVC


Perform the following steps:
1. Go to the Web site and check the prerequisites:
http://www.ibm.com/storage/support/2145
2. You may need to upgrade the microcode level of the ESS to support this attachment.
3. Before the MDisks can be presented to the SVC, define the ESS ports in the storage zone so that they are visible to all SVC node ports.
4. Sign on to the ESS specialist. On the left side of the window that opens, select Storage
Allocation. Figure 3-15 shows the Storage Allocation window.
5. On the Storage Allocation -- Graphical View panel, click Open System Storage.

Figure 3-15 Storage Allocation window

6. Go to the Configure Host Adapter Ports window (Figure 3-16). Configure the host bay
ports in the ESS if you haven’t already. Select the port or ports. For Fibre Channel
Topology, select Point to Point (Switched Fabric). For Fibre Channel Protocol, select
FCP (Open Systems).

Figure 3-16 Configure Host Adapter Ports window

7. Go to the Modify Host Systems window (Figure 3-17). Follow these steps:
a. Enter the host system nickname, which is h3_svclnode1_a, in this case.
b. Select its WWPN from the list of WWPNs or type it in manually.
c. Select the Fibre Channel ports through which you want to present LUNs to the node.

Figure 3-17 ESS Modify Host System window

We defined all four node ports for both nodes to bay 1, card 1 port A, and bay 3, card 1 port A. This distributes access across two bays in the ESS for redundancy. See Figure 3-18.

Figure 3-18 Volume assignments

8. Share the volumes across both ports in the ESS. Now the volumes are presented to the SVC on both ports. You can find the MDisks on the SVC and rename them to a unique name identifying the origin. Consider the example E1_407_14830: ESS number 1, volume serial number in the ESS 407-14830. You can add the MDisk to an MDG called ESS_14830, for example. See Figure 3-19.

Note: We recommend that each LUN presented to the SVC as an MDisk be the size of a full array.

Figure 3-19 Viewing the two paths

3.7.2 Adding DS4000 storage to the SVC


To add DS4000 storage to the SVC, follow these steps:
1. Check the prerequisites on the Web at:
http://www.ibm.com/storage/support/2145
2. Check the supported firmware levels and configurations before you connect to the SVC.
You can see the firmware version for the DS4000 in the Storage Manager by choosing
View -> Subsystem Profile, as shown in Figure 3-20 on page 62 and Figure 3-21 on
page 62.

Figure 3-20 Where to find the Storage Subsystem Profile

Figure 3-21 The Storage Subsystem Profile showing the firmware version

3. We defined one storage partition in the DS4000 with all of the SVC node ports defined: one host partition with eight ports. See Figure 3-22.

Figure 3-22 DS4000 mappings view

Figure 3-23 Array 1

These arrays were created with names that reflect the quantity and size of the physical drives. This may be important to reflect performance and size when you create MDisk groups. In our opinion, it is more important to map the MDisk names back to the physical RAID arrays. Therefore, you can use the form F1_Array_number_LUNname. The arrays should alternate between controller A and controller B for the preferred path. See Figure 3-24.

Figure 3-24 Array 2

Figure 3-25 shows the host type for SVC.

Figure 3-25 Host type for storage partition

Figure 3-26 shows the port mapping. Now the volumes are presented to the SVC on both ports. You can find the MDisks on the SVC and rename them to a unique name identifying the origin. Consider the example F1_Array_LUN: DS4000 number 1, array number, LUN number. You can add the MDisk to an MDisk group called DS4000_14830, for example.

Figure 3-26 Port mapping

3.7.3 LUN layout


When assigning LUNs from your disk subsystem to the SVC, assign each LUN to all SVC ports, and spread the LUNs across the disk subsystem, so that you get the best performance and reliability.

In the DS4000, you should have an equal number of arrays and LUNs, equally spread across the two controllers. After assigning spare disks in the DS4000, define your arrays with RAID protection, and create as few LUNs as you can in each array. If possible, make the LUNs the same size, so you can utilize the full capacity when striping VDisks on the SVC.

In the ESS, you should make a LUN in each disk group (eight pack), so the striping is spread
over all disk groups, device adapters and the two controllers in the ESS.

In an MDG where the VDisks are all striped, we recommend that all the MDisks be the same size if possible. If not, it may not be possible to create a VDisk using the last extents in the MDG. For example, if you have two MDisks available, one with 10 GB free capacity and another with 20 GB free capacity, you cannot make a new VDisk of between 20 and 30 GB, even though you have the free capacity, because the extents cannot be striped equally.

Chapter 4. Performance and capacity planning

While storage virtualization with SVC improves flexibility and provides simpler management of a storage infrastructure, it may also provide a substantial performance advantage for a variety of workloads. The SVC's caching capability and its ability to stripe VDisks across multiple disk arrays are the reasons why the performance improvement is significant when it is implemented with midrange disk subsystems, since this technology is often only provided with high-end enterprise disk subsystems.

To ensure the desired performance and capacity of your storage infrastructure, we recommend that you perform a performance and capacity analysis to reveal the business requirements of your storage environment. When this is done, you can use the guidelines in this chapter to design a solution which meets the business requirements.

4.1 Performance considerations
When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of the given system. At the same time, we must consider for which workload a component is the limiting factor, since the component that limits one workload may not be the one that limits another.

When designing a storage infrastructure using SVC, or implementing SVC in an existing storage infrastructure, we must therefore take into consideration the performance and capacity of the SAN, the disk subsystems, the SVC, and the known or expected workload.

4.1.1 SAN
Today, the standard throughput of a SAN is 2 Gbit/s full-duplex per connection (with the next likely widespread customer implementation being 4 Gbit/s). When implementing SVC, each I/O group (two SVC storage engines) requires eight Fibre Channel connections; this means that each I/O group has a potential throughput of 16 Gbit/s, approximately 1.4 GB/s full-duplex (assuming 2 Gbit/s equals 180 MB/s actual throughput), which exceeds the capability of the SVC engines. Following best practices for SAN design, it is unlikely that the SAN will be the limiting factor, so it is not discussed further.

4.1.2 Disk subsystem


Discussing capacity and performance for various disk subsystems is beyond the scope of this
redbook. In general, you must ensure that the performance of the disk subsystems is
sufficient compared to the workload.

In most cases, the SVC will be able to improve the performance of older disk systems with
slow controllers or uncached disk systems. This happens because of caching in the SVC, and
the ability to stripe VDisks across multiple arrays. Also, the SVC provides the ability to spread
the data load of a specific host across VDisks. These VDisks can be striped across disk
arrays, and can also reside on separate disk subsystems (in different MDGs).

Note: For availability reasons, it is recommended not to span managed disk groups (MDG)
across disk subsystems, since failure of one disk subsystem will make the MDG go offline,
and thereby all VDisks belonging to the MDG will go offline.

4.1.3 SVC
The SVC cluster is scalable up to four I/O groups (four pairs of SVC nodes), and the performance is almost linear when adding more I/O groups to an SVC cluster, until it becomes
limited by other components in the storage infrastructure. While virtualization with the SVC
provides a great deal of flexibility, it does not diminish the necessity to have a SAN and disk
subsystems which can deliver the desired performance.

In the following sections, we solely discuss the performance of the SVC and assume there
are no bottlenecks in the SAN or on the disk subsystem.

Latency
Latency is the delay added to the response time for an I/O operation. All in-band storage
virtualization solutions add some latency to cache miss I/Os. This is not a unique
characteristic of the SVC.

However, the SVC latency is very low. For a 4 KB read operation, the SVC introduces
approximately 60 µs (microseconds, millionths of a second) of additional delay. Considering
that a typical cache miss response time is approximately 10 ms (milliseconds, thousandths of
a second), the delay typically caused by the SVC is negligible (less than 1% of total response
time).

In the real world, the effect of latency is normally even less. All writes to the SVC are cache
hits, so they add no latency. Because of the advanced cache algorithm in the SVC, many
reads are also cache hits. Only cache miss I/Os add latency.

Note: Adding additional I/O groups (pairs of SVC nodes) to a cluster will not add any extra
latency.

4.1.4 Maximum capability of an I/O group


The SVC cluster consists of I/O groups, where each I/O group consists of a pair of SVC
nodes (also called storage engines). Since the SVC cluster is linearly scalable we will first
focus on performance for a single I/O group.

A single I/O group can handle up to 200,000 read hits per second of 512 bytes, where all I/Os are cache hits, or up to 40,000 write hits per second of 4 KB. The effective write rate is lower (approximately 20,000 writes per second of 4 KB) because all the writes are mirrored to both nodes in a pair, and the SVC must destage data to disk after it is written to both caches. Normal client workloads achieve less throughput than these numbers because real workloads include cache miss I/Os and may also transfer more than 4 KB.

For sequential read operations, such as database scans or backup operations, a single I/O
group can achieve up to 1 GB per second, given that the back-end disk configuration is
properly configured to provide this level of throughput. Close to this level of throughput is
possible with client workloads.

SPC-1 workload
For workloads similar to the Storage Performance Council (SPC) benchmark, SPC-1, which
is probably the best guideline for a mixed workload, the SVC can deliver 28,000 4 KB IO/sec
per I/O group.

As the performance scalability of an SVC cluster is nearly linear, a configuration with two, three, or four I/O groups can deliver approximately two, three, and four times the throughput of an SVC cluster with a single I/O group. This means that an eight-node SVC cluster (four I/O groups at 28,000 IO/sec each, about 112,000 IO/sec) can deliver more than 100,000 IO/sec for an SPC-1-like workload.

Note: For workloads with 70% reads and 30% writes, the SVC can deliver 30,000 4 KB IO/sec per I/O group.

For more information about SPC, refer to http://www.storageperformance.org/

4.2 Planning guidelines


The areas to be considered when planning a storage infrastructure using SVC, or
implementing the SVC in an existing storage infrastructure, are listed below.

Determine the number of SVC clusters
If you plan to deploy the SVC in a geographically dispersed environment, for example a dual
site design for disaster recovery reasons, it is essential to use two SVC clusters. Due to the
design of the SVC cluster, it is not recommended to split I/O groups geographically (or the
nodes in an I/O group), since this will not provide the resiliency needed for a disaster
recovery solution.

Note: IBM recommends the use of two cluster configurations for all production disaster
recovery systems. Customers who wish to use split cluster operation should contact their
IBM Regional Advanced Technical Specialist for specialist advice relating to their particular
circumstances.

Determine the number of I/O groups


When considering the number of I/O groups to use in an SVC cluster, note that the 1 GB/s
sequential throughput capability of an I/O group is usually the limiting factor when configuring
attached storage. If you have information about the workloads that you plan to use with the
SVC, you can use that information to size the amount of capacity you can configure per I/O
group. To be conservative, assume a throughput ability of about 800 MB/s per I/O group.

Number of ports in the SAN used by SVC


Since each I/O group has eight Fibre Channel ports (four on each node), you need eight ports in the SAN per I/O group, that is, four ports in each fabric (two from each node).

Number of paths from SVC to disk subsystems


Generally, all SVC ports in an SVC cluster should see the same ports on the disk subsystems. This means that, unless you reserve ports on the disk subsystem for direct host access, all ports on the disk subsystem must be seen by all SVC node ports in the respective fabrics.

When configuring, for example, the IBM TotalStorage DS4300 for use with the SVC, you
should configure two controller ports in the DS4300 to be accessed by all SVC ports in the
cluster.

Number of paths from hosts to the SVC


Generally, each HBA on the host should only see two SVC ports per I/O group, one port on each SVC node. This means that a host with two HBAs has four paths to each I/O group, two to each node through separate fabrics. At the same time, it is recommended to spread the paths for all host systems evenly across the ports on each SVC node, based on the known or expected I/O load per host. Figure 4-1 on page 71 illustrates how the paths are distributed evenly across the SVC node ports for two hosts.


Figure 4-1 Distribute I/O load evenly among SVC node ports

When implementing the SVC in an existing storage infrastructure, it is generally recommended that the zoning be performed so that each host has two paths per HBA to each I/O group, as explained above. However, based on carefully reasoned analysis, you can preserve the same number of paths to the SVC I/O groups as were previously used when accessing the disk subsystems directly, even if more than two paths per HBA were configured. For large SANs, you should be aware that configuring more than two paths per HBA potentially increases the risk of reaching the maximum number of queued commands for the SVC cluster.

4.2.1 I/O queue depth handling in large SANs


The purpose of discussing I/O queue depth is to avoid situations where an SVC node reaches
its maximum number of queued commands.

Note: Unless you have an SVC configuration close to the maximum supported, and all
your hosts are simultaneously busy, it is unlikely that you will encounter problems with I/O
queue depth limitations for your SVC.

As a guideline, you should consult your IBM representative if your calculation shows that the I/O queue limit is less than 20; see “Homogeneous queue depth calculation” on page 73.

The enqueueing of tasks consumes internal resources in the SVC node. Each SVC node can
handle 10,000 concurrent commands, distributed across all hosts and all VDisks.

Mechanisms are provided to ensure correct operation in the event that the I/O queue is full. Each host port is guaranteed to be able to enqueue a single command on an SVC node (this is per node, not per VDisk). I/O governing can be used to restrict the I/Os a host can submit. If the SVC runs out of resources to enqueue an I/O that it has received, the algorithm in Figure 4-2 takes effect to handle the situation when the maximum number of queued commands is reached.

When the SVC is unable to enqueue an I/O, the logic is as follows:

1. If the initiator has already consumed its specially reserved command on this node, the SVC sets Unit Attention (Commands cleared by another initiator) on the LUN for that initiator and discards the command. If a Unit Attention is already set, the command is simply discarded.
2. Otherwise, if the initiator has at least one task queued for that LUN on this port, the SVC returns Task Set Full status, using the specially reserved command.
3. Otherwise, the initiator has no tasks queued for that LUN on this port, and the SVC returns Check Condition with Unit Attention (Commands aborted by another initiator) to the received command, using the specially reserved command.

Figure 4-2 I/O queue depth algorithm

This algorithm allows the SVC to discard commands, and give a valid reason to the host as to
why this has happened. This algorithm is also used when internal recoveries within the SVC
node mean that the SVC is unable to start new host I/Os immediately, and the SVC
consequently runs out of resources to enqueue all the I/Os that are received.

Unfortunately, many host operating systems do not have helpful recovery algorithms if this situation persists for more than 15 seconds, and the result is often that one or more hosts present errors to applications, resulting in application failures. Following these recommendations should help you avoid this.

Note: This issue is not in any way specific to the SVC. All controllers and operating
systems have the same issues if the maximum queue depth is reached.

Calculating a queue depth limit


When calculating the queue depth, the following factors should be considered:
򐂰 Although the maximum number of queued commands is per node and there are two nodes in an I/O group, the system must continue to function when one of the nodes in an I/O group is not available. Thus, we must consider an I/O group to have the same number of queued commands as a node. However, when a node fails, the number of paths to each disk is halved. In practice, this effect can be neglected, and we can count nodes rather than I/O groups in the calculation below.

򐂰 If a VDisk is mapped so that it can be seen by more than one host, then each host that it is
mapped to may send a number of commands to it.
򐂰 Multipathing drivers on most hosts round robin I/Os amongst the available I/O paths. For hosts which do not currently do this, this behavior may change in the future, and we need to avoid breaking customers' configurations when this happens.
򐂰 If a device driver times out a command, it will typically re-issue that command almost
immediately. The SVC will have both the original command and the retry in the command
queue in addition to any Error Recovery Process (ERP) commands that are issued.

In order for the maximum queue depth not to be reached, the following must hold for an I/O group:
򐂰 For all VDisks associated with the I/O group, and for all hosts that are mapped to be able to see each VDisk, and for all paths on each host, the sum of the queue depths must be less than 10,000. Because ERPs can consume some number of queued command slots, this number is reduced to 7000 to allow a safety margin.

Homogeneous queue depth calculation


In systems where:
򐂰 The available queued commands are to be shared out among all paths, rather than giving
some hosts additional resources
򐂰 The VDisks are shared out evenly amongst the I/O groups in the cluster

Then the queue depth for each VDisk should be set on the hosts using the following calculation:

q = roundup((n * 7000) / (v * p * c))

where:

q = the per device path queue depth setting

n = the number of nodes in the cluster

v = the number of VDisks configured in the cluster

p = the number of paths per VDisk per host. A path is a route from a host FC port to an SVC FC port which is recognized by the host as giving access to the VDisk.

c = the number of hosts which can concurrently access each VDisk. Very few applications support concurrent access from multiple hosts to a single VDisk. Examples where multiple hosts have concurrent access to a disk include where SAN File System (SFS) is in use. Thus, typically c will be 1.

In Example 4-1, we calculate the I/O queue depth for a homogeneous SVC configuration with eight SVC nodes and the maximum number of supported VDisks.

Example 4-1 Calculation of I/O queue depth for a homogeneous system

n=8: an 8-node SVC cluster
v=4096: the maximum number of VDisks per SVC cluster is 4096
c=1: one host is able to access each VDisk
p=4: each host has 4 paths to each VDisk (2 HBAs, each with 2 paths to the I/O group)
Calculating the queue depth: q = roundup((8*7000) / (4096*4*1)) = 4

So, the queue depth in the operating systems should be set to four concurrent commands per path.

Non-homogeneous queue depth calculation
In some cases, it will be appropriate to give favored hosts additional resources to allow them
to queue additional commands, or the number of VDisks supported by each I/O group may be
different. In these cases, the queue depth is calculated in the following way:

Consider each I/O group in turn:

For each VDisk, consider each host to which that VDisk has a mapping. This gives a set of (host, VDisk) pairs. As long as the sum of the queue depths over all (host, VDisk) pairs is less than 7000, the system should not experience problems due to queue full situations.

The above calculation assumes that there is a significant probability that all of the hosts will initiate the number of concurrent commands that they are limited to; that is to say, each host is busy. If there are a large number of fairly idle hosts in the configuration, which are not going to initiate very many concurrent commands, then it may be possible to reason that the queue depth does not need to be limited, even if the calculation above says that it should be. If this is the case, the queue depth limit can be increased or left unset.

How to limit the queue depth


Once the appropriate queue depth limit has been determined as described above, it must be
applied.

Each operating system has an OS/HBA specific way to limit the queue depth on a per device
path basis. An alternative to setting a per path limit is to set a limit on the HBA. In Example 4-2, we calculate the I/O queue depth to be set on each HBA for a host accessing 40 VDisks, in an environment where the limit of concurrent I/O commands per path has been determined to be 5.

Example 4-2 Calculating the I/O queue depth to be set on each HBA

If the limit is 5 concurrent I/O commands per path, and the host has access to 40 VDisks through two adapters (4 paths per VDisk, 2 paths per HBA per VDisk), the calculation is:

4 paths per VDisk, with a limit of 5 concurrent I/O commands per path, equals 20 concurrent I/O commands per VDisk.
40 VDisks with 20 concurrent I/O commands per VDisk equals 800 concurrent I/O commands for the host.

Therefore, it may be appropriate to place a queue depth limit of (40*(4*5))/2 = 400 on each adapter.

This allows sharing of the queue depth allocation between VDisks.

Note: For a system that is already configured, the result (v*p*c) is actually the same
number as can be determined by issuing datapath query device on all hosts, and
summing-up the number of paths.

4.2.2 SVC managed and virtual disk layout planning


This section details managed and virtual disk planning.

Extent size for managed disks


When configuring managed disks with the SVC, create managed disk groups (MDGs) to use
the largest practical extent size. Doing so maximizes the learning ability of the SVC adaptive

cache. Remember that managed disk space is allocated in units of whole extents; VDisks whose size is not a multiple of the extent size can waste space.

For example, an 18 GB VDisk uses 36 extents of 512 MB, but an 18.1 GB VDisk uses 37
such extents. Of course, a 17.9 GB VDisk also uses 36 extents of 512 MB and wastes little
space.

If most of your VDisks are a multiple of 512 MB in size, use 512 MB extents. If you expect to
have many LUNs that are not a multiple of 512 MB, use the largest extent size that results in
acceptable space utilization. Recent performance testing has shown that using an extent size
of 128 MB has no appreciable effect on performance.

Note: The extent size does not have a great impact on the performance of an SVC installation. The most important thing is to be consistent across MDGs, meaning that you should use the same extent size in all MDGs within an SVC cluster to avoid limitations when migrating VDisks from one MDG to another. At the same time, using the largest practical extent size increases the maximum capacity of your SVC cluster.

When configuring DS4300 storage servers, consider using a segment size of 256 KB. This
size helps to optimize sequential performance without hurting random I/O performance.

When creating VDisks, the default choice of striping the extents across MDisks in the MDG is
normally the best choice, since this will balance I/Os across all the managed disks in the
MDG, which tends to optimize overall performance and helps to reduce hot spots.

The SVC allocates a preferred path for each VDisk. This is the node within an I/O group that
is normally used to handle I/Os for that particular VDisk. The default, which is to alternate
VDisks between the nodes of an I/O group in the order the VDisks are created, normally
produces good results, and this will generally balance the load well across both nodes of an
I/O group.

In cases where it is known that the VDisks vary greatly in size, or where the I/O load to
different VDisks varies greatly, it is possible that a significant imbalance can arise, which may
impact the performance. In these cases, you can use the -node parameter when creating
VDisks (or the equivalent graphical user interface parameter) to specify which node of an I/O
group should be the preferred path, in order to balance the workload evenly for the I/O group.
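
For example, a hedged CLI sketch of specifying the preferred node at creation time (all names are placeholders, and exact parameters may vary by code level):

   svctask mkvdisk -mdiskgrp MDG1 -iogrp io_grp0 -node node1 -size 10 -unit gb -name VD3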

Defining LUNs on the managed disk subsystems


When installing new storage to be managed by the SVC, you need to define only a simple LUN
arrangement. Generally, it is recommended to create one LUN per disk array in the disk
subsystems. When placing an existing disk subsystem under the control of the SVC, there is
no specific need to change the LUN definitions. Simply include the LUNs in MDGs and start
to create VDisks.

VDisks are created using managed disks within an MDG. Accordingly, all the managed disks
in a single MDG should have the same (or similar) performance characteristics. If you mix
managed disks with different performance characteristics, VDisks may exhibit uneven
performance where I/Os to different portions of the VDisk perform differently.

Chapter 5. Initial installation and configuration of the SVC

This chapter describes the initial installation and configuration procedures for the IBM
TotalStorage SAN Volume Controller (SVC) using the service panel and the cluster Web
interface.

Note: The service panel consists of the display window and buttons on the front of each
SVC node.

5.1 Preparing for installation
See Chapter 3, “Planning and configuration” on page 25, for information pertaining to physical connectivity, storage area network (SAN) zoning, and assigning disks to the SVC.

5.2 Secure Shell overview


Secure Shell (SSH) is used to secure the data flow between the SVC cluster configuration node (SSH server) and a client, either the command line interface (CLI) client or the CIMOM. The connection is secured by means of a private key and public key pair:
򐂰 A public key and a private key are generated together as a pair.
򐂰 A public key is uploaded to the SSH server.
򐂰 A private key identifies the client and is checked against the public key during the
connection. The private key must be protected.
򐂰 The SSH server must also identify itself with a specific host key.
򐂰 If the client does not have that host key yet, it is added to a list of known hosts.

Secure Shell is the communication vehicle between the management system (usually the
master console) and the SVC cluster.

SSH is a client-server network application. The SVC cluster acts as the SSH server in this
relationship. The SSH client provides a secure environment from which to connect to a
remote machine. It uses the principles of public and private keys for authentication.

When an SSH client (A) attempts to connect to a server (B), a key is needed to authenticate
the connection. The key consists of two halves: the public and private keys. The public key is
put onto (B). When (A) tries to connect, the private key on (A) can authenticate with its public
half on (B).

SSH keys are generated by the SSH client software. This includes a public key, which is
uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administration and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added and unwanted IDs and keys can be deleted.

To use the CLI or SVC graphical user interface (GUI), an SSH client must be installed on that
system, the SSH key pair must be generated on the client system and the client’s SSH public
key must be stored on the SVC cluster or clusters.
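
On client systems with an OpenSSH client instead of PuTTY (for example, AIX or Linux), the equivalent steps would be, as a hedged sketch (svc_key and svcclusterip are placeholders):

   ssh-keygen -t rsa -b 1024 -f svc_key    (creates svc_key and svc_key.pub; upload svc_key.pub to the cluster)
   ssh -i svc_key admin@svcclusterip       (opens a CLI session authenticated with the private key)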

The SVC master console has the freeware implementation of SSH-2 for Windows called
PuTTY pre-installed. This software provides the SSH client function for users logged into the
master console who want to invoke the CLI or GUI to manage the SVC cluster.

5.2.1 Generating public and private SSH key pair using PuTTY
Perform the following steps to generate SSH keys on the SSH client system (master
console).

Note: These keys will be used in the step documented in section 5.4.1, “Uploading the
SSH public key to the SVC cluster” on page 90.

1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start -> Programs -> PuTTY -> PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 5-1), generate the keys:
a. Select the SSH2 RSA radio button.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.

Figure 5-1 PuTTY key generator GUI

3. The message in the Key section of the window changes. Figure 5-2 shows this message.

Figure 5-2 PuTTY random key generation

Note: The blank area indicated by the message is the large blank rectangle on the GUI
inside the section of the GUI labelled Key. Continue to move the mouse pointer over the
blank area until the progress bar reaches the far right. This generates random
characters to create a unique key pair.

4. After the keys are generated, save them for later use as follows:
a. Click Save public key as shown in Figure 5-3.

Figure 5-3 Saving the public key

b. The user is prompted for a name (for example, pubkey) and a location for the public key
(for example, C:\Support Utils\PuTTY). Click Save.
If another name or location is chosen, ensure a record of them is kept because the
name and location of this SSH public key must be specified in the step documented in
section 5.4.1, “Uploading the SSH public key to the SVC cluster” on page 90.

Note: The PuTTY Key Generator saves the public key with no extension by default.
It is recommended that the string “pub” be used in naming the public key, for
example, “pubkey”, to easily differentiate the SSH public key from the SSH private
key.

c. In the PuTTY Key Generator window, click Save private key.


d. The user is prompted with a warning message as shown in Figure 5-4. Click Yes to
save the private key without a passphrase.

Figure 5-4 Saving the private key without passphrase

e. When prompted, enter a name (for example, icat) and location for the private key (for
example, C:\Support Utils\PuTTY). Click Save.
If another name or location is chosen, ensure a record of it is kept because the name
and location of the SSH private key must be specified when the PuTTY session is
configured in the step documented in 5.4.2, “Configuring the PuTTY session for the
CLI” on page 92.

Note: The PuTTY Key Generator saves the private key with the PPK extension.

5. Close the PuTTY Key Generator GUI.


6. Using Windows Explorer on the master console, navigate to the directory where the
private key was saved (for example, C:\Support Utils\PuTTY).
7. Copy the private key file (for example, icat.ppk) to the C:\Program
Files\IBM\svcconsole\cimom directory.

Important: If the private key was named something other than icat.ppk, make sure it is
renamed to icat.ppk in the C:\Program Files\IBM\svcconsole\cimom folder. The GUI
(which will be used later) expects the file to be called icat.ppk and for it to be in this
location.

8. Stop and restart the IBM CIM Object Manager so that the change will take effect:
a. From the Windows desktop, select Start -> Settings -> Control Panel ->
Administrative Tools -> Services.
b. Right-click IBM CIM Object Manager. Select Stop.
c. When Windows has stopped the service, right-click IBM CIM Object Manager again and select Start.

5.3 Basic installation


This section provides step-by-step instructions for building the SVC cluster initially.

5.3.1 Creating the cluster (first time) using the service panel
This section provides the step-by-step instructions needed to create the cluster for the first
time using the service panel. Use Figure 5-5 as a reference for the buttons to be pushed in
the steps that follow.

The panel callouts identify the power button, the Up/Down and Left/Right navigation buttons, the Select button, and the display panel.

Figure 5-5 SVC service front panel

Prerequisites
Ensure that the SVC nodes are physically installed. Prior to configuring the cluster, ensure
that the following information is available:
򐂰 License: the license indicates whether the customer is permitted to use FlashCopy,
MetroMirror, or both. It also indicates how much capacity the customer is licensed to
virtualize.
򐂰 Cluster IP addresses (one for the cluster and another for service access).
򐂰 Subnet IP mask.
򐂰 Gateway IP address.

Process
After the hardware is physically installed into racks, complete the following steps to initially
configure the cluster through the service panel:
1. Choose any node that is to become a member of the cluster being created.
2. At the service panel of that node, click and release the Up or Down navigation button
continuously until Node: is displayed.

Important: If a time-out occurs when entering the input for the fields during these
steps, the user must begin again from step 2. All the changes are lost, so be sure to
have all the information on hand before beginning.

3. Click and release the Left or Right navigation button continuously until Create Cluster?
is displayed.
4. Click the Select button. If IP Address: is displayed on line 1 of the service display, go to
step 5. If Delete Cluster? is displayed in line 1 of the service display, this node is already
a member of a cluster. Either the wrong node was selected, or this node was already used
in a previous cluster. The ID of this existing cluster is displayed in line 2 of the service
display.
a. If the wrong node was selected, this procedure can be exited by clicking the Left,
Right, Up, or Down button (it cancels automatically after 60 seconds).
b. If it is certain that the existing cluster is not required, follow these steps:
i. Click and hold the Up button.
ii. Click and release the Select button. Then release the Up button. This deletes the
cluster information from the node. Go back to step 1 and start again.

Important: When a cluster is deleted, all client data contained in that cluster is lost.

5. Click the Select button.


6. Use the Up or Down navigation button to change the value of the first field of the IP
Address to the value that has been chosen.

Note: Pressing and holding the Up or Down buttons will increment or decrease the IP
address field by 10s. The field value rotates from 0 to 255 with the Down button, and
from 255 to 0 with the Up button.

7. Use the Right navigation button to move to the next field. Use the Up or Down navigation
buttons to change the value of this field.

8. Repeat step 7 for each of the remaining fields of the IP address.
9. When the last field of the IP address has been changed, click the Select button.
10.Click the Right button. Subnet Mask: is displayed.
11.Click the Select button.
12.Change the fields for Subnet Mask in the same way that the IP address fields were
changed.
13.When the last field of Subnet Mask has been changed, click the Select button.
14.Click the Right navigation button. Gateway: is displayed.
15.Click the Select button.
16.Change the fields for Gateway in the same way that the IP address fields were changed.
17.When changes to all Gateway fields have been made, click the Select button.
18.Click the Right navigation button. Create Now? is displayed.
19.When the settings have all been verified as accurate, click the Select navigation button.
To review the settings before creating the cluster, use the Right and Left buttons. Make
any necessary changes, return to Create Now?, and click the Select button.
If the cluster is created successfully, Password: is displayed in line 1 of the service display
panel. Line 2 contains a randomly generated password, which is used to complete the
cluster configuration in the next section.

Important: Make a note of this password now. It is case sensitive. The password is
displayed only for approximately 60 seconds. If the password is not recorded, the
cluster configuration procedure must be started again from the beginning.

20.After the Password: display times out, if the cluster is created successfully, Cluster: is displayed in line 1 of the service display panel. Also, the cluster IP address is displayed on line 2 when the initial creation of the cluster is completed.
If the cluster is not created, Create Failed: is displayed in line 1 of the service display.
Line 2 contains an error code. Refer to the error codes that are documented in IBM
TotalStorage Virtualization Family SAN Volume Controller: Service Guide, SC26-7542, to
find the reason why the cluster creation failed and what corrective action to take.

Important: At this time, do not repeat this procedure to add other nodes to the cluster.
Adding nodes to the cluster is accomplished in 6.1, “Adding nodes to the cluster” on
page 108, and in 7.1, “Adding nodes to the cluster” on page 120.

5.4 Completing the initial cluster setup using the Web interface
Perform the following steps to connect to the cluster:
1. Using a Web browser from within the SVC cluster’s subnet (the Cluster IP address
svcclusterip can be reached successfully with a ping command), access the SVC cluster.
Type the following line in the Web browser’s address field:
https://svcclusterip/create
Here svcclusterip is the SVC cluster IP address configured in the service panel earlier
(for example, 9.11.226.195).

Important: Because the SVC cluster interface is secure, it is required to access it as
Secure HTTP (HTTPS).

2. When the security alert regarding certificates (Figure 5-6) pops up, select Yes.

Figure 5-6 Certificate Security Alert

3. The Enter Network Password window (Figure 5-7) opens. For the User Name, type admin.
For the Password, type the password that was shown on the service panel when the
cluster was created in the previous section. Click OK.

Note: The password is case sensitive.

Figure 5-7 User ID and password validation

4. The browser accesses the SVC and displays the Create New Cluster wizard window as
shown in Figure 5-8. Click Continue.

Figure 5-8 Create New Cluster wizard

5. The Create New Cluster page (Figure 5-9 on page 87) opens. Fill in the following details:
– A new admin password to replace the random one that the cluster generated: the
password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore.
It cannot start with a number and has a minimum of one character, and a maximum of
15 characters.
– A service password to access the cluster for service operation: the password is case
sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start
with a number and has a minimum of one character and a maximum of 15 characters.
– A cluster name: the cluster name is case sensitive and can consist of A to Z, a to z, 0 to
9, and the underscore. It cannot start with a number and has a minimum of one
character and a maximum of 15 characters.
– A service IP address to access the cluster for service operations.

Note: The service IP address is different from the cluster IP address. However,
because the service IP address is configured for the cluster, it must be on the same
IP subnet.

– The fabric speed of the Fibre Channel network (either 1 Gb/s or 2 Gb/s).
– The Administrator Password Policy check box, if selected, enables a user to reset the
password from the service panel (helpful, for example, if the password is forgotten).
This check box is optional.

Note: The SVC should be in a secure room if this function is enabled since anyone
who knows the correct key sequence can reset the admin password.
The key sequence is:
1. From the Cluster: menu item displayed on the service panel, click the Left or
Right button until Recover Cluster? is displayed.
2. Click the Select button. Service Access? should be displayed.
3. Click and hold the Up button and then click and release the Select button. This
generates a new random password. Write it down.

Be careful because clicking and holding the Down button and clicking and releasing
the Select button places the node in service mode.

Click the Create New Cluster button.

Figure 5-9 Create New Cluster Web interface

6. A number of progress windows appear as shown in Figure 5-10. Click Continue each time when prompted.

Figure 5-10 Create New Cluster Progress page

7. When the changes are accepted, the cluster displays the Enter Network Password window again. Type the User Name admin and the new admin password.

Note: By this time, the service panel display on the front of the configured node should
display the cluster name entered previously (for example, REDSTONE1).

8. Log back in. The first of several configuration pages is displayed beginning with the Error
Notification Settings page shown in Figure 5-11 on page 89. These settings will be
covered in 9.6.3, “Setting up error notification” on page 251. For now, click Update
Settings and then on the next page, click Continue when prompted.

Figure 5-11 Error Notification Settings configuration page

The Featurization Settings window (Figure 5-12 on page 90) is displayed. To continue, at a
minimum the Virtualization Limit (Gigabytes) field must be filled out. If you are licensed for
FlashCopy and MetroMirror (the panel reflects Remote Copy in this example), the
Enabled radio buttons may also be selected here. These features are discussed in more
detail in 10.6.5, “Setting features” on page 354 or 9.6.5, “Setting features” on page 252.
Click the Set Features button.

Figure 5-12 Featurization Settings Configuration page

Note: The SVC uses the standard of 1 GB = 1024 MB. Therefore, typing 10 GB in the
Featurization Settings page provides us with 10240 MB rather than 10000 MB as with
other disk subsystems. This screen uses the previous term “Remote Copy” to refer to
MetroMirror.

5.4.1 Uploading the SSH public key to the SVC cluster


After the Set Features button on the featurization page above is clicked, the Featurization
Settings confirmation page (Figure 5-13 on page 91) is displayed. Follow these steps:
1. On this page, either click the Continue button (to immediately upload an admin SSH
public key) or the Bypass button (to upload the SSH public key at a later time). It is
recommended that the Continue button be clicked because an SSH public key must be
uploaded from the pair previously created and saved in 5.2.1, “Generating public and
private SSH key pair using PuTTY” on page 78 before the cluster can be administered
from either the CLI or GUI.

Figure 5-13 Featurization Settings confirmation page

2. The Add SSH Public Key page (Figure 5-14) opens. Browse or type the fully qualified
directory path and file name of the public key created and saved in 5.2.1, “Generating
public and private SSH key pair using PuTTY” on page 78. Then type the name of the user
ID to be associated with this admin key pair (for example, admin) and click Add Key.

Figure 5-14 Add SSH Public Key

3. On the next page (Figure 5-15 on page 92), a message is displayed indicating that a new
SSH administrator key associated with the ID admin was added. Click Continue.

Figure 5-15 Adding SSH admin key successful

The basic setup requirements for the SVC cluster using the service panel and cluster Web
interface have now been completed. The next step is to complete the installation and
configuration of the SVC cluster using either the CLI or CIM Agent and Console for SVC GUI.

5.4.2 Configuring the PuTTY session for the CLI

Note: Remember that the CLI is case sensitive.

Before the CLI can be used, the PuTTY session must be configured using the SSH keys
generated earlier in section 5.2.1, “Generating public and private SSH key pair using PuTTY”
on page 78.

Perform these steps to configure the PuTTY session on the SSH client system:
1. From the Windows desktop, select Start -> Programs -> PuTTY -> PuTTY to open the
PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 5-16 on page 93), from the Category panel on
the left, click Session.

Note: The items selected in the Category panel affect the content that appears in the
right panel.

Figure 5-16 PuTTY Configuration window

3. In the right panel, under the Specify your connection by host name or IP address section,
select the SSH radio button. Under the Close window on exit section, select the Only on
clean exit radio button. This ensures that if there are any connection errors, they will be
displayed on the user’s screen.
4. From the Category panel on the left side of the PuTTY Configuration window, click
Connection -> SSH to display the PuTTY SSH Configuration window as shown in
Figure 5-17.

Figure 5-17 PuTTY SSH Connection Configuration window

5. In the right panel, in the section Preferred SSH protocol version, select radio button 2.
6. From the Category panel on the left side of the PuTTY Configuration window, click
Connection -> SSH -> Auth.

7. In the right panel, in the Private key file for authentication: field under the Authentication
Parameters section, type the fully qualified directory path and file name of the SSH client
private key file created earlier. See Figure 5-18.

Figure 5-18 PuTTY Configuration: Private key location

Tip: Either click Browse to select the file name from the system directory, or
alternatively, type the fully qualified file name (for example, C:\Support
Utils\PuTTY\icat.PPK).

8. From the Category panel on the left side of the PuTTY Configuration window, click
Session.
9. In the right panel, follow these steps as shown in Figure 5-19 on page 95:
a. Under the Load, save or delete a stored session section, select Default Settings and
click Save.
b. For Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this
session.
d. Click Save.

Figure 5-19 PuTTY Configuration: Saving a session

The PuTTY Configuration window can now either be closed or left open to continue.

5.4.3 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the
session as detailed here:
1. From the master console desktop, open the PuTTY application by selecting Start ->
Programs -> PuTTY.
2. On the PuTTY Configuration window (Figure 5-20 on page 96), select the session saved
earlier (for example, SVC) and click Load.
3. Click Open.

Figure 5-20 Open PuTTY command line session

4. If this is the first time the PuTTY application is being used since generating and uploading the SSH key pair, a PuTTY Security Alert window pops up stating that the server's host key is not cached in the registry, as shown in Figure 5-21. Click Yes, which invokes the CLI.

Figure 5-21 PuTTY Security Alert

5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As
shown in Figure 5-22, the private key used in this PuTTY session is now authenticated
against the public key uploaded to the SVC cluster in 5.4.1, “Uploading the SSH public key
to the SVC cluster” on page 90.

Figure 5-22 User authentication using SSH key pairs
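
Once logged in, the session can be verified with a simple query, for example (a sketch; the output varies with your configuration):

   svcinfo lscluster
   svcinfo lsnode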

The tasks required to configure the CLI for SVC administration from the Master Console have
now been completed. The PuTTY session can now be closed. Continue to the next section to
configure the GUI on the master console.

5.4.4 Configuring the GUI

Note: Remember that data entries made through the GUI are case sensitive.

If this is the first time the central administration GUI is being used, it must be configured as
explained here:
1. Open the GUI using one of the following methods:
– Double-click the icon marked SAN Volume Controller Console on the master
console’s desktop.
– Open a Web browser on the master console and point to the address:
http://localhost:9080/ica
– Open a Web browser on a separate workstation and point to the address:
http://masterconsoleipaddress:9080/ica
In this example, the master console was accessed by the second method.
2. On the Signon page (Figure 5-23 on page 98), type the user ID superuser and the default
password of passw0rd. Click OK.

Note: Passwords for the central administration GUI are separate from the passwords
set for individual SVC clusters.

Figure 5-23 GUI signon

3. At some point in the initial configuration, it is recommended that the default password be
changed. On the GUI Welcome panel (Figure 5-25 on page 99), select the Change
Password option in the My Work panel. The Change Password panel is displayed as
shown in Figure 5-24. Enter the new password in the field labelled New Password and
re-enter the same password in the field below labelled Re-Enter New Password. Click OK.

Figure 5-24 Change default password

Note: Like all passwords, this is case sensitive.

4. On the GUI Welcome panel (Figure 5-25), click the Add SAN Volume Controller Cluster
button in the center of the panel. If the GUI default password was changed as in step 3,
this button may not be displayed. If so, click Clusters in the My Work Panel, then select
Add Cluster from the drop-down menu.

Figure 5-25 Adding the SVC cluster for management

5. On the Adding Clusters panel (Figure 5-26 on page 100), type the IP address of the SVC
cluster and click OK.

Important: Do not select the Create (initialize) Cluster box. Doing so invokes the initial
cluster installation process, which is explained in Chapter 5, “Initial installation and
configuration of the SVC” on page 77. If it is selected by mistake, the cluster is
re-initialized and any configuration settings determined previously are lost.

Figure 5-26 Adding Clusters panel

6. A Security Alert will pop up as shown in Figure 5-27. Click Yes to continue.

Figure 5-27 Security Alert

7. A pop-up window appears and prompts for the user ID and password of the SVC cluster
as shown in Figure 5-28. Enter the user ID admin and the cluster admin password that was
set earlier. Click OK.

Figure 5-28 SVC cluster user ID and password signon window

8. The Maintaining SSH Keys panel is displayed as shown in Figure 5-30 on page 102. If it
does not display automatically, click Clusters in the My Work panel menu. The Viewing
Clusters panel (Figure 5-29) is displayed. Click the Select box, select Maintain SSH Keys
from the drop-down menu, and click Go.

Figure 5-29 Viewing Clusters panel

9. The Maintaining SSH Keys panel is displayed as shown in Figure 5-30 on page 102.
a. Browse to find the public key.
b. In the Access Level box, select the Service radio button.

c. Enter an ID in the ID field. In this example, the same ID (admin) is used again. This is
the ID that was used for the administrator access level entered previously during the
initial configuration.
d. Click the Add Key button in the lower left of the panel.

Note: Using the same ID for both access levels helps identify both as coming from the
same SSH client for potential later maintenance of SSH key pairs. Any descriptive
string will suffice; the ID does not have to be admin, nor does it have to match the ID
used earlier for administrative access level.

Figure 5-30 Maintaining SSH Keys panel

10.The Added message is displayed in the Maintaining SSH Keys panel as shown in
Figure 5-31 on page 103.

Figure 5-31 Added message in SSH Public Key Maintenance

11.Click the X in the upper right corner of the Maintaining SSH Keys panel as shown in
Figure 5-31.
12.The Viewing Clusters panel (Figure 5-32 on page 104) is displayed. If it does not display
automatically, click Clusters in the My Work panel menu. Click the Select box, leave the
default option Launch the SAN Volume Controller Application selected from the
drop-down menu, and click Go.

Figure 5-32 Using the Viewing Clusters panel to Launch the SAN Volume Controller Application

13.A new window opens and displays the Welcome panel for the SAN Volume Controller
Application for the selected cluster, as shown in Figure 5-33.

Figure 5-33 Welcome window for the selected cluster’s SAN Volume Controller Application

14.The tasks required to configure the GUI for SVC administration have now been
completed. Either close the browser session completely or leave it open on the Welcome
panel and continue to Chapter 6, “Quickstart configuration using the CLI” on page 107 or
Chapter 7, “Quickstart configuration using the GUI” on page 119 to add the second node
to the cluster. If SSH access from other workstations is desired, proceed to the next
section(s).

Configuring SSH for non-master console Windows clients


Before starting, verify that the SVC cluster IP address can be reached with the ping
command from the Windows workstation that requires cluster access. The putty.exe and
puttygen.exe programs can be downloaded from the following site:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

PuTTY can also be found on the SAN Volume Controller CD-ROM that was shipped with the
SVC nodes. Generate and store the key pair as in the examples above.

To upload the public key onto the SAN Volume Controller:


1. Browse to the SAN Volume Controller Console at
http://<MasterConsole_IP_Address>:9080/ica
2. Log in under the superuser account.
3. Click Clusters in the My Work panel on the left.
4. Click the Select box to the left of the cluster to which access is desired.
5. From the drop-down menu, select Maintain SSH Keys and click Go.
6. Type a descriptive ID for the workstation in the ID field.
7. Select Administrator or Service for the level of access.
8. Click Browse and locate the SSH public key on the workstation.
9. Click the Add key button.
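
With the public key uploaded, the CLI can be invoked from the workstation either through a
configured PuTTY session or directly with PuTTY's command line tool, plink.exe. The
following is a minimal sketch; the key file path and the cluster IP placeholder are examples
only:

plink -i C:\keys\privatekey.ppk admin@<clusterip> svcinfo lsnode

This opens an SSH connection authenticated with the private key, runs the single svcinfo
lsnode command, and prints its output at the Windows command prompt.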

Configuring SSH for AIX clients


1. Verify that the SVC cluster IP address can be reached with the ping command from the
AIX workstation that requires cluster access.
2. OpenSSL must be installed for OpenSSH to work.
3. Install OpenSSH on the AIX client:
a. Installation images can be found at:
http://www-124.ibm.com/developerworks/projects/openssh
b. Follow the instructions carefully as OpenSSL must be installed before using SSH.
4. Generate an SSH key pair:
a. Issue a cd to go to the /.ssh directory.
b. Run the command ssh-keygen -t rsa.
c. The following message is displayed: Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa).
d. Pressing Enter will use the default in parentheses above; otherwise, enter a file name
(for example, aixkey) and press Enter.
e. The following prompt is displayed: Enter a passphrase (empty for no passphrase).
Press Enter.
f. The following prompt is displayed: Enter same passphrase again:. Press Enter again.

g. A message is displayed indicating the key pair has been created. The private key file
will have the name entered above (for example, aixkey). The public key file will have
the name entered above with an extension of .pub (for example, aixkey.pub).
5. Upload the public key onto the SVC by browsing to the Master Console at
http://<MasterConsole_IP_Address>:9080/ica
6. Log in under the superuser account.
7. Click Clusters in the My Work panel on the left.
8. Click the Select box to the left of the cluster to which access is desired.
9. From the drop-down menu, select Maintain SSH Keys and click Go.
10.Type a descriptive ID for the workstation in the ID field.
11.Select Administrator or Service for the level of access.
12.Click Browse and locate the SSH public key on the AIX workstation.
13.Click the Add key button.
14.To SSH from the AIX client to the SVC, on the AIX client type:
ssh admin@<clusterip>
The private key to be used can be specified by typing:
ssh -i <privatekeyname> admin@<clusterip>
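
As a condensed sketch of the key generation and login steps above (the key file name
aixkey is an example only; your prompts and output may differ):

# cd /.ssh
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa): aixkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
# ssh -i aixkey admin@<clusterip>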

5.5 Summary
This chapter provides an overview of the Secure Shell (SSH) client-server network
application used to secure the data flow between a client and the SVC cluster configuration
node (server). It provides instructions for generating an SSH private and public key pair using
the PuTTY application provided with the SVC master console, uploading the public key to the
SVC cluster, and copying and renaming the private key on the master console for use by the
SVC GUI and the PuTTY command line interface.

Instructions for creating a single-node cluster from the service panel are explained in detail,
followed by instructions for completing the cluster definition using the native Web interface.

The next step outlined is the configuration of the PuTTY session on the master console, and
instructions are provided for invoking the command line interface using PuTTY. The last
required step for completing the initial installation and configuration of the SVC cluster,
configuring the graphical user interface (GUI) on the master console, is also described.

For users desiring optional access to the SVC cluster from other workstations capable of
accessing the SVC cluster’s subnet, instructions are provided for configuring the SSH client
on Windows and AIX clients at the end of the chapter.

Upon completion of the instructions in this chapter, the SVC cluster can be managed either by
using the master console GUI to launch the SVC GUI application or by invoking the SVC
command line interface (CLI) using either PuTTY or the SSH client. At this point, either
method can be used to add additional nodes and manage the cluster.

Chapter 6. Quickstart configuration using the CLI

This chapter describes the basic configuration procedures required to get the IBM
TotalStorage SAN Volume Controller (SVC) environment up and running as quickly as
possible using the command line interface (CLI). See Chapter 9, “SVC configuration and
administration using the CLI” on page 207, for more information about these and other
configuration and administration procedures.

Important: The CLI is case sensitive.

6.1 Adding nodes to the cluster
After cluster creation is completed through the service panel and cluster Web interface, only
one node (the configuration node) is set up. To be a fully functional SVC cluster, you must add
a second node to the configuration.

To add a node to a cluster, gather the necessary information as explained in the following
steps:
1. We need some information from the existing node. Gather this information by using the
svcinfo lsnode node1 command as shown in Example 6-1.

Note: The names of the nodes already in the cluster are shown on the service panel
display of each node. By default, the first node is called node1. We show how to
change this in a later topic.

Example 6-1 svcinfo lsnode command


IBM_2145:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number YM100032B422
WWNN 5005076801000364
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id
partner_node_name
config_node yes
UPS_unique_id 20400000C2484082
port_id 5005076801400364
port_status active
port_id 5005076801300364
port_status active
port_id 5005076801100364
port_status active
port_id 5005076801200364
port_status active

The most important information to look for here is the IO_group_name because we use this
when adding our second node to the SVC cluster configuration.

Note: You can see in Example 6-1 that no partner_node_id or partner_node_name
exists for node1 as yet.

2. See what nodes are available for inclusion in the SVC cluster configuration. Enter the
svcinfo lsnodecandidate command:
IBM_2145:admin>svcinfo lsnodecandidate
id node_cover_name UPS_serial_number UPS_unique_id
500507680100035A 000683 YM100032B425 20400000C2484085

Note: You must also take care that the two nodes within an I/O group are attached to
different uninterruptible power supplies (UPS).
You can determine the UPS to which a node is attached from the output of svcinfo
lsnodecandidate (the UPS_serial_number column).

If this command returns no information and your second node is powered on and zones
are correctly defined, pre-existing cluster configuration data may be stored in it. If you are
sure this node is not part of another active SVC cluster, you can use the service panel to
delete the existing cluster information. After this is complete, re-issue the svcinfo
lsnodecandidate command and you should see it listed.
For information about how to delete an existing cluster configuration using the service
panel, see 5.3.1, “Creating the cluster (first time) using the service panel” on page 82. For
information about storage area network (SAN) zoning, see Chapter 3, “Planning and
configuration” on page 25.

3. Using a combination of the information obtained in the previous steps, add the second
node to the SVC cluster configuration. Enter the svctask addnode command. The full
syntax of the command is:
svctask addnode {-panelname panel_name | -wwnodename wwnn_arg} [-name new_name]
-iogrp iogrp_name_or_id
Note the following explanation:
– panelname: Name of the node as it appears on the panel
– wwnodename: Worldwide node name (WWNN) of the node
– name: Name to be allocated to the node
– iogrp: I/O group to which the node is added

Note: -wwnodename and -panelname are mutually exclusive; only one is required to
uniquely identify the node.

Here is an example of this command:


IBM_2145:admin>svctask addnode -wwnodename 500507680100035a -iogrp io_grp0
Node, id [2], successfully added
In this example:
– 500507680100035a is the ID found using the svcinfo lsnodecandidate command.

Note: The wwnodename is one of the few things in the CLI that is not case sensitive.

– io_grp0 is the name of the I/O group to which node1 belonged as found using the
svcinfo lsnode node1 command.

Note: Because we did not provide the -name parameter, the SVC automatically
generates the name nodeX (where X is the ID sequence number assigned by the SVC
internally). In our case, this is node2.

If you want to provide a name, you can use A to Z, a to z, 0 to 9, and the underscore.
The name can be between one and 15 characters in length. However, it cannot start
with a number or the word node because this prefix is reserved for SVC assignment
only.
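
For example, to add the node and assign a custom name in a single step, the
command might look like the following sketch (the name SVC1N2 is illustrative only):

IBM_2145:admin>svctask addnode -wwnodename 500507680100035a -iogrp io_grp0 -name SVC1N2
Node, id [2], successfully added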

4. If we display the node information for node1 again, as shown in Example 6-2, node1 now
has a partner_node_id of 2 and a partner_node_name of node2.

Example 6-2 svcinfo lsnode command


IBM_2145:admin>svcinfo lsnode node1
id 1
name node1

UPS_serial_number YM100032B422
WWNN 5005076801000364
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400000C2484082
port_id 5005076801100364
port_status active
port_id 5005076801200364
port_status active
port_id 5005076801300364
port_status active
port_id 5005076801400364
port_status active

You have now completed the cluster configuration and you have a fully redundant SVC
environment.

6.2 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. Find out for which time zone your cluster is currently configured. Enter the svcinfo
showtimezone command as shown here:
IBM_2145:admin>svcinfo showtimezone
id timezone
522 UTC
If this setting is correct (for example, 522 UTC), you can skip to Step 4. If not, continue
with Step 2.
2. To find what time zone code is associated with your time zone, enter the svcinfo
lstimezones command as shown in Example 6-3. A truncated list is provided for this
example.

Example 6-3 svcinfo lstimezones command


IBM_2145:admin>svcinfo lstimezones
. . .
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
. . .

3. Now that you know which time zone code is correct for you (in our case 514), set the time
zone by issuing the svctask settimezone command:
IBM_2145:admin>svctask settimezone -timezone 514
4. Set the cluster time by issuing the svctask setclustertime command:
IBM_2145:admin>svctask setclustertime -time 0512150403
The format of the time is MMDDHHmmYY. For example, 0512150403 sets the cluster
time to 15:04 on May 12, 2003.

You have now completed the tasks necessary to set the cluster time zone and time.

6.3 Creating host definitions


Perform the following steps to create host definitions within the SVC:
1. To determine which host ports are eligible for definition, issue the svcinfo
lshbaportcandidate command as shown in Example 6-4.

Example 6-4 svcinfo lshbaportcandidate command


IBM_2145:admin>svcinfo lshbaportcandidate
id
210000E08B09691D
210100E08B25C440
210000E08B08AFD6
210100E08B29691D
210000E08B05C440
210000E08B04D451
10000000C92945EB
210100E08B29951D
210000E08B09951D
210100E08B24D451
10000000C92926C3
210100E08B259C41
210000E08B059C41

This command shows all WWNs that are visible to the SVC and that have not already
been defined to a host. If your WWN does not appear, verify that the host has logged in
to the switch and that zoning is updated to allow SVC and host ports to see each other,
as explained in Chapter 3, “Planning and configuration” on page 25.

Note: In some situations, the information presented may include host HBA ports that
are no longer logged in or no longer part of the SAN fabric. For example, if a host HBA
port is unplugged from a switch, svcinfo lshost may still show the WWPN logged in to
all SVC nodes. The incorrect entry is removed when another device is plugged into
the same switch port that previously contained the removed host HBA port.

2. The output from this command shows that we have eleven QLogic ports (21xxx) and two
Emulex ports (10xxx). By checking the hosts and confirming with the switch nameserver,
you determine that the 10xxx WWNs belong to the AIX host. Therefore, you have
everything necessary to create a host definition. We can do this in one of two ways:
– You can add WWN port definitions to a host one at a time using the mkhost and
addhostport commands as shown in Example 6-5 on page 112.

Example 6-5 svctask mkhost and addhostport commands
IBM_2145:admin>svctask mkhost -name AIX_270 -hbawwpn 10000000c92945eb
Host id [0] successfully created

IBM_2145:admin>svctask addhostport -hbawwpn 10000000c92926c3 AIX_270

Note: The -name is optional. If you do not specify a -name, the default is hostX,
where X is the ID sequence number assigned by the SVC internally.

The -hbawwpn is mandatory. If you do not specify the -hbawwpn parameter, this will
cause the command to fail.

If you prefer to provide a name for your host (as we have), you can use letters A to
Z, a to z, numbers 0 to 9, and the underscore. It can be between one and 15
characters in length. However, it cannot start with a number or the word host
because this prefix is reserved for SVC assignment only.

Tip: Some HBA device drivers will not log in to the fabric until they can see target LUNs.
As they do not log in, their WWPNs will not be known as candidate ports. You can specify
the force flag (-force) with this command to stop the validation of the WWPN list.

The -name parameter is used to name the host (in our case AIX_270), and the -hbawwpn
parameter is filled in using data retrieved from the lshbaportcandidate command.
– Add all ports at the same time using a modification of the mkhost command as shown
here:
IBM_2145:admin>svctask mkhost -name W2K_npsrv3 -hbawwpn 210100e08b259c41:210000e08b059c41
Host id [1] successfully created
3. Check that the host definitions were correctly created using the svcinfo lshost command
as shown in Example 6-6.

Example 6-6 svcinfo lshost commands


IBM_2145:admin>svcinfo lshost
id name port_count
0 AIX_270 2
1 W2K_npsrv3 2

IBM_2145:admin>svcinfo lshost AIX_270


id 0
name AIX_270
port_count 2
type generic
WWPN 10000000C92945EB
node_logged_in_count 2
WWPN 10000000C92926C3
node_logged_in_count 2

IBM_2145:admin>svcinfo lshost W2K_npsrv3


id 1
name W2K_npsrv3
port_count 2
type generic
WWPN 210100E08B259C41
node_logged_in_count 2

WWPN 210000E08B059C41
node_logged_in_count 2

You have now completed the tasks required to add host definitions to your SVC configuration.

6.4 Displaying Managed Disks


Perform the following steps to display MDisks:
1. See which MDisks are available. Enter the svcinfo lsmdiskcandidate command as
shown in Example 6-7. This displays all detected MDisks that are not currently part of a
managed disk group (MDG).

Example 6-7 svcinfo lsmdiskcandidate command


IBM_2145:admin>svcinfo lsmdiskcandidate
id
0
1
2
3
4
5
6

Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo
lsmdisk command as shown in Example 6-8.

Example 6-8 svcinfo lsmdisk command


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:unmanaged:::200.4GB:0000000000000000:controller0:600a0b80000fbdf0000002753f
b8b1c200000000000000000000000000000000
1:mdisk1:online:unmanaged:::200.4GB:0000000000000001:controller0:600a0b80000fbdfc000002a33f
b8b30900000000000000000000000000000000
2:mdisk2:online:unmanaged:::407.2GB:0000000000000002:controller0:600a0b80000fbdf0000002773f
b8b22a00000000000000000000000000000000
3:mdisk3:online:unmanaged:::407.2GB:0000000000000003:controller0:600a0b80000fbdfc0000029f3f
b8b1f700000000000000000000000000000000
4:mdisk4:online:unmanaged:::817.4GB:0000000000000004:controller0:600a0b80000fbdf0000002793f
b8b2ac00000000000000000000000000000000
5:mdisk5:online:unmanaged:::681.2GB:0000000000000005:controller0:600a0b80000fbdfc000002a13f
b8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:controller0:600a0b80000fbdfc000002a53f
bcda8100000000000000000000000000000000

From this output, you can see additional information about each MDisk (such as current
status). For the purpose of our current task, we are only interested in the unmanaged
disks because they are candidates for MDGs (all MDisks in our case).

Tip: The -delim : parameter displays the output as delimited fields on a single line
instead of wrapping text over multiple lines.

2. If not all the MDisks that you expect are visible, rescan the Fibre Channel network by
entering the svctask detectmdisk command:

IBM_2145:admin>svctask detectmdisk
3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are
still not visible, check that the logical unit numbers (LUNs) from your subsystem have
been properly assigned to the SVC and that appropriate zoning is in place (that is, the
SVC can see the disk subsystem). See Chapter 3, “Planning and configuration” on
page 25, for details about how to set up your SAN fabric.

6.5 Creating Managed Disk Groups


Perform the following steps to create Managed Disk Groups (MDG):
1. From the information obtained in the previous section, add MDisks to MDGs using one of
the following methods:
– Issue the svctask mkmdiskgrp command as shown here where you can add multiple
MDisks to the MDG at the same time:
IBM_2145:admin>svctask mkmdiskgrp -name MDG0_DS43 -ext 32 -mdisk mdisk0:mdisk2
mDisk Group, id [0], successfully created
This command creates an MDG called MDG0_DS43. The extent size used within this
group is 32 MB. Two MDisks (mdisk0 and mdisk2) are added to the group.

Note: The -name and -mdisk parameters are optional. If you do not enter a -name,
the default is MDiskgrpX, where X is the ID sequence number assigned by the SVC
internally. If you do not enter the -mdisk parameter, an empty MDG is created.

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and
the underscore. It can be between one and 15 characters in length, but it cannot
start with a number or the word mDiskgrp because this prefix is reserved for SVC
assignment only.

By running the svcinfo lsmdisk command again, you should now see the MDisks
(mdisk0 and mdisk2) as “managed” and part of the MDG MDG0_DS43 as shown in
Example 6-9.

Example 6-9 svcinfo lsmdisk command


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:controller0:600a0b80000fbdf000
0002753fb8b1c200000000000000000000000000000000
1:mdisk1:online:unmanaged:::200.4GB:0000000000000001:controller0:600a0b80000fbdfc000002a33f
b8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:controller0:600a0b80000fbdf000
0002773fb8b22a00000000000000000000000000000000
3:mdisk3:online:unmanaged:::407.2GB:0000000000000003:controller0:600a0b80000fbdfc0000029f3f
b8b1f700000000000000000000000000000000
4:mdisk4:online:unmanaged:::817.4GB:0000000000000004:controller0:600a0b80000fbdf0000002793f
b8b2ac00000000000000000000000000000000
5:mdisk5:online:unmanaged:::681.2GB:0000000000000005:controller0:600a0b80000fbdfc000002a13f
b8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:controller0:600a0b80000fbdfc000002a53f
bcda8100000000000000000000000000000000

– If you want to add an MDisk to an existing MDG or want to add MDisks one at a time,
combine the mkmdiskgrp command to create the initial MDG and then use the addmdisk
command, as shown in Example 6-10, to add other MDisks to it.

Example 6-10 svctask mkmdiskgrp and addmdisk commands


IBM_2145:admin>svctask mkmdiskgrp -name MDG1_DS43 -ext 32 -mdisk mdisk1
MDisk Group, id [1], successfully created

IBM_2145:admin>svctask addmdisk -mdisk mdisk3 MDG1_DS43

The first command in this example creates an MDisk group called MDG1_DS43. The
extent size used within this group is 32 MB. One MDisk (mdisk1) is added to the group.
The second command adds a second MDisk (mdisk3) to the same MDG.
By running the svcinfo lsmdisk command again, you now see the MDisks (mdisk1
and mdisk3) as “managed” and part of the MDG MDG1_DS43 (see Example 6-11).

Example 6-11 svcinfo lsmdisk command


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:controller0:600a0b80000fbdf000
0002753fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:controller0:600a0b80000fbdfc00
0002a33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:controller0:600a0b80000fbdf000
0002773fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:controller0:600a0b80000fbdfc00
00029f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:unmanaged:::817.4GB:0000000000000004:controller0:600a0b80000fbdf0000002793f
b8b2ac00000000000000000000000000000000
5:mdisk5:online:unmanaged:::681.2GB:0000000000000005:controller0:600a0b80000fbdfc000002a13f
b8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:controller0:600a0b80000fbdfc000002a53f
bcda8100000000000000000000000000000000

For information about other tasks, such as adding MDisks to MDGs, renaming MDGs or
deleting MDGs, see Chapter 9, “SVC configuration and administration using the CLI” on
page 207.

You have now completed the tasks required to create an MDG.

6.6 Creating a VDisk


When creating a VDisk, you must enter several parameters (some mandatory, some optional)
at the CLI. The full command syntax is:
svctask mkvdisk -mdiskgrp name|id -iogrp name|id -size size [-fmtdisk]
[-vtype seq|striped|image] [-node name|id] [-unit b|kb|mb|gb|tb|pb]
[-mdisk name|id_list] [-name name] [-instance instance]

The parameters are defined as:


• -mdiskgrp: Name or ID of the MDG in which to create the VDisk.
• -iogrp: Name or ID of the I/O group which is to own the VDisk.
• -size: Capacity (numerical), not necessary for image mode VDisks.
• -fmtdisk: Optional parameter to force a format of the new VDisk.

• -vtype: Optional parameter to specify the type of VDisk (sequential, striped, or image
mode). Default (if nothing is specified) is striped.
• -node: Optional parameter to specify the name or ID of the preferred node. Default (if
nothing is specified) is to alternate between nodes in the I/O group.
• -unit: Optional parameter to specify the data units for the capacity parameter. Default (if
nothing is specified) is MB.
• -mdisk: Optional parameter to specify the name or ID of the MDisk or MDisks to be used
for the VDisk. This is only required for sequential and image mode VDisks since striped
VDisks use all MDisks that are available in the MDG by default.

Note: You can use this parameter for striped VDisks, for example, if you want to specify
that the VDisk only uses a subset of the MDisks available within a MDG.

• -name: Optional parameter to assign a name to the new VDisk. Default (if nothing is
specified) is to assign the name VDiskX, where X is the ID sequence number assigned by
the SVC internally.
Note: If you want to provide a name, you can use A to Z, a to z, 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot
start with a number or the word VDisk since this prefix is reserved for SVC assignment
only.

• -instance: The Virtual Disk (also known as Host Logical Unit Number (HLUN)) instance
number. This value overrides that which would be assigned automatically, and influences
other numbers that may be assigned later.

Now perform the following steps to create VDisks:


1. Create a striped VDisk using the svctask mkvdisk command (we cover sequential and
image mode VDisks in a later section). See Example 6-12. This command creates a
10 GB striped VDisk called VD0_AIX_270 within the MDG MDG0_DS43 and assigns it
to the I/O group io_grp0.

Example 6-12 svctask mkvdisk commands


IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG0_DS43 -iogrp io_grp0 -size 10 -vtype striped
-unit gb -name VD0_AIX_270
Virtual Disk, id [0], successfully created

2. Create the remaining VDisks (five more in this example) by running the previous command
several times. The result can be displayed using the svcinfo lsvdisk command as shown in Example 6-13.

Example 6-13 svcinfo lsvdisk command


IBM_2145:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
0:VD0_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
1:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
2:VD2_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
3:VD3_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
4:VD4_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
5:VD5_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::

To display more information about a specific VDisk, enter a variant of the svcinfo lsvdisk
command as shown in Example 6-14 on page 117.

Example 6-14 svcinfo lsvdisk command
IBM_2145:admin>svcinfo lsvdisk VD0_AIX_270
id 0
name VD0_AIX_270
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 2
fast_write_state empty

For information about other tasks, such as deleting a VDisk, renaming a VDisk, or expanding
a VDisk, see Chapter 9, “SVC configuration and administration using the CLI” on page 207.

You have now completed the tasks required to create a VDisk.

6.7 Assigning a VDisk to a host


Using the VDisk and host definitions created in previous sections, assign the VDisks to
hosts so that they are ready for use. To do this, use the svctask mkvdiskhostmap command:
IBM_2145:admin>svctask mkvdiskhostmap -host AIX_270 VD0_AIX_270
Virtual Disk to Host map, id [0], successfully created

IBM_2145:admin>svctask mkvdiskhostmap -host AIX_270 VD1_AIX_270
Virtual Disk to Host map, id [1], successfully created

This command assigns VDisks VD0_AIX_270 and VD1_AIX_270 to host AIX_270.

Note: The optional parameter -scsi scsi_num can help assign a specific LUN ID to a
VDisk that is to be associated with a given host. The default (if nothing is specified) is to
increment based on what is already assigned to the host.
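
For example, to map a VDisk with an explicit SCSI LUN ID of 2, the command might look
like the following sketch (the VDisk name VD2_AIX_270 is illustrative only):

IBM_2145:admin>svctask mkvdiskhostmap -host AIX_270 -scsi 2 VD2_AIX_270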

For information about other tasks, such as deleting a VDisk to host mapping, see Chapter 9,
“SVC configuration and administration using the CLI” on page 207.

You have now completed all the tasks required to assign a VDisk to an attached host. You are
ready to proceed to Chapter 8, “Host configuration” on page 153, to begin to use the
assigned VDisks.

Chapter 7. Quickstart configuration using the GUI

This chapter describes the basic configuration procedures required to get your IBM
TotalStorage SAN Volume Controller (SVC) environment up and running as quickly as
possible using the CIM Agent and Console for SAN Volume Controller graphical user
interface (GUI).

See Chapter 10, “SVC configuration and administration using the GUI” on page 265, for more
information about these and other configuration and administration procedures.

Important: Data entries made through the GUI are case sensitive.

7.1 Adding nodes to the cluster
After cluster creation is completed through the service panel and cluster Web interface, only
one node (the configuration node) is set up. To be a fully functional SVC cluster, a second
node must be added to the configuration.

Perform the following steps to add nodes to the cluster.

Note: If you closed your browser at the end of the last section, start with Step 1. If you left
it open, skip to Step 2 on page 120.

1. Open the GUI using one of the following methods:


– Double-click the SAN Volume Controller Console icon on your master console’s
desktop.
– Open a Web browser on the master console and point to the address:
http://localhost:9080/ica
– Open a Web browser on a separate workstation and point to the address:
http://masterconsoleipaddress:9080/ica
In our case, the master console IP address is 9.42.164.140.

On the Signon page (Figure 7-1), type the user ID superuser and the password passw0rd.
These are the default user ID and password. Click OK.

Figure 7-1 GUI signon page

2. You see the GUI Welcome page as shown in Figure 7-2 on page 121. This page has
several links: My Work (top left), a Recent Tasks list (bottom left), the GUI version and
build level information (right, under the graphic), and a hypertext link to the SVC download
page:
http://www.ibm.com/storage/support/2145

Under My Work on the left, click the Clusters link.

Figure 7-2 GUI Welcome page

3. On the Viewing Clusters panel (Figure 7-3), select the check box next to the cluster on
which you want to perform actions (in our case SVC1). Select Launch the SAN Volume
Controller application from the list and click Go.

Figure 7-3 Selecting to launch the SAN Volume Controller application

4. The SAN Volume Controller Console Application launches in a separate browser window
(Figure 7-4 on page 123). On this page, as with the Welcome page, you can see several
links under My Work (top left), a Recent Tasks list (bottom left), the SVC Console version
and build level information (right, under graphic), and a hypertext link that will bring you to
the SVC download page:
http://www.ibm.com/storage/support/2145
Under My Work, click the Work with Nodes option and then the Nodes link.

Figure 7-4 SVC Console Welcome page

5. The Viewing Nodes panel (Figure 7-5 on page 124) opens. Note the input/output (I/O)
group name (for example, io_grp0). Select the node you want to add. Ensure that Add a
node is selected from the list and click Go.

Figure 7-5 Viewing Nodes panel

Note: You can rename the existing node to your own naming convention standards (we
show how to do this later). On your panel, it should appear as node1 by default.

6. The next panel (Figure 7-6 on page 125) displays the available nodes. Select the node.
Associate it with an I/O group and provide a name (for example, SVC3N2). Click OK.

Figure 7-6 Adding a Node to a Cluster panel

Note: If you do not provide a name, the SVC automatically generates the name nodeX,
where X is the ID sequence number assigned by the SVC internally. In our case, this is
node2.

If you want to provide a name (as we have), you can use letters A to Z, a to z, numbers
0 to 9, and the underscore. It can be between one and 15 characters in length, but
cannot start with a number or the word node since this prefix is reserved for SVC
assignment only.

In our case, we only have enough nodes to complete the formation of one I/O group.
Therefore, we added our new node to the I/O group that node1 was already using, namely
io_grp0 (you can rename it from the default of io_grp0 to follow your own naming
convention standards).
If this panel does not display any available nodes, check that your second node is powered
on and that zones are appropriately configured in your switches. It is also possible that
pre-existing cluster configuration data is stored on the node. If you are sure this node is not
part of another active SVC cluster, use the service panel to delete the existing cluster
information. When this is complete, return to this panel and you should see the node listed.
For information about zoning requirements, see Chapter 3, “Planning and configuration”
on page 25. For information about how to delete an existing cluster configuration using the
service panel, see 5.3, “Basic installation” on page 82.
7. Return to the Viewing Nodes panel (Figure 7-7 on page 126). It shows the status change
of the node from Adding to Online.

Figure 7-7 Node added successfully

Note: This panel does not automatically refresh. Therefore, you continue to see the
Adding status until you click the Refresh button.

You have completed the cluster configuration and now you have a fully redundant SVC
environment.

7.1.1 Installing certificates
As we continue setting up the SVC cluster, we encounter many instances where we are
prompted with security warnings about unrecognized certificates. The security warning
panel (Figure 7-8) shows three options.

Figure 7-8 Security Alert window

These options are:


• Yes: Clicking Yes accepts the certificate for this task and allows you to proceed using
the unrecognized certificate. Each time you select a task that transmits secure
information, you are prompted to accept another certificate. In most cases, you are
prompted multiple times due to the two-way data exchange that occurs between the
management workstation and the SVC cluster. In some cases, this can cause your
browser to crash.
• No (default): Clicking this option rejects the certificate for this task and does not allow you
to proceed.
• View Certificate: Clicking this option launches the Certificate window (Figure 7-9 on
page 128), from which you can install the certificate. If you do not want to be prompted
repeatedly to accept or reject certificates, we recommend that you choose this option.

Figure 7-9 Certificate Information

Follow these steps to install a certificate:


1. From the Security Alert window (Figure 7-8 on page 127), select View Certificate.
2. The Certificate window (Figure 7-9) opens. Click Install Certificate.
3. The Welcome to the Certificate Import Wizard information panel (Figure 7-10) opens.
Click Next.

Figure 7-10 Certificate Import Wizard

4. On the Certificate Store panel (Figure 7-11), click Next.

Figure 7-11 Certificate Store panel

5. You may be prompted with the Root Certificate Store confirmation window (Figure 7-12). If
you are, click Yes.

Figure 7-12 Root Certificate Store

6. You should see a message stating that the import was successful (Figure 7-13). Click OK.

Figure 7-13 Certificate Import successful

7. You return to the Certificate Information window (Figure 7-9 on page 128) you saw earlier.
Click OK.
8. Provide the admin user ID and password when prompted.

From this point, you should no longer be asked to accept or reject certificates from the SVC
cluster.

Note: Future code upgrades may result in new certificate IDs, so you may have to go
through this process again.

7.2 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. From the SVC Welcome page (Figure 7-4 on page 123), select the Manage Cluster
option and the Set Cluster Time link.
2. The Cluster Date and Time Settings panel (Figure 7-14) opens. At the top of the panel,
you see the current settings. If necessary, make adjustments and ensure that the Update
cluster date and time and Update cluster time zone check boxes are selected. Click
Update.

Figure 7-14 Cluster Date and Time Settings panel

Note: You may be prompted for the cluster user ID and password. If you are, enter
admin and the password you set earlier.

3. You see the message “The Cluster Time Zone Setting Has Been Updated”.

You have now completed the tasks necessary to set the cluster time zone and time.

7.3 Creating host definitions
Perform the following steps to create host objects within the SVC:
1. From the SVC Welcome page (Figure 7-4 on page 123), select the Working with Virtual
Disks option and then the Hosts link.
2. You see the Filtering Hosts panel (not shown), which we explain later. Click the Bypass
filter button at the top of this panel.
3. The Viewing Hosts panel (Figure 7-15) opens. Select Create a host from the list and click
Go.

Figure 7-15 Viewing Hosts panel

4. On the Creating Hosts panel (Figure 7-16 on page 132), follow these steps:
a. Type a name for your host (for example, LINUX1).

Note: If you do not provide a name, the SVC automatically generates the name
hostX, where X is the ID sequence number assigned by the SVC internally.

If you want to provide a name (as we have), you can use the letters A to Z, a to z,
numbers 0 to 9, and the underscore. It can be between one and 15 characters in
length. However, it cannot start with a number or the word host because this prefix is
reserved for SVC assignment only.

b. From the Available Port list, select the WWN or WWNs, one at a time, and click the
Add button.
c. When you are done adding the WWNs, click OK.

Figure 7-16 Creating Hosts panel

Note: This panel shows all WWNs that are visible to the SVC and that have not already
been defined to a host. If your WWN does not appear, check that the host has logged
into the switch and that zoning in the switches is updated to allow SVC and host ports to
see each other. This is described in Chapter 3, “Planning and configuration” on
page 25.

5. You return to the Viewing Hosts panel (Figure 7-17) where you should see your newly
created host.

Figure 7-17 Host added successfully

For information about other tasks, such as adding host ports, deleting host ports, or deleting
hosts, see Chapter 10, “SVC configuration and administration using the GUI” on page 265.

You have now completed the tasks required to add host definitions to your SVC configuration.

7.4 Displaying managed disks


Perform the following steps to display MDisks:
1. From the SVC Welcome page (Figure 7-4 on page 123), select the Work with Managed
Disks option and then the Managed Disks link.
2. When the Filtering Managed Disks (MDisk) panel opens, click Bypass filter to open the
Viewing Managed Disks panel.
3. On the Viewing Managed Disks panel (Figure 7-18 on page 134), if your MDisks are not
displayed, rescan the Fibre Channel network. Select Discover MDisks from the list and
click Go.

Figure 7-18 Discover MDisks

Note: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from
your subsystem are properly assigned to the SVC and that appropriate zoning is in place
(that is, the SVC can see the disk subsystem). See Chapter 3, “Planning and
configuration” on page 25, for more details about how to set up your storage area network
(SAN) fabric.

7.5 Creating Managed Disk Groups


Perform the following steps to create a managed disk group (MDG):
1. From the SVC Welcome page (Figure 7-4 on page 123), select the Work with Managed
Disks option and then the Managed Disks Groups link.
2. When the Filtering Managed Disks (MDisk) Groups panel opens, click Bypass filter.
3. The Viewing Managed Disk Groups panel (Figure 7-19 on page 135) opens. Select
Create an MDisk Group from the list and click Go.

Figure 7-19 Selecting the option to create an MDisk group

4. On the Create Managed Disk Group panel (Figure 7-20 on page 136), click Next.

Figure 7-20 Create Managed Disk Group panel

5. On the Name the group and select the managed disks panel (Figure 7-21 on page 137),
follow these steps:
a. Type a name for the MDG.

Note: If you do not provide a name, the SVC automatically generates the name
MDiskgrpX, where X is the ID sequence number assigned by the SVC internally.

If you want to provide a name (as we have), you can use the letters A to Z, a to z,
numbers 0 to 9, and the underscore. It can be between one and 15 characters in
length, but cannot start with a number or the word MDiskgrp because this prefix is
reserved for SVC assignment only.

b. From the MDisk Candidates box, one at a time, select the MDisks to put into the MDG.
Click Add to move them to the Selected MDisks box.
c. Click Next.

Figure 7-21 Name the group and select the managed disks panel

6. From the list shown in Figure 7-22, select the extent size to use. Then click Next.

Figure 7-22 Select Extent Size panel

7. On the Verify Managed Disk Group panel (Figure 7-23), verify that the information
specified is correct. Then click Finish.

Figure 7-23 Verify MDG wizard

8. Return to the Viewing Managed Disk Groups panel (Figure 7-24 on page 140) where the
MDG (in our case MDG2_DS43) is displayed.

Figure 7-24 MDG added successfully

For information about other tasks, such as adding MDisks to MDGs and renaming MDGs or
deleting MDGs, see Chapter 10, “SVC configuration and administration using the GUI” on
page 265.

We have completed the tasks required to create an MDG.

7.6 Creating a VDisk


Perform the following steps to create VDisks:
1. From the SVC Welcome page (Figure 7-4 on page 123), select the Work with Virtual
Disks option and then the Virtual Disks link.
2. When the Filtering Virtual Disks (VDisks) panel opens, click Bypass filter.
3. The Viewing Virtual Disks panel (Figure 7-25 on page 141) opens. Select Create VDisk
from the list. Click Go.

Figure 7-25 Viewing Virtual Disks panel

4. The Create Virtual Disks wizard (Figure 7-26) opens. Click the Next button.

Figure 7-26 Create Virtual Disks wizard

5. On the Choose an I/O Group and a Managed Disk Group panel (Figure 7-27), follow these
steps:
a. Select the I/O group to associate the VDisk with from the list. In our case, we only have
one, io_grp0, so we must select it from the list.

Note: You can let the system choose the preferred node and I/O group.

b. Optionally, choose a preferred node. The default (if nothing is selected) is to alternate
between nodes in the I/O group.
c. Select the MDisk group in which to create the VDisk from the list. In our case, we
selected MDG1_DS4302.
d. Click Next.

Figure 7-27 Choosing an I/O group and a MDG panel

6. On the Select the Type of VDisk panel (Figure 7-28), follow these steps:
a. Select the type of VDisk to create (striped or sequential) from the list.
b. Select the number of VDisks to create.
c. Click Next.

Figure 7-28 Select the Type of VDisk panel

7. On the Name the Virtual Disk(s) panel (Figure 7-29), type a name for the VDisk.
In our case, we used VD0_LINUX1. Click Next.

Note: If you do not provide a name, the SVC automatically generates the name
VDiskX, where X is the ID sequence number assigned by the SVC internally.

If you want to provide a name (as we have), you can use letters A to Z, a to z, numbers
0 to 9, and the underscore. It can be between one and 15 characters in length.
However, it cannot start with a number or the word VDisk because this prefix is
reserved for SVC assignment only.

Figure 7-29 Name the Virtual Disk(s) panel

8. On the Select Attributes for modetype-mode VDisk panel (modetype is the type of VDisk
you selected in the previous step) as shown in Figure 7-30, follow these steps:
a. Optionally, choose the Managed Disk Candidates upon which to create the VDisk.
Click Add to move them to the Managed Disks Striped in this Order box.
Striped VDisks, by default, use all MDisks within a MDG. Therefore, it is not necessary
to select anything here. However, you may want to select from the list, for example, if
you want to specify that the VDisk only uses a subset of the MDisks available within a
MDG.
For image and sequential VDisks, we do not see the Managed Disk Candidates or
Managed Disks Striped in this Order boxes. Instead, we see the Managed Disk Used to
Create VDisk box, with the top entry in the list selected by default.
b. Type the capacity of the VDisk. Select the unit of capacity from the list.
Remember, capacity is calculated based on 1 GB = 1024 MB. Therefore, an entry of 10
GB actually provides 10240 MB instead of 10000 MB as with other disk subsystems.
c. Optionally, choose to format the VDisk by selecting the check box.

Note: Formatting destroys any existing data on the VDisk.

d. After completing all the necessary entry fields, click Next.

Figure 7-30 Select Attributes for a VDisk panel

9. On the Verify VDisk panel (Figure 7-31 on page 147), verify the selections. We can select
the Back button at any time to make changes.

Once satisfied with the selections, click Finish. We return to the Viewing Virtual Disks
panel where we should now see our newly created VDisk.

Figure 7-31 Verify VDisk

10.After clicking Finish, a panel (Figure 7-32 on page 148) displays the result of the
action.

Figure 7-32 VDisk creation success

11.Click Finish again to see a list (Figure 7-33 on page 149) of all the created VDisks.

Figure 7-33 List of all created VDisks

For information about other tasks, such as deleting a VDisk, renaming a VDisk or expanding
a VDisk, see Chapter 10, “SVC configuration and administration using the GUI” on page 265.

We have now completed the tasks required to create a VDisk.

7.7 Assigning a VDisk to a host


Perform the following steps to map a VDisk to a host:
1. From the SVC Welcome page (Figure 7-4 on page 123), select the Work with Virtual
Disks option and then the Virtual Disks link.
2. When the Filtering Virtual Disks (VDisks) panel opens, click Bypass filter.
3. On the Viewing Virtual Disks panel (Figure 7-34 on page 150), select the radio button next
to the VDisk to assign. Select Map a VDisk to a host from the list and click Go.

Figure 7-34 Assign a VDisk to a host

Note: The SCSI LUN ID option allows you to assign a specific LUN ID to a VDisk that
you are mapping to a host. The default, if nothing is selected, is to increment the SCSI
LUN ID by one based on what is already assigned to the host.

4. On the Creating VDisk-to-Host Mappings panel (Figure 7-35 on page 151), select the
target host. Optionally, assign a specific SCSI LUN ID, and click OK.

Figure 7-35 Creating VDisk-to-Host Mappings panel

5. An information panel shows the status of the mapping, as shown in Figure 7-36.

Figure 7-36 VDisk to host mapping successful

6. We now return to the Viewing Virtual Disks panel.

For information about other tasks such as deleting a VDisk to host mapping, see Chapter 10,
“SVC configuration and administration using the GUI” on page 265.

We have completed all the tasks required to assign a VDisk to an attached host. We are
ready to proceed to Chapter 8, “Host configuration” on page 153, to begin using the assigned
VDisks.

Chapter 8. Host configuration

This chapter describes the basic host configuration procedures required to connect a
supported host to the IBM TotalStorage SAN Volume Controller (SVC).

8.1 SAN configuration
You must take great care when setting up the storage area network (SAN) to ensure that the
zoning rules discussed in 3.4, “Zoning” on page 36, are followed. The SVC is configured so
that all of its ports are in one SVC zone together. There is a storage zone that comprises all
SVC ports and all ports on the disk subsystem, and there are multiple host zones, each of
which consists of one HBA port and one port from each of the two SVC nodes.

8.2 SVC setup


Figure 8-1 shows a basic configuration with multiple heterogeneous hosts connected to the
SVC nodes through two switches.

(The diagram shows hosts W2K3_1, AIX_270, Linux, and W2K3_2, plus the master console,
connected through SAN switches SW11 and SW21 to SVC1 node1 and node2 and to the
DS4300_A and DS4300_B controllers.)

Figure 8-1 SAN Volume Controller setup

8.3 Switch and zoning configuration


Example 8-1 lists the details of SAN switch SW21 and the zoning used in our setup.

Example 8-1 Switch SW21 configuration (our “SWITCH_1”)

SW21:admin> switchshow
switchName: SW21
switchType: 9.2
switchState: Online
switchMode: Native

switchRole: Principal
switchDomain: 21
switchId: fffc15
switchWwn: 10:00:00:60:69:51:87:0a
switchBeacon: OFF
Zoning: ON (SVC_Redbook21_Conf)
port 0: id N2 Online F-Port 21:01:00:e0:8b:28:af:d6 --> Master Console
port 1: id N2 Online F-Port 50:05:07:68:01:10:03:64 --> SVC1 node 2 port 1
port 2: id N2 Online F-Port 50:05:07:68:01:10:03:5a --> SVC1 node 1 port 1
port 3: id N2 Online F-Port 50:05:07:68:01:30:03:64 --> SVC1 node 2 port 3
port 4: id N2 Online F-Port 50:05:07:68:01:30:03:5a --> SVC1 node 1 port 3
port 5: id N2 Online F-Port 20:08:00:a0:b8:0f:bd:f1 --> DS4300
port 6: id N2 No_Light
port 7: id N2 No_Light
port 8: id N2 Online F-Port 10:00:00:00:c9:29:26:c3 --> AIX_270 fcs1
port 9: id N2 Online F-Port 21:01:00:e0:8b:25:d8:21 --> Linux
port 10: -- N2 No_Module
port 11: id N2 No_Light
port 12: id N2 Online F-Port 21:00:00:e0:8b:13:82:40 --> W2K3_1
port 13: -- N2 No_Module
port 14: id N2 Online F-Port 21:00:00:e0:8b:0e:27:8c --> W2K3_2
port 15: -- N2 No_Module

SW21:admin> cfgshow
Effective configuration:
cfg: SVC_Redbook21_Conf
SVC_Storage; SVC_Console; AIX_270_SVC1; Linux_SVC1; W2K3_1_SVC1; W2K3_2_SVC1

zone: SVC_Storage 21,1; 21,2; 21,3; 21,4; 21,5


zone: SVC_Console 21,0; 21,1; 21,2; 21,3; 21,4
zone: AIX_270_SVC1 21,1; 21,2; 21,4;
zone: Linux_SVC1 21,1; 21,2; 21,9
zone: W2K3_1_SVC1 21,3; 21,4; 21,12
zone: W2K3_2_SVC1 21,3; 21,4; 21,14

Example 8-2 lists the details of SAN switch SW11 and the zoning used in Figure 8-1 on
page 154.

Example 8-2 Switch SW11 configuration (our “SWITCH_2”)


SW11:admin> switchshow
switchName: SW11
switchType: 9.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 11
switchId: fffc0b
switchWwn: 10:00:00:60:69:51:87:2f
switchBeacon: OFF
Zoning: ON (SVC_Redbook21_Conf)
port 0: id N2 Online F-Port 21:00:00:e0:8b:08:af:d6 --> Master console
port 1: id N2 Online F-Port 50:05:07:68:01:20:03:64 --> svc1 node 2 port 2
port 2: id N2 Online F-Port 50:05:07:68:01:20:03:5a --> SVC1 node 1 port 2
port 3: id N2 Online F-Port 50:05:07:68:01:40:03:64 --> SVC1 node 2 port 4
port 4: id N2 Online F-Port 50:05:07:68:01:40:03:5a --> SVC1 node 1 port 4
port 5: id N2 Online F-Port 20:09:00:a0:b8:0f:bd:f2 --> DS4300
port 6: id N2 No_Light
port 7: id N2 No_Light
port 8: id N2 Online F-Port 10:00:00:00:c9:29:45:eb --> AIX_270 fcs0

port 9: id N2 Online F-Port 21:01:00:e0:8b:02:84:d0 --> Linux
port 10: -- N2 No_Module
port 11: id N2 No_Light
port 12: id N2 Online F-Port 21:00:00:e0:8b:13:7c:47 --> W2K3_1
port 13: -- N2 No_Module
port 14: id N2 Online F-Port 21:00:00:e0:8b:0e:2a:8c --> W2K3_2
port 15: -- N2 No_Module

SW11:admin> cfgshow
Effective configuration:
cfg: SVC_Redbook21_Conf
SVC_Storage; SVC_Console; AIX_270_SVC1; Linux_SVC1; W2K3_1_SVC1; W2K3_2_SVC1

zone: SVC_Storage 11,1; 11,2; 11,3; 11,4; 11,5


zone: SVC_Console 11,0; 11,1; 11,2; 11,3; 11,4
zone: AIX_270_SVC1 11,1; 11,2; 11,4;
zone: Linux_SVC1 11,1; 11,2; 11,9
zone: W2K3_1_SVC1 11,3; 11,4; 11,12
zone: W2K3_2_SVC1 11,3; 11,4; 11,14

Although there are 16 possible paths (two server ports × eight SVC node ports), only four
paths exist because of the switch zoning. A zone consists of one port of the server and one
port of each node.
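
On Brocade switches such as those used here, a host zone of this form could be defined
with domain,port members, as in the following sketch (the zone and configuration names
match Example 8-1; the commands are shown for illustration only):

SW21:admin> zonecreate "W2K3_1_SVC1", "21,3; 21,4; 21,12"
SW21:admin> cfgadd "SVC_Redbook21_Conf", "W2K3_1_SVC1"
SW21:admin> cfgenable "SVC_Redbook21_Conf"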

Figure 8-2 shows one of the zones. The zone W2K3_1_SVC1 connects the host W2K3_1
with SVC1. There are two paths from FC host adapter 1: one to port 1 of SVC1 node 1 and
one to port 1 of SVC1 node 2. Two other paths run from FC host adapter 2 to port 2 of SVC1
node 1 and to port 2 of SVC1 node 2. That gives the host system W2K3_1 a maximum of
four paths.

(The diagram repeats the setup of Figure 8-1, highlighting the zone in which host W2K3_1
connects through switches SW11 and SW21 to one port on each of SVC1 node1 and
node2.)

Figure 8-2 Zoning for W2K3_1_SVC1

8.4 AIX-specific information


The following section details specific information that pertains to the connection of AIX-based
hosts to the SVC environment.

8.4.1 Configuring the AIX host


To configure the AIX host, follow these steps:
1. Install the HBA or HBAs on your AIX host system.
2. Install and configure the 2145 and SDD drivers.
3. Configure the switches (zoning) if needed.
4. Connect your AIX host system to the switches.
5. Configure your host, VDisks, and host mapping in the SAN Volume Controller.
6. Run the cfgmgr command to discover the VDisks created on SVC.

8.4.2 Support information


This section details the current support information. It is vital to check the Web sites listed
regularly for any updates.

Operating system versions and maintenance levels
The versions of AIX listed in Table 8-1 are supported. See the following Web site for the latest
information:
http://www.storage.ibm.com/support/2145

Table 8-1 Versions of AIX supported with SVC


Operating system level    Machine level

AIX 4.3.3                 Recommended Maintenance Level 11
AIX 5.1                   Recommended Maintenance Level 6
AIX 5.2                   Recommended Maintenance Level 4
AIX 5.3

Subsystem Device Driver (SDD)


SDD is a pseudo device driver designed to support the multipath configuration environments
in IBM products. It resides in a host system with the native disk device driver and provides the
following functions:
•  Enhanced data availability
•  Dynamic input/output (I/O) load balancing across multiple paths
•  Automatic path failover protection
•  Concurrent download of licensed internal code

SDD groups the paths to one VDisk, which AIX otherwise sees as multiple hdisks, into a single
vpath device. The vpath device is the volume that AIX uses for handling I/O to the disk;
therefore, in any command, such as mkvg (make volume group), you must use the vpath as the
device instead of an hdisk.
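For example, a volume group is created on the vpath device rather than on one of its underlying
hdisks (a minimal sketch; the volume group and vpath names here are illustrative):

# mkvg -y datavg vpath0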

The following version of SDD for AIX is supported. See the following Web site for the latest
information:
http://www.storage.ibm.com/support/2145
•  SDD for AIX version 1.5.1.4

Supported host adapters


The HBAs listed in Table 8-2 are supported for AIX. See the following Web site for the latest
information:
http://www.storage.ibm.com/support/2145

Table 8-2 Supported HBA in pSeries servers


Platform          HBA                  AIX level
                                       4.3.3   5.1   5.2   5.3
IBM pSeries®      FC 6227              √       √     √     √
                  FC 6228              √       √     √     √
                  FC 6229, type 5704   √       √     √     √

8.4.3 Host adapter configuration settings


You can check the availability of the FC Host Adapters by using the command shown in
Example 8-3.

Example 8-3 FC Host Adapter availability
# lsdev -Cc adapter |grep fcs
fcs0 Available 20-58 FC Adapter
fcs1 Available 20-60 FC Adapter

You can also find the worldwide port name (WWPN) of your FC Host Adapter and check the
firmware level as shown in Example 8-4. The Network Address is the WWPN for the FC
adapter.

Example 8-4 FC Host Adapter settings and WWPN


# lscfg -vl fcs0
fcs0 P2-I4/Q1 FC Adapter

Part Number.................09P1162
EC Level....................D
Serial Number...............KT12203617
Feature Code/Marketing ID...2765
Manufacturer................0010
FRU Number..................09P1173
Network Address.............10000000C92945EB
ROS Level and ID............02903331
Device Specific.(Z0)........4002206D
Device Specific.(Z1)........10020193
Device Specific.(Z2)........3001506D
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF101493
Device Specific.(Z5)........02903331
Device Specific.(Z6)........06113331
Device Specific.(Z7)........07113331
Device Specific.(Z8)........20000000C92945EB
Device Specific.(Z9)........SS3.30X1
Device Specific.(ZA)........S1F3.30X1
Device Specific.(ZB)........S2F3.30X1
Device Specific.(YL)........P2-I4/Q1

# lscfg -vl fcs1


fcs1 P2-I1/Q1 FC Adapter

Part Number.................03N2452
EC Level....................D
Serial Number...............1C13308E15
Manufacturer................001C
Feature Code/Marketing ID...2765
FRU Number..................09P0102
Network Address.............10000000C92926C3
ROS Level and ID............02C03891
Device Specific.(Z0)........1002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C92926C3
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1

Device Specific.(YL)........P2-I1/Q1

8.4.4 SDD installation


At the time of writing, version 1.5.1.4 of SDD for AIX is supported. See the following Web site
for the latest information about SDD for AIX:
http://www.storage.ibm.com/support/2145

After you download the appropriate version of the SDD, install SDD using the standard AIX
procedure.

Download the SDD driver to a directory on the AIX server, for example /tmp/sdd. List the files
in the directory, run the inutoc command, verify that the .toc file was created, and install the
fileset to AIX. From the directory where the files were downloaded, proceed as shown in
Example 8-5.

Example 8-5 Installing SDD on AIX


# ls -l
total 4568
-rw-r--r-- 1 root 201 952320 Aug 26 20:32 devices.sdd.52.1.5.1.4.bff
-rw-r--r-- 1 root 201 1382400 Apr 14 2004 devices.sdd.52.rte

# inutoc .

# ls -l
total 4576
-rw-r--r-- 1 root system 1348 Nov 15 11:58 .toc
-rw-r--r-- 1 root 201 952320 Aug 26 20:32 devices.sdd.52.1.5.1.4.bff
-rw-r--r-- 1 root 201 1382400 Apr 14 2004 devices.sdd.52.rte

# installp -ac -d . all

Example 8-6 on page 160 shows how to check the installation of SDD and the SAN Volume Controller device driver.

Example 8-6 Checking of 2145 SAN Volume Controller and SDD device driver
# lslpp -l "*sdd*"
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
devices.sdd.52.rte 1.5.1.4 COMMITTED IBM Subsystem Device Driver
for AIX V52

Path: /etc/objrepos
devices.sdd.52.rte 1.5.1.4 COMMITTED IBM Subsystem Device Driver
for AIX V52
# lslpp -l "*2145*"
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
devices.fcp.disk.ibm2145.rte
1.1.0.3 COMMITTED SAN Volume Controller

Path: /etc/objrepos
devices.fcp.disk.ibm2145.rte
1.1.0.3 COMMITTED SAN Volume Controller

You can also check that the SDD server is operational as shown in Example 8-7.

Example 8-7 SDD server is operational


# lssrc -s sddsrv
Subsystem Group PID Status
sddsrv 5828 active

# ps -eaf |grep sdd


root 5828 7744 0 09:27:13 - 0:22 /usr/sbin/sddsrv
root 18666 17554 1 18:00:55 pts/1 0:00 grep sdd

8.4.5 Discovering the assigned VDisk


Before adding a new volume from the SVC, the AIX host system “AIX_270” had the
configuration as shown in Example 8-8.

Example 8-8 Status of AIX host system ‘AIX_270’

# lspv
hdisk0 000c309d1df43813 rootvg active
hdisk1 000c309df4fd2353 None

# lsvg
rootvg

The configuration of the host AIX_270, the VDisks VD1_AIX_270 to VD5_AIX_270, and the
mapping between the host and the VDisks are defined in the SVC as shown in Example 8-9
on page 161.

On the SVC, you can check that the WWPNs, as listed in Example 8-4 on page 159, are
logged into the SVC for the host “AIX_270” by entering:
svcinfo lshost AIX_270

You can also find the serial numbers of the VDisks with the following command:
svcinfo lshostvdiskmap

Example 8-9 SVC definitions for host system ‘AIX_270’


IBM_2145:admin>svcinfo lshost AIX_270
id 0
name AIX_270
port_count 2
type generic
WWPN 10000000C92945EB
node_logged_in_count 2
WWPN 10000000C92926C3
node_logged_in_count 2

IBM_2145:admin>svcinfo lshostvdiskmap
id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
0 / AIX_270 / 0 / 3 / VD1_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000006
0 / AIX_270 / 1 / 4 / VD2_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000007
0 / AIX_270 / 2 / 5 / VD3_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000008
0 / AIX_270 / 3 / 6 / VD4_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000009
0 / AIX_270 / 4 / 7 / VD5_AIX_270 / 10000000C92945EB / 600507680189801B200000000000000A

IBM_2145:admin>svcinfo lsvdisk VD1_AIX_270

id 3
name VD1_AIX_270
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 1
fast_write_state empty

IBM_2145:admin>svcinfo lsvdiskhostmap VD1_AIX_270


id / name / SCSI_id / host_id / host_name / wwpn / vdisk_UID
3 / VD1_AIX_270 / 0 / 0 / AIX_270 / 10000000C92945EB / 600507680189801B2000000000000006
3 / VD1_AIX_270 / 0 / 0 / AIX_270 / 10000000C92926C3 / 600507680189801B2000000000000006

You have to run cfgmgr on the AIX host to get the new disks and vpaths configured. If you run
the configuration manager (cfgmgr) on each FC adapter, it does not create the vpaths, only the
new hdisks. To get the vpaths configured, run the cfallvpath command after issuing the
cfgmgr command on the FC adapters:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
# cfallvpath

Alternatively, run the cfgmgr command without arguments to scan the whole system, but this is
time intensive:
# cfgmgr

The disk configuration of the AIX host system now appears as shown in Example 8-10. The
VDisk “VD2_AIX_270” is seen as vpath1. The vpath1 device comprises hdisk3, hdisk8, hdisk13,
and hdisk18, which correspond to the four paths between the host “AIX_270” and the VDisk.
Even though there are 16 possible paths (two server ports x eight SVC node ports), only four
exist because of the switch zoning: each zone consists of one port of the server and one port
of each SVC node.

You can check the serial numbers of the VDisks with the lsvpcfg command, as shown in
Example 8-10:

Example 8-10 Five VDisks from SVC added with four different paths for each VDisk
# lsdev -Cc disk
hdisk0 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-9,0 16 Bit SCSI Disk Drive
hdisk2 Available 10-70-01 SAN Volume Controller Device
hdisk3 Available 10-70-01 SAN Volume Controller Device
hdisk4 Available 20-58-01 SAN Volume Controller Device
hdisk5 Available 20-58-01 SAN Volume Controller Device
hdisk6 Available 10-70-01 SAN Volume Controller Device
hdisk7 Available 10-70-01 SAN Volume Controller Device
hdisk8 Available 10-70-01 SAN Volume Controller Device

hdisk9 Available 10-70-01 SAN Volume Controller Device
hdisk10 Available 10-70-01 SAN Volume Controller Device
hdisk11 Available 10-70-01 SAN Volume Controller Device
hdisk12 Available 10-70-01 SAN Volume Controller Device
hdisk13 Available 10-70-01 SAN Volume Controller Device
hdisk14 Available 20-58-01 SAN Volume Controller Device
hdisk15 Available 20-58-01 SAN Volume Controller Device
hdisk16 Available 20-58-01 SAN Volume Controller Device
hdisk17 Available 20-58-01 SAN Volume Controller Device
hdisk18 Available 20-58-01 SAN Volume Controller Device
hdisk19 Available 20-58-01 SAN Volume Controller Device
hdisk20 Available 20-58-01 SAN Volume Controller Device
hdisk21 Available 20-58-01 SAN Volume Controller Device
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver
vpath4 Available Data Path Optimizer Pseudo Device Driver

# lspv
hdisk0 000c309d1df43813 rootvg active
hdisk1 000c309df4fd2353 None
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
hdisk15 none None
hdisk16 none None
hdisk17 none None
hdisk18 none None
hdisk19 none None
hdisk20 none None
hdisk21 none None
vpath0 none None
vpath1 000c309d2f0b1e01 itsoaixvg active
vpath2 none None
vpath3 none None
vpath4 none None

# lsvpcfg
vpath0 (Avail ) 600507680189801B2000000000000006 = hdisk2 (Avail ) hdisk7 (Avail ) hdisk12
(Avail ) hdisk17 (Avail )
vpath1 (Avail pv itsoaixvg) 600507680189801B2000000000000007 = hdisk3 (Avail ) hdisk8
(Avail ) hdisk13 (Avail ) hdisk18 (Avail )
vpath2 (Avail ) 600507680189801B2000000000000008 = hdisk4 (Avail ) hdisk9 (Avail ) hdisk14
(Avail ) hdisk19 (Avail )
vpath3 (Avail ) 600507680189801B2000000000000009 = hdisk5 (Avail ) hdisk10 (Avail ) hdisk15
(Avail ) hdisk20 (Avail )
vpath4 (Avail ) 600507680189801B200000000000000A = hdisk6 (Avail ) hdisk11 (Avail ) hdisk16
(Avail ) hdisk21 (Avail )

# lspv vpath1
PHYSICAL VOLUME: vpath1 VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 000c309d2f0b1e01 VG IDENTIFIER 000c309d00004c00000001003dc0f44b
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 639 (10224 megabytes) VG DESCRIPTORS: 2
FREE PPs: 606 (9696 megabytes) HOT SPARE: no
USED PPs: 33 (528 megabytes)
FREE DISTRIBUTION: 128..95..127..128..128
USED DISTRIBUTION: 00..33..00..00..00

To make the volume group itsoaixvg, we used the vpath in the mkvg command instead of an
hdisk.

# mkvg -y itsoaixvg vpath1

8.4.6 Using SDD


You can also check that device 1 (vpath1/VD2_AIX_270) is in the “open” state because it is
used by a volume group. Only two of the four paths show activity; they are the two paths
connected to the preferred node of the input/output (I/O) group of the SVC cluster. The
two other paths are used as alternate paths in case of a failure of the preferred paths (see
Example 8-11).

Example 8-11 SDD commands used to check the availability of the adapters and devices
# datapath query adapter

Active Adapters :2

Adpt# Adapter Name State Mode Select Errors Paths Active


0 fscsi0 NORMAL ACTIVE 43 0 10 2
1 fscsi1 NORMAL ACTIVE 42 0 10 2

# datapath query device


Total Devices : 5

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000006
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 CLOSE NORMAL 0 0
1 fscsi0/hdisk7 CLOSE NORMAL 0 0
2 fscsi1/hdisk12 CLOSE NORMAL 0 0
3 fscsi1/hdisk17 CLOSE NORMAL 0 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000007
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 0 0
1 fscsi0/hdisk8 OPEN NORMAL 43 0
2 fscsi1/hdisk13 OPEN NORMAL 0 0
3 fscsi1/hdisk18 OPEN NORMAL 42 0

DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized
SERIAL: 600507680189801B2000000000000008
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk4 CLOSE NORMAL 0 0
1 fscsi0/hdisk9 CLOSE NORMAL 0 0
2 fscsi1/hdisk14 CLOSE NORMAL 0 0
3 fscsi1/hdisk19 CLOSE NORMAL 0 0

DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000009
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk5 CLOSE NORMAL 0 0
1 fscsi0/hdisk10 CLOSE NORMAL 0 0
2 fscsi1/hdisk15 CLOSE NORMAL 0 0
3 fscsi1/hdisk20 CLOSE NORMAL 0 0

DEV#: 4 DEVICE NAME: vpath4 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000000A
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk6 CLOSE NORMAL 0 0
1 fscsi0/hdisk11 CLOSE NORMAL 0 0
2 fscsi1/hdisk16 CLOSE NORMAL 0 0
3 fscsi1/hdisk21 CLOSE NORMAL 0 0

8.4.7 Creating and preparing volumes for use

A volume group, itsoaixvg, is created on vpath1 (VD2_AIX_270), and the file system /itsofs1
is created in it and mounted. See Example 8-12 on page 165.

Example 8-12 ‘AIX_270’ host system new volume group and file system configuration
# lsvg -o
itsoaixvg
rootvg

# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfslog 1 1 1 open/syncd N/A
lv00 jfs 62 62 1 open/syncd /itsofs1

# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 16384 7960 52% 1306 16% /
/dev/hd2 1867776 1265440 33% 20505 5% /usr
/dev/hd9var 16384 11020 33% 433 11% /var
/dev/hd3 32768 29196 11% 57 1% /tmp
/dev/hd1 16384 15820 4% 18 1% /home
/proc - - - - - /proc
/dev/hd10opt 32768 26592 19% 294 4% /opt
/dev/lv00 1015808 403224 61% 19 1% /itsofs1

8.4.8 Expanding an AIX volume
It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Some
operating systems, such as AIX version 5.2 and later, can handle a volume being expanded even
while the host has applications running. The volume group to which the VDisk is assigned, if
any, must not be a concurrently accessible volume group. A VDisk that is defined in a
FlashCopy or Metro Mirror mapping on the SVC cannot be expanded unless the mapping is
removed, which means that the FlashCopy or Metro Mirror relationship on that VDisk has to be
stopped before it is possible to expand the VDisk.

An example of how to expand a volume on an AIX host, where the volume is a VDisk from the
SVC, is shown in Example 8-13.

Example 8-13 Expanding VDisk on AIX

To list a VDisk's size, use the command svcinfo lsvdisk <VDisk_name>. This returns the
following information for the VDisk aix_v1 before it is expanded.

IBM_2145:admin>svcinfo lsvdisk aix_v1


id 7
name aix_v1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG1_DS43
capacity 6.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B200000000000002A
throttling 0
preferred_node_id 12
fast_write_state empty

Here it is shown that the capacity is 6.0 GB, as well as what the vdisk_UID is. To find which
vpath this VDisk corresponds to on the AIX host, we use the SDD command datapath query device
on the AIX host.

# datapath query device

Total Devices : 4

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002A
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 OPEN NORMAL 0 0
1 fscsi0/hdisk7 OPEN NORMAL 44 0
2 fscsi1/hdisk11 OPEN NORMAL 0 0
3 fscsi1/hdisk15 OPEN NORMAL 33 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 600507680189801B200000000000002B
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 10 0
1 fscsi0/hdisk8 OPEN NORMAL 0 0
2 fscsi1/hdisk12 OPEN NORMAL 13 0
3 fscsi1/hdisk16 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002C
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk5 OPEN NORMAL 0 0
1 fscsi0/hdisk9 OPEN NORMAL 40 0
2 fscsi1/hdisk13 OPEN NORMAL 0 0
3 fscsi1/hdisk17 OPEN NORMAL 29 0

DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002D
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk6 OPEN NORMAL 12 0
1 fscsi0/hdisk10 OPEN NORMAL 0 0
2 fscsi1/hdisk14 OPEN NORMAL 10 0
3 fscsi1/hdisk18 OPEN NORMAL 0 0

Here we can see that the VDisk with vdisk_UID 600507680189801B200000000000002A is
vpath0 on the AIX host, because the vdisk_UID matches the SERIAL shown on the AIX host. To
see the size of the volume on the AIX host, we use the lspv command:

# lspv vpath0
PHYSICAL VOLUME: vpath0 VOLUME GROUP: fc_source_vg
PV IDENTIFIER: 000c309d6de458f3 VG IDENTIFIER 000c309d00004c00000001006de474a1
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 1
TOTAL PPs: 383 (6128 megabytes) VG DESCRIPTORS: 2
FREE PPs: 382 (6112 megabytes) HOT SPARE: no
USED PPs: 1 (16 megabytes)
FREE DISTRIBUTION: 77..76..76..76..77
USED DISTRIBUTION: 00..01..00..00..00

This shows that the volume size is 6128 MB, approximately 6 GB. To expand the volume on the
SVC, we use the command svctask expandvdisksize to increase the capacity of the VDisk.
In this example, we expand the VDisk by 1 GB.

IBM_2145:admin>svctask expandvdisksize -size 1 -unit gb aix_v1

To check that the VDisk has been expanded, we use the same procedure as before the
expansion. Here we can see that the VDisk aix_v1 has been expanded to 7 GB in
capacity.

IBM_2145:admin>svcinfo lsvdisk aix_v1

id 7
name aix_v1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG1_DS43
capacity 7.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B200000000000002A
throttling 0
preferred_node_id 12
fast_write_state empty

If we tried to see on the AIX host whether the volume vpath0 had been expanded, we would not
be able to, because AIX has not received any information that the configuration has changed.
To get AIX to see the extra capacity on the volume without stopping any applications, we use
the chvg command as shown here, where fc_source_vg is the name of the volume group that
vpath0 belongs to. If AIX returns nothing, the command was a success and the volume changes
in this volume group have been saved. If AIX cannot see any changes in the volumes, it
returns a message indicating this.

# chvg -g fc_source_vg

To check the size of the vpath0, we use the lspv command again.

# lspv vpath0
PHYSICAL VOLUME: vpath0 VOLUME GROUP: fc_source_vg
PV IDENTIFIER: 000c309d6de458f3 VG IDENTIFIER 000c309d00004c00000001006de474a1
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 1
TOTAL PPs: 447 (7152 megabytes) VG DESCRIPTORS: 2
FREE PPs: 446 (7136 megabytes) HOT SPARE: no
USED PPs: 1 (16 megabytes)
FREE DISTRIBUTION: 89..89..89..89..90
USED DISTRIBUTION: 01..00..00..00..00

Here we can see that the volume now has a size of 7152 MB, approximately 7 GB. After this, we
can expand the file systems in this volume group to use the new capacity.

8.4.9 Removing an SVC volume on AIX


Before we remove a VDisk assigned to an AIX host, we have to make sure that there is no data
on it that we want to preserve, and that no applications depend on the volume. This is a
standard AIX procedure: move all data off the volume, remove the volume from the volume
group, and delete the vpath and the hdisks associated with the vpath. Then remove the
vdiskhostmap on the SVC for that volume and, if the VDisk is no longer needed, delete it so
that its extents become available when we create a new VDisk on the SVC.
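The sequence could look like the following sketch, assuming that the VDisk is no longer
needed. The volume group, vpath, hdisk, and VDisk names here are illustrative; take the real
values from the lsvpcfg and lshostvdiskmap output. On the AIX host:

# reducevg itsoaixvg vpath4
# rmdev -dl vpath4
# rmdev -dl hdisk6; rmdev -dl hdisk11; rmdev -dl hdisk16; rmdev -dl hdisk21

And on the SVC:

IBM_2145:admin>svctask rmvdiskhostmap -host AIX_270 VD5_AIX_270
IBM_2145:admin>svctask rmvdisk VD5_AIX_270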

8.4.10 Running commands on an AIX host system


To issue CLI commands you must install and prepare the SSH client system on the AIX host
system. For AIX 5L™ Power 5.1, 5.2 and 5.3, get OpenSSH from the Bonus Packs. We also
need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power
Systems. For AIX 4.3.3, the software is available from the AIX toolbox for Linux applications.

The AIX installation images from IBM DeveloperWorks are available at the following Web site:
http://oss.software.ibm.com/developerworks/projects/openssh

To generate the key files on AIX issue the following command:


ssh-keygen -t rsa -f filename

The -t flag specifies the type of key to generate: rsa1, rsa2, or dsa. The value to specify
for an rsa2 key is simply rsa; for rsa1, the type needs to be rsa1. When creating the key for
the SVC, use type rsa2. The -f flag specifies the file name, on the AIX server, that the
private and public keys get (the public key gets the extension .pub appended to the file name).
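For example, the following invocation (the file name is illustrative) creates the private key
svckey and the public key svckey.pub in the user's .ssh directory:

ssh-keygen -t rsa -f $HOME/.ssh/svckey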

The public key then has to be installed on the SVC; this can be done by using the master
console. Copy the public key to the master console and install the key on the SVC, as
described in the preceding chapters.

On the AIX server, make sure that the private key and the public key are in the .ssh directory
under the home directory of the user.

To connect to the SVC and use a CLI session from the AIX host, issue the following
command:
ssh -l admin -i filename svc

We can also issue the commands directly on the AIX host, and this is useful when making
scripts. To do this, add the SVC commands to the previous command, for example to list the
hosts defined on the SVC enter the following command:
ssh -l admin -i filename svc svcinfo lshost

Here, -l admin is the user on the SVC that we connect to, -i filename is the file name of the
private key that was generated, and svc is the name or IP address of the SVC; after that follows
the SVC command svcinfo lshost.
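A minimal scripting sketch could look like the following (the key file name and cluster name
are illustrative):

#!/bin/ksh
# Illustrative values: replace with your own key file and cluster name or IP address
KEY=$HOME/.ssh/svckey
SVC=svccluster
# List the hosts and the VDisks defined on the SVC
ssh -l admin -i $KEY $SVC svcinfo lshost
ssh -l admin -i $KEY $SVC svcinfo lsvdisk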

8.5 Windows-specific information


The following sections detail specific information about the connection of Windows-based
hosts to the SVC environment.

8.5.1 Configuring Windows 2000 and Windows 2003 hosts


Follow these steps:
1. Install the HBA or HBAs on the Windows 2000/2003 server.
2. Install and configure SDD/MPIO.
3. Shut down the Windows 2000/2003 host system.

4. Configure the switches (zoning) if needed.
5. Connect the Windows 2000/2003 server FC Host adapters to the switches.
6. Restart the Windows 2000/2003 host system.
7. Configure the host, VDisks, and host mapping in the SVC.
8. Use Rescan disk in Computer Management of the Windows 2000/2003 server to discover
the VDisks created on the SAN Volume Controller.

8.5.2 Support information


This section details where to obtain various types of support information.

Operating system versions and maintenance levels


At the time of writing, the versions of Windows listed in Table 8-3 are supported. See the
following Web site for the latest information:
http://www.storage.ibm.com/support/2145

Table 8-3 Versions of Windows supported with SVC at the time of this writing

Operating system level                                  Machine level

Windows NT4 Enterprise Server                           Service Pack 6a

Windows 2000 Server and Advanced Server                 Service Pack 4

Windows 2003 Server (Standard and Enterprise Edition)

Supported host adapters


See the supported hardware list for the latest information about supported HBAs and driver
levels for Windows:
http://www.storage.ibm.com/support/2145

8.5.3 Host adapter installation and configuration
Refer to the manufacturer’s instructions for installation and configuration of the HBAs. We can
check that the FC adapter is installed as shown in Figure 8-3.

Figure 8-3 QLogic FC Host Adapter

8.5.4 SDD installation on Windows


At the time of writing, SDD V1.5.1.1 for Windows NT 4, and V1.5.1.3 for Windows 2000 and
Windows 2003, are supported. See the following Web site for the latest information about SDD
for Windows:
http://www.storage.ibm.com/support/2145

After downloading the appropriate version of SDD from this Web site, run setup to install
SDD. Answer Yes to the Windows Digital Signature prompt and Yes to reboot the system.

We can check that the installation of SDD is complete. From the Windows desktop, click Start
-> Programs -> Subsystem Device Driver -> readme.

8.5.5 Windows 2003 and MPIO


Microsoft Multi Path Input Output (MPIO) solutions are designed to work in conjunction with
device-specific modules (DSMs) written by vendors; the MPIO driver package does not, by
itself, form a complete solution. This joint solution allows the storage vendors to design
device-specific solutions that are tightly integrated with the Windows operating system.

MPIO drivers: MPIO is not shipped with the Windows operating system; storage vendors must
package the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is
IBM's multipath I/O solution based on Microsoft MPIO technology. It is a device-specific
module specifically designed to support IBM storage devices on Windows 2003 servers.

The intention of MPIO is tighter integration of the multipath storage solution with the
operating system, and it makes it possible to support SAN boot using multiple paths in the
SAN infrastructure.

8.5.6 Discovering the assigned VDisk
Before adding a new volume from the SAN Volume Controller, the Windows 2003 host
system had the configuration shown in Figure 8-4, with only local disks.

Figure 8-4 Windows 2003 host system before adding a new volume from SVC

The configuration of the host “W2K3_2”, the VDisk “W2K3_2_1”, and the mapping between
the host and the VDisk are defined in the SAN Volume Controller as described in
Example 8-14.

We can check that the WWPN is logged into the SAN Volume Controller for the host
“W2K3_2” by entering the following command:
svcinfo lshost W2K3_2

We can also find the serial number of the VDisks by entering the following command:
svcinfo lshostvdiskmap

Example 8-14 SVC configuration for Windows 2000/2003 host system


IBM_2145:admin>svcinfo lshost W2K3_2
id 1
name W2K3_2
port_count 2
type generic
WWPN 210000E08B0E2A8C
node_logged_in_count 2
WWPN 210000E08B0E278C
node_logged_in_count 2

IBM_2145:admin>svcinfo lshostvdiskmap W2K3_2


id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
1 / W2K3_2 / 0 / 20 / W2K3_2_1 / 210000E08B0E2A8C / 600507680189801B2000000000000019
1 / W2K3_2 / 1 / 21 / W2K3_2_2 / 210000E08B0E2A8C / 600507680189801B200000000000001A
1 / W2K3_2 / 2 / 22 / W2K3_2_3 / 210000E08B0E2A8C / 600507680189801B200000000000001B

IBM_2145:admin>svcinfo lsvdisk W2K3_2_1

id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 6.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 12
fast_write_state empty

IBM_2145:admin>svcinfo lsvdiskhostmap W2K3_2_1


id / name / SCSI_id / host_id / host_name / wwpn / vdisk_UID
20 / W2K3_2_1 / 0 / 1 / W2K3_2 / 210000E08B0E2A8C / 600507680189801B2000000000000019
20 / W2K3_2_1 / 0 / 1 / W2K3_2 / 210000E08B0E278C / 600507680189801B2000000000000019

After the disk rescan completes in the Computer Management window, the new volume
is found and assigned the drive letter W:, as shown in Figure 8-5.

Figure 8-5 Windows 2003 host system with three new volumes from SVC

The volume is identified as an IBM 2145 SCSI Disk Device. The number of IBM 2145 SCSI
Disk Devices that you see is equal to:

(# of VDisks) x (# of paths per IO group per HBA) x (# of HBAs)

This is shown in Figure 8-6 on page 174. This corresponds to the number of paths between
the Windows 2003 host system and the SAN Volume Controller. However, we see one IBM
2145 SDD Disk Device per VDisk.

When following the SAN zoning recommendation, this gives us, for one VDisk and a host with
two HBAs: (# of VDisks) x (# of paths per IO group per HBA) x (# of HBAs) = 1 x 2 x 2 = 4
paths.

Figure 8-6 Number of devices found related to the number of paths

8.5.7 Expanding a Windows 2000/2003 volume

It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Some
operating systems, such as Windows 2000 and Windows 2003, can handle a volume being
expanded even while the host has applications running. A VDisk that is defined in a
FlashCopy or Metro Mirror mapping on the SVC cannot be expanded unless the mapping is
removed, which means that the FlashCopy or Metro Mirror relationship on that VDisk has to be
stopped before it is possible to expand the VDisk.

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all nodes except one, and that applications in the resource that uses the volume to be
expanded are stopped, before expanding the volume. Applications running in other
resources can continue. After expanding the volume, start the application and the resource,
and then restart the other nodes in the MSCS.

To expand a volume while it is in use on Windows 2000 and Windows 2003, we used Diskpart.
The Diskpart tool is part of Windows 2003; for other Windows versions, you can download it
free of charge from Microsoft. Diskpart is a tool developed by Microsoft to ease the
administration of storage. It is a command-line interface through which you can manage disks,
partitions, and volumes by using scripts or direct input on the command line. You can list
disks and volumes, select them and then get more detailed information, create partitions,
extend volumes, and more. For more information, see the Microsoft Web site:
http://www.microsoft.com

An example of how to expand a volume on a Windows 2003 host, where the volume is a
VDisk from the SVC, is shown in Example 8-15 on page 176.

Example 8-15 Expanding a VDisk attached to Windows 2003

To list a VDisk's size, use the command svcinfo lsvdisk <VDisk_name>. This returns the
following information for the VDisk W2K3_2_1 before it is expanded.

IBM_2145:admin>svcinfo lsvdisk W2K3_2_1


id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 6.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0
preferred_node_id 12
fast_write_state empty

Here we can see the capacity, 6 GB, and the vdisk_UID. To find which disk this VDisk
corresponds to on the Windows 2003 host, we use the SDD command datapath query device
on the Windows host. To open a command window for SDD, from your desktop, click Start ->
Programs -> Subsystem Device Driver -> Subsystem Device Driver Management.

C:\Program Files\IBM\Subsystem Device Driver>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 6626 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 6641 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B200000000000001A
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 12 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 15 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B200000000000001B

============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 13 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 14 0

Here we can see that the VDisk with vdisk_UID 600507680189801B2000000000000019 is
Disk1 on the Windows host, because the vdisk_UID matches the SERIAL shown on the Windows
host. To see the size of the volume on the Windows host, we use Disk Manager, as shown in
Figure 8-7 and Figure 8-8 on page 178.

Figure 8-7 Volume size before expansion on Windows 2003, disk manager view

Figure 8-8 Volume size before expansion on Windows 2003, disk properties view

This shows that the volume size is 5.99 GB, approximately 6 GB. To expand the volume on the
SVC, we use the command svctask expandvdisksize to increase the capacity of the VDisk. In
this example, we expand the VDisk by 1 GB.

IBM_2145:admin>svctask expandvdisksize -size 1 -unit gb W2K3_2_1

To check that the VDisk has been expanded we use the same procedure as before. Here we
can see that the VDisk W2K3_2_1 has been expanded to 7GB in capacity.

IBM_2145:admin>svcinfo lsvdisk W2K3_2_1


id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 7.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0
preferred_node_id 12

fast_write_state empty

In Disk Management on the Windows host, we have to perform a rescan for the disks, after
which the new capacity is shown for the disk, as shown in Figure 8-9.

Figure 8-9 Expanded volume in disk manager

This shows that Disk1 now has 1020 MB of unallocated new capacity. To make this capacity
available to the file system, use the diskpart command at a command prompt.

C:\>diskpart

Microsoft DiskPart version 5.2.3790


Copyright (C) 1999-2001 Microsoft Corporation.
On computer: NPSRV3

DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 D CD-ROM 0 B Healthy
* Volume 1 C W2K3 NTFS Partition 19 GB Healthy System
Volume 2 W VDisk NTFS Partition 6142 MB Healthy

This gives a view of all the volumes in the Windows 2003 host, select the disk labeled VDisk.

DISKPART> select volume 2

Volume 2 is the selected volume.

DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 D CD-ROM 0 B Healthy
Volume 1 C W2K3 NTFS Partition 19 GB Healthy System
* Volume 2 W VDisk NTFS Partition 6142 MB Healthy

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 7162 MB 1020 MB

This shows detailed information about the disk, including the unallocated capacity. The size
shown is the total size of the volume, including the unallocated capacity of 1020 MB. To make
the capacity available to the file system, expand the volume by issuing the extend command.

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 7162 MB 0 B

Here we can see that there is no free capacity on the volume anymore.

DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 D CD-ROM 0 B Healthy
Volume 1 C W2K3 NTFS Partition 19 GB Healthy System
Volume 2 W VDisk NTFS Partition 7162 MB Healthy

The list volume command now shows that the file system's new size is 7162 MB. The
result of the expansion is shown in Figure 8-10 on page 181 and Figure 8-11 on page 181.

Figure 8-10 Disk manager after expansion of Disk1

Figure 8-11 The new capacity of Disk1

Chapter 8. Host configuration 181


8.5.8 Removing a disk on Windows
When we want to remove a disk from Windows, and the disk is an SVC VDisk, we need to
follow the standard Windows procedure to make sure that there is no data that we want to
preserve on the disk, that no applications are using the disk, and that no I/O is going to the
disk. After this, we remove the VDisk mapping on the SVC. We need to make sure that we are
removing the correct VDisk; to check this, we use SDD to find the serial number of the disk,
and on the SVC we use lshostvdiskmap to find the VDisk name and number. We also check
that the SDD serial number on the host matches the UID of the VDisk on the SVC.

When the VDisk mapping is removed, we do a rescan for the disks. Disk Management on the
server removes the disk, and the vpath goes into the close/offline state on the server. We
can check this by using the SDD command datapath query device, but the closed vpath is
removed only after a reboot of the server. Example 8-16 shows how we can remove an SVC
VDisk from a Windows server.

Example 8-16 Removing SVC disk from Windows server

Figure 8-12 The Disk Manager before removing the disk

In the example, we remove Disk 2 (E:). To find the correct VDisk information, we find the
serial/UID number by using SDD.
C:\Program Files\IBM\Subsystem Device Driver>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0

1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 18510 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 18555 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000037
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 3793 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 3762 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

Knowing the serial/UID of the VDisk and the host name W2K3_2, we find the VDisk mapping to
remove by using the lshostvdiskmap command on the SVC, and then we remove the actual VDisk
mapping.
IBM_2145:admin>svcinfo lshostvdiskmap W2K3_2
id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
1 / W2K3_2 / 0 / 20 / W2K3_2_1 / 210000E08B0E2A8C / 600507680189801B2000000000000019
1 / W2K3_2 / 1 / 24 / W2K3_2_tgt / 210000E08B0E2A8C / 600507680189801B2000000000000037
IBM_2145:admin>svctask rmvdiskhostmap -host W2K3_2 W2K3_2_tgt
IBM_2145:admin>svcinfo lshostvdiskmap W2K3_2
id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
1 / W2K3_2 / 0 / 20 / W2K3_2_1 / 210000E08B0E2A8C / 600507680189801B2000000000000019

Here we can see that the VDisk has been removed from the mapping. On the server, we then do
a disk rescan in Disk Management, and we can now see that the correct disk (Disk2) has been
removed, as shown in Figure 8-13. SDD also shows us that the state of Disk2 has changed to
close/offline.

Figure 8-13 Disk Manager showing the remaining disks

C:\Program Files\IBM\Subsystem Device Driver>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 18522 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 18560 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000037
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 CLOSE OFFLINE 3799 0
1 Scsi Port2 Bus0/Disk2 Part0 CLOSE OFFLINE 0 0
2 Scsi Port3 Bus0/Disk2 Part0 CLOSE OFFLINE 3770 0
3 Scsi Port3 Bus0/Disk2 Part0 CLOSE OFFLINE 0 0

The disk Disk2 is now removed from the server. However, to remove the SDD information for
the disk, we need to reboot the server; this can wait until a more suitable time.

8.5.9 Using SDD


To open a command window for SDD, from the desktop, click Start -> Programs ->
Subsystem Device Driver -> Subsystem Device Driver Management.

We can use the SDD-specific commands explained in the Subsystem Device Driver User's
Guide, SC26-7637, as shown in Figure 8-14.

Figure 8-14 Datapath query commands

Or we can open our browser with the host’s IP address as shown in Figure 8-15 on page 185.

Before this can work, we need to configure SDD to activate the Web interface. In SDD 1.5.0.x
(or earlier), sddsrv by default was bound to a TCP/IP port and listening for incoming requests.

In SDD 1.5.1.x (or later), sddsrv does not bind to any TCP/IP port by default, but allows port
binding to be dynamically enabled or disabled. For all platforms except Linux, the SDD
package ships a template file of sddsrv.conf that is named sample_sddsrv.conf. On all UNIX
platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On
Windows platforms, the sample_sddsrv.conf file is in the directory where SDD is installed.
You must use the sample_sddsrv.conf file to create the sddsrv.conf file in the same directory
as sample_sddsrv.conf by simply copying it and naming the copied file sddsrv.conf. You can
then dynamically change port binding by modifying the parameters in sddsrv.conf.

Figure 8-15 SDD query information using Web browser at <Win2k_1 ip add>:20001
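On a Windows host, the copy described above could be done as in the following sketch,
assuming SDD was installed in its default directory:

C:\>cd "C:\Program Files\IBM\Subsystem Device Driver"
C:\Program Files\IBM\Subsystem Device Driver>copy sample_sddsrv.conf sddsrv.conf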

8.6 Microsoft Volume Shadow Copy Service


The SVC provides support for the Microsoft Volume Shadow Copy Service (VSS).

To enable support for VSS, you must install IBM TotalStorage support for Microsoft Volume
Shadow Copy Service (the IBM VSS Provider).

For more information about the Microsoft Volume Shadow Copy Service, refer to:
http://www.microsoft.com

8.6.1 How it works


The IBM VSS Provider interfaces with the Microsoft Volume Shadow Copy Service and with the
CIM agent on the master console, thereby seamlessly integrating FlashCopy on the SVC with
an initiator, such as a backup application, that issues a snapshot command; the backup is
subsequently performed from the FlashCopy target VDisk, creating a point-in-time (PiT)
data-consistent backup.

The Microsoft Volume Shadow Copy Service quiesces the applications and flushes the OS
buffers before triggering the FlashCopy on the SVC; when this is done, it releases the
quiesce and application I/O is resumed.

To enable SVC support for VSS, we install the IBM TotalStorage hardware provider on the
Windows host.

8.6.2 Installing the IBM TotalStorage hardware provider
IBM TotalStorage support for Microsoft Volume Shadow Copy Service can be downloaded
from
http://www.ibm.com

When installing IBM TotalStorage support for Microsoft Volume Shadow Copy Service follow
the instructions in the configuration guide.

In the following section, we go through the steps that you must perform before you can use
the IBM VSS Provider:
•  On the master console:
   – You must create a superuser for the IBM VSS Provider (if no superuser is created, the
     default superuser for the master console must be used).
•  When installing the IBM VSS Provider:
   – You must define the IP address of the master console, and the superuser and password
     to be used by the IBM VSS Provider.
   – You must enable access from the Windows host to the truststore file from the master
     console (we recommend copying it to the Windows host).
•  On the SVC:
   – You must create two hosts (VSS_FREE and VSS_RESERVED) on the SVC, to be used
     by the IBM VSS Provider.
   – You must create VDisks equal in size to the potential FlashCopy source VDisks and
     map them to the host VSS_FREE.

On the Master console


During the installation, you are prompted for the IP address of the master console (CIM host)
and a valid user name with superuser credentials. In our setup, we create a user on the master
console for the IBM VSS Provider called vssuser. As shown in Figure 8-16 on page 187, we
select Add a Superuser and click Go.

Figure 8-16 Add a Superuser.

As shown in Figure 8-17 on page 188, we define the username vssuser, and the password,
and click OK to create the user.

Figure 8-17 Defining the Superuser name and password.

When the user has been created, we are returned to the user list as shown in Figure 8-18 on
page 189, and vssuser is ready to be used by the IBM VSS Provider.

Figure 8-18 Viewing users

On the Windows host


Following is a subset of the windows displayed when installing the IBM VSS Provider.

During the install of the IBM VSS Provider we are prompted for the IP address of the Master
console (CIM Host) and the previously created vssuser (CIM User). Then we click Next as
shown in Figure 8-19 on page 190.

Figure 8-19 Defining the CIM Host and User to be used by the IBM VSS Provider.

As shown in Figure 8-20 on page 191, we enter the password for the CIM agent user vssuser
and click Next to proceed.

Figure 8-20 Entering the password for the CIM agent user (vssuser).

After selecting whether to use SSL or not when connecting to the CIM agent (window not
shown), we must enter the location where we have copied the truststore file and the
password as shown in Figure 8-21 on page 192.

Figure 8-21 Entering the path and password to the truststore file.

On the SVC
As shown in Example 8-17, we create the hosts to which we are going to map the VDisks to be
used by the IBM VSS Provider, using the CLI (this can also be done using the GUI).

Example 8-17 Creating the hosts VSS_FREE and VSS_RESERVED


IBM_2145:admin>svctask mkhost -hbawwpn 5000000000000000 -force -name VSS_FREE
Host id [1] successfully created
IBM_2145:admin>svctask mkhost -hbawwpn 5000000000000001 -force -name VSS_RESERVED
Host id [2] successfully created

In Example 8-18, we map the target VDisks to the VSS_FREE host.

Example 8-18 Mapping the target VDisks


IBM_2145:admin>svctask mkvdiskhostmap -host VSS_FREE VDisk1T
Virtual Disk to Host map, id [5], successfully created
IBM_2145:admin>svctask mkvdiskhostmap -host VSS_FREE VDisk2T
Virtual Disk to Host map, id [6], successfully created

8.6.3 Using VSS


In the following scenario, we use NTBackup to initiate a FlashCopy.

Since VSS activation is the default for NTBackup, we select the object that we want to back up
and click the Start Backup button. As shown in Figure 8-22 on page 193, the Backup Job
Information box pops up.

Figure 8-22 Selecting the object to backup and reviewing the Backup Job Information.

After verifying the backup job settings, we click Start Backup to start the backup job, which
then prepares the activation of VSS, as shown in Figure 8-23 on page 194.

Figure 8-23 Preparing backup using shadow copy.

Initiated by NTBackup, VSS checks for available VSS providers and triggers the FlashCopy
through its interface with the IBM VSS Provider. Figure 8-24 on page 195 lists the FlashCopy
mapping created by the VSS provider.

Figure 8-24 Viewing FlashCopy mapping created by the IBM VSS provider.

After the FlashCopy mapping has been triggered, NTBackup backs up the selected objects
from the FlashCopy target, facilitated by the IBM VSS Provider. As shown in Figure 8-25 on
page 196, the backup completes.

Figure 8-25 Backup completed

When the backup has completed, the initiating NTBackup signals its completion, and because
the FlashCopy target is no longer needed, the IBM VSS Provider deletes the FlashCopy
mapping. As shown in Figure 8-26 on page 197, the FlashCopy mapping has been removed.

Figure 8-26 Viewing FlashCopy mapping has been removed upon completion of the backup.

8.6.4 Running a command line (CLI) on a Windows host system


To issue CLI commands, we must install and prepare the SSH client system on the Windows
host system.

We can install the PuTTY SSH client software on a Windows host using the PuTTY
Installation program. This is in the SSHClient\PuTTY directory of the SAN Volume Controller
Console CD-ROM. Or, you can download PuTTY from the following Web site:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

The following Web site offers SSH client alternatives for Windows:
http://www.openssh.com/windows.html

Cygwin software has an option to install an OpenSSH client. You can download Cygwin from
the following Web site:
http://www.cygwin.com/
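With PuTTY installed, CLI commands can also be issued from a Windows command prompt through
plink, the PuTTY command-line connection tool. The following is a sketch; the private key
file name and cluster name are illustrative:

C:\>plink -l admin -i privatekey.ppk svccluster svcinfo lshost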

8.7 Linux (on Intel) specific information


The following section details specific information pertaining to the connection of Linux on
Intel-based hosts to the SVC environment.

8.7.1 Configuring the Linux host


Follow these steps to configure the Linux host:
1. Install the HBA or HBAs on the Linux server.

2. Install the Kernel.
3. Configure the switches (zoning) if needed.
4. Connect the Linux server FC Host adapters to the switches.
5. Install SDD for Linux.
6. Configure the host, VDisks, and host mapping in the SAN Volume Controller.
7. Reboot the Linux server to discover the VDisks created on SVC.

8.7.2 Support information


For the latest support information, consult the IBM TotalStorage SAN Volume Controller Web
site at:
http://www.storage.ibm.com/support/2145

For version 1.2.1 of the SVC, the following support information was available at the time of
writing:
•  Operating system versions and maintenance levels:
   – Red Hat EL Advanced Server 2.1 with kernel 2.4.9-e.43 EL
   – Red Hat EL Advanced Server 3.0 with kernel 2.4.21-15.0.3 EL
   – SLES 8 United Linux 4 with kernel 2.4.21-231
•  SDD:
   – Version 1.5.1.1-14 of SDD for Linux is supported
•  Supported host adapters for Red Hat EL Advanced Server:
   – QLA2310F, QLA2310FL, QLA2340, QLA2340L, QLA2342, and QLA2342L with IBM
     driver 7.01.01 and BIOS 1.43
•  Supported host adapters for SLES 8 United Linux:
   – QLA2340, QLA2340L, QLA2342, and QLA2342L with IBM driver 6.06.64 and BIOS
     1.35

8.7.3 Host adapter configuration settings


See the IBM TotalStorage Virtualization SAN Volume Controller: Host Attachment Guide,
SC26-7563 for detailed information.

8.7.4 Discovering the assigned VDisk


The cat /proc/scsi/scsi command shows the devices that the SCSI driver has probed. In
our configuration, we can see two VDisks per SCSI adapter as shown in Figure 8-27 on
page 199.

Figure 8-27 proc/scsi/scsi

8.7.5 Using SDD on Linux


For installing and configuring SDD for Linux refer to the Subsystem Device Driver User's
Guide, SC26-7637.

To install the Subsystem Device Driver, first make sure that the RPM file is executable, then
install the package with the rpm -iv IBMsdd-1.4.0.0-4.i686.rh7.rpm command.

Issue the command shown in Figure 8-28 to make the RPM executable:

chmod 755 IBMsdd-1.4.0.0-4.i686.rh7.rpm

Figure 8-28 install SDD rpm

SDD is installed to the /opt/IBMsdd/bin directory as shown in Figure 8-29 on page 200.

Figure 8-29 opt SDD directory

To manually load and configure SDD on Linux, use the following steps. To match the SDD
module to the level of the kernel, enter the following command, as shown in Figure 8-30:
insmod ./sdd-mod.o-2.4.9-e.16enterprise

Figure 8-30 insmod

Issue the cfgvpath query to view the name and serial number of the VDisk configured in the
SAN Volume Controller as shown in Figure 8-31.

Figure 8-31 cfgvpath query

The cfgvpath command configures the SDD vpath devices. The configuration information is
saved by default to the file /etc/vpath.conf. You can save the configuration information to a
specified file name by entering the following command; the resulting output and the vpath
configuration file are shown in Figure 8-32 on page 201:
cfgvpath -f file_name.cfg

Figure 8-32 cfgvpath

Issue the chkconfig --level command to enable SDD to run at system startup:
chkconfig --level 5 sdd on

To verify the setting, enter the following command:


chkconfig --list sdd

This is shown in Figure 8-33.

Figure 8-33 SDD run level

If necessary, we can disable the startup option by entering:


insserv -r sdd

Run the datapath query commands to display the online adapters and the paths to the
adapters. Notice that the preferred paths, paths 0 and 2, are used from one of the nodes.
Paths 1 and 3 connect to the other node and are used as alternate or backup paths for high
availability. This is shown in Figure 8-34 on page 202.

SDD has three different path-selection policy algorithms.


1. Failover only (fo)
– All I/O operations for the device are sent to the same (preferred) path unless the path
fails because of I/O errors. Then an alternate path is chosen for subsequent I/O
operations.
2. Load balancing (lb)
– The path to use for an I/O operation is chosen by estimating the load on the adapter to
which each path is attached. The load is a function of the number of I/O operations
currently in process. If multiple paths have the same load, a path is chosen at random
from those paths. Load-balancing mode also incorporates failover protection. The
load-balancing policy is also known as the optimized policy.



3. Round robin (rr)
– The path to use for each I/O operation is chosen at random from paths that were not
used for the last I/O operation. If a device has only two paths, SDD alternates between
the two.

You can dynamically change the SDD path-selection policy algorithm by using the SDD
command datapath set device policy.
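
For example, to switch device 0 to the round robin policy (the device number is illustrative;
use datapath query device to find yours):

datapath set device 0 policy rr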

Figure 8-34 The datapath query

The datapath query device command shows which SDD path-selection policy is active on a
device. Figure 8-34 shows that the active policy is Optimized, which means that the active
SDD path-selection policy algorithm is load balancing.

Example 8-19 shows the VDisk information from the SVC command line.

Example 8-19 svcinfo redhat1


IBM_2145:admin>svcinfo lshost redhat1
id 2
name redhat1
port_count 2
type generic
WWPN 210000E08B091541
node_logged_in_count 2
WWPN 210100E08B291541
node_logged_in_count 2

IBM_2145:admin>svcinfo lshostvdiskmap -delim : redhat1


id:name:SCSI_id:vDisk_id:vDisk_name:wwpn:vDisk_UID
2:redhat1:0:5:redhatdisk1:210000E08B091541:600507680182001B2000000000000007

IBM_2145:admin>svcinfo lsvdisk redhatdisk1


id 5
name redhatdisk1
IO_group_id 0
IO_group_name ITSO_grp0
status online
mDisk_grp_id 0



mDisk_grp_name ARR36P5N
capacity 300.0MB
type striped
formatted no
mDisk_id
mDisk_name
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 0

8.7.6 Creating and preparing volumes for use


Follow these steps (a consolidated sketch of the equivalent commands follows the figures):
1. Create a partition on the vpath device as shown in Figure 8-35.

Figure 8-35 fdisk



2. Create a file system on the vpath as shown in Figure 8-36.

Figure 8-36 mkfs

3. Create the mount point and mount the vpath drive as shown in Figure 8-37.

Figure 8-37 mountpoint
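
Because Figures 8-35 through 8-37 are screen captures, here is a minimal sketch of the
equivalent commands, assuming the vpath device is /dev/vpatha and the mount point is
/mnt/itsodisk (both names are illustrative):

fdisk /dev/vpatha                  # create a partition, for example /dev/vpatha1
mkfs -t ext3 /dev/vpatha1          # create a file system on the new partition
mkdir /mnt/itsodisk                # create the mount point
mount /dev/vpatha1 /mnt/itsodisk   # mount the vpath drive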

8.8 Sun Solaris support information


For the latest information about Sun Solaris support, see:
http://www.storage.ibm.com/support/2145



8.8.1 Operating system versions and maintenance levels
At the time of writing, Sun Solaris 8 (V5.8) and Sun Solaris 9 (V5.9) are supported in 64 bit
only.

8.8.2 Multipath solutions supported


At the time of writing, SDD for Solaris is supported, and the current recommended version is
SDD 1.5.1.5, in 64 bit only.

On Sun Solaris 9, Veritas Volume Manager 3.5 MP3 is tested with Native DMP (ASL).

The maximum number of VDisks supported by SDD on Sun Solaris 8 and 9 is 512, with a
maximum of four paths to each VDisk.

The maximum number of VDisks supported by DMP on Sun Solaris 9 is 512, with a maximum
of four paths to each VDisk.

8.8.3 SDD dynamic pathing


Solaris supports dynamic pathing when you either add more paths to an existing VDisk, or
present a new VDisk to a host. No user intervention is required. SDD is aware of the
preferred paths that SVC sets per VDisk. SDD uses a round robin algorithm when failing
over paths; that is, it tries the next known preferred path. If this fails and all preferred paths
have been tried, it round robins on the non-preferred paths until it finds a path that is
available. If all paths are unavailable, the VDisk goes offline. It can therefore take some time
to perform path failover when multiple paths go offline.

SDD under Solaris performs load balancing across the preferred paths where appropriate.

Veritas Volume Manager with DMP Dynamic Pathing


Veritas VM with DMP automatically selects the next available I/O path for I/O requests
dynamically without action from the administrator. VM with DMP is also informed when you
repair or restore a connection, and when you add or remove devices after the system has
been fully booted (provided that the operating system recognizes the devices correctly). The
new JNI drivers support the mapping of new vdisks without rebooting the Solaris host.

Veritas VM with DMP does not support preferred pathing with SVC.

Veritas VM with DMP does support load balancing across multiple paths with SVC.

Co-existence with SDD and Veritas VM with DMP


Veritas Volume Manager with DMP will coexist in "pass-thru" mode with SDD. This means
that DMP will use the vpath devices provided by SDD.

SAN Boot Support


Boot from SAN is supported under Solaris 9 running Veritas Volume Manager/DMP.

Boot from SAN is not supported when SDD is used as the multi-pathing software.

8.9 HP-UX support information


For the latest information about HP-UX support, refer to:
http://www.storage.ibm.com/support/2145



Operating system versions and maintenance levels
At the time of writing, HP-UX V11i v1 for PA-RISC is supported.

Multipath solutions supported


At the time of writing, SDD for HP-UX is supported, and the recommended version is SDD
1.5.1.3, in 64-bit mode only. Dynamic expansion and shrinking of VDisks mapped to HP-UX
hosts is not supported.

The maximum number of VDisks supported by SDD on HP-UX is 512, with a maximum of 4
paths to each VDisk.

SDD dynamic pathing


HP-UX supports dynamic pathing when you either add more paths to an existing VDisk, or
present a new VDisk to a host.

SDD is aware of the preferred paths that SVC sets per VDisk. SDD uses a round robin
algorithm when failing over paths; that is, it tries the next known preferred path. If this fails
and all preferred paths have been tried, it round robins on the non-preferred paths until it
finds a path that is available. If all paths are unavailable, the VDisk goes offline. It can
therefore take some time to perform path failover when multiple paths go offline.

SDD under HP-UX performs load balancing across the preferred paths where appropriate.

PVLinks (Physical Volume Links) Dynamic Pathing


PVLinks is not aware of the preferred paths that SVC sets per VDisk. PVLinks uses a
round robin algorithm when failing over paths; that is, it tries the next known path. The
preferred pathing is ignored.

PVLinks does not load balance across the paths available to a given VDisk.

SAN Boot Support


SAN Boot is supported on HP-UX by using PVLinks as the multi-pathing software on the boot
device. PVLinks or SDD can be used to provide the multi-pathing support for the other
devices attached to the system.

8.10 More information


For more information about host attachment and configuration to the SVC refer to the IBM
TotalStorage SAN Volume Controller: Host Attachment Guide, SC26-7563.

For more information about SDD configuration refer to the IBM TotalStorage Subsystem
Device Driver User’s Guide, SC26-7637.



Chapter 9. SVC configuration and administration using the CLI
This chapter describes how to use the command line interface (CLI) to perform additional and
advanced configuration and administration tasks that were not covered in Chapter 6,
“Quickstart configuration using the CLI” on page 107. It also discusses the backup and
recovery function that was announced in October 2003.

9.1 Managing the cluster
This section details the various configuration and administration tasks that you can perform
on the cluster.

You must issue all of the following commands from a secure SSH command line. To launch
your PuTTY command line, follow these steps:
1. Open the PuTTY application. From your master console desktop, select Start ->
Programs -> PuTTY.
2. On the main screen (Figure 9-1), select the session you created and saved in 5.4.2,
“Configuring the PuTTY session for the CLI” on page 92 (for example, SVC), and click
Load. Then click Open to begin your session.
3. At the Login as: prompt, type admin and press Enter.

Figure 9-1 Starting PuTTY

Command syntax
Two major command sets are available. The svcinfo command set allows us to query the
various components within the IBM TotalStorage SAN Volume Controller (SVC) environment.
The svctask command set allows us to make changes to the various components within the
SVC.

When the command syntax is shown, we see some parameters in square brackets, for
example, [parameter]. This indicates that the parameter is optional in most, if not all,
instances. Anything that is not in square brackets is required information. We can view the
syntax of a command by entering one of the following commands:
• svcinfo -?: Shows a complete list of information commands
• svctask -?: Shows a complete list of task commands
• svcinfo commandname -?: Shows the syntax of information commands
• svctask commandname -?: Shows the syntax of task commands



• svcinfo commandname -filtervalue?: Shows what filters you can use to reduce the output
of the information commands

Note: We can also use -h instead of -?, for example, svcinfo -h or svctask commandname
-h.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue, as stated
above.

Tip: You can use the up and down keys on your keyboard to recall commands recently
issued. Then, you can use the left and right, backspace, and delete keys to edit commands
before you resubmit them.

9.1.1 Organizing on-screen content


Sometimes the output of a command can be long and difficult to read on screen. In cases
where we need information about a subset of the total number of available items, we can use
filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by an svcinfo command, we can specify a number of
filters depending on which svcinfo command we are running. To see which filters are
available, type the command followed by the -filtervalue? flag to see such output as that
which is shown in Example 9-1.

Example 9-1 svcinfo lsvdisk -filtervalue? command


IBM_2145:admin>svcinfo lsvdisk -filtervalue?

Filters for this view are:


name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id

When we know the filters, we can be more selective in generating output.


• Multiple filters can be combined to create specific searches.
• We can use an * as a wildcard when using names.
• When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.

For instance, if we issue the svcinfo lsvdisk command with no filters, we see the output as
shown in Example 9-2 on page 210.



Example 9-2 svcinfo lsvdisk command: No filters
IBM_2145:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
2:VD0_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
3:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
4:VD2_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
5:VD3_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
6:VD4_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
7:VD5_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
8:VD0_UNIX1:0:io_grp0:online:1:MDG0_DS43:2.0GB:striped::::
9:VDisk1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
10:VDisk1T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
11:VDisk2:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
12:VDisk2T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::

Tip: The -delim : parameter collapses the on-screen content and separates data fields
with colons, as opposed to wrapping text over multiple lines.

If we now add a filter to our svcinfo command (such as FC_name), we can reduce the output
dramatically as shown in Example 9-3.

Example 9-3 svcinfo lsvdisk command: With filter


IBM_2145:admin>svcinfo lsvdisk -filtervalue 'FC_name=FCMap3' -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::

IBM_2145:admin>svcinfo lsvdisk -filtervalue 'name=VDisk*' -delim :


id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
9:VDisk1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
10:VDisk1T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
11:VDisk2:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
12:VDisk2T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::

The first command shows all Virtual Disks (VDisks) with FC_name FCMap3 in the SVC
environment. The second command shows all VDisks with names starting with VDisk. The
wildcard * can be used when names are used.
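
For instance, based on the unit rule above, a capacity filter looks like this (a sketch; the
value is illustrative):

svcinfo lsvdisk -filtervalue capacity=10 -u gb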

We are now ready to continue.

9.1.2 Viewing cluster properties


Use the svcinfo lscluster command to display summary information about all clusters
visible to the SVC. To display more detailed information about a specific cluster, run the
command again and append the cluster name parameter (for example, SVC1). Both of these
commands are shown in Example 9-4 on page 211.



Example 9-4 svcinfo lscluster command
IBM_2145:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:id_ali
as
00000200626006C8:SVC1:local:::9.42.164.155:9.42.164.156:00000200626006C8

IBM_2145:admin>svcinfo lscluster SVC1


id 00000200626006C8
name SVC1
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.156
total_mdisk_capacity 2914.2GB
space_in_mdisk_grps 2713.8GB
space_allocated_to_vdisks 142.0GB
total_free_space 2772.2GB
statistics_status off
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting all
SNMP_community public
SNMP_server_IP_address 9.42.164.140
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 520 US/Pacific
email_setting none
email_id
code_level 2.1.0.0 (build 2.11.0409290000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias 00000200626006C8

9.1.3 Maintaining passwords


Use the svctask chcluster command to change the admin and services passwords. The full
syntax of the svctask chcluster command is:
svctask chcluster [-clusterip ip_address] [-serviceip ip_address] [-admpwd [password]]
[-servicepwd [password]] [-gw gateway] [-mask subnet_mask] [-speed speed]
[-autoquorum true | false] [-name cluster_name] [-icatip icat_ip_address:port]

Note the following explanation:


• clusterip: IP address to access the cluster
• serviceip: IP address used if a node has been expelled from the cluster
• admpwd: Administrator’s password
• servicepwd: Service user’s password
• gw: Cluster’s gateway IP address
• mask: Subnet mask
• speed: Fabric speed
• autoquorum: Indicates whether quorum disks are allocated automatically (default = true)
• name: Name of the cluster
• icatip: IP address of the ICAT console

The command to change the admin and the service password is:



IBM_2145:admin>svctask chcluster -admpwd admin -servicepwd service

This command changes the current admin password to admin and the current service
password to service.

You have now completed the tasks required to change the admin and service passwords for
your SVC cluster.

Note: You can use letters A to Z, a to z, numbers 0 to 9, and the underscore in a password.
The password can be between one and 15 characters in length.

Since you are using an SSH key pair in your PuTTY command line session, these
password changes only affect graphical user interface (GUI) users who must use
passwords for initial authentication. Also, as you can see, these passwords are not
encrypted on screen, so make sure no one is looking over your shoulder.

9.1.4 Modifying IP addresses


Using the svctask chcluster command again, we can change the cluster IP address as
shown here:
IBM_2145:admin>svctask chcluster -clusterip 9.42.164.156 -serviceip 9.42.164.161

This command changes the current IP address of the cluster to 9.42.164.156 and the current
service IP address to 9.42.164.161.

Important: If we specify a new cluster IP address, the existing communication with the
cluster through the CLI is broken and the PuTTY application automatically closes. We must
relaunch the PuTTY application and point to the new IP address.

Modifying the IP address of the cluster, although quite simple, means some reconfiguration
for other items within the SVC environment (such as reconfiguring our PuTTY application
and the central administration GUI).

The -clusterip and -serviceip parameters can be used in isolation (as shown in the last
two examples) or in combination with other chcluster command parameters.

We have now completed the tasks required to change the IP addresses (cluster and service)
of the SVC environment.

9.1.5 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. Determine what time zone your cluster is currently configured for by issuing the svcinfo
showtimezone command as shown here:
IBM_2145:admin>svcinfo showtimezone
id timezone
520 US/Pacific
If this setting is correct, skip to Step 4. If not (for example, you want 514 US/Eastern),
continue with Step 2.



2. Determine the time zone code that is associated with the current time zone. To find this,
enter the svcinfo lstimezones command shown in Example 9-5. The list was edited by us
for purposes of the example.

Example 9-5 svcinfo lstimezones command


IBM_2145:admin>svcinfo lstimezones
id timezone
0 Africa/Abidjan
1 Africa/Accra
2 Africa/Addis_Ababa
3 Africa/Algiers
4 Africa/Asmera
5 Africa/Bamako
. . .
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
. . .

In this example, the correct time zone code is 514.


3. Set the time zone by issuing the svctask settimezone command:
IBM_2145:admin>svctask settimezone -timezone 514
4. With the correct time zone, set the cluster time by issuing the svctask setclustertime
command:
IBM_2145:admin>svctask setclustertime -time 1105180504
The format of the time is MMDDHHmmYY.

We have now completed the tasks necessary to set the cluster time zone and time.

9.1.6 Starting a statistics collection


Use the svctask startstats command to start the collection of statistics within the cluster:
IBM_2145:admin>svctask startstats -interval 15

The interval we specify (minimum 15, maximum 60) is in minutes. This command starts
statistics collection and gathers data at 15-minute intervals.

Note: To verify that statistics collection is set, display the cluster properties again, as
shown in Example 9-6, and look for statistics_status and statistics_frequency.

Example 9-6 Statistics collection status and frequency


IBM_2145:admin>svcinfo lscluster SVC1
id 00000200626006C8
name SVC1
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.156
total_mdisk_capacity 2914.2GB



space_in_mdisk_grps 2713.8GB
space_allocated_to_vdisks 142.0GB
total_free_space 2772.2GB
statistics_status on
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting all
SNMP_community public
SNMP_server_IP_address 9.42.164.140
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 522 UTC
email_setting none
email_id
code_level 2.1.0.0 (build 2.11.0409290000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias 00000200626006C8

We have now completed the tasks required to start statistics collection on the cluster.

9.1.7 Stopping a statistics collection


Use the svctask stopstats command to stop the collection of statistics within the cluster:
IBM_2145:admin>svctask stopstats

This command stops statistics collection.

Note: To verify that statistics collection is stopped, display the cluster properties again, as
shown in Example 9-7, and look for statistics_status and statistics_frequency.

Example 9-7 Statistics collection status and frequency


IBM_2145:admin>svcinfo lscluster SVC1
id 00000200626006C8
name SVC1
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.156
total_mdisk_capacity 2914.2GB
space_in_mdisk_grps 2713.8GB
space_allocated_to_vdisks 142.0GB
total_free_space 2772.2GB
statistics_status off
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting all
SNMP_community public
SNMP_server_IP_address 9.42.164.140
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 522 UTC
email_setting none



email_id
code_level 2.1.0.0 (build 2.11.0409290000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias 00000200626006C8

Notice that the interval parameter is not changed but the status is off. We have now
completed the tasks required to stop statistics collection on our cluster.

9.1.8 Shutting down a cluster


If all input power to an SVC cluster is to be removed for more than a few minutes (for
example, if the machine room power is to be shut down for maintenance), it is important to
shut down the cluster before removing the power. The reason for this is that if the input power is
removed from the uninterruptible power supply units without first shutting down the cluster
and the uninterruptible power supplies themselves, the uninterruptible power supply units
remain operational and eventually become drained of power.

When input power is restored to the uninterruptible power supplies, they start to recharge.
However the SVC does not permit any input/output (I/O) activity to be performed to the
VDisks until the uninterruptible power supplies are charged enough to enable all the data on
the SVC nodes to be destaged in the event of a subsequent unexpected power loss.
Recharging the uninterruptible power supply may take as long as three hours.

Shutting down the cluster prior to removing input power to the uninterruptible power supply
units prevents the battery power from being drained. It also makes it possible for I/O activity
to be resumed as soon as input power is restored.

The following procedure shuts down the cluster.

Use the svctask stopcluster command to shut down your SVC cluster:
IBM_2145:admin>svctask stopcluster

This command shuts down the SVC cluster. All data is flushed to disk before the power is
removed. At this point you lose administrative contact with your cluster, and the PuTTY
application automatically closes.

The command presents the following warning:

Warning: Are you sure that you want to continue with the shut down?

Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy)
relationships, data migration operations and forced deletions before continuing. Entering y
executes the command; entering anything other than y(es) or Y(ES) results in the command
not executing. In either case, no feedback is displayed.

Important: Before shutting down a cluster, quiesce all I/O operations that are destined for
this cluster, because we will lose access to all VDisks being provided by it. Failure to do so
may result in failed I/O operations being reported to the host operating systems.
There is no need to do this when you shut down a node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on the
hosts that are using the VDisks provided by the cluster.



We have now completed the tasks required to shut down the cluster. We can now shut down
the uninterruptible power supplies by pressing the power button on their front panels.

Note: To restart the cluster, we must first restart the uninterruptible power supply units by
pressing the power button on their front panels. Then we go to the service panel of one of
the nodes within the cluster and push the power on button. After it is fully booted up (for
example, displaying Cluster: on line 1 and the cluster name on line 2 of the display panel),
we can start the other nodes in the same way.

As soon as all nodes are fully booted, we can re-establish administrative contact using
PuTTY, and our cluster is fully operational again.

9.2 Working with nodes


This section explains the various configuration and administration tasks that we can perform
on the nodes within an SVC cluster.

9.2.1 I/O groups


This section explains the tasks that we can perform at an I/O group level.

Viewing I/O group details


Use the svcinfo lsiogrp command, as shown in Example 9-8, to view information about I/O
groups defined within the SVC environment.

Example 9-8 I/O group details


IBM_2145:admin>svcinfo lsiogrp
id name node_count vdisk_count
0 io_grp0 2 15
1 io_grp1 0 0
2 io_grp2 0 0
3 io_grp3 0 0
4 recovery_io_grp 0 0

As we can see, the SVC predefines five I/O groups. In a two-node cluster (like ours), only one
I/O group is actually in use. In a four-node cluster, we would have two I/O groups in use. The
other I/O groups (io_grp2 and io_grp3) are for a six-node or eight-node cluster.

The recovery I/O group is a temporary home for VDisks when all nodes in the I/O group that
normally owns them have suffered multiple failures. This allows us to move the VDisks to the
recovery I/O group and then into a working I/O group. Of course, while temporarily assigned
to the recovery I/O group, I/O access is not possible.

Renaming an I/O group


Use the svctask chiogrp command to rename an I/O group:
IBM_2145:admin>svctask chiogrp -name io_grpSVC2 io_grp1

This command renames the I/O group io_grp1 to io_grpSVC2.



Note: The chiogrp command specifies the new name first.

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the dash
’-’ and the underscore ’_’. It can be between one and 15 characters in length. However, it
cannot start with a number, dash or the word iogrp since this prefix is reserved for SVC
assignment only.

To see whether the renaming was successful, issue the svcinfo lsiogrp command again;
we should see the change reflected.
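
Output similar to the following confirms the change (the node and VDisk counts reflect our
configuration):

IBM_2145:admin>svcinfo lsiogrp
id name            node_count vdisk_count
0  io_grp0         2          15
1  io_grpSVC2      0          0
2  io_grp2         0          0
3  io_grp3         0          0
4  recovery_io_grp 0          0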

We have now completed the tasks required to rename an I/O group.

9.2.2 Nodes
This section details the tasks that can be performed at an individual node level.

Viewing node details


Use the svcinfo lsnode command to view summary information about nodes defined within
the SVC environment. To view more details about a specific node, append the node name (for
example, SVC1_node1) to the command.

Both of these commands are shown in Example 9-9.

Tip: The -delim : parameter collapses the on-screen content and separates data fields
with colons, as opposed to wrapping text over multiple lines.

Example 9-9 svcinfo lsnode command


IBM_2145:admin>svcinfo lsnode -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id
1:SVC1_node1:YM100032B422:5005076801000364:online:0:io_grp0:yes:20400000C2484082
11:SVC1_node2:YM100032B425:500507680100035A:online:0:io_grp0:no:20400000C2484085

IBM_2145:admin>svcinfo lsnode SVC1_node1


id 1
name SVC1_node1
UPS_serial_number YM100032B422
WWNN 5005076801000364
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 12
partner_node_name SVC1_node2
config_node no
UPS_unique_id 20400000C2484082
port_id 5005076801400364
port_status active
port_id 5005076801300364
port_status active
port_id 5005076801100364
port_status active
port_id 5005076801200364
port_status active



Adding a node
Before we can add a node, we must know which unconfigured nodes we have as
“candidates”. We can find this out by issuing the svcinfo lsnodecandidate command:
IBM_2145:admin>svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id
500507680100035A 000683 YM100032B425 20400000C2484085

Note: The node that we want to add must be attached to a different UPS (a different UPS
serial number) than the first node.

Now that we know the available nodes, we can use the svctask addnode command to add
the node to the SVC cluster configuration. The complete syntax of the addnode command is:
addnode {-panelname panel_name | -wwnodename wwnn_arg} [-name new_name]
-iogrp iogrp_name_or_id

Note the following explanation:


• panelname: Name of the node as it appears on the panel
• wwnodename: Worldwide node name (WWNN) of the node
– panelname and wwnodename are mutually exclusive
• name: Name to be allocated to the node
• iogrp: I/O group to which the node is added

The command to add a node to the SVC cluster is:


IBM_2145:admin>svctask addnode -panelname 000667 -name SVC1_node2 -iogrp io_grp0
Node, id [10], successfully added

This command adds the candidate node with the panel name 000667 to the I/O group
io_grp0 and names it SVC1_node2.

We used the -panelname parameter (000667), but could have used the -wwnodename
parameter (5005076801000364) instead, for example:
svctask addnode -wwnodename 5005076801000364 -name SVC1_node2 -iogrp io_grp0

We also used the optional -name parameter (SVC1_node2). If you do not provide the -name
parameter, the SVC automatically generates the name nodeX (where X is the ID sequence
number assigned by the SVC internally). In our case it would be node10.

Note: If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the
dash ’-’ and the underscore ’_’. It can be between one and 15 characters in length.
However, it cannot start with a number, dash or the word node since this prefix is reserved
for SVC assignment only.

Renaming a node
Use the svctask chnode command to rename a node within the SVC cluster configuration.
From now on we are following our naming convention.
IBM_2145:admin>svctask chnode -name SVC1N1 SVC1_node1
IBM_2145:admin>svctask chnode -name SVC1N2 SVC1_node2

These commands rename node SVC1_node1 to SVC1N1 and node SVC1_node2 to SVC1N2.



Note: The chnode command specifies the new name first. You can use letters A to Z, a to
z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash or the word node since
this prefix is reserved for SVC assignment only.

Deleting a node
Use the svctask rmnode command to remove a node from the SVC cluster configuration:
IBM_2145:admin>svctask rmnode SVC1_node2

This command removes node SVC1_node2 from the SVC cluster.

Since SVC1_node2 was also the configuration node, the SVC transfers the configuration
node responsibilities to a surviving node (in our case SVC1_node1). Unfortunately the PuTTY
session cannot be dynamically passed to the surviving node. Therefore the PuTTY
application loses communication and closes automatically.

We must restart the PuTTY application to establish a secure session with the new
configuration node.

Important: If this is the last node in an I/O Group and there are Virtual Disks still assigned
to the I/O Group, the node is not deleted from the cluster.

If this is the last node in the cluster and the I/O Group has no Virtual Disks remaining, the
cluster is destroyed and all virtualization information is lost. Any data that is still
required should be backed up or migrated before you destroy the cluster.

Shutting down a node
Earlier we showed how to shut down the complete SVC cluster in a controlled manner. On
occasion, it may be necessary to shut down a single node within the cluster, to perform such
tasks as scheduled maintenance, while leaving the SVC environment up and running.

Use the svctask stopcluster -node command as shown in Example 9-10 to shut down a
node.

Example 9-10 svctask stopcluster -node command


IBM_2145:admin>svctask stopcluster -node SVC1N1
Are you sure that you want to continue with the shut down? Ensure that you have stopped all
FlashCopy mappings, Remote Copy relationships, data migration operations and forced
deletions before continuing.
yes <ENTER>

This command shuts down SVC1N1 in a graceful manner. When this is done, the other node
in the I/O Group destages the contents of its cache and goes into write-through mode until
the node is powered up and rejoins the cluster.

Note: There is no need to stop FlashCopy mappings, Remote Copy relationships and data
migration operations; the other node in the I/O Group handles them. Be aware, however,
that this node is now a single point of failure.



If this is the last node in an I/O Group, all access to the Virtual Disks in the I/O Group will be
lost. Ensure that this is what you want to do before executing this command; in this case you
must also specify the -force flag.

By re-issuing the svcinfo lsnode command (as shown in Example 9-11), we can see that the
node is now offline.

Example 9-11 svcinfo lsnode


IBM_2145:admin>svcinfo lsnode -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id
1:SVC1_N1:YM100032B422:0000000000000000:offline:0:io_grp0:no:20400000C2484082
12:SVC1_N2:YM100032B425:500507680100035A:online:0:io_grp0:yes:20400000C2484085

IBM_2145:admin>svcinfo lsnode SVC1_N1


CMMVC5782E The object specified is offline

To restart the node, simply go to the service panel of the node and push the power on button.

We have now completed the tasks required to view, add, delete, rename and shut down a
node within an SVC environment.

9.3 Working with managed disks


This section details the various configuration and administration tasks that you can perform
on the managed disks (MDisks) within the SVC environment.

9.3.1 Disk controller systems


This section details the tasks that you can perform on a disk controller level.

Viewing disk controller details


Use the svcinfo lscontroller command to display summary information about all available
back-end storage systems. To display more detailed information about a specific controller,
run the command again and append the controller name parameter (for example,
controller0). Both of these commands are shown in Example 9-12.

Tip: The -delim : parameter collapses the on-screen content and separates data fields
with colons, as opposed to wrapping text over multiple lines.

Example 9-12 svcinfo lscontroller command


IBM_2145:admin>svcinfo lscontroller -delim :
id:controller_name:ctrl_s/n:vendor_id:product_id_low:product_id_high
0:controller0::IBM :1722-600:

IBM_2145:admin>svcinfo lscontroller controller0


id 0
controller_name controller0
WWNN 200800A0B80FBDF0
mdisk_link_count 7
max_mdisk_link_count 7
degraded no
vendor_id IBM
product_id_low 1722-600



product_id_high
product_revision 0520
ctrl_s/n
WWPN 200800A0B80FBDF1
path_count 3
max_path_count 6
WWPN 200900A0B80FBDF2
path_count 4
max_path_count 8

Renaming a controller
Use the svctask chcontroller command to change the name of a storage controller. To
verify the change, run the svcinfo lscontroller command. Both of these commands are
shown in Example 9-13.

Example 9-13 svctask chcontroller command


IBM_2145:admin>svctask chcontroller -name DS4301 controller0

IBM_2145:admin>svcinfo lscontroller -delim :


id:controller_name:ctrl_s/n:vendor_id:product_id_low:product_id_high
0:DS4301::IBM :1722-600:

This command renamed the controller named controller0 to DS4301.

Note: The chcontroller command specifies the new name first. You can use letters A to
Z, a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash or the word controller
since this prefix is reserved for SVC assignment only.

9.3.2 Managed disks


This section details the tasks that can be performed at an MDisk level.

MDisk information
Use the svcinfo lsmdisk command to display summary information about all available
managed disks. To display more detailed information about a specific MDisk, run the
command again and append the MDisk name parameter (for example, mdisk0). Both of these
commands are shown in Example 9-14.

Example 9-14 svcinfo lsmdisk command


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4300:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4300:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4300:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4300:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4300:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000



5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4300:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

IBM_2145:admin>svcinfo lsmdisk mdisk0


id 0
name mdisk0
status online
mode managed
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 200.4GB
quorum_index 0
block_size 512
controller_name DS4300
ctrl_type 4
ctrl_WWNN 200800A0B80FBDF0
controller_id 0
path_count 1
max_path_count 1
ctrl_LUN_# 0
UID 600a0b80000fbdf0000002753fb8b1c200000000000000000000000000000000

Renaming an MDisk
Use the svctask chmdisk command to change the name of an MDisk. To verify the change,
run the svcinfo lsmdisk command. Both of these commands are shown in Example 9-15.

Example 9-15 svctask chmdisk command


IBM_2145:admin>svctask chmdisk -name newmdisk0 mdisk0

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:newmdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4300:600a0b80000fbdf00000
02753fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4300:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4300:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4300:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4300:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4300:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

This command renamed the MDisk named mdisk0 to newmdisk0.

Note: The chmdisk command specifies the new name first. You can use letters A to Z, a to
z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash or the word mdisk since
this prefix is reserved for SVC assignment only.



Discovering MDisks
In general, the cluster detects the MDisks automatically when they appear on the network.
However, some Fibre Channel controllers do not send the required SCSI primitives that are
necessary to automatically discover the new MDisks.

If new storage has been attached and the cluster has not detected it, it may be necessary to
run this command before the cluster will detect the new MDisks.

Use the svctask detectmdisk command to scan for newly added MDisks:
IBM_2145:admin>svctask detectmdisk

To check whether any newly added MDisks were successfully detected, run the svcinfo
lsmdisk command as before. If the disks do not appear, check that the disk is appropriately
assigned to the SVC in the disk subsystem and that the zones are properly set up as
explained in Chapter 3, “Planning and configuration” on page 25.

Setting up a quorum disk


After the process of node discovery, the SVC cluster automatically chooses three MDisks as
quorum disks. Each disk is assigned an index number of 0, 1, or 2.

The quorum disks are only created once, when at least one MDisk with an available
extent is placed in managed mode.

In the event that half the nodes in a cluster are missing for any reason, the other half cannot
simply assume that the nodes are “dead”. It can merely mean that the cluster state
information is not being successfully passed between nodes for some reason (a network
failure, for example). For this reason, if half the cluster disappears from the view of the other,
each surviving half attempts to lock the active quorum disk.

Note: There is only ever one active quorum disk. When SVC first discovers LUNs as
MDisks, it chooses three MDisks as quorum disk candidates. One is then chosen as active
and the others are not considered quorum disks in any way. Only if the active quorum disk
becomes unavailable will the cluster go out and choose any of the other two candidates to
take its place. Since the other quorum disk candidates are nothing but candidates, they are
not relevant and are not even considered in any cluster quorum event.

So, in the event of quorum disk index 0 not being available, the next disk (index 1) becomes
the quorum, and so on.

The half of the cluster that is successful in locking the quorum disk becomes the exclusive
processor of I/O activity. It attempts to reform the cluster with any nodes it can still see. The
other half stops processing I/O. This provides a tie-break solution and ensures that both
halves of the cluster do not continue to operate.

In the case where both halves can see the quorum disk, they use it to communicate with
each other and decide which half becomes the exclusive processor of I/O activity.

If for any reason you want to set your own quorum disks (for example, additional back-end
storage has been installed and you want to move one or two quorum disks onto this newly
installed back-end storage subsystem), you can use the svctask setquorum command, as
shown in Example 9-16 on page 224, to reassign the quorum indexes. The managed disk that
is currently assigned the quorum index number is set to a non-quorum disk.



Example 9-16 svctask setquorum command
IBM_2145:admin>svctask setquorum -quorum 0 mdisk0

IBM_2145:admin>svcinfo lsmdisk mdisk0


id 0
name mdisk0
status online
mode managed
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 200.4GB
quorum_index 0
block_size 512
controller_name DS4300
ctrl_type 4
ctrl_WWNN 200800A0B80FBDF0
controller_id 0
path_count 1
max_path_count 1
ctrl_LUN_# 0
UID 600a0b80000fbdf0000002753fb8b1c200000000000000000000000000000000
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000006

As we can see, this command has set mdisk0 as a quorum disk using quorum index 0. We
can also do this for quorum index 1 and 2.
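
For example (the MDisk names are illustrative):

IBM_2145:admin>svctask setquorum -quorum 1 mdisk1
IBM_2145:admin>svctask setquorum -quorum 2 mdisk2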

Including an MDisk
If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a storage area network (SAN) zoning problem, or
poorly planned maintenance. If it was a hardware fault, you should have received Simple
Network Management Protocol (SNMP) alerts about the state of the disk subsystem before
the disk was excluded, and undertaken preventive maintenance. If not, the hosts that were
using VDisks, which used the excluded MDisk, now have I/O errors.

By running the svcinfo lsmdisk command, you can see that mdisk3 is excluded in
Example 9-17.

Example 9-17 svcinfo lsmdisk command: Excluded MDisk


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4301:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4301:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:excluded:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4301:600a0b80000fbdfc00000
29f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4301:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4301:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000



After taking the necessary corrective action to repair the MDisk (for example, replace failed
disk, repair SAN zones, and so on), we must tell the SVC to include the MDisk again by
issuing the svctask includemdisk command:
IBM_2145:admin>svctask includemdisk mdisk3

Running the svcinfo lsmdisk command again should show mdisk3 online again, as shown in
Example 9-18.

Example 9-18 svcinfo lsmdisk command: Verifying that MDisk is included


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4300:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4300:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4300:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4300:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4300:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4300:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

Showing the MDisk group


Use the svcinfo lsmdisk command as before to display information about the managed disk
group (MDG) to which an MDisk belongs, as shown in Example 9-19.

Example 9-19 svcinfo lsmdisk command


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4301:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4301:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4301:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4301:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4301:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4301:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

See 9.3.3, “Managed Disk Groups” on page 228, for more details about MDGs.

Showing a VDisk for an MDisk


Use the svcinfo lsmdiskmember command to display information about the VDisks that use
space on a specific MDisk, as shown in Example 9-20 on page 226.



Example 9-20 svcinfo lsmdiskmember command
IBM_2145:admin>svcinfo lsmdiskmember mdisk0
id
0
1
3
4
5
6

This command shows that the VDisks with IDs 0 to 6 are all using space on mdisk0.

To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk
command. We discuss this command in more detail in 9.4, “Working with virtual disks” on
page 230.
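
For example, to show the details of the VDisk with ID 3 (the ID is illustrative; svcinfo
lsvdisk accepts either an ID or a name):

IBM_2145:admin>svcinfo lsvdisk 3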

Creating a VDisk in image mode


An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents
with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as
extent 0 on the MDisk, and so on. Without this one-to-one mapping (for example, if extent 0 on
the VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly
introduced MDisk is still readable.

Image mode is intended for the purpose of migrating data from an environment outside the
SVC, to an environment within the SVC. A logical unit number (LUN) that was previously
directly assigned to a SAN attached host can now be reassigned to the SVC (possible short
outage) and given back to the same host as an image mode VDisk. During the same outage,
the host and zones can be reconfigured to access the disk via the SVC.

After access is re-established, the host workload can resume while the SVC manages the
transparent migration of the data to other SVC managed VDisks on the same or another disk
subsystem.

We recommend that during the migration phase of the SVC implementation, you add one
MDisk at a time to the SVC environment. This reduces the possibility of error. It also means
that the short outages required to reassign the LUNs from the subsystem or subsystems and
reconfigure the SAN and host can be staggered over a period of time to minimize the
business impact.

Important: Creating an image mode VDisk can be done only using an unmanaged disk
(for example, before you add it to an MDG). We recommend that you create an empty
MDG, called image_mode or similar, since you need to add your newly created image
mode VDisk to an MDG. See 9.3.3, “Managed Disk Groups” on page 228, for information
about creating an MDG.

Use the svctask mkvdisk command to create an image mode VDisk. The full syntax of this
command is:
svctask mkvdisk -mdiskgrp mdisk_group_id | mdisk_group_name
-iogrp io_group_id | io_group_name -size disk_size [-fmtdisk] [-vtype seq|striped]
[-node node_id | node_name] [-unit b|kb|mb|gb|tb|pb] [-mdisk mdisk_id_list |
mdisk_name_list] [-name new_name_arg] [-instance instance]

Here, the parameters are defined as follows:

• mdiskgrp: Name or ID of the MDG in which to create the VDisk.
• iogrp: Name or ID of the I/O group which is to own the VDisk.
• size: Capacity (numerical); not necessary for image mode VDisks.
• fmtdisk: Optional parameter to force a format of the new VDisk.
• vtype: Optional parameter to specify the type of VDisk (sequential, striped, or image
mode). Default (if nothing is specified) is striped.
• node: Optional parameter to specify the name or ID of the preferred node. Default (if
nothing is specified) is to alternate between nodes in the I/O group.
• unit: Optional parameter to specify the data units for the capacity parameter. Default (if
nothing is specified) is megabytes (MB).
• mdisk: Optional parameter to specify the name or ID of the MDisk or MDisks to be used for
the VDisk. This is only required for sequential and image mode VDisks because striped
VDisks use all MDisks that are available in the MDG by default.

Note: You can use this parameter for striped VDisks too, for example, if you want to specify
that the VDisk only uses a subset of the MDisks available within an MDG.

• name: Optional parameter to assign a name to the new VDisk. Default (if nothing is
specified) is to assign the name vdiskX, where X is the ID sequence number assigned by
the SVC internally.

Note: If you do not provide the -name parameter, the SVC automatically generates the
name vdiskX (where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the dash
’-’ and the underscore ’_’. It can be between one and 15 characters in length. However, it
cannot start with a number, dash or the word vdisk since this prefix is reserved for SVC
assignment only.

• instance: The VDisk number. This value overrides the one that would be assigned
automatically, and influences other numbers that may be assigned algorithmically later,
such as the VDisk_UID of a VDiskhostmap. Only a value in the lower half of the valid
range may be specified.

The command to create an image mode VDisk is:


IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG2_DS43 -size 200 -iogrp io_grp0 -vtype image
-mdisk mdisk6 -name imagedisk1
Virtual Disk, id [15], successfully created

This command creates an image mode VDisk called imagedisk1 using MDisk mdisk6. The
VDisk belongs to the MDG MDG2_DS43 and is owned by the I/O group io_grp0.

If we run the svcinfo lsmdisk command again, notice that mdisk6 now has a mode of image,
as shown in Example 9-21.

Example 9-21 svcinfo lsmdisk command: mdisk status


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4301:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000



2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4301:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4301:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4301:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4301:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:image:2:MDG2_DS43:200.4GB:0000000000000007:DS4301:600a0b80000fbdfc000002a53
fbcda8100000000000000000000000000000000

9.3.3 Managed Disk Groups


This section explains the tasks that we can perform at an MDG level.

Viewing MDisk group information


Use the svcinfo lsmdiskgrp command, as shown in Example 9-22, to display information
about the MDGs defined in the SVC.

Example 9-22 svcinfo lsmdiskgrp command


IBM_2145:admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB

Creating an MDisk group


Use the svctask mkmdiskgrp command to create an MDG. The full syntax of this command is:
svctask mkmdiskgrp [-name name] [-mdisk name|id_list] -ext size

Note the following explanation:


• name: Name to assign to new group
• mdisk: List of names or IDs of MDisks to assign to group
• ext: Size of extents in this group

The command to create an MDG is:


IBM_2145:admin>svctask mkmdiskgrp -name MDG3_DS43 -ext 32
MDisk Group, id [3], successfully created

This command creates an MDG called MDG3_DS43 with an extent size of 32 MB. Since we did
not specify any MDisks to add to the group with the -mdisk parameter, this is an empty MDG.

If we run the svcinfo lsmdiskgrp command, we should see the MDG created as shown in
Example 9-23.

Example 9-23 svcinfo lsmdiskgrp command


IBM_2145:admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB
3:MDG3_DS43:online:0:0:0:32:0

Renaming an MDisk group
Use the svctask chmdiskgrp command to change the name of an MDG. To verify the
change, run the svcinfo lsmdiskgrp command. Both of these commands are shown in
Example 9-24.

Example 9-24 svctask chmdiskgrp command


IBM_2145:admin>svctask chmdiskgrp -name MDG3image_DS43 MDG3_DS43

IBM_2145:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB
3:MDG3image_DS43:online:0:0:0:32:0

This command renamed the MDG from MDG3_DS43 to MDG3image_DS43.

Note: The chmdiskgrp command specifies the new name first. You can use letters A to Z, a
to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash or the word mdiskgrp
since this prefix is reserved for SVC assignment only.

Deleting an MDisk group


Use the svctask rmmdiskgrp command to remove an MDG from the SVC cluster
configuration:
IBM_2145:admin>svctask rmmdiskgrp MDG3image_DS43

IBM_2145:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB

This command removes the MDG MDG3image_DS43 from the SVC configuration.

Note: If there are MDisks within the MDG, you must use the -force flag, for example:
svctask rmmdiskgrp -force MDG3image_DS43

Ensure that you really want to use this flag, because it destroys all mapping information,
and the data held on the VDisks cannot be recovered.

Adding MDisks
If you created an empty MDG as we did, or you simply assign additional MDisks to your SVC
environment at a later date, you can use the svctask addmdisk command to populate the
MDG:
IBM_2145:admin>svctask addmdisk -mdisk mdisk6 MDG3_DS43

You can only add unmanaged MDisks to an MDG. This command adds MDisk mdisk6 to the
MDG named MDG3_DS43.

Important: Do not do this if you want to create an image mode VDisk from the MDisk you
are adding. As soon as you add an MDisk to a MDG, it becomes managed and extent
mapping is not necessarily 1:1 anymore.

Removing MDisks
Use the svctask rmmdisk command to remove an MDisk from a MDG:
IBM_2145:admin>svctask rmmdisk -mdisk mdisk6 MDG3_DS43

This command removes the MDisk called mdisk6 from the MDG named MDG3_DS43

Note: If VDisks are using the MDisks you are removing from the MDG, you must use the
-force flag:
svctask rmmdisk -force -mdisk mdisk6 MDG3_DS43

Even then, the removal takes place only if there is sufficient space to migrate the VDisk
data to other extents on other MDisks that remain in the MDG. After you remove the
MDisk, it takes some time for its mode to change from managed to unmanaged.
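
To watch the mode change back to unmanaged, you can query the MDisk directly with the
svcinfo lsmdisk command, appending the MDisk name for the detailed view, for example:

IBM_2145:admin>svcinfo lsmdisk mdisk6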

Showing MDisks in this group


Use the svcinfo lsmdisk -filtervalue command, as shown in Example 9-25, to see which
MDisks are part of a specific MDG. This command shows all MDisks that are part of the MDG
MDG3_DS43.

Example 9-25 svcinfo lsmdisk -filtervalue: mdisks in MDG


IBM_2145:admin>svcinfo lsmdisk -filtervalue 'mdisk_grp_name=MDG3_DS43' -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
6:mdisk6:online:managed:3:MDG3_DS43:200.4GB:0000000000000007:DS4301:600a0b80000fbdfc000002a
53fbcda8100000000000000000000000000000000

Showing VDisks using this group


Use the svcinfo lsvdisk -filtervalue command, as shown in Example 9-26, to see which
VDisks are part of a specific MDG. This command shows all VDisks that are part of the MDG
MDG0_DS43.

Example 9-26 svcinfo lsvdisk -filtervalue: vdisks in MDG


IBM_2145:admin>svcinfo lsvdisk -filtervalue 'mdisk_grp_name=MDG0_DS43' -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
3:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::

We have now completed the tasks required to manage the disk controller systems, managed
disks, and MDGs within an SVC environment.

9.4 Working with virtual disks


This section details the various configuration and administration tasks which can be
performed on the VDisks within the SVC environment.

9.4.1 Hosts
This section explains the tasks that can be performed at a host level.

Host information
Use the svcinfo lshost command to display summary information about all hosts defined
within the SVC environment. To display more detailed information about a specific host, run
the command again and append the host name parameter (for example, W2K_npsrv3). Both
of these commands are shown in Example 9-27.

Tip: The -delim : parameter collapses the on-screen content and separates data fields
with colons rather than wrapping text over multiple lines.

Example 9-27 svcinfo lshost command


IBM_2145:admin>svcinfo lshost
id name port_count
0 AIX_270 2
1 W2K_npsrv3 2
2 LINUX1 2

IBM_2145:admin>svcinfo lshost W2K_npsrv3


id 1
name W2K_npsrv3
port_count 2
type generic
WWPN 210100E08B259C41
node_logged_in_count 2
WWPN 210000E08B059C41
node_logged_in_count 2

Creating a host
Before creating a host, verify that its host bus adapter (HBA) worldwide port names
(WWPNs) are visible to the SVC. To do this, issue the svcinfo lshbaportcandidate
command as shown in Example 9-28.

Example 9-28 svcinfo lshbaportcandidate command


IBM_2145:admin>svcinfo lshbaportcandidate
id
210000E08B09691D
210100E08B25C440
210000E08B08AFD6
210100E08B29691D
210000E08B05C440
210100E08B29951D

After you verify that the WWPNs displayed match your host (use host or SAN switch
utilities to check), use the svctask mkhost command to create the host. The full
syntax of this command is:
svctask mkhost [-name name] -hbawwpn wwpn_list [-force]

Note the following explanation:


򐂰 name: Name to be assigned to the host
򐂰 hbawwpn: List of HBA WWPNs to be added to the host
򐂰 force: Force the creation using the WWPNs that you entered. Use this flag if, for any
reason, the HBA is not online.

Note: If you do not provide the -name parameter, the SVC automatically generates the
name hostX (where X is the ID sequence number assigned by the SVC internally).

You can use letters A to Z, a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can
be between one and 15 characters in length. However, it cannot start with a number, dash
or the word host since this prefix is reserved for SVC assignment only.

The command to create a host is shown here:


IBM_2145:admin>svctask mkhost -name SANFS1 -hbawwpn 210000e08b09951d
Host id [3] successfully created

This command creates a host called SANFS1 using WWPN 210000e08b09951d.

Note: You can define a host with multiple ports by using the separator (:) between
WWPNs:
svctask mkhost -name SANFS1 -hbawwpn 210000e08b09951d:210100e08b29951d

Or you can use the addport command, which we show later.

Perhaps your WWPN or WWPNs did not display when you issued the svcinfo
lshbaportcandidate command, but you are sure your adapter is functioning (for example,
you see the WWPN in the switch name server) and your zones are correctly set up. In this case,
you can type the WWPN of your HBA or HBAs and use the -force flag to create the host
regardless, as shown here:
IBM_2145:admin>svctask mkhost -name SANFS2 -hbawwpn 210000e08b0995ff -force
Host id [8] successfully created

This command forces the creation of a host called SANFS2 using WWPN 210000e08b0995ff.

Note: WWPNs are one of the few things within the CLI that are not case sensitive.

If we run the svcinfo lshost command again, you should now see your host.

Modify a host
Use the svctask chhost command to change the name of a host. To verify the change, run
the svcinfo lshost command. Both of these commands are shown in Example 9-29.

Example 9-29 svctask chhost command


IBM_2145:admin>svctask chhost -name sanfs1 SANFS1

IBM_2145:admin>svcinfo lshost
id name port_count
0 AIX_270 2
1 W2K_npsrv3 2
2 LINUX1 2
3 sanfs1 1

This command renamed the host from SANFS1 to sanfs1.

Note: The chhost command specifies the new name first. You can use letters A to Z, a to
z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash or the word host since
this prefix is reserved for SVC assignment only.

Note: To support more than eight LUNs for HP-UX, there is a -type flag for this
command. Valid options are -type hpux (for HP-UX only) or -type generic (the default).
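
For example, assuming a host named sanfs1 that runs HP-UX, the command might look like
this:

IBM_2145:admin>svctask chhost -type hpux sanfs1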

Deleting a host
Use the svctask rmhost command to delete a host from the SVC configuration. This
command deletes the host called sanfs1 from the SVC configuration.
IBM_2145:admin>svctask rmhost sanfs1

Note: If there are any VDisks assigned to the host, you must use the -force flag, for
example:
svctask rmhost -force sanfs1

Adding ports
If you add an HBA to a server that is already defined within the SVC, you can use the svctask
addhostport command to add WWPN definitions to it.

Before you add the new WWPN, verify that it is visible to the SVC. To do this, issue the
svcinfo lshbaportcandidate command as shown here:
IBM_2145:admin>svcinfo lshbaportcandidate
id
210000E08B09691D
210100E08B25C440
210000E08B08AFD6
210100E08B29691D
210000E08B05C440
210100E08B29951D

After you verify that the WWPNs displayed match your host (use host or SAN switch
utilities to check), use the svctask addhostport command to add the port or ports to the host.
The full syntax of this command is:
svctask addhostport -hbawwpn wwpn_list [-force] host_id|host_name

Note the following explanation:


򐂰 hbawwpn: The list of HBA WWPNs
򐂰 force: Indicates to force the system to use the provided WWPNs
򐂰 host_id | host_name: The name or ID of the host to which the ports are added

The command to add a host port is:


IBM_2145:admin>svctask addhostport -hbawwpn 210100e08b29951d SANFS1

This command adds the WWPN of 210100e08b29951d to the host SANFS1.

Note: You can add multiple ports at a time by using the separator (:) between WWPNs, for
example:
svctask addhostport -hbawwpn 210000E08B05C440:210000E08B05C440 SANFS2

Perhaps your WWPN or WWPNs did not display when you issued the svcinfo
lshbaportcandidate command, but you are sure your adapter is functioning (for example,
you see the WWPN in the switch name server) and your zones are correctly set up. In this
case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add
the port regardless, as shown here:
IBM_2145:admin>svctask addhostport -hbawwpn 10000000c935b623 -force AIX_270

This command forces the addition of the WWPN 10000000c935b623 to the host called AIX_270.

Note: WWPNs are one of the few things within the CLI that are not case sensitive.

If you run the svcinfo lshost command again, you should see your host with an updated port
count (2 in our example), as shown in Example 9-30.

Example 9-30 svcinfo lshost command: port count


IBM_2145:admin>svcinfo lshost
id name port_count
0 AIX_270 2
1 W2K_npsrv3 2
2 LINUX1 2
3 SANFS1 2

Deleting ports
If you make a mistake when adding, or if you remove an HBA from a server that is already
defined within the SVC, you can use the svctask rmhostport command to remove WWPN
definitions from an existing host.

Before you remove the WWPN, be sure that it is the right one. To find this out, you issue the
svcinfo lshost command (our host is SANFS1) as shown in Example 9-31.

Example 9-31 svcinfo lshost command


IBM_2145:admin>svcinfo lshost SANFS1
id 3
name SANFS1
port_count 2
type generic
WWPN 210000E08B09951D
node_logged_in_count 2
WWPN 210100E08B29951D
node_logged_in_count 2

When you know the WWPN, use the svctask rmhostport command to delete a host port.
The full syntax of this command is:
svctask rmhostport -hbawwpn wwpn_list [-force] host_id|host_name

Note the following explanation:


򐂰 hbawwpn: List of HBA WWPNs
򐂰 force: Forces the system to use the provided WWPNs
򐂰 host_id | host_name: The name or ID of the host from which the ports are removed

The command to remove a host port is:


IBM_2145:admin>svctask rmhostport -hbawwpn 210100e08b29951d SANFS1

This command removes the WWPN of 210100e08b29951d from host SANFS1.

Note: You can remove multiple ports at a time by using the separator (:) between WWPNs,
for example:

svctask rmhostport -hbawwpn 210000E08B05C440:210000E08B05C440 SANFS2

Showing the VDisks mapped to this host


Use the svcinfo lshostvdiskmap command to show which VDisks are assigned to a specific
host:
IBM_2145:admin>svcinfo lshostvdiskmap -delim : LINUX1
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
2:LINUX1:0:8:VD0_UNIX1:210000E08B04D451:600507680189801B200000000000000B

From this command, you can see that the host LINUX1 has only one VDisk, called VD0_UNIX1,
assigned to it. The SCSI LUN id is also shown. This is the id by which the Virtual Disk is
presented to the host. If no host is specified, all defined host-to-VDisk mappings are
returned.
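
For example, to list every defined mapping in the cluster, simply omit the host name:

IBM_2145:admin>svcinfo lshostvdiskmap -delim :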

Note: Although the -delim: flag normally comes at the end of the command string, in this
case, you must specify this flag before the host name. Otherwise, it returns the following
message: CMMVC6070E An invalid or duplicated parameter, unaccompanied argument,
or incorrect argument sequence has been detected. Ensure that the input is as per the
help.

9.4.2 Virtual disks


This section explains the tasks that you can perform at a virtual disk level.

VDisk information
Use the svcinfo lsvdisk command to display summary information about all VDisks defined
within the SVC environment. To display more detailed information about a specific VDisk, run
the command again and append the VDisk name parameter (for example, aix_vdisk0). Both
of these commands are shown in Example 9-32.

Example 9-32 svcinfo lsvdisk command


IBM_2145:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
2:VD0_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
3:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
4:VD2_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
5:VD3_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
6:VD4_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
7:VD5_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
8:VD0_UNIX1:0:io_grp0:online:1:MDG0_DS43:2.0GB:striped::::
9:VDisk1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:0:FCMap1::
10:VDisk1T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:0:FCMap1::
11:VDisk2:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap2::
12:VDisk2T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap2::
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:2:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:2:FCMap3::

15:vdisk15:0:io_grp0:online:2:MDG2_DS43:200.0MB:striped::::

IBM_2145:admin>svcinfo lsvdisk VD0_AIX_270


id 2
name VD0_AIX_270
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 12
fast_write_state empty

Creating a VDisk
Use the svctask mkvdisk command to create a VDisk. The full syntax of the
mkvdisk command is:
mkvdisk -mdiskgrp name|id -iogrp name|id -size size [-fmtdisk]
[-vtype seq|striped|image] [-node name|id] [-unit b|kb|mb|gb|tb|pb]
[-mdisk name|id_list] [-name name] [-instance instance]

Here the parameters are defined as follows:


򐂰 -mdiskgrp: Name or ID of the MDG in which to create the VDisk.
򐂰 -iogrp: Name or ID of I/O group that is to own the VDisk.
򐂰 -size: Capacity (numerical); not needed for image mode VDisks.
򐂰 -fmtdisk: Optional parameter to force a format of the new VDisk.
򐂰 -vtype: Optional parameter to specify the type of VDisk (sequential, striped or image
mode). Default (if nothing is specified) is striped.
򐂰 -node: Optional parameter to specify the name or ID of the preferred node. Default (if
nothing is specified) is to alternate between nodes in the I/O group.
򐂰 -unit: Optional parameter to specify the data units for capacity parameter. Default (if
nothing is specified) is MB.
򐂰 -mdisk: Optional parameter to specify the name or ID of the MDisk or MDisks to be used
for the VDisk. This is only required for sequential and image mode VDisks because striped
VDisks use all MDisks that are available in the MDG by default.

Note: You can use this parameter for striped VDisks. For example, you may want to
specify that the VDisk only uses a subset of the MDisks available within a MDG.

򐂰 -name: Optional parameter to assign a name to the new VDisk. Default (if nothing is
specified) is to assign the name vdiskX, where X is the ID sequence number assigned by
the SVC internally.

Note: If you do not provide the -name parameter, the SVC automatically generates the
name vdiskX (where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the
dash ’-’ and the underscore ’_’. It can be between one and 15 characters in length.
However, it cannot start with a number, dash or the word vdisk since this prefix is
reserved for SVC assignment only.

򐂰 -instance: The Virtual Disk (also known as Host Logical Unit Number (HLUN)) instance
number. This value overrides that which would be assigned automatically, and influences
other numbers that may be assigned later.

The command to create a VDisk is shown here:


IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG3_DS43 -iogrp io_grp0 -size 20 -name LINUX2
Virtual Disk, id [16], successfully created

This command creates a striped (default) VDisk called LINUX2 of 20 MB (MB default). The
VDisk belongs to the MDG MDG3_DS43 and is owned by the I/O group io_grp0.
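
As noted in the description of the -mdisk parameter above, you can also restrict a striped
VDisk to a subset of the MDisks within the MDG. The following sketch is hypothetical; it
assumes you want the VDisk striped only across mdisk0 and mdisk2, with multiple names
separated by a colon as in other CLI list parameters:

IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG0_DS43 -iogrp io_grp0 -size 10 -unit gb -mdisk
mdisk0:mdisk2 -name VD_subset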

To create a sequential VDisk, simply insert the -vtype and -mdisk parameters as shown here:
IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG3_DS43 -iogrp io_grp0 -size 20 -vtype seq
-mdisk mdisk7 -name LINUX2
Virtual Disk, id [17], successfully created

This command creates a sequential VDisk called LINUX2 of 20 MB (MB default). The VDisk is
created on MDisk mdisk7 within the MDG MDG3_DS43 and is owned by the I/O group
io_grp0.

Note: An entry of 1 GB uses 1024 MB.

Deleting a VDisk
When you execute this command on an existing managed mode VDisk, any data that remains
on it is lost. The extents that made up this VDisk are returned to the pool of free
extents available in the Managed Disk Group.

If any Remote Copy, FlashCopy, or host mappings still exist for this virtual disk, the
delete fails unless the -force flag is specified. In this case, any mapping that remains
is deleted and then the VDisk is deleted.

If the VDisk is currently the subject of a migration to image mode, the delete fails
unless the -force flag is specified. In this case, the migration is halted and the VDisk
is deleted.

If the command succeeds (without the -force flag) for an image mode disk, the underlying
back-end controller logical unit is consistent with the data that a host could previously
have read from the Image Mode Virtual Disk. That is, all fast write data has been flushed
to the underlying LUN. If the -force flag is used, this guarantee does not hold.

If there is any un-destaged data in the fast write cache for this VDisk, the deletion of
the VDisk fails unless the -force flag is specified. In this case, any un-destaged data in
the fast write cache is deleted.

Use the svctask rmvdisk command to delete a VDisk from your SVC configuration:
IBM_2145:admin>svctask rmvdisk LINUX3

This command deletes VDisk LINUX3 from the SVC configuration. If the VDisk is assigned to
a host, you need to use the -force flag to delete the VDisk, for example:
svctask rmvdisk -force LINUX3

Deleting a VDisk-to-host mapping


If you mapped a VDisk to a host by mistake, or you simply want to reassign the VDisk to
another host, use the svctask rmvdiskhostmap command to unmap a VDisk from a host:
IBM_2145:admin>svctask rmvdiskhostmap -host LINUX1 VD0_UNIX1

This command unmaps the VDisk called VD0_UNIX1 from the host LINUX1.

Expanding a VDisk
Expanding a VDisk presents a larger capacity disk to your operating system. Although this is
easily done using the SVC, you must ensure that your operating system supports expansion
before using this function.

Assuming your operating system supports it, you can use the svctask expandvdisksize
command to increase the capacity of a given VDisk. The full syntax of the expandvdisksize
command is:
expandvdisksize -size size [-mdisk name|id_list] [-fmtdisk]
[-unit b|kb|mb|gb|pb|tb] vdisk_name|vdisk_id

Note the following explanation:


򐂰 size: Capacity by which to expand
򐂰 mdisk: Disks to use as stripe set (optional for striped)
򐂰 fmtdisk: Format disk before use
򐂰 unit: Unit for capacity
򐂰 vdisk_name | vdisk_id: Name or ID of disk to be expanded

Important: Be very careful here. The format option formats the entire VDisk, not only the
new extents.

A sample of this command is:


IBM_2145:admin>svctask expandvdisksize -size 20 -unit mb LINUX2

This command expands the 20 MB LINUX2 VDisk by another 20 MB for a total of 40 MB.

Important: If a VDisk is expanded, its type becomes striped, even if it was previously
sequential or in image mode. If there are not enough extents to expand your VDisk to the
specified size, you receive the following error message: CMMVC5860E
Ic_failed_vg_insufficient_virtual_extents.

Mapping a VDisk to a host


Use the svctask mkvdiskhostmap to map a VDisk to a host. The full syntax is:
svctask mkvdiskhostmap [-force] -host host_id | host_name [-scsi scsi_num_arg] vdisk_name |
vdisk_id

When executed, this command creates a new mapping between the specified Virtual Disk and
host, which essentially presents the Virtual Disk to the host as if the disk were directly
attached to it. It is only after this command is executed that the host can perform I/O to
the Virtual Disk. Optionally, a SCSI LUN id can be assigned to the mapping.
When the HBA in the host scans for devices attached to it, it discovers all Virtual Disks
that are mapped to its Fibre Channel ports. When the devices are found, each one is
allocated an identifier (SCSI LUN id). For example, the first disk found is generally SCSI
LUN 1, and so on. You can control the order in which the HBA discovers Virtual Disks by
assigning the SCSI LUN id as required. If you do not specify a SCSI LUN id, the cluster
automatically assigns the next available SCSI LUN id, given any mappings that already
exist with that host.

It is worth noting that some HBA device drivers will stop when they find a gap in the SCSI
LUN ids. For example:
򐂰 Virtual Disk 1 is mapped to Host 1 with SCSI LUN id 1
򐂰 Virtual Disk 2 is mapped to Host 1 with SCSI LUN id 2
򐂰 Virtual Disk 3 is mapped to Host 1 with SCSI LUN id 4

When the device driver scans the HBA, it may stop after discovering Virtual Disks 1 and 2,
because there is no SCSI LUN mapped with id 3. Care should therefore be taken to ensure
that the SCSI LUN id allocation is contiguous.
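
For example, using the -scsi parameter from the syntax above, you can set the id explicitly
when you create a mapping. The host and VDisk names here are purely illustrative:

IBM_2145:admin>svctask mkvdiskhostmap -host LINUX1 -scsi 2 VDisk1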

It is not possible to map a virtual disk to a host more than once at different LUN numbers.
IBM_2145:admin>svctask mkvdiskhostmap -host LINUX1 LINUX2
Virtual Disk to Host map, id [0], successfully created

This command maps the VDisk called LINUX2 to the host called LINUX1.

Modifying a VDisk
Executing the svctask chvdisk command will modify a single property of a Virtual Disk. Only
one property can be modified at a time. So to change the name and modify the I/O Group
would require two invocations of the command. A new name, or label, can be specified. The
new name can be used subsequently to reference the Virtual Disk. The I/O Group with which
this Virtual Disk is associated can be changed. Note that this requires a flush of the cache
within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be
suspended at the Host level before performing this operation.

The full syntax of the svctask chvdisk command is:


svctask chvdisk [-iogrp name|id] [-rate rate [-unitmb]] [-name name] [-force] name|id

Note the following explanation:


򐂰 iogrp: Name or ID of new I/O group
򐂰 rate: Throttling rate (see “I/O governing” below)
򐂰 unitmb: Specify the throttle rate in MB per second (the default is I/Os per second)
򐂰 name: New name for disk
򐂰 force: Force the removal of the disk
򐂰 name | id: Existing name, or ID, of disk being changed

Changing the name of a VDisk is quite an obvious task. However, the I/O governing
parameter is a new concept.

I/O governing
I/O governing effectively throttles the amount of I/Os per second (or MBs per second) that can
be achieved to and from a specific VDisk. You may want to do this if you have a VDisk that
has an access pattern, which adversely affects the performance of other VDisks on the same
set of MDisks. For example, it uses most of the available bandwidth.

Of course, if this application is highly important then migrating the VDisk to another set of
MDisks may be advisable. However, in some cases, it is an issue with the I/O profile of the
application rather than a measure of its use or importance.

The choice between I/O and MB as the I/O governing throttle should be based on the disk
access profile of the application. Database applications generally issue large amounts of I/O
but only transfer a relatively small amount of data. In this case, setting an I/O governing
throttle based on MBs per second does not achieve much. It is better to use an I/O per
second throttle.

On the other extreme, a streaming video application generally issues a small amount of I/O,
but transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much, it is better to use an MB
per second throttle.

Note: An I/O governing rate of 0 (displayed as throttling in CLI output of svcinfo lsvdisk
command) does not mean that zero I/O per second (or MBs per second) can be achieved.
It means that no throttle is set.

An example of the chvdisk command is shown here:


IBM_2145:admin>svctask chvdisk -rate 20 -unitmb VD0_UNIX1

IBM_2145:admin>svctask chvdisk -name VD0_LINUX1 VD0_UNIX1

Note: The chvdisk command specifies the new name first. The name can consist of letters
A to Z, a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one
and 15 characters in length. However, it cannot start with a number, the dash or the word
vdisk because this prefix is reserved for SVC assignment only.

The first command changes the VDisk throttling to 20 MBps, while the second command
renames the VDisk from VD0_UNIX1 to VD0_LINUX1.

If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in
Example 9-33.

Example 9-33 svcinfo lsvdisk command: Verifying throttling


IBM_2145:svcinfo lsvdisk VD0_LINUX1
id 8
name VD0_LINUX1
IO_group_id 0
IO_group_name io_grp0

status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 2.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
virtual_disk_throttling (MB) 20
preferred_node_id 12
fast_write_state empty

Migrating a VDisk
From time to time, you may want to migrate VDisks from one set of MDisks to another to
retire an old disk subsystem, to have better balance performance across your virtualized
environment, or simply to migrate data into the SVC environment transparently using image
mode. To do so, use the svctask migratevdisk command. The full syntax of the command is:
svctask migratevdisk -mdiskgrp name|id [-threads threads] -vdisk name|id

Note the following explanation:


򐂰 mdiskgrp: MDisk group name or ID
򐂰 threads: Number of threads
򐂰 vdisk: VDisk name or ID

Important: After migration is started, it continues to completion unless it is stopped or
suspended by an error condition, or the VDisk being migrated is deleted.

As you can see from the above parameters, before you can migrate your VDisk, you must
know the name of the VDisk and the name of the MDG to which you want to migrate it. To
find the names, simply run the svcinfo lsvdisk and svcinfo lsmdiskgrp commands.

Restriction: Before you can migrate an image mode VDisk with a partial extent, you must
change it into a managed disk.

When you know these details, you can issue the migratevdisk command as shown here:
IBM_2145:admin>svctask migratevdisk -mdiskgrp MDG3_DS43 -vdisk VD0_LINUX1

This command moves the VDisk VD0_LINUX1 to the MDG MDG3_DS43.

Note: If insufficient extents are available within your target MDG, you receive an error
message. Make sure the source and target MDisk group have the same extent size.

The optional threads parameter allows you to assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority over other types of I/O, you can specify 3, 2, or 1.
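
For example, to run the migration shown earlier at the lowest priority, you could add the
-threads parameter:

IBM_2145:admin>svctask migratevdisk -mdiskgrp MDG3_DS43 -vdisk VD0_LINUX1 -threads 1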

You can run the svcinfo lsmigrate command at any time to see the status of the migration
process. This is shown in Example 9-34 on page 242.

Example 9-34 svcinfo lsmigrate command
IBM_2145:svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 20
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 37
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:admin>svcinfo lsmigrate
IBM_2145:admin>

Note: The progress is shown as a percentage complete. If the command returns no further
output, the migration process has finished.

Migrate VDisk to an image mode VDisk


Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data
path. This might be useful where the SVC is used as a data mover appliance. You can use
the svctask migratetoimage command to do this.

To migrate a VDisk to an image mode VDisk, the following rules apply:


򐂰 The destination MDisk must be greater than or equal to the size of the VDisk.

򐂰 The Managed Disk specified as the target must be in an unmanaged state.


򐂰 Regardless of the mode in which the VDisk starts, it is reported as being in managed
mode during the migration.
򐂰 Both of the MDisks involved are reported as being Image mode during the migration.
򐂰 If the migration is interrupted by a cluster recovery or by a cache problem then the
migration will resume after the recovery completes.

The full syntax of the svctask migratetoimage command is:

svctask migratetoimage -vdisk source_vdisk_id|name -mdisk unmanaged_target_mdisk_id|name
-mdiskgrp managed_disk_group_id|name [-threads number_of_threads]

򐂰 -vdisk: Specifies the name or id of the source Virtual Disk to be migrated.
򐂰 -mdisk: Specifies the name of the MDisk to which the data must be migrated. This disk
must be unmanaged and large enough to contain the data of the disk being migrated.
򐂰 -mdiskgrp: Specifies the mdisk group into which the MDisk must be placed once the
migration has completed.
򐂰 -threads: Optionally specifies the number of threads to use while migrating these extents,
from 1 to 4.

Here is an example of the command:

IBM_2145:admin>svctask migratetoimage -vdisk LINUX1_VD_2 -mdisk mdisk8 -mdiskgrp
MDG4_DS43_IMG

This example migrates the data from the VDisk LINUX1_VD_2 onto mdisk8, and places the
MDisk into the MDG MDG4_DS43_IMG when the migration completes.

Shrinking a VDisk
The method that the SVC uses to shrink a VDisk is to remove the required number of extents
from the end of the VDisk. Depending on where your data actually resides on the VDisk, this
can be quite destructive.

For example, you may have a VDisk that consists of 128 extents (0 to 127) of 16 MB (2 GB
capacity) and you want to decrease the capacity to 64 extents (1 GB capacity). In this case,
the SVC simply removes extents 64 to 127. Depending on the operating system, there is no
easy way to ensure that your data resides entirely on extents 0 to 63. Therefore, you may lose
data. Although easily done using the SVC, you must ensure your operating system supports
shrinking, either natively or by using third-party tools, before using this function. In addition,
we recommend that you always have a good up-to-date backup before you execute this task.

Note: Image Mode Virtual Disks cannot be reduced in size. They must first be migrated to
Managed Mode.

Assuming your operating system supports it, you can use the svctask shrinkvdisksize
command to decrease the capacity of a given VDisk. The full syntax of the svctask
shrinkvdisksize command is:
svctask shrinkvdisksize -size size [-unit b|kb|mb|gb|pb|tb] name|id

Note the following explanation:


򐂰 size: Capacity by which to shrink
򐂰 unit: Units applicable to capacity (default is mb)
򐂰 name | id: The name or ID of the disk to be shrunk

Here is an example of this command:


IBM_2145:admin>svctask shrinkvdisksize -size 1 -unit gb VD0_LINUX1

This command shrinks the VDisk VD0_LINUX1 by 1 GB to a new total size of 1 GB.

Showing the MDisks


Use the svcinfo lsvdiskmember command, as shown in Example 9-35, to show which MDisks
are used by a specific VDisk.

Example 9-35 svcinfo lsvdiskmember command
IBM_2145:admin>svcinfo lsvdiskmember VD0_LINUX1
id
6

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command as
explained in 9.3.2, “Managed disks” on page 221 (using the ID displayed above rather than
the name).

Showing the MDisk group


Use the svcinfo lsvdisk command, as shown in Example 9-36, to show to which MDG a
specific VDisk belongs.

Example 9-36 svcinfo lsvdisk command: MDG name


IBM_2145:admin>svcinfo lsvdisk VD0_LINUX1
id 8
name VD0_LINUX1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 3
mdisk_grp_name MDG3_DS43
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
virtual_disk_throttling (MB) 20
preferred_node_id 12
fast_write_state empty

If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command
as explained in 9.3.3, “Managed Disk Groups” on page 228.

Showing the host the VDisk is mapped to


To show the hosts to which a specific VDisk has been assigned, run the svcinfo
lsvdiskhostmap command as shown in Example 9-37.

Example 9-37 svcinfo lsvdiskhostmap command


IBM_2145:admin>svcinfo lsvdiskhostmap -delim : LINUX2
id:name:SCSI_id:host_id:host_name:wwpn:vdisk_UID
16:LINUX2:0:2:LINUX1:210000E08B04D451:600507680189801B2000000000000013
16:LINUX2:0:2:LINUX1:210100E08B24D451:600507680189801B2000000000000013

This command shows the host or hosts to which the VDisk LINUX2 is mapped. It is normal to
see duplicated entries, because there are multiple paths between the cluster and the host.
To ensure that the operating system on the host sees the disk only one time, you must
install and configure multipath software such as SDD. For more information, see Chapter 8,
“Host configuration” on page 153, where SDD is explained.

Note: Although the optional -delim : flag normally comes at the end of the command
string, in this case you must specify this flag before the VDisk name. Otherwise, the
command does not return any data.

You have now completed the tasks required to manage the hosts and VDisks within an SVC
environment.

9.4.3 Tracing a host disk back to its source physical disk


1. On your host, run the datapath query device command. You see a long disk serial
number for each vpath device, as shown in Example 9-38.

Example 9-38 datapath query device


C:\Program Files\IBM\Subsystem Device Driver>datapath query device
Total Devices : 1

DEV#: 0 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680183001AD00000000000000B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port4 Bus0/Disk3 Part0 OPEN NORMAL 4 0
1 Scsi Port5 Bus0/Disk3 Part0 OPEN NORMAL 3 0
2 Scsi Port5 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port4 Bus0/Disk3 Part0 OPEN NORMAL 0 0

2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks.
IBM_2145:admin>svcinfo lshostvdiskmap -delim : W2K
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
0:W2K:0:0:vdisk0:21000000B3474536:600507680183001AD00000000000000B
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as W2K.
3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks
that make up the specified VDisk:
IBM_2145:admin>svcinfo lsvdiskmember vdisk0
id
0
4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and
LUN number information, as shown in Example 9-39. The output displays the controller name
and the controller LUN ID, which should be enough (provided you named your controller
something unique such as a serial number) to track back to a LUN within the disk
subsystem.

Example 9-39 svcinfo lsmdisk command


IBM_2145:admin>svcinfo lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name mdiskgrp0
capacity 100.0GB
quorum_index 0
block_size 512

controller_name DS4301
ctrl_type 4
ctrl_WWNN 200400A0B80CEB7C
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 7

9.5 Managing copy services


See Chapter 11, “Copy Services: FlashCopy” on page 365, and Chapter 12, “Copy services:
Metro Mirror” on page 413, for more information.

9.6 Service and maintenance


This section details the various Service and maintenance tasks that you can execute within
the SVC environment.

9.6.1 Upgrading software


This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade package numbers is three positive integers separated by
dots. For example, a software upgrade package number looks similar to 1.15.0.

Each software package is given a unique number. Two software packages are fully
compatible if the first two numbers are identical. For example, 1.2.15 and 1.2.16 are fully
compatible. However, 1.2.15 and 1.3.0 are not compatible.

If two software packages are fully compatible, then you can upgrade or downgrade between
these packages. If two software packages are not fully compatible, then the new software
upgrade package contains code to upgrade from a list of previous software levels. In this
case, you can apply the software upgrade only if the cluster is currently running one of the
previous software levels. After you apply it, you cannot remove it.

New software utility


A software utility is planned (residing on the master console) that will check the software
levels in the system against the recommended levels documented on the support Web site.
You will be informed if the software levels are up-to-date, or if you need to download and
install newer levels.

After the software file has been uploaded to the cluster (to the /home/admin/upgrade
directory), it can be selected and applied to the cluster. This is performed by the Web script
using the svcservicetask applysoftware command. When a new code level is applied, it is
automatically installed on all the nodes within the cluster.

The underlying command line tool runs the script sw_preinstall, which checks the validity of
the upgrade file, and whether it can be applied over the current level. If the upgrade file is
unsuitable, the preinstall script deletes the files; this prevents the buildup of invalid files on
the cluster.

Precaution before upgrade
Software installation is normally considered to be a customer task. The SVC supports
concurrent software upgrade: the upgrade can be performed concurrently with user I/O
operations and some management activities. However, only the following CLI commands are
operational from the time the install command is started until the upgrade operation has
either terminated successfully or been backed out. All other commands fail with a message
indicating that a software upgrade is in progress.
svcinfo lsxxxx
svcinfo lsxxxxcandidate
svcinfo lsxxxxprogress
svcinfo lsxxxxmember
svcinfo lsxxxxextent
svcinfo lsxxxxdumps
svcinfo caterrlog
svcinfo lserrlogbyxxxx
svcinfo caterrlogbyseqnum
svctask rmnode

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working. Otherwise, the applications may have I/O failures during the software upgrade.

Note: During a software upgrade, there are periods when not all of the nodes in the cluster
are operational, and as a result the cache operates in write-through mode. This has an
impact on the throughput, latency, and bandwidth aspects of performance.

You must also ensure that all I/O paths are working for each host that is running I/O
operations to the SANs during the software upgrade. You can check the I/O paths by using
datapath query commands. Refer to IBM TotalStorage ESS, SAN Volume Controller, SAN
Volume Controller for Cisco, SC26-7608, for more information about datapath query
commands. You do not need to check for hosts that have no active I/O operations to the SAN
during the software upgrade.

Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Upload the new software package using PuTTY Secure Copy. Enter the following
command:
C:\pscp d:\svc_code\041116_full_tgz.gpg admin@SVC1:/upgrade
Authenticating with public key "rsa-key-20030514"
041116_full_tgz.gpg | 80394 kB | 2063.7 kB/s | ETA: 00:00:12 | 92%
2. Check the package was successfully delivered through the PuTTY command line
application by entering the svcinfo lssoftwaredumps command:
IBM_2145:admin>svcinfo lssoftwaredumps
id software_filename
0 041116_full.tgz.gpg
3. Now that the package is uploaded, use the hidden svcservicetask command set to apply
the software upgrade as shown here:
IBM_2145:admin>svcservicetask applysoftware -file 041116_full.tgz.gpg
4. The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted in turn. If a node does not restart automatically during
the upgrade, repair or manually delete that node from the cluster to complete the back out
process.

5. Eventually both nodes should display Cluster: on line 1 of the SVC front panel and the
name of your cluster on line 2. Be prepared for a long wait (we waited up to 40 minutes
during the upgrade).

Note: During this process, both your CLI and GUI vary from sluggish (very slow) to
unresponsive. The important thing is that I/O to hosts can continue.

6. To verify that the upgrade was successful, you can perform either of the following options:
– Run the svcinfo lscluster and svcinfo lsnodevpd commands as shown in
Example 9-40. The nodevpd information is truncated for purposes of this example. The
real output is much longer.

Example 9-40 svcinfo lscluster and lsnodevpd commands


IBM_2145:admin>svcinfo lscluster SVC1
id 0000020060C006B4
name SVC1
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.160
total_mdisk_capacity 2000.0GB
space_in_mdisk_grps 765.2GB
space_allocated_to_vdisks 34.3GB
total_free_space 1965.7GB
statistics_status off
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 522 UTC
email_setting none
email_id
code_level 0.6.04111600
FC_port_speed 2Gb

IBM_2145:admin>svcinfo lsnodevpd SVC1N1


software: 5 fields
code_level 0.6.04111600
node_name SVC1N1
ethernet_status 1
WWNN 0x500507680100035a
id 1

IBM_2145:admin>svcinfo lsnodevpd SVC1N2


software: 5 fields
code_level 0.6.04111600
node_name SVC1N2
ethernet_status 1
WWNN 0x5005076801000364
id 2

– Copy the error log to your management workstation as explained in 9.6.2, “Running
maintenance procedures” on page 249. Open it in WordPad and search for Software
Install completed.

You have now completed the tasks required to upgrade the SVC software.

9.6.2 Running maintenance procedures


Use the svctask finderr command to generate a list of any unfixed errors in the system.
This command analyzes the last generated log that resides in the /dumps/elogs/ directory on
the cluster.

If you want to generate a new log before analyzing for unfixed errors, run the svctask
dumperrlog command:
IBM_2145:admin>svctask dumperrlog

This generates a file called errlog_timestamp, such as errlog_000667_041111_174042,


where:
򐂰 errlog is the default prefix for all error log files
򐂰 000667 is the panel name of the current configuration node
򐂰 041111 is the date (YYMMDD)
򐂰 174042 is the time (HHMMSS)

You can add the -prefix parameter to your command to change the default prefix of errlog
to something else, for example:
svctask dumperrlog -prefix svcerrlog

This command creates a file called svcerrlog_timestamp.

To see what the filename is you must enter the following command:
IBM_2145:admin>svcinfo lserrlogdumps
id filename
0 errlog_000683_041111_174042

Note: A maximum of ten error log dump files per node will be kept on the cluster. When the
eleventh dump is made, the oldest existing dump file for that node will be overwritten. Note
that the directory may also hold log files retrieved from other nodes. These files are not
counted. The SVC will delete the oldest file (when necessary) for this node in order to
maintain the maximum number of files. The SVC will not delete files from other nodes
unless the cleandumps command is issued by the user.

After you generate your error log, you can issue the svctask finderr command to scan it for
any unfixed errors as shown here:
IBM_2145:admin>svctask finderr
Highest priority unfixed error code is [1060]

As you can see, we have one unfixed error on our system.

To analyze the unfixed error in more detail, you need to download the error log onto your
own PC and look at it there. Use the PuTTY Secure Copy process to copy the file from the
cluster to your local management workstation, as shown in Example 9-41 on page 250.

Example 9-41 pscp command: Copy error logs off SVC
In W2K3 → Start → Run →

C:\D-drive\Redbook-4H02\PuTTY\pscp admin@SVC1:/dumps/elogs/errlog_000683_041111_174042
c:\SVC_Dumps\errlog.txt

This opens a new connection to the SVC1.


Wait until you see 100% on the screen
Authenticating with public key "rsa-key-20030514"
errlog.txt | 367 kB | 367.1 kB/s | ETA: 00:00:00 | 100%

Using the Run option, you must know where your pscp.exe is located. In this case it is in
C:\D-drive\Redbook-4H02\PuTTY\

This command copies the file called errlog_000683_041111_174042 to the C:\SVC_Dumps
directory on our local workstation and names the file errlog.txt.

Open the file in WordPad (Notepad does not format the screen as well). You should see the
information similar to Example 9-42. The list was truncated for purposes of this example.

Example 9-42 errlog in WordPad


//-------------------
// Error Log Entries
// ------------------

Error Log Entry 0


Node Identifier : SVC1N1
Object Type : cluster
Object ID : 0
Sequence Number : 101
Root Sequence Number : 101
First Error Timestamp : Wed Nov 3 15:10:37 2004
: Epoch + 1099512637
Last Error Timestamp : Wed Nov 3 15:10:37 2004
: Epoch + 1099512637
Error Count : 1
Error ID : 981001 : Cluster Fabric View updated by fabric discovery
Error Code :
Status Flag :
Type Flag : INFORMATION

06 00 00 00 01 00 00 00 00 00 00 00 01 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 04 02 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Scrolling through, or searching for the term “unfixed” (without quotation marks), you should
find more detail about the problem. There can be more entries in the error log that have a
status of unfixed.

After you take the necessary steps to rectify the problem, you can mark the error as fixed in
the log by issuing the svctask cherrstate command against its sequence numbers:
IBM_2145:admin>svctask cherrstate -sequencenumber 195

IBM_2145:admin>svctask cherrstate -sequencenumber 197

If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by
entering the same command and appending the -unfix flag to the end, for example:
svctask cherrstate -sequencenumber 195 -unfix

9.6.3 Setting up error notification


To set up error notification, use the svctask setevent command. The full syntax of the
setevent command is:
svctask setevent [-snmptrap all|no_state|none] [-snmpip ip_address] [-community community]

Note the following explanation:


򐂰 snmptrap: When to raise a trap
򐂰 snmpip: IP address of host running SNMP
򐂰 community: SNMP community

An example of the setevent command is shown here:


IBM_2145:admin>svctask setevent -snmptrap all -snmpip 9.42.164.160 -community SVC

This command sends all events (errors and changes in state) to the SVC community on the
SNMP manager with the IP address 9.42.164.160.

9.6.4 Analyzing the error log


The following types of events and errors are logged in the error log:
򐂰 Events: State changes that are detected by the cluster software and that are logged for
informational purposes. Events are recorded in the cluster error log.
򐂰 Errors: Hardware or software problems that are detected by the cluster software and that
require some repair. Errors are recorded in the cluster error log.
򐂰 Unfixed errors: Errors that were detected and recorded in the cluster error log and that
have not yet been corrected or repaired.
򐂰 Fixed errors: Errors that were detected and recorded in the cluster error log and that have
subsequently been corrected or repaired.

To display the error log, use the svcinfo lserrlog or svcinfo caterrlog commands as
shown in Example 9-43 (output is the same).

Example 9-43 svcinfo caterrlog command


IBM_2145:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:fi
rst_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:6:SVC1N1:101:101:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:102:102:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:103:102:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:104:102:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:105:102:041103201052:041103201052:1:00981001
0:cluster:no:yes:6:SVC1N1:106:106:041103202640:041103202640:1:00981001
0:cluster:no:yes:6:SVC1N1:107:106:041103202640:041103202640:1:00981001
0:cluster:no:yes:6:n/a:108:108:041103203957:041103203957:1:00981001
2:node:no:yes:6:n/a:109:109:041103203957:041103203957:1:00987102
0:cluster:no:yes:6:n/a:110:108:041103203957:041103203957:1:00981001
1:node:no:yes:6:SVC1N1:9000002:9000002:041104010001:041104010001:1:00988100
0:cluster:no:yes:6:SVC1N1:111:111:041104175926:041104175926:1:00981001

0:cluster:no:yes:6:n/a:112:111:041104175935:041104175935:1:00981001
0:flash:no:yes:6:n/a:113:113:041104223227:041104223227:1:00983001
0:flash:no:yes:6:n/a:114:114:041104223728:041104223728:1:00983003
2:fc_const_grp:no:yes:6:n/a:115:115:041104225231:041104225231:1:00983001
0:cluster:no:yes:6:n/a:116:116:041104225532:041104225532:1:00981001
2:node:no:yes:6:SVC1N1:117:117:041104225532:041104225532:1:00980371
0:cluster:no:yes:6:SVC1N1:118:116:041104225532:041104225532:1:00981001
0:cluster:no:yes:6:SVC1N1:119:119:041104225917:041104225917:1:00981001
0:cluster:no:yes:6:n/a:120:120:041104225932:041104225932:1:00981001
0:cluster:no:yes:6:SVC1N1:121:120:041104225932:041104225932:1:00981001
0:cluster:no:yes:6:n/a:122:122:041104230103:041104230103:1:00981001
3:node:no:yes:6:SVC1N1:123:123:041104230103:041104230103:1:00980371
.........

This command displays the error log that was last generated. Use the method described in
9.6.2, “Running maintenance procedures” on page 249, to upload and analyze the error log in
more detail.

To clear the error log, you can issue the svctask clearerrlog command as shown here:
IBM_2145:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag will stop any confirmation requests from appearing.
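
For example:

IBM_2145:admin>svctask clearerrlog -force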

When executed, this command clears all entries from the error log. It proceeds even if there
are unfixed errors in the log, and it also clears any status events that are in the log.

This is a destructive command for the error log. Use it only when you have either rebuilt the
cluster or fixed a major problem that caused many entries in the error log that you do not
want to fix manually.

9.6.5 Setting features


To change the licensing feature settings, use the svctask chlicense command. The full
syntax of the svctask chlicense command is:
svctask chlicense [-flash on|off] [-remote on|off] [-size capacity]

Note the following explanation:


򐂰 flash: Enable/disable FlashCopy
򐂰 remote: Enable/disable Metro Mirror (Peer-to-Peer Remote Copy (PPRC))
򐂰 size: Set licensed capacity (in GBs)

All three arguments are mutually exclusive.

Before you change the licensing, see what license you already have by issuing the svcinfo
lslicense command as shown in Example 9-44.

Example 9-44 svcinfo lslicense command


IBM_2145:admin>svcinfo lslicense
feature_flash on
feature_remote off
feature_num_gb 2000

Consider, for example, that you have purchased an additional 4 TB of licensing and the
PPRC premium feature. The command you need to enter is shown here:

IBM_2145:admin>svctask chlicense -remote on

IBM_2145:admin>svctask chlicense -size 6000

The first command turns the remote copy feature on. The second command changes the
licensed capacity to 6000 GB (6 TB), which is 4 TB more than it was.

To verify that the changes you made are reflected in your SVC configuration, you can issue
the svcinfo lslicense command as before. See Example 9-45.

Example 9-45 svcinfo lslicense command: Verifying changes


IBM_2145:admin>svcinfo lslicense
feature_flash on
feature_remote on
feature_num_gb 6000

9.6.6 Viewing the feature log


To view the feature log using the CLI, you must first create a feature log dump. Then copy the
feature log to your management workstation using PuTTY Secure Copy. Finally, open the file
in WordPad.

To create the feature log dump, enter the svctask dumpinternallog command as shown
here:
IBM_2145:admin>svctask dumpinternallog

This creates a file called feature.txt in the /dumps/feature directory on the cluster. To
verify that the file was created, you can enter the svcinfo lsfeaturedumps command as
shown here:
IBM_2145:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt

Note: Only one of these files exists. Therefore, each time you run the dumpinternallog
command, it overwrites any existing feature.txt file.

Now that you created the file, you must copy it to your management workstation using PuTTY
Secure Copy as shown here:
C:\PuTTY>pscp admin@SVC1:/dumps/feature/feature.txt c:\svc_dumps\feature.txt
Authenticating with public key "rsa-key-20030514"
feature.txt | 18 kB | 18.5 kB/s | ETA: 00:00:00 | 100%

Now open the file in WordPad (Notepad formats it poorly) to view the output. It should look
similar to Example 9-46. The output list was truncated for the purposes of this example.

Example 9-46 Feature dump in WordPad


//---------------------
//---------------------
// Feature Log Entries
//---------------------
time type value0 value1 value2 value3 value4 value5
3ebfe92f 00000001 00000000 00000000 00000000 00000000 00000000 00000000
3ebfe92f 00000003 00000000 00000000 00000000 00000000 00000000 00000000
3ebfe92f 00000005 00000000 00000000 00000400 00000000 00000000 00000000

3ecb9ddf 00000005 00000000 00000400 000007d0 0000015c 00000000 00000000
3ed286b8 00000003 00000000 00000000 00000000 00000000 00000000 00000000
3ed286bd 00000005 00000000 000007d0 00001770 0000002e 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
........
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
30400364 00000100
aa226003 f9e4c5f1 829a130c 6d12be12

9.6.7 Dump the configuration


To dump configuration information, use the svctask dumpconfig command as shown here:
IBM_2145:admin>svctask dumpconfig -prefix SVC1
The configuration data has been written to /dumps/configs/SVC1_000683_041111_201243

This command creates a configuration dump with the prefix of SVC1 and places it in the
/dumps/configs directory on the cluster. The SVC automatically appends a time stamp to the
file name specified as shown where:
- 000683 is the panel name of the current configuration node.
- 041111 is the date (YYMMDD).
- 201243 is the time (HHMMSS).

Now that the configuration is dumped, you can copy it to your management workstation using
PuTTY Secure Copy as shown in Example 9-47.

Example 9-47 pscp: Copying configuration dumps


C:\PuTTY>pscp admin@SVC1:/dumps/configs/SVC1_000683_041111_201243
c:\svc_dumps\SVC1_201243.txt
Authenticating with public key "rsa-key-20030514"
SVC1_201243.txt | 17 kB | 17.6 kB/s | ETA: 00:00:00 | 100%

Open the file in WordPad (Notepad does not format well) to view the output. It should look
similar to Example 9-48. The output was truncated for purposes of this example.

Example 9-48 config dump in WordPad


cluster_view_array[0] =
{
name : SVC1
unique_id : 10030400364
state : ICOC_OBJ_STATE_UNINITIALISED
used : TRUE
}
controller count = 1
device_view_array[0] =
{
name : DS4301
product_serial_number_array :
vendor_id : 5278866513604911136
product_id_low : 3546358422665834544
product_id_high : 2314885530818453536
used : TRUE

host count = 9
name : AIX_270
port_count : 2
state : ICOC_OBJ_STATE_OFFLINE
used : TRUE
}
name : LINUX1
port_count : 2
state : ICOC_OBJ_STATE_OFFLINE
used : TRUE

9.6.8 Listing dumps


Several commands are available for you to list the dumps that were generated over a period
of time. You can use the lsxxxxdumps commands, where xxxx identifies the type of dump, to
return a list of dumps in the appropriate directory.
svcinfo lsconfigdumps [node_id | node_name]
svcinfo lserrlogdumps [node_id | node_name]
svcinfo lsfeaturedumps [node_id | node_name]
svcinfo lsiostatsdumps [node_id | node_name]
svcinfo lsiotracedumps [node_id | node_name]
svcinfo lssoftwaredumps [node_id | node_name]
svcinfo ls2145dumps [node_id | node_name]

If no node is specified, the dumps available on the configuration node are listed.

When executed, each command returns a list of the dump files found in the relevant directory
of the specified node. The ls2145dumps command displays a list of dumps relating to node
assets, including actual node dumps, which are typically named dump.nnn.xxx.yyy, and trace
files, which are typically named nnn.trc. Here nnn, xxx, and yyy are variable-length character
sequences, and xxx and yyy relate to the date and time of the dump.
- lsconfigdumps displays files from the /dumps/configs directory
- lserrlogdumps displays error log files from the /dumps/elogs directory
- lsfeaturedumps displays feature log files from the /dumps/feature directory
- lsiotracedumps displays I/O trace files from the /dumps/iotrace directory
- lsiostatsdumps displays I/O statistics files from the /dumps/iostats directory
- lssoftwaredumps displays software upgrade files from the /home/admin/upgrade directory
- ls2145dumps displays a list of node_assert dumps from the /dumps directory

Software upgrade packages are contained in the /home/admin/upgrade directory. These
directories exist on every node in the cluster.

Configuration dump
Dumps contained in the /dumps/configs directory are dumps of the cluster configuration data.
A configuration dump is created by using the svctask dumpconfig command. This dumps the
configuration of the cluster, including all object details, to the /dumps/configs directory. If no
filename prefix is supplied, the default config_ is used. The full, default file name is
config_NNNNNN_YYMMDD_HHMMSS. Here NNNNNN is the node front panel name. If the
command is used with the -prefix option, then the value entered for the -prefix is used instead
of config.

The command to list all dumps in the /dumps/configs directory is svcinfo lsconfigdumps.

Error or event dump
Dumps contained in the /dumps/elogs directory are dumps of the contents of the error and
event log at the time that the dump was taken. An error or event log dump is created by using
the svctask dumperrlog command. This dumps the contents of the error or event log to the
/dumps/elogs directory. If no filename prefix is supplied, the default errlog_ is used. The full,
default file name is errlog_NNNNNN_YYMMDD_HHMMSS. Here NNNNNN is the node front
panel name. If the command is used with the -prefix option, then the value entered for the
-prefix is used instead of errlog.

The command to list all dumps in the /dumps/elogs directory is svcinfo lserrlogdumps.
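
For example, the following sketch dumps the error log with a hypothetical prefix of myerrlog
and then lists the result (the generated file name shown is illustrative only):
IBM_2145:admin>svctask dumperrlog -prefix myerrlog
IBM_2145:admin>svcinfo lserrlogdumps
id filename
0 myerrlog_000683_041111_183022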

Featurization log dump


Dumps contained in the /dumps/feature directory are dumps of the featurization log. A
featurization log dump is created by using the svctask dumpinternallog command. This
dumps the contents of the featurization log to the /dumps/feature directory to a file called
feature.txt. Only one of these files exists, so every time the svctask dumpinternallog
command is run, this file is overwritten.

The command to list all dumps in the /dumps/feature directory is svcinfo lsfeaturedumps.

I/O statistics dump


Dumps contained in the /dumps/iostats directory are dumps of the I/O statistics for disks on
the cluster. An I/O statistics dump is created by using the svctask startstats command. As
part of this command, you can specify a time interval at which you want the statistics to be
written to the file (the default is 15 minutes). Every time the time interval is encountered, the
I/O statistics that are collected up to this point are written to a file in the /dumps/iostats
directory. The file names used for storing I/O statistics dumps are
m_stats_NNNNNN_YYMMDD_HHMMSS, or v_stats_NNNNNN_YYMMDD_HHMMSS,
depending on whether the statistics are for MDisks or VDisks. Here NNNNNN is the node
front panel name.

The command to list all dumps in the /dumps/iostats directory is svcinfo lsiostatsdumps.
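
For example, the following sketch starts the collection of statistics at the default 15-minute
interval (the -interval value is in minutes):
IBM_2145:admin>svctask startstats -interval 15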

I/O trace dump


Dumps contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of
data that is traced depends on the options specified by the svctask settrace command. The
collection of the I/O trace data is started by using the svctask starttrace command. The I/O
trace data collection is stopped when the svctask stoptrace command is used. When the
trace is stopped, the data is written to the file. The file name is
prefix_NNNNNN_YYMMDD_HHMMSS. Here NNNNNN is the node front panel name, and
prefix is the value entered by the user for the -filename parameter in the svctask settrace
command.

The command to list all dumps in the /dumps/iotrace directory is svcinfo lsiotracedumps.
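
As a sketch of how these commands fit together, the following sequence sets a hypothetical
file name prefix of vdisktrace, starts the trace, and stops it again. The settrace options that
select the data to trace are omitted here and depend on your needs (see the Command-Line
Interface User's Guide for the full syntax):
IBM_2145:admin>svctask settrace -filename vdisktrace
IBM_2145:admin>svctask starttrace
IBM_2145:admin>svctask stoptrace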

Application abends dump


Dumps contained in the /dumps directory are dumps resulting from application abends. Such
dumps are written to the /dumps directory. The default file names are
dump.NNNNNN.YYMMDD.HHMMSS. Here NNNNNN is the node front panel name. In
addition to the dump file, it is possible that there may be some trace files written to this
directory. These are named NNNNNN.trc.

The command to list all dumps in the /dumps directory is svcinfo ls2145dumps.

Software dump
The final option available in the svcinfo lsxxxxdumps command series is the svcinfo
lssoftwaredumps command. This command lists the contents of the /home/admin/upgrade
directory. Any files in this directory are copied there at the time that you want to perform a
software upgrade.

Example 9-49 shows these commands.

Example 9-49 Listing dumps


IBM_2145:admin>svcinfo lsconfigdumps
id config_filename
0 SVC1_000683_041111_173553
1 SVC1_000683_041111_173949

IBM_2145:admin>svcinfo lserrlogdumps
id filename
0 errlog_000683_041111_120658
1 errlog_000683_041111_170137
2 errlog_000683_041111_171618
3 errlog_000683_041111_171725

IBM_2145:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt

IBM_2145:admin>svcinfo lsiostatsdumps
id iostat_filename
0 m_stats_000683_041111_145411
1 v_stats_000683_041111_145412
2 m_stats_000683_041111_150911
3 v_stats_000683_041111_150912
4 v_stats_000683_041111_152412
5 m_stats_000683_041111_152412

IBM_2145:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.cron.bak_node2
1 svc.config.cron.xml_node2
2 svc.config.cron.log_node2
3 svc.config.cron.sh_node2
4 svc.config.cron.bak_node3
5 svc.config.cron.log_node3
6 svc.config.cron.xml_node3
7 svc.config.cron.sh_node3
8 dump.000683.041012.201906
9 dump.000683.041028.200908
10 000683.trc.old
11 000683.messages.gz
12 000683.trc
13 ups_log.a
14 ups_log.b

IBM_2145:admin>svcinfo lsiotracedumps
id iotrace_filename
0 vdisktrace_000683_041111_162005

IBM_2145:admin>svcinfo lssoftwaredumps
id software_filename

0 040929_full.tgz.gpg

Other node dumps


All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example,
append the node name to the end of any of the above commands). If this identifier is not
specified, then the list of files on the current configuration node (in our case ITSO_node2) is
displayed. If the node identifier is specified, then the list of files on that node is displayed.

However, files can only be copied from the current configuration node (using PuTTY Secure
Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a
non-configuration node to the current configuration node. Subsequently, you can copy them to
the management workstation using PuTTY Secure Copy.

For example, suppose you issue the svcinfo lsconfigdumps ITSO_node1 command and
discover a dump file that you want to copy to your management workstation for further
analysis. In this case, you must first copy the file to your current configuration node.

To copy dumps from other nodes to the configuration node, use the svctask cpdumps
command. The full syntax of the svctask cpdumps command is:
svctask cpdumps -prefix prefix name|id

Note the following explanation:
- prefix: Directory and/or files to retrieve
- name|id: Name or ID of the node from which the dumps must be retrieved

The prefix you enter depends on which dumps you want to retrieve from the remote node. The
valid -prefix directories are:
- /dumps
- /dumps/iostats
- /dumps/iotrace
- /dumps/feature
- /dumps/config
- /dumps/elog
- /home/admin

In addition to the directory, a file filter or wildcard can be specified. For example,
/dumps/elog/*.txt retrieves all files in the /dumps/elog directory that end in .txt.

When using the wildcard *, you must follow one of these quoting rules:
- svctask cpdumps -prefix '/dumps/*.txt' (single quotation marks)
- svctask cpdumps -prefix /dumps/\*.txt (backslash)
- svctask cpdumps -prefix "/dumps/*.txt" (double quotation marks)

If the node specified is the current configuration node, no file will be copied.

An example of the command is shown here:


IBM_2145:admin>svctask cpdumps -prefix /dumps/configs SVC1N1

Now that you have copied the configuration dump file from SVC1N1 to your configuration
node, you can use PuTTY Secure Copy to copy the file to your management workstation for
further analysis, as described earlier.
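
Putting the two steps together, a minimal sketch of the complete retrieval sequence, using
the node and file names from the earlier examples, is:
IBM_2145:admin>svctask cpdumps -prefix /dumps/configs SVC1N1
C:\PuTTY>pscp admin@SVC1:/dumps/configs/SVC1_000683_041111_201243 c:\svc_dumps\SVC1_201243.txt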

To clear the dumps, you can run the svctask cleardumps command. Again, you can append
the node name if you want to clear dumps off a node other than the current configuration
node (the default for the svctask cleardumps command). The full syntax of the command is:
svctask cleardumps -prefix prefix [name|id]

Note the following explanation:
- prefix: Directory or file filter
- name|id: Name or ID of the node. If not specified, the configuration node is cleaned.

Here, -prefix must be one of these directories:
- /dumps
- /dumps/iostats
- /dumps/iotrace
- /dumps/feature
- /dumps/config
- /dumps/elog
- /home/admin

In addition to the directory, a file filter or wildcard can be specified. For example,
/dumps/elog/*.txt clears all files in the /dumps/elog directory that end in .txt.

When using the wildcard *, you must follow one of these quoting rules:
- svctask cleardumps -prefix '/dumps/*.txt' (single quotation marks)
- svctask cleardumps -prefix /dumps/\*.txt (backslash)
- svctask cleardumps -prefix "/dumps/*.txt" (double quotation marks)

The commands in Example 9-50 clear all logs or dumps from the SVCN1 node.

Example 9-50 svctask cleardumps command


IBM_2145:admin>svctask cleardumps -prefix /dumps SVCN1
IBM_2145:admin>svctask cleardumps -prefix /dumps/iostats SVCN1
IBM_2145:admin>svctask cleardumps -prefix /dumps/iotrace SVCN1
IBM_2145:admin>svctask cleardumps -prefix /dumps/feature SVCN1
IBM_2145:admin>svctask cleardumps -prefix /dumps/config SVCN1
IBM_2145:admin>svctask cleardumps -prefix /dumps/elog SVCN1
IBM_2145:admin>svctask cleardumps -prefix /home/admin SVCN1

9.7 SVC cluster configuration backup and recovery


The SVC configuration data is stored on all the nodes in the cluster. It is specially hardened
so that, in normal circumstances, the SVC should never lose its configuration settings.
However, in exceptional circumstances, such as a rogue fire sprinkler soaking the SVC cluster
or a multiple hardware failure, this data may become corrupted or lost.

This section details the tasks that you can perform to save the configuration data from a SVC
configuration node and restore it. The following configuration information is backed up:
- SVC cluster
- Storage controllers
- Hosts
- I/O groups
- Software licenses
- Managed disks
- MDGs
- SVC nodes
- SSH keys
- Virtual disks
- VDisk-to-host mappings

Important: Before you begin the restore process, consult IBM Support to determine the
cause as to why you cannot access your original configuration data. After the restore
process starts, the original data on the VDisks is destroyed. Therefore, you must ensure
that you have a backup of all user data on the VDisks.

The svcconfig command line tool is a script, used under the CLI, to save and restore
configuration data. It uses secure communications to communicate with a configuration node.
The tool is designed to work if the hardware configuration for restoration is identical to that
during saving.

The prerequisites for a successful backup are:
- All nodes in the cluster must be online.
- No object name may begin with an underscore (_).
- Do not run any independent operations that could change the cluster configuration while
the backup command runs.
- Do not make any changes to the fabric or the cluster between backup and restore. If
changes are made, back up your configuration again, or you may not be able to restore it
later.

Note: We recommend that you make a backup of the SVC configuration data after each
major change in the environment, such as defining or changing VDisks, VDisk-to-host
mappings, and so on. In addition, you can make a backup after every change. Be aware that
only two versions of the backup file are maintained for each cluster (the previous one has
.bak appended), unless you copy the XML or XML BAK files to another folder.

9.7.1 Backing up the SVC cluster configuration


You can back up your cluster configuration by using the Backing Up a Cluster Configuration
panel or the CLI svcconfig command. This section describes the overall procedure for
backing up your cluster configuration and the conditions that must be satisfied to perform a
successful backup.

Important: We recommend that you make a backup of the SVC configuration data after
each major change in the environment, such as defining or changing VDisks,
VDisk-to-host mappings, and so on.

The backup command extracts configuration data from the cluster and saves it to
svc.config.backup.xml in /tmp. A file svc.config.backup.sh is also produced. You can study
this file to see what other commands were issued to extract information.

A log svc.config.backup.log is also produced. You can study this log for details in regard to
what was done and when. This log also includes information about the other commands
issued.

Any pre-existing file svc.config.backup.xml is archived as svc.config.backup.bak. Only one
such archive is kept. We recommend that you immediately move the .XML file and related
KEY files (see the limitations below) off the cluster for archiving, and then erase the files from
/tmp using the svcconfig clear -all command. We also recommend that you change all
objects having default names to non-default names. Otherwise, a warning is produced for
each object with a default name, and such an object is restored with its original name with
“_r” appended. The prefix _ (underscore) is reserved for backup and restore command usage,
and should not be used in any object names.

Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool, but supplements such a tool with a way
to back up and restore the client's configuration. To provide a complete backup and disaster
recovery solution, you must back up both user (non-configuration) data and
configuration (non-user) data. After restoration of the SVC configuration, the client is
expected to fully restore user (non-configuration) data to the cluster's disks.

Prerequisites
You must have the following prerequisites in place:
- All nodes must be online.
- No object name may begin with an underscore.
- All objects should have non-default names, that is, names that are not assigned by the
SAN Volume Controller.

Although we recommend that objects have non-default names at the time the backup is
taken, this is not mandatory. Objects with default names are renamed when they are restored.

Example 9-51 shows an example of the svcconfig backup command.

Example 9-51 svcconfig backup command


IBM_2145:admin>svcconfig backup
............
CMMVC6112W io_grp io_grp1 has a default name
.
CMMVC6112W io_grp io_grp2 has a default name
.
CMMVC6112W io_grp io_grp3 has a default name
.
CMMVC6112W io_grp recovery_io_grp has a default name
.................
CMMVC6136W No SSH key file svc.config.barry.admin.key
CMMVC6136W No SSH key file svc.config.service.service.key
...................
IBM_2145:admin>svcconfig clear -all
IBM_2145:admin>

Example 9-52 shows the pscp command.

Example 9-52 pscp command


C:\Support Utils\Putty>pscp admin@SVC1:/tmp/svc.config.backup.xml c:\clibackup.xml
Authenticating with public key "rsa-key-20031031"
clibackup.xml | 22 kB | 22.2 kB/s | ETA: 00:00:00 | 100%

C:\Support Utils\Putty>
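
If you back up the configuration regularly, you can combine the backup and the copy in a
small script. The following sketch assumes that PuTTY's command line tools (plink and pscp)
are in your path and that key authentication for admin@SVC1 is already configured, as in the
examples above:
C:\Support Utils\Putty>plink admin@SVC1 svcconfig backup
C:\Support Utils\Putty>pscp admin@SVC1:/tmp/svc.config.backup.xml c:\clibackup.xml
C:\Support Utils\Putty>plink admin@SVC1 svcconfig clear -all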

Context
The following scenario illustrates the value of configuration backup:

1. Use the svcconfig command to create a backup file on the cluster that contains details
about the current cluster configuration.
2. Store the backup configuration on some form of tertiary storage. You must copy the
backup file from the cluster or it becomes lost if the cluster crashes.
3. If a severe enough failure occurs, the cluster may be lost. Both configuration data (for
example, the cluster definitions of hosts, I/O groups, MDGs, MDisks) and the application
data on the virtualized disks are lost. In this scenario, it is assumed that the application
data can be restored from normal client backup procedures. However, before you can
carry this out, you must reinstate the cluster, as configured at the time of the failure. This
means you restore the same MDGs, I/O groups, host definitions, and the VDisks that
existed prior to the failure. Then you can copy the application data back onto these VDisks
and resume operations.
4. Recover the hardware. This includes hosts, SVCs, disk controller systems, disks, and
SAN fabric. The hardware and SAN fabric must physically be the same as those used
before the failure.
5. Re-initialize the cluster.
6. Restore your cluster configuration using the backup configuration file generated prior to
the failure.
7. Restore the data on your virtual disks (VDisks) using your preferred restore solution or
with help from IBM Service.
8. Resume normal operations.

9.7.2 Restoring the SVC cluster configuration

Important: Always consult IBM Support before restoring the SVC cluster configuration from
backup, so that the cause of the loss of your cluster configuration can be determined. After
the svcconfig restore -execute command is started, any prior user data on the VDisks
should be considered destroyed and must be recovered from your usual application data
backup process.

See also IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, SC26-7544.

For a detailed description of the SVC configuration backup and restore functions, see IBM
TotalStorage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543.

9.7.3 Deleting configuration backup


This section details the tasks that you can perform to delete the configuration backup files
from the default folder on the SVC master console. You can do this if you have already copied
them to an external and secure place.

Delete the SVC configuration backup files by using the svcconfig clear -all command.

9.8 Cookie crumbs recovery process


There is a procedure called “cookie crumbs recovery” that has been tested and used in select
cases where the cluster has been completely destroyed. (An example would be
simultaneously pulling the power cords from all nodes to their UPSs; in this case, all nodes
would boot up to node error 578 when power was restored.) In certain circumstances, this
procedure can recover most user data. It is not to be used by the customer or an IBM CE
without direct involvement from IBM level 3 support. It is not published, but is referred to here
only to indicate that the loss of a cluster may be recoverable without the total data loss that
would require restoring application data from backup. It is a very sensitive procedure, to be
used only as a last resort, and it cannot recover any data that was unstaged from cache at the
time of the total cluster failure.

9.9 Scripting and its usage under CLI for SVC task automation
Scripting is well suited to automating regular operational jobs, and you can use any available
shell to develop your scripts. To run scripts on an SVC master console whose operating
system is Windows 2000, you can either purchase licensed shell emulation software or
download Cygwin from:
http://www.cygwin.com

Scripting enhances the productivity of SVC administrators and the integration of their storage
virtualization environment.

We show an example of scripting in Appendix C, “Scripting” on page 625.

You can create your own customized scripts to automate a large number of tasks for
completion at a variety of times and run them through the CLI.
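
As a simple illustration, the following minimal sketch of a shell script (for example, run under
Cygwin) uses PuTTY's plink to execute a CLI command on the cluster and saves the output
with a date stamp. The cluster name SVC1 and the output directory are assumptions for this
example, and key authentication for admin@SVC1 is assumed to be configured:

#!/bin/bash
# Sketch: capture the current VDisk inventory from the SVC cluster.
# Assumes plink is in the PATH and key authentication to admin@SVC1 is set up.
OUTDIR=/cygdrive/c/svc_reports
STAMP=$(date +%y%m%d_%H%M%S)
mkdir -p $OUTDIR
# Run the CLI command remotely and keep the output for later review.
plink admin@SVC1 "svcinfo lsvdisk -delim :" > $OUTDIR/vdisks_$STAMP.txt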

Chapter 10. SVC configuration and administration using the GUI

This chapter describes how to use the IBM TotalStorage SAN Volume Controller graphical
user interface (GUI). It allows you to perform additional and advanced configuration and
administration tasks, which are not covered in Chapter 7, “Quickstart configuration using the
GUI” on page 119.

10.1 Managing the cluster using the GUI
This section explains the various configuration and administration tasks that you can perform
on the cluster. All screen shots were taken with GUI version 2.1.0.0a, which is available for
SVC V2.1.

Installing certificates
You may have already accepted certificates, as suggested in 7.1.1, “Installing certificates” on
page 127. If you did not, you may notice many instances where you are prompted with
security warnings regarding unrecognized certificates. Return to 7.1.1, “Installing certificates”
on page 127, and complete those steps to avoid these messages.

10.1.1 Organizing on-screen content


In the following sections, there are several panels within the SVC GUI for you to perform
filtering (to minimize the amount of data shown on screen) and sorting (to organize the
content on the screen). Since we do not cover these functions elsewhere, this section
provides a brief overview of these functions.

For the purpose of illustration, on the SVC Welcome page, we click the Work with Virtual
Disks option and click the Virtual Disks link.

Filtering
There are two types of filtering: prefiltering and additional filtering.

Prefiltering
When you select an item, such as the Virtual Disks link, you are first prompted with a filtering
panel (Figure 10-1). On this panel, you can provide the criteria from which the SVC generates
the next panel.

Figure 10-1 Filtering

Note: The asterisk (*) acts as a wildcard.

After you have entered the filter criteria (in our case, *W2K*), click the OK button. Then the
Viewing Virtual Disks panel (Figure 10-2) is displayed. It shows only those virtual disks
(VDisks) that meet your filtering criteria.

Figure 10-2 Viewing Virtual Disks: Filtered view

You cannot clear this filter by selecting Clear All Filters from the list and clicking Go, nor by
clicking the small icon highlighted by the mouse pointer in Figure 10-2; both of these apply
only to the additional filtering method. To clear the prefilter, you must close the Viewing
VDisks panel and reopen it by clicking the Bypass Filter button.

Additional Filtering
When you are in the VDisk list, you can use the additional filter option, regardless of whether
you used the prefilter to get this list. Click the additional filter icon as shown in Figure 10-3
on page 268, or select Show Filter Row in the pull-down menu and click Go. This brings you
to the next screen, as shown in Figure 10-4 on page 268.

Figure 10-3 Additional filter Icon

Figure 10-4 Show filter row

Here you can select the column on which you want to use the additional filter option. Clicking
the word none brings you to the screen in Figure 10-5 on page 269, where you can filter on
the Name field. Enter AIX in the text field, select the correct condition, and then click OK.

Figure 10-5 Filter option on Name

A list is displayed that contains only the VDisks with AIX somewhere in the name, as shown in
Figure 10-6. If you want, you can apply additional filtering on the other columns to narrow
your view.

Figure 10-6 Filtered on Name containing AIX

The options to reset the filters are shown in Figure 10-7: click the clear all filters icon, or
select the Clear All Filters option in the pull-down menu and click Go.

Figure 10-7 Clear all filter options

Sorting
Regardless of whether you use the prefilter or the additional filter options, when you are on
the Viewing Virtual Disks panel, you can sort the displayed data by selecting Edit Sort from
the list and clicking Go.

Or you can click the small icon highlighted by the mouse pointer in Figure 10-8 on page 271.

Figure 10-8 Selecting Edit Sort

As shown in Figure 10-9, you can sort based on up to three criteria, including name, I/O
group name, status, MDisk group name, capacity (MB), type, FC pair name, and MM name.

Note: The actual sort criteria differs based on the information that you are sorting.

Figure 10-9 Sorting criteria

When you finish making your choices, click OK to regenerate the display based on your
sorting criteria.

If you want to clear the sort, simply select Clear All Sorts from the list and click Go. Or click
the icon highlighted by the mouse pointer in Figure 10-10.

Figure 10-10 Selecting to clear all sorts

Documentation
If you need to access online documentation, click the i icon in the upper right corner of the
panel. This opens the Help Assistant panel on the left side of the screen, as shown in
Figure 10-11.

Figure 10-11 Online help using the i icon

Help
If you need to access online help, click the ? icon in the upper right corner of the panel. This
opens a new window called Information Center, where you can search for any item on which
you want help.

Figure 10-12 Online help using the ? icon

General housekeeping
If at any time the content in the right side of the frame is cut off, you can collapse the My
Work column by clicking the small left-pointing arrow icon at the top of the My Work column.
When collapsed, the small arrow changes from pointing to the left to pointing to the right.
Clicking the small arrow that points right expands the My Work column back to its original
size.

In addition, each time you open a configuration or administration window using the GUI in the
following sections, a link for that panel is created along the top of your Web browser, beneath
the main banner graphic. As a general housekeeping task, we recommend that you close
each window when you finish with it by clicking the close icon to the right of the panel name.
Be careful not to close the entire browser.

10.1.2 Viewing cluster properties


Perform the following steps to display the cluster properties:
1. From the SVC Welcome page, select the Manage Cluster option and then the View
Cluster Properties link.
2. The Viewing General Properties panel (Figure 10-13 on page 275) opens. Click the IP
Addresses, Space, SNMP, Statistics, or Metro Mirror links to see additional information
that pertains to your cluster.

Figure 10-13 View Cluster Properties: General properties

10.1.3 Maintaining passwords


Perform the following steps to maintain passwords:
1. From the SVC Welcome page, select the Manage Cluster option and then the Maintain
Passwords link.
2. Before you can access the Maintain Passwords panel, enter the existing SVC
administration user ID and password when prompted. Click OK.
3. The Maintain Passwords panel (Figure 10-14 on page 276) opens. Enter the new
passwords for the administrator account, the service account, or both. Click Modify
Password.

Note: Passwords are a maximum of 15 case-sensitive characters. Valid characters are
uppercase letters (A through Z), lowercase letters (a through z), digits (0 through 9), the
dash (-), and the underscore (_). The first character cannot be a dash.

4. Before the next panel is displayed, enter the new user ID and password combination when
prompted.

Figure 10-14 Maintain Passwords panel

When complete, you see the successful update messages as shown in Figure 10-15.

Figure 10-15 Modifying passwords successful update messages

You have now completed the tasks required to change the admin and service passwords for
your SVC cluster.

10.1.4 Modifying IP addresses

Important: If you specify a new cluster IP address, the existing communication with the
cluster through the GUI is broken. You need to relaunch the SAN Volume Controller
Application from the GUI Welcome page.

Modifying the IP address of the cluster, although quite simple, requires some
reconfiguration of other items within the SVC environment. This may include, for
example, reconfiguring the PuTTY application and the central administration GUI.

Perform the following steps to modify the cluster and service IP addresses of our SVC
configuration:
1. From the SVC Welcome page, select the Manage Cluster option and the Modify IP
Addresses link.
2. The Modify IP Addresses panel (Figure 10-16) opens. Make any necessary changes.
Then click Modify Settings.

Tip: Use your mouse or the tab key to move between fields.

Figure 10-16 Modify IP Addresses

3. You advance to the next panel which shows a message indicating that the IP addresses
were updated.

You have now completed the tasks required to change the IP addresses (cluster, service,
gateway and master console) for your SVC environment.

10.1.5 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. From the SVC Welcome page, select the Working with Clusters options and the Set
Cluster Time link.
2. The Cluster Date and Time Settings panel (Figure 10-17 on page 278) opens. At the top of
the panel, you see the current settings. If necessary, make adjustments and ensure that
the Update cluster date and time and Update cluster time zone check boxes are
selected. Click Update.

Note: You may be prompted for the cluster user ID and password. If you are, type admin
and the password you set earlier.

Figure 10-17 Cluster Date and Time Settings panel

3. You return to the Cluster Date and Time Settings panel (Figure 10-18), which shows the
new settings.

Figure 10-18 Cluster Date and Time Settings update confirmation

You have now completed the tasks necessary to set the cluster time zone and time.
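
For reference, the cluster time and time zone can also be set from the CLI. The following
sketch uses an illustrative time zone ID (svcinfo lstimezones lists the available IDs, and the
exact format of the -time value is described in the Command-Line Interface User's Guide):
IBM_2145:admin>svctask settimezone -timezone 520
IBM_2145:admin>svctask setclustertime -time <time>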

10.1.6 Starting the statistics collection
Perform the following steps to start statistics collection on your cluster:
1. From the SVC Welcome page, select the Manage Cluster option and the Start Statistics
Collection link.
2. The Starting the Collection of Statistics panel (Figure 10-19) opens. Make an interval
change, if desired. The interval you specify (minimum 15, maximum 60) is in minutes.
Click OK.

Figure 10-19 Starting collection of statistics

3. Although the panel does not state the current status, clicking OK turns on the statistics
collection. To verify, click the Cluster Properties link as you did in 10.1.2, “Viewing cluster
properties” on page 274. Then, click the Statistics link. You see the interval as specified
in step 2 and the status of On, as shown in Figure 10-20.

Figure 10-20 Verifying that statistics collection is on

You have now completed the tasks required to start statistics collection on your cluster.

10.1.7 Stopping the statistics collection


Perform the following steps to stop statistics collection on your cluster:

1. From the SVC Welcome page, select the Manage Cluster option and the Stop Statistics
Collection link.
2. The Stopping the Collection of Statistics panel (Figure 10-21) opens. You see a message
asking whether you are sure that you want to stop the statistics collection. Click Yes to
stop the ongoing task.

Figure 10-21 Stopping the collection of statistics

3. The window closes. To verify that the collection has stopped, click the Cluster Properties
link as you did in 10.1.2, “Viewing cluster properties” on page 274. Then, click the
Statistics link. Now you see the status has changed to Off as shown in Figure 10-22.

Figure 10-22 Verifying that statistics collection is off

You have now completed the tasks required to stop statistics collection on your cluster.

10.1.8 Shutting down a cluster


If all input power to a SAN Volume Controller cluster is to be removed for more than a few
minutes (for example, if the machine room power is to be shut down for maintenance), it is
important that you shut down the cluster before you remove the power. The reason for this is
that if the input power is removed from the uninterruptible power supply units without shutting
down the cluster and the uninterruptible power supply units themselves first, the
uninterruptible power supply units remain operational and eventually become drained of
power.

When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SAN Volume Controller does not permit any input/output (I/O) activity to be
performed to the VDisks until the uninterruptible power supply units are charged enough to
enable all the data on the SAN Volume Controller nodes to be de-staged in the event of an
unexpected power loss. This may take as long as three hours. Shutting down the cluster prior
to removing input power to the uninterruptible power supply units prevents the battery power
from being drained and makes it possible for I/O activity to be resumed as soon as input
power is restored.

Perform the following steps to shut down your cluster:

Important: Before shutting down a cluster, quiesce all I/O operations destined for the
cluster, because you will lose access to all VDisks provided by the cluster. Failure to do so
may result in failed I/O operations being reported to your host operating systems. There is
no need to do this when you shut down only one node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on your
hosts that are using the VDisks provided by the cluster. If you are unsure which hosts are
using the VDisks provided by the cluster, follow the procedure called “Showing the Host the
VDisk is mapped to” on page 341, and repeat it for all VDisks.

1. From the SVC Welcome page, select the Manage Cluster option and the Shut Down
Cluster link.
2. The Shutting down cluster panel (Figure 10-23) opens. You see a message asking you to
confirm whether you want to shut down the cluster. Ensure that you have stopped all
FlashCopy mappings, Remote Copy relationships, data migration operations, and forced
deletions before continuing. Click Yes to begin the shutdown process.

Note: At this point, you lose administrative contact with your cluster.

Figure 10-23 Shutting down the cluster

You have now completed the tasks required to shut down the cluster. Now you can shut down
the uninterruptible power supplies by pressing the power button on their front panels.
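
For reference, the same controlled shutdown can be initiated from the CLI, as sketched here
(the command asks for confirmation unless the -force flag is used):
IBM_2145:admin>svctask stopcluster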

10.2 Working with nodes using the GUI
Note: To restart the SVC cluster, you must first restart the uninterruptible power supply
units by pressing the power button on their front panels. After they are on, go to the service
panel of one of the nodes within your SVC cluster and press the power on button, releasing
it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster
name on line 2 of the SVC front panel), you can start the other nodes in the same way.

As soon as all nodes are fully booted and you re-established administrative contact using
the GUI, your cluster is fully operational again.

This section discusses the various configuration and administration tasks that you can
perform on the nodes within an SVC cluster.

10.2.1 I/O groups


This section details the tasks that can be performed on an I/O group level.

Renaming an I/O group


Perform the following steps to rename an I/O group:
1. From the SVC Welcome page, select the Work with Nodes option and the I/O Groups
link.
2. The Viewing Input/Output Groups panel (Figure 10-24) opens. Select the radio button to
the left of the I/O group you want to rename. In this case, we select io_grp1. Ensure that
Rename an I/O Group is selected from the drop-down list. Click Go.

Figure 10-24 Viewing Input/Output Groups

3. On the Renaming I/O Group panel (I/O Group Name is the I/O group you selected in the
previous step), type the new name that you want to assign to the I/O group. Click OK, as
shown in Figure 10-25. Our new name is io_grp_SVC2.

Figure 10-25 Renaming the I/O group

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, the dash and the
underscore. It can be between one and 15 characters in length, but cannot start with a
number, the dash or the word iogrp because this prefix is reserved for SVC assignment
only.

You have now completed the tasks required to rename an I/O group.
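
For reference, the same rename can be performed with the CLI covered in Chapter 9; a
minimal sketch using the names from this example is:
IBM_2145:admin>svctask chiogrp -name io_grp_SVC2 io_grp1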

10.2.2 Nodes
This section discusses the tasks that you can perform at a node level. You perform each task
from the Viewing Nodes panel (Figure 10-26). To access this panel, from the SVC Welcome
page, select the Work with Nodes options and the Nodes link.

Figure 10-26 Viewing Nodes

The drop-down shows the options available at a node level. We will work in this example with
SVC1N1.

Viewing the node details


Perform the following steps to view information about a node within the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 283), click the highlighted name of
the node (SVC1N1).
2. The Viewing General Details nodename panel (where nodename is the node you chose
(SVC1N1)) opens as shown in Figure 10-27. Click the Ports and Vital Product Data links
to view additional information about your selected node.

Figure 10-27 General node details

Adding a node
Perform the following steps to add a node to the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 283), select Add a Node and click
Go.
2. On the Adding a Node to a Cluster panel (Figure 10-28 on page 285), select a node from
the list of available nodes. Select the I/O group to which you want to assign the new node.
Enter a suitable name for the new node. Click OK.

Note: If you do not provide the name, the SVC automatically generates the name
nodeX (where X is the ID sequence number assigned by the SVC internally). The name
can consist of letters A to Z, a to z, numbers 0 to 9, the dash and the underscore. It can
be between one and 15 characters in length, but cannot start with a number, the dash
or the word node because this prefix is reserved for SVC assignment only.

Figure 10-28 Adding a node

3. Click the Refresh button shown in Figure 10-29 until the newly added node (second_node)
has the status Online.

Figure 10-29 Add node Refresh button

Renaming a node
Perform the following steps to rename a node in the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 283), select the radio button to the
left of the node you want to rename. Select Rename a Node from the drop-down list, and
click Go.

2. On the Renaming Node nodename panel (where nodename is the node you selected in
the previous step), type the new name you want to assign to the node. Click OK. See
Figure 10-30.

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, the dash and the
underscore. It can be between one and 15 characters in length, but cannot start with a
number, the dash or the word node because this prefix is reserved for SVC assignment
only.

Figure 10-30 Renaming a node
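
For reference, the equivalent CLI rename is a single command; the following sketch uses a
hypothetical new name:
IBM_2145:admin>svctask chnode -name SVC1N1_new SVC1N1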

Deleting a node
Perform the following steps to delete a node from the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 283), select the radio button to the
left of the node you want to delete. Select Delete a Node from the drop-down list and click
Go.
2. On the Deleting a Node from Cluster nodename panel (where nodename is the name of
the node you selected in the previous step), confirm your decision by selecting Yes. See
Figure 10-31.

Figure 10-31 Deleting node from a cluster

3. Use the Refresh button in Figure 10-32 on page 287 until SVC1N1 is no longer on the list.

Figure 10-32 Delete node refresh button

Shutting down a node


Earlier we showed how to shut down the complete SVC cluster in a controlled manner. On
occasion, it may be necessary to shut down a single node within the cluster to perform such
tasks as scheduled maintenance, while leaving the SVC environment up and running. This
function shuts down one node in a graceful manner. When this is done, the other node in the
I/O group destages the contents of its cache and goes into write-through mode until the node
is powered up again and rejoins the cluster.

To shut down a single node in an SVC cluster, perform the following steps:
1. From the Viewing Nodes panel (Figure 10-26 on page 283), select the radio button to the
left of the node you want to shut down. Select Shut Down a Node from the list. Click Go.
2. On the confirmation panel (Figure 10-33) that appears next, select Yes to continue with
the shutdown process.

Figure 10-33 Shutting down a node

To restart the SVC node, simply go to the front panel of that node and push the power on
button.

You have now completed the tasks that are required to view, add, delete, rename, and shut
down a node within the SVC environment.

10.3 Working with managed disks
This section details the various configuration and administration tasks that you can perform
on the managed disks (MDisks) within the SVC environment.

10.3.1 Disk controller systems


This section details the tasks that you can perform at a disk controller level.

Viewing disk controller details


Perform the following steps to view information about a back-end disk controller in use by the
SVC environment:
1. Select the Work with Managed Disks option and then the Disk Controller Systems link.
2. The Viewing Disk Controller Systems panel (Figure 10-34) opens. For more detailed
information about a specific controller, click its ID (highlighted by the mouse cursor in
Figure 10-34).

Figure 10-34 Disk controller systems

3. The Viewing General Details for IDname panel (Figure 10-35 on page 289) opens (where
IDname is the ID you selected). Review the details and click Cancel to return to the
previous panel.

Figure 10-35 Viewing general details about a disk controller

Renaming a disk controller


Perform the following steps to rename a disk controller used by the SVC cluster:
1. Select the radio button to the left of the controller you want to rename. Then select
Rename a Disk Controller System from the list and click Go.
2. On the Renaming Disk Controller System controllername panel (where controllername is
the controller you selected in the previous step), type the new name you want to assign to
the controller and click OK. See Figure 10-36.

Figure 10-36 Renaming a controller

3. You return to the Disk Controller Systems panel. You should now see the new name of
your controller displayed.

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, the dash and the
underscore. It can be between one and 15 characters in length. However, it cannot
start with a number, the dash or the word controller because this prefix is reserved for
SVC assignment only.

10.3.2 Managed disks
This section details the tasks which can be performed at an MDisk level. You perform each of
the following tasks from the Managed Disks panel (Figure 10-37). To access this panel, from
the SVC Welcome page, click the Work with Managed Disks option and then the Managed
Disks link.

Note: At the Filtering Managed Disks (MDisks) panel, click Bypass filter.

Figure 10-37 Viewing Managed Disks panel

MDisk information
To retrieve information about a specific MDisk, perform the following steps:
1. On the Viewing Managed Disks panel (Figure 10-38 on page 291), click the name of any
MDisk in the list to reveal more detailed information about the specified MDisk.

Figure 10-38 Managed disk details

Tip: If at any time the content in the right side of the frame is cut off, you can minimize the
My Work column by clicking the arrow to the right of the My Work heading at the top
right of the column (highlighted with the mouse pointer in Figure 10-37).

After you minimize the column, you see an arrow in the far left position, in the same
location where the My Work column formerly appeared. See Figure 10-39.

2. Review the details and then click Close to return to the previous panel.

Renaming an MDisk
Perform the following steps to rename an MDisk controlled by the SVC cluster:
1. Select the radio button to the left of the MDisk that you want to rename in Figure 10-37 on
page 290. Select Rename an MDisk from the list and click Go.
2. On the Renaming Managed Disk MDiskname panel (where MDiskname is the MDisk you
selected in the previous step), type the new name you want to assign to the MDisk and
click OK. See Figure 10-39 on page 292.

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, the dash and the
underscore. It can be between one and 15 characters in length. However, it cannot start
with a number, the dash or the word MDisk because this prefix is reserved for SVC
assignment only.

Figure 10-39 Renaming an MDisk

Discovering MDisks
Perform the following steps to discover newly assigned MDisks:
1. Select Discover MDisks from the pull-down list of Figure 10-37 on page 290 and click Go.
2. Any newly assigned MDisks are displayed in the window shown in Figure 10-40.

Figure 10-40 Newly discovered managed disks
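
This GUI action corresponds to a CLI command; as a sketch, you can trigger the same
rescan of the fabric for new MDisks with:
IBM_2145:admin>svctask detectmdisk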

Setting up a quorum disk


The SVC cluster, after the process of node discovery, automatically chooses three MDisks as
quorum disks. Each disk is assigned an index number of either 0, 1, or 2.

In the event that half the nodes in a cluster are missing for any reason, the surviving half
cannot simply assume that the other nodes are dead. It can simply mean that the cluster
state information is not being successfully passed between the nodes for some reason (a
network failure, for example). For this reason, if half the cluster disappears from the view of
the other, each surviving half attempts to lock the first quorum disk (index 0). If quorum disk
index 0 is not available to a node, the next disk (index 1) becomes the quorum, and so on.

The half of the cluster that is successful in locking the quorum disk becomes the exclusive
processor of I/O activity. It attempts to re-form the cluster with any nodes it can still see. The
other half stops processing I/O. This provides a tie-break solution and ensures that both
halves of the cluster do not continue to operate.

If both halves of the cluster can see the quorum disk, they use the quorum disk to
communicate with each other and decide which half becomes the exclusive processor of I/O
activity.

If for any reason (for example, you installed additional backend storage and you want to move
one or two quorum disks onto this newly installed backend storage subsystem) you want to
set your own quorum disks, complete the following tasks:
1. Select the radio button to the left of the MDisk that you want to designate as a quorum.
Then select Set a quorum disk from the list and click Go.
2. On the Setting a Quorum Disk in Figure 10-41, assign a quorum index of 0, 1, or 2 and
click OK.

Figure 10-41 Setting a quorum disk

Quorum disks are only created if at least one MDisk is in managed mode (that is, it was
formatted by SVC with extents in it). Otherwise, a 1330 cluster error message is displayed in
the SVC front panel. You can correct it only when you place MDisks in managed mode.
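
For reference, a quorum disk can also be set from the CLI; the following sketch assigns
quorum index 0 to a hypothetical MDisk named mdisk5:
IBM_2145:admin>svctask setquorum -quorum 0 mdisk5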

Including an MDisk
If a significant number of errors occurs on an MDisk, the SVC automatically excludes it.
These errors can result from a hardware problem, a storage area network (SAN) zoning
problem, or poorly planned maintenance. If it is a hardware fault, you should have received
SNMP alerts about the state of the hardware before the disk was excluded, and you can
undertake preventive maintenance. If not, the hosts that were using VDisks built on the
excluded MDisk now experience I/O errors.

From the Viewing Managed Disks panel (Figure 10-42 on page 294), you can see that mdisk3
is excluded.

Figure 10-42 Viewing Managed Disks: Excluding an MDisk

After you take the necessary corrective action to repair the MDisk (for example, replace the
failed disk, repair SAN zones), you can tell the SVC to include the MDisk again. Select the
radio button to the left of the excluded MDisk. Then select Include an MDisk from the
drop-down list and click Go. See Figure 10-43.

Figure 10-43 Including an MDisk

When you return to the Viewing Managed Disks panel (Figure 10-44 on page 295), you see
that mdisk3 is now back in an online state.
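
For reference, the same inclusion can be performed from the CLI; a minimal sketch using the
MDisk from this example is:
IBM_2145:admin>svctask includemdisk mdisk3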

Figure 10-44 Viewing Managed Disks: Verifying the included MDisk

Showing an MDisk group


To display information about the managed disk group (MDG) to which an MDisk belongs,
perform the following steps:
1. Select the radio button to the left of the MDisk you want to obtain MDG information about.
Select Show MDisk Group from the list and click Go as shown in Figure 10-45.

Figure 10-45 Show MDisk Group select

2. Click on the name of the Managed Disk Group as shown in Figure 10-46 on page 296.

Figure 10-46 Show MDisk Group

3. You now see a subset (specific to the MDisk you chose in the previous step), as shown in
Figure 10-47.

Figure 10-47 View MDG details

Showing a VDisk for an MDisk


To display information about VDisks that reside on an MDisk, perform the following steps:
1. Select the radio button, as shown in Figure 10-48 on page 297, to the left of the MDisk you
want to obtain VDisk information about. Select Show VDisks from the list and click Go.

Figure 10-48 Show VDisk

2. You now see a subset (specific to the MDisk you chose in the previous step) of the View
Virtual Disks panel in Figure 10-49. We cover the View Virtual Disks panel in more detail
in 10.4, “Working with Virtual Disks” on page 313.

Figure 10-49 VDisk list from a selected MDisk

Creating a VDisk in image mode


An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents
with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as
extent 0 on the MDisk, and so on. Without this 1:1 mapping (for example, if extent 0 on the
VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly
introduced MDisk would still be readable.

Image mode is intended for the purpose of migrating data from an environment without the
SVC to an environment with the SVC. A LUN that was previously directly assigned to a
SAN-attached host can now be reassigned to the SVC (during a short outage) and returned
to the same host as an image mode VDisk, with the user’s data intact. During the same
outage the host, cables and zones can be reconfigured to access the disk, now via the SVC.

After access is re-established, the host workload can resume while the SVC manages the
transparent migration of the data to other SVC managed MDisks on the same or another disk
subsystem.

We recommend that, during the migration phase of the SVC implementation, you add one
image mode VDisk at a time to the SVC environment. This reduces the possibility of error. It
also means that the short outages required to reassign the LUNs from the subsystem or
subsystems and reconfigure the SAN and host can be staggered over a period of time to
minimize the business impact.

Important: You can create an image mode VDisk only by using an unmanaged MDisk. That
is, you must do this before you add the MDisk that corresponds to your original logical
volume to an MDG.

To create an image mode VDisk, perform the following steps:


1. Select the radio button to the left of the unmanaged MDisk, as shown in Figure 10-50, on
which you want to create an image mode VDisk. Select Create VDisk in image mode
from the list and click Go.

Figure 10-50 Create VDisk in image mode

2. This brings you to the introduction screen (Figure 10-51 on page 299). After reading the
steps, click Next.



Figure 10-51 Introduction screen

3. This brings you to the set attributes screen (Figure 10-52), where you enter the name of
the VDisk you want to create. Select the check box if you want to create an empty VDisk.
Click Next to continue.

Figure 10-52 Set attributes

Note: If you do not provide a name, the SVC automatically generates the name VDiskX
(where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the
dash and the underscore. It can be between one and 15 characters in length, but
cannot start with a number, a dash or the word VDisk because this prefix is reserved for
SVC assignment only.

4. In the next screen (Figure 10-53), enter the name of the MDG to which you want to add
the new MDisk. Click Next to proceed.

Figure 10-53 MDG name entry

5. In the following screen (Figure 10-54 on page 301), select the extent size for your VDisk.
Click Next to proceed.



Figure 10-54 Select extent size

6. The next screen (Figure 10-55) shows you the results of your previous MDG inputs. Click
on Next to proceed.

Figure 10-55 Verify MDG

7. Next (Figure 10-56), you can select another MDG if the one you entered before does not
have enough space available. In our case, we had to select MDG2_DS43. Click Next to
proceed.

Figure 10-56 Choose an I/O group and a MDG

8. This last panel (Figure 10-57 on page 303) shows you the characteristics of the new
image mode VDisk.



Figure 10-57 Verify imaged VDisk
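
For reference, an image mode VDisk can also be created from the CLI. The following is a
sketch only; the VDisk name, I/O group, and unmanaged MDisk name are illustrative and
not taken from our panels:

   svctask mkvdisk -mdiskgrp MDG2_DS43 -iogrp io_grp0 -vtype image -mdisk mdisk7 -name IMAGE_VD1

Because the VDisk is created in image mode, its capacity is taken from the specified
unmanaged MDisk.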

10.3.3 Managed Disk Groups


This section details the tasks that can be performed at an MDG level.

Viewing MDisk group information


Each of the following tasks are performed from the View Managed Disk Groups panel
(Figure 10-58). To access this panel, from the SVC Welcome page, click the Work with
Managed Disks option and then the Managed Disk Groups link.

Note: At the Filtering Managed Disk (MDisk) Groups panel, click Bypass filter.

Figure 10-58 Viewing MDGs

Viewing MDisk group information
To retrieve information about a specific MDG, perform the following steps:
1. On the Viewing Managed Disk Groups panel (Figure 10-58 on page 303), click the name
of any MDG in the list.
2. On the View MDisk Group Details panel (Figure 10-59), you see more detailed information
about the specified MDG. Here you see information pertaining to the number of MDisks
and VDisks, as well as the capacity (both total and free space) within the MDG. When you
finish viewing the details, click Close to return to the previous panel.

Figure 10-59 MDG details

Creating an MDisk group


To create an MDG, perform the following steps:
1. Select Create an MDisk group from the list (Figure 10-58 on page 303) and click Go.
2. On the Create Managed Disk Group wizard panel (Figure 10-60 on page 305), click Next.



Figure 10-60 Create Managed Disk Group wizard panel

3. On the Name the group and select the managed disks panel (Figure 10-61 on page 306),
give your MDG a name. Optionally select the MDisk Candidates and add them to the
Selected MDisks list (one at a time) in the desired order.
Selecting No MDisk Candidates creates an “empty” MDG. You can add MDisks to an
“empty” MDG at a later time.
Click Next.

Note: If you do not provide a name, the SVC automatically generates the name
mdiskgrpX (where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the
dash and the underscore. It can be between one and 15 characters in length. However,
it cannot start with a number, the dash or the word mdiskgrp because this prefix is
reserved for SVC assignment only.

Figure 10-61 Name the group and select managed disks

4. On the Select Extent Size panel (Figure 10-62), select the extent size (in MB) to be used
for your MDG. Then click Next.

Figure 10-62 Select the extent size



5. Verify the information that you specified in the previous panels (Figure 10-63) and, if it is
correct, click Finish. If you need to correct something, click Back.

Figure 10-63 Verifying the information about the MDG
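
For reference, a roughly equivalent CLI command is shown below; the group name, extent
size, and MDisk list are illustrative:

   svctask mkmdiskgrp -name MDG1 -ext 16 -mdisk mdisk0:mdisk1

The -ext value is the extent size in MB, and the colon-separated -mdisk list supplies the
initial MDisks. Omit -mdisk to create an empty MDG.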

Renaming an MDisk group


To rename an MDG, perform the following steps:
1. Select the radio button in Viewing MDGs (Figure 10-64) to the left of the MDG you want to
rename. Select Rename an MDisk Group from the list and click Go.

Figure 10-64 Viewing MDGs

2. From the Renaming Managed Disk Group MDGname panel (where MDGname is the
MDG you selected in the previous step) shown in Figure 10-65, type the new name you
want to assign and click OK.

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, a dash and the
underscore. It can be between one and 15 characters in length, but cannot start with a
number, a dash or the word mdiskgrp because this prefix is reserved for SVC
assignment only.

Figure 10-65 Renaming a MDG

Deleting an MDisk group


To delete an MDG, perform the following steps:
1. Select the radio button to the left of the MDG you want to delete. Select Delete an MDisk
Group from the list and click Go.
2. On the Deleting a Managed Disk Group MDGname panel (where MDGname is the MDG
you selected in the previous step) shown in Figure 10-66, click OK to confirm that you
want to delete the MDG.

Figure 10-66 Deleting a MDG

If there are MDisks and VDisks within the MDG you are deleting, you are required to click
on Forced delete for the MDG (Figure 10-67 on page 309).



Important: If you delete an MDG with the Forced Delete option and VDisks were
associated with that MDG, you lose the data on your VDisks, since they are deleted
before the MDG. If you want to save your data, migrate the VDisks to another MDG
before you delete their original MDG.

Figure 10-67 Confirming forced deletion of a MDG
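
The roughly equivalent CLI command is shown below (group name illustrative). The -force
flag corresponds to the Forced delete option and carries the same risk of data loss:

   svctask rmmdiskgrp -force MDG1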

Adding MDisks
If you created an “empty” MDG as we did, or you simply assign additional MDisks to your
SVC environment later, you can add MDisks to existing MDGs by performing the following
steps:

Note: You can only add unmanaged MDisks to an MDG.

1. Select the radio button (Figure 10-68) to the left of the MDG to which you want to add
MDisks. Select Add MDisks from the list and click Go.

Figure 10-68 Viewing MDGs

2. From the Adding Managed Disks to Managed Disk Group MDiskname panel (where
MDiskname is the MDG you selected in the previous step) (Figure 10-69), select the
desired MDisk or MDisks from the MDisk Candidates list. After you select all the desired
MDisks, click OK.

Figure 10-69 Adding MDisks to a MDG
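
The roughly equivalent CLI command is shown below (names illustrative):

   svctask addmdisk -mdisk mdisk4 MDG1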

Removing MDisks
To remove an MDisk from an MDG, perform the following steps:
1. Select the radio button to the left (Figure 10-70) of the MDG from which you want to
remove an MDisk. Select Remove MDisks from the list and click Go.

Figure 10-70 Viewing MDGs

2. From the Deleting Managed Disks from Managed Disk Group MDGname panel (where
MDGname is the MDG you selected in the previous step) (Figure 10-71 on page 311),
select the desired MDisk or MDisks from the list. After you select all the desired MDisks,
click OK.



Figure 10-71 Removing MDisks from a MDG

If VDisks are using the MDisks that you are removing from the MDG, you are required to
click the Forced Delete button to confirm the removal of the MDisk, as shown in
Figure 10-72. Even then, the removal only takes place if there is sufficient space to
migrate the VDisk data to other extents on other MDisks that remain in the MDG.

Figure 10-72 Confirming forced deletion of MDisk from MDG
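
The roughly equivalent CLI command is shown below (names illustrative). The -force flag
corresponds to the Forced Delete button and triggers the same migration of VDisk extents
off the removed MDisk:

   svctask rmmdisk -mdisk mdisk4 -force MDG1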

Showing MDisks in this group


To show a list of MDisks within a MDG, perform the following steps:
1. Select the radio button to the left (Figure 10-73 on page 312) of the MDG from which you
want to retrieve MDisk information. Select Show MDisks in this group from the list and
click Go.

Figure 10-73 View MDGs

2. You now see a subset (specific to the MDG you chose in the previous step) of the Viewing
Managed Disk panel (Figure 10-74) from 10.3.2, “Managed disks” on page 290.

Figure 10-74 Viewing MDisks in a MDG

Note: Remember, you can collapse the column titled My Work at any time by clicking the
arrow to the right of the My Work column heading.

Showing VDisks using this group


To show a list of VDisks associated with MDisks within a MDG, perform the following steps:
1. Select the radio button to the left (Figure 10-75 on page 313) of the MDG from which you
want to retrieve VDisk information. Select Show VDisks using this group from the list
and click Go.



Figure 10-75 View MDisks

2. You see a subset (specific to the MDG you chose in the previous step) of the Viewing
Virtual Disks panel in Figure 10-76. We cover the Viewing Virtual Disks panel in more
detail in “VDisk information” on page 322.

Figure 10-76 VDisks belonging to selected MDG.

You have now completed the tasks required to manage the disk controller systems, managed
disks, and MDGs within the SVC environment.

10.4 Working with Virtual Disks


This section details the various configuration and administration tasks that you can perform
on the VDisks within the SVC environment.

10.4.1 Hosts
This section details the tasks that you can perform at a host level. Each of the following tasks
are performed from the Viewing Hosts panel (Figure 10-77). To access this panel, from the
SVC Welcome page, click the Work with Virtual Disks option and then the Hosts link.

Note: At the Filtering Hosts panel, click Bypass filter.

Figure 10-77 Viewing hosts

Host information
To retrieve information about a specific host, perform the following steps:
1. On the Viewing Hosts panel (Figure 10-77) click the name of any host in the list displayed.
2. On the Viewing General Details panel (Figure 10-78 on page 315), you can see more
detailed information about the specified host. You can click the Port Details (Figure 10-79
on page 315) link to see information about the Fibre Channel Host Bus Adapters (HBAs)
that were defined within the host.
When you are finished viewing the details, click Cancel to return to the previous panel.



Figure 10-78 Host details

Figure 10-79 Host port details

Creating a host
To create a new host, perform the following steps:
1. In Figure 10-80 on page 316, select the Create a host option from the list and click Go.

Figure 10-80 Create a host

2. On the Creating Hosts panel (Figure 10-81 on page 317), type a name for your host.

Note: If you do not provide a name, the SVC automatically generates the name hostX
(where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot start
with a number or the word host because this prefix is reserved for SVC assignment only.

Select the mode for the host. You must choose HP_UX to have more than eight LUNs
supported for HP_UX machines. For all other hosts, select Generic mode (the default).
Select and add the worldwide port names (WWPNs) that correspond to your HBA or
HBAs. Click OK.
Your WWPN or WWPNs may not display, although you are sure your adapter is
functioning (for example, you see the WWPN in the switch name server) and your zones
are correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs
into the Additional Ports field (one WWPN per line) at the bottom of the panel
before you click OK.



Figure 10-81 Creating a new host

3. This brings you back to the Viewing Hosts panel (Figure 10-82), where you can see the
added host.

Figure 10-82 Create host results
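
For reference, a host can also be defined from the CLI. The following is a sketch only; the
host name matches our example, but the WWPN is hypothetical and must be replaced with
the one shown in your switch name server:

   svctask mkhost -name LINUX1 -hbawwpn 210000E08B05ACBC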

Modify a host
To modify a host, perform the following steps:

1. Select the radio button to the left of the host you want to modify (Figure 10-83). Select
Modify a host from the list and click Go.

Figure 10-83 Modify a host.

2. From the Modifying Host panel (Figure 10-84), type the new name you want to assign or
change the Type parameter and click OK.

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot start
with a number or the word host because this prefix is reserved for SVC assignment
only.

Figure 10-84 Modifying a host

Deleting a host
To delete a Host, perform the following steps:
1. Select the radio button to the left of the host you want to delete (Figure 10-85 on
page 319). Select Delete a host from the list and click Go.



Figure 10-85 Delete a Host.

2. On the Deleting Host hostname panel (where hostname is the host you selected in the
previous step), click OK if you are sure you want to delete the host. See Figure 10-86.

Figure 10-86 Deleting a host

3. If you still have VDisks associated with the host, you see a panel (Figure 10-87)
requesting confirmation for the forced deletion of the host. Click OK and all the mappings
between this host and its VDisks are deleted before the host is deleted.

Figure 10-87 Forcing a deletion

Adding ports
If you add an HBA to a server that is already defined within the SVC, you can simply add
additional ports to your host definition by performing the following steps:
1. Select the radio button to the left of the host to which you want to add WWPNs
(Figure 10-88). Select Add ports from the list and click Go.

Figure 10-88 Add ports to a host.

2. From the Adding ports to hostname panel (where hostname is the host you selected in the
previous step), select the desired WWPN from the Available Ports list (one at a time) and
click Add. After you select all the desired WWPNs, click OK. See Figure 10-89.
Your WWPN or WWPNs may not display, although you are sure your adapter is
functioning (for example, you see WWPN in the switch name server) and your zones are
correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs into
the Add Additional Ports field at the bottom of the panel before you click OK.

Figure 10-89 Adding ports to a host
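
The roughly equivalent CLI command is shown below; the WWPN is a hypothetical example:

   svctask addhostport -hbawwpn 210000E08B05ACBD LINUX1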



Deleting ports
To delete a port from a host, perform the following steps:
1. Select the radio button to the left of the host from which you want to delete a port
(Figure 10-90). Select Delete ports from the list and click Go.

Figure 10-90 Delete ports from a host.

2. On the Deleting Ports From hostname panel (where hostname is the host you selected in
the previous step), select the ports you want to delete from the Available Ports list and
click Add. When you have moved all the ports you want to delete to the column on the
right, click OK. See Figure 10-91.

Figure 10-91 Deleting ports from a host.

3. If you have VDisks that are associated with the host, you receive a warning about deleting
a host port. You need to confirm your action when prompted, as shown in Figure 10-92 on
page 322.

Figure 10-92 Port delete confirmation.

10.4.2 Virtual disks


This section discusses the tasks that you can perform at a VDisk level. Each of the following
tasks are performed from the Viewing Virtual Disks panel (Figure 10-93). To access this
panel, from the SVC Welcome page, click the Work with Virtual Disks option and then the
Virtual Disks link.

Note: At the Filtering Virtual Disks (VDisks) panel, click Bypass filter.

Figure 10-93 Viewing Virtual Disks

VDisk information
To retrieve information about a specific VDisk, perform the following steps:
1. On the Viewing Virtual Disks panel, click the name of the desired VDisk in the list.



2. The next panel (Figure 10-94) that opens shows detailed information. Review the
information. When you are done, click Close to return to the Viewing Virtual Disks panel.

Figure 10-94 VDisk details

Creating a VDisk
To create a new VDisk, perform the following steps:
1. Select Create a VDisk from the list (Figure 10-93 on page 322) and click Go.
2. The Create Virtual Disks wizard launches (Figure 10-95 on page 324). Click Next.

Figure 10-95 Creating a VDisk wizard: Introduction

3. The Choose an I/O group and a Managed Disk Group panel (Figure 10-96 on
page 325) opens. In our case, we cannot select the I/O group because our cluster has
only one I/O group. Select the MDG within which you want to create the VDisk and check
that there is enough disk space for the VDisks you want to create. Click Next.



Figure 10-96 Creating a VDisk wizard: Choose an I/O group and a MDG

4. The Select type of VDisk and number of VDisk panel (Figure 10-97) opens. Choose the
type of VDisk you want to create, striped or sequential. Enter the number of VDisks you
want to create and click Next.

Figure 10-97 Creating a VDisk wizard: Select type of VDisk and number of VDisk

5. In the Name the Virtual Disks panel (Figure 10-98) you can enter the VDisk name if you
create just one VDisk or the naming prefix if you create multiple VDisks. Click on Next.

Figure 10-98 Creating a VDisk wizard: Name the VDisks panel

Note: If you do not provide a name, the SVC automatically generates the name VDiskX
(where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and
the underscore. It can be between one and 15 characters in length, but cannot start
with a number or the word VDisk because this prefix is reserved for SVC assignment
only.

6. If you selected Striping in Figure 10-97 on page 325, you get the panel shown in
Figure 10-99 on page 327. On the Select Attributes for Striped-mode VDisk panel, you
can add or remove the MDisk candidates, and with the up and down arrows you can set
the striping sequence across the selected MDisks.
7. Enter the size of the VDisk you want to create and select the capacity measurement (MB
or GB) from the list.

Note: An entry of 1 GB uses 1024 MB.

8. Optionally, format the new VDisk by selecting the Format virtual disk (write zeros to its
managed disk extents) check box at the bottom of the panel. Click on Next.



Figure 10-99 Creating a VDisk wizard: Select Attributes for Striped-mode VDisk

9. If you selected Sequential in Figure 10-97 on page 325 you will get the panel in
Figure 10-100 on page 328. If you are creating multiple virtual disks, choose whether to
use a single managed disk or multiple managed disks. Choose the managed disk(s) and
type a capacity for the virtual disk(s). Specify whether to format the virtual disk(s) and click
Next.

Figure 10-100 Creating a VDisk wizard: Select attributes for sequential mode VDisks

10. On the Verify VDisk panel (Figure 10-101 for striped and Figure 10-102 on page 329 for
sequential), check the information shown. If you are satisfied, click Finish to complete
the task. Otherwise, click Back to return and make any corrections.

Figure 10-101 Creating a VDisk wizard: Verify VDisk Striped type



Figure 10-102 Creating a VDisk wizard: Verify VDisk sequential type.

11. The last panel (Figure 10-103) shows you the progress during the creation of your VDisks
on the storage and the final results.

Figure 10-103 Creating a VDisk wizard: final result.
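
For reference, a striped VDisk can also be created from the CLI. The following is a sketch
only; the MDG, I/O group, size, and name are illustrative:

   svctask mkvdisk -mdiskgrp MDG1 -iogrp io_grp0 -vtype striped -size 10 -unit gb -name VD1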

Deleting a VDisk
To delete a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to delete (Figure 10-93 on
page 322). Select Delete a VDisk from the list and click Go.
2. On the Deleting Virtual Disk VDiskname panel (where VDiskname is the VDisk you
selected in the previous step), click OK to confirm your desire to delete the VDisk. See
Figure 10-104.
Remember, you can collapse the My Work column by clicking the arrow to the right of the
My Work column heading.

Figure 10-104 Deleting a VDisk

If the VDisk is currently assigned to a host, you receive a secondary message where you
must click Forced Delete to confirm your decision. See Figure 10-105. This deletes the
VDisk-to-host mapping before deleting the VDisk.

Important: Deleting a VDisk is a destructive action related to user’s data residing in
that VDisk.

Figure 10-105 Deleting a VDisk: Forcing a deletion

Deleting a VDisk-to-host mapping


To unmap (unassign) a VDisk from a host, perform the following steps:



1. Select the radio button to the left of the VDisk you want to unmap. Select Delete a
VDisk-to-host mapping from the list and click Go.
2. On the Deleting a VDisk-to-host mapping panel (Figure 10-106), from the Host Name list,
select the host from which to unassign the VDisk. Click OK.

Figure 10-106 Deleting a VDisk-to-host mapping
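
The roughly equivalent CLI command is shown below (host and VDisk names illustrative):

   svctask rmvdiskhostmap -host LINUX1 VD1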

Expanding a VDisk
Expanding a VDisk presents a larger capacity disk to your operating system. Although you
can do this easily using the SVC, you must ensure that your operating system is prepared for,
and supports the volume expansion before you use this function.

Dynamic expansion of a VDisk is only supported when the VDisk is in use by:

• AIX 5.2 and above
• W2K and W2K3 for basic disks
• W2K and W2K3 with a hot fix from Microsoft (Q327020) for dynamic disks

Assuming your operating system supports it, to expand a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to expand in Figure 10-93 on
page 322. Select Expand a VDisk from the list and click Go.
2. The Expanding Virtual Disks VDiskname panel (where VDiskname is the VDisk you
selected in the previous step) opens. See Figure 10-107 on page 332. Follow these steps:
a. Specify the capacity to add to the VDisk. This is an increment, not the new total size.
For example, if you have a 5 GB disk and you want it to become 10 GB, you specify
5 GB in this field.

b. Optionally, select the managed disk candidates from which to obtain the additional
capacity. The default for striped VDisk is to use equal capacity from each MDisk in the
MDG.

Notes:
• With sequential VDisks, you must specify the MDisk from which you want to
obtain space.
• There is no support for the expansion of image mode VDisks.
• If there are not enough extents to expand your VDisk to the specified size, you
receive an error message.

c. Optionally, format the full VDisk with zeros by selecting the Format virtual disk (write
zeros to its managed disk extents) check box at the bottom of the panel.

Important: The Format Virtual Disk check box is not selected by default. However, if you
select it, the entire VDisk is formatted, not just the new extents, so be very careful.

When you are done, click OK.



Figure 10-107 Expanding a VDisk

3. Go to your host and perform the necessary operations to discover the additional space
and expand your volumes into it. This procedure differs depending on the operating
system.
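
For reference, the roughly equivalent CLI command is shown below (VDisk name and size
illustrative). As in the GUI, the size given is the increment to add, not the final capacity:

   svctask expandvdisksize -size 5 -unit gb VD1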

Mapping a VDisk to a host


To map (assign) a virtual disk to a host, perform the following steps:
1. Select the radio button to the left of the VDisk you want to assign to a host (Figure 10-93
on page 322). Select Map a VDisk to a host from the list and click Go.
2. On the Creating a Virtual Disk-to-Host mapping VDiskname panel (where VDiskname is
the VDisk you selected in the previous step), from the Target Host list, select the desired
host. The SCSI LUN ID increments based on what is already assigned to the host.
Click OK. See Figure 10-108 on page 333.



Figure 10-108 Mapping a VDisk to a host

3. The next panel (Figure 10-109) shows you the progress of the VDisk to host mapping.

Figure 10-109 Progress of VDisk to host mapping.
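
The roughly equivalent CLI command is shown below (names illustrative); an optional -scsi
parameter can be used to set a specific SCSI LUN ID:

   svctask mkvdiskhostmap -host LINUX1 VD1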

Modifying a VDisk
The Modifying Virtual Disk menu item allows you to rename the VDisk, reassign the VDisk to
another I/O group, and set throttling parameters.

To modify a VDisk, perform the following steps:


1. Select the radio button to the left of the VDisk you want to modify (Figure 10-93 on
page 322). Select Modify a VDisk from the list and click Go.

2. The Modifying virtual disk VDiskname panel (where VDiskname is the VDisk you selected
in the previous step) opens. See Figure 10-110. You can perform the following steps
separately or in combination:
a. Type a new name for your VDisk.

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot
start with a number or the word VDisk because this prefix is reserved for SVC
assignment only.

b. Select an alternate I/O group from the list to alter the I/O group to which it is assigned.
c. Set performance throttling for a specific VDisk. In the I/O Governing field, type a
number and select either I/O or MB from the list.
I/O governing effectively throttles the number of I/Os per second (or MBs per second)
that can be achieved to and from a specific VDisk. You may want to do this if you have
a VDisk that has an access pattern that adversely affects the performance of other
VDisks on the same set of MDisks; for example, it uses most of the available
bandwidth.
If this application is highly important, then migrating the VDisk to another set of MDisks
might be advisable. However, in some cases, it is an issue with the I/O profile of the
application rather than a measure of its use or importance.
The choice between I/O and MB as the I/O governing throttle should be based on the
disk access profile of the application. Database applications generally issue large
numbers of I/Os but transfer only a relatively small amount of data. In this case, setting
an I/O governing throttle based on MBs per second does not achieve much. It is better
for you to use an I/Os per second throttle. At the other extreme, a streaming video
application generally issues a small number of I/Os, but transfers large amounts of data.
In contrast to the database example, setting an I/O governing throttle based on I/Os
per second does not achieve much. Therefore, you should use an MBs per second
throttle.
Click OK when you are done making changes.

Figure 10-110 Modifying a VDisk
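
For reference, I/O governing can also be set from the CLI. The following is a sketch only,
with an illustrative VDisk name; the first form throttles in I/Os per second, the second in
MBs per second:

   svctask chvdisk -rate 2000 VD1
   svctask chvdisk -rate 20 -unitmb VD1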



Migrating a VDisk
To migrate a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to migrate (Figure 10-93 on
page 322). Select Migrate a VDisk from the list and click Go.
2. The Migrating Virtual Disks-VDiskname panel (where VDiskname is the VDisk you
selected in the previous step) opens as shown in Figure 10-111. From the MDisk Group
Name list, select the MDG to which you want to reassign the VDisk. Specify the number of
threads to devote to this process (a value from 1 to 4).
The optional threads parameter allows you to assign a priority to the migration process. A
setting of 4 is the highest priority setting. If you want the process to take a lower priority
than other types of I/O, you can specify 3, 2, or 1.

Important: After a migration is started, there is no way to stop it. Migration continues
until it is complete, unless it is suspended by an error condition or the VDisk being
migrated is deleted.

When you are done making your selections, click OK to begin the migration process.
3. You need to manually refresh your browser or close it and return to the Viewing Virtual
Disks panel periodically to see the MDisk Group Name column in the Viewing Virtual Disks
panel update to reflect the new MDG name.

Figure 10-111 Migrating a VDisk
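
The roughly equivalent CLI command is shown below (names illustrative); -threads sets the
same priority value described above:

   svctask migratevdisk -mdiskgrp MDG2_DS43 -threads 4 -vdisk VD1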

Migrate VDisk to an image mode VDisk

Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path.
This might be useful where the SVC is used as a data mover appliance.

The following rules apply when you migrate a VDisk to an image mode VDisk:

• The destination MDisk must be greater than or equal to the size of the VDisk.
• The MDisk specified as the target must be in an unmanaged state.
• Regardless of the mode in which the VDisk starts, it is reported as in managed mode
during the migration.
• Both of the MDisks involved are reported as being in image mode during the migration.
• If the migration is interrupted by a cluster recovery or by a cache problem, the
migration resumes after the recovery completes.

Perform the following steps:


1. Select Migrate to an Image Mode VDisk from the drop-down list (Figure 10-93 on
page 322) and click Go.
2. The Migrate to Image Mode VDisk wizard launches (Figure 10-112). Read the steps on
this screen and click Next.

Figure 10-112 Migrate to image mode VDisk wizard: Introduction

3. Select the radio button to the left of the MDisk where you want the data to be migrated
(Figure 10-113 on page 337). Click Next.



Figure 10-113 Migrate to image mode VDisk wizard: Select target MDisk

4. Select the MDG that the MDisk will join (Figure 10-114). Click Next.

Figure 10-114 Migrate to image mode VDisk wizard: Select MDG.

5. Select the priority of the migration by selecting the number of threads (Figure 10-115 on
page 338). Click Next.

Figure 10-115 Migrate to image mode VDisk wizard: Select Threads.

6. Verify that the information you specified is correct (Figure 10-116). If you are satisfied,
click Finish. If you want to change something, use the Back option.

Figure 10-116 Migrate to image mode VDisk wizard: Verify migration Attributes.

7. The last screen (Figure 10-117 on page 339) displays the details of the VDisk that you are
migrating.



Figure 10-117 Migrate to image mode VDisk wizard: Progress of Migration
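
For reference, the roughly equivalent CLI command is shown below (names illustrative); as
noted in the rules above, the target MDisk must be unmanaged:

   svctask migratetoimage -vdisk VD1 -mdisk mdisk7 -mdiskgrp MDG1 -threads 2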

Shrinking a VDisk
The method that the SVC uses to shrink a VDisk is to remove the required number of
extents from the end of the VDisk. Depending on where the data actually resides on the
VDisk, this can be quite destructive. For example, you may have a VDisk that consists of 128
extents (0 to 127) of 16 MB (2 GB capacity) and you want to decrease the capacity to 64
extents (1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending
on the operating system, there is no easy way to ensure that your data resides entirely on
extents 0 through 63, so be aware that you may lose data.

Although easily done using the SVC, you must ensure your operating system supports
shrinking, either natively or by using third-party tools, before using this function.

Dynamic shrinking of a VDisk is only supported when the VDisk is in use by:

• W2K and W2K3 for basic disks
• W2K and W2K3 with a special fix from Microsoft (Q327020) for dynamic disks

In addition, we recommend that you always have a good current backup before you execute
this task.

Shrinking a VDisk is useful in certain circumstances, such as:

• Reducing the size of a candidate target VDisk of a PPRC relationship to make it the same
size as the source
• Releasing space from VDisks to gain free extents in the MDG, provided you no longer use
that space and take precautions with the remaining data, as explained earlier

Assuming your operating system supports it, perform the following steps to shrink a VDisk:
1. Perform any necessary steps on your host to ensure that you are not using the space you
are about to remove.
2. Select the radio button to the left of the VDisk you want to shrink (Figure 10-93 on
page 322). Select Shrink a VDisk from the list and click Go.
3. The Shrinking Virtual Disks VDiskname panel (where VDiskname is the VDisk you
selected in the previous step) opens as shown in Figure 10-118 on page 340. In the
Reduce Capacity By field, enter the amount of capacity you want to remove. Select MB or
GB accordingly. The final capacity of the VDisk is the Current Capacity minus the capacity
that you specify.

Note: Be careful with the capacity information. The Current Capacity field shows it in
MBs, while you can specify a capacity to reduce in GBs. SVC calculates 1 GB as being
1024 MB.

When you are done, click OK. The changes should become apparent on your host.

Figure 10-118 Shrinking a VDisk
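
For reference, the roughly equivalent CLI command is shown below (VDisk name and size
illustrative); the size given is the amount to remove, matching the Reduce Capacity By field:

   svctask shrinkvdisksize -size 1 -unit gb VD1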

Showing the MDisks


To show the MDisks that are used by a specific VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view MDisk information about
(Figure 10-93 on page 322). Select Show the MDisks from the list and click Go.
2. You see a subset (specific to the VDisk you chose in the previous step) of the Viewing
Managed Disks panel (Figure 10-119).

Figure 10-119 Showing MDisks used by a VDisk

For information about what you can do on this panel, see 10.3.2, “Managed disks” on
page 290.



Showing the MDisk group
To show the MDG to which a specific VDisk belongs, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view MDG information about
(Figure 10-93 on page 322). Select Show the MDisk Group from the list and click Go.
2. You see a subset (specific to the VDisk you chose in the previous step) of the Viewing
MDGs panel (Figure 10-120).

Figure 10-120 Showing a MDG for a VDisk

You have now completed the tasks required to manage the hosts and virtual disks within a
SVC environment.

Showing the Host the VDisk is mapped to


To show the host to which a specific VDisk is mapped, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view host information about
(Figure 10-93 on page 322). Select Show the Host this VDisk is mapped to from the list
and click Go.
2. This shows you a subset of “Showing VDisks mapped to a host” on page 342.

Show capacity information


To show the capacity information of the cluster, perform the following steps:
1. In Figure 10-93 on page 322, select Show Capacity Information from the drop-down list
and click Go.

Figure 10-121 Show capacity information

2. This shows you the total MDisk capacity, the space in the MDGs, the space allocated to
the VDisks and the total free space (Figure 10-121).

10.4.3 Showing VDisks mapped to a host


To show the VDisks assigned to a specific host, perform the following steps:
1. From the SVC Welcome page, click the Work with Virtual Disks option and then the
Virtual Disk to Host Mapping link (Figure 10-122). Now you can see which VDisk belongs
to which host. If this is a long list, you can use the additional filtering and sort options from
10.1.1, “Organizing on-screen content” on page 266.

Figure 10-122 VDisk to Host Mapping.



Deleting VDisks from a host
In the same panel where you can view the VDisk to Host mapping (Figure 10-122 on
page 342), you can also delete a mapping:
1. Select the radio button to the left of the Host and VDisk combination you want to delete.
Ensure that Delete from Mapping is selected from the list. Click Go.
2. Confirm the selection you made in Figure 10-123 by clicking the Delete button.

Figure 10-123 Deleting VDisk to Host mapping.

Now you are back to the panel shown in Figure 10-122 on page 342. Check that this VDisk
(LINUX2) is no longer mapped to this Host (LINUX1). Now you can assign this VDisk to
another Host as described in “Mapping a VDisk to a host” on page 332.

10.5 Managing Copy Services


See Chapter 11, “Copy Services: FlashCopy” on page 365, and Chapter 12, “Copy services:
Metro Mirror” on page 413, for more information about the tasks related to the management
of the Copy Services in the SVC environment.

10.6 Service and maintenance using the GUI


This section discusses the various service and maintenance tasks that you can perform
within the SVC environment. To perform all of the following activities, on the SVC Welcome
page (Figure 10-124 on page 344), select the Service and Maintenance option.

Note: You will be prompted for a user ID and password for some of the following tasks.

Figure 10-124 Service and Maintenance functions

10.6.1 Upgrading software


This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade package numbers is three positive integers separated by
dots. For example, a software upgrade package number looks similar to 1.15.0.

Each software package is given a unique number. Two software packages are fully
compatible if the first two numbers are identical. For example, 1.2.15 and 1.2.16 are fully
compatible. However, 1.2.15 and 1.3.0 are not compatible.

If two software packages are fully compatible, then you can upgrade or downgrade between
these packages. If two software packages are not fully compatible, then the new software
upgrade package contains code to upgrade from a list of previous software levels. In this
case, you can apply the software upgrade only if the cluster is currently running one of the
previous software levels. After you apply it, you cannot remove it.

New software utility


A software utility is planned (residing on the master console) that will check software levels
in the system against recommended levels documented on the support Web site. It will
inform you whether your software levels are up-to-date, or whether you need to download
and install newer levels.

After the software file has been uploaded to the cluster (to the /home/admin/upgrade
directory), it can be selected and applied to the cluster. This is performed by the Web script
using the svcservicetask applysoftware command. When a new code level is applied, it is
automatically installed on all the nodes within the cluster.



The underlying command line tool runs the script sw_preinstall, which checks the validity of
the upgrade file, and whether it can be applied over the current level. If the upgrade file is
unsuitable, the preinstall script deletes the files, and this prevents the build up of invalid files
on the cluster.
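
As a sketch of the underlying flow (the file name and cluster address are placeholders, not
real values), the upgrade file can be copied to the cluster with a secure copy tool and then
applied with the command mentioned above:

   # from the master console, copy the package to the cluster
   pscp <upgrade_file> admin@<cluster_ip>:/home/admin/upgrade
   # then, in an SSH session to the cluster
   svcservicetask applysoftware -file <upgrade_file>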

Precaution before upgrade


Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working. Otherwise, the applications may have I/O failures during the software upgrade.

You must also ensure that all I/O paths are working for each host that is running I/O
operations to the SANs during the software upgrade. You can check the I/O paths by using
datapath query commands. See the Subsystem Device Driver User's Guide for the IBM
TotalStorage Enterprise Storage Server and the IBM TotalStorage SAN Volume Controller,
SC26-7540 for more information about datapath query commands. You do not need to check
for hosts that have no active I/O operations to the SANs during the software upgrade.
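
For example, on a host running SDD, you can list the state of each adapter and path with the
following commands; all paths should typically show a State of OPEN and a Mode of
NORMAL before you start the upgrade:

   datapath query adapter
   datapath query device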

Procedure
To upgrade the SVC cluster software, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Upgrade Software link.
2. When prompted, enter the admin user ID and password, and click Yes if prompted with
security alerts concerning certificates.
3. On the Upgrade Software panel shown in Figure 10-125, you can either upload a new
software upgrade file or list the upgrade files. Click the Upload button to upload the latest
SVC cluster code.

Figure 10-125 Update Software panel

4. On the Software Upgrade (file upload) panel (Figure 10-126 on page 346), type or browse
to the directory on your management workstation (for example, master console) where
you stored the latest code level and click Upload.

Figure 10-126 Software Upgrade (file upload)

5. The File Upload panel (Figure 10-127) opens when the file has been uploaded. Click Continue.

Figure 10-127 File upload

6. The Software Upgrade panel (Figure 10-128) lists the available software packages. Make
sure the radio button next to the package you want to apply is selected. Click the Apply
button.

Figure 10-128 Software Upgrade



7. On the confirmation panel (Figure 10-129), click the Confirm button to begin the upgrade
process.

Figure 10-129 Confirm

8. The Software Upgrade Status panel (Figure 10-130) opens. Click the Check Upgrade
Status button periodically. This process may take a while to complete. When the software
is completely upgraded, you see the screen shown in Figure 10-131 on page 348.

Figure 10-130 Software Upgrade Status

Figure 10-131 Upgrade complete

9. The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted in turn. If a node does not restart automatically during
the upgrade, repair that node or manually delete it from the cluster to complete the
process.
10. If you run into an error, go to the Analyze Error Log panel. Search for Software Install
completed. Select the Sort by date with the newest first radio button and then click
Perform. This should list the software install entry near the top. For more information
about how to work with the Analyze Error Log panel, see 10.6.4, “Analyzing the error log”
on page 351.

You have now completed the tasks required to upgrade the SVC software. Click the X icon in
the upper right corner of the display area to close the Upgrade Software panel. Do not close
the browser by mistake.

10.6.2 Running maintenance procedures


To run the maintenance procedures on the SVC cluster, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Run Maintenance Procedures link.
2. Click Start Analysis in Figure 10-132 on page 349. This will analyze the cluster log and
guide you through the maintenance procedures.



Figure 10-132 Maintenance Procedures

3. This generates a new error log file, named errlog_000667_041118_154252 in our case, in
the /dumps/elogs/ directory:
– The errlog part of the file name is generic for all error log files.
– 000667 is the panel name of the current configuration node.
– 041118 is the date (YYMMDD).
– 154252 is the time (HHMMSS).

Figure 10-133 Maintenance error log with unfixed errors

4. Click on the error number in the Error Code column in Figure 10-133. This gives you the
explanation for this error as shown in Figure 10-134 on page 350.

Figure 10-134 Maintenance: error code description.

5. Click on Continue. The maintenance procedure checks if the problem is fixed. If so you
can click the OK button as shown in Figure 10-135.

Figure 10-135 Maintenance procedures: fixing

6. The entry in the error log is now marked as fixed as shown in Figure 10-136. Click Exit.

Figure 10-136 Maintenance procedures: fixed

7. Click the X icon in the upper right corner of the display area in Figure 10-137 on page 351
to close the Run Maintenance Procedures panel. Do not close the browser by mistake.



Figure 10-137 Maintenance procedures: close

10.6.3 Setting error notification


To set up error notification, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Set Error Notifications link.
2. On the Modify Error Notification Settings panel (Figure 10-138), select the level of
notification (default is None) to apply to both SNMP and e-mail alerting. Type the IP
address of your SNMP Manager and the e-mail address to which to forward alerts. Click
Modify Settings.

Figure 10-138 Setting error notification

3. The Modifying Error Notification Settings panel now shows these messages:
The SNMP notification settings have been updated.
The e-mail notification settings have been updated.
4. Click the X icon in the upper right corner of the display area to close the Set Error
Notification panel. Do not close the browser by mistake.

10.6.4 Analyzing the error log


The following types of events and errors are logged in the error log:

• Events: State changes that are detected by the cluster software and that are logged for
informational purposes. Events are recorded in the cluster error log.
• Errors: Hardware or software problems that are detected by the cluster software and that
require some sort of repair. Errors are recorded in the cluster error log.
• Unfixed errors: Errors that were detected and recorded in the cluster error log and that
were not yet corrected or repaired.
• Fixed errors: Errors that were detected and recorded in the cluster error log and that were
subsequently corrected or repaired.

To display the error log for analysis, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance options and then the
Analyze Error Log link.
2. From the Error Log Analysis panel (Figure 10-139), you can choose either the Process or
Clear Log button.

Figure 10-139 Analyzing the error log

a. Select the appropriate radio buttons and click the Process button to display the log for
analysis. The Analysis Options and Display Options radio button boxes allow you to
filter the results of your log enquiry to reduce the output.
You can display the whole log, or you can filter the log so that only errors, events, or
unfixed errors are displayed. You can also sort the results by selecting the appropriate
display options. For example, you can sort the errors by error priority (lowest number =
most serious error) or by date. If you sort by date, you can specify whether the newest
or oldest error is to display at the top of the table. You can also specify the number of
entries you want to display on each page of the table.



Click the Log File Options radio button to use the existing log file or to generate a fresh
one. Using the existing log file displays entries that exist in the log file that was last
generated. If this is the first time you are using this option, no error log exists. To obtain
the latest status of your cluster, or if it is the first time you are using this option, select
the Generate a new error log file option. The errlog_000667_041118_153930 error
log file is created in the /dumps/elogs/ directory and is ready for analysis.
• The errlog part of the file name is generic for all error log files.
• 000667 is the panel name of the current configuration node.
• 041118 is the date (YYMMDD).
• 153930 is the time (HHMMSS).

Figure 10-140 Analyzing Error Log: Process

b. Clicking a Sequence Number gives you the detailed log of this error.

Figure 10-141 Analyzing Error Log: Detailed error analysis

c. Click the Clear Log button at the bottom of the panel in Figure 10-139 on page 352 to
clear the log. If the error log contains unfixed errors, a warning message is displayed
when you click Clear Log.
3. Click the X icon in the upper right corner of the display area to close the Analyze Error Log
panel. Do not close the browser by mistake.

10.6.5 Setting features


To change licensing feature settings, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance options and then the
Set Features link.
2. On the Featurization Settings panel (Figure 10-142 on page 355), consult your license
before you make changes in this panel. If you purchased additional features (for example,
FlashCopy or PPRC) or increase the capacity of your license, make the appropriate
changes. Then click the Update Feature Settings button.



Figure 10-142 Setting features

3. You now see a license confirmation panel as shown in Figure 10-143. Review this panel
and ensure that you are in compliance. If you are in compliance, click I Agree to make the
requested changes take effect.

Figure 10-143 License agreement

4. You return to the Set Features panel (Figure 10-144 on page 356) where your changes
should be reflected.

Figure 10-144 Featurization settings update.

5. Click the X icon in the upper right corner of the display area to close the Set Features
panel. Do not close the browser by mistake.

10.6.6 Viewing the feature log


To view the feature log, which registers the events related to the SVC licensed features,
perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
View Feature Log link.
2. The Feature Log panel (Figure 10-145) opens. It displays the current feature settings and
a log of when changes were made.

Figure 10-145 Feature Log



3. Click the X icon in the upper right corner of the display area to close the View Feature Log
panel. Do not close the browser by mistake.

10.6.7 Dump configuration


To dump configuration information, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance options and then the
Dump Configuration link.

Note: The purpose of performing a dump of the configuration data is only for problem
diagnosis. It is not intended for configuration backup. To back up the SVC configuration
data, see 10.7, “Backing up the SVC configuration” on page 360.

2. On the Dumping Cluster Configuration panel (Figure 10-146), type a unique prefix for the
dump file name. Click OK to create the dump file.

Note: If you don’t specify a file name prefix, SVC uses the word config. The node front
panel name and a time stamp are automatically appended to the file name prefix.

Figure 10-146 Dump configuration

3. Click the X icon in the upper right corner of the display area to close the Dump
Configuration panel. Do not close the browser by mistake.

10.6.8 Listing dumps


To list the dumps that were generated, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
List Dumps link.
2. On the List Dumps panel (Figure 10-147 on page 358), you see several dumps and log
files that were generated over time on this node. They include the configuration dump we
generated in the previous section. Click any of the available links (the underlined text in
the table under the List Dumps heading) to go to another panel that displays the available
dumps. To see the dumps on the other node, you must click Check Other Nodes.

Figure 10-147 List Dumps

For example, if you click the Error Logs link, you see information like the example shown
in Figure 10-148.

Figure 10-148 List Dumps: Error Logs

3. From this panel, you can perform either of the following tasks:



– Click any of the available log file links (indicated by the underlined text) to display the
log in complete detail as shown in Figure 10-149.

Note: By default, the dump and log information that is displayed is available from the
configuration node (in our example SVC1N1). In addition to these files, each node in
the SVC cluster keeps a local software dump file. Occasionally, other dumps are stored
on them. Click the Check Other Nodes button at the bottom of the List Dumps panel to
see which dumps or logs exist on other nodes in your cluster.

Figure 10-149 List Dumps: Error log detail

– Delete one or all of the dump or log files. To delete all, click the Delete All button. To
delete some, select the radio button or buttons to the right of the file and click the
Delete button. On Figure 10-150 on page 360 you have to confirm the deletion by
clicking on Confirm Delete.

Figure 10-150 Confirm Delete

4. Click the X icon in the upper right corner of the display area to close the List Dumps panel.
Do not close the browser by mistake.

10.7 Backing up the SVC configuration


The SVC configuration data is stored on all the nodes in the cluster. It is specially hardened
so that, in normal circumstances, the SVC should never lose its configuration settings.
However, in exceptional circumstances, this data may become corrupted or lost.

This section details the tasks that you can perform to save the configuration data from an
SVC configuration node and restore it. The following configuration information is backed up:
• SVC cluster
• Storage controllers
• Hosts
• I/O groups
• Software licenses
• Managed disks
• MDGs
• SVC nodes
• SSH keys
• Virtual disks
• VDisk-to-host mappings

Before you begin the restore process, consult IBM Support to determine the cause as to why
you cannot access your original configuration data. After the restore process starts, the
original data on the VDisks is destroyed. Therefore, you must ensure that you have a backup
of all user data on the VDisks.

The prerequisites for a successful backup are:

• All nodes in the cluster must be online.
• No object name may begin with an underscore (_).
• Do not run any independent operations that could change the cluster configuration while
the backup command runs.
• Do not make any changes to the fabric or cluster between backup and restore. If changes
are made, back up your configuration again or you may not be able to restore it later.



Note: We recommend that you make a backup of the SVC configuration data after each
major change in the environment, such as defining or changing VDisks, VDisk-to-host
mappings, and so on.

The output of the SVC configuration backup is a file with the name svc.config.backup.xml that
is stored in the C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder on
the SVC master console (where SVCclustername is the SVC cluster name of the
configuration from which you backed up). This differs from backing up the configuration using
the CLI, where the svc.config.backup.xml file is stored in the /tmp folder on the configuration
node and must be copied to an external, secure place for backup purposes.
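
For reference, when backing up from the CLI, the command is run in an SSH session on the
cluster, and the resulting file must then be copied off the configuration node. The following is
a sketch only (the cluster address is a placeholder):

   # in an SSH session to the cluster
   svcconfig backup
   # then, from a workstation, copy the file to a secure location
   pscp admin@<cluster_ip>:/tmp/svc.config.backup.xml .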

Important: We strongly recommend that you change the default names of all objects to
non-default names. For objects with a default name, a warning is produced and the object
is restored with its original name and “_r” appended to it.

10.7.1 Backup procedure


To back up the SVC configuration data, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Backup Configuration link.
2. On the Backing up a Cluster Configuration panel (Figure 10-151), click the Backup button.

Figure 10-151 Backing up a Cluster Configuration data

3. After the configuration backup completes successfully, you see the message shown in
Figure 10-152. Make sure that you read, understand, and document the warning
messages, since they can influence the restore procedure.

Figure 10-152 Configuration backup successful message and warnings

4. Click the X icon in the upper right corner of the display area to close the Backing up a
Cluster Configuration panel. Do not close the browser by mistake.

10.7.2 Restoring the SVC configuration


It is very important that you perform the configuration backup as described in “Backup
procedure” on page 361 periodically and every time you change the configuration of your
cluster.

You can carry out the restore procedure only under the direction of IBM level 3 support.

10.7.3 Deleting configuration backup


This section details the tasks that you can perform to delete the configuration backup files
from the default folder in the SVC master console. You can do this if you have already copied
them to another external and secure place.

To delete the SVC Configuration backup files, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Delete Configuration link.



2. On the Deleting a Cluster Configuration panel (Figure 10-153), click the OK button to
confirm the deletion. This deletes the
C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder (where
SVCclustername is the SVC cluster name on which you are working) on the SVC master
console and all its contents.

Figure 10-153 Deleting a cluster configuration

3. Click Delete to confirm the deletion of the configuration backup data. See Figure 10-154.

Figure 10-154 Deleting a Cluster Configuration confirmation message

4. Click the X icon in the upper right corner of the display area to close the Deleting a Cluster
Configuration panel. Do not close the browser by mistake.


Chapter 11. Copy Services: FlashCopy


The FlashCopy function of the IBM TotalStorage SAN Volume Controller (SVC) provides the
capability to perform a point-in-time (PiT) copy of one or more VDisks.

In this chapter, we describe how FlashCopy works on SVC, and present examples on how to
configure and utilize FlashCopy.

11.1 FlashCopy
The challenge of creating a consistent copy of a data set which is constantly updated can be
met using the FlashCopy function on the SVC. FlashCopy provides the capability to perform
an instantaneous point-in-time (PiT) copy of one or more VDisks. Since FlashCopy is
performed at the block level, it is necessary to flush the cache and OS buffers prior to
executing the FlashCopy in order to ensure consistency at the application level.

11.1.1 How it works


FlashCopy works by defining a FlashCopy mapping consisting of one source VDisk together
with one target VDisk. Multiple FlashCopy mappings can be defined, and PiT consistency can
be observed across multiple FlashCopy mappings using consistency groups; see
“Consistency groups” on page 369.

When FlashCopy is invoked, it makes a copy of a source VDisk to a target VDisk, and the
original contents of the target VDisk are overwritten. When the FlashCopy operation is
performed, the target VDisk presents the contents of the source VDisk as they existed at the
single point in time (PiT) the FlashCopy was invoked. This is often also referred to as a
Time-Zero copy (T0).

When a FlashCopy is started, the source and target VDisks are instantaneously available.
This is because, upon invocation, bitmaps are created to govern and redirect I/O to either the
source or the target VDisk, depending on where the requested block is present, while the
blocks are copied in the background from the source to the target VDisk.

For more details on background copy, see “Grains and the FlashCopy bitmap” on page 371.

For more details on how I/O to FlashCopy source and target VDisk is directed, see 11.1.12,
“I/O handling” on page 382.

Both the source and target VDisks are available for read and write operations even though
the background copy process has not yet completed copying the data from the source to the
target VDisk.

Figure 11-1 on page 367 explains how host I/O toward the source and target VDisks is
directed.



Figure 11-1 Implementation of SVC FlashCopy (diagram: at T0 the FlashCopy command is
issued, and the copy is immediately available after the bitmaps (metadata) are built;
read/write to the copy is possible. At T0 + t, blocks that are not yet written to the target are
read from the source, and before a write to the source, data is copied to the target. When the
background copy is complete, the source and target are logically independent, and the
FlashCopy mapping can be deleted without affecting the target.)

Overview of FlashCopy features


FlashCopy supports these features:
򐂰 The target is the time-zero copy of the source (known as FlashCopy mapping targets)
򐂰 The source VDisk and target VDisk are available (almost) immediately
򐂰 Consistency groups are supported to enable FlashCopy across multiple VDisks
򐂰 The target VDisk may be updated independently of the source VDisk
򐂰 Bitmaps governing I/O redirection (I/O indirection layer) are maintained in both nodes of
the SVC I/O group to prevent a single point of failure
򐂰 It is useful for backup, improved availability and testing

11.1.2 Practical uses for FlashCopy


The business applications for FlashCopy are many and various. An important use is
facilitating consistent backups of changing data. In this application, a FlashCopy is created to
capture a PiT copy. The resulting image is backed up to tertiary storage such as tape. After
the copied data is on tape, the FlashCopy target is redundant.

Different tasks can benefit from the use of FlashCopy. The most common situations are
listed below.

Moving and migrating data


When you need to move a consistent data set from one host to another, FlashCopy can
facilitate this action with a minimum of downtime for the host application that depends on the
source VDisk. As explained earlier, it is very important to quiesce the application on the host
and flush the application and OS buffers. The cache on the SVC is also flushed, using the
FlashCopy prepare command (see “Preparing” on page 379), prior to performing the
FlashCopy.

The data set created on the FlashCopy target is immediately available, as is the source
VDisk.



Backup
FlashCopy does not shorten your backup time, but it allows you to create a PiT consistent
data set (across VDisks) with a minimum of downtime for your source host. The FlashCopy
target can then be mounted on a different host (or the backup server) and backed up. Using
this procedure, the backup speed becomes less important, since the backup does not
require downtime for the host that depends on the source VDisks.

Restore
You can keep periodically created FlashCopy targets online to provide very fast restores of
specific files from the PiT consistent data set presented on the FlashCopy targets; these files
can simply be copied back to the source VDisk when a restore is needed.

Given that the background copy process has completed (that is, entered the copied state; see
“Idling/copied” on page 379), and a complete data set restore is needed, it is possible to
delete the FlashCopy mappings and create corresponding FlashCopy mappings in the
opposite direction. This is often referred to as a FlashBack procedure.

This procedure can be used to very quickly restore to the PiT consistent data set obtained
from the preceding FlashCopy.

Application testing
You can test new applications and new operating system releases against a FlashCopy of
your production data. The risk of data corruption is eliminated, and your application does not
need to be taken offline for an extended period of time to perform the copy of the data.

Data mining is a good example of an area where FlashCopy can help you. Data mining can
now extract data without affecting your application.

11.1.3 FlashCopy mappings


In the SVC, FlashCopy occurs between a source VDisk and a target VDisk. The source and
target VDisks must be equal in size. The minimum granularity that SVC supports for
FlashCopy is an entire VDisk; it is not possible to FlashCopy only part of a VDisk.

The source and target VDisks must both belong to the same SVC Cluster, but may be in
different I/O groups within that Cluster. SVC FlashCopy associates a source VDisk and a
target VDisk together in a FlashCopy mapping. Each VDisk may be a member of only one
FlashCopy mapping and a FlashCopy mapping always has exactly one source and one target
VDisk. Therefore, it is not possible for a VDisk to simultaneously be the source for one
FlashCopy mapping and the target for another.

VDisks, which are members of a FlashCopy mapping, cannot have their size increased or
decreased while they are members of the FlashCopy mapping. The SVC supports the
creation of enough FlashCopy mappings to allow every VDisk to be a member of a
FlashCopy mapping.

A FlashCopy mapping is the act of creating a relationship between a source VDisk and a
target VDisk. FlashCopy mappings can be either stand-alone or a member of a consistency
group. You can perform the act of preparing, starting, or stopping on either the stand-alone
mapping or the consistency group. Figure 11-2 on page 369 illustrates the concept of
FlashCopy mapping.



Figure 11-2 FlashCopy mapping (diagram: VDisk1 as FC_Source is mapped through
FC_Mapping to VDisk1T as FC_Target)

11.1.4 Consistency groups


Consistency groups address the issue where the objective is to preserve data consistency
across multiple VDisks, because the applications have related data which spans multiple
VDisks. A requirement for preserving the integrity of data being written is to ensure that
“dependent writes” are executed in the application's intended sequence. Because the SVC
provides PiT semantics, a self consistent data set is obtained.

Every FlashCopy mapping is part of a consistency group; if no FlashCopy consistency group
is specified upon creation, the FlashCopy mapping belongs to the default group 0 (zero). The
default consistency group 0 is a pseudo consistency group, which means that no commands
can be directed at FlashCopy consistency group 0; it is intended for FlashCopy mappings
that are to be handled as single instances.

FlashCopy commands can be issued to a FlashCopy consistency group, which affects all
FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if not part of
a defined FlashCopy consistency group.

Figure 11-3 illustrates a consistency group consisting of two FlashCopy mappings.

Figure 11-3 FlashCopy consistency group (diagram: Consistency Group 1 contains
FC_Mapping 1, mapping VDisk1 to VDisk1T, and FC_Mapping 2, mapping VDisk2 to
VDisk2T)



Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple
VDisks, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed
2. A second write is executed to update the database
3. A third write is executed to update the database log, indicating that the database update
has completed successfully

The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next. However, if the database log (updates 1 and 3) and the
database itself (update 2) are on different VDisks and a FlashCopy mapping is started during
this update, you need to exclude the possibility that the database itself is copied slightly
before the database log, resulting in the target VDisks seeing writes (1) and (3) but not (2),
because the database was copied before write (2) completed.

In this case, if the database was restarted using the backup made from the FlashCopy target
disks, the database log would indicate that the transaction had completed successfully when,
in fact, it had not, because the FlashCopy of the VDisk with the database file was triggered
(the bitmap was created) before the write reached the disk. The transaction is therefore lost,
and the integrity of the database is in question.

To overcome the issue of dependent writes across VDisks, to create a consistent image of the
client data, it is necessary to perform a FlashCopy operation on multiple VDisks as an atomic
operation. To achieve this condition, the SVC supports the concept of consistency groups.

A FlashCopy consistency group can contain an arbitrary number of FlashCopy mappings, up
to the maximum number of FlashCopy mappings supported by the SVC cluster. FlashCopy
commands can then be issued to the FlashCopy consistency group and thereby
simultaneously to all FlashCopy mappings defined in the consistency group. For example,
when issuing a FlashCopy start command to the consistency group, all of the FlashCopy
mappings in the consistency group are started at the same time, resulting in a PiT copy that
is consistent across all of the FlashCopy mappings contained in the consistency group.

Consistency group zero


For FlashCopy mappings where there is no need for the complexity of consistency groups,
SVC allows a FlashCopy mapping to be treated as an independent entity. In this case, the
FlashCopy mapping will become a member of the pseudo consistency group zero.

For FlashCopy mappings that are configured in this way, the prepare and start commands
are directed at the FlashCopy mapping name or FlashCopy mapping ID rather than the
consistency group ID. A prepare or start command directed toward a FlashCopy mapping
that is a member of any other consistency group is illegal and fails; likewise, the pseudo
consistency group zero itself cannot be started or prepared.

For more information see “Preparing (pre-triggering) the FlashCopy mapping” on page 385.

Maximum configurations
Table 11-1 lists the limitations for FlashCopy properties in SVC 2.1.



Table 11-1 FlashCopy properties, maximum configuration

FlashCopy property                      Maximum  Comment
FC mappings per SVC cluster             2048     Since each FC mapping requires two VDisks,
                                                 the maximum number of FC mappings equals
                                                 half of the maximum number of VDisks.
FC consistency groups per SVC cluster   128      An arbitrary limit policed by the SVC
                                                 software.
FC VDisk per I/O group                  16 TB    There is a per I/O group limit of 16 TB on
                                                 the quantity of source VDisk address space
                                                 which may participate in FC mappings.
FC mappings per consistency group       512      Due to the time taken to prepare a
                                                 consistency group with a large number of
                                                 mappings.

11.1.5 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to both the source and target VDisks when a
FlashCopy mapping is triggered; this is done using a FlashCopy bitmap. The purpose of the
FlashCopy indirection layer is to enable both the source and target VDisks for read and write
I/O immediately after the FlashCopy has been triggered.

To illustrate how the FlashCopy indirection layer works we look at what happens when a
FlashCopy mapping is prepared and subsequently triggered.

When a FlashCopy mapping is invoked the following sequence is applied:


򐂰 Flush write data in cache onto source VDisk or VDisks if part of a consistency group
򐂰 Put cache into write-through on the source VDisk(s)
򐂰 Discard cache for the target VDisk(s)
򐂰 Establish a sync point on all source VDisks in the consistency group (creating the
FlashCopy bitmap)
򐂰 Indirection layer governs all I/O to source and target VDisks
򐂰 Enable cache on both the source and target VDisks

FlashCopy provides the semantics of a PiT copy, using the indirection layer which intercepts
I/Os targeted at either the source or target VDisks. The act of starting a FlashCopy mapping
causes this indirection layer to become active in the I/O path. This occurs as an atomic
command across all FlashCopy mappings in the consistency group. The indirection layer
makes a decision about each I/O. This decision is based upon:
򐂰 The VDisk and logical block number (LBA) to which the I/O is addressed
򐂰 Its direction (read or write)
򐂰 The state of an internal data structure, the FlashCopy bitmap

The indirection layer either allows the I/O through to the underlying storage, redirects the I/O
from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be
copied from the source VDisk to the target VDisk. To explain in more detail which action is
applied for each I/O we first look at the FlashCopy bitmap.

Grains and the FlashCopy bitmap


When data is copied from the source VDisk to the target VDisk, it is copied in units of address
space known as grains. In the SVC, the grain size is 256 KB. The FlashCopy bitmap contains
one bit for each grain. The bit records whether the associated grain has yet been split by
copying the grain from the source to the target. The rate at which the grains are copied across
from the source VDisk to the target VDisk is called the copy rate. By default, the copy rate is
50%, though this can be altered. For more information about copy rates, see “Background
copy rate” on page 380.

The FlashCopy indirection layer algorithm


Imagine the FlashCopy indirection layer as the I/O traffic cop when a FlashCopy mapping is
active. The I/O is intercepted and handled according to whether it is directed at the source
VDisk or the target VDisk, the nature of the I/O (read or write), and the state of the
background copy for the LBAs the I/O addresses (whether or not the grain has been split).

Figure 11-4 illustrates how the background copy runs while I/Os are handled according to the
indirection layer algorithm.

Figure 11-4 I/O processing with FlashCopy (diagram: a tunable background copy rate
governs copying from source to target; I/Os to data already copied are handled as normal;
reads to data not yet copied are redirected to the source volume; writes to data not yet
copied on either volume cause a copy on demand, after which the write is handled as
normal; full data access is retained during the copy)

How the FlashCopy indirection layer handles read and write I/O to the source and target
VDisks is described in more detail in the following topics.

Source reads
Reads of the source are always passed through to the underlying source disk.

Target reads
In order for FlashCopy to process a read from the target disk, it must consult its bitmap. If the
data being read is already copied to the target (grain is split), then the read is sent to the
target disk. If it has not (grain is unsplit), then the read is sent to the source disk. Clearly, this
algorithm requires that while this read is outstanding, no writes are allowed to execute which
would change the data being read from the source. The SVC satisfies this requirement by a
cluster wide locking scheme.

Note: The current implementation of FlashCopy limits the number of concurrent reads to
an unsplit target grain to one. If more than one concurrent read to an unsplit target grain is
received by the FlashCopy mapping layer, then they are serialized.



Writes to the source or target
Where writes occur to source or target to an area (grain) which has not yet been copied,
these will usually be stalled while a copy operation is performed to copy data (the grain) from
the source to the target, to maintain the illusion that the target contains its own copy. A
specific optimization is performed where an entire grain is written to the target VDisk. In this
case the new grain contents are written to the target VDisk, and if this succeeds, then the
grain is marked as split in the FlashCopy bitmap without a copy from the source to the target
having been performed. If the write fails, then the grain is not marked as split.

Summary of the FlashCopy indirection layer algorithm


In Table 11-2 the indirection layer algorithm is summarized.

Table 11-2 Summary table of the FlashCopy indirection layer algorithm

VDisk being  Has the grain been  Host I/O operation
accessed     split (copied)?     Read                    Write
Source       No                  Read from source VDisk  Split grain (copy to target VDisk),
                                                         then write to source VDisk
Source       Yes                 Read from source VDisk  Write to source VDisk
Target       No                  Read from source VDisk  Split grain (copy to target VDisk),
                                                         then write to target VDisk
Target       Yes                 Read from target VDisk  Write to target VDisk

Interaction with the cache


The copy-on-write process, which is applied by the FlashCopy indirection layer when a write
I/O addresses LBAs on the target VDisk that have not yet been copied (from the source
VDisk), can introduce significant latency into write operations. To isolate the host application that
issued the write I/O from this latency, the FlashCopy indirection layer is placed logically below
the cache. This means that the copy latency is typically only seen on destage from the cache
rather than for write operations from the host application which otherwise may be blocked
waiting for the write to complete. In Figure 11-5 on page 374 the logical placement of the
FlashCopy indirection layer is illustrated.



Figure 11-5 Logical placement of the FlashCopy indirection layer.

11.1.6 FlashCopy rules


There is a per I/O group limit of 16 TB on the quantity of the source and target VDisk address
space that may participate in FlashCopy mappings. This address space is allocated in units
of 8 GB. That is to say, that creating a FlashCopy mapping between a pair of VDisks whose
size is less than 8 GB consumes 8 GB of FlashCopy mapping address space.

For SVC 2.1, the maximum number of supported FlashCopy mappings is 2048 per SVC
cluster. This equals half the maximum number of supported VDisks (4096 per SVC cluster),
which means that in a maximum configuration every VDisk can be part of a FlashCopy
mapping, whether as source or target VDisk.



An overview of the rules that apply to FlashCopy on SVC:
򐂰 There is a one-to-one mapping of the source VDisk to the target VDisk.
򐂰 The source and target VDisks can be across I/O groups, but must be within the same
cluster.
򐂰 The complete source VDisk is copied to the target VDisk.
򐂰 Minimum granularity is the entire VDisk.
򐂰 The source and target must be exactly equal in size.
򐂰 One VDisk can only be part of one FlashCopy mapping, either source or target.
򐂰 FlashCopy cannot be performed incrementally.
򐂰 An existing FlashCopy mapping can be stopped.
򐂰 A FlashCopy mapping is persistent until it is explicitly unmapped (deleted).
򐂰 The size of a source and target VDisk cannot be altered (increased or decreased) after
the FlashCopy mapping is created.
򐂰 The maximum quantity of source VDisks per I/O group is 16 TB.

FlashCopy and image mode disks


You can use FlashCopy with an image mode VDisk. Since the source and target VDisks must
be exactly the same size when creating a FlashCopy mapping, a VDisk must be created with
the exact same size as the image mode VDisk.

To create a VDisk with exactly the same size use the command svcinfo lsvdisk -bytes
VDiskName. The size in bytes is then used to create the VDisk to be used in the FlashCopy
mapping.

In Example 11-1 we list the size of the image mode VDisk VD9_img, in bytes. Subsequently
the VDisk VD9_imgT is created specifying the same size.

Example 11-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size.
IBM_2145:admin>svcinfo lsvdisk -bytes VD9_img
id 17
name VD9_img
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 4
mdisk_grp_name MDG4_DS43_IMG
capacity 10736344064
type image
formatted no
mdisk_id 7
mdisk_name mdisk7
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 1
fast_write_state empty



IBM_2145:admin>svctask mkvdisk -size 10736344064 -unit b -name VD9_imgT -mdiskgrp MDG0_DS43
-vtype striped -iogrp 0
Virtual Disk, id [18], successfully created

Tip: Alternatively, the expand and shrink VDisk commands can be used to modify the size
as the commands support specification of the size in bytes. See “Expanding a VDisk” on
page 238 and “Shrinking a VDisk” on page 243 for more information.

An image mode VDisk can be used as either a FlashCopy source or target VDisk.

Note: VDisk extents are not used when the SVC verifies the size of a VDisk. The VDisk
size comparison is based on the size available to the host. Therefore, you need to query
the size of the VDisk using svcinfo lsvdisk -bytes, and create the corresponding VDisk
specifying the equal size in bytes.

The number of bytes is always rounded up to 512 bytes (the size of an LBA).

11.1.7 FlashCopy mapping events


In this section, the series of events that modify the state of a FlashCopy mapping is
explained. The FlashCopy mapping state diagram in Figure 11-6 on page 377 shows an
overview of the states that apply to a FlashCopy mapping.

FlashCopy sequence of events:


1. Associate the source data set with a target location (one or more source and target
VDisks)
2. Create FlashCopy mapping for each source VDisk to the corresponding target VDisk.
The target VDisk must be equal in size to the source VDisk.
3. Discontinue access to the target (application dependent)
4. Prepare (pre-trigger) the FlashCopy:
a. Flush cache for the source.
b. Discard cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.

Each event (numbered in the diagram) that transitions a FlashCopy mapping from one state
to another is explained below.



Figure 11-6 FlashCopy mapping state diagram (diagram: the states idle or copied, preparing,
prepared, copying, stopped, and suspended, connected by the numbered transitions 1
through 6, 4a/4b, and 6b that are explained below)

򐂰 1. Create: A new FlashCopy mapping is created by specifying the source VDisk and the
target VDisk. The operation fails if either the source or target VDisks are already members
of a FlashCopy mapping. The operation also fails if the source and target VDisks are not
equal in size.
򐂰 2. Prepare: The prepare command is directed to either a consistency group of FlashCopy
mappings or to a stand alone FlashCopy mapping. The prepare command places the
FlashCopy mappings in the preparing state. It is important to note that the act of preparing
for start may corrupt any data which previously resided on the target VDisk since cached
writes are discarded. Even if the FlashCopy mapping is never started, the data from the
target may be logically changed by the act of preparing for start.
򐂰 3. Flush done: The FlashCopy relationship moves from the preparing state to the
prepared state automatically after all cached data for the source is flushed and all cached
data for the target is invalidated.

Note: If the flush of data from the cache cannot be completed, then the FlashCopy
mapping enters the stopped state.

򐂰 4. Start: After all of the FlashCopy mappings in a consistency group are in the
prepared state, the FlashCopy mappings can be started. This is often referred to as
triggering the FlashCopy. When started, this is the T0 of the PiT copy.
During the start command, the following actions occur in sequence:



1. New reads and writes to all source VDisks in the consistency group are paused in the
cache layer until all ongoing reads and writes below the cache layer are completed.
2. After all FlashCopy mappings in the consistency group are paused, internal metadata is
set to allow FlashCopy operation, creating the FlashCopy bitmap.
3. After all FlashCopy mappings in the consistency group have their metadata set, read and
write operations are unpaused on the source VDisks.
4. The target VDisks are brought online.
As part of the start command, read, and write caching is enabled for both the source and
target VDisks.
򐂰 5. Copy completed: After every grain of the source VDisk is split (copy progress is
100%), the FlashCopy mapping enters the copied state.
򐂰 6. Delete: This event requests that the specified FlashCopy mapping be deleted. When
deleting a FlashCopy mapping that is in the copied state, the data on the target VDisk is
not affected, because the source and target VDisks are independent in the copied state.

Special FlashCopy events


These are special FlashCopy events.
򐂰 4a/4b. Bitmap offline/online: If access to both SVC nodes in the I/O group which the
source VDisk belongs to is lost while in the copying state, then the FlashCopy mapping
will enter the suspended state, and access to both the source and target VDisks in the
FlashCopy mapping will be suspended. This happens because the FlashCopy bitmap,
which the indirection layer (and the background copy process) depend on, becomes
inaccessible.
– When the FlashCopy bitmap becomes available again (at least one of the SVC nodes
in the I/O group is accessible), the FlashCopy mapping will return to the copying state,
access to the source and target VDisks will be restored, and the background copy
process resumed. Unflushed data which was written to the source or target before the
FlashCopy was suspended is pinned in the cache, consuming resources, until the
FlashCopy Mapping leaves the suspended state.
– Normally two copies of the FlashCopy bitmaps are maintained (in non volatile
memory), one on each of the two SVC nodes making up the I/O group of the source
VDisk. If only one of the SVC nodes in the I/O group which the source VDisk belong to
goes offline then the FlashCopy mapping will continue in the copying state, with a
single copy of the FlashCopy bitmap. When the failed SVC node recovers or a
replacement SVC node is added to the I/O group, up-to-date FlashCopy bitmaps will
be re-established on the resuming SVC node, and again provide a redundant location
for the FlashCopy bitmaps.

Note: If both nodes in the I/O group to which the target VDisk belongs become unavailable,
then host access to the target VDisk is not possible. This is standard SVC behavior and is
unaffected by FlashCopy. The FlashCopy state is also unaffected, and any background copy
continues, given that the I/O group to which the source VDisk belongs is available, since this
is where the FlashCopy bitmap is located.

򐂰 6b. Forced delete: If a FlashCopy mapping in the stopped state is to be deleted, the
-force flag must be used. Deleting a FlashCopy mapping in the stopped state may allow
unflushed write data from the cache to be destaged to what was the target VDisk. This
does not affect the data integrity of the system because following a forced delete, nothing
can be learnt about the contents of the target VDisk. The data contained in the target
VDisk could be anything. The destaging of old data to what was the target VDisk does not
affect the future use of the VDisk. Any new data is written over this old data, in the cache,
or on disk.



򐂰 Modify: A FlashCopy mapping has two parameters which can be modified (apart from
renaming it). These are the background copy rate and the consistency group. The
background copy rate can be modified in any state regardless of whether the FlashCopy
mapping is part of a consistency group. However, attempting to modify a consistency
group in any state other than idle/copied fails.
򐂰 Stop: There are two mechanisms by which a FlashCopy mapping can be stopped. It can
be stopped by a user command or by an I/O error. When a FlashCopy mapping enters the
stopped state the target VDisk is taken offline.

11.1.8 FlashCopy mapping states


In this section the states of a FlashCopy mapping are explained in more detail.

Idling/copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping
exists between the source and target, but they behave as independent VDisks in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the
background copy is running. Reads and writes are executed on the target as though the
contents of the source were instantaneously copied to the target during the start command.
The source and target can be independently updated. Internally, the target depends on the
source for some tracks. Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by user command or by an I/O error. When a FlashCopy
mapping is stopped, any useful data in the target VDisk is lost. Because of this, while the
FlashCopy mapping is in this state, the target VDisk is in the offline state. To regain access to
the target, the mapping must be started again (the previous FlashCopy is lost) or the
FlashCopy mapping must be deleted. While in the stopped state, any data that was written to
the target VDisk and was not flushed to disk before the mapping was stopped, is pinned in the
cache. It cannot be accessed, and takes up space in the cache. This data is destaged after a
subsequent delete command or discarded during a subsequent prepare command. The
source VDisk is accessible, and read and write caching is enabled for the source.

Suspended
The FlashCopy mapping was in the copying state when the I/O group to which the source
VDisk belongs became inaccessible, which means access to the FlashCopy bitmap is lost.
As a consequence, both the source and target VDisks are offline. The background copy process is
halted. When the FlashCopy bitmap becomes available again, the FlashCopy mapping
returns to the copying state. Then, access to the source and target VDisks are restored, and
the background copy process is resumed. Unflushed data that was written to the source or
target before the FlashCopy was suspended is pinned in the cache, consuming resources,
until the FlashCopy mapping leaves the suspended state.

Preparing
Since the FlashCopy function is placed logically below the cache to anticipate any write
latency problem, there must be no read or write data for the target and no write data for the
source in the cache at the time the FlashCopy operation is started, in order for the resulting
copy to be consistent.

Performing the necessary flush and invalidate operations as part of the start command would
unnecessarily delay the triggering, and I/Os received afterward could experience potentially
long latency, since these I/Os would need to wait for the start command to complete the
cache flush. To overcome this, SVC FlashCopy supports the prepare command, which
prepares for a FlashCopy start while I/O continues to the source VDisk.

In the preparing state, the FlashCopy mapping is prepared by the following steps:
򐂰 Flushing any modified write data associated with the source VDisk from the cache. Read
data for the source is left in the cache.
򐂰 Placing the cache for the source VDisk into write through mode, so that subsequent writes
wait until data has been written to disk before completing the write command received
from the host application.
򐂰 Discarding any read or write data associated with the target VDisk from the cache.

While in this state, writes to the source VDisk experience additional latency because the
cache is operating in write through mode. While the FlashCopy mapping is in this state, the
target VDisk is in the offline state.

Before starting the FlashCopy mapping, it is important that any caches on the host level, for
example buffers in host OS or application, are also instructed to flush any outstanding writes
to the source VDisk. We do not intend to cover flushing for the OS or applications in this
redbook as it is beyond the scope.

Prepared
When in the prepared state the FlashCopy mapping is ready to perform a start. While the
FlashCopy mapping is in this state, the target VDisk is in the offline state.

In the prepared state, writes to the source VDisk experience additional latency because the
cache is operating in write through mode.

Summary of FlashCopy mapping states


In Table 11-3, the various FlashCopy mapping states and the corresponding states of the
source and target VDisks are listed.

Table 11-3 FlashCopy mapping state summary due to FlashCopy

State          Source                         Target
               Online/Offline  Cache state    Online/Offline  Cache state
Idling/Copied  Online          Write-back     Online          Write-back
Copying        Online          Write-back     Online          Write-back
Stopped        Online          Write-back     Offline         -
Suspended      Offline         Write-back     Offline         -
Preparing      Online          Write-through  Offline         -
Prepared       Online          Write-through  Offline         -

11.1.9 Background copy rate


A FlashCopy mapping has a property Background Copy Rate. This is expressed as a
percentage and can take values between 0 and 100. The background copy rate can be
changed when the FlashCopy mapping is in any state. If a value of 0 is specified, then
background copy is disabled. This is equivalent to a NOCOPY option. This option is suitable
for short-lived FlashCopy mappings, which are to be used for backup purposes only. Since
the source data set is not expected to change much during the lifetime of the FlashCopy
mapping, it is more efficient in terms of managed disk I/Os not to perform a background copy.
The relationship of the background copy rate value to the attempted number of grains to be
split (copied) per second is shown in Table 11-4.

Table 11-4 Background copy rate

User percentage  Data copied per second  Grains per second
1-10             128 KB                  0.5
11-20            256 KB                  1
21-30            512 KB                  2
31-40            1 MB                    4
41-50            2 MB                    8
51-60            4 MB                    16
61-70            8 MB                    32
71-80            16 MB                   64
81-90            32 MB                   128
91-100           64 MB                   256

The grains/sec numbers represent the maximum number of grains the SVC will copy per
second assuming that the bandwidth to the MDisks can accommodate this.

The SVC is unable to achieve these copy rates if insufficient bandwidth is available from the
SVC nodes to the physical disks making up the managed disks, after taking into account the
requirements of foreground I/O. If this situation arises, then background copy I/O contends for
resources on an equal basis with I/O arriving from hosts. Both tend to see an increase in
latency, and a consequential reduction in throughput with respect to the situation had the
bandwidth not been limited. Degradation is graceful.

Both background copy and foreground I/O continue to make forward progress, and do not
stop, hang, or cause the node to fail. The background copy is performed by one of the nodes
belonging to the I/O group in which the source VDisk resides. This responsibility is failed over
to the other node in the I/O group in the event of the failure of the node performing the
background copy. The background copy is performed “backwards”; that is to say, it starts
with the grain containing the highest LBAs and works backward toward the grain containing
LBA 0. This is done to avoid any unwanted interactions with sequential write streams from
the application using the VDisk.

Important: We do not recommend setting background copy rates higher than 95% unless
this is tested in your environment, since it can severely impact the response time of
applications.
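
As an illustration, the copy rate of an existing mapping can be changed at any time with
svctask chfcmap. The following is a minimal sketch, reusing the FCMap3 mapping name from
the scenario in 11.3; background copy is first disabled (NOCOPY) and then re-enabled at the
default rate:

IBM_2145:admin>svctask chfcmap -copyrate 0 FCMap3
IBM_2145:admin>svctask chfcmap -copyrate 50 FCMap3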

11.1.10 Synthesis
The FlashCopy functionality in SVC simply creates copies of VDisks. All the data in the
source VDisk is copied to the destination VDisk. This includes operating system control
information as well as application data and metadata.

Some operating systems are unable to use FlashCopy without an additional step which is
termed synthesis. In general, synthesis performs some transformation on the operating

system metadata in the target VDisk so that the operating system can use the disk. Operating
system specifics are discussed in Appendix A, “Copy services and open systems” on
page 585.

11.1.11 Metadata management


A bitmap is maintained, with a bit for each grain in the FlashCopy mapping.

The bitmap is maintained in nonvolatile storage on both the SVC nodes of the I/O group the
source VDisk belongs to. While both of these nodes are functioning members of the cluster
the two copies of the bitmaps are updated and kept consistent with one another.

Other SVC nodes, which are not members of the I/O group for the source VDisk, maintain a
volatile pessimistic bitmap of the FlashCopy mapping. Pessimistic means that the bitmap may
hold a bit which indicates that the grain must be copied, where in fact it does not need to be
copied. Access to grains which have not yet been copied are coordinated by the source
extent owner only. The actual reads and writes are performed on the node that wants to
access the grain.

No provision is made for storing bitmaps within the back-end storage.

11.1.12 I/O handling


To understand how the SVC cluster behaves in the presence of errors, it is necessary to
describe how the FlashCopy I/O path operates. It is important to note that the binding of
VDisks to I/O groups affects only the cache and layers above the cache. Below the cache in
the SVC software stack, VDisks are available for I/O on all nodes.

As mentioned earlier, the background copy is performed by one of the nodes belonging to the
source VDisk’s I/O group. It should be clear that any of the nodes in a SVC cluster may want
to submit an I/O from the FlashCopy layer to a VDisk.

It is assumed that the nodes and managed disks in the cluster have complete connectivity. If
the nodes comprising the source I/O group do not have access to the managed disk extents
comprising the target VDisk, then an I/O error occurs, and FlashCopy mapping is probably
stopped. Similarly, the nodes comprising the target I/O group must have access to the
managed disk extents comprising the source VDisk.

11.1.13 Serialization of I/O by FlashCopy


In general, the FlashCopy function in SVC introduces no explicit serialization into the I/O
path. Therefore, many concurrent I/Os are allowed to the source and target VDisks. However,
there is a lock for each grain. The lock can be taken shared or exclusive. The lock is taken in
the following modes under the following conditions:
򐂰 The lock is taken shared for the duration of a read from the target VDisk which touches a
grain which is yet to be split.
򐂰 The lock is taken exclusive during a grain split. This happens prior to FlashCopy actioning
any destage (or write through) from the cache to a grain which is yet to be split (the
destage waits for the grain to be split). The lock is held during the grain split and released
before the destage is processed.

If the lock is held shared, and another process wants to take the lock shared, then this
request is granted unless a process is already waiting to take the lock exclusive. If the lock is
held shared and it is requested exclusive, then the requesting process must wait until all
holders of the shared lock free it. Similarly, if the lock is held exclusive, then a process
wanting to take the lock in either shared or exclusive mode must wait for it to be freed.

11.1.14 Error handling


When a FlashCopy mapping is not copying, the FlashCopy function does not affect the error
handling or reporting of errors in the I/O path. Error handling and reporting are affected by
FlashCopy only when a FlashCopy mapping is copying. These scenarios are described in the
following sections.

Node failure
To explain what the product does, it is necessary to explain a little of how it does it. Normally,
two copies of the FlashCopy bitmaps are maintained, one on each of the two nodes making
up the I/O group of the source VDisk. When a node fails, one copy of the bitmaps for all
FlashCopy mappings whose source VDisk is a member of the failing node’s I/O group
become inaccessible. FlashCopy continues with a single copy of the FlashCopy bitmap
stored in nonvolatile memory on the remaining node in the source I/O group. The cluster
metadata is updated to indicate that the missing node no longer holds up-to-date bitmap
information.

When the failing node recovers or a replacement node is added to the I/O group, up-to-date
bitmaps are re-established on the new node. Once again, it provides a redundant location for
the bitmaps.

If access to both nodes in an I/O group is lost, or if access to the single remaining valid copy
of a FlashCopy bitmap is lost, then any FlashCopy mappings that were in the copying state,
and for which the source was in the lost I/O group, enter the suspended state. As stated,
access to both the source and target VDisks in the FlashCopy mapping is suspended.

If both nodes in the I/O group to which the target VDisk belongs become unavailable, then
host access to the target VDisk is not possible. This is standard SVC behavior and is
unaffected by FlashCopy. The FlashCopy state is unaffected by this, and any background
copy continues.

Path failure (path offline state)


In a fully functioning cluster, all nodes have a software representation of every VDisk in the
cluster within their application hierarchy. Since the SAN which links the SVC nodes to each
other, and to the managed disks, is made up of many independent links, it is possible for
some subset of the nodes to be temporarily isolated from some of the managed disks. When
this happens, the managed disks are said to be path offline on some nodes.

Note: Other nodes may see the managed disks as online, because their connection to the
managed disks is still functioning.

When a managed disk enters the path offline state on a SVC node, all the VDisks which have
any extent on the managed disk, also become path offline. Again, this happens only on the
affected nodes. When a VDisk is path offline on a particular SVC node, this means that host
access to that VDisk through the node will fail with the SCSI sense indicating offline.

Path offline for the source VDisk


If a FlashCopy mapping is in the copying state and the source VDisk goes path offline, this
path offline state is propagated to both the source and target VDisks. Again, note that path
offline is a state that exists on a per-node basis; other nodes may not be affected. If the
source VDisk becomes online again, both the target and source VDisks are brought back
online.



Path offline for the target VDisk
If the target VDisk goes path offline but the source VDisk is still online then above FlashCopy
in the application stack, only the target VDisk is in the path offline state. The source VDisk
remains online.

I/O errors caused by path failures


Special handling is invoked where:
򐂰 A FlashCopy mapping is copying.
򐂰 The source has not suffered a path failure (online).
򐂰 The target has suffered a path failure (path offline).
򐂰 A copy operation is requested, either by:
– A background copy operation
– A host I/O to the source (but specifically not the target)
򐂰 The copy operation fails when the write is attempted to the target.

In this case, the FlashCopy mapping is placed into the stopped state. I/O to the source is
allowed to continue unaffected by FlashCopy. The target VDisk is held OFFLINE and all I/O to
it fails.

Although this situation arises on just one node, the FlashCopy mapping state is held on a
cluster-wide basis and is propagated to all nodes. In all other cases, I/Os failing due to path
failures simply result in the original I/O being failed because of the path failure.

11.1.15 Asynchronous notifications


FlashCopy raises informational error logs when mappings or consistency groups make
certain state transitions as detailed below. These state transitions occur as a result of
configuration events which complete asynchronously, and the informational errors can be
used to generate Simple Network Management Protocol (SNMP) traps to notify the user.
Other configuration events complete synchronously, and no informational errors are logged
as a result of these events.
򐂰 PREPARE_COMPLETED: This is logged when the FlashCopy mapping or consistency
group enters the PREPARED state as a result of a user request to prepare. The user can
now start (or stop) the mapping or group.
򐂰 COPY_COMPLETED: This is logged when the FlashCopy mapping or consistency group
enters the IDLE_COPIED state when it was previously in the copying state. This indicates
that the target disk now contains a complete copy and no longer depends on the source.
򐂰 STOP_COMPLETED: This is logged when the FlashCopy mapping or consistency group
enters the STOPPED state as a result of a user request to Stop. It is distinct from the error
that is logged when a mapping or group enters the stopped state as a result of an I/O error.

11.2 FlashCopy commands


This section lists and explains the commands used to create, modify, and delete FlashCopy
mappings. For complete details about the FlashCopy commands, see IBM TotalStorage
Virtualization Family SAN Volume Controller: Command-Line Interface User's Guide,
SC26-7544.

11.2.1 Creating a FlashCopy mapping


To create a FlashCopy mapping we use the command svctask mkfcmap.



svctask mkfcmap
The svctask mkfcmap command enables you to create a new FlashCopy mapping, which
maps a source VDisk to a target VDisk ready for subsequent copying. When executed, this
command creates a new FlashCopy mapping logical object. This mapping persists until it is
deleted, and must specify the source and target VDisk. The target VDisk must be identical in
size to the source VDisk, otherwise the command fails. The source VDisk and target VDisk
cannot already be part of an existing mapping. This means a VDisk can be either a source or
target VDisk in one, and only one, FlashCopy mapping. When created, the mapping can be
triggered at the PiT the copy is required.

Upon creation the FlashCopy mapping can be named, made a member of a consistency
group, and the background copy rate can be set. If not specified, the FlashCopy mapping is
named fcmap#, is a member of the pseudo consistency group 0, and has a background
copy rate of 50%. These parameters can be changed later using svctask chfcmap, if needed.
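
As an illustration, a mapping could be created as follows. This is a minimal sketch using the
VDisk, mapping, and consistency group names from the scenario in 11.3, and it assumes
those objects already exist:

IBM_2145:admin>svctask mkfcmap -source VDisk1 -target VDisk1T -name FCMap1 -consistgrp FCCG1 -copyrate 80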

11.2.2 Modifying the mapping


To change a FlashCopy mapping we use the command svctask chfcmap.

svctask chfcmap
This command allows you to modify the attributes of an existing mapping. When executed,
you can change the name of the mapping, the copy rate, or the consistency group. When
modifying the name of a mapping, you cannot modify any of the other attributes at the same
time; the name change is mutually exclusive with the other changes.

Modifying the consistency group the mapping belongs to can only be done when the mapping
is inactive. That is, it has not been triggered, or, if it has been triggered, the copy has run to
completion. Similarly, if the target consistency group is active, the mapping cannot be moved.
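
For illustration, a mapping could be renamed as follows (a minimal sketch; FCMap1 is the
mapping name from the scenario in 11.3, and the new name is hypothetical):

IBM_2145:admin>svctask chfcmap -name FCMap1_new FCMap1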

11.2.3 Deleting the mapping


To delete a FlashCopy mapping we use the command svctask rmfcmap.

svctask rmfcmap
This command is used to delete an existing FlashCopy mapping. When the command is
executed it attempts to delete the FlashCopy mapping specified. If the FlashCopy mapping is
active, the command fails unless the -force flag is specified.

Deleting a mapping only deletes the logical relationship between the two VDisks. It does not
affect the VDisks themselves. However, when issued on an active FlashCopy mapping
applying the -force flag, the delete renders the data on the FlashCopy mapping target VDisk
as inconsistent.
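
For illustration, an inactive mapping could be deleted as follows (a minimal sketch; FCMap3
is the mapping name from the scenario in 11.3, and the -force flag would have to be added
only if the mapping were still active):

IBM_2145:admin>svctask rmfcmap FCMap3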

11.2.4 Preparing (pre-triggering) the FlashCopy mapping


To prepare a FlashCopy mapping we use the command svctask prestartfcmap.

svctask prestartfcmap
This command prepares a mapping for triggering. It flushes the cache of any data destined
for the source VDisk, and forces the cache into write through until the mapping is triggered.

When executed, this command prepares a single FlashCopy mapping. The prepare step
ensures that any data that resides in the cache for the source VDisk is first flushed to disk.



This ensures that when the copy is made it is consistent with what the operating system
thinks is on disk.

When the command is issued the FlashCopy mapping enters the preparing state, and upon
completion (when in write-through), it enters the prepared state. At this point, the FlashCopy
mapping is ready for triggering.

Preparing, and the subsequent triggering, is usually performed on a consistency group basis.
Only mappings belonging to consistency group 0 can be prepared on their own.
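
For illustration, a stand-alone mapping could be prepared as follows (a minimal sketch;
FCMap3 is the stand-alone mapping name from the scenario in 11.3):

IBM_2145:admin>svctask prestartfcmap FCMap3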

11.2.5 Preparing (pre-triggering) the FlashCopy consistency group


To prepare a FlashCopy consistency group, we use the command svctask
prestartfcconsistgrp.

svctask prestartfcconsistgrp
This command prepares all FlashCopy mappings in the consistency group for triggering. It
flushes the cache of any data destined for the source VDisks, and forces the cache into write
through until the mapping is triggered.

When executed, this command prepares all FlashCopy mappings in the consistency group.
The prepare step ensures that any data that resides in the cache for the source VDisks is first
flushed to disk. This ensures that when the copy is made it is consistent with what the
operating system thinks is on disks.

When the command is issued, the FlashCopy mappings in the consistency group enter the
preparing state, and upon completion (when all are in write-through), the consistency group
enters the prepared state. At this point, the FlashCopy consistency group is ready for
triggering.
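
For illustration, a consistency group could be prepared as follows (a minimal sketch; FCCG1
is the consistency group name from the scenario in 11.3):

IBM_2145:admin>svctask prestartfcconsistgrp FCCG1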

11.2.6 Starting (triggering) FlashCopy mappings


The command svctask startfcmap is used to start a single FlashCopy mapping.

svctask startfcmap
The command svctask startfcmap triggers a FlashCopy mapping. When invoked, a PiT copy
of the source VDisk is created on the target VDisk.

The FlashCopy mapping must be in the prepared state prior to triggering. However, you can
run this command with the optional -prep flag, which prepares the mapping and triggers the
FlashCopy as soon as the preparation is complete. Note that this means the system controls
when the trigger happens. Since the prepare step may take some time to complete before
the trigger is executed, this can potentially prolong the period your application must be
quiesced. If you want to control the triggering, use the prepare command first; see svctask
prestartfcmap and svctask prestartfcconsistgrp.

When the FlashCopy mapping is triggered it enters the copying state. The way the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0% (NOCOPY), then only data that is subsequently updated on the source is copied to the
destination. This means that the target VDisk can only be used as a backup copy while the
mapping exists in the copying state. If the copy is stopped, the target VDisk is not usable.
When the intention is to end up with a duplicate copy of the source at the target VDisk, it is
necessary to set the background copy rate greater than 0. This means that the system copies
all the data (even unchanged data) to the target VDisk, and eventually reaches the
idle/copied state. At this time, you can delete the FlashCopy mapping and create an
independent PiT copy of the source at the target VDisk.
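
For illustration, a stand-alone mapping could be prepared and triggered in one step with the
-prep flag as follows (a minimal sketch; FCMap3 is the mapping name from the scenario in
11.3):

IBM_2145:admin>svctask startfcmap -prep FCMap3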

11.2.7 Stopping the FlashCopy mapping


The command svctask stopfcmap is used to stop a FlashCopy mapping.

svctask stopfcmap
This command allows you to stop an active (copying) or suspended mapping. When
executed, this command stops a single FlashCopy mapping. When a FlashCopy mapping is
stopped, the target VDisk becomes invalid and is set offline by the SVC. The FlashCopy
mapping needs to be reprepared or retriggered to bring the target VDisk online again.

Note: Stopping a FlashCopy mapping should only be done when the data on the target
VDisk is of no interest. When a FlashCopy mapping is stopped, the target VDisk becomes
invalid and is set offline by the SVC, regardless of the state of the mapping.
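
For illustration, a copying mapping could be stopped as follows (a minimal sketch; FCMap3 is
the mapping name from the scenario in 11.3, and its target VDisk goes offline as described
above):

IBM_2145:admin>svctask stopfcmap FCMap3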

11.2.8 Stopping the FlashCopy consistency group


The command svctask stopfcconsistgrp is used to stop all FlashCopy mappings in a
FlashCopy consistency group.

svctask stopfcconsistgrp
This command allows you to stop an active (copying) or suspended consistency group. When
executed, this command stops all FlashCopy mappings in the consistency group. When a
FlashCopy consistency group is stopped, the target VDisks become invalid and are set offline
by the SVC. The FlashCopy consistency group needs to be reprepared or retriggered to bring
the target VDisks online again.
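A minimal sketch, assuming the consistency group FCCG1 is in the copying state:

IBM_2145:admin>svctask stopfcconsistgrp FCCG1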

11.2.9 Creating the FlashCopy consistency group


The command svctask mkfcconsistgrp is used to create a FlashCopy consistency group.

svctask mkfcconsistgrp
This command allows you to create a new FlashCopy consistency group. When executed, a
new FlashCopy consistency group is created, and the ID of the new group is returned. If you
do not specify a name for the FlashCopy consistency group, a name will automatically be
assigned.

11.2.10 Modifying the FlashCopy consistency group


The command svctask chfcconsistgrp is used to modify a FlashCopy consistency group.

svctask chfcconsistgrp
This command allows you to modify the name of an existing consistency group. When
executed, this command changes the name of the consistency group that is specified.
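For illustration, renaming the group FCCG1 to the hypothetical new name FCCG_DB looks
like this:

IBM_2145:admin>svctask chfcconsistgrp -name FCCG_DB FCCG1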

11.2.11 Deleting the FlashCopy consistency group


The command svctask rmfcconsistgrp is used to delete a FlashCopy consistency group.



svctask rmfcconsistgrp
This command allows you to delete an existing consistency group. When executed, this
command deletes the consistency group specified. If there are mappings that are members of
the group, the command fails unless the -force flag is specified.

If you also want to delete all the mappings in the consistency group, you must first delete
the mappings and then delete the consistency group.
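A minimal sketch; the -force flag is only required if mappings are still members of the group:

IBM_2145:admin>svctask rmfcconsistgrp -force FCCG1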

11.3 FlashCopy scenario using the CLI


In the following scenario we want to FlashCopy the following VDisks:
򐂰 VDisk1 - database files
򐂰 VDisk2 - database log files
򐂰 VDisk3 - application files

Since data consistency is needed across VDisk1 and VDisk2, we create a consistency group
to handle FlashCopy of VDisk1 and VDisk2. Because in this scenario the application files are
independent of the database, we create a stand-alone FlashCopy mapping for VDisk3. The
FlashCopy setup is illustrated in Figure 11-7.

Figure 11-7 FlashCopy scenario using the CLI (Consistency Group 1 contains FC_Mapping 1, which copies VDisk1 to VDisk1T, and FC_Mapping 2, which copies VDisk2 to VDisk2T; the stand-alone FC_Mapping 3 copies VDisk3 to VDisk3T; in each mapping the source is the FC_Source and the target the FC_Target)

Setting up FlashCopy
In the following section, we assume that the target VDisks have already been created.

To set up FlashCopy, the following steps must be performed:


򐂰 Create a FlashCopy consistency group:
– name FCCG1
򐂰 Create FlashCopy mapping for VDisk1:

– source VDisk1
– target VDisk1T
– name FCMap1
– consistency group FCCG1
– copyrate 80%
򐂰 Create FlashCopy mapping for VDisk2:
– source VDisk2
– target VDisk2T
– name FCMap2
– consistency group FCCG1
– copyrate 80%
򐂰 Create FlashCopy mapping for VDisk3:
– source VDisk3
– target VDisk3T
– name FCMap3
– copyrate 0%

In the following, each step is carried out using the CLI.

In Example 11-2, the consistency group FCCG1 is created.

Example 11-2 Creating a FlashCopy consistency group


IBM_2145:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [4], successfully created

In Example 11-3 the FlashCopy mapping for VDisk1 is created.

Example 11-3 Create FlashCopy mapping for VDisk1


IBM_2145:admin>svctask mkfcmap -source VDisk1 -target VDisk1T -name FCMap1 -consistgrp FCCG1 -copyrate 80
FlashCopy Mapping, id [1], successfully created

In Example 11-4 the FlashCopy mapping for VDisk2 is created.

Example 11-4 Create FlashCopy mapping for VDisk2


IBM_2145:admin>svctask mkfcmap -source VDisk2 -target VDisk2T -name FCMap2 -consistgrp FCCG1 -copyrate 80
FlashCopy Mapping, id [2], successfully created

In Example 11-5, the FlashCopy mapping for VDisk3 is created.

Example 11-5 Create FlashCopy mapping for VDisk3


IBM_2145:admin>svctask mkfcmap -source VDisk3 -target VDisk3T -name FCMap3 -copyrate 0
FlashCopy Mapping, id [3], successfully created

In Example 11-6, we list the FlashCopy mappings to verify they were created as intended.

Example 11-6 Listing the FlashCopy mappings


IBM_2145:admin>svcinfo lsfcmap -delim :
id:name:source_VDisk_id:source_VDisk_name:target_VDisk_id:target_VDisk_name:group_id:group_name:status:prog
ress:copy_rate
1:FCMap1:9:VDisk1:10:VDisk1T:4:FCCG1:idle_or_copied::80
2:FCMap2:11:VDisk2:12:VDisk2T:4:FCCG1:idle_or_copied::80
3:FCMap3:13:VDisk3:14:VDisk3T:::idle_or_copied::0



Executing FlashCopy
Now that we have created the FlashCopy mappings and the consistency group, we are ready
to use the FlashCopy mappings in our environment.

When performing the FlashCopy on the VDisks with the database, we want to be able to
control the PiT at which the FlashCopy is triggered, in order to keep our quiesce time at a
minimum. To achieve this, we prepare the consistency group in order to flush the cache for
the source VDisks. In Example 11-7, we execute the command prestartfcconsistgrp; when
listing the FlashCopy consistency groups, we verify that it has entered the prepared state.

Example 11-7 Prestart FlashCopy consistency group


IBM_2145:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:admin>svcinfo lsfcconsistgrp
id name status
4 FCCG1 prepared

Now that the source VDisks are in write-through mode, we flush all database and OS buffers
and quiesce the database. Immediately after the quiesce, we execute the command
startfcconsistgrp as shown in Example 11-8, and afterwards the database can be
resumed. We have now created a PiT consistent copy of the database on the target VDisks.

Example 11-8 Start FlashCopy consistency group


IBM_2145:admin>svctask startfcconsistgrp FCCG1

When executing the single FlashCopy mapping, we decide to let the SVC perform the
FlashCopy triggering as soon as the FlashCopy enters the prepared state. To do this, we
simply issue the command svctask startfcmap with the flag -prep. In Example 11-9, the
single FlashCopy mapping is executed, and we verify that the FlashCopy mapping enters the
copying state.

Example 11-9 Execution of single FlashCopy mapping


IBM_2145:admin>svctask startfcmap -prep FCMap3
IBM_2145:admin>svcinfo lsfcmap FCMap3
id 3
name FCMap3
source_VDisk_id 13
source_VDisk_name VDisk3
target_VDisk_id 14
target_VDisk_name VDisk3T
group_id
group_name
status copying
progress 0
copy_rate 0

We created the single FlashCopy mapping with the background copy rate set to 0%. This
means that unless we issue a command to alter the copy rate, the mapping stays in the
copying state until stopped. This is often referred to as NOCOPY.

To monitor the background copy progress of the FlashCopy mappings in the consistency
group, we issue the command svcinfo lsfcmapprogress for each FlashCopy mapping.
Alternatively, the copy progress can also be queried using the command svcinfo lsfcmap. As
shown in Example 11-10 on page 391, both commands report that the background copy is
56% complete.



Example 11-10 Monitoring background copy progress
IBM_2145:admin>svcinfo lsfcmapprogress FCMap1
id progress
1 56
IBM_2145:admin>svcinfo lsfcmapprogress FCMap2
id progress
2 56
IBM_2145:admin>svcinfo lsfcmap -delim :
id:name:source_VDisk_id:source_VDisk_name:target_VDisk_id:target_VDisk_name:group_id:group_name:status:prog
ress:copy_rate
1:FCMap1:9:VDisk1:10:VDisk1T:4:FCCG1:copying:56:80
2:FCMap2:11:VDisk2:12:VDisk2T:4:FCCG1:copying:56:80
3:FCMap3:13:VDisk3:14:VDisk3T:::copying:0:0

When the background copy has completed, the FlashCopy mapping enters the
idle_or_copied state, and when all FlashCopy mappings in a consistency group enter this
state, the consistency group enters the idle_or_copied state. In this state the FlashCopy
mapping can be deleted and the target VDisk used independently; for example, if another
target disk is to be used for the next FlashCopy of that particular source VDisk.

In Example 11-11, we verify the state of the FlashCopy consistency group and that the
background copy has completed (100% copied), and then we delete the FlashCopy mapping
FCMap2.

Example 11-11 Verifying the copied state and deleting a FlashCopy mapping
IBM_2145:admin>svcinfo lsfcconsistgrp FCCG1
id 4
name FCCG1
status idle_or_copied
FC_mapping_id 1
FC_mapping_name FCMap1
FC_mapping_id 2
FC_mapping_name FCMap2
IBM_2145:admin>svcinfo lsfcmapprogress FCMap1
id progress
1 100
IBM_2145:admin>svcinfo lsfcmapprogress FCMap2
id progress
2 100
IBM_2145:admin>svctask rmfcmap FCMap2
IBM_2145:admin>svcinfo lsfcconsistgrp FCCG1
id 4
name FCCG1
status idle_or_copied
FC_mapping_id 1
FC_mapping_name FCMap1

11.4 FlashCopy scenario using the GUI


In the following we use the same scenario as described in “FlashCopy scenario using the CLI”
on page 388.

The scenario is that we want to FlashCopy the following VDisks:



򐂰 VDisk1 - database files
򐂰 VDisk2 - database log files
򐂰 VDisk3 - application files

Since data consistency is needed across VDisk1 and VDisk2, we create a consistency group
to handle FlashCopy of VDisk1 and VDisk2. Because in this scenario the application files are
independent of the database, we create a stand-alone FlashCopy mapping for VDisk3. The
FlashCopy setup is illustrated in Figure 11-7 on page 388.

Setting up FlashCopy
In the following we assume that the target VDisks have already been created.

To set up FlashCopy, the following steps must be performed:


򐂰 Create a FlashCopy consistency group:
– name FCCG1
򐂰 Create FlashCopy mapping for VDisk1:
– source VDisk1
– target VDisk1T
– name FCMap1
– consistency group FCCG1
– copyrate 80%
򐂰 Create FlashCopy mapping for VDisk2:
– source VDisk2
– target VDisk2T
– name FCMap2
– consistency group FCCG1
– copyrate 80%
򐂰 Create FlashCopy mapping for VDisk3:
– source VDisk3
– target VDisk3T
– name FCMap3
– copyrate 0%

In the following, each step is carried out using the GUI.

Creating a FlashCopy consistency group


First we create the FlashCopy consistency group according to the described scenario.

In the SVC GUI (Figure 11-8 on page 393), we expand Manage Copy Services in the Task
pane and select FlashCopy Consistency Groups.



Figure 11-8 Select FlashCopy Consistency Groups.

When prompted for filtering in Figure 11-9, we select Bypass Filter and click OK.

Figure 11-9 Bypass FlashCopy Consistency Groups Filter.



In Figure 11-10 we select Go to create the consistency group.

Figure 11-10 Create consistency group.

In Figure 11-11 we name the FlashCopy consistency group (FCCG1). Click OK.

Figure 11-11 FlashCopy consistency group name.



When prompted with the mapping results in Figure 11-12, we click OK.

Figure 11-12 Mapping results for created consistency group.

In Figure 11-13 the created FlashCopy consistency group is displayed.

Figure 11-13 Viewing FlashCopy consistency groups.



The FlashCopy consistency group is now created and ready for use.

Creating the FlashCopy mappings


In the SVC GUI shown in Figure 11-14, we expand Manage Copy Services in the Task pane
and select FlashCopy mappings.

Figure 11-14 Select FlashCopy mappings.

When prompted for filtering in Figure 11-15 on page 397, we select Bypass Filter.



Figure 11-15 Bypass FlashCopy mappings filter.

As shown in Figure 11-16, we select Create a Mapping from the scroll menu and click Go to
start the creation process of a FlashCopy mapping.

Figure 11-16 Create FlashCopy mapping



As shown in Figure 11-17, we are presented with an overview of the creation process for a
FlashCopy mapping, and click Next to proceed.

Figure 11-17 FlashCopy mapping overview.

As shown in Figure 11-18 on page 399, we name the FlashCopy mapping FCMap1, select
the previously created consistency group FCCG1, and set the background copy priority to
80% and click Next to proceed.



Figure 11-18 Setting the properties for the FlashCopy mapping.

As shown in Figure 11-19, we define the filter VDisk* for the source VDisk and click Next to
proceed.

Figure 11-19 Filtering source VDisk candidates.



As shown in Figure 11-20, we select VDisk1 as the source disk and click Next to proceed.

Figure 11-20 Selecting source VDisk.

As shown in Figure 11-21, we select the target VDisk1T and click Next to proceed.

Figure 11-21 Selecting target VDisk.



As shown in Figure 11-22, we verify the defined FlashCopy mapping and click Finish to
create it.

Figure 11-22 Verify FlashCopy mapping.

As shown in Figure 11-23 on page 402, we are returned to the FlashCopy mapping list, and
the created FlashCopy mapping is displayed.



Figure 11-23 Viewing FlashCopy mappings.

The remaining FlashCopy mappings are created in the same way (starting with Figure 11-16
on page 397) according to the properties for the scenario. In Figure 11-24 on page 403, the
created FlashCopy mappings are displayed.



Figure 11-24 Viewing all created FlashCopy mappings

Executing FlashCopy
Now that we have created the FlashCopy mappings and the consistency group, we are ready
to use the FlashCopy mappings in our environment.

Note: It is unlikely that you will use the GUI to execute FlashCopy except for testing
purposes, because FlashCopy execution is often performed periodically and at scheduled
times. To ensure that a consistent data set is created, it is crucial to flush application and
OS buffers, and quiesce the application. For this, scripting using the CLI is much more
powerful.

When performing the FlashCopy on the VDisks with the database, we want to be able to
control the PiT at which the FlashCopy is triggered, in order to keep our quiesce time at a
minimum. To achieve this, we prepare the consistency group in order to flush the cache for
the source VDisks.

In Figure 11-25 on page 404, we select the FlashCopy consistency group and Prepare a
consistency group from the action list and click Go.



Figure 11-25 Prepare FlashCopy consistency group.

As shown in Figure 11-26, the FlashCopy consistency group enters the prepared state. To
start the FlashCopy consistency group, we select the consistency group and select Start a
Consistency Group from the scroll menu and click Go.

Figure 11-26 Start FlashCopy consistency group



In Figure 11-27, we are prompted to confirm starting the FlashCopy consistency group. We
now flush the database and OS buffers and quiesce the database, then click OK to start the
FlashCopy consistency group.

Note: Since we have already prepared the FlashCopy consistency group, the Prepare
option is grayed out when we are prompted to confirm starting the FlashCopy consistency
group.

Figure 11-27 Confirm start of FlashCopy consistency group

As shown in Figure 11-28 on page 406, we verify that the consistency group is in the copying
state and subsequently we resume the database.



Figure 11-28 Viewing FlashCopy consistency groups.

To monitor the progress of the FlashCopy mappings in the consistency group, we navigate to
the FlashCopy mappings window. As shown in Figure 11-29 on page 407, the progress of the
background copy is displayed; to update the window, we must click the Refresh button.



Figure 11-29 Viewing the background copy progress.

Note: Even if you click the Refresh button several times, the SVC only updates progress
of the background copy once a minute.

When the background copy is completed for all FlashCopy mappings in the consistency
group, the status is changed to Idle or Copied as shown in Figure 11-30 on page 408.



Figure 11-30 FlashCopy consistency group, Idle or Copied.

Executing a single FlashCopy mapping


When executing the single FlashCopy mapping, we decide to let the SVC perform the
FlashCopy triggering as soon as the mapping enters the prepared state. To do this, we
select the FlashCopy mapping FCMap3, select Start a Mapping from the scroll menu, and
click Go to proceed, as shown in Figure 11-31 on page 409.



Figure 11-31 Selecting a single mapping to be started.

Because we want to start a FlashCopy mapping which has not yet been prepared, the
Prepare box is ticked, as shown in Figure 11-32 on page 410, and we click OK to start the
FlashCopy mapping.



Figure 11-32 Starting a single FlashCopy mapping.

After starting the FlashCopy mapping, we are returned to the FlashCopy mappings list shown
in Figure 11-33.

Figure 11-33 Viewing FlashCopy mappings.



Note: FlashCopy may be invoked from the SVC graphical user interface (GUI), but this
may not make much sense if you plan to handle a large number of FlashCopy mappings or
consistency groups periodically, or at varying times. In this case, creating a script to use
the CLI is much more powerful.




Chapter 12. Copy services: Metro Mirror


This chapter describes the Metro Mirror copy service. Metro Mirror in the IBM TotalStorage
SAN Volume Controller (SVC) is similar to Metro Mirror in the IBM TotalStorage DS family.
The SVC provides a single point of control while enabling Metro Mirror in your SAN
regardless of the disk subsystems used.

Before SVC release 2.1 this function was called Peer to Peer Remote Copy (PPRC).

12.1 Metro Mirror
The general application of Metro Mirror is to maintain two real-time synchronized copies of a
data set. Often, the two copies are geographically dispersed on two SVC clusters, though it is
possible to use Metro Mirror in a single cluster (within an I/O group). If the primary copy fails,
the secondary copy can then be enabled for I/O operation. A typical application of this
function is to set up a dual-site solution using two SVC clusters where the first site is
considered the primary production site, and the second site is considered the failover site,
which is activated when a failure of the first site is detected.

Note: Before SVC release 2.1 this function was called PPRC.

12.1.1 Metro Mirror overview


Metro Mirror works by defining a Metro Mirror relationship between two VDisks of equal size.
To provide management (and consistency) across a number of Metro Mirror relationships,
consistency groups are supported (as with FlashCopy). The SVC provides both intracluster
and intercluster Metro Mirror, as described below.

Intracluster Metro Mirror


Intracluster Metro Mirror can only be performed between VDisks in the same I/O group; it
can therefore be applied within any single I/O group, while Metro Mirror across I/O groups
in an SVC cluster is not supported.

Intercluster Metro Mirror


Intercluster Metro Mirror operations require a pair of SVC clusters connected by a number of
moderately high bandwidth links. The two SVC clusters must be defined in an SVC
partnership, which must be performed on both SVC clusters to establish a fully functional
Metro Mirror partnership.

Using standard single-mode connections, the supported distance between two SVC clusters
in a Metro Mirror partnership is 10 km. For extended distance solutions, contact your IBM
representative.

Note: When a local and a remote fabric are connected together for Metro Mirror purposes,
then the ISL hop count between a local node and a remote node may not exceed seven.

Metro Mirror remote copy technique


Metro Mirror is a synchronous remote copy technique, which is briefly explained below. To
illustrate the differences between synchronous and asynchronous remote copy,
asynchronous remote copy is also explained.

Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique which ensures that updates are
committed at both the primary and the secondary VDisks before the application is given
completion to an update. Figure 12-1 on page 415 illustrates how a write to the master VDisk
is mirrored to the cache of the auxiliary VDisk before an acknowledgment of the write is sent
back to the host issuing the write. This ensures that the secondary is synchronized in real
time, in case it is needed in a failover situation. However, it also means that the application is
fully exposed to the latency and bandwidth limitations of the communication link to the
secondary site. This is the reason for the distance limitations when applying Metro Mirror.



Figure 12-1 Write on VDisk in Metro Mirror relationship (1: the host writes to the master VDisk cache; 2: the write is mirrored to the auxiliary VDisk cache; 3: the auxiliary acknowledges the write; 4: the write is acknowledged to the host)

Asynchronous remote copy


In asynchronous remote copy, the application is given completion to an update when it is sent
to the secondary site, but the update is not necessarily committed at the secondary site at
that time. This provides the capability of performing remote copy over distances exceeding
the limitations of synchronous remote copy. In a failover situation, some updates may be
missing at the secondary, and therefore the application must have some external mechanism
for recovering the missing updates and reapplying them. This mechanism may involve user
intervention.

Asynchronous remote copy is not part of the SVC Copy Services and will not be discussed
further in this book. For further information about asynchronous remote copy see the IBM
TotalStorage Global Mirror available with the IBM TotalStorage DS family.

12.1.2 The importance of write ordering


Many applications that use block storage have a requirement to survive failures, such as loss
of power or a software crash, and not lose data that existed prior to the failure. Since many
applications need to make a large number of updates to that storage, maintaining write
ordering is key to ensuring the correct operation of applications following a disruption.

Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple
VDisks, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.

Figure 12-2 on page 416 illustrates this write sequence.



Figure 12-2 Dependent writes for a database (step 1: the log records "Update record xyz ... started"; step 2: record xyz is written to the database file; step 3: the log records "Update record xyz ... completed")

The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next. Now imagine that the database log and the database itself
are on different VDisks, and the Metro Mirror relationships are stopped during this update. In
this case, you must exclude the possibility that the Metro Mirror relationship for the VDisk
containing the database file is stopped slightly before the relationship for the VDisk
containing the database log, which would result in the secondary VDisks seeing writes (1)
and (3) but not write (2).

In this case, if the database was restarted using the backup made from the secondary disks,
the database log would indicate that the transaction had completed successfully, when that is
not the case. Because the Metro Mirror relationship for the VDisk with the database file was
stopped first, write (2) was committed on the primary but never reached the secondary.
Therefore the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across VDisks, and to ensure a consistent data
set, the SVC supports the concept of consistency groups for Metro Mirror relationships. A
Metro Mirror consistency group can contain an arbitrary number of relationships up to the
maximum number of Metro Mirror relationships supported by the SVC Cluster.

Metro Mirror commands can then be issued to the Metro Mirror consistency group, and
thereby simultaneously for all Metro Mirror relationships defined in the consistency group. For
example, when issuing a Metro Mirror start command to the consistency group, all of the
Metro Mirror relationships in the consistency group are started at the same time.



12.1.3 Practical use of Metro Mirror
To use Metro Mirror, a relationship must be defined between two VDisks. When creating the
Metro Mirror relationship, one VDisk is defined as the master, and the other as the auxiliary.
The relationship between the two copies is not symmetric, and when the Metro Mirror
relationship is created the master VDisk is by default considered the primary copy (often
referred to as the source), and the auxiliary VDisk is considered the secondary copy (often
referred to as the target). The master VDisk is the production VDisk, and updates to this copy
are mirrored in real time to the auxiliary VDisk.

Note: The copy direction for a Metro Mirror relationship can be switched so the auxiliary
VDisk becomes the primary and the master VDisk becomes the secondary.

While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This is only intended to allow boot time
operating system discovery to complete without error, so that any hosts at the secondary site
can be ready to start up the applications with minimum delay if required. For instance, many
operating systems need to read Logical Block Address (LBA) 0 to configure a logical unit.
Although read access is allowed at the secondary, in practice the data on the secondary
volumes cannot be read by a host. The reason for this is that most operating systems write a
“dirty bit” to the file system when it is mounted. Because this write operation is not allowed on
the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads performed at the secondary and later
write I/Os performed at the primary.

To enable access to the secondary VDisk for host operations, the Metro Mirror relationship
must be stopped, specifying the -access parameter. When access to the secondary VDisk for
host operations is enabled, the host must be instructed to mount the VDisk and perform
related tasks before the application can be started, or be instructed to perform a recovery
process.

The Metro Mirror requirement to enable the secondary copy for access differentiates it from,
for example, third-party mirroring software on the host, which aims to emulate a single,
reliable disk regardless of what system is accessing it. Metro Mirror retains the property that
there are two volumes in existence but suppresses one while the copy is being maintained.

Using a secondary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host to establish operation on the
secondary copy are substantial. The goal is to make this rapid (much faster than recovering
from a backup copy) but not seamless. The failover process can be automated through
failover management software. The SVC provides Simple Network Management Protocol
(SNMP) traps and programming (or scripting) towards the Command Line Interface (CLI) to
enable this automation.

12.1.4 SVC Metro Mirror features


The SVC Metro Mirror supports the following features:
򐂰 Synchronous remote copy of geographically dispersed VDisks
򐂰 SVC implements Metro Mirror relationships between VDisk pairs, with each VDisk in the
pair being managed by an SVC cluster
򐂰 SVC supports intracluster Metro Mirror, where both VDisks belong to the same cluster
(and I/O group)



򐂰 SVC supports intercluster Metro Mirror, where each VDisk belongs to a separate SVC
cluster. A given SVC cluster can be configured for partnership with another cluster. A
given SVC cluster can only communicate with one other cluster. All intercluster Metro
Mirror takes place between the two SVC clusters in the configured partnership.
򐂰 Intercluster and intracluster Metro Mirror can be used concurrently within a cluster for
different relationships.
򐂰 SVC does not require a control network or fabric to be installed to manage Metro Mirror.
For intercluster Metro Mirror SVC maintains a control link between the two clusters. This
control link is used to control state and co-ordinate updates at either end. The control link
is implemented on top of the same FC fabric connection as SVC uses for Metro Mirror I/O.
򐂰 SVC implements a configuration model which maintains the Metro Mirror configuration
and state through major events such as failover, recovery, and resynchronization to
minimize user configuration action through these events.
򐂰 SVC maintains and polices a strong concept of consistency and makes this available to
guide configuration activity.
򐂰 SVC implements flexible resynchronization support enabling it to re-synchronize VDisk
pairs which have suffered write I/O to both disks and to resynchronize only those regions
which are known to have changed.

How Metro Mirror works


There are several steps in the Metro Mirror process (an illustrative CLI sketch follows this list):
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro
Mirror).
2. A Metro Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Metro Mirror relationships as one entity, the relationships can be
made part of a Metro Mirror consistency group. This is to ensure data consistency across
multiple Metro Mirror relationships, or simply for ease of management.
4. The Metro Mirror relationship is started, and when the background copy has completed
the relationship is consistent and synchronized.
5. Once synchronized, the relationship is ready for production data.
6. To access the auxiliary VDisk, the Metro Mirror relationship must be stopped with access
enabled; the relationship becomes inconsistent as soon as write I/O is submitted to the
secondary.
7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
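The following is an illustrative sketch of these steps, using the CLI commands described in
12.2, “Metro Mirror commands”. The cluster name SVC_CLUSTER_B and the VDisk and
relationship names are assumptions for illustration only.

On each cluster, pointing at the other:

IBM_2145:admin>svctask mkpartnership -bandwidth 50 SVC_CLUSTER_B

Then, on the cluster owning the master VDisk:

IBM_2145:admin>svctask mkrcrelationship -master VDisk1M -aux VDisk1A -cluster SVC_CLUSTER_B -name MM_Rel1
IBM_2145:admin>svctask startrcrelationship MM_Rel1

To enable host access to the auxiliary VDisk in a failover (step 6):

IBM_2145:admin>svctask stoprcrelationship -access MM_Rel1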

Intercluster communication and zoning


All intercluster communication is performed through the SAN. Prior to creating intercluster
Metro Mirror relationships, you must create a partnership between the two clusters. All SVC
node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, a zone in each fabric must be defined for intercluster
communication; see Chapter 3, “Planning and configuration” on page 25, and pay special
attention to “Zoning” on page 36.

SVC cluster partnership


Each SVC cluster can only be in a partnership with one other SVC cluster. When the SVC
cluster partnership has been defined on both clusters, further communication facilities
between the nodes in each of the clusters are established. These comprise:
򐂰 A single control channel which is used to exchange and coordinate configuration
information



򐂰 I/O channels between each of the nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links
fail, and are repaired to maintain operation where possible. If communication between the
SVC clusters is interrupted or lost, an error is logged (and consequently Metro Mirror
relationships will stop). To handle error conditions the SVC can be configured to raise SNMP
traps to the enterprise monitoring system.

Maintenance of the intercluster link


Each SVC node maintains a database of the other devices that are visible on the fabric. This
is updated as devices appear and disappear. Devices that advertise themselves as SVC
nodes are categorized according to the SVC cluster to which they belong. SVC nodes that
belong to the same cluster establish communication channels between themselves and
begin to exchange messages to implement the clustering and functional protocols of SVC.
Nodes that are in different clusters do not exchange messages after the initial discovery is
complete, unless they have been configured together to perform Metro Mirror.

The intercluster link carries the control traffic to coordinate activity between the two clusters. It
is formed between one node in each cluster which is termed the focal point. One of the FC
logins between the two focal points is selected to carry all the communications traffic. If this
connection should fail because of a problem with the specific path chosen, a new login
between the same two focal points is selected. If the focal point node should fail (or all its
logins to the remote cluster fail), then a new focal point is chosen to carry the control traffic.
Changing the focal point causes I/O to pause but does not cause relationships to become
ConsistentStopped.

Metro Mirror relationship


Metro Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or
combined in consistency groups. Start and stop commands can be issued either against the
stand-alone relationship or the consistency group. Figure 12-3 illustrates the Metro Mirror
relationship.

Figure 12-3 Metro Mirror relationship (MM_Relationship between VDisk1M as MM_Master and VDisk1A as MM_Auxiliary)

A Metro Mirror relationship is composed of two VDisks of equal size. The master VDisk and
the auxiliary VDisk can be in the same I/O group, within the same SVC cluster (intracluster
Metro Mirror), or be on separate SVC clusters that are defined as SVC partners (intercluster
Metro Mirror).

Note:
򐂰 A VDisk can only be part of one Metro Mirror relationship at a time
򐂰 A VDisk which is a FlashCopy target cannot be part of a Metro Mirror relationship

Metro Mirror relationship between primary and secondary VDisk


When creating a Metro Mirror relationship, by default the master VDisk is assigned as the
primary, and the auxiliary VDisk the secondary. This implies that the default copy direction is



mirroring the master VDisk to the auxiliary VDisk, though the copy direction can be reversed if
appropriate, hence mirroring the auxiliary VDisk to the master VDisk.

In most common applications of Metro Mirror, the master VDisk contains the production copy
of the data, and is used by the host application, while the auxiliary VDisk contains the
mirrored copy of the data and is used for failover in disaster recovery scenarios. The terms
master and auxiliary help support this use. If Metro Mirror is applied differently, the terms
master and auxiliary VDisks need to be interpreted appropriately.

Metro Mirror consistency groups


Certain uses of Metro Mirror require the manipulation of more than one relationship. Metro
Mirror consistency groups provide the ability to group relationships so that they are
manipulated in unison.

Consistency groups address the issue where the objective is to preserve data consistency
across multiple Metro Mirrored VDisks because the applications have related data which
spans multiple VDisks. A requirement for preserving the integrity of data being written is to
ensure that “dependent writes” are executed in the application's intended sequence.

Metro Mirror commands can be issued to a Metro Mirror consistency group, which affects all
Metro Mirror relationships in the consistency group, or to a single Metro Mirror relationship if
it is not part of a Metro Mirror consistency group. Figure 12-4 illustrates the concept of Metro
Mirror consistency groups. Because MM_Relationship 1 and 2 are part of the consistency
group, they can be handled as one entity, while the stand-alone MM_Relationship 3 is
handled separately.

Figure 12-4 Metro Mirror consistency group (Consistency Group 1 contains MM_Relationship 1, VDisk1M to VDisk1A, and MM_Relationship 2, VDisk2M to VDisk2A; MM_Relationship 3, VDisk3M to VDisk3A, is stand-alone; in each relationship the master is the MM_Master and the auxiliary the MM_Auxiliary)



򐂰 Metro Mirror relationships can be part of a consistency group, or be stand alone and
therefore handled as single instances.
򐂰 A consistency group can contain zero or more relationships. An empty consistency group,
with zero relationships in it, has little purpose until it is assigned its first relationship except
that it has a name.
򐂰 All the relationships in a consistency group must have matching master and auxiliary SVC
clusters.

Although it is possible to use consistency groups to manipulate sets of relationships that do
not need to satisfy these strict rules, that manipulation can lead to some undesired side
effects. The rules behind consistency mean that certain configuration commands are
prohibited where this would not be the case if the relationship was not part of a consistency
group.

For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Metro Mirror rejects attempts to enable access to the
Secondary VDisks of either application. If one application finishes its background copy much
more quickly than the other, Metro Mirror still refuses to grant access to its secondary, even
though this is safe in this case, because the Metro Mirror policy is to refuse access to the
entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state
model. All the relationships in a non-empty consistency group have the same state as the
consistency group.

12.1.5 Metro Mirror states and events


This section explains the different states of a Metro Mirror relationship and the series of
events that modify these states. The Metro Mirror relationship state diagram in Figure 12-5
on page 422 shows an overview of the states that apply to a Metro Mirror relationship in the
connected state.



Figure 12-5 Metro Mirror mapping state diagram (states: Consistent Stopped, Inconsistent Stopped, Consistent Synchronized, Inconsistent Copying, and Idle; transitions 1a/1b create the relationship, 2a/2b start it, 3b marks background copy complete, 4a/4b stop with access enabled, and 5a starts from Idle)

򐂰 When creating the Metro Mirror relationship, you can specify whether the auxiliary VDisk
is already in sync with the master VDisk, and the background copy process is then
skipped. This is especially useful when creating Metro Mirror relationships for pristine
VDisks.
򐂰 Step 1a: The Metro Mirror relationship is created with the -sync option, and the Metro
Mirror relationship enters the Consistent Stopped state.
򐂰 Step 1b: The Metro Mirror relationship is created without specifying that the master and
auxiliary VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent
Stopped state.
򐂰 Step 2a: When starting a Metro Mirror relationship in the Consistent Stopped state, it
enters the Consistent Synchronized state. This implies that no updates (write I/O) have
been performed on the primary VDisk while in the Consistent Stopped state; otherwise,
the -force option must be specified, and the Metro Mirror relationship then enters the
Inconsistent Copying state while the background copy is started.
򐂰 Step 2b: When starting a Metro Mirror relationship in the Inconsistent Stopped state, it
enters the Inconsistent Copying state, while background copy is started.
򐂰 Step 3b: When the background copy completes, the Metro Mirror relationship transitions
from the Inconsistent Copying state to the Consistent Synchronized state.
򐂰 Step 4a: When stopping a Metro Mirror relationship in the Consistent Synchronized state,
specifying the -access option which enables write I/O on the secondary VDisk, the Metro
Mirror relationship enters the Idle state.



򐂰 Step 4b: To enable write I/O on the secondary VDisk, when the Metro Mirror relationship is
in the Consistent Stopped state, issue the command svctask stoprcrelationship
specifying the -access option, and the Metro Mirror relationship enters the Idle state.
򐂰 Step 5a: When starting a Metro Mirror relationship which is in the Idle state, it is required
to specify the -primary argument to set the copy direction. Given that no write I/O has
been performed (to either master or auxiliary VDisk) while in the Idle state, the Metro
Mirror relationship enters the Consistent Synchronized state. In case write I/O has been
performed to either the master or the auxiliary VDisk, then the -force option must be
specified and the Metro Mirror relationship then enters the Inconsistent Copying state,
while background copy is started.
򐂰 Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an
error), a state transition is applied. For example, this means that Metro Mirror relationships
in the Consistent Synchronized state enter the Consistent Stopped state, and Metro
Mirror relationships in the Inconsistent Copying state enter the Inconsistent Stopped
state.
– In case the connection between the SVC clusters in a partnership is broken, all
(intercluster) Metro Mirror relationships enter a disconnected state. For further
information, refer to “Connected versus disconnected”.

Note: Stand-alone relationships and consistency groups share a common configuration
and state model. This means that all the Metro Mirror relationships in a non-empty
consistency group have the same state as the consistency group.

State overview
The SVC defined concepts of state are key to understanding the configuration concepts and
are therefore explained in more detail below.

Connected versus disconnected


This distinction can arise when a Metro Mirror relationship is created with the two virtual disks
in different clusters. Under certain error scenarios, communications between the two clusters
may be lost. For instance, power may fail causing one complete cluster to disappear.
Alternatively, the fabric connection between the two clusters might fail, leaving the two
clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half the relationship and has only a portion of the
information that was available to it before. Some limited configuration activity is possible, a
subset of what was possible before. The disconnected relationships are portrayed as having
a changed state. The new states describe what is known about the relationship and what
configuration commands are permitted.

When the clusters can communicate again the relationships become connected once again.
Metro Mirror automatically reconciles the two state fragments, taking into account any
configuration or other event that took place while the relationship was disconnected. As a
result, the relationship can either return to the state it was in when it became disconnected or
it can enter a different connected state.

Relationships that are configured between virtual disks in the same SVC cluster (intracluster)
will never be described as being in a disconnected state.



Consistent versus inconsistent
Relationships that contain VDisks operating as secondaries can be described as being
consistent or inconsistent. Consistency groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the secondary to that on the primary virtual disk. It
can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that could have been read by a host
system from the primary if power had failed at some imaginary point in time while I/O was in
progress and power was later restored. This definition assumes that the primary maintains
write ordering and therefore requires that the secondary maintains write ordering also.

Requirements of this definition are:


򐂰 The secondary VDisk contains the data from all writes for which the host has received
good completion, and that data has not been overwritten by a subsequent write.
򐂰 For writes for which the host did not receive good completion (that is it received bad
completion or no completion at all) and the host subsequently performed a read from the
primary of that data and that read returned good completion and no later write was sent,
the secondary contains the same data as that returned by the read from the primary.

Consistent does not mean that the data is up-to-date. A copy can contain data that is frozen
at some point in time in the past.

From the point of view of an application, consistency means that a secondary VDisk contains
the same data as the primary VDisk at the point at which the imaginary power failure
occurred.

If an application is designed to cope with unexpected power failure, this guarantee of
consistency means that the application can use the secondary and begin operation just as
though it was restarted after the hypothetical power failure.

Again, the application is dependent on the key properties of consistency:


򐂰 Write ordering
򐂰 Read stability for correct operation at the secondary

If a relationship, or set of relationships, is inconsistent and an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
򐂰 The application may decide that the data is corrupt and crash or exit with an error code.
򐂰 The application may fail to detect the data is corrupt and return erroneous data.
򐂰 The application may work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Metro
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
Consistency as a concept can be applied to a single relationship or a set of relationships in a
consistency group. Write ordering is a concept that an application can maintain across a
number of disks accessed through multiple systems and therefore consistency must operate
across all those disks.

When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all the interdependent systems which communicate
and exchange information. If two programs or systems communicate and store details as a
result of the information exchanged, then either of the following actions may occur:
򐂰 All the data accessed by the group of systems must be placed into a single consistency
group.



򐂰 The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.

Consistent versus synchronized


A copy which is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the primary and secondary virtual disks are only different in regions where writes
are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data which was frozen at some point in time in the past. Write I/O may have continued
to a primary and not have been copied to the secondary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Metro Mirror tracks the changes
that happen at the primary, but not the order of such changes, nor the details of such changes
(write data). When communication is restored, it is impossible to make the secondary
synchronized without sending write data to the secondary out-of-order, and therefore losing
consistency.

Two policies can be used to cope with this:


򐂰 Take a point-in-time copy of the consistent secondary before allowing the secondary to
become inconsistent. In the event of a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent, though out-of-date, image.
򐂰 Accept the loss of consistency and the loss of a useful secondary while making it
synchronized.

12.1.6 Metro Mirror configuration limits


In Table 12-1 the maximum configuration limits related to Metro Mirror are listed:

Table 12-1 Metro Mirror configuration limits


Parameter                                    Value
Number of Metro Mirror consistency groups    256 per SVC cluster
Number of Metro Mirror relationships         1024 per SVC cluster
Total VDisk size per I/O group               16 TB: the per I/O group limit on the quantity of
                                             primary and secondary VDisk address space which
                                             may participate in Metro Mirror relationships

12.2 Metro Mirror commands


For all the details about the Metro Mirror Commands, see IBM TotalStorage Virtualization
Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544.

The command set for Metro Mirror contains two broad groups:
򐂰 Commands to create, delete and manipulate relationships and consistency groups
򐂰 Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror performs the
work to coordinate configuration activity between the clusters. Some configuration commands



can only be performed when the clusters are connected and fail with no effect when they are
disconnected.

Other configuration commands are permitted even though the clusters are disconnected. The
state is reconciled automatically by Metro Mirror when the clusters become connected once
more.

For any given command, with one exception, a single cluster actually receives the command
from the administrator. This is significant for defining the context for a create relationship
(mkrcrelationship) or create consistency group (mkrcconsistgrp) command; in these cases,
the cluster receiving the command is called the local cluster.

The exception mentioned previously is the command that sets clusters into a Metro Mirror
partnership. The mkpartnership command must be issued to both the local and the remote
cluster.

The commands here are described as an abstract command set. These are implemented as:
򐂰 A command line interface (CLI) which can be used for scripting and automation
򐂰 A graphical user interface (GUI) which can be used for one-off tasks

12.2.1 Listing available SVC cluster partners


To list the SVC clusters available for a partnership, we use the command svcinfo
lsclustercandidate.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for
setting up a two-cluster partnership. This is a prerequisite for creating Metro Mirror
relationships.
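A minimal sketch; the cluster name is an assumption, and the output shown (including its
columns) is illustrative only:

IBM_2145:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020064209CA4 no SVC_CLUSTER_B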

12.2.2 Creating SVC cluster partnership


To create an SVC cluster partnership we use the command svctask mkpartnership.

svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror
partnership between the local cluster and a remote cluster. To establish a fully functional
Metro Mirror partnership, you must issue this command to both clusters. This step is a
prerequisite to creating Metro Mirror relationships between VDisks on the SVC clusters.
When creating the partnership you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster, and if not specified the
bandwidth defaults to 50 MB/s. The bandwidth should be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link. If the parameter is set to
a higher value than the link can sustain, the background copy process will simply use the
actual available bandwidth.

svctask chpartnership
If you need to change the bandwidth available for background copy in an SVC cluster
partnership, the command svctask chpartnership can be used to specify the new
bandwidth.
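A minimal sketch, assuming a remote cluster named SVC_CLUSTER_B; remember that
mkpartnership must also be run on the remote cluster, naming the local one:

IBM_2145:admin>svctask mkpartnership -bandwidth 50 SVC_CLUSTER_B
IBM_2145:admin>svctask chpartnership -bandwidth 30 SVC_CLUSTER_B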

12.2.3 Creating a Metro Mirror consistency group


To create a Metro Mirror consistency group we use the command svctask mkrcconsistgrp.



svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror
consistency group. The Metro Mirror consistency group name must be unique across all
consistency groups known to the clusters owning this consistency group. If the consistency
group involves two clusters, the clusters must be in communication throughout the create
process. The new consistency group does not contain any relationships and will be in the
empty state. Metro Mirror relationships can be added to the group either upon creation or
afterwards, using the svctask chrcrelationship command.
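A minimal sketch, creating an intercluster group spanning the local cluster and the assumed
partner SVC_CLUSTER_B (the group name and the id in the response are illustrative):

IBM_2145:admin>svctask mkrcconsistgrp -name MMCG1 -cluster SVC_CLUSTER_B
RC Consistency Group, id [0], successfully created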

12.2.4 Creating a Metro Mirror relationship


To create a Metro Mirror relationship we use the command svctask mkrcrelationship.

svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship.
This relationship persists until it is deleted. The auxiliary virtual disk must be equal in size to
the master virtual disk or the command will fail, and if both VDisks are in the same cluster,
they must both be in the same I/O group. The master and auxiliary VDisks cannot be in an
existing relationship, and neither disk can be the target of a FlashCopy mapping. This
command returns the new relationship (relationship_id) when successful. When creating the
Metro Mirror relationship, it can be added to an already existing consistency group, or be a
stand-alone Metro Mirror relationship if no consistency group is specified.

To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Metro Mirror relationship, you can use the command svcinfo
lsrcrelationshipcandidate, explained below.

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list VDisks eligible for a
Metro Mirror relationship. When issuing the command, you can specify the master VDisk
name and auxiliary cluster to list candidates that comply with the prerequisites to create a
Metro Mirror relationship. If the command is issued with no flags, all VDisks that are not
disallowed by some other configuration state, such as being a FlashCopy target, are listed.
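A minimal sketch, reusing the VDisk names from Figure 12-4 on page 420 and the assumed
names MMCG1 and MM_Rel1 (the id in the response is illustrative):

IBM_2145:admin>svctask mkrcrelationship -master VDisk1M -aux VDisk1A -cluster SVC_CLUSTER_B -name MM_Rel1 -consistgrp MMCG1
RC Relationship, id [0], successfully created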

12.2.5 Changing a Metro Mirror relationship


To modify the properties of a Metro Mirror relationship we use the command svctask
chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a
Metro Mirror relationship:
򐂰 Change the name of a Metro Mirror relationship
򐂰 Add a relationship to a group
򐂰 Remove a relationship from a group, using the -force flag

Note: When adding a Metro Mirror relationship to a consistency group that is not empty,
the relationship must have the same state and copy direction as the group in order to be
added to it.
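For illustration (MM_Rel1 and MMCG1 as assumed above; the new relationship name is
hypothetical):

IBM_2145:admin>svctask chrcrelationship -name MM_Rel1_DB MM_Rel1
IBM_2145:admin>svctask chrcrelationship -consistgrp MMCG1 MM_Rel1_DB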



12.2.6 Changing a Metro Mirror consistency group
To change the name of a Metro Mirror consistency group we use the command svctask
chrcconsistgrp.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror
consistency group.

12.2.7 Starting a Metro Mirror relationship


To start a stand-alone Metro Mirror relationship we use the command svctask
startrcrelationship.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro
Mirror relationship. When issuing the command, the copy direction can be set if it is
undefined, and, optionally, the secondary VDisk of the relationship can be marked as clean.
The command fails if it is used to attempt to start a relationship that is part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that
is idling, this command assigns a copy direction (primary and secondary roles) and begins
the copy process. Otherwise this command restarts a previous copy process that was
stopped either by a stop command or by some I/O error.

If the resumption of the copy process leads to a period when the relationship is not
consistent, then you must specify the -force flag when restarting the relationship. This
situation can arise if, for example, the relationship was stopped, and then further writes were
performed on the original primary of the relationship. The use of the -force flag here is a
reminder that the data on the secondary will become inconsistent while resynchronization
(background copying) is in progress, and is therefore not usable for disaster recovery
purposes before the background copy has completed.

In the idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the primary argument, but it must match the existing
setting.
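
For example, to restart an idling relationship with the master as the primary, forcing the restart because consistency will temporarily be lost (the name is illustrative; see also Example 12-14 on page 441):

IBM_2145:admin>svctask startrcrelationship -primary master -force MMRel3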

12.2.8 Stopping a Metro Mirror relationship


To stop a stand-alone Metro Mirror relationship we use the command svctask
stoprcrelationship.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a
relationship. It may also be used to enable write access to a consistent secondary VDisk by
specifying the -access flag.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a consistency group. You can issue this command to stop a
relationship that is copying from primary to secondary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary virtual disk. For a relationship in the
ConsistentSynchronized state, this command causes a consistency freeze.



When a relationship is in a consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state) then the -access argument may
be used with the stoprcrelationship command to enable write access to the secondary
virtual disk.
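
For example, to stop a relationship and enable write access to the secondary VDisk (the name is illustrative; see also Example 12-11 on page 440):

IBM_2145:admin>svctask stoprcrelationship -access MMRel3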

12.2.9 Starting a Metro Mirror consistency group


To start a Metro Mirror consistency group, we use the command svctask startrcconsistgrp.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group.
This command can only be issued to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise this command restarts a previous
copy process that was stopped either by a stop command or by some I/O error.
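
For example, for an illustrative group name of MMCG1 (see also Example 12-8 on page 438):

IBM_2145:admin>svctask startrcconsistgrp MMCG1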

12.2.10 Stopping a Metro Mirror consistency group


To stop a Metro Mirror consistency group, we use the command svctask stoprcconsistgrp.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro
Mirror consistency group. It may also be used to enable write access to the secondary VDisks
in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary virtual disks belonging to the relationships in the
group. For a consistency group in the ConsistentSynchronized state, this command causes a
consistency freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), then the -access argument may
be used with the svctask stoprcconsistgrp command to enable write access to the
secondary VDisks within that group.
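
For example, to stop a group and enable write access to its secondary VDisks (the group name is illustrative; see also Example 12-13 on page 441):

IBM_2145:admin>svctask stoprcconsistgrp -access MMCG1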

12.2.11 Deleting a Metro Mirror relationship


To delete a Metro Mirror relationship, we use the command svctask rmrcrelationship.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two virtual disks. It
does not affect the virtual disks themselves.

If the relationship is disconnected at the time that the command is issued, then the
relationship is only deleted on the cluster on which the command is being run. When the
clusters reconnect, then the relationship is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on
both clusters, you can issue the rmrcrelationship command independently on both of the
clusters.



A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.

If you delete an inconsistent relationship, the secondary virtual disk becomes accessible even
though it is still inconsistent. This is the one case in which Metro Mirror does not inhibit
access to inconsistent data.
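
For example, assuming an illustrative relationship name of MMRel3:

IBM_2145:admin>svctask rmrcrelationship MMRel3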

12.2.12 Deleting a Metro Mirror consistency group


To delete a Metro Mirror consistency group, we use the command svctask rmrcconsistgrp.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror consistency group.
This command deletes the specified consistency group. You can issue this command for any
existing consistency group. If the consistency group is disconnected at the time that the
command is issued, then the consistency group is only deleted on the cluster on which the
command is being run. When the clusters reconnect, then the consistency group is
automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.

If the consistency group is not empty, then the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
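
For example, assuming an illustrative group name of MMCG1:

IBM_2145:admin>svctask rmrcconsistgrp MMCG1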

12.2.13 Reversing a Metro Mirror relationship


To reverse a Metro Mirror relationship, we use the command svctask switchrcrelationship.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the primary and
secondary VDisks when a stand-alone relationship is in a consistent state; when issuing the
command, the desired primary is specified.
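
For example, to make the auxiliary VDisk the primary of an illustrative relationship MMRel3 (see also Example 12-16 on page 442):

IBM_2145:admin>svctask switchrcrelationship -primary aux MMRel3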

12.2.14 Reversing a Metro Mirror consistency group


To reverse a Metro Mirror consistency group, we use the command svctask
switchrcconsistgrp.

svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the primary and
secondary VDisks when a consistency group is in a consistent state. This change is applied to
all the relationships in the consistency group; when issuing the command, the desired
primary is specified.
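
For example, to make the auxiliary VDisks the primaries of an illustrative group MMCG1 (see also Example 12-17 on page 443):

IBM_2145:admin>svctask switchrcconsistgrp -primary aux MMCG1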

12.2.15 Detailed states


The following sections detail the states that are portrayed to the user, for either consistency
groups or relationships. They also detail the extra information available in each state. The
different major states are constructed to provide guidance as to the configuration commands
that are available.

InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O but the
secondary is not accessible for either. A copy process needs to be started to make the
secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a Stop command which has caused the
copy process to stop.

A Start command causes the relationship or consistency group to move to the
InconsistentCopying state. A Stop command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the secondary side
transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O but the
secondary is not accessible for either.

This state is entered after a Start command is issued to an InconsistentStopped relationship
or consistency group. It is also entered when a forced start is issued to an idling or
ConsistentStopped relationship or consistency group.

A background copy process runs which copies data from the primary to the secondary virtual
disk.

In the absence of errors, an InconsistentCopying relationship is active, and the Copy
Progress increases until the copy process completes. In some error situations, the copy
progress may freeze or even regress.

A persistent error or Stop command places the relationship or consistency group into
InconsistentStopped state. A Start command is accepted, but has no effect.

If the background copy process completes on a stand-alone relationship, or on all
relationships for a consistency group, the relationship or consistency group transitions to
ConsistentSynchronized.

If the relationship or consistency group becomes disconnected, then the secondary side
transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it may
be out-of-date with respect to the primary.

This state can arise when a relationship was in ConsistentSynchronized state and suffers an
error which forces a consistency freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and
the secondary is no longer synchronized (the synchronized attribute is set to FALSE). In this
case, to re-establish synchronization, consistency must be given up for a period. A Start
command with the Force option must be used to acknowledge this, and the relationship or
consistency group transitions to InconsistentCopying. Do this only after all outstanding
errors are repaired.



In the unusual case where the primary and secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a Start command takes the
relationship to ConsistentSynchronized. No Force option is required. Also in this unusual
case, a Switch command is permitted which moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and secondary.

If the relationship or consistency group becomes disconnected, then the secondary side
transitions to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters
the ConsistentStopped state with a status of Online. This can be configured to enable an
SNMP trap and provide a trigger to automation software to consider issuing a Start
command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O.
The secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks.
Either good completion must be received for both writes, or the write must be failed to the
host, or a state transition out of ConsistentSynchronized must take place before a write is
completed to the host.

A Stop command takes the relationship to ConsistentStopped state. A Stop command with
the -access argument takes the relationship to the Idling state.

A Switch command leaves the relationship in the ConsistentSynchronized state, but reverses
the primary and secondary roles.

A Start command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.

Idling
This is a connected state. Both master and auxiliary disks are operating in the primary role.
Consequently both are accessible for write I/O.

In this state, the relationship or consistency group accepts a Start command. Metro Mirror
maintains a record of regions on each disk which received write I/O while Idling. This is used
to determine what areas need to be copied following a Start command.

The Start command must specify the new copy direction. A Start command can cause a
loss of consistency if either virtual disk in any relationship has received write I/O. This is
indicated by the synchronized status. If the Start command leads to loss of consistency, then
a Forced flag must be specified.

Following a Start command, the relationship or consistency group transitions to
ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there
is.

Also, while in this state, the relationship or consistency group accepts a Clean option on the
Start command.

If the relationship or consistency group becomes disconnected, then both sides change their
state to IdlingDisconnected.



IdlingDisconnected
This is a disconnected state. The virtual disk or disks in this half of the relationship or
consistency group are all in the primary role and accept read or write I/O.

The main priority in this state is to recover the link and make the relationship or consistency
group connected once more.

No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state which is entered depends on the state of the other half of the
relationship or consistency group, which depends on:
• The state when it became disconnected
• The write activity since it was disconnected
• The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes idling when
reconnected.

While IdlingDisconnected, if a write I/O is received which causes loss of synchronization (the
synchronized attribute transitions from TRUE to FALSE) and the relationship was not already
stopped (either through a user stop or a persistent error), then an error log is raised to give
notification of this. This error log is the same as that raised when the same situation arises in
the ConsistentSynchronized state.

InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency
group are all in the secondary role and do not accept read or write I/O.

No configuration activity except for deletes is permitted until the relationship becomes
connected again.

When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either:
• The relationship was InconsistentStopped when it became disconnected
• The user issued a Stop while disconnected

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group
are all in the secondary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the
secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This corresponds to the time of the last successful heartbeat to the other
cluster.

A Stop with EnableAccessFlag set to TRUE transitions the relationship or consistency group to
IdlingDisconnected state. This allows write I/O to be performed to the virtual disks and is used
as part of a disaster recovery scenario.



When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this does not lead to a loss of
Consistency. This is the case provided:
• The relationship was ConsistentSynchronized when it became disconnected.
• No writes received successful completion at the primary while disconnected.

Otherwise the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to consistency groups. It is the state of a consistency group which has
no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group, at which point the state of the relationship becomes the state
of the consistency group.

12.2.16 Background copy


Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships which are in
InconsistentCopying state with a Status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between
the nodes that are performing background copy for one of the eligible relationships. This
allocation is made without regard for the number of disks that node is responsible for. Each
node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
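
For example, if the partnership bandwidth is set to 200 MB/s and four nodes are performing background copy for eligible relationships, each node receives a 50 MB/s allocation; a node performing the background copy for two relationships would then drive each at roughly 25 MB/s (the figures are illustrative, derived from the division rule described above).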

For intracluster relationships, each node is assigned a static quota of 50 MB/s.

12.3 Metro Mirror scenario using the CLI


In the following scenario, we want to set up Metro Mirror for the following VDisks from SVC1
to SVC2 at the secondary site.
• VDisk1 - database files
• VDisk2 - database log files
• VDisk3 - application files

Since data consistency is needed across VDisk1 and VDisk2, we create a consistency group
to handle Metro Mirror of VDisk1 and VDisk2. Because, in this scenario, the application files
are independent of the database, we create a stand-alone Metro Mirror relationship for
VDisk3. The Metro Mirror setup is illustrated in Figure 12-6 on page 435.



Figure 12-6 Metro Mirror scenario using the CLI. Consistency Group 1 contains MM_Relationship 1 (master VDisk1M to auxiliary VDisk1A) and MM_Relationship 2 (master VDisk2M to auxiliary VDisk2A); MM_Relationship 3 (master VDisk3M to auxiliary VDisk3A) is stand-alone. The masters reside on SVC1 at Site1 and the auxiliaries on SVC2 at Site2.

Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, the following steps must be performed:


• Create SVC partnership between SVC1 and SVC2, on both SVC clusters
  – bandwidth 300 MB/s
• Create a Metro Mirror consistency group
  – name MMCG1
• Create the Metro Mirror relationship for VDisk1M
  – master VDisk1M
  – auxiliary VDisk1A
  – auxiliary SVC cluster SVC2
  – name MMRel1
  – consistency group MMCG1
• Create the Metro Mirror relationship for VDisk2M
  – master VDisk2M
  – auxiliary VDisk2A
  – auxiliary SVC cluster SVC2
  – name MMRel2
  – consistency group MMCG1
• Create the Metro Mirror relationship for VDisk3M
  – master VDisk3M
  – auxiliary VDisk3A
  – auxiliary SVC cluster SVC2
  – name MMRel3

In the following section, each step is carried out using the CLI.



Creating SVC partnership between SVC1 and SVC2, on both SVC clusters
To verify connectivity to the remote SVC cluster, we list cluster candidates from the local SVC
cluster SVC1. In Example 12-1, we see that SVC2 is an eligible SVC cluster candidate for an
SVC cluster partnership.

Example 12-1 Listing available SVC cluster for partnership


IBM_2145:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020062C00298 no SVC2

In Example 12-2, we create the partnership from SVC1 to SVC2, setting the bandwidth to be
used for background copy to 300 MB/s. To verify the creation of the partnership, we issue
the command svcinfo lscluster and see that the partnership is only partially configured,
since we have not yet created the partnership from SVC2 to SVC1.

Example 12-2 Creating the partnership from SVC1 to SVC2


IBM_2145:admin>svctask mkpartnership -bandwidth 300 SVC2
IBM_2145:admin>svcinfo lscluster
id name location partnership bandwidth cluster_IP_address cluster_service_IP_address id_alias
00000200626006C8 SVC1 local 9.42.164.155 9.42.164.156 00000200626006C8
0000020062C00298 SVC2 remote partially_configured_local 300 9.42.164.150 9.42.164.151 0000020062C00298

In Example 12-3, we create the partnership from SVC2 to SVC1, setting the bandwidth to be
used for background copy to 300 MB/s.

For completeness, we issue svcinfo lscluster and svcinfo lsclustercandidate prior to
creating the partnership. After creating the partnership, we verify that the partnership is fully
configured by re-issuing the svcinfo lscluster command.

Example 12-3 Creating the partnership from SVC2 to SVC1


IBM_2145:admin>svcinfo lscluster
id name location partnership bandwidth cluster_IP_address cluster_service_IP_address id_alias
0000020062C00298 SVC2 local 9.42.164.150 9.42.164.151 0000020062C00298
IBM_2145:admin>svcinfo lsclustercandidate
id configured cluster_name
00000200626006C8 yes SVC1
IBM_2145:admin>svctask mkpartnership -bandwidth 300 SVC1
IBM_2145:admin>svcinfo lscluster
id name location partnership bandwidth cluster_IP_address cluster_service_IP_address id_alias
0000020062C00298 SVC2 local 9.42.164.150 9.42.164.151 0000020062C00298
00000200626006C8 SVC1 remote fully_configured 300 9.42.164.155 9.42.164.156 00000200626006C8

Creating a Metro Mirror consistency group


In Example 12-4, we create the Metro Mirror consistency group to be used for the Metro
Mirror relationships for the VDisks with the database and log files.

Example 12-4 Creating the Metro Mirror consistency group MMCG1


IBM_2145:admin>svctask mkrcconsistgrp -cluster SVC2 -name MMCG1
RC Consistency Group, id [253], successfully created

Creating the Metro Mirror relationships for VDisk1M and VDisk2M


In Example 12-5 on page 437, we create the Metro Mirror relationships MMRel1 and MMRel2
and make them members of the Metro Mirror consistency group MMCG1. To verify the
created Metro Mirror relationships, we list them with the command svcinfo
lsrcrelationship, using a colon as the delimiter.

Example 12-5 Creating the Metro Mirror relationships MMRel1 and MMRel2.
IBM_2145:admin>svctask mkrcrelationship -master VDisk1M -aux VDisk1A -cluster SVC2 -consistgrp MMCG1 -name
MMRel1
RC Relationship, id [3], successfully created
IBM_2145:admin>svctask mkrcrelationship -master VDisk2M -aux VDisk2A -cluster SVC2 -consistgrp MMCG1 -name
MMRel2
RC Relationship, id [2], successfully created
IBM_2145:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress
2:MMRel2:00000200626006C8:SVC1:2:VDisk2M:0000020062C00298:SVC2:11:VDisk2A:master:253:MMCG1:inconsistent_sto
pped:50:0
3:MMRel1:00000200626006C8:SVC1:3:VDisk1M:0000020062C00298:SVC2:12:VDisk1A:master:253:MMCG1:inconsistent_sto
pped:50:0

Creating the stand-alone Metro Mirror relationship for VDisk3M


In Example 12-6, we create the stand-alone Metro Mirror relationship MMRel3. Because
VDisk3M and VDisk3A are both pristine (they have not yet been mapped to a host), we
specify the -sync flag to indicate that the two VDisks in the relationship are already
synchronized, so there is no need to copy all blocks from the master to the auxiliary VDisk.

When listing the Metro Mirror relationship MMRel3, we notice that it is in the consistent
stopped state.
Example 12-6 Creating a stand-alone Metro Mirror relationship for VDisks already in sync (pristine)
IBM_2145:admin>svctask mkrcrelationship -master VDisk3M -aux VDisk3A -cluster SVC2 -sync -name MMRel3
RC Relationship, id [29], successfully created
IBM_2145:admin>svcinfo lsrcrelationship MMRel3
id 29
name MMRel3
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 29
master_vdisk_name VDisk3M
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 10
aux_vdisk_name VDisk3A
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress
freeze_time
status online
sync in_sync

Executing Metro Mirror


Now that we have created the Metro Mirror consistency group and relationships, we are
ready to use the Metro Mirror relationships in our environment.



When implementing Metro Mirror, the goal is to reach a consistent and synchronized state
which can provide redundancy for a dataset, in case a hardware failure occurs that affects the
SAN at the production site.

In the following section, we show how to stop and start the stand-alone Metro Mirror
relationship and the consistency group.

Starting a stand-alone Metro Mirror relationship


In Example 12-7, we start the stand-alone Metro Mirror relationship MMRel3. Because the
Metro Mirror relationship was in the consistent stopped state and no updates have been made
on the primary VDisk, the relationship enters the consistent synchronized state, and the
master VDisk is ready to be mapped to the host (see Figure 12-5 on page 422).

Example 12-7 Starting the stand-alone Metro Mirror relationship


IBM_2145:admin>svctask startrcrelationship MMRel3
IBM_2145:admin>svcinfo lsrcrelationship MMRel3
id 29
name MMRel3
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 29
master_vdisk_name VDisk3M
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 10
aux_vdisk_name VDisk3A
primary master
consistency_group_id
consistency_group_name
state consistent_synchronised
bg_copy_priority 50
progress
freeze_time
status online
sync

Starting a Metro Mirror consistency group


In Example 12-8, we start the Metro Mirror consistency group MMCG1. Because the
consistency group was in the inconsistent stopped state, it enters the inconsistent copying
state until the background copy has completed for all relationships in the consistency group.
Upon completion of the background copy for all relationships in the consistency group, it
enters the consistent synchronized state (see Figure 12-5 on page 422).

Example 12-8 Starting the Metro Mirror consistency group


IBM_2145:admin>svctask startrcconsistgrp MMCG1
IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status online



sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1

Monitoring background copy progress


To monitor the background copy progress, we can list either all Metro Mirror relationships or
a specific Metro Mirror relationship, as shown in Example 12-9.

Note: Setting up SNMP traps for the SVC enables automatic notification when Metro
Mirror consistency groups or relationships change state.

Example 12-9 Listing the Metro Mirror relationships to monitor background copy progress
IBM_2145:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress
2:MMRel2:00000200626006C8:SVC1:2:VDisk2M:0000020062C00298:SVC2:11:VDisk2A:master:253:MMCG1:inconsistent_cop
ying:50:33
3:MMRel1:00000200626006C8:SVC1:3:VDisk1M:0000020062C00298:SVC2:12:VDisk1A:master:253:MMCG1:inconsistent_cop
ying:50:33
29:MMRel3:00000200626006C8:SVC1:29:VDisk3M:0000020062C00298:SVC2:10:VDisk3A:master:::consistent_synchronise
d:50:
IBM_2145:admin>svcinfo lsrcrelationship MMRel1
id 3
name MMRel1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 3
master_vdisk_name VDisk1M
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 12
aux_vdisk_name VDisk1A
primary master
consistency_group_id 253
consistency_group_name MMCG1
state inconsistent_copying
bg_copy_priority 50
progress 34
freeze_time
status online
sync

When all the Metro Mirror relationships complete the background copy, the consistency group
enters the consistent synchronised state, shown in Example 12-10.

Example 12-10 Listing the Metro Mirror consistency group


IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2



primary master
state consistent_synchronised
relationship_count 2
freeze_time
status online
sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1

Stopping a stand-alone Metro Mirror relationship


In Example 12-11, we stop the stand-alone Metro Mirror relationship while enabling access
(write I/O) to both the primary and the secondary VDisk, and the relationship enters the Idle
state.

Example 12-11 Stopping a stand-alone Metro Mirror relationship while enabling access to the secondary VDisk
IBM_2145:admin>svctask stoprcrelationship -access MMRel3
IBM_2145:admin>svcinfo lsrcrelationship MMRel3
id 29
name MMRel3
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 29
master_vdisk_name VDisk3M
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 10
aux_vdisk_name VDisk3A
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync

Stopping a Metro Mirror consistency group


In Example 12-12, we stop the Metro Mirror consistency group without specifying the -access
flag. This means that the consistency group enters the consistent stopped state.

Example 12-12 Stopping a Metro Mirror consistency group


IBM_2145:admin>svctask stoprcconsistgrp MMCG1
IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
primary master
state consistent_stopped
relationship_count 2
freeze_time 2004/11/30/22/48/32
status online



sync in_sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1

If afterwards we want to enable access (write I/O) to the secondary VDisk, we can reissue the
svctask stoprcconsistgrp command, specifying the -access flag, and the consistency group
transitions to the Idle state, as shown in Example 12-13.

Example 12-13 Stopping a Metro Mirror consistency group and enabling access to the secondary
IBM_2145:admin>svctask stoprcconsistgrp -access MMCG1
IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1

Restarting a Metro Mirror relationship in the Idle state


When restarting a Metro Mirror relationship in the Idle state, we must specify the copy
direction. If any updates have been performed on either the master or the auxiliary VDisk,
consistency will be compromised while the background copy is performed to resynchronize
the Metro Mirror relationship. We must therefore specify the -force flag to start the
relationship; otherwise, the command will fail, as shown in Example 12-14.

Example 12-14 Restarting a Metro Mirror relationship after updates in the idle state
IBM_2145:admin>svctask startrcrelationship -primary master MMRel3
CMMVC5978E The operation was not performed because the relationship is not synchronised.
IBM_2145:admin>svctask startrcrelationship -primary master -force MMRel3
IBM_2145:admin>svcinfo lsrcrelationship MMRel3
id 29
name MMRel3
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 29
master_vdisk_name VDisk3M
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 10
aux_vdisk_name VDisk3A
primary master
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50



progress 99
freeze_time
status online
sync

Restarting a Metro Mirror consistency group in the idle state


When restarting a Metro Mirror consistency group in the Idle state, we must specify the copy
direction. If any updates have been performed on either the master or the auxiliary VDisk in
any of the Metro Mirror relationships in the consistency group, then consistency will be
compromised while the background copy is performed to resynchronize the Metro Mirror
relationships. Therefore, we must issue the -force flag to start the consistency group;
otherwise, the command will fail. In Example 12-15, we change the copy direction by
specifying the auxiliary VDisks to be the primaries.

Example 12-15 Restarting a Metro Mirror consistency group while changing the copy direction.
IBM_2145:admin>svctask startrcconsistgrp -primary aux MMCG1
IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
primary aux
state consistent_synchronised
relationship_count 2
freeze_time
status online
sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1

Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the consistent synchronised state, we can change the
copy direction for the relationship using the command svctask switchrcrelationship,
specifying the desired primary VDisk. If the VDisk specified as the primary is already the
primary when the command is issued, the command has no effect. In Example 12-16, we
change the copy direction for the stand-alone Metro Mirror relationship, specifying the
auxiliary VDisk to be the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisk which transitions from primary to secondary, since all I/O will be inhibited to that
VDisk when it becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcrelationship command.

Example 12-16 Switching the copy direction for a Metro Mirror relationship
IBM_2145:admin>svcinfo lsrcrelationship MMRel3
id 29
name MMRel3
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 29
master_vdisk_name VDisk3M



aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 10
aux_vdisk_name VDisk3A
primary master
consistency_group_id
consistency_group_name
state consistent_synchronised
bg_copy_priority 50
progress
freeze_time
status online
sync
IBM_2145:admin>svctask switchrcrelationship -primary aux MMRel3
IBM_2145:admin>svcinfo lsrcrelationship MMRel3
id 29
name MMRel3
master_cluster_id 00000200626006C8
master_cluster_name SVC1
master_vdisk_id 29
master_vdisk_name VDisk3M
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
aux_vdisk_id 10
aux_vdisk_name VDisk3A
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronised
bg_copy_priority 50
progress
freeze_time
status online
sync

Switching copy direction for a Metro Mirror consistency group


When a Metro Mirror consistency group is in the consistent synchronised state, we can
change the copy direction for the consistency group using the command svctask
switchrcconsistgrp, specifying the desired primary. If the primary specified when issuing the
command is already the primary, the command has no effect. In Example 12-17, we
change the copy direction for the Metro Mirror consistency group, specifying the auxiliary to
be the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisks which transitions from primary to secondary, since all I/O will be inhibited when
they become the secondary. Therefore, careful planning is required prior to using the
svctask switchrcconsistgrp command.

Example 12-17 Switching the copy direction for a Metro Mirror consistency group
IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
primary master



state consistent_synchronised
relationship_count 2
freeze_time
status online
sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1
IBM_2145:admin>svctask switchrcconsistgrp -primary aux MMCG1
IBM_2145:admin>svcinfo lsrcconsistgrp MMCG1
id 253
name MMCG1
master_cluster_id 00000200626006C8
master_cluster_name SVC1
aux_cluster_id 0000020062C00298
aux_cluster_name SVC2
primary aux
state consistent_synchronised
relationship_count 2
freeze_time
status online
sync
RC_rel_id 2
RC_rel_name MMRel2
RC_rel_id 3
RC_rel_name MMRel1

12.4 Metro Mirror scenario using the GUI


In the following scenario, we want to set up Metro Mirror for the following VDisks from SVC1
to SVC2 at the secondary site.
• VDisk1 - database files
• VDisk2 - database log files
• VDisk3 - application files

Since data consistency is needed across VDisk1 and VDisk2, we create a consistency group
to handle Metro Mirror of VDisk1 and VDisk2. Because, in this scenario, the application files
are independent of the database, we create a stand-alone Metro Mirror relationship for
VDisk3. The Metro Mirror setup is illustrated in Figure 12-7 on page 445.



Figure 12-7 Metro Mirror scenario using the GUI. Consistency Group 1 contains MM_Relationship 1 (master VDisk1M to auxiliary VDisk1A) and MM_Relationship 2 (master VDisk2M to auxiliary VDisk2A); MM_Relationship 3 (master VDisk3M to auxiliary VDisk3A) is stand-alone. The masters reside on SVC1 at Site1 and the auxiliaries on SVC2 at Site2.

Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, the following steps must be performed:


• Create SVC partnership between SVC1 and SVC2, on both SVC clusters
  – bandwidth 300 MB/s
• Create a Metro Mirror consistency group
  – name MMCG1
• Create the Metro Mirror relationship for VDisk1M
  – master VDisk1M
  – auxiliary VDisk1A
  – auxiliary SVC cluster SVC2
  – name MMRel1
  – consistency group MMCG1
• Create the Metro Mirror relationship for VDisk2M
  – master VDisk2M
  – auxiliary VDisk2A
  – auxiliary SVC cluster SVC2
  – name MMRel2
  – consistency group MMCG1
• Create the Metro Mirror relationship for VDisk3M
  – master VDisk3M
  – auxiliary VDisk3A
  – auxiliary SVC cluster SVC2
  – name MMRel3

In the following section, each step is carried out using the GUI.



Creating an SVC partnership between SVC1 and SVC2, on both clusters
To create a Metro Mirror partnership between the SVC clusters using the GUI, we launch the
SVC GUI for SVC1. Then we select Manage Copy Services and click Metro Mirror Cluster
Partnership, as shown in Figure 12-8.

Figure 12-8 Selecting Metro Mirror Cluster Partnership on SVC1

To confirm that we want to create a Metro Mirror SVC cluster partnership, we click Create, as
shown in Figure 12-9 on page 447.



Figure 12-9 Confirming that a Metro Mirror partnership is to be created.

In Figure 12-10, available SVC cluster candidates are listed, which in our case is SVC2. We
select SVC2 and specify the available bandwidth for background copy, then click OK.

Figure 12-10 Selecting SVC partner and specifying bandwidth for background copy



In the resulting window shown in Figure 12-11, the created Metro Mirror cluster partnership is
shown as partially configured.

Figure 12-11 Metro Mirror cluster partnership is partially configured.

To fully configure the Metro Mirror cluster partnership, we must carry out the same steps on
SVC2 as on SVC1; for brevity, only the last two windows are shown in the following figures.

Launching the SVC GUI for SVC2, we select SVC1 for the Metro Mirror cluster partnership,
specify the available bandwidth for background copy, and click OK, as shown in
Figure 12-12 on page 449.



Figure 12-12 Selecting SVC partner and specify bandwidth for background copy.

In the resulting window shown in Figure 12-13 the created Metro Mirror cluster partnership is
now fully configured.

Figure 12-13 Metro Mirror cluster partnership is fully configured.



Creating a Metro Mirror consistency group
To create the consistency group to be used for the Metro Mirror relationships for the VDisks
with the database and log files we select Manage Copy Services and click Metro Mirror
Consistency Groups, as shown in Figure 12-14.

Figure 12-14 Selecting Metro Mirror Consistency Groups

As shown in Figure 12-15 on page 451 we click Bypass Filter to continue to the next window.



Figure 12-15 Bypass the filter

To start the creation process we select Create Consistency Group from the scroll menu and
click Go, as shown in Figure 12-16.

Figure 12-16 Create a consistency group



As shown in Figure 12-17, we are presented with an overview of the steps in the process of
creating a consistency group; we click Next to proceed.

Figure 12-17 Overview of the Metro Mirror consistency group creation



As shown in Figure 12-18 we specify the consistency group name and whether it is to be
used for inter- or intracluster relationships. In our scenario we select intercluster and click
Next.

Figure 12-18 Specifying consistency group name and type.

As shown in Figure 12-19 on page 454, there are currently no defined Metro Mirror
relationships to be included in the Metro Mirror consistency group (since we have not defined
any at this point), so we click Next to proceed.



Figure 12-19 There are no defined Metro Mirror relationships to be added.

As shown in Figure 12-20, we verify the settings for the consistency group and click Finish to
create the Metro Mirror relationship group.

Figure 12-20 Verifying the settings for the Metro Mirror consistency group.



When the Metro Mirror consistency group is created we are returned to the list of defined
consistency groups shown in Figure 12-21.

Figure 12-21 Viewing Metro Mirror consistency groups

Creating the Metro Mirror relationships for VDisk1M and VDisk2M


To create the Metro Mirror relationships for VDisk1M and VDisk2M we select Manage Copy
Services and click Metro Mirror Cluster Relationships, as shown in Figure 12-22 on
page 456.



Figure 12-22 Selecting Metro Mirror Relationships

As shown in Figure 12-23, we click Bypass Filter to proceed.

Figure 12-23 Bypass the filter



To start the creation process we select Create a Relationship from the scroll menu and click
Go, as shown in Figure 12-24.

Figure 12-24 Create a relationship.

As shown in Figure 12-25 on page 458, we are presented with an overview of the steps in the
process of creating a relationship; click Next to proceed.



Figure 12-25 Overview of the Metro Mirror relationship creation process.

As shown in Figure 12-26, we name the Metro Mirror relationship (MMRel1 or MMRel2) and
specify that the relationship will be intercluster.

Figure 12-26 Naming the Metro Mirror relationship and selecting the auxiliary cluster



As shown in Figure 12-27, we are queried for a filter prior to presenting the
master VDisk candidates. We filter for VDisk* and click Next.

Figure 12-27 Defining filter for master VDisk candidates.

As shown in Figure 12-28 on page 460 we select VDisk1M to be the master VDisk of the
relationship, and click Next to proceed.



Figure 12-28 Selecting the master VDisk

As shown in Figure 12-29, we select VDisk1A as the auxiliary VDisk of the relationship,
and click Next to proceed.

Figure 12-29 Selecting the auxiliary VDisk



As shown in Figure 12-30 we select the relationship to be part of the previously created
consistency group and click Next to proceed.

Figure 12-30 Selecting the relationship to be part of a consistency group.

As shown in Figure 12-31 on page 462 we verify the Metro Mirror relationship and click
Finish to create it.



Figure 12-31 Verifying the Metro Mirror relationship.

As shown in Figure 12-32 we are returned to the Metro Mirror relationship list after creating
the Metro Mirror relationship.

Figure 12-32 Viewing Metro Mirror relationships.



We create the second Metro Mirror relationship, MMRel2, starting at Figure 12-24 on
page 457. After creation, all relationships are listed as shown in Figure 12-33.

Figure 12-33 Viewing the Metro Mirror relationships after creating MMRel2

Creating the stand-alone Metro Mirror relationship for VDisk3M


To create the stand-alone Metro Mirror relationship, we start the creation process by selecting
Create a Relationship from the scroll menu and clicking Go, as shown in Figure 12-34 on
page 464.



Figure 12-34 Create a Metro Mirror relationship

As shown in Figure 12-35, we are presented with an overview of the steps in the process of
creating a relationship; we click Next to proceed.

Figure 12-35 Overview of the Metro Mirror creation process.



As shown in Figure 12-36, we name the relationship (MMRel3), specify that it is an
intercluster relationship, and click Next.

Figure 12-36 Specifying the Metro Mirror relationship name and auxiliary cluster.

As shown in Figure 12-37 on page 466, we are queried for a filter prior to presenting the
master VDisk candidates. We filter for VDisk* and click Next.



Figure 12-37 Filtering VDisk candidates

As shown in Figure 12-38, we select VDisk3M to be the master VDisk of the relationship, and
click Next to proceed.

Figure 12-38 Selecting the master VDisk



As shown in Figure 12-39, we select VDisk3A as the auxiliary VDisk of the relationship,
and click Next to proceed.

Figure 12-39 Selecting the auxiliary VDisk

As shown in Figure 12-40 on page 468, we specify that the master and auxiliary VDisks are
already synchronized, since they are pristine, and no consistency group is selected; thus, we
are creating a stand-alone Metro Mirror relationship.



Figure 12-40 Selecting options for the Metro Mirror relationship

Note: To add a Metro Mirror relationship to a consistency group, it must be in the same
state as the consistency group.

Even if we intended to make the Metro Mirror relationship MMRel3 part of the consistency
group MMCG1, we are not offered the option, because the state of the relationship MMRel3 is
consistent stopped (we selected the synchronized option), while the state of the consistency
group MMCG1 is currently inconsistent stopped.

The status of the Metro Mirror relationships can be seen in Figure 12-42.

As shown in Figure 12-41 on page 469, we verify the Metro Mirror relationship and click
Finish to create it.



Figure 12-41 Verifying the Metro Mirror relationship.

As shown in Figure 12-42, we are returned to the Metro Mirror relationship list after creating
the Metro Mirror relationship.

Figure 12-42 Viewing Metro Mirror relationships



Executing Metro Mirror
Now that we have created the Metro Mirror consistency group and relationships, we are
ready to use the Metro Mirror relationships in our environment.

When performing Metro Mirror, the goal is to reach a consistent and synchronized state which
can provide redundancy for a dataset, in case a hardware failure occurs that affects the SAN
at the production site.

In the following section, we show how to stop and start the stand-alone Metro Mirror
relationship and the consistency group.

Starting a stand-alone Metro Mirror relationship


In Figure 12-43, we select the stand-alone Metro Mirror relationship MMRel3, and from the
scroll menu, we select Start Copy Process and click Go.

Figure 12-43 Starting a stand-alone Metro Mirror relationship

In Figure 12-44 on page 471, we select the appropriate options and click OK to start the
stand-alone Metro Mirror relationship MMRel3.



Figure 12-44 Selecting options and starting the copy process.

Because the Metro Mirror relationship was in the consistent stopped state and no updates
have been made on the primary VDisk, the relationship enters the consistent synchronized
state, as shown in Figure 12-45 on page 472.



Figure 12-45 Viewing Metro Mirror relationships

Starting a Metro Mirror consistency group


To start the Metro Mirror consistency group MMCG1, we select Metro Mirror Consistency
Groups shown in Figure 12-46 on page 473.



Figure 12-46 Selecting Metro Mirror Consistency Groups

As shown in Figure 12-47, we click Bypass Filter.

Figure 12-47 Bypass Filter



In Figure 12-48, we select the Metro Mirror consistency group MMCG1, and from the scroll
menu we select Start Copy Process and click Go.

Figure 12-48 Selecting start copy process

As shown in Figure 12-49 on page 475 we click OK to start the copy process.



Figure 12-49 Selecting options and starting the copy process.

As shown in Figure 12-50, we are returned to the Metro Mirror consistency group list and the
consistency group MMCG1 has transitioned to the Inconsistent Copying state.

Figure 12-50 Viewing Metro Mirror consistency groups.



Because the consistency group was in the inconsistent stopped state it enters the
inconsistent copying state until the background copy has completed for all relationships in the
consistency group. Upon completion of the background copy for all relationships in the
consistency group, it enters the Consistent Synchronized state.

Monitoring background copy progress


To monitor the background copy progress, we can list either all Metro Mirror relationships or
a specific Metro Mirror relationship, as shown in Figure 12-51.

Figure 12-51 Viewing Metro Mirror relationships.

Note: Setting up SNMP traps for the SVC enables automatic notification when Metro
Mirror consistency groups or relationships change state.

Stopping a stand-alone Metro Mirror relationship


To stop a Metro Mirror relationship while enabling access (write I/O) to both the primary and
the secondary VDisk, we select the relationship and Stop Copy Process from the scroll menu
and click Go, as shown in Figure 12-52 on page 477.



Figure 12-52 Stopping a stand-alone Metro Mirror relationship.

As shown in Figure 12-53 we tick the enable access option and click OK to stop the Metro
Mirror relationship.

Figure 12-53 Enable access to the secondary VDisk while stopping the relationship



As shown in Figure 12-54, the Metro Mirror relationship transitions to the Idle state when
stopped while enabling access to the secondary VDisk.

Figure 12-54 Viewing the Metro Mirror relationships

Stopping a Metro Mirror consistency group


As shown in Figure 12-55 on page 479, we select the Metro Mirror consistency group and
Stop Copy Process from the scroll menu and click Go.



Figure 12-55 Selecting the Metro Mirror consistency group to be stopped.

As shown in Figure 12-56, we click OK without specifying access to the secondary VDisk.

Figure 12-56 Stopping the consistency group, without enabling access to the secondary VDisk.



As shown in Figure 12-57, the consistency group enters the consistent stopped state when
stopped without enabling access to the secondary.

Figure 12-57 Viewing Metro Mirror consistency groups.

If afterwards we want to enable access (write I/O) to the secondary VDisks, we can reissue
Stop Copy Process, this time specifying that access to the secondary VDisks is to be enabled.

As shown in Figure 12-58 on page 481, we select the Metro Mirror consistency group and Stop
Copy Process from the scroll menu and click Go.



Figure 12-58 Selecting the Metro Mirror consistency group.

As shown in Figure 12-59, we tick the Enable access box and click OK.

Figure 12-59 Enabling access to the secondary VDisks



When applying the enable access option, the consistency group transitions to the Idle state,
as shown in Figure 12-60.

Figure 12-60 Viewing the Metro Mirror consistency group is in the Idle state

Restarting a Metro Mirror relationship in the Idle state


When restarting a Metro Mirror relationship in the Idle state, we must specify the copy
direction. If any updates have been performed on either the master or the auxiliary
VDisk, consistency will be compromised while the background copy is performed to
resynchronize the Metro Mirror relationship; therefore, the Force option must be ticked.

As shown in Figure 12-61 on page 483 we select the Metro Mirror relationship and Start
Copy Process from the scroll menu and click Go.



Figure 12-61 Starting a stand-alone Metro Mirror relationship in the Idle state.

As shown in Figure 12-62 on page 484, we tick the Force option, since write I/O has been
performed while in the Idle state, select the copy direction by defining the master
VDisk as the primary, and click OK.



Figure 12-62 Starting the copy process.

As shown in Figure 12-63 on page 485, the Metro Mirror relationship enters the Inconsistent
Copying state. When the background copy is complete, the relationship transitions to the
Consistent Synchronized state.



Figure 12-63 Viewing the Metro Mirror relationships.

Restarting a Metro Mirror consistency group in the idle state


When restarting a Metro Mirror consistency group in the Idle state, we must specify the copy
direction. If any updates have been performed on either the master or the auxiliary VDisk in
any of the Metro Mirror relationships in the consistency group, then consistency will be
compromised while the background copy is performed to resynchronize the Metro Mirror
relationships. Therefore, we must tick the Force option to start the copy process; otherwise,
the command will fail.

As shown in Figure 12-64 on page 486, we select the Metro Mirror consistency group and
Start Copy Process from the scroll menu and click Go.



Figure 12-64 Starting the copy process.

As shown in Figure 12-65 we tick the Force option, and change the copy direction by
specifying the auxiliary VDisks to be the primaries.

Figure 12-65 Starting the copy process for the consistency group.



When the background copy completes, the Metro Mirror consistency group enters the Consistent synchronized state shown in Figure 12-66.

Figure 12-66 Viewing Metro Mirror consistency groups.

Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the consistent synchronized state, we can change the
copy direction for the relationship. In Figure 12-67 on page 488 we select the relationship
MMRel3 and Switch Copy Direction from the scroll menu and click Go.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk which transitions from primary to secondary, since all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Metro Mirror relationship.
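A hedged CLI sketch of the same switch for the relationship MMRel3 used in this example (the flag follows the svctask switchrcrelationship syntax):

svctask switchrcrelationship -primary aux MMRel3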



Figure 12-67 Selecting the relationship for which the copy direction is to be changed

In Figure 12-68 on page 489 we see that the current primary VDisk is the master, so to
change the copy direction for the stand-alone Metro Mirror relationship we specify the
auxiliary VDisk to be the primary, and click OK.



Figure 12-68 Selecting the primary VDisk to switch the copy direction.

The copy direction is now switched and we are returned to the Metro Mirror relationship list,
where we see that the copy direction has been switched as shown in Figure 12-69.

Figure 12-69 Viewing Metro Mirror relationship, after changing the copy direction



Switching copy direction for a Metro Mirror consistency group
When a Metro Mirror consistency group is in the consistent synchronized state, we can change the copy direction for the Metro Mirror consistency group. In Figure 12-70 we select the consistency group MMCG1 and Switch Copy Direction from the scroll menu and click Go.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks which transition from primary to secondary, since all I/O will be inhibited when they become the secondary. Therefore, careful planning is required prior to switching the copy direction.
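The corresponding CLI sketch (hedged; MMCG1 is the group selected in this example):

svctask switchrcconsistgrp -primary master MMCG1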

Figure 12-70 Selecting the consistency group for which the copy direction is to be changed

In Figure 12-71 on page 491 we see that the primary VDisks are currently the auxiliaries, so to change the copy direction for the Metro Mirror consistency group we specify the master VDisks to be the primaries, and click OK.



Figure 12-71 Selecting the primary VDisk to switch the copy direction.

The copy direction is now switched and we are returned to the Metro Mirror consistency group list, where we see that the copy direction has been switched; as shown in Figure 12-72 on page 492, the master is now the primary.



Figure 12-72 Viewing Metro Mirror consistency groups, after changing the copy direction.



Chapter 13. Migration to the SAN Volume Controller
This chapter explains how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure using the SVC. We also explain how the SVC can be phased out of a virtualized storage infrastructure after, for example, a trial period.

13.1 Migration overview
The SVC allows the mapping of VDisk extents to Managed Disk (MDisk) extents to be
changed, without interrupting host access to the VDisk. This functionality is utilized when
performing VDisk migrations, and can be performed for any VDisk defined on the SVC.

This functionality can be used for:


򐂰 Redistribution of VDisks, and thereby the workload, within an SVC cluster across back-end storage.
– Moving workload onto newly installed storage.
– Moving workload off old/failing storage, ahead of decommissioning it.
– Moving workload to re-balance a changed workload.
򐂰 Migrating data from legacy backend storage to SVC managed storage.
򐂰 Migrating data from one backend controller to another using the SVC as a data block
mover and afterwards removing the SVC from the SAN.
򐂰 Migrating data from Managed Mode back into Image mode prior to removing the SVC
from a SAN.

13.2 Migration operations


Migration can be performed at either the VDisk or the extent level depending on the purpose
of the migration. The different supported migration activities are:
򐂰 Migrate extents within a Managed Disk Group (MDG), redistributing the extents of a given
VDisk on the MDisks in the MDG.
򐂰 Migrate extents off an MDisk which is to be removed from the MDG (to other MDisks in the
MDG).
򐂰 Migrate a VDisk from one MDG to another MDG.
򐂰 Migrate a VDisk to change the virtualization type of the VDisk to Image.
򐂰 Migrate a VDisk between I/O Groups.

13.2.1 Migrating multiple extents (within an MDG)


A number of VDisk extents can be migrated at once using the migrateexts command.
Extents are allocated on the destination MDisk using the algorithm described in 3.6.6,
“Allocation of free extents” on page 48.

When executed, this command migrates a given number of extents from the source MDisk
where the extents of the VDisk specified resides, to a defined target MDisk which must be
part of the same MDG.

The number of migration threads which will be used in parallel can be specified from 1 to 4,
with 2 as the default.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated while the MDisk access mode transitions from image to managed.

The syntax of the CLI command is:


svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents -target
target_mdisk_id | target_mdisk_name [-threads number_of_threads] -vdisk vdisk_id
| vdisk_name
򐂰 -vdisk: Specifies the VDisk id or name to which the extents belong.
򐂰 -source: Specifies the source Managed Disk id or name on which the extents currently
reside.



򐂰 -exts: Specifies the number of extents to migrate.
򐂰 -target: Specifies the target MDisk id or name onto which the extents are to be migrated.
򐂰 -threads: Optional parameter that specifies the number of threads to use while migrating
these extents, from 1 to 4.
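As a minimal sketch (the object names are hypothetical), the following invocation migrates 16 extents belonging to VDisk0 from MDisk7 to MDisk0, using two threads:

svctask migrateexts -vdisk VDisk0 -source MDisk7 -target MDisk0 -exts 16 -threads 2

A real instance of this command is shown in 13.5.3, “Migrating the VDisk from image mode to managed mode” on page 510.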

13.2.2 Migrating extents off an MDisk which is being deleted


When an MDisk is deleted from an MDG using the rmmdisk -force command, any occupied
extents on the MDisk are migrated off the MDisk (to other MDisks in the MDG) prior to its
deletion.

In this case, the extents that need to be migrated are moved onto the set of MDisks which are not being deleted, and the extents are distributed according to the algorithm described in 3.6.6, “Allocation of free extents” on page 48. This holds true even if multiple MDisks are being removed from the MDG at the same time; MDisks which are being removed are not candidates for supplying free extents to the allocation algorithm.

If a VDisk uses one or more extents which need to be moved as a result of a delete mdisk
command, then the virtualization type for that VDisk is set to striped (if it previously was
sequential or image).

If the MDisk is operating in image mode, the MDisk transitions to managed mode while the extents are being migrated, and upon deletion it transitions to unmanaged.

The syntax of the CLI command is:


svctask rmmdisk -mdisk mdisk_id_list | mdisk_name_list [-force]
mdisk_group_id | mdisk_group_name
򐂰 -mdisk: Specifies one, or more MDisk ids or names to delete from the group.
򐂰 -force: Migrate any data that belongs to other VDisks before removing the MDisk

Note: If the -force flag is not supplied, and VDisks occupy extents on one or more of the MDisks specified, the command will fail.

When the -force flag is supplied, and VDisks exist that are made from extents on one or more of the MDisks specified, all extents on the MDisks will be migrated to the other MDisks in the MDG, provided that there are enough free extents in the MDG. The deletion of the MDisks is postponed until all extents are migrated, which may take some time. If there are not enough free extents in the MDG, the command will fail.

When the -force flag is supplied, the command will complete asynchronously.
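For example, assuming an MDG named MDiskgrp0 from which MDisk5 is to be removed while its occupied extents are migrated to the remaining MDisks in the group (the names are hypothetical):

svctask rmmdisk -mdisk MDisk5 -force MDiskgrp0

The resulting extent migrations can be monitored with the svcinfo lsmigrate command described in 13.2.6, “Monitoring the migration progress” on page 498.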

13.2.3 Migrating a VDisk between MDGs


An entire VDisk can be migrated from one MDG to another MDG using the migratevdisk
command. A VDisk can be migrated between MDGs regardless of the virtualization type
(image, striped or sequential), though it will transition to the virtualization type of striped.
The syntax of the CLI command is:
svctask migratevdisk -mdiskgrp mdisk_group_id | mdisk_group_name [-threads
number_of_threads] -vdisk vdisk_id | vdisk_name
򐂰 -vdisk: Specifies the VDisk id or name to migrate into another MDG.
򐂰 -mdiskgrp: Specifies the target MDG id or name.
򐂰 -threads: Optional parameter that specifies the number of threads to use while migrating
these extents, from 1 to 4.
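A hedged sketch, using the names from Figure 13-1 on page 496: to move VDisk V3 into MDG2 with two migration threads:

svctask migratevdisk -vdisk V3 -mdiskgrp MDG2 -threads 2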



Figure 13-1 on page 496 illustrates how the VDisk V3 is migrated from MDG1 to MDG2.

Important: In order for the migration to be “legal”, the source and destination MDG must have the same extent size.

Figure 13-1 VDisk migration between MDGs (the figure shows I/O Group 0 with nodes SVC1 IO-Grp0 Node 1 and Node 2, VDisks V1 through V6, MDGs 1 through 3, and MDisks M1 through M7 provided by RAID Controller A and RAID Controller B)

Extents are allocated to the migrating VDisk, from the set of MDisks in the target MDG using
the extent allocation algorithm described in 3.6.6, “Allocation of free extents” on page 48.

The process can be prioritized by specifying the number of threads to use while migrating,
using only one thread will put the least background load on the system. If a large number of
extents are being migrated, you can specify the number of threads which will be used in
parallel (from 1 to 4).

For the duration of the move, the offline rules described in 3.6.8, “I/O handling and offline conditions” on page 49, apply to both MDGs. Therefore, referring to Figure 13-1, if any of the MDisks M4, M5, M6, or M7 go offline, then VDisk V3 goes offline. If MDisk M4 goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated while the MDisk access mode transitions from image to managed.

For the duration of the move, the VDisk is listed as being a member of the original MDG. For
the purposes of configuration, the VDisk moves to the new MDG instantaneously at the end
of the migration.



13.2.4 Migrating the VDisk to image mode
The facility to migrate a VDisk to an image mode VDisk can be combined with the ability to
migrate between MDGs. The source for the migrate can be a managed mode or an image
mode VDisk. This leads to four possibilities.
򐂰 Migrate Image Mode to Image mode within a MDG
򐂰 Migrate Managed Mode to Image mode within a MDG
򐂰 Migrate Image Mode to Image mode between MDGs
򐂰 Migrate Managed Mode to Image mode between MDGs

The following rules apply when migrating to image mode:
򐂰 The destination MDisk must be greater than or equal to the size of the VDisk.
򐂰 The MDisk specified as the target must be in an unmanaged state at the time the
command is run.
򐂰 If the migration is interrupted by a cluster recovery then the migration will resume after the
recovery completes.
򐂰 If the migration involves moving between Managed Disk groups then the VDisk behaves
as described in “Migrating a VDisk between MDGs” on page 495.

The CLI command is:


svctask migratetoimage -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id |
name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]

򐂰 -vdisk: Specifies the name or id of the source VDisk to be migrated.


򐂰 -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk
must be unmanaged and large enough to contain the data of the disk being migrated).
򐂰 -mdiskgrp: Specifies the MDG into which the MDisk must be placed once the migration
has completed.
򐂰 -threads: Optional parameter that specifies the number of threads to use while migrating
these extents, from 1 to 4.
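As a minimal sketch with hypothetical names, the following migrates VDisk0 onto the unmanaged MDisk MDisk8, placing MDisk8 into MDiskgrp0 when the migration completes:

svctask migratetoimage -vdisk VDisk0 -mdisk MDisk8 -mdiskgrp MDiskgrp0

Real instances of this command appear in 13.5.4, “Migrating the VDisk from managed mode to image mode” on page 512, and in 13.5.5 on page 514.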

Regardless of the mode in which the VDisk starts, it is reported as being in managed mode during the migration. Also, both of the MDisks involved are reported as being in image mode during the migration.

At completion of the command the VDisk will be classified as an image mode VDisk.

13.2.5 Migrating a VDisk between I/O groups


A VDisk can be migrated between I/O groups using the svctask chvdisk command. This is
only supported if the VDisk is not in a FlashCopy Mapping or Remote Copy relationship.

In order to move a VDisk between I/O groups, the cache must be flushed. The SVC will
attempt to destage all write data for the VDisk from the cache during the I/O group move. This
flush will fail if data has been pinned in the cache for any reason (such as an MDG being offline). By default, this will cause the migration between I/O groups to fail, but this behavior
can be overridden using the -force flag. If the -force flag is used and if the SVC is unable to
destage all write data from the cache, then the result is that the contents of the VDisk are
corrupted by the loss of the cached data. During the flush, the VDisk operates in cache
write-through mode.

You must quiesce host I/O before the migration for two reasons.
1. If there is significant data in cache which takes a long time to destage then the command
line will time out.



2. SDD vpaths associated with the VDisk are deleted before the VDisk move takes place in
order to avoid data corruption. So, data corruption could occur if I/O is still ongoing at a
particular LUN ID when it is re-used for another VDisk.

When migrating a VDisk between I/O Groups, the user does not have the ability to specify the
preferred node. The preferred node is assigned by the SVC.

The CLI command is:


svctask chvdisk [-iogrp io_group_id|io_group_name [-force]] [-rate throttle_rate [-unitmb]]
[-name new_name_arg] [-force ] vdisk_name | vdisk_id
򐂰 -iogrp: Optionally specifies a new I/O Group to move the VDisk to, either by ID or name.
-force can be used together with this parameter in order to force the removal of the VDisk
from the I/O group.
򐂰 -rate: Optionally sets I/O governing rates for the VDisk. The default units are IO/S, but
can be used in conjunction with -unitmb (-mb) to specify it in terms of MB/s
򐂰 -name: Optionally specifies a new name to assign to the VDisk
򐂰 -force: Specifies that the I/O group be changed without completing the destage of the
cache. This can corrupt the contents of the VDisk.
򐂰 vdisk id: entered as the last entry on the CLI and specifies the VDisk name to modify,
either by id or by name.

Notes: The three optional items are mutually exclusive. So, to change the name and modify the I/O Group would require two invocations of the command.

A VDisk which is a member of a FlashCopy or Remote Copy relationship cannot be moved to another I/O Group, and this cannot be overridden by using the force flag.
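Assuming host I/O has been quiesced as described above, a hedged sketch of moving a hypothetical VDisk0 to I/O group io_grp1, followed by a separate invocation to rename it (two invocations are needed because the optional items are mutually exclusive):

svctask chvdisk -iogrp io_grp1 VDisk0
svctask chvdisk -name VDisk0_new VDisk0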

13.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations use the CLI command:
svcinfo lsmigrate

To determine the extent allocation of MDisks and VDisks use the following commands:
򐂰 To list the VDisk IDs and the corresponding number of extents the VDisks are occupying
on the queried MDisk use the CLI command:
svcinfo lsmdiskextent <mdiskname>
򐂰 To list the MDisk IDs and the corresponding number of extents the queried VDisks are
occupying on the listed MDisks use the CLI command:
svcinfo lsvdiskextent <vdiskname>

To list the number of free extents available on an MDisk, use the CLI command:
svcinfo lsfreeextents <mdiskname>
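For example, before migrating extents onto a hypothetical MDisk0, one might check its free extents, and then follow the running migration:

svcinfo lsfreeextents MDisk0
svcinfo lsmigrate

Section 13.5.3 on page 510 shows the extent-listing commands used against a live migration.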

After a migration has been started, there is no way for the user to stop the migration. The
migration runs to completion unless it is stopped or suspended by an error condition, or if the
VDisk being migrated is deleted.



13.3 Functional overview of migration
This section describes the functional view of data migration.

13.3.1 Parallelism
Some of the activities can be carried out in parallel.

Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of
migration activities:
򐂰 Migrate multiple extents
򐂰 Migrate between MDGs
򐂰 Migrate off deleted MDisk
򐂰 Migrate to Image Mode

These high-level migration tasks operate by scheduling single extent migrations, as described below.

Up to 256 single extent migrations can run concurrently. This number includes single extent
migrates which result from the operations listed above, and single extent operations
requested by the user.

The Migrate Multiple Extents and Migrate Between MDGs commands support a flag which allows the user to specify the number of “threads” to use, between 1 and 4. This parameter affects the number of extents which will be concurrently migrated for that migration operation.
Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that
operation, subject to other resource constraints.

Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not
take into account whether the MDisk is the source or the destination. If more than four single
extent migrates are scheduled for a particular MDisk, further migrations are queued pending
the completion of one of the currently running migrations.

Error handling
If a medium error occurs on a read from the source and the destination's medium error table is full, or if an I/O error occurs repeatedly on a read from the source, or if the MDisks go offline repeatedly, the migration is suspended or stopped.

The migration will be suspended if any of the following conditions exist, otherwise it will be
stopped:
򐂰 The migration is between Managed Disk Groups and has progressed beyond the first
Extent. These migrations are always suspended rather than being stopped because
stopping a migration in progress would leave a VDisk spanning MDGs which is not a valid
configuration other than during a migration.
򐂰 The migration is a Migrate to Image Mode (even if it is processing the first extent). These
migrations are always suspended rather than being stopped because stopping a migration
in progress would leave the VDisk in an inconsistent state.
򐂰 A Migration is waiting for a metadata checkpoint which has failed.



If a migration is stopped and any migrations are queued awaiting the use of the MDisk for migration, these migrations are now considered. If, however, a migration is suspended, then the migration continues to use resources, and so another migration is not started.

The SVC will attempt to resume the migration if the error log entry is marked as fixed using
the CLI or the GUI. If the error condition no longer exists, then the migration will proceed. The
migration may resume on a different node to the one which started the migration.

13.3.2 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this
description, this unit is referred to as a chunk.

The algorithm followed to migrate an extent is:


1. Pause all I/O on the source MDisk extent on all nodes in the SVC cluster (pause means: queue all new I/O requests in the virtualization layer in the SVC and wait for all outstanding requests to complete). I/O to other extents is unaffected.
2. Unpause I/O on all of the source MDisk extent, apart from writes to the specific chunk which is being migrated. Writes to the extent are mirrored to the source and destination as described below.
3. On the node performing the migrate, for each 256K section of the chunk:
– Synchronously read 256K from the source.
– Synchronously write 256K to the target.
4. Once the entire chunk has been copied to the destination the process repeats for the next
chunk within the extent.
5. Once the entire extent has been migrated, pause all I/O to the extent being migrated,
checkpoint the extent move to on-disk metadata, redirect all further reads to the
destination, and stop mirroring writes (writes only to destination).
6. If the checkpoint fails then the I/O is unpaused.

During the migration, the extent can be divided into three regions as shown in Figure 13-2 on
page 501. Region B is the chunk which is being copied. Writes to region B are queued
(paused) in the virtualization layer waiting for the chunk to be copied. Reads to Region A are
directed to the destination since this data has already been copied. Writes to Region A are
written to both the source and the destination extent in order to maintain the integrity of the
source extent. Reads and writes to Region C are directed to the source because this region
has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack (such as cache
destages) are held back. If the back-end storage is operating with significant latency, then it is
possible that this operation may take some time (minutes) to complete. This can have an
adverse affect on the overall performance of the SVC. To avoid this, if the migration of a
particular chunk is still active after one minute, then the migration is paused for 30 seconds.
During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration
of the chunk is resumed. This algorithm is repeated as many times as necessary to complete
the migration of the chunk.



Figure 13-2 Migrating an extent (the figure shows extent N divided into Region A, already copied: reads and writes go to the destination; Region B, the 16 MB chunk being copied: reads and writes are paused; and Region C, yet to be copied: reads and writes go to the source)

SVC guarantees read stability during data migrations even if the data migration is stopped by
a node reset or a cluster shutdown. This is possible because SVC disallows writes on all
nodes to the area being copied, and upon a failure the extent migration is restarted from the
beginning.

13.4 Migrating data from an image mode VDisk


This section describes how to migrate data from an image mode VDisk to a VDisk.

13.4.1 Image mode VDisk migration concept


Firstly, we describe the concepts associated with this operation.

MDisk modes
There are three different MDisk modes:
1. Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged
MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will
not write to an MDisk which is in unmanaged mode except when it attempts to change the
mode of the MDisk to one of the other modes.
2. Image Mode MDisk
Image Mode provides a direct block-for-block translation from the MDisk to the VDisk with
no virtualization. Image Mode VDisks have a minimum size of one block (512 bytes) and
always occupy at least one extent. An Image Mode MDisk is associated with exactly one
VDisk.
3. Managed Mode MDisk
Managed Mode MDisks contribute extents to the pool of extents available in the MDG.
Zero or more Managed Mode VDisks may use these extents.



Transitions between the different modes
The following state transitions can occur to an MDisk (see Figure 13-3).
1. Unmanaged mode to managed mode.
This occurs when an MDisk is added to an MDisk group. This makes the MDisk eligible for
the allocation of data and metadata extents.
2. Managed mode to unmanaged mode
This occurs when an MDisk is removed from an MDisk group.
3. Unmanaged mode to image mode
This occurs when an image mode MDisk is created on an MDisk which was previously
unmanaged. It also occurs when an MDisk is used as the target for a Migrate to Image
Mode.
4. Image mode to unmanaged mode.
There are two distinct ways in which this can happen.
– When an image mode VDisk is deleted. The MDisk which supported the VDisk
becomes unmanaged.
– When an image mode VDisk is migrated in image mode to another MDisk, the MDisk
which it is being migrated from remains in image mode until all data has been moved
off it. It then transitions to unmanaged mode.
5. Image mode to managed mode
This occurs when the image mode VDisk which is using the MDisk is migrated into
managed mode.
6. Managed mode to image mode is not possible.
There is no operation that will take an MDisk directly from managed mode to image mode.
This can be achieved by performing operations which convert the MDisk to unmanaged
mode and then to image mode.

Figure 13-3 Different states of an MDisk (the figure shows the states Not in group, Managed mode, Image mode, and Migrating to image mode, connected by the transitions listed above)



Image mode VDisks have the special property that the last extent in the VDisk can be a
partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode VDisk, the image mode disk must
first be converted into a managed mode disk. If the image mode disk has a partial last extent,
then this last extent in the image mode VDisk must be the first to be migrated. This migration
is handled as a special case.

After this special migration operation has occurred, the VDisk becomes a managed mode
VDisk and is treated in the same way as any other managed mode VDisk. If the image mode
disk does not have a partial last extent then no special processing is performed, the image
mode VDisk is simply changed into a managed mode VDisk, and is treated in the same way
as any other managed mode VDisk.

After data is migrated off a partial extent, there is no way to migrate data back onto the partial
extent.

13.4.2 Migration tips


You have several methods to migrate an image mode VDisk into a managed mode VDisk:
򐂰 If your image mode VDisk is in the same MDG as the MDisks on which you want to
migrate the extents, you can:
– Migrate a single extent. You have to migrate the last extent of the image mode VDisk
(number N-1).
– Migrate multiple extents.
– Migrate all the in-use extents from an MDisk, that is, migrate extents off an MDisk which is being deleted.
򐂰 If you have two MDGs, one for the image mode VDisk, and one for the managed mode
VDisks, you can migrate a VDisk from one MDG to another.

The recommended method is to have one MDG for all the image mode VDisks, and other
MDGs for the managed mode VDisks, and to use the migrate VDisk facility.

Do not forget to check that enough extents are available in the target MDG.
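A hedged sketch of this recommended approach, with hypothetical names: with the image mode VDisk VDisk0 in the group MDG_IMG and a managed target group MDG_MAIN, check the free capacity of the target and then migrate:

svcinfo lsmdiskgrp MDG_MAIN
svctask migratevdisk -vdisk VDisk0 -mdiskgrp MDG_MAIN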

13.5 Data migration from DS storage to SVC using the CLI


This scenario consists of the migration of one hdisk of an AIX host from an IBM TotalStorage
DS Family server to an SVC. Before the migration, the DS storage is connected directly to the
AIX host. After the migration, the DS storage is only connected to the SVC nodes. Figure 13-4
on page 504 shows the setup.



Figure 13-4 Data migration scenario with AIX host and DS storage (before the migration, the host attaches directly to RAID Controllers A and B through the SAN switch; after the migration, a host zone connects the host to the SVC nodes and a RAID zone connects the SVC nodes to the RAID controllers)

13.5.1 AIX host connected directly to the DS storage


The DS storage has two partitions:
򐂰 One partition for AIX host with one Logical disk of 2 GB (seen as 1742)
򐂰 One partition for SVC with seven logical disks

On the AIX host, the commands shown in Figure 13-5 on page 505 are issued.

Note: After unmount, varyoffvg, and exportvg, you can start the migration of the host to
SVC.



lspv
hdisk0 0051a2dac3069dbc rootvg
hdisk1 none None
hdisk2 0051a2da822b17b1 dmvg

lsdev -Cc disk


hdisk0 Available 10-60-00-0,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 10-60-00-1,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 20-58-01 1742 Disk Array Device

lsdev -Cc adapter


fcs1 Available 20-58 FC Adapter
fcs0 Available 10-68 FC Adapter

lspv hdisk2
PHYSICAL VOLUME: hdisk2 VOLUME GROUP: dmvg
PV IDENTIFIER: 0051a2da822b17b1 VG IDENTIFIER
0051a2da00004c00000000f18242facf
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 127 (2032 megabytes) VG DESCRIPTORS: 2
FREE PPs: 62 (992 megabytes) HOT SPARE: no
USED PPs: 65 (1040 megabytes)
FREE DISTRIBUTION: 25..00..00..11..26
USED DISTRIBUTION: 01..25..25..14..00

lsvg -o
dmvg
rootvg

lsvg -l dmvg
dmvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
dmlv jfs 64 64 1 open/syncd /dmfs
loglv00 jfslog 1 1 1 open/syncd N/A

Figure 13-5 AIX disk configuration before migration



On the SVC, the commands shown in Figure 13-6 are issued.

svcinfo lscontroller 0
id:0
type:4
WWN:200200A0B80F06F3
VLUN link count:8
degraded:no
peripheral dev type:0
peripheral qualifier:0
normal ACA:32
vendor id:IBM
product id low:1742
product id high:
product revision:0500
product serial number:
WWPN table:200200A0B80F06F4
path count:4
WWPN table:200300A0B80F06F4
path count:6

svcinfo lsmdisk -delim :


0:MDisk0:online:unmanaged:::3.0GB:
1:MDisk1:online:unmanaged:::4.0GB:
2:MDisk2:online:unmanaged:::10.0GB:
3:MDisk3:online:unmanaged:::50.0GB:
4:MDisk4:online:unmanaged:::60.0GB:
5:MDisk5:online:unmanaged:::89.0GB:
6:MDisk6:online:unmanaged:::256.3GB:

svcinfo lsnode -delim :


1:node1:10L3BIP:5005676801000050:online:0:io_grp0:yes:231304C33424950
2:node2:10L3BIT:5005676801000019:online:0:io_grp0:no:231304C33424954

Figure 13-6 SVC configuration before migration

13.5.2 SVC is added between the AIX host and the DS storage
Perform these tasks:
1. Change the partition definitions on the DS storage.
2. Delete hdisk2 in the AIX host by entering the following command:
rmdev -dl hdisk2 -R

The DS storage has one partition for the SVC with seven logical disks as before, plus a new
logical disk from the AIX partition (2 GB).

On the SVC
Complete these steps on the SVC:
1. Perform discovery of new MDisk (which was previously direct attached to the AIX). Enter:
svctask detectmdisk



The svcinfo command is issued as shown in Figure 13-7.

svcinfo lsmdisk
0:MDisk0:online:unmanaged:::3.0GB:
1:MDisk1:online:unmanaged:::4.0GB:
2:MDisk2:online:unmanaged:::10.0GB:
3:MDisk3:online:unmanaged:::50.0GB:
4:MDisk4:online:unmanaged:::60.0GB:
5:MDisk5:online:unmanaged:::89.0GB:
6:MDisk6:online:unmanaged:::256.3GB:
7:MDisk7:online:unmanaged:::2.0GB:

Figure 13-7 svcinfo command result

2. Create the MDiskgrp0 with the extent size set to 16 MB:


svctask mkmdiskgrp -name MDiskgrp0 -ext 16
3. Add two MDisks (0 and 1) in MDiskgrp0:
svctask addmdisk -mdisk MDisk0:MDisk1 MDiskgrp0
The svcinfo command output is displayed as shown in Figure 13-8.

svcinfo lsmdisk
0:MDisk0:online:managed:0:MDiskgrp0:3.0GB:
1:MDisk1:online:managed:0:MDiskgrp0:4.0GB:
2:MDisk2:online:unmanaged:::10.0GB:
3:MDisk3:online:unmanaged:::50.0GB:
4:MDisk4:online:unmanaged:::60.0GB:
5:MDisk5:online:unmanaged:::89.0GB:
6:MDisk6:online:unmanaged:::256.3GB:
7:MDisk7:online:unmanaged:::2.0GB:

Figure 13-8 svcinfo command output

4. Create the VDisk in image mode in MDiskgrp0 with MDisk7 and I/O group 0:
svctask mkvdisk -mdiskgrp MDiskgrp0 -iogrp 0 -vtype image -mdisk MDisk7 VDisk0
The output of svcinfo now is displayed as shown in Figure 13-9.

svcinfo lsvdisk
0:VDisk0:0:io_grp0:online:0:MDiskgrp0:2.0GB:image::::

svcinfo lsmdisk
0:MDisk0:online:managed:0:MDiskgrp0:3.0GB:
1:MDisk1:online:managed:0:MDiskgrp0:4.0GB:
2:MDisk2:online:unmanaged:::10.0GB:
3:MDisk3:online:unmanaged:::50.0GB:
4:MDisk4:online:unmanaged:::60.0GB:
5:MDisk5:online:unmanaged:::89.0GB:
6:MDisk6:online:unmanaged:::256.3GB:
7:MDisk7:online:image:0:MDiskgrp0:2.0GB:

Figure 13-9 VDisk in image mode on SVC



On the AIX host
The shutdown -Fr command was performed. You can obtain the same results by entering:
cfgmgr

Figure 13-10 shows the output of the lspv command.

lspv
hdisk0 0051a2dac3069dbc rootvg
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
hdisk15 none None
hdisk16 none None
hdisk17 none None
vpath0 0051a2da822b17b1 dmvg

Figure 13-10 lspv command output

Figure 13-11 shows the output of the lsdev -Cc disk command.

lsdev -Cc disk


hdisk0 Available 10-60-00-0,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 10-60-00-1,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk3 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk4 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk5 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk6 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk7 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk8 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk9 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk10 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk11 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk12 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk13 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk14 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk15 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk16 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
hdisk17 Available 10-68-01 IBM TotalStorage SAN Volume Controller device
vpath0 Available Data Path Optimizer Pseudo Device Driver

Figure 13-11 lsdev-Cc disk



Figure 13-12 shows the output of a number of AIX commands which are run on the attached
host at this stage.

lsdev -Cc adapter | grep fcs


fcs1 Available 20-58 FC Adapter
fcs0 Available 10-68 FC Adapter

datapath query adapter


Active Adapters :2
Adpt# Adapter Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 35 0 8 8
1 fscsi0 NORMAL ACTIVE 46 0 8 8

datapath query device


Total Devices : 1
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60056768018000028000000000000000
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi1/hdisk2 OPEN NORMAL 0 0
1 fscsi1/hdisk3 OPEN NORMAL 5 0
2 fscsi1/hdisk4 OPEN NORMAL 4 0
3 fscsi1/hdisk5 OPEN NORMAL 3 0
4 fscsi1/hdisk6 OPEN NORMAL 3 0
5 fscsi1/hdisk7 OPEN NORMAL 4 0
6 fscsi1/hdisk8 OPEN NORMAL 9 0
7 fscsi1/hdisk9 OPEN NORMAL 7 0
8 fscsi0/hdisk10 OPEN NORMAL 4 0
9 fscsi0/hdisk11 OPEN NORMAL 6 0
10 fscsi0/hdisk12 OPEN NORMAL 5 0
11 fscsi0/hdisk13 OPEN NORMAL 6 0
12 fscsi0/hdisk14 OPEN NORMAL 6 0
13 fscsi0/hdisk15 OPEN NORMAL 3 0
14 fscsi0/hdisk16 OPEN NORMAL 8 0
15 fscsi0/hdisk17 OPEN NORMAL 8 0
lsvpcfg
vpath0 (Avail pv dmvg) 60056768018000028000000000000000 = hdisk2 (Avail )
hdisk3 (Avail ) hdisk4 (Avail ) hdisk5 (Avail ) hdisk6 (Avail ) hdisk7 (Avail
) hdisk8 (Avail ) hdisk9 (Avail ) hdisk10 (Avail ) hdisk11 (Avail ) hdisk12
(Avail ) hdisk13 (Avail ) hdisk14 (Avail ) hdisk15 (Avail ) hdisk16 (Avail )
hdisk17 (Avail )

Figure 13-12 AIX command output



Figure 13-13 shows the output of some of the LVM and file system commands.

lsvg -o
dmvg
rootvg

lsvg -l dmvg
dmvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
dmlv jfs 64 64 1 open/syncd /dmfs
loglv00 jfslog 1 1 1 open/syncd N/A

df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 65536 53764 18% 1391 5% /
/dev/hd2 720896 75328 90% 22303 13% /usr
/dev/hd9var 65536 56900 14% 463 3% /var
/dev/hd3 327680 316900 4% 91 1% /tmp
/dev/hd1 65536 63404 4% 18 1% /home
/proc - - - - - /proc
/dev/hd10opt 65536 48984 26% 596 4% /opt
/dev/lv00 262144 125912 52% 20019 31% /usr/vice/cache
AFS 72000000 72000000 0% 0 0% /afs
/dev/dmlv 1048576 1015596 4% 21 1% /dmfs

ls
lost+found
temp
titi.ttt
toto.ttt

Figure 13-13 LVM and file system commands

13.5.3 Migrating the VDisk from image mode to managed mode


The VDisk in image mode is migrated to managed mode by using the method of migrating several extents from one MDisk to another. The output of the svcinfo command is shown in Figure 13-14.

svcinfo lsvdiskextent VDisk0


id number extents
7 128

Figure 13-14 svcinfo output

The value 128 is the number of extents that exist and are in use for VDisk0. They are all on MDisk7 (which backs VDisk0, since a VDisk in image mode resides on exactly one MDisk). No extents are used on MDisk0 and MDisk1.

Note: If you want to migrate only one extent, the last extent (which is the first one to be
migrated) is migrated, and at the same time the VDisk transitions to striped virtualization
type while the MDisk transitions to managed mode.

You can check the number of free extents on the MDisks that you want to migrate to using the
following commands:
svcinfo lsfreeextents MDisk0



svcinfo lsfreeextents MDisk1

Important: For migration using migrateexts, be sure that the number of free extents on
the target MDisks is greater than the number of extents you want to migrate.

򐂰 Migrate 40 extents of the image mode VDisk (VDisk0) from the source MDisk7 to the target MDisk0, transitioning the VDisk to managed mode:
svctask migrateexts -vdisk VDisk0 -source MDisk7 -target MDisk0 -exts 40
The output from the svcinfo command after this operation is shown in Figure 13-15.

svcinfo lsvdiskextent VDisk0


id: number extents
0 40 ---> 40 extents on MDisk0
7 88 ---> 88 extents on MDisk7

Figure 13-15 svcinfo command output

򐂰 After a migration of 43 extents from MDisk7 to MDisk1:


svctask migrateexts -vdisk VDisk0 -source MDisk7 -target MDisk1 -exts 43
Figure 13-16 shows the resulting output from the svcinfo command with various flags.

svcinfo lsvdiskextent VDisk0


id:number extents
0:40
1:43
7:45
svcinfo lsmdiskextent MDisk0
id:number of extents
0:40

svcinfo lsmdiskextent MDisk1


id:number of extents
0:43

svcinfo lsmdiskextent MDisk7


id:number of extents
0:45

Figure 13-16 svcinfo command output



Figure 13-17 shows output from the svcinfo command with different flags.

svcinfo lsmdisk -delim :


id:name:status:access mode:mdisk_grp id:mdisk_grp name:capacity:ctrl s/n
0:MDisk0:online:managed:0:MDiskgrp0:3.0GB:
1:MDisk1:online:managed:0:MDiskgrp0:4.0GB:
2:MDisk2:online:unmanaged:::10.0GB:
3:MDisk3:online:unmanaged:::50.0GB:
4:MDisk4:online:unmanaged:::60.0GB:
5:MDisk5:online:unmanaged:::89.0GB:
6:MDisk6:online:unmanaged:::256.3GB:
7:MDisk7:online:managed:0:MDiskgrp0:2.0GB:

svcinfo lsvdisk -delim :


id:name:IO group id:IO group name:status:mdisk_grp id:mdisk_grp
name:capacity:type:FC id:FC name:id:RC name
0:VDisk0:0:io_grp0:online:0:MDiskgrp0:2.0GB:seq::::

svcinfo lsmdiskgrp -delim :


id:name:status:MDisk count:VDisk count:capacity:extent size:free capacity
0:MDiskgrp0:online:3:1:9.0GB:16:7.0GB

Figure 13-17 svcinfo command output

13.5.4 Migrating the VDisk from managed mode to image mode


The VDisk in managed mode can be migrated to image mode by using the svctask migratetoimage command. See “Migrating the VDisk to image mode” on page 497 for more details. One reason for doing this is to return a volume to its original state, for example after an SVC virtualization trial period has expired.

In this example we migrated VDisk VD1_LINUX1 from managed mode to image mode.
1. Check that the VDisk is not mapped to a host with the svcinfo lsvdiskhostmap command. If it is mapped, issue the svctask rmvdiskhostmap command to unmap the host.
2. Check the size of the VDisk with the svcinfo lsvdisk command.
– VDisk VD1_LINUX1 on MDG MDG0_DS43 is 200.0MB
3. Check for an unmanaged MDisk that is large enough to contain the data of the VDisk being migrated with the svcinfo lsmdisk command. If there is no unmanaged MDisk, you must create one on your back-end storage and use the svcinfo lsmdiskcandidate command to discover the new unmanaged MDisk.
– mdisk8 is unmanaged with a size of 10.0GB
4. Check for an MDG to use with the svcinfo lsmdiskgrp command.
– MDG MDG0_DS43 has 504.5GB as free capacity.

Example 13-1 Checking hostmap, VDisk, unmanaged MDisk and MDG before the migrate to image
IBM_2145:admin>svcinfo lsvdiskhostmap VD1_LINUX1
id name SCSI_id host_id host_name wwpn vdisk_UID
16 VD1_LINUX1 0 2 LINUX1 210000E08B04D451
600507680189801B2000000000000021
16 VD1_LINUX1 0 2 LINUX1 210100E08B24D451
600507680189801B2000000000000021

IBM_2145:admin>svctask rmvdiskhostmap -host LINUX1 VD1_LINUX1

IBM_2145:admin>svcinfo lsvdiskhostmap VD1_LINUX1


IBM_2145:admin>



IBM_2145:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:R
C_name:vdisk_UID
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000004
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000003
15:vdisk15:0:io_grp0:online:2:MDG2_DS43:200.0MB:striped:::::600507680189801B2000000000000012
16:VD1_LINUX1:0:io_grp0:online:1:MDG0_DS43:200.0MB:striped:::::600507680189801B2000000000000021

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf0000002753fb8b1c20000
0000000000000000000000000000
7:mdisk7:online:image:4:MDG4_DS43_IMG:10.0GB:0000000000000006:DS4301:600a0b80000fbdf000000282419360fa000
00000000000000000000000000000
8:mdisk8:online:unmanaged:::10.0GB:0000000000000008:DS4301:600a0b80000fbdf000000283419361100000000000000
0000000000000000000

IBM_2145:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:6:607.3GB:32:576.2GB
1:MDG0_DS43:online:2:12:607.0GB:32:504.5GB
2:MDG2_DS43:online:2:5:1498.5GB:32:797.2GB
3:MDG3_DS43:online:1:2:200.4GB:32:180.4GB

Execute the svctask migratetoimage command as follows:


svctask migratetoimage -vdisk VD1_LINUX1 -mdisk mdisk8 -mdiskgrp MDG0_DS43

Use svcinfo lsvdiskextent VD1_LINUX1 to see the status. When all extents are on one MDisk ID, the migration is finished.

Checking the status of the VDisk and the MDisk again shows that VDisk VD1_LINUX1 and MDisk mdisk8 are in image mode, as shown in Example 13-2.

Example 13-2 Checking progress, VDisk, unmanaged MDisk and MDG after the migrate to image
IBM_2145:admin>svcinfo lsvdiskextent VD1_LINUX1
id number_extents
8 7

IBM_2145:admin>svcinfo lsvdisk -delim :


id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:R
C_name:vdisk_UID
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000004
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000003
15:vdisk15:0:io_grp0:online:2:MDG2_DS43:200.0MB:striped:::::600507680189801B2000000000000012
16:VD1_LINUX1:0:io_grp0:online:1:MDG0_DS43:200.0MB:image:::::600507680189801B2000000000000021

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf0000002753fb8b1c20000
0000000000000000000000000000
7:mdisk7:online:image:4:MDG4_DS43_IMG:10.0GB:0000000000000006:DS4301:600a0b80000fbdf000000282419360fa000
00000000000000000000000000000
8:mdisk8:online:image:1:MDG0_DS43:10.0GB:0000000000000008:DS4301:600a0b80000fbdf000000283419361100000000
0000000000000000000000000

Free the data from the SVC by using the following command:
svctask rmvdisk VD1_LINUX1



If the command succeeds then the underlying back-end storage controller will be consistent
with the data which a host could previously have read from the Image Mode VDisk
(VD1_LINUX1). That is, all fast write data will have been flushed to the underlying LUN.
Deleting an Image Mode VDisk causes the MDisk (mdisk8) associated with the VDisk
(VD1_LINUX1) to be ejected from the MDG. The mode of the MDisk (mdisk8) will be returned
to “Unmanaged”. This is shown in Example 13-3.

Example 13-3 Status of the MDisk after removing the VDisk


IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf0000002753fb8b1c20000000
0000000000000000000000000
7:mdisk7:online:image:4:MDG4_DS43_IMG:10.0GB:0000000000000006:DS4301:600a0b80000fbdf000000282419360fa000000
00000000000000000000000000
8:mdisk8:online:unmanaged:::10.0GB:0000000000000008:DS4301:600a0b80000fbdf000000283419361100000000000000000
0000000000000000

Now, go to the backend storage and assign the data to the host it belongs to, and run the
discovery steps on the server to detect the LUN that contains the data from VD1_LINUX1.

13.5.5 Migrating the VDisk from image mode to image mode


Migrating a VDisk from image mode to image mode is used to move image mode VDisks from one storage subsystem to another. The data stays available to the applications during this migration, so this is a zero-downtime data move from one disk subsystem to another. Use the svctask migratetoimage command to perform this task. See
“Migrating the VDisk to image mode” on page 497 for more details.
1. Check the size of the VDisk with the svcinfo lsvdisk command.
– VDisk VD9_img on MDG MDG4_DS43 is 10 GB
2. Create an MDisk or MDisks in the storage subsystem you want to migrate the data to. This MDisk must be in unmanaged mode and large enough to contain the data of the disk being migrated. Run the svcinfo lsmdiskcandidate command to discover the new unmanaged MDisk.
– mdisk8 is unmanaged and 10 GB on controller DS4302
3. Check for an MDG to use with the svcinfo lsmdiskgrp command.
– MDG MDG0_DS43 has enough free capacity.
4. Run svctask migratetoimage -vdisk VD9_img -mdisk mdisk8 -mdiskgrp MDG0_DS43
command.
5. Check VDisk info with svcinfo lsvdisk command.
6. Check MDisk info with svcinfo lsmdisk command. This is shown in Example 13-4.

Example 13-4 Migrating image mode VDisk to another storage subsystem


IBM_2145:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:RC_n
ame:vdisk_UID
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000004
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000003
17:VD9_img:0:io_grp0:online:4:MDG0_DS43:10.0GB:image:::::600507680189801B2000000000000015

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID



0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf0000002753fb8b1c20000000
0000000000000000000000000
8:mdisk8:online:unmanaged:::10.0GB:0000000000000008:DS4302:600a0b80000fbdf000000283419361100000000000000000
0000000000000000

IBM_2145:admin>svctask migratetoimage -vdisk VD9_img -mdisk mdisk8 -mdiskgrp MDG0_DS43

IBM_2145:admin>svcinfo lsvdisk -delim :


id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:RC_n
ame:vdisk_UID
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000004
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::600507680189801B2000000000000003
17:VD9_img:0:io_grp0:online:4:MDG0_DS43:10.0GB:image:::::600507680189801B2000000000000015

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf0000002753fb8b1c20000000
0000000000000000000000000
8:mdisk8:online:managed:1:MDG0_DS43:10.0GB:0000000000000008:DS4302:600a0b80000fbdf0000002834193611000000000
000000000000000000000000

Free the data from the SVC by using the following command:
svctask rmvdisk VD9_img

If the command succeeds then the underlying back-end storage controller will be consistent
with the data which a host could previously have read from the image mode VDisk
(VD9_img). That is, all fast write data will have been flushed to the underlying LUN. Deleting
an Image Mode VDisk causes the MDisk (mdisk8) associated with the VDisk (VD9_img) to be
ejected from the MDG. The mode of the MDisk (mdisk8) will be returned to “Unmanaged”.

13.6 Data migration from ESS to SVC using the CLI


This scenario (shown in Figure 13-18 on page 516) consists of the migration of one hdisk (IBM TotalStorage Enterprise Storage Server (ESS) disk2) of an AIX host AIX1 from an ESS to an SVC. The ESS is, both before and after the migration, connected directly to the AIX host (via the 2109 switches).

Before the migration, the LUN masking is defined in the ESS in order to give access to the
AIX host system for ESS disk1 and ESS disk2.

After the migration, the LUN masking is defined in the ESS to give access to the AIX host
system for the ESS disk1, and access to the SVC nodes for the ESS disk2.

The following actions occur for the migration:


1. The hdisk is first discovered as “MDisk unmanaged” by the SVC. It is MDisk9.
2. A VDisk is created in an image mode using this MDisk MDisk9.
3. This new VDisk is mapped to the host system AIX1.
4. You can run the cfgmgr command on the AIX host system AIX1.
5. The VDisk is again available for the host system AIX1.
6. The VDisk can be migrated from image mode to managed mode concurrently while it is being accessed by the AIX host system. The migration method used is to migrate a number of extents (100) from one MDisk to another MDisk in the MDG.



Figure 13-18 Migration scenario from ESS to SVC

13.6.1 AIX host connected directly to the ESS


The AIX host initial configuration is shown in Example 13-5.

Example 13-5 AIX host system configuration before the migration


AIX1#lspv
hdisk0 0007905873b5c2fb rootvg
hdisk1 0007905873b5c364 rootvg
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
vpath0 00079058573c2177 essdisk1vg
vpath1 00079058573c65a9 essdisk2vg

AIX1#lsdev -Cc disk


hdisk0 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-9,0 16 Bit SCSI Disk Drive
hdisk2 Available 20-58-01 IBM FC 2105F20
hdisk3 Available 20-58-01 IBM FC 2105F20
hdisk4 Available 20-60-01 IBM FC 2105F20
hdisk5 Available 20-60-01 IBM FC 2105F20
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver

AIX1#datapath query adapter


Active Adapters :2



Adpt# Adapter Name State Mode Select Errors Paths Active
0 fscsi0 NORMAL ACTIVE 6607 12 2 2
1 fscsi1 NORMAL ACTIVE 6757 12 2 2

AIX1#datapath query device


Total Devices : 2
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2105F20 POLICY: Optimized
SERIAL: 40514830
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 OPEN NORMAL 1797 0
1 fscsi1/hdisk4 OPEN NORMAL 1825 0
DEV#: 1 DEVICE NAME: vpath1 TYPE: 2105F20 POLICY: Optimized
SERIAL: 40914830
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 1792 0
1 fscsi1/hdisk5 OPEN NORMAL 1796 0

AIX1#lspv vpath1
PHYSICAL VOLUME: vpath1 VOLUME GROUP: essdisk2vg
PV IDENTIFIER: 00079058573c65a9 VG IDENTIFIER 0007905800004c00000000f5573c6ecd
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 595 (4760 megabytes) VG DESCRIPTORS: 2
FREE PPs: 584 (4672 megabytes) HOT SPARE: no
USED PPs: 11 (88 megabytes)
FREE DISTRIBUTION: 119..108..119..119..119
USED DISTRIBUTION: 00..11..00..00..00

AIX1#lsvpcfg
vpath0 (Avail pv essdisk1vg) 40514830 = hdisk2 (Avail ) hdisk4 (Avail )
vpath1 (Avail pv essdisk2vg) 40914830 = hdisk3 (Avail ) hdisk5 (Avail )

AIX1#lsvg -o
essdisk2vg
essdisk1vg
rootvg

AIX1#lsvg -l essdisk1vg
essdisk1vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
essdisk1lv jfs 10 10 1 open/syncd /essdisk1fs
loglv00 jfslog 1 1 1 open/syncd N/A

AIX1#lsvg -l essdisk2vg
essdisk2vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
essdisk2lv jfs 10 10 1 open/syncd /essdisk2fs
loglv01 jfslog 1 1 1 open/syncd N/A

AIX1#lsvg essdisk2vg
VOLUME GROUP: essdisk2vg VG IDENTIFIER: 0007905800004c00000000f5573c6ecd
VG STATE: active PP SIZE: 8 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 595 (4760 megabytes)
MAX LVs: 256 FREE PPs: 584 (4672 megabytes)
LVs: 2 USED PPs: 11 (88 megabytes)
OPEN LVs: 2 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2



STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no

AIX1#df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 16384 7556 54% 1296 16% /
/dev/hd2 1867776 1265400 33% 20526 5% /usr
/dev/hd9var 16384 9912 40% 436 11% /var
/dev/hd3 32768 29156 12% 65 1% /tmp
/dev/essdisk1lv 81920 79308 4% 17 1% /essdisk1fs
/dev/essdisk2lv 81920 79308 4% 17 1% /essdisk2fs

AIX1#pwd
/essdisk1fs
AIX1#ls
file_on_ess.txt lost+found
AIX1#more file_on_ess.txt
file created on ess with direct connexion on disk1

AIX1#pwd
/essdisk2fs
AIX1#ls
file_on_ess.txt lost+found
AIX1#more file_on_ess.txt
file created on ess with direct connexion on disk2

>>>>>>>>>>> Before the migration of ess disk 2 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<

AIX1#varyoffvg essdisk2vg

AIX1#exportvg essdisk2vg

AIX1#lsvg -o
essdisk1vg
rootvg

AIX1#df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 16384 7552 54% 1290 16% /
/dev/hd2 1867776 1265400 33% 20526 5% /usr
/dev/hd9var 16384 9912 40% 436 11% /var
/dev/hd3 32768 29148 12% 66 1% /tmp
/dev/essdisk1lv 81920 79304 4% 18 1% /essdisk1fs

AIX1#lsvpcfg
vpath0 (Avail pv essdisk1vg) 40514830 = hdisk2 (Avail ) hdisk4 (Avail )
vpath1 (Avail pv ) 40914830 = hdisk3 (Avail ) hdisk5 (Avail )

AIX1#lsdev -Cc disk


hdisk0 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-9,0 16 Bit SCSI Disk Drive
hdisk2 Available 20-58-01 IBM FC 2105F20
hdisk3 Available 20-58-01 IBM FC 2105F20
hdisk4 Available 20-60-01 IBM FC 2105F20
hdisk5 Available 20-60-01 IBM FC 2105F20
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver



AIX1#lspv
hdisk0 0007905873b5c2fb rootvg
hdisk1 0007905873b5c364 rootvg
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
vpath0 00079058573c2177 essdisk1vg
vpath1 00079058573c65a9 None

AIX1#rmdev -dl vpath1


vpath1 deleted
AIX1#rmdev -dl hdisk3
hdisk3 deleted
AIX1#rmdev -dl hdisk5
hdisk5 deleted

AIX1#lsdev -Cc disk


hdisk0 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-9,0 16 Bit SCSI Disk Drive
hdisk2 Available 20-58-01 IBM FC 2105F20
hdisk4 Available 20-60-01 IBM FC 2105F20
vpath0 Available Data Path Optimizer Pseudo Device Driver

AIX1#lspv
hdisk0 0007905873b5c2fb rootvg
hdisk1 0007905873b5c364 rootvg
hdisk2 none None
hdisk4 none None
vpath0 00079058573c2177 essdisk1vg

AIX1#lsvpcfg
vpath0 (Avail pv essdisk1vg) 40514830 = hdisk2 (Avail ) hdisk4 (Avail )

AIX1#datapath query adapter


Active Adapters :2
Adpt# Adapter Name State Mode Select Errors Paths Active
0 fscsi0 NORMAL ACTIVE 8029 12 1 1
1 fscsi1 NORMAL ACTIVE 8228 12 1 1

AIX1#datapath query device


Total Devices : 1
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2105F20 POLICY: Optimized
SERIAL: 40514830
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 OPEN NORMAL 2502 0
1 fscsi1/hdisk4 OPEN NORMAL 2566 0

Now if you look at the SVC, there are already two MDisks (MDisk7 and MDisk8) from an ESS
in unmanaged mode as shown in Example 13-6.

Example 13-6 Configuring SVC back-end storage


IBM_2145:admin>svcinfo lsmdisk -delim /
id/name/status/mode/mdisk_grp_id/mdisk_grp_name/capacity/ctrl_LUN_#/controller_name
0/MDisk0/online/unmanaged///167.0GB/0000000000000000/F600_23C1436
1/MDisk1/online/unmanaged///200.4GB/0000000000000001/F600_23C1436
2/MDisk2/online/unmanaged///167.0GB/0000000000000002/F600_23C1436
3/MDisk3/online/unmanaged///200.4GB/0000000000000003/F600_23C1436
4/MDisk4/online/unmanaged///339.3GB/0000000000000004/F600_23C1436
5/MDisk5/online/unmanaged///407.2GB/0000000000000005/F600_23C1436
6/MDisk6/online/unmanaged///500.0GB/0000000000000006/F600_23C1436
7/MDisk7/online/unmanaged///4.7GB/0000000000000000/ESS
8/MDisk8/online/unmanaged///4.7GB/0000000000000001/ESS

We use these two MDisks to create an MDG named “ess_MDiskgrp0” with an extent size of
16 MB, and then we add the two ESS MDisks to the MDisk group as shown in
Example 13-7.

Example 13-7 Creating an MDisk group with two MDisks from ESS
>>>>>>>>>>>>> Creation of an MDisk group for ESS back-end storage

IBM_2145:admin>svctask mkmdiskgrp -name ess_MDiskgrp0 -ext 16


mdisk Group, id [1], successfully created

IBM_2145:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
1:ess_MDiskgrp0:online:0:0:0:16:0

>>>>>>>>> add 2 MDisks 7 and 8 from ESS in ess_MDiskgrp0

IBM_2145:admin>svctask addmdisk -mdisk MDisk7:MDisk8 ess_MDiskgrp0

IBM_2145:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
1:ess_MDiskgrp0:online:2:0:9.3GB:16:9.3GB

IBM_2145:admin>svcinfo lsmdiskgrp ess_MDiskgrp0


id 1
name ess_MDiskgrp0
status online
mdisk_count 2
vdisk_count 0
capacity 9.3GB
extent_size 16
free_capacity 9.3GB

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name
0:MDisk0:online:unmanaged:::167.0GB:0000000000000000:F600_23C1436
1:MDisk1:online:unmanaged:::200.4GB:0000000000000001:F600_23C1436
2:MDisk2:online:unmanaged:::167.0GB:0000000000000002:F600_23C1436
3:MDisk3:online:unmanaged:::200.4GB:0000000000000003:F600_23C1436
4:MDisk4:online:unmanaged:::339.3GB:0000000000000004:F600_23C1436
5:MDisk5:online:unmanaged:::407.2GB:0000000000000005:F600_23C1436
6:MDisk6:online:unmanaged:::500.0GB:0000000000000006:F600_23C1436
7:MDisk7:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000000:ESS
8:MDisk8:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000001:ESS

13.6.2 SVC added between the AIX host and the ESS
After changing the LUN masking definition on the ESS, add the SVC between the AIX host
and the ESS.

On the SAN Volume Controller
Follow these steps:
1. Perform discovery of the new MDisk (previously mapped to AIX) by entering the following
command:
svctask detectmdisk
2. Enter the svcinfo command as shown in Example 13-8.

Example 13-8 A new MDisk from the ESS (MDisk9), which corresponds to ESS disk2 from AIX
IBM_2145:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name
0:MDisk0:online:unmanaged:::167.0GB:0000000000000000:F600_23C1436
1:MDisk1:online:unmanaged:::200.4GB:0000000000000001:F600_23C1436
2:MDisk2:online:unmanaged:::167.0GB:0000000000000002:F600_23C1436
3:MDisk3:online:unmanaged:::200.4GB:0000000000000003:F600_23C1436
4:MDisk4:online:unmanaged:::339.3GB:0000000000000004:F600_23C1436
5:MDisk5:online:unmanaged:::407.2GB:0000000000000005:F600_23C1436
6:MDisk6:online:unmanaged:::500.0GB:0000000000000006:F600_23C1436
7:MDisk7:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000000:ESS
8:MDisk8:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000001:ESS
9:MDisk9:online:unmanaged:::4.7GB:0000000000000002:ESS

IBM_2145:admin>svcinfo lsfreeextents MDisk7


id 7
number_of_extents 297

IBM_2145:admin>svcinfo lsfreeextents MDisk8


id 8
number_of_extents 297

IBM_2145:admin>svcinfo lsfreeextents MDisk9


id 9
number_of_extents 0

3. Create the VDisk in image mode using the MDisk MDisk9 as shown in Example 13-9.

Example 13-9 Creating the VDisk in image mode


>>>>>>>>> create the VDisk in image mode using MDisk9 in mdisgrp ess

IBM_2145:admin>svctask mkvdisk -mdiskgrp ess_MDiskgrp0 -iogrp 0 -vtype image -mdisk MDisk9


aix1_ima_vd1
Host LUN, id [0], successfully created

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name
0:MDisk0:online:unmanaged:::167.0GB:0000000000000000:F600_23C1436
1:MDisk1:online:unmanaged:::200.4GB:0000000000000001:F600_23C1436
2:MDisk2:online:unmanaged:::167.0GB:0000000000000002:F600_23C1436
3:MDisk3:online:unmanaged:::200.4GB:0000000000000003:F600_23C1436
4:MDisk4:online:unmanaged:::339.3GB:0000000000000004:F600_23C1436
5:MDisk5:online:unmanaged:::407.2GB:0000000000000005:F600_23C1436
6:MDisk6:online:unmanaged:::500.0GB:0000000000000006:F600_23C1436
7:MDisk7:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000000:ESS
8:MDisk8:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000001:ESS
9:MDisk9:online:image:1:ess_MDiskgrp0:4.7GB:0000000000000002:ESS

IBM_2145:admin>svcinfo lsvdisk -delim :

id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
0:VDisk0:0:io_grp0:online:1:ess_MDiskgrp0:4.7GB:image::::

IBM_2145:admin>svcinfo lsmdisk MDisk9


id 9
name MDisk9
status online
mode image
mdisk_grp_id 1
mdisk_grp_name ess_MDiskgrp0
capacity 4.7GB
quorum_index
block_size 512
controller_name ESS
ctrl_type 4
ctrl_WWNN 5005076300C00BE8
controller_id 1
path_count 2
max_path_count 2
ctrl_LUN_# 2

IBM_2145:admin>svcinfo lsvdisk VDisk0


id 0
name VDisk0
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name ess_MDiskgrp0
capacity 4.7GB
type image
formatted no
mdisk_id 9
mdisk_name MDisk9
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 2

>>> number of extents of the VDisk. MDisk 9 is providing extents for the specified VDisk 0
IBM_2145:admin>svcinfo lsvdiskextent VDisk0
id number_extents
9 299

>>> VDisk 0 that is using extents on the specified MDisk MDisk9.

IBM_2145:admin>svcinfo lsmdiskextent MDisk7

IBM_2145:admin>svcinfo lsmdiskextent MDisk8

IBM_2145:admin>svcinfo lsmdiskextent MDisk9


id number_of_extents
0 299

4. Map the VDisk to the host system AIX (LUN masking) as shown in Example 13-10 on
page 523.

Example 13-10 Host mapping of the image VDisk to host system AIX
IBM_2145:admin>svcinfo lshbaportcandidate
id
210000E08B08AFD6
210100E08B291541
210100E08B28AFD6
210000E08B091541
210100E08B24D751
210000E08B04D751
10000000C9295A9A
10000000C9266F5B

IBM_2145:admin>svcinfo lshost
id name port_count
0 AIX1 2

IBM_2145:admin>svcinfo lsvdisk -delim :


id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
0:VDisk0:0:io_grp0:online:1:ess_MDiskgrp0:4.7GB:image::::

IBM_2145:admin>svcinfo lshostvdiskmap -delim :


id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
0:AIX1:0:0:VDisk0:10000000C9266F5B:600507680183001AD00000000000000A

IBM_2145:admin>svcinfo lsvdiskhostmap -delim :


id:name:SCSI_id:host_id:host_name:wwpn:vdisk_UID
0:VDisk0:0:0:AIX1:10000000C9266F5B:600507680183001AD00000000000000A
0:VDisk0:0:0:AIX1:10000000C9295A9A:600507680183001AD00000000000000A

On the AIX host


On the AIX host system after you run the cfgmgr command, you see the results as shown in
Example 13-11.

Example 13-11 Discovery of the ESS disk2 on the AIX host system via the SVC
>>>>>>>>>>>>>>>>>>>>>>>>>>>> after cfgmgr >>>>>>>>>>>>>>>>>>>>>>>>>>>>>><

AIX1#lspv
hdisk0 0007905873b5c2fb rootvg
hdisk1 0007905873b5c364 rootvg
hdisk2 none None
hdisk4 none None
vpath0 00079058573c2177 essdisk1vg
hdisk3 00079058573c65a9 None
hdisk5 00079058573c65a9 None
hdisk6 00079058573c65a9 None
hdisk7 00079058573c65a9 None
vpath1 none None

AIX1#lsdev -Cc disk


hdisk0 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-9,0 16 Bit SCSI Disk Drive
hdisk2 Available 20-58-01 IBM FC 2105F20
hdisk4 Available 20-60-01 IBM FC 2105F20
vpath0 Available Data Path Optimizer Pseudo Device Driver
hdisk3 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk5 Available 20-58-01 IBM TotalStorage SAN Volume Controller device
hdisk6 Available 20-60-01 IBM TotalStorage SAN Volume Controller device
hdisk7 Available 20-60-01 IBM TotalStorage SAN Volume Controller device
vpath1 Available Data Path Optimizer Pseudo Device Driver

AIX1#importvg -y essdisk2vg hdisk3


essdisk2vg
AIX1#lspv
hdisk0 0007905873b5c2fb rootvg
hdisk1 0007905873b5c364 rootvg
hdisk2 none None
hdisk4 none None
vpath0 00079058573c2177 essdisk1vg
hdisk3 00079058573c65a9 essdisk2vg
hdisk5 00079058573c65a9 essdisk2vg
hdisk6 00079058573c65a9 essdisk2vg
hdisk7 00079058573c65a9 essdisk2vg
vpath1 none None

AIX1#hd2vp essdisk2vg
essdisk2vg
essdisk2vg contains these hdisks and pvids
00079058573c65a90000000000000000 hdisk3

hdisk3 is part of vpath1.


essdisk2vg
Deleting pvid of hdisk3 from ODM database
Create vpath1 pvid of 00079058573c65a9 in ODM database.
lname=vpath1, pvid=00079058573c65a9
Deleting pvid of hdisk5 from ODM database
Deleting pvid of hdisk6 from ODM database
Deleting pvid of hdisk7 from ODM database
varyon essdisk2vg was successful.
essdisk2vg is converted to dpo device vpathes successfully!

AIX1#lspv
hdisk0 0007905873b5c2fb rootvg
hdisk1 0007905873b5c364 rootvg
hdisk2 none None
hdisk4 none None
vpath0 00079058573c2177 essdisk1vg
hdisk3 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
vpath1 00079058573c65a9 essdisk2vg

AIX1#datapath query adapter


Active Adapters :2
Adpt# Adapter Name State Mode Select Errors Paths Active
0 fscsi0 NORMAL ACTIVE 386 0 3 3
1 fscsi1 NORMAL ACTIVE 449 0 3 3

AIX1#datapath query device


Total Devices : 2
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2105F20 POLICY: Optimized
SERIAL: 40514830
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 OPEN NORMAL 31 0
1 fscsi1/hdisk4 OPEN NORMAL 37 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 600507680183001AD00000000000000A
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 355 0
1 fscsi0/hdisk5 OPEN NORMAL 0 0
2 fscsi1/hdisk6 OPEN NORMAL 412 0
3 fscsi1/hdisk7 OPEN NORMAL 0 0

AIX1#lsvpcfg
vpath0 (Avail pv essdisk1vg) 40514830 = hdisk2 (Avail ) hdisk4 (Avail )
vpath1 (Avail pv essdisk2vg) 600507680183001AD00000000000000A = hdisk3 (Avail ) hdisk5
(Avail ) hdisk6 (Avail ) hdisk7 (Avail )

AIX1#varyonvg essdisk2vg

AIX1#lsvg -o
essdisk2vg
essdisk1vg
rootvg

AIX1#lsvg -l essdisk2vg
essdisk2vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
essdisk2lv jfs 10 10 1 open/syncd /essdisk2fs
loglv01 jfslog 1 1 1 open/syncd N/A

AIX1#lsvg essdisk2vg
VOLUME GROUP: essdisk2vg VG IDENTIFIER: 0007905800004c00000000f5573c6ecd
VG STATE: active PP SIZE: 8 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 595 (4760 megabytes)
MAX LVs: 256 FREE PPs: 584 (4672 megabytes)
LVs: 2 USED PPs: 11 (88 megabytes)
OPEN LVs: 2 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no

AIX1#fsck -y /essdisk2fs
** Checking /dev/ressdisk2lv (/essdi)
** Phase 0 - Check Log
log redo processing for /dev/ressdisk2lv
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Inode Map
** Phase 6 - Check Block Map
9 files 5232 blocks 158608 free

AIX1#mount /essdisk2fs

AIX1#df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 16384 7560 54% 1301 16% /
/dev/hd2 1867776 1265392 33% 20526 5% /usr
/dev/hd9var 16384 9880 40% 437 11% /var
/dev/hd3 32768 29136 12% 67 1% /tmp
/dev/hd1 16384 15820 4% 18 1% /home
/proc - - - - - /proc
/dev/hd10opt 32768 26592 19% 294 4% /opt
/dev/essdisk2lv 81920 79304 4% 18 1% /essdisk2fs

AIX1#cd /essdisk2fs
AIX1#ls
file_on_ess.txt lost+found
AIX1#more file_on_ess.txt
file created on ess with direct connexion on disk2

13.6.3 Migrating the VDisk from image mode to managed mode


The VDisk in image mode is still completely made up of extents (299) from MDisk9. MDisk7
and MDisk8, with 297 free extents each, are in the same MDG.

The purpose of the migration is to migrate some extents from MDisk9 to MDisk7 and MDisk8
using the command:
svctask migrateexts

This command also migrates the VDisk from image mode to managed mode. See
Example 13-12 on page 527.

Example 13-12 Migrating a VDisk from image mode to managed mode
IBM_2145:admin>svcinfo lsvdiskextent VDisk0
id number_extents
9 299

>>>> 299 extents for the VDisk which are in MDisk 9

IBM_2145:admin>svcinfo lsfreeextents MDisk9


id 9
number_of_extents 0

IBM_2145:admin>svcinfo lsfreeextents MDisk8


id 8
number_of_extents 297

IBM_2145:admin>svcinfo lsfreeextents MDisk7


id 7
number_of_extents 297

>>>>>>>>>>>>>>>>>>> you know that 299 extents of MDisk9 are in VDisk0 (number 0)

IBM_2145:admin>svcinfo lsmdiskextent MDisk9


id number_of_extents
0 299
>>>>>>> no VDisk uses extents from MDisks 7 and 8
IBM_2145:admin>svcinfo lsmdiskextent MDisk8
IBM_2145:admin>svcinfo lsmdiskextent MDisk7
>>>>>>>>>>> start migration

IBM_2145:admin>svcinfo lsmigrate
>>>>>> no migration has started yet

IBM_2145:admin>svctask migrateexts -source MDisk9 -exts 100 -target MDisk7 -vdisk VDisk0

>>>>>>>>>>>>> The migration has started

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 1
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 7
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 5
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 7
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 7
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 7
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 35
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 7
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 85
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 7
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 92
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 7
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate

>>>>>>>>>>>>>>> migration done the VDisk is now managed >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name
0:MDisk0:online:unmanaged:::167.0GB:0000000000000000:F600_23C1436
1:MDisk1:online:unmanaged:::200.4GB:0000000000000001:F600_23C1436
2:MDisk2:online:unmanaged:::167.0GB:0000000000000002:F600_23C1436
3:MDisk3:online:unmanaged:::200.4GB:0000000000000003:F600_23C1436
4:MDisk4:online:unmanaged:::339.3GB:0000000000000004:F600_23C1436
5:MDisk5:online:unmanaged:::407.2GB:0000000000000005:F600_23C1436
6:MDisk6:online:unmanaged:::500.0GB:0000000000000006:F600_23C1436
7:MDisk7:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000000:ESS
8:MDisk8:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000001:ESS
9:MDisk9:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000002:ESS

>>>>>>>>>>>>>>>>> Status of the extents after the first migration

IBM_2145:admin>svcinfo lsmdiskextent MDisk7


id number_of_extents
0 100

IBM_2145:admin>svcinfo lsmdiskextent MDisk8

IBM_2145:admin>svcinfo lsmdiskextent MDisk9


id number_of_extents
0 199

IBM_2145:admin>svcinfo lsvdiskextent VDisk0

id number_extents
7 100
9 199

>>>>>>>>>>>> new migration from MDisk 9 to MDisk 8 for VDisk VDisk0 with 100 extents

IBM_2145:admin>svctask migrateexts -source MDisk9 -exts 100 -target MDisk8 -vdisk VDisk0

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 4
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 8
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 51
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 8
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate
migrate_type mdisk_Extents_Migration
progress 65
migrate_vdisk_index 0
migrate_source_mdisk_index 9
migrate_target_mdisk_index 8
number_extents 100
max_thread_count 4

IBM_2145:admin>svcinfo lsmigrate

>>>> Migration is done >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

IBM_2145:admin>svcinfo lsvdiskextent VDisk0


id number_extents
7 100
8 100
9 99

IBM_2145:admin>svcinfo lsmdiskextent MDisk7


id number_of_extents
0 100

IBM_2145:admin>svcinfo lsmdiskextent MDisk8


id number_of_extents
0 100

IBM_2145:admin>svcinfo lsmdiskextent MDisk9


id number_of_extents
0 99

IBM_2145:admin>svcinfo lsfreeextents MDisk7


id 7
number_of_extents 197

IBM_2145:admin>svcinfo lsfreeextents MDisk8
id 8
number_of_extents 197

IBM_2145:admin>svcinfo lsfreeextents MDisk9


id 9
number_of_extents 198

IBM_2145:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
1:ess_MDiskgrp0:online:3:1:13.9GB:16:9.2GB

IBM_2145:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name
0:MDisk0:online:unmanaged:::167.0GB:0000000000000000:F600_23C1436
1:MDisk1:online:unmanaged:::200.4GB:0000000000000001:F600_23C1436
2:MDisk2:online:unmanaged:::167.0GB:0000000000000002:F600_23C1436
3:MDisk3:online:unmanaged:::200.4GB:0000000000000003:F600_23C1436
4:MDisk4:online:unmanaged:::339.3GB:0000000000000004:F600_23C1436
5:MDisk5:online:unmanaged:::407.2GB:0000000000000005:F600_23C1436
6:MDisk6:online:unmanaged:::500.0GB:0000000000000006:F600_23C1436
7:MDisk7:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000000:ESS
8:MDisk8:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000001:ESS
9:MDisk9:online:managed:1:ess_MDiskgrp0:4.7GB:0000000000000002:ESS
10:MDisk10:online:unmanaged:::4.7GB:0000000000000003:ESS

IBM_2145:admin>svcinfo lsvdisk -delim :


id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name
0:VDisk0:0:io_grp0:online:1:ess_MDiskgrp0:4.7GB:striped::::

IBM_2145:admin>svcinfo lsvdisk VDisk0


id 0
name VDisk0
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name ess_MDiskgrp0
capacity 4.7GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
throttling 0
preferred_node_id 2

During the migration, access to essdisk2fs was possible, and there was no impact on the AIX
host system. The migration process is transparent to the host system.

13.7 Data migration from ESS to SVC using the GUI
This configuration is the same as the one shown in Figure 13-18 on page 516. This scenario
consists of the migration of one volume (the ESS volume with label S:) of a Windows 2000
host from an ESS to an SVC. The ESS is, both before and after the migration, connected to
the Windows 2000 host through the 2109 switches.

Before the migration, LUN masking is defined in the ESS to give the Windows 2000 host
system access to the volume with label S:.

After the migration, LUN masking is defined in the ESS to give the SVC nodes access to the
volume with label S:.

The following actions occur for the migration:

1. Shut down the Windows 2000 host system before changing LUN masking in the ESS.
2. The volume is first discovered by the SVC as an unmanaged MDisk. It is MDisk10.
3. A VDisk is created in image mode using this MDisk.
4. This new VDisk is mapped to the host system Win2K.
5. Restart the Windows 2000 host system.
6. The VDisk is again available to the host system Win2K.

The VDisk can then be migrated from image mode to managed mode while the Win2K host
system continues to access it. The migration method is to move the VDisk from one MDisk
group to another.

13.7.1 Windows 2000 host system connected directly to the ESS


Figure 13-19 on page 532 shows the disk from the ESS.

Figure 13-19 Disk management: One volume from ESS with label S:

Figure 13-20 shows the properties using the Subsystem Device Driver (SDD).

Figure 13-20 Drive S: from ESS with SDD

Figure 13-21 shows the volume properties.

Figure 13-21 Volume properties of Drive S

Figure 13-22 shows the files on volume S.

Figure 13-22 Files on volume S: (Volume from ESS)

13.7.2 SVC added between the Windows 2000 host system and the ESS
After you change LUN masking in the ESS, a new MDisk is discovered in the SVC, MDisk10.
A VDisk named VDisk0 already exists. It was created after migrating one hdisk from a host
system.

Create a new VDisk named winimageVDisk1 in image mode using MDisk10 in the MDisk
group ess_MDiskgrp0:
1. As shown in Figure 13-23, select Create a VDisk from the list and click Go.

Figure 13-23 Viewing VDisks

2. The Create VDisks panel (Figure 13-24) of the wizard is displayed. Click Next.

Figure 13-24 Create VDisks

3. On the Select the Type of VDisk panel (Figure 13-25 on page 535), type the name of the
disk, select the I/O group, select the MDisk group, and select the type of VDisk. Then click
Next.

Figure 13-25 Select the Type of VDisk

4. On the Select Attributes for Image-mode VDisk panel (Figure 13-26), select the preferred
node for I/O and the MDisk used to create the VDisk. Then click Next.

Figure 13-26 Select Attributes for Image-mode VDisk

5. Verify the options that you selected as shown in Figure 13-27 on page 536. Click Finish.

Figure 13-27 Verify VDisk

6. You can view the VDisk that you created as shown in Figure 13-28.

Figure 13-28 Viewing VDisks

The MDisk view is shown in Figure 13-29.

Figure 13-29 Viewing Managed Disks

7. Map the VDisk again to the Windows 2000 host system “WIN2K”. As shown in
Figure 13-30, select the name of the VDisk. Then select Map a VDisk to a host and click
Go.

Figure 13-30 Viewing VDisks

8. Select the target host as shown in Figure 13-31 and click OK.

Figure 13-31 Creating a VDisk-to-Host mapping winimageVDisk1

9. Restart the Windows 2000 host system. Figure 13-32 on page 538 shows the result.

Figure 13-32 The volume S: Volume_from_ESS is online

Figure 13-33 shows the disk properties.

Figure 13-33 The volume S: Volume_from_ESS is online and is 2145 SDD Disk device

The volume is now online with the same data as before the migration. See Figure 13-34 on
page 539.

Figure 13-34 The volume S: Volume_from_ESS is online with the same data

Figure 13-35 shows the properties.

Figure 13-35 The volume S: Volume_from_ESS is online and has four paths to the SVC 2145
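
For reference, the image-mode VDisk creation and host mapping performed above through the
GUI can also be done from the CLI, as in the AIX scenario earlier in this chapter. The
following is a minimal sketch using the names from this example; verify the exact syntax
against the CLI guide for your code level:

IBM_2145:admin>svctask mkvdisk -mdiskgrp ess_MDiskgrp0 -iogrp 0 -vtype image -mdisk MDisk10 -name winimageVDisk1
IBM_2145:admin>svctask mkvdiskhostmap -host WIN2K winimageVDisk1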

13.7.3 Migrating the VDisk from image mode to managed mode


Now the VDisk is migrated to managed mode by migrating the complete VDisk from the
MDG ess_MDiskgrp0 to the MDG Migrated_VDisks. This MDG is based on FAStT 600
back-end storage instead of ESS. It consists of MDisk4 and MDisk5.
1. As shown in Figure 13-36 on page 540, select the VDisk. Then select Migrate a VDisk
from the list and click Go.

Figure 13-36 Viewing VDisks

2. Select the MDG to which to migrate the disk as shown in Figure 13-37. Click OK.

Figure 13-37 Migrating VDisks-winimagevdisk1

3. You can now view the MDG as shown in Figure 13-38.

Figure 13-38 View Managed Disk Groups

Before the migration is complete, the VDisk still belongs to ess_mdiskgrp0 as shown in
Figure 13-39 on page 541.

Figure 13-39 Viewing VDisks

After the migration is complete, you see the results shown in Figure 13-40.

Figure 13-40 Viewing MDisks

Viewing the VDisks gives the results shown in Figure 13-41.

Figure 13-41 VDisk is now in Migrated_VDisks instead of ess_mdiskgrp0

Figure 13-42 shows the details of the VDisk.

Figure 13-42 Details for VDisk winimagevdisk1

Viewing the MDGs after the migration, you see the results shown in Figure 13-43.

Figure 13-43 The MDGs after complete migration

Finally, viewing the MDisks, you see the information as shown in Figure 13-44.

Figure 13-44 The MDisks after migration is complete
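
For reference, the same MDG-to-MDG migration can be started from the CLI with the svctask
migratevdisk command. The following is a minimal sketch using the names from this example;
the exact flags can vary by code level, so verify them against the CLI guide:

IBM_2145:admin>svctask migratevdisk -mdiskgrp Migrated_VDisks -threads 4 -vdisk winimageVDisk1
IBM_2145:admin>svcinfo lsmigrate

As with the extent migrations shown earlier, svcinfo lsmigrate reports progress and returns
no output once the migration is complete.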

13.7.4 Migrating the VDisk from managed mode to image mode


The VDisk in managed mode can be migrated to image mode. A reason for doing this is that
an SVC virtualization trial period has expired and you are returning the volume to its original
state.

In this example we migrate VDisk VD1_LINUX1 from managed mode to image mode.
1. Select VD1_LINUX1 and select migrate to an image mode VDisk from the pull-down menu
(Figure 13-45). Click Go.

Figure 13-45 Select VDisk and start migrate to an image mode VDisk

2. Select a target MDisk (mdisk3) by clicking the radio button (Figure 13-46). Click Next.

Figure 13-46 Select the target MDisk

3. Select an MDG (MDG2_DS43) by clicking the radio button (Figure 13-47). Click Next.

Figure 13-47 Select MDG

4. Select the number of threads (1 to 4). The higher the number, the higher the priority
(Figure 13-48). Click Next.

Figure 13-48 Select the threads

5. Verify the migration attributes (Figure 13-49) and click Finish.

Figure 13-49 Verify migration attributes

6. Check the progress panel (Figure 13-50 on page 545) and click Close.

Figure 13-50 Progress panel

7. This brings you back into the viewing VDisks panel (Figure 13-51). Now you can see that
VD1_LINUX1 is in image mode.

Figure 13-51 Viewing VDisks

8. Click VD1_LINUX1 to see the details (Figure 13-52 on page 546).

Figure 13-52 VDisk details

9. Free the data from the SVC by using the procedure in “Deleting a VDisk” on page 330.

If the command succeeds then the underlying back-end storage controller will be consistent
with the data which a host could previously have read from the Image Mode VDisk
(VD1_LINUX1). That is, all fast write data will have been flushed to the underlying LUN.
Deleting an Image Mode VDisk causes the MDisk (mdisk8) associated with the VDisk
(VD1_LINUX1) to be ejected from the MDG. The mode of the MDisk (mdisk8) will be returned
to “Unmanaged”.
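
For reference, the managed-to-image migration performed above through the GUI can be
started from the CLI with the svctask migratetoimage command. The following is a minimal
sketch using the names from this example; verify the exact syntax against the CLI guide for
your code level:

IBM_2145:admin>svctask migratetoimage -vdisk VD1_LINUX1 -threads 1 -mdisk mdisk3 -mdiskgrp MDG2_DS43
IBM_2145:admin>svcinfo lsmigrate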

13.7.5 Migrating the VDisk from image mode to image mode


Migrating a VDisk from image mode to image mode is used to move image mode VDisks
from one storage subsystem to another. The data stays available to the applications during
this migration, so this is a zero-downtime data move from one disk subsystem to another.
This procedure is nearly the same as the one in “Migrating the VDisk from managed mode to
image mode” on page 543.

In this example we migrate VDisk VD9_img from image mode to image mode.
1. Select VD9_img and select migrate to an image mode VDisk from the pull-down menu
(Figure 13-53 on page 547). Click Go.

Figure 13-53 Migrate to an image mode VDisk

2. Select a target MDisk (mdisk8) by clicking the radio button (Figure 13-54). Click Next.

Figure 13-54 Select the target MDisk

3. Select an MDG (MDG2_DS43) by clicking the radio button (Figure 13-55). Click Next.

Figure 13-55 Select MDG

4. Select the number of threads (1 to 4). The higher the number, the higher the priority
(Figure 13-56 on page 548). Click Next.

Figure 13-56 Select the threads

5. Verify the migration attributes (Figure 13-57) and click Finish.

Figure 13-57 Verify migration attributes

6. Check the progress panel (Figure 13-58) and click Close.

Figure 13-58 Progress on the migration

7. Free the data from the SVC by using the procedure in “Deleting a VDisk” on page 330.

If the command succeeds then the underlying back-end storage controller will be consistent
with the data which a host could previously have read from the Image Mode VDisk
(VD9_img). That is, all fast write data will have been flushed to the underlying LUN. Deleting
an Image Mode VDisk causes the MDisk (mdisk8) associated with the VDisk (VD9_img) to be
ejected from the MDG. The mode of the MDisk (mdisk8) will be returned to Unmanaged.

Chapter 14. Master console


This chapter provides an overview of the master console. Prior to SVC release 2.1, the
master console was provided with the IBM TotalStorage SAN Volume Controller (SVC)
product. Starting with SVC release 2.1, the master console can optionally be ordered
separately as a software-only package available on a set of CDs to be installed on hardware
that meets or exceeds minimum requirements. Individual component software packages will
be made available on the Web, except for IBM DB2® and IBM Director. The CDs will not be
available on the Web. The SVC master console is provided to help manage the virtualization
environment.

This console can be used to manage the SVC clusters using the CIM Agent for SVC by
means of ICAT through a Web browser, or through the administrative command-line interface
over a Secure Shell (SSH) session. Prior to SVC release 2.1, a trial copy of Tivoli’s Basic
Management Suite was included with the solution for evaluation. IBM Tivoli SAN Manager
(ITSANM) Bonus Pack, a part of the suite, discovers and renders the customer’s entire
storage area network (SAN) into an easily understood topology. All the management
applications for the supported storage devices can be launched from the topology view,
further simplifying data center management. ITSANM can be ordered separately and installed
on a separate server.

The master console provides the ability to diagnose problems remotely. It can be configured
to call a specified phone number or send an e-mail or electronic page if a configured event
occurs. The customer also has the option to configure and initiate a Virtual Private Network
(VPN) connection which will allow IBM support to remotely access the master console. The
VPN is customer-controlled. The customer sets up a connection to IBM and supplies IBM
Support with the required password to tunnel back to the customer’s master console. The
customer can monitor that access and disconnect the session at any time. In addition,
through the use of remote desktop access software, IBM support can view the desktop of the
master console remotely. This allows the IBM Support Team to access the customer’s master
console for diagnosis and problem determination purposes.

To set up the configuration for this facility, see the IBM TotalStorage Virtualization Family SAN
Volume Controller: Configuration Guide, SC26-7543, and IBM TotalStorage Master Console:
Installation and User’s Guide, GC30-4090 at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

This chapter discusses the hardware and software components and customization of the
master console. See the IBM TotalStorage Virtualization Family SAN Volume Controller:
Configuration Guide, SC26-7543, and IBM TotalStorage Master Console: Installation and
User’s Guide, GC30-4090, for more detailed information about the SVC Support page at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

14.1 Hardware
Figure 14-1 shows the xSeries® Model 305 master console installed in the rack.

Figure 14-1 Master console xSeries Model 305 in rack

Prior to SVC 2.1, the hardware platform used for the master console was the IBM xSeries
Model 305. Starting with SVC 2.1, the recommended hardware platform to be used for the
master console is the IBM xSeries Model 306. Figure 14-2 on page 552 shows the screen
and keyboard for the xSeries Model 305.

Figure 14-2 Master console xSeries Model 305 deployed

For a complete and current list of the hardware components that are included with the master
console, refer to the SVC Support page at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

If a master console is not already installed, the customer must obtain a rack-mounted,
high-performance, highly reliable Intel server (such as the IBM eServer™ xSeries® model
306 or equivalent) with the following options:
򐂰 One Pentium® 4 processor, minimum 2.6 GHz.
򐂰 Minimum of 1 GB of system memory.
򐂰 Two IDE hard disk drives, minimum 40 GB each. As a required step in the installation
process, these drives must be configured as mirrored.
򐂰 CD-ROM and diskette drives.
򐂰 Two 1 Gb ports for Ethernet connections (FC or copper).
򐂰 One 2 Gb Fibre Channel host bus adapter (HBA), such as QLogic 2342 (dual port) or
QLogic 2340 (single port) FC-2 cards or equivalent.
򐂰 Keyboard, such as the Space Saver NLS keyboard or equivalent.
򐂰 Monitor, such as Netbay 1U Flat Panel Monitor Console kit (without keyboard) or
equivalent.
򐂰 Mouse.

14.1.1 Example hardware configuration


򐂰 IBM xSeries 306 server (1U).
򐂰 Intel Pentium® 4 2.8 GHz processor.
򐂰 2GB memory DIMM (256 MB comes with base unit).

򐂰 Two 40 GB IDE hard disk drives (one comes with base unit).
򐂰 Two 10/100/1000 Copper Ethernet ports on planar.
򐂰 One 2-port 1/2 Gb/s Fibre Channel host bus adapter.
򐂰 NetBay 1U Flat Panel Monitor Console Kit with keyboard.

14.2 Management console software


The software that is installed on the master console (versions as of this writing) is:
򐂰 Microsoft Windows 2000 Server Edition with the latest Service Pack (4 or higher)
򐂰 Windows 2000 Security Hotfixes (818043 required)
򐂰 Microsoft Internet Explorer (version 6 with Service Pack 1)
򐂰 IBM Tivoli SAN Manager Bonus Pack Manager and Agent (versions 1.3.1)
򐂰 IBM DB2 (version 8.2) with Fixpack (version 4a)
򐂰 IBM DS4000 Storage Manager Client and Utilities (version 8.42.G5.00 or latest supported
version as announced)
򐂰 Host Bus Adapter Driver (QLogic 2342)(latest supported version as announced)
򐂰 PuTTY utility package (version 0.56)
򐂰 IBM Director Server (version 4.2)
򐂰 SAN Volume Controller Console package (version 2.1)
򐂰 Adobe Acrobat Reader (version 6.0)
򐂰 IBM VPN Connection Manager (VPN version 1.2)
򐂰 Java™ 2 Runtime Environment Standard Edition (version 1.4.2)
򐂰 IBM MCInstall (version 1.1)

For a complete and current list of the supported software levels for the master console, refer
to the SVC Support page at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

14.3 IBM Tivoli SAN Manager


Prior to SVC 2.1, the master console shipped with the IBM Tivoli SAN Manager Bonus Pack,
with agent and manager installed. It is used to help with SAN problem determination.
IBM Tivoli SAN Manager monitors and manages switches and hubs, storage, and servers in a
SAN. If a SAN problem occurs, customers and service representatives can use the SAN
manager to understand the SAN configuration.

The IBM Tivoli SAN Manager Bonus Pack has a 64-switch-port limit for the SANs it manages.
If the SAN to be managed has fewer than 64 ports, the Bonus Pack suffices. To manage a
SAN with more than 64 ports, a license can be purchased to extend the number of ports.
Consider running IBM Tivoli SAN Manager on a more powerful server. For IBM Tivoli SAN
Manager supported levels, device matrixes and updates, see the following Web site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager
.html

For other information about IBM Tivoli SAN Manager, see IBM Tivoli Storage Area Network
Manager: A Practical Introduction, SG24-6848.

See also the IBM TotalStorage Virtualization Family SAN Volume Controller: Configuration
Guide, SC26-7543, for more detailed information at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

Two methods of communication are used by IBM Tivoli SAN Manager: In-band
communication over the Fibre Channel fabric and out-of-band communication over the
Ethernet. IBM Tivoli SAN Manager discovers SAN information by performing the following
operations:
򐂰 Communicates with IBM Tivoli SAN Manager agents. The agents run on monitored hosts
(in-band discovery).
򐂰 Sends Management Information Base (MIB) queries directly to switches and other devices
(out-of-band discovery).

Figure 14-3 shows the discovery paths.

Figure 14-3 Discovery paths

In-band events and Simple Network Management Protocol (SNMP) traps provide the same
kind of information: they signal that a change has occurred in the SAN and indicate that a
discovery operation should be performed. The in-band events and SNMP traps let IBM Tivoli
SAN Manager know when something has happened in the SAN. Then a discovery is done to
identify the changes. The IBM Tivoli SAN Manager agents that are running on the monitored
hosts detect in-band events.

The agent sends commands through its host bus adapters (HBA) and the Fibre Channel
network to gather information about the switches. The switch returns information through the
Fibre Channel network and HBA to the agent. The agent queries the devices using RNID and

SCSI protocols. The agent then returns the information over the IP network to the IBM Tivoli
SAN Manager manager.

If the name of the master console is changed, the following IBM Tivoli SAN Manager
configuration files need to be changed (see the IBM TotalStorage Virtualization Family SAN
Volume Controller: Configuration Guide, SC26-7543):
򐂰 c:\tivoli\itsanm\manager\bin\w32-ix86\setenv.bat
򐂰 c:\tivoli\itsanm\manager\bin\w32-ix86\setDeployEnv.bat
򐂰 c:\tivoli\itsanm\manager\conf\tsnmdbparms.properties
򐂰 c:\tivoli\itsanm\manager\conf\user.properties
򐂰 c:\tivoli\itsanm\manager\apps\was\java\jre\lib\orb.properties
򐂰 c:\tivoli\itsanm\manager\apps\was\config\cells\DefaultNode\nodes\DefaultNode\serverindex
.xml

If a dynamic name server is not being used, the host must be defined for name resolution in
the file c:\WINNT\system32\drivers\etc\HOSTS as shown in Figure 14-4.

Figure 14-4 Sample HOSTS file
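
For reference, HOSTS entries take the standard form of an IP address followed by the host
name, one entry per line. The addresses and names below are hypothetical placeholders,
not values from this configuration:

# IP address     host name
9.43.147.38      masterconsole
9.43.147.40      itsanmagent1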

Following are some suggested basic ITSANM configuration and operation steps:
1. To start IBM Tivoli SAN Manager, click the Tivoli NetView® icon on the master console
desktop area. This opens the default (Root) panel, as shown in Figure 14-5 on page 556.

Figure 14-5 IBM Tivoli SAN Manager start (root) panel

2. Configure each FC switch to allow SNMP command access. Refer to the switch vendor’s
documentation for the procedure to configure this access. Then, to configure ITSANM,
select SAN -> Configure Agents to add the switch IP addresses and configure the
servers on which the IBM Tivoli SAN Manager agents have been previously installed (see
Figure 14-6 on page 557).

Figure 14-6 Configure agents

3. The top half of the Agent Configuration window in Figure 14-7 on page 558
accommodates the agents previously installed on the customer’s host systems. In this
example, the agent on the master console is installed by default and, in addition, on the
Windows host SRVE. When the IBM Tivoli SAN Manager agent is installed on a client host,
it asks for the IBM Tivoli SAN Manager server’s host name and address. In Figure 14-7 on
page 558 there are four switches that have been configured in IBM Tivoli SAN Manager.
On the Agent Configuration window, select the Advanced button. Enter the ID and
password of a switch to be managed and click OK. Repeat for each switch to be
managed. This allows the switch management interface to be launched.
Click OK to close the Agent Configuration panel.

Figure 14-7 SNMP agent configuration: Switch user ID and password

4. From the IBM Tivoli NetView [Root] window (Figure 14-8 on page 559), select SAN ->
Configure Manager.

Figure 14-8 Selecting Configure Manager

5. On the SAN Configuration window (Figure 14-9 on page 560), configure the polling
interval for IBM Tivoli SAN Manager to automatically gather information about the SAN
topology and other information.
To clear the history file of the previously discovered topology, click Clear History and then
rediscover the topology by clicking the Poll Now button.
Click OK to close this window.

Figure 14-9 Polling interval

6. On the Root panel (Figure 14-10), right-click the Storage Area Network icon, and select
Explore.

Figure 14-10 Exploring storage area network

Figure 14-11 on page 561 shows the results.

Figure 14-11 Exploring SAN

7. The Explorer panel (Figure 14-12 on page 562) opens to display all the devices, topology,
and zones in a tree format. Click the expansion icon to the left of a fabric icon to show its
components.

Figure 14-12 Fabric view

Click a switch icon to display its switch view. Figure 14-13 on page 563 shows the Switch
view.

Figure 14-13 Switch view

To view details for one of the hosts, click its icon and select SAN -> SAN Properties from the
menu, as shown in Figure 14-14 on page 564.

Figure 14-14 SAN properties: Host

Figure 14-15 on page 565 shows the details for the selected host.

Figure 14-15 SAN properties for a host

Similarly, select one of the switches in the Explore panel to view its SAN properties as shown
in Figure 14-16 on page 566.

Figure 14-16 Switch1 SAN properties

Select Connections in the left panel of the Properties display to view switch ports and
connected devices as shown in Figure 14-17 on page 567.

Figure 14-17 Switch1 connection properties

To view a fabric zone, select the zone view for the desired fabric in the explore panel as
shown in Figure 14-18 on page 568.

Figure 14-18 Exploring the zone view

The connections to SCV2_switch2 are shown in Figure 14-19 on page 569. Notice that some
of the items are displayed in the topology diagram in the left panel as a question mark (?).
Update each device’s object properties so that the display shows a meaningful description.
After all the appropriate information is entered, all of the links and nodes are kept in a history
file. The history file is used to show links or nodes with problems, or new Fibre Channel links
and devices.

Figure 14-19 Switch2 unknown items

To update a device’s object properties, right-click the icon and select Object Properties as
shown in Figure 14-20 on page 570.

Figure 14-20 Selecting Object Properties

The object icon and properties can be customized as shown in Figure 14-21 on page 571. In
this example, the device and device type have been determined to be a storage controller by
using its worldwide node name (WWNN). Click the Type tab and select the device and
harddisk options.

Figure 14-21 Properties type

Click the General tab to enter other information that identifies the object, as shown in
Figure 14-22. In this example, ESS1_B1A was entered for the name of the disk subsystem
controller port.

Figure 14-22 General properties

The name of the disk subsystem controller port is displayed on the explorer window as shown
in Figure 14-23.

Figure 14-23 Object renamed

To simulate a problem, one of the fiber-optic cables to the Redhat1 server was removed. This
creates an event, and IBM Tivoli SAN Manager acquires the change as shown in Figure 14-24
on page 573. The icon has a red background in the right panel, meaning it is offline. The
switch to which it is connected shows a yellow caution in the left panel, meaning an item is
suspect. The three indication colors are:
򐂰 Green: Object is online.
򐂰 Yellow: Something is suspect.
򐂰 Red: Object is offline.

Figure 14-24 Fibre Channel error on host

If the IBM Tivoli Storage Resource Manager (ITSRM) is also installed in the customer’s SAN
environment, its general information panel (Figure 14-25) displays both IBM Tivoli Storage
Resource Manager and IBM Tivoli SAN Manager running processes and levels of the Agent
Service Manager. To obtain this information, browse to:
http://<IP address of master console>:9570/ITSRM/ServiceManager

Figure 14-25 IBM Tivoli Storage Resource Manager agent service manager

To display the levels of the IBM Tivoli Storage Resource Manager and IBM Tivoli SAN
Manager server software installed, as shown in Figure 14-26 on page 575, browse to the
following address:
http://<IP address of master console>:9550/ITSRM/ServiceManager

Figure 14-26 IBM Tivoli SAN Manager server general information

14.4 Installation planning information for the master console


The following steps should be taken when planning the master console installation:
򐂰 Verify that the hardware and software prerequisites have been met.
򐂰 Determine the cabling required.
򐂰 Determine the network IP address. A static IP address is required to use IBM Tivoli SAN
Manager.
򐂰 Determine the master console host name.
򐂰 Determine the location in the rack where the master console is to be installed.
򐂰 Provide two FC ports connected to the redundant SAN fabrics.
򐂰 Determine how the ports on the master console will be configured.
򐂰 Determine the switch zoning that will be used for the master console.

For detailed installation guidance, see the IBM TotalStorage Virtualization Family SAN
Volume Controller: Configuration Guide, SC26-7543 and IBM TotalStorage Master Console:
Installation and User’s Guide, GC30-4090 at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

14.5 Installing the master console software

The IBM TotalStorage Master Console: Installation and User’s Guide, GC30-4090 covers the
following topics in detail:
򐂰 Setting up the master console
– Connecting the fibre channel cables
– Entering IP address for Ethernet port 1
– Entering IP address for Ethernet port 2
– Setting up the master console host name
– Setting up the browser
– Installing the SNMP service
򐂰 Using the installation wizard
– Logging in and reviewing user rights
– Starting the installation wizard
– Installing the Adobe Acrobat Reader
– Choosing the Master Console destination directory
– Selecting the optional features
– Viewing the products to be installed
– Installing PuTTY
– Generating an SSH key pair using PuTTY
– Installing DB2
– Installing SAN Volume Controller console
– Installing the DS4000 series Administration Tool
– Installing IBM Tivoli SAN Manager
– Installing IBM Tivoli SAN Manager Agent
– Installing IBM Director
– Pre-configuring IBM Director
– Installing documentation and utilities files
– Mirroring the boot drive

14.6 Configuring the master console

The IBM TotalStorage Master Console: Installation and User’s Guide, GC30-4090 covers the
following topics in detail:
򐂰 Configuring a default PuTTY session
򐂰 Storing keys in the SAN Volume Controller console software
򐂰 Adding SSH keys to the SAN File System cluster
򐂰 Setting up service alert for SAN Volume Controller
– Configuring service alert for SAN Volume Controller
– Configuring service alert test for SAN Volume Controller
– Setting up e-mail notification
򐂰 Setting up service alert for SAN File System
– Compiling MIBs for the SAN File System
– Configuring service alert for SAN File System

14.7 Managing the master console
The IBM TotalStorage Master Console: Installation and User’s Guide, GC30-4090 covers the
following topics in detail:
򐂰 Starting IBM Tivoli SAN Manager
򐂰 Accessing the SAN File System
– Accessing the SAN File System from the command line
– Accessing the SAN File System from the desktop
– Transferring files using pscp
– Transferring files using psftp
򐂰 Initiating remote support
– Invoking a remote support connection
– Adding an additional subnet route
򐂰 Replacing the SSH key pair for SAN Volume Controller
򐂰 Replacing the client SSH private key known to the SAN Volume Controller software
򐂰 Setting passwords
– Changing the Host Authorization for IBM Tivoli SAN Manager password
– Setting the DB2 user IDs and passwords
– Setting SAN Volume Controller user ID and password

There are also chapters that discuss uninstalling master console software, upgrading master
console software, and troubleshooting the master console.

14.8 Secure Shell overview


Secure Shell (SSH) is used to secure the data flow between the SVC cluster (SSH server)
and a client (either a command-line client (CLI) or the CIMOM). The connection is secured by
means of a public/private key pair:
򐂰 A public key and a private key are generated together as a pair.
򐂰 A public key is uploaded to the SSH server.
򐂰 A private key identifies the client and is checked against the public key during the
connection. The private key must be protected.
򐂰 The SSH server must also identify itself with a specific host key.
򐂰 If the client does not have that host key yet, it is added to a list of known hosts.
򐂰 If the client already has a key related to that server's IP address:
– The client can overwrite the existing key, or
– The client can refuse the connection to allow checking of that key.

These mechanisms (public/private key pair and host key) are used so that each party is sure
about the identity of the other one, as shown in Figure 14-27 on page 578.

Figure 14-27 SSH client/server

The communication interfaces are shown in Figure 14-28.

Figure 14-28 Communication interfaces
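
With a key pair in place, the native SVC CLI can be driven from a command prompt on the
master console using plink, the PuTTY command-line client. The following is a minimal
sketch; the private key file name and the cluster IP address are placeholders for your own
values:

plink -i icat.ppk admin@9.43.225.208 svcinfo lscluster

The -i option points to the private key whose public half has been uploaded to the cluster,
and admin is the SVC administrative user.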

14.8.1 Uploading SSH public key(s) sample scenarios


For each SVC cluster to be managed by the SVC master console, the SSH public key must
be uploaded from the master console. A public key must also be uploaded from every other
system that requires access to each new SVC cluster.

Perform this task using a Web browser. This same information is included in the IBM
TotalStorage Virtualization Family SAN Volume Controller: Configuration Guide, SC26-7543.

Important: If the SSH public key from a specified server is not stored on a particular
SVC cluster, the SVC access software cannot connect to that particular cluster from that
specified server.

Here is a summary of the steps:


1. Start the browser to access the SVC console.
2. Log onto the SVC console using the superuser account and password.
3. Identify the SVC cluster to the SVC master console.
4. Store the SSH public key on the SVC cluster.

5. Launch the secondary browser window to manage the selected cluster.

The detailed procedure follows:


1. Start a browser and log onto the server on which the SVC console is installed by pointing
to the uniform resource locator (URL):
http://<masterconsoleipaddress>:9080/ica
2. Log onto the SAN Volume Controller Console using the superuser account and password.
3. Identify the SVC clusters to the SVC master console. The steps required depend on the
current status of the cluster to be configured:
– An SVC cluster which has not yet been initialized: If an SVC cluster has not yet been
created using the front panel of the SVC cluster, that phase of the cluster creation will
need to be performed first. See 5.3.1, “Creating the cluster (first time) using the service
panel” on page 82. A special password is displayed on the SVC front (service) panel
for 60 seconds to be used in later steps to initialize the SVC master console.
After completing the first phase to create the SVC cluster using the front panel of a
SVC node, the next step required is to complete the creation of the cluster by using the
SVC console native Web interface, as described in 5.4, “Completing the initial cluster
setup using the Web interface” on page 84.
– Previously initialized SVC cluster: If the SVC cluster has completed the initialization
(creation) process but is not yet registered with the SVC console, log on with the
superuser ID and password, select Add Cluster from the list in the SVC Welcome page
to add the cluster. Enter the IP address of the cluster to be added but do not select the
Create (Initialize) Cluster check box which is above the OK button. When the OK
button is clicked, the system displays the page to provide the SSH public key for
upload to the cluster. Step 4 continues with the SSH key input description.
As part of this process the program prompts the user to enter the network password.
Type the admin user name and the password which is configured for the cluster.
4. Store the SSH public key on the SAN Volume Controller cluster. Each key is associated
with an ID string that is user-defined and that can consist of up to 30 characters. Up to 100
keys can be stored on a cluster. Keys can be added to provide either administrator access
or service access.
5. Launch the secondary browser window to manage the new cluster. Select the specific
cluster to be managed and then launch the browser window specifically for that cluster.
a. Under the My Work section of the browser window, click Clusters. A new view is
displayed in the work area (main frame).
b. In the Select column, select the check box to the left of the cluster to be managed.
Select Launch the SAN Volume Controller Application from the drop-down menu in the
work area and click Go. A secondary browser window opens to the SVC application to
work with the specific SVC cluster which was selected. Notice the ClusterName
parameter in the browser location URL, which identifies the IP address of the cluster
currently being managed, as shown here:
http://9.43.147.38:9080/ica/Console?Console.loginToken=79334064:f46d035f31:-7ff1
&Console.ClusterName=9.43.225.208

There is an issue with Windows registration of an SSH key when a cluster is deleted and then
recreated with the same IP address. When a cluster definition is added to the ICAT for
management, SVC will send a host key to the master console. This host key is saved in the
Windows registry. If a cluster is deleted and another cluster is created with the same IP
address, SVC will again send a host key to the master console. Since a key for this IP
address is already saved, the Windows registry is not updated with the new key and the

cluster cannot be managed from the ICAT. This is for security reasons since the console
erroneously detects that another device is attempting to access it. The workaround is to
delete the host key from the registry after deleting the cluster and before the new cluster is
recreated. There is a function (Reset SSH Fingerprint) provided in the drop-down list to
correct this situation. This is not an issue with the command line SSH since the user is
prompted to overwrite the host key.

To establish an SSH connection to the cluster, the master console stores the public key sent
by SVC under the following registry path: \HKEY_USERS\.DEFAULT\Software\SimonTatham\PuTTY\
SshHostKeys. The name of the registry value is rsa2@22:cluster_IP_address.

The reset function fixes the registry to use the correct public SSH key sent from SVC.
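
As a hedged illustration of the manual workaround, the stale host key can be removed with
the Windows reg utility; the cluster address 9.43.225.208 comes from the earlier example,
and the exact reg.exe syntax available depends on the Windows version on the master console:

reg delete "HKEY_USERS\.DEFAULT\Software\SimonTatham\PuTTY\SshHostKeys" /v "rsa2@22:9.43.225.208"

The Reset SSH Fingerprint function performs the equivalent correction without editing the
registry by hand.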

14.9 Call Home (service alert)


SVC makes all error reports available to network managers using industry-standard error
reporting techniques. Call Home is supported by providing the error data needed by an
external call home agent. The necessary machine information to generate a call home record
is sent to network management software via an SNMP trap.

SVC uses e-mail to RETAIN® for Call Home. The agent that receives the SNMP trap and
generates the e-mail in the correct format is IBM Director. The e-mail to RETAIN Call
Home system works by sending an e-mail to an IBM catcher machine, which reformats the
data based on the machine type and model information and then opens the RETAIN record.
The e-mail address for Call Home is callhome1@de.ibm.com for the United States and
Canada, and callhome0@de.ibm.com for all other countries (regions).

To configure IBM Director settings for the SVC Call Home feature, complete these steps:
1. To launch the IBM Director, click the IBM Director icon on the SVC console desktop area.
2. Log in using the master console Windows user ID and password.
3. After IBM Director becomes active, select Options -> Discovery Preferences as shown
in Figure 14-29 on page 581.



Figure 14-29 IBM Director Discovery Preferences

4. The Discovery Preferences window (Figure 14-30 on page 582) opens. Select the SNMP
Discovery tab. Change the entry to the IP address of the master console and the switches
using a subnet mask of all zeros. Then click Add or Replace.



Figure 14-30 Network addresses for SNMP discovery

5. With the Director console application launched, select Task -> Event Action Plan
Builder, as shown in Figure 14-31.

Figure 14-31 Director Action Plan Builder

6. On the Event Action Plan Builder window (Figure 14-32 on page 583), in the Actions
panel, select Send an Internet (SMTP) E-mail. Then right-click 2145 and select Update.



Figure 14-32 Updating the 2145 event

7. The Customize action:2145 window (Figure 14-33 on page 584) opens. Complete the
following items:
– Internet E-mail address to which the SVC alerts are sent: type the IBM RETAIN e-mail
address:
• CALLHOME1@de.ibm.com for the U.S.A. and Canada clients
• CALLHOME0@de.ibm.com for all other countries (regions)
– Reply to: Type the e-mail address to which any replies should be directed.
– SMTP E-mail server: type the address of the e-mail (SMTP) server.
– SMTP Port: change this, if required, to the SMTP server port number.
– Subject of E-mail Message: type 2145 Error Notification.
– Body of the E-mail Message: type the following information:
• Contact name
• Contact Phone number
• Off-shift phone number
• Machine location
• Record Type = 1



Figure 14-33 Customize window

Important: Do not change the line with text Record Type = 1.

14.10 Master console summary


This chapter provides an overview of the SVC master console hardware and software. For
customers using IBM Tivoli SAN Manager (ITSANM), many specific examples of its
configuration and typical usage are provided. Topics covered in detail by the master console
Installation and User’s Guide are then listed. These topics include installation planning,
installation, configuration, and management functions. An overview of the SSH application is
provided with two sample scenarios for uploading the SSH public key to the SVC from the
master console. Finally, Call Home (or service alert) configuration steps are described in
detail.


Appendix A. Copy services and open systems


This appendix describes the basic tasks that you need to perform on the individual host
systems when using IBM TotalStorage SAN Volume Controller (SVC) Copy Services. It
explains how to bring FlashCopy target vpaths online to the same host as well as to a second
host. This appendix covers AIX and Windows.

AIX specifics
The following section describes what is necessary to use Copy Services on AIX systems.

AIX and FlashCopy


The FlashCopy functionality in SVC Copy Services copies the entire contents of a source
virtual disk (VDisk) to a target VDisk. If the source VDisk is defined to the AIX Logical Volume
Manager (LVM), all of its data structures and identifiers are copied to the target VDisk as well.
This includes the Volume Group Descriptor Area (VGDA), which contains the physical volume
identifier (PVID) and volume group identifier (VGID).

For AIX LVM, it is currently not possible to activate a volume group with a disk that contains a
VGID in the VGDA and a PVID that is already used in an active volume group on the same
server, even if the PVID is cleared and reassigned with the following two commands:
chdev -l <vpath#> -a pv=clear
chdev -l <vpath#> -a pv=yes

Therefore, it is necessary to redefine the volume group information on the FlashCopy target
using the recreatevg command. Refer to “AIX recreatevg command” on page 588. This alters
the VGID on the VGDA of the FlashCopy target so that there are no conflicts with existing
VGIDs on active volume groups. If you do not redefine the volume group information prior to
importing the volume group, then the importvg command fails.

Accessing FlashCopy target on another AIX host


Accessing a FlashCopy target on another AIX host poses some problems. Because the entire
VDisk contents are copied, all the data structures and identifiers used by the LVM are
also copied, but only those located on the specified VDisks. If you FlashCopy only one half
of a mirror, half of the VDisks recorded in the VGDA would be missing. Therefore, it is
necessary either to turn off quorum for the volume group that you are FlashCopying, or to
force the vary on of the volume group on the target VDisks.

Important: When using FlashCopy, make sure to FlashCopy all volumes in the volume
group, and at the same time. You can do this by using FlashCopy consistency groups, as
shown in Example A-1.

The following procedure makes the data of the FlashCopy target VDisk available to another
AIX host that has no prior definitions in its configuration database:
1. The target volume (VDisk) is new to AIX. Therefore, run the Configuration Manager on
all Fibre Channel adapters:
cfgmgr -l <host_adapter>
2. When using the Subsystem Device Driver (SDD), run the cfallvpath command to create
the vpaths for the newly discovered volumes:
cfallvpath

Note: If you execute cfgmgr without any parameters, it discovers the new disks and makes
the vpaths automatically. However, the cfgmgr command can take a while, so the
cfgmgr -l command can be the fastest way to discover the new disks.

3. Check which new vpath is on the host; this is your FlashCopy target:
lspv | grep vpath



4. Import the volume group:
importvg -y <volume_group_name> <vpath#>
5. The importvg command should vary on the volume group; if it does not, use the varyonvg
command:
varyonvg <volume_group_name>
6. Verify consistency of all file systems on the FlashCopy target:
fsck -y <filesystem_name>
7. Mount all the target file systems:
mount <filesystem_name>

The data is now available. You can now create a backup of the data on the FlashCopy
volume to a tape device, or use the data for other purposes, such as testing software updates.
This procedure may be run after the relationship between FlashCopy source and target is
established, even if data is still being copied from the source to the target in the background.

The disks containing the target VDisk may have been previously defined to the AIX system,
for example, if you periodically do backups from the same VDisk. In this case, the required
steps depend on whether any LVM modifications have been made to the volume group. If no
volume group, file system, or logical volume changes were made, then perform the following
actions on the target system:
1. Unmount all file systems in the target volume group:
umount <tgt_filesystem>
2. Vary off the target volume group:
varyoffvg <tgt_volume_group_name>
3. Perform the FlashCopy to the target volume, using the same methods as before.
4. Vary on the target volume group:
varyonvg <tgt_volume_group_name>
5. Perform a file system consistency check on the target:
fsck -y <tgt_filesystem>
6. Mount all target file systems in the volume group:
mount <tgt_filesystem>

If volume group modifications were made, such as changing file system size or the
modification of logical volumes (LV), then we recommend that you do not perform the
previous set of tasks. Instead, we recommend that you perform the following tasks on the
target system:
1. Unmount all file systems in the target volume group:
umount <tgt_filesystem>
2. Vary off the volume group:
varyoffvg <tgt_volume_group_name>
3. Export the volume group:
exportvg <tgt_volume_group_name>
4. Delete the target volumes and vpath:
rmdev -dl <hdisk#>
rmdev -dl <vpath#>
5. Perform the FlashCopy to the target volumes.



6. Perform tasks as though the volumes were new to the system as previously described.

Accessing FlashCopy source and target on the same AIX host


This section describes a method to access the FlashCopy target volume on a single AIX host
while the source is active on the same server. The procedure is intended to be used as a
guide and may not cover all scenarios.

AIX recreatevg command


Point-in-time (PiT) copying of a source volume's content in all cases causes all of the data
structures and identifiers used by the AIX LVM to be duplicated to the target volume. The
duplicate definitions, in turn, cause conflicts within LVM. Until recently, none of the existing
LVM commands had the capability to access the logical volumes or file systems on the
target disk. This problem is now solved using the AIX command recreatevg.

The recreatevg command is packaged as a PTF for AIX 4.3.3 in APAR IY10456 and higher.
It is officially available in:
• AIX 4.3.3 Recommended Maintenance Level 05 (RML05) or higher
• AIX 5L

The recreatevg command overcomes the problem of duplicated LVM data structures and
identifiers caused by a disk copying process such as FlashCopy. It is used to recreate an AIX
volume group (VG) on a set of disks that are copied from a set of disks belonging to a specific
VG. The command allocates new PVIDs for the member disks and a new VGID to the volume
group. The command also provides options to rename the logical volumes with a prefix you
specify, and options to rename “labels” to specify different mount points for file systems.

Here is the AIX man page synopsis.


recreatevg [-y VGname] [-p] [-f] [-Y lv_prefix | -l LvNameFile] [-L label_prefix] [-n] \
PVname...

You can use this command to recreate a VG on a set of disks that are mirrored from a set of
disks belonging to a specific VG. This command allocates new PVIDs for the member disks,
since the PVIDs are also duplicated by the disk mirroring. Similarly, other LVM logical
members that are duplicated are also changed to new names with the specified prefixes.

Note the following flags:


• -y VGname specifies the volume group name rather than having the name generated
automatically. Volume group names must be unique system-wide and can range from one
to 15 characters. The name cannot begin with a prefix already defined in the PdDv class in
the device configuration database for other devices. The volume group name that is
created is sent to standard output.
• -p disables the automatic generation of new PVIDs. If the -p flag is used, you must
ensure that there are no duplicated PVIDs on the system. All the disks that were hardware
mirrored must have had their PVIDs changed to a unique value.
• -Y lv_prefix causes the logical volumes on the VG being recreated to be renamed with
this prefix. The total length of the prefix and the logical volume name must be less than or
equal to 15 characters. If the length exceeds 15 characters, the logical volume is renamed
with the default name. The name cannot begin with a prefix already defined in the PdDv
class in the device configuration database for other devices, nor can it be a name already
used by another device.
• -L label_prefix causes the labels of logical volumes on the VG being recreated to be
changed with this prefix. You must modify the /etc/filesystems stanza manually if a
simple modification of the mount point is not enough to define the stanza uniquely.
• -l LvNameFile: entries in the LvNameFile must be in the format LV1:NEWLV1. After
recreatevg, LV1 is renamed to NEWLV1. All the logical volumes that are not included in
the LvNameFile are recreated with the default system-generated name.
• -f allows a volume group to be recreated even when not all of its disks are available.
• -n: after recreatevg, the volume group is imported but varied off. The default is imported
and varied on.

Note the following points:


• To use this command, you must have root user authority.
• All the member physical volumes of the volume group must be specified on the command
line. The command fails if the input list does not match the list compiled from the VGDA.
• If you perform a Copy Services function on one half of a RAID-1 pair to reduce the
capacity required for FlashCopy targets or Metro Mirror secondary volumes, use the
-f option to force the creation of the volume group (see the sketch after this list).
Otherwise, the VGDA has PVIDs of volumes that made up the other half of the mirror at
the source or primary site.
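
A minimal hedged sketch of this -f case, assuming vpath4 and vpath5 (hypothetical names)
hold the copied half of the mirrored volume group:

recreatevg -f -y mirror_copy_vg vpath4 vpath5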

An example of accessing a FlashCopy target on same host using recreatevg


If we want to mount the FlashCopy target volume on the same AIX host where the source
volume is located, we have to use the recreatevg command. For example, we have a volume
group that contains two volumes (vpaths). We want to FlashCopy the volumes for the
purpose of creating a backup. To achieve this, we must have two target VDisks available, of
a size equal to or greater than the sources, in the same SVC cluster.

In this example, the source volume group is fc_source_vg containing vpath0 and vpath1, and
the target volume group is fc_target_vg containing vpath2 and vpath3.

Perform the following tasks to create the FlashCopy and make the target volumes available to
AIX:
1. Stop all applications that access the FlashCopy source volumes.
2. Unmount all related file systems for the short period of FlashCopy establishment, or
otherwise make sure that no I/O is going to the disk and that there is no unwritten data in
the AIX cache.
3. Establish the FlashCopy pairs with the copy rate set to zero if you do not want all the
data to be physically copied. However, we recommend that you do not set the copy rate to
zero, because then you might not be able to restore data from disk.
4. Mount all related file systems, if previously unmounted.
5. Restart applications that access the FlashCopy source volumes.
6. The target vpath2 and vpath3 now have the same volume group data structures as the
source vpath0 and vpath1. Clear the PVIDs from the target vpaths to allow a new volume
group to be made:
chdev -l vpath2 -a pv=clear
chdev -l vpath3 -a pv=clear
The output from lspv shows the result. See Figure A-1 on page 590.



# lspv
vpath0 000c309d6de458f3 fc_source_vg active
vpath1 000c309d6de45f7d fc_source_vg active
vpath2 none None
vpath3 none None

Figure A-1 lspv after pv=clear

7. Create the target volume group and prefix all file system path names with /backup and
prefix all AIX logical volumes with bkup:
recreatevg -y fc_target_vg -L /backup -Y bkup vpath2 vpath3
Specify the vpath names of all disk volumes participating in the volume group. The
output from lspv illustrates the new volume group definition. See Figure A-2.

# lspv
vpath0 000c309d6de458f3 fc_source_vg active
vpath1 000c309d6de45f7d fc_source_vg active
vpath2 000c309d6e021590 fc_target_vg active
vpath3 000c309d6e021bf6 fc_target_vg active

Figure A-2 Recreated FlashCopy target volumes

An extract from /etc/filesystems shows how recreatevg generates a new file system
stanza. The file system named /u01 in the source volume group is renamed to
/backup/u01 in the target volume group. Also, the directory /backup/u01 is created. Notice
also that the logical volume and JFS log logical volume are renamed. The remainder of
the stanza is the same as the stanza for /u01. See Figure A-3.

/backup/u01:
dev = /dev/bkupelv001
vfs = jfs
log = /dev/bkupelvlog001
mount = true
check = false
options = rw
account = false

Figure A-3 Target file system stanza

8. Mount the new file systems that belong to the target to make them accessible.

AIX and Metro Mirror


When Metro Mirror primary and secondary volumes are in a copy pair relationship, it is
not possible to read the secondary volume. Therefore, if you need the secondary volumes
mounted on the target server, it is necessary to terminate the copy pair relationship. When
using Metro Mirror, you have an active set of volumes, the primary volumes, to which all
updates and changes are made; these changes are then copied to the secondary volumes by
the SVC, and writes to the secondary volumes are not permitted from any host. When you
mount a volume, AIX automatically writes to the volume, so it is not possible to have both
the primary and the secondary volumes mounted at the same time on any hosts.
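
As a hedged sketch, on the SVC the relationship can be stopped with write access enabled on
the secondary; the relationship name mm_rel is hypothetical, and the availability of the
-access flag should be verified for your code level:

IBM_2145:admin>svctask stoprcrelationship -access mm_rel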

The first time, after the volumes are in the simplex state, the secondary volumes can be
configured (cfgmgr) into the target system's customized device class (CuDv). To get the
volume group information into the target server, the copy pair relationship has to be stopped
and the cfgmgr command executed on the target system, or the procedure
described in “Making updates to the LVM information” on page 591 has to be performed. This
brings in the secondary volumes as hdisks and vpaths, which contain the same PVIDs as the
primary volumes. Because these volumes are new to the system, there is no conflict with
existing PVIDs. The volume group on the secondary volumes, containing the LV and file
system information, can now be imported into the Object Data Manager (ODM) and the
/etc/filesystems file using the importvg command.

If the Metro Mirror secondary volumes were previously defined on the target AIX system as
hdisks or vpaths, but the original volume group information was destroyed on the volumes,
you must remove the old volume group and disk definitions (using exportvg and rmdev) and
redefine (cfgmgr) them before running the importvg command to attain the new volume group
definitions. If this is not done first, the importvg command imports the volume group
improperly. The volume group data structures (PVIDs, and so on) in ODM differ from the data
structures in the VGDAs and disk volume super blocks. The file systems are not accessible.

When you execute the lsdev -Cc disk command, you observe that the state of the original
Metro Mirror secondary volumes becomes Defined during reboot.

It is important to execute both the lspv and lsdev commands back-to-back, so that you can
be certain which disks are the phantoms. From the lspv output, the phantom disks have no
PVIDs and are not assigned to a volume group. From the lsdev output, the phantom is in an
available state. The original disks have PVIDs, are assigned to a volume group, and are
marked in a defined state.

To remove the phantom hdisks from the configuration database, run the rmdev -dl command
on each phantom disk device.

Set the original hdisks and vpaths to an available state with the mkdev command.

You can then reactivate the volume group using the varyonvg command and mount its file
systems. The copy pair relationship must be stopped while you do this. A consolidated
sketch of the cleanup follows.
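
This is a minimal sketch of the cleanup sequence, assuming hdisk7 is a phantom and hdisk3
and vpath0 carry the volume group datavg (all names hypothetical):

rmdev -dl hdisk7     # remove the phantom disk definition
mkdev -l hdisk3      # return the original hdisk to the Available state
mkdev -l vpath0      # return the original vpath to the Available state
varyonvg datavg      # reactivate the volume group
mount /data          # mount its file systems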

Making updates to the LVM information


When performing Metro Mirror between primary and secondary volumes, the primary AIX
host may create, modify, or delete existing LVM information in a volume group. However,
because the secondary volume is not accessible while in a Metro Mirror relationship, the LVM
information on the secondary AIX host becomes out-of-date. Therefore, the secondary AIX
host must be updated every time you make changes to the LVM information. If you do not
want to have scheduled periods where write I/Os to the primary Metro Mirror volume are
quiesced and file systems unmounted, so that the copy pair relationship can be terminated
and the secondary AIX host can perform a learn on the volume group (importvg -L), you can
execute the varyonvg command on the primary AIX host for the volume group you have
changed; this removes the SCSI lock on the volumes in the SVC. The parameters needed for
the varyonvg command are -b -u. Thereafter, execute the importvg -L command on the
secondary AIX host. After the LVM changes are updated on the secondary AIX host, execute
the varyonvg command on the primary AIX host again to reactivate the SCSI lock on the
volumes on the SVC.

The importvg -L <volumegroup> command takes a volume group and learns about possible
changes performed to that volume group. Any new logical volumes created as a result of this
command emulate the ownership, group identification, and permissions of the /dev special
file for the volume group listed in the -y flag. The -L flag performs the functional equivalent of
the -F and -n flags during execution.



Restrictions:
• The volume group must not be in an active state on the system executing the -L flag.
• The volume group's disks must be unlocked on all systems that have the volume group
varied on and operational. Volume groups and their disks may be unlocked, but remain
active and in use, via the varyonvg -b -u command.
• The physical volume name provided must be in a good and known state; the disk named
may not be in the missing or removed state.
• If an active node has both added and deleted logical volumes on the volume group, the -L
flag may produce inconsistent results. The -L flag should be used after each addition or
deletion, rather than being deferred until after a sequence of changes.
• If a logical volume name clash is detected, the command fails. Unlike the basic
importvg actions, clashing logical volume names are not renamed.

An example of how to use the -L flag on a multi-tailed system:

• Primary AIX node A has the volume group datavg varied on.
• Secondary AIX node B is aware of datavg, but it is not varied on.
• Primary AIX node A: varyonvg -b -u datavg
• Secondary AIX node B: importvg -L datavg hdisk07
• Primary AIX node A: varyonvg datavg

Here datavg is the name of the volume group, and hdisk07 is the physical volume (PV) that
contains the volume group.

If you prefer to have scheduled periods where write I/Os to the primary Metro Mirror volume
are quiesced and file systems unmounted, then after the updates are read into the
secondary AIX host's ODM, you can establish the Metro Mirror copy pair again.
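
A hedged sketch of re-establishing the pair from the SVC CLI, again assuming a hypothetical
relationship named mm_rel with the master as the copy source:

IBM_2145:admin>svctask startrcrelationship -primary master mm_rel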

Windows NT and 2000/2003 specifics


This section discusses the tasks that are necessary when performing Copy Services
operations on volumes owned by Microsoft Windows NT and 2000/2003 operating systems.

Windows NT and Copy Services


This section discusses the actions that you need to perform on Metro Mirror and FlashCopy
volumes owned by Microsoft Windows NT operating systems.

Windows NT handles disks in a way that is unlike any other operating system covered
in this book. The need to reboot a server to scan for new disks and the need to run a
GUI-based Disk Administrator to manipulate the disks are the main factors that restrict the
routine use of Metro Mirror and FlashCopy and make automation virtually impossible. It is
possible to automate the actions of the GUI-based Disk Administrator using third-party
software to remotely reboot the server. It is also possible to remotely assign the drive letter
from the server that starts the Copy Services task. This was not tested during our project.

If you are going to create an automated script with Windows NT, you need to be careful about
data consistency. It may be that some part of the automation process may run a script on a
source server, and subsequent actions may be taken by a script on a target server. Therefore,
interprocess communication across servers may be required for timing. Otherwise, you may
get inconsistent data. Not all applications allow this.



You have two options on how to make the Metro Mirror or FlashCopy target available to the
server: with reboot or without reboot. We recommend that you reboot the server. It is safer
because then it is guaranteed that all the registry entries are created. However, using Metro
Mirror or FlashCopy without rebooting is faster.

Registering the Metro Mirror and FlashCopy volumes to Windows NT


If you are going to reboot the server, you do not have to make the target disks known to
Windows NT before you do the Metro Mirror or FlashCopy. However, we recommend that you
preassign and register them in the server. The “assign disk and run Metro Mirror or
FlashCopy” approach is useful for a non-routine Metro Mirror or FlashCopy, for example, for
testing or migration.

For routine purposes, we recommend that you have target disks already present in Disk
Administrator with partitions created and partition information saved. Click Start -> Programs
-> Administrative Tools -> Disk Administrator. Then follow these steps:
1. If the target disk was not previously seen by the system, Disk Administrator issues a
pop-up message saying “No signature on Disk X. Should I write a signature?”,
where X is the number assigned to the newly present disk.
Click OK to save the signature on the target disk.
2. The Disk Administrator opens. Click the disk that is to be used as the Metro Mirror or
FlashCopy target (it should be gray and marked as free space) and select Create.
3. Confirm the partition parameters and click OK. The partition appears as Unknown.
4. Click the newly created partition and select Commit Changes Now.
5. Right-click the partition and select Assign Drive letter.
6. Assign a drive letter and click OK.
7. Exit Disk Administrator.

After this procedure, the Metro Mirror or FlashCopy target is properly registered in the
Windows NT.

Bringing down the target server


Bring down the server that will use the target if you want to use the safer method. Also keep in
mind that if you assign the volume to the host just before you perform the Metro Mirror or
FlashCopy, you must use the volume serial number for the target.

Performing a Metro Mirror or FlashCopy


Stop all applications using the source volume. Now flush the data to the source volume. Click
Start ->Programs -> Administrative Tools -> Disk Administrator. Then follow these steps:
1. Right-click the disk that is to be used as the Metro Mirror or FlashCopy source. It should
have a drive letter assigned and be formatted. Then select Assign Drive letter.
2. From the pop-up window, select Do not assign a drive letter and click OK.
3. Now the data is flushed to the source. You can start the Metro Mirror or FlashCopy task
from the SVC Copy Services Web Interface or from any server CLI.
4. Observe the GUI, or enter the following commands to see whether the Metro Mirror or
FlashCopy task started successfully:
svcinfo lsfcmapprogress and svcinfo lsrcrelationshipprogress
5. Reassign the drive letter to the source volume. Right-click the disk that is a Metro Mirror or
FlashCopy source and select Assign Drive Letter.



6. Assign a drive letter and click OK.
7. Exit Disk Administrator.

You can resume using the source volume.

Bringing up the target server


Next, you may boot up the target server. Because you just assigned the target volumes to
the host, booting creates the disk entries in the Windows NT registry. To verify that the
registry entry is created, complete these tasks:
1. Click Start -> Settings -> Control Panel -> Hardware -> Device Manager.
2. In Control Panel, double-click Disk Drives.
3. Click the adapter that has the target volume attached.
4. A list of targets opens. Verify that the list includes the target ID and LUN of the volume you
just made available to the server. If you are using SDD, you see each disk entry several
times [(# of VDisks) x (# of nodes) x (4 ports/node) x (# of HBAs/host)], which is the
number of paths to the volume that you have.
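
For example, one VDisk presented by a two-node I/O group to a host with two HBAs would
appear 1 x 2 x 4 x 2 = 16 times, assuming the zoning exposes all four ports of each node to
both HBAs.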

You may also run the datapath query device command from the SDD command line to
check whether the Metro Mirror or FlashCopy targets are listed between the volumes. This
command also enables you to check volume serial numbers and gives you a more
understandable overview of the volumes and their paths.

Making the Metro Mirror or FlashCopy target available


Log in, start the Windows NT Disk Administrator, write a signature if necessary (do not write a
signature if data was already copied into this volume), and assign a drive letter. To begin, click
Start -> Programs -> Administrative Tools -> Disk Administrator. Then follow these
steps:
1. If the disk was not previously seen by this system, Disk Administrator issues the “No
signature on Disk X. Should I write a signature?” message, where X is the number
assigned to the newly present disk. Click OK to save the signature on the target disk.
2. The Disk Administrator opens. Click the disk that is a Metro Mirror or FlashCopy target.
You should see a formatted partition on it. Select Assign Drive Letter.
3. If you cannot assign a drive letter, the target may be corrupt. Try repeating the whole
process and consider the scenario that includes reboot.
4. Assign a drive letter and click OK. Exit Disk Administrator.
5. From a Windows NT command prompt, run the following command, where x is the letter
assigned to the Metro Mirror or FlashCopy target:
chkdsk x: /f /r
An option is to run the disk check from Properties of a disk in Windows NT Explorer.

After you complete this procedure, the Metro Mirror or FlashCopy target is available to
Windows NT and can be handled like a normal disk.

Copy Services with Windows Volume Sets


The following section explains how to perform Copy Services functions with Windows Volume
Sets.



Copy Services with Windows NT Volume Sets
Both Metro Mirror and FlashCopy are supported when using normal disks and Volume Sets.
Because these outboard copy features do not copy the Volume Set information in the
Windows Registry, certain limitations exist when using either Metro Mirror or FlashCopy with
Volume Sets, and a special procedure is required, as outlined below. After SP6, it is
possible to have the FlashCopy source and target volumes accessible by the same server.
Prior to SP6, the FlashCopy source and target volumes must be attached to different servers.
Metro Mirror primary and secondary volumes must be attached to different servers.

Procedure for using Metro Mirror and FlashCopy with Volume Sets
This special procedure is required to FlashCopy or Metro Mirror a Windows NT volume set.
The procedure can also be applied to other Windows NT fault tolerant disk configurations,
such as mirrored sets, striped sets, and striped sets with parity.

Consider the case where the target disks are in the same order as the source disks, and the
target disks are contiguous (that is all the disks are next to each other as viewed by the target
machine’s Disk Administrator). Then simply create an identical volume set on the target
machine and reboot prior to performing the FlashCopy. You do this before you perform
FlashCopy or Metro Mirror for the first time. Subsequent copies should work as expected,
provided that the file system is unmounted (the drive letter is unassigned) on the target prior
to performing a copy.

If the target disks do not appear contiguous to Windows NT or appear in a different order than
on the source machine, then a different procedure must be used. Microsoft’s FTEDIT,
available on the NT Resource Kit, is a Microsoft supported tool designed to write volume set
information into the registry. Using FTEDIT is much safer than editing the registry directly.

Important: Incorrect use of FTEDIT could result in loss of access to software RAID arrays.
We recommend that you use Disk Administrator to save your disk configuration before
using FTEDIT. In general, most errors made using FTEDIT are recoverable. For more
information about how to recover from FTEDIT errors, and on FTEDIT in general, see the
Microsoft Knowledge Base article for Q131658:
http://support.microsoft.com/default.aspx?scid=kb;en-us;131658

The following procedure explains how to use FlashCopy and Metro Mirror with FTEDIT.

Preparation
On the target machine, complete the following tasks:
1. Back up the disk data using Disk Administrator, and registry information using REGEDIT.
2. If the target disks were previously used, delete all of the target disks in Disk Administrator.
Do not simply unmount them, but delete all of the partitions on the target disks. Commit
the changes.
3. In the control panel, double-click Devices. Make sure that Ftdisk is started and set to start
on boot. Ftdisk is the driver used by Windows NT to identify and access fault tolerant
drives, such as volume sets. If there are any fault tolerant drives in use on the system,
Ftdisk is started and set to start on boot. If it is not started, one way to start it is to create a
fault tolerant drive on a couple of spare disks. This requires a reboot.

On the source machine, obtain the order in which the disks were added to the volume set.
One way to do this is to use a freeware utility called diskkey.exe, available from:
http://www.sysinternals.com



This utility is not supported by IBM and is known to report disk numbering and other
information that differs from what Disk Administrator reports. However, the order in which
the disks are included in the volume set is correct, and this ordering is the information
required to create a duplicate volume set on the target server.

Map the disks on the source machine to the disks on the target machine. For example,
determine that Disk6 on the source is FlashCopy copied to Disk9 on the target.

Performing the Metro Mirror and FlashCopy


On the target machine, follow these steps:
1. Run the FlashCopy establish or Metro Mirror terminate tasks.
2. Start Disk Administrator. If it asks you to write a signature on any of the disks, click No
(except in the special cases, see the following Important box). After Disk Administrator is
up, commit the changes (this is very important), and close Disk Administrator.

Important: Disk Administrator asks to write a signature when the FlashCopy is
performed to the same machine, because it detects a duplicate disk signature (the
source and target volumes have the same disk signature) and needs to write a new
one. It is safe to do this, but be sure that you are writing the signature to the FlashCopy
target disk. If a signature is written to the wrong disk, it may cause data corruption.

When FlashCopying to a different machine, usually the disk signatures on the target
machine's disks are different from the FlashCopy source disks' signatures, so Disk
Administrator does not need to write a new signature to the target disks to use them. It is
unlikely, but possible, that by coincidence the disk signature of one of the source disks is
the same as that of one of the disks on the target machine. In this case, you must write a
signature on the target disk before you use it. Again, it is safe to do this, but be sure
that you are writing the signature to the right disk.

3. Start FTEDIT. Select Start -> Resource Kit 4.0 -> Disk Tools -> Fault Tolerance Editor.
4. Read the warning and click OK.
5. There are two panes in the FTEDIT window. On the left pane is a list of the disks in the
system. On the right pane is the list of partitions on that disk. You must add the disks to the
volume set in the right order. Use the results of diskkey.exe to determine the order in
which the disks were added on the source volume set.

Note: If active Metro Mirror target volumes are on the target, then the disk numbering
used in FTEDIT may differ from the disk numbering used in the Disk Administrator. The
Metro Mirror target volumes are not seen by FTEDIT and are not included in the disk
numbering scheme. Adjust your disk choices accordingly.

6. Click Make FT set in the lower left corner.


7. When it asks you what kind of set you want, choose Volume set and click OK.
8. Click the first target disk in the left pane.
9. The list of partitions on that disk should appear in the right pane. Choose the partition that
contains the volume set on that disk (usually Partition 1). Double-click Partition 1 in the
right pane. This adds this disk or partition to the volume set, in order.
10. Repeat Steps 8 and 9 for the rest of the disks. If you make a mistake, you can cancel and
start from scratch. The disks must be added in the correct order.
11. After you add all of the disks, click Save FT set at the bottom.



12. Click Edit -> Save Changes to System.
13. Close FTEDIT.
14. Reboot the system.
15. When Windows NT restarts, start Disk Administrator. The target disks should be yellow
now, indicating that they are in a volume set. Assign a drive letter and commit the
changes. If the drives are not usable at this point, then the disks were probably added in
the wrong order.

As long as the disk configuration does not change on the source or target, FlashCopy should
work as expected. If the disk configuration is changed in any way, such as adding an
additional disk to the volume set or rearranging the disks, then you have to perform this
procedure again.

Windows 2000/2003 and Copy Services


Windows 2000/2003 handles its disks differently than Windows NT does. Windows
2000/2003 incorporates a stripped-down version of the Veritas Volume Manager, called the
logical disk manager (LDM).
With the LDM, you can create logical partitions, perform disk mounts, and create dynamic
volumes. There are five types of dynamic volumes: simple, spanned, mirrored, striped, and
RAID-5.

On Windows NT, the information relating to the disks was stored in the Windows NT registry.
With Windows 2000, this information is stored on the disk drive itself in a partition called the
LDM database, which is kept on the last few tracks of the disk. Each volume has its own
128-bit Globally Unique Identifier (GUID). This is similar to the disk PVID in AIX. Since the
LDM is stored on the physical drive itself, with Windows 2000, it is possible to move disk
drives between different computers.

Copy Services limitations with Windows 2000 and Windows 2003


Having the drive information stored on the disk itself imposes some limitations when using
Copy Services functionality on a Windows 2000/2003 system:
• The source and target volumes must be of the same physical size, for two reasons:
– The LDM database holds information relating to the size of the volume. Since this is
copied from the source to the target, if the target volume is a different size from the
source, then the database information is incorrect, and the host system returns an
exception.
– The LDM database is stored at the end of the volume. The copy process is a
track-by-track copy; unless the target is an identical size to the source, the database is
not at the end of the target volume.
• It is not possible to have the source and target FlashCopy volume on the same Windows
2000/2003 system when they were created as Windows 2000/2003 dynamic volumes.
The reason is that each dynamic volume must have its own 128-bit GUID. As its name
implies, the GUID must be unique on one system. When you perform FlashCopy, the
GUID is copied as well. This means that if you tried to mount the source and target volume
on the same host system, you would have two volumes with exactly the same GUID. This
is not allowed, and you are not able to mount the target volume.

Copy Services on basic volumes


Basic disks are the same as the Windows NT disks, with the same restrictions. Dynamic disks
are supported for both Metro Mirror and FlashCopy, and the primary source and secondary
target volumes must be attached to different servers. For a basic disk, on the other hand, it is
possible to attach the secondary target volume to the same server. In the following steps, we
show how to make a FlashCopy of a basic disk and mount it on the same server where the
source disk is mounted.

Before making any FlashCopy on the Windows server, we have two disks: the internal C:
drive and an SVC disk W:. See Figure A-4.

Figure A-4 Windows server before adding FlashCopy target disk

To make a FlashCopy on Windows 2000/2003, we need to make a VDisk to use as the
FlashCopy target, and the VDisk needs to be exactly the same size as the source VDisk. To
ensure that the size is exactly the same, we first list the source VDisk on the SVC using the
parameter -bytes, which gives the precise size. If we do not use the parameter -bytes, the
size is shown in GB, and 8 GB is not exact size information, but can be a rounded size. We
create a new VDisk with the same size and give it the name W2K3_2_tgt, also using the size
unit in bytes (-unit b).

IBM_2145:admin>svcinfo lsvdisk -bytes W2K3_2_1


id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 8589934592
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0
preferred_node_id 12
fast_write_state not_empty

IBM_2145:admin>svctask mkvdisk -mdiskgrp 2 -iogrp 0 -size 8589934592 -unit b -name W2K3_2_tgt
Virtual Disk, id [24], successfully created

After creating the target VDisk, we define the FlashCopy mapping on the SVC and start the
actual FlashCopy process.
IBM_2145:admin>svctask mkfcmap -source W2K3_2_1 -target W2K3_2_tgt -name basic_fc -copyrate
90
FlashCopy Mapping, id [2], successfully created
IBM_2145:admin>svctask startfcmap -prep basic_fc

We now have a point-in-time copy of the source volume from the Windows host. Next, we map
the target VDisk to the same Windows host and perform a rescan on the server.
IBM_2145:admin>svctask mkvdiskhostmap -host W2K3_2 W2K3_2_tgt
Virtual Disk to Host map, id [1], successfully created

After the rescan on the Windows server, it shows the new volume, which is already in NTFS
format but does not yet have a drive letter assigned, as shown in Figure A-5 on
page 600.



Figure A-5 Discovered FlashCopy target disk on Windows server

We then select the disk, assign a drive letter, and check that the data is on the target disk.
This is shown in Figure A-6, Figure A-7 on page 601, and Figure A-8 on page 601.

Figure A-6 Choose disk and assign drive letter



Figure A-7 The target disk is now ready on the Windows server with drive letter E:

Figure A-8 The data is on both the source and target disk on the same Windows server

Enlarging an extended basic volume


The Copy Services source may initially be a single simple volume. However, as requirements
change on the application servers, the logical volume may be extended. You should not
independently extend the target volumes on the Windows host; instead, let Windows detect
the correct sequence of the extended volumes during the import process of the extended
SVC target volume.

When you have an extended volume, the logical drive may in time grow to include more of
the initial volume (extended disk). When this occurs, it is necessary to perform a rescan of
the disks and use the diskpart command to extend the disk, so that the capacity becomes
available to an existing partition.
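
A hedged sketch of the host-side extension using diskpart; the volume number 2 is
hypothetical, so list the volumes first and pick the right one:

C:\> diskpart
DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend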

On the initial FlashCopy, reboot the target server to configure the additional disks, or perform
a rescan of the disks. On subsequent FlashCopy copies to the target volume group, run only
a chkdsk <drive letter> /F command on the target volume to make Windows aware of the
changes on the target volume. If you do not execute the chkdsk /F command, you cannot
rely on the data. Using our previous example, the complete command would be chkdsk E: /F.

When expanding a basic disk using diskpart that is subsequently being used as a
FlashCopy source, the process to keep the FlashCopy in a usable state on the target disk,
either on the same server or on a different server, includes the following steps (a CLI
sketch follows the list):
• Remove the FlashCopy map between the source and target VDisk on the SVC.
• Extend the VDisk on the SVC for the Windows volume that is going to be extended.
• Rescan for disks on the server where the source volume is located.
• Extend the Windows volume using the diskpart program.
• Remove the target disk on the server.
• Remove the host mapping of the target VDisk on the SVC.
• Rescan for disks on the target server to remove the old target disk.
• Extend the target VDisk to match the new size of the source VDisk on the SVC.
• Make a new FlashCopy mapping on the SVC for the VDisks.
• Make a new FlashCopy.
• Rescan for disks on the server where the target volume is located.
• Assign a drive letter to the new extended target volume.
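
A hedged sketch of the SVC CLI side of this sequence, reusing the names from the earlier
basic-disk example (source W2K3_2_1, target W2K3_2_tgt, mapping basic_fc, host W2K3_2);
the 1 GB expansion size is purely illustrative:

IBM_2145:admin>svctask rmfcmap basic_fc
IBM_2145:admin>svctask expandvdisksize -size 1073741824 -unit b W2K3_2_1
(rescan and extend the volume with diskpart on the source host)
IBM_2145:admin>svctask rmvdiskhostmap -host W2K3_2 W2K3_2_tgt
IBM_2145:admin>svctask expandvdisksize -size 1073741824 -unit b W2K3_2_tgt
IBM_2145:admin>svctask mkfcmap -source W2K3_2_1 -target W2K3_2_tgt -name basic_fc -copyrate 90
IBM_2145:admin>svctask startfcmap -prep basic_fc
IBM_2145:admin>svctask mkvdiskhostmap -host W2K3_2 W2K3_2_tgt
(rescan on the target host and assign a drive letter to the new extended volume)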

Copy Services on dynamic volumes


To see target dynamic volumes on a second Windows 2000/2003 host, you have to complete
these tasks:
1. Perform the Metro Mirror or FlashCopy function onto the target volume. When using
Metro Mirror, ensure that the primary and secondary volumes were in a consistent state
and that write I/O was ceased prior to terminating the copy pair relationship.
2. Map the target volume (VDisk) to the second Windows 2000/2003 host.
3. Click Computer Management -> Disk Management.
4. Find the Disk that is associated with your volume. There are two “panes” for each disk.
The left pane should read Dynamic and Foreign. It is likely that no drive letter is
associated with that volume.
5. Right-click that pane, and select Import Foreign Disks. Select OK, and then OK again.
The volume now has a drive letter assigned to it. It is of Simple Layout and Dynamic Type.
You can read and write to that volume.

Tip: Disable the fast-indexing option on the source disk. Otherwise, operations to that
volume are cached to speed up disk access. However, this means that data is not
flushed from memory and the target disk may have copies of files or folders that were
deleted from the source system.



When performing subsequent Metro Mirror or FlashCopy copies to the target volume, to
detect any changes to the contents of the target volume, it is necessary to run the following
command on the target volume:
chkdsk.exe /F

An example of how to make a FlashCopy of a volume that is spanned over two dynamic
disks is shown in Example A-1.

Example: A-1 FlashCopy of dynamic spanned disk in Windows 2003

Find the VDisk that you will make a FlashCopy of. On the Windows source host, use the SDD
command datapath query device to find the VDisk UID, and check the VDisk information on
the SVC. If this is the first time you make the FlashCopy, make sure that you have a target
VDisk of the same size; if not, create it. For the size, use the unit byte (b) so that the VDisk
is exactly the same size.

Here we list the source VDisk, using the parameter -bytes, to get the exact size.
IBM_2145:admin>svcinfo lsvdisk -bytes dyn2_src
id 18
name dyn2_src
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10736344064
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000017
throttling 0
preferred_node_id 12
fast_write_state empty

Here we make the new target VDisk, using the unit byte, so the VDisk has the exact same
size.
IBM_2145:admin>svctask mkvdisk -mdiskgrp MDG1_DS43 -iogrp 0 -size 10736344064 -unit b -name dyn2_tgt

Because the volume is spanned over two dynamic disks, we have to make sure that the
FlashCopy of both disks is taken at exactly the same time. To ensure this, make the two
FlashCopy mappings members of a consistency group on the SVC. First, we have to make the
FlashCopy consistency group; to do this, we execute the svctask mkfcconsistgrp command
with the right parameters.
IBM_2145:admin>svctask mkfcconsistgrp -name dyn_fc_grp
FlashCopy Consistency Group, id [3], successfully created



We need to add a FlashCopy mapping for each of the VDisks we are going to perform the
FlashCopy on. In this example the source VDisks are named dyn1_src and dyn2_src, while
the target VDisks are named dyn1_tgt and dyn2_tgt.

Adding the FlashCopy mappings to the consistency group:


IBM_2145:admin>svctask mkfcmap -source dyn1_src -target dyn1_tgt -name dyn1_fc -consistgrp
dyn_fc_grp -copyrate 80
FlashCopy Mapping, id [3], successfully created
IBM_2145:admin>svctask mkfcmap -source dyn2_src -target dyn2_tgt -name dyn2_fc -consistgrp
dyn_fc_grp -copyrate 80
FlashCopy Mapping, id [4], successfully created

Before we can do the FlashCopy, we need to make sure that there is no I/O to the disk on the
Windows host, and we need to prepare the FlashCopy consistency group to be flashcopied.
To prepare the FlashCopy consistency group, we execute the svctask
prestartfcconsistgrp command. This command makes sure that all data on the involved
VDisks is flushed from the cache in the SVC.
IBM_2145:admin>svctask prestartfcconsistgrp dyn_fc_grp

When this is done, we check that the consistency group is prepared with the
svcinfo lsfcconsistgrp command.
IBM_2145:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FC_AIX_GRP idle_or_copied
3 dyn_fc_grp prepared

Now we are ready to start the FlashCopy on the FlashCopy consistency group, so all the
VDisks will be flashcopied at the exact same time.
IBM_2145:admin>svctask startfcconsistgrp dyn_fc_grp

It is possible to see how the FlashCopy is progressing in the process of copying data from
source to target. We do not need to wait until the copy is done before we add the copy to
another Windows host. To see the progress, we use the svcinfo lsfcmapprogress command,
but we need to run it against each mapping, as we cannot use the command on the FlashCopy
consistency group.
IBM_2145:admin>svcinfo lsfcmapprogress dyn2_fc
id progress
4 18
IBM_2145:admin>svcinfo lsfcmapprogress dyn1_fc
id progress
3 15

On the Windows host, we now need to scan for new disks. Before we made the FlashCopy,
the target Windows host we used had the disk configuration shown in Figure A-9 on
page 605.



Figure A-9 Disk configuration before adding dynamic FlashCopy disks.

After performing a disk rescan, the Disk Manager on the Windows host looked like the one
shown in Figure A-10 on page 606.



Figure A-10 Dynamic disks added to Disk Manager

The disks, Disk 4 and Disk 5, are shown as Dynamic and Foreign because this is the first
time these disks have been discovered by the Windows host. To make the disks available to
the Windows host, we need to import them. To do this, we right-click one of the disks and
select Import Foreign Disks, as shown in Figure A-11.

Figure A-11 Import Foreign Disk



When importing the new disks to the Windows host, even though you selected only one of the
dynamic disks, Windows tells you how many disks are going to be imported, as shown in
Figure A-12.

Figure A-12 Import two foreign disks

This tells you that two disks are going to be imported and that the two dynamic disks make
up one spanned volume. If you want to verify which disks are going to be imported, you
can click the Disks button, which gives the information shown in Figure A-13.

Figure A-13 Disks that are going to be imported

When you click OK, Windows tells you that there is a spanned volume on the dynamic disks
you are importing, as shown in Figure A-14.

Figure A-14 Spanned volume to be imported



This is what we expected, so we accept it by clicking the OK button. When we have
done this, the Windows Disk Manager shows the dynamic disks as online; because they are
now known to the host, they are no longer foreign.

Figure A-15 The flashcopied dynamic disks are online

All we need to do now is to assign a drive letter to the spanned volume, and we are able to
use the data on the target Windows host.

Figure A-16 The data is ready to use

Metro Mirror and Windows spanned volumes


We followed this procedure when carrying out the Metro Mirror of a Windows 2000 spanned
volume set from Server (A) to Server (B):
1. On the source server (A), we created a Windows spanned volume set of multiple dynamic
disks.
2. We rebooted the target server (B), imported multiple target disks, and wrote a disk
signature on each as basic disks.
3. We established Metro Mirror between the source (A) and target volumes (B).
4. After the source and target volumes were synchronized, we terminated Metro Mirror.
5. We rebooted the target host (B).
6. We started Disk Manager; the Metro Mirror target volumes were seen as Foreign
Dynamic Disks.



7. The disks were imported into the target host and were seen as a spanned volume.

To demonstrate failback to the original setup, we carried out the following steps:
1. We removed the original paths and re-established them in the reverse direction from (B) to
(A).
2. We removed the spanned volume drive letter from the original source, the spanned
volume on server (A).
3. We established Metro Mirror from (B) to (A) and wrote some data onto the spanned
volume.
4. Metro Mirror was terminated.
5. We restored the drive letter to the spanned volume on server (A).
6. The contents of the spanned volume could now be read from server (A).


Appendix B. DS4000 migration scenarios


This appendix gives a high-level overview of migrating from “normal” storage area network
(SAN) attached storage to virtualized storage. In all of the examples, we use the IBM
TotalStorage DS4000 Series as the storage system.

Initial considerations
Here are some basic factors that you must take into account in all situations before starting:
• Depending on the firmware version, each DS4000 partition can contain up to 32 or up to
256 logical unit numbers (LUNs). If the firmware version is 5.4 or newer, the DS4000
supports 256 LUNs per partition.
• Host device drivers must be changed, so all LUNs in a host partition must be moved to the
SVC partition in one step.
• Each partition can only access a unique set of host bus adapter (HBA) ports (as defined
by worldwide port names (WWPNs)).
• Only one storage partition must be created that includes any IBM TotalStorage SVC ports
of nodes that are in the same SVC cluster.
• The contents of an existing partition must be moved to the SVC partition at the same time.
Some configurations may require backup, reconfiguration, and restore.
• Some versions of DS4000 allow RAID arrays to be expanded, allowing their capacity to
increase. This is not recommended, although it may be helpful for some configurations.

Important: If you have more logical units than are supported in one partition, then some
spare storage is required to allow for temporary migration of the data while the DS4000
is reconfigured to have fewer logical units.



Scenario 1: Total number of LUNs is less than the maximum per partition
The existing host or hosts attached to the DS4000 have fewer LUNs than are supported in
one partition on the DS4000: either there is only one partition per host, or both partitions on
a single host have fewer than the maximum number of supported LUNs when combined.
Figure B-1 shows the initial configuration.

Note: Access LUN (A) is used by host A for in-band configuration of the DS4000. This
LUN is deleted, and the DS4000 is configured to use the SVC master console over
Ethernet. The access LUN is not required by the SVC.


Figure B-1 Initial configuration

Then we add the SVC and create an SVC partition on the DS4000. See Figure B-2. The
following steps are required for this task:
1. Modify the zoning so that the SVC can “see” only the DS4000. This allows partition 3 to
be created while access to partitions 0, 1, and 2 continues.
2. Ensure that the port or ports of the SVC master console are not in any partition.


Figure B-2 Partition 3 created

Move the storage for host C from host partition 2 to partition 3 (the SVC partition), to be
managed by the SVC. See Figure B-3. Note the following points:
• Concurrent access from host C to its logical units is not possible during the move.
• Host C requires reconfiguration from the RDAC device driver to SDD. Changes to the
adapter configuration, microcode levels, settings, and so on, may also be required.
• Switch zoning changes are also required so that host C stops “seeing” the DS4000
ports and instead “sees” the SVC ports.
• LUNs from host partition 2, which are now in partition 3, must be configured as image
mode MDisks in the SVC and mapped to host C (see the CLI sketch after Figure B-3).

Note: Partition 2 should be deleted after all logical units have been moved to partition 3.


Figure B-3 Storage moved from partition 2 to partition 3
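
As a rough CLI sketch, configuring the image mode MDisks for host C might look like the
following. The MDisk, MDisk group, VDisk, and host names are hypothetical placeholders,
and one mkvdisk and mkvdiskhostmap pair is needed per LUN:

svctask detectmdisk
svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -vtype image -mdisk mdisk5 -name HostC_Vol1
svctask mkvdiskhostmap -host HostC -scsi 0 HostC_Vol1

The detectmdisk command discovers the LUNs that were moved into partition 3 as
unmanaged MDisks, mkvdisk with -vtype image creates an image mode VDisk on one of
those MDisks, and mkvdiskhostmap presents the VDisk to host C; the -scsi flag can be used
to preserve the original logical unit number.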

The next step is to move the storage for hosts A and B from host partitions 0 and 1 to
partition 3 (the SVC partition), to be managed by the SVC. This is shown in Figure B-4.

The following steps are required to do this:
1. Stop access from hosts A and B to their logical units.
2. Hosts A and B require reconfiguration from the RDAC device driver to SDD. Changes to
the adapter configuration, microcode levels, settings, and so on, may also be required.
3. Switch zoning changes are also required so that hosts A and B stop “seeing” the
DS4000 ports and instead “see” the SVC ports.
4. LUNs from host partitions 0 and 1, which are now in partition 3, must be configured as
image mode MDisks in the SVC and mapped to hosts A and B as they were before the
migration, using their original logical unit numbers.
5. Partitions 0 and 1 can be deleted, if required, after all LUNs are moved to partition 3.
Note that LUNs moved from partitions 0 and 1 to partition 3 have different logical unit
numbers on the DS4000, but the SVC presents the LUNs to the hosts with the same
logical unit numbers as before the migration.

Note: Access LUN (A) can no longer be used by host A for in-band configuration of the
DS4000. It can therefore be deleted, and the DS4000 configured to use the SVC
master console over Ethernet. The access LUN is not required by the SVC.


Figure B-4 Storage moved from partition 0 and 1 to partition 3

We must now move any remaining host storage from the host partitions to partition 3
(the SVC partition), using the previous steps. This gives us the configuration shown in
Figure B-5. The image mode MDisks can now be converted to managed MDisks using the
data migration commands as required (see the sketch after Figure B-5).


Figure B-5 All storage under the SVC
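
A minimal sketch of that conversion, assuming a managed mode MDisk group named
MDG_Managed already exists with sufficient free capacity, and using the hypothetical VDisk
name from the earlier sketch, is:

svctask migratevdisk -vdisk HostC_Vol1 -mdiskgrp MDG_Managed
svcinfo lsmigrate

The migratevdisk command moves the extents of the image mode VDisk onto the MDisks in
the managed group, after which the VDisk becomes a normal striped, managed mode VDisk;
lsmigrate reports the progress of any migrations that are still running.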

Scenario 2: Migrating storage where the total number of LUNs in
all partitions is greater than what is supported in one partition
It is not possible to use the same approach as in scenario 1, because we would exceed the
number of supported LUNs in one partition. An easy way to do the migration, using a second
DS4000, is shown here. However, it is also possible to solve this without another DS4000,
provided there is free capacity and the sum of the LUNs in the largest partition and the SVC
partition does not exceed the maximum number of supported LUNs in one partition. In that
case, follow the procedure in scenario 1, one partition at a time: migrate the LUNs from
image mode to managed mode in the SVC, and remove the image mode MDisks after they
are migrated. You can then move the next partition into the SVC partition; before you do, you
may need to expand the capacity available to the SVC using the free capacity on the DS4000
that was released by removing the old LUNs.

The initial configuration is shown in Figure B-6 on page 619. Note the following points:
• There are more LUNs than the maximum supported in one partition on DS4000-1.
• A new DS4000 provides the new storage, with capacity larger than or equal to that of
DS4000-1.
• There is only one partition per host.


Figure B-6 Scenario 2 initial configuration

Appendix B. DS4000 migration scenarios 619


We then add another DS4000 and carry out the following steps:
1. Create RAID arrays on DS4000-2, one LUN per array, using equal numbers of arrays.
2. Rezone the switch to allow SVC ports to access DS4000-2 ports.
3. Create the partition, including all LUNs and SVC ports.

This is shown in Figure B-7.


Figure B-7 Second DS4000 added

We then move the storage for host C under the control of the SVC. This is shown in
Figure B-8.


Figure B-8 Partitions created on DS4000-2

The following steps are carried out:
1. Stop host C.
2. Rezone the switch so that the host C port accesses the SVC ports as required, not the
DS4000-1 ports.
3. Rezone the switch to allow the SVC ports to access the DS4000-1 ports.
4. Change the host C device drivers, settings, software, and so on, to support the SVC.
5. Change partition 2 to the SVC host type, and change the port names to the SVC ports,
removing the ports of host C.
6. Create SVC managed mode disks from the storage in partition 0 on DS4000-2.
7. Create SVC image mode disks from the storage in partition 2 on DS4000-1.
8. Migrate the image mode VDisks for host C to managed disks on DS4000-2.
9. When the migration completes, delete the LUNs and partition 2 on DS4000-1.

Figure B-9 shows the result of this; a CLI sketch of the MDisk discovery and grouping behind these steps follows the figure.


Figure B-9 Storage for host C migrated to DS4000-2
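
Behind steps 3, 6, and 7, the SVC must discover the newly zoned DS4000 LUNs before they
can be configured. A minimal sketch, with hypothetical MDisk and group names, is:

svctask detectmdisk
svcinfo lsmdisk -filtervalue mode=unmanaged
svctask mkmdiskgrp -name MDG_DS4000_2 -ext 32 -mdisk mdisk10:mdisk11:mdisk12

Here, detectmdisk rescans the fabric for new MDisks, lsmdisk with the mode=unmanaged
filter lists the LUNs that are visible but not yet part of any MDisk group, and mkmdiskgrp
gathers the DS4000-2 LUNs into a managed disk group with, in this sketch, a 32 MB extent
size.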

The procedure is repeated for the remaining hosts until all of the storage is migrated to the
control of the SVC. This is shown in Figure B-10. DS4000-1 is now unused.

Note: Although we used a second DS4000 in this scenario, it is possible to carry out a
similar procedure if there is enough spare capacity on DS4000-1.


Figure B-10 All storage under control of the SVC

Appendix C. Scripting
This appendix gives a high-level overview of how to automate different tasks by creating
scripts using the SVC Command Line Interface (CLI).

Scripting structure
When creating scripts to automate tasks on the SVC, use the structure illustrated in
Figure C-1: create an SSH connection to the SVC, run the CLI command or commands
(activated either manually or by a scheduler), and perform logging.

Figure C-1 Scripting structure for SVC task automation

Creating a connection (SSH) to the SVC


When creating a connection to the SVC, the user running the script must have access to a
private key which corresponds to a public key previously uploaded to the SVC. The private
key is used to establish the SSH connection needed to use the CLI on the SVC.

PuTTY provides a command line connection utility called plink.exe (you can use another
SSH client if you prefer). In the following examples, we use plink to create the SSH
connection to the SVC.

Executing the command(s)


When using the CLI, you can use the examples in Chapter 9, “SVC configuration and
administration using the CLI” on page 207 for inspiration, or refer to the IBM TotalStorage
Open Software Family SAN Volume Controller: Command-Line Interface User’s Guide,
SC26-7544, which can be downloaded from the SVC documentation page for each SVC
code level.

Performing logging
When using the CLI, not all commands provide a usable response from which to determine
the status of the invoked command. Therefore, we recommend that you always create
checks that can be logged for monitoring and troubleshooting purposes.
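
For example, in a bat script you can test the exit status that plink returns from the remote
command and write a time-stamped entry to a log file. The following is only a sketch,
assuming that the CLI command returns a nonzero exit status on failure (which plink passes
back to the script); the session name, command, and log path are the same hypothetical
values used in the example later in this appendix:

plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3
if errorlevel 1 (echo %date% %time% mkvdisk %2 failed >> C:\DirectoryPath\VDiskScript.log) else (echo %date% %time% mkvdisk %2 created >> C:\DirectoryPath\VDiskScript.log)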

Example
In the following example, we create a simple bat script that automates VDisk creation, to
illustrate how such scripts are built. Scripting SVC administration tasks is not limited to bat
files: you can, in principle, encapsulate the CLI commands in scripts written in any
programming language you prefer, or even program applets to perform routine tasks.

Connecting to the SVC using a predefined SSH connection


The easiest way to create an SSH connection to the SVC is to have plink call a predefined
PuTTY session, as illustrated in Figure C-2.

Figure C-2 Using a predefined SSH connection with plink

To use this predefined PuTTY session, the syntax is:

plink SVC1 -l admin

where admin is the SVC administrative user (to which the SSH key must belong).

If a predefined PuTTY session is not used, the syntax is:

plink admin@9.42.164.155 -i "C:\DirectoryPath\KeyName.PPK"

Creating a VDisk using the CLI

In our example, we decided that the following parameters are variables when creating the
VDisk:
VDisk size (in GB)         %1
VDisk name                 %2
Managed Disk Group (MDG)   %3

svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3

Listing created VDisks

To log that our script created the VDisk we defined when executing the script, we use the
-filtervalue parameter:

svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log

Invoking the sample script VDiskScript.bat


Finally, putting it all together, our sample bat script for creating a VDisk is shown in
Figure C-3.

-------------------------------------VDiskScript.bat------------------------------------
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3
plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log
----------------------------------------------------------------------------------------

Figure C-3 VDiskScript.bat

Using the script, we now create a VDisk with the following parameters:
VDisk size (in GB)         20              (%1)
VDisk name                 Host1_F_Drive   (%2)
Managed Disk Group (MDG)   1               (%3)

This is illustrated in Example C-1.

Example: C-1 Executing the script to create the VDisk

C:\SVC_Jobs>VDiskScript 20 Host1_F_Drive 1

C:\SVC_Jobs>plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 20
-unit gb -name Host1_F_Drive -mdiskgrp 1
Authenticating with public key "rsa-key-20040116"

C:\SVC_Jobs>plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=Host1_F_Drive' 1>>C:\SVC_Jobs\VDiskScript.log
Authenticating with public key "rsa-key-20040116"

From the output of the log, displayed in Example C-2, we verify that the VDisk was created as
desired.

Example: C-2 VDiskScript.log

id name          IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name
55 Host1_F_Drive 0           io_grp0       online 1            MDiskGrp1_FST2 20.0GB   striped

Appendix D. Standby node and cloning


If an IBM TotalStorage SAN Volume Controller (SVC) node fails, the SVC cluster continues to
operate with degraded performance until the failed node is repaired. If the repair operation is
likely to take an unacceptably long time, it may be useful to replace the failed node with a
spare.

However, to replace a failed node without interrupting I/O and without any risk to data integrity
when a repaired node is reconnected to the storage area network (SAN) fabric, a specific
procedure must be followed. The procedure involves changing the worldwide node name
(WWNN) of a node. This procedure must be executed with care since inaccurate or duplicate
WWNNs can cause data corruption.

This appendix describes the basic tasks involved in setting up a node as a standby node and
how to clone a node, particularly a failed node in the cluster. See also the online hint and tip
titled SAN Volume Controller Node Replacement Procedure at:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCFPBW&context=STCFMTF&dc=DB500&q1=replacement&uid=ssg1S1001909&loc=en_US&cs=utf-8&lang=en

Important: The advantage of cloning a node as opposed to simply adding a new node is
that, by replicating the failed node’s WWNN, the new node’s worldwide port names
(WWPNs) are assigned to be the same as the failed node. With cloning, all logical unit
number (LUN) maskings, persistent bindings, and zonings by WWNN that are set up in the
storage, hosts, and switches attached to the cluster can remain unchanged.

Prerequisites
Before attempting to replace a failed node, the following prerequisites must be met:
1. Have SVC software V1.1.1 or higher installed on the SVC cluster and on the spare node.
2. Know the name of the cluster that contains the failed node.
3. Have a spare node in the same rack as the SVC cluster that contains the failed node.
4. Make a record of the last five characters of the original WWNN of the spare node since
this may be needed again if the customer wants to stop using a spare node and use this
node as a normal node that can be assigned to any cluster. Complete the following steps
to display the WWNN of the node:
a. Display the node status on the front panel display of the node.
b. With the node status displayed on the front panel, press and hold the Down button.
Press and release the Select button. Release the Down button.
c. The text "WWNN" is displayed on line one of the display. Line two of the display
contains the last five characters of the WWNN.
d. Record this number in a safe place. It will be needed when the customer wants to stop
using a spare node.

Replacing a failed SVC node


When a node is swapped, the following actions may occur:
• The front panel ID can change. This is the number that is printed on the front of the node
and used to select the node that is to be added to a cluster.
• The node name can change. If the SVC application is permitted to assign default names
when adding nodes to the cluster, it creates a new name each time a node is added. If the
user chooses to assign node names, then the user must type in the desired node name. If
the customer is using scripts to perform management tasks on the cluster and those
scripts use the node name, then, by assigning the original name to a replacement node, it
is not necessary to change the scripts following service activity on the cluster.
• The node ID changes. A new node ID is assigned each time a node is added to a cluster.
Either the node ID or the node name can be used when performing management tasks on
the cluster. If scripts are used to perform those tasks, we recommend using the node
name instead of the node ID, because the node name remains unchanged following
service activity on the cluster.
• The worldwide node name does not change. The WWNN is used to uniquely identify the
node and the Fibre Channel ports. The node replacement procedure changes the WWNN
of the spare node to match that of the failed node. The node replacement procedures
must be followed exactly to avoid any duplication of WWNNs.
• The worldwide port name of each Fibre Channel port does not change. The WWPNs
are derived from the WWNN that is written to the replacement node as part of this
procedure.

Complete these steps:
1. Use the SVC console or the command line interface (CLI) to gather and record the
following information about the failed node:
– Node name: To display the node name using the console:
i. From the Welcome window, select Work with Nodes -> Nodes.
ii. The failed node is offline. Note the name.
To display the node name using the CLI:
i. Use the command:
svcinfo lsnode
ii. The failed node is offline. Note the name.
– I/O group name: To display the I/O group name using the console:
i. From the Welcome window, select Work with Nodes -> Nodes.
ii. The failed node is offline. Note the I/O group name.
To display the I/O group name using the CLI:
i. Use the command:
svcinfo lsnode
ii. The failed node is offline. Note the I/O_group_name.
– The last five characters of the WWNN: To display the WWNN using the console:
i. From the Welcome window, select Work with Nodes -> Nodes.
ii. The failed node is offline. Note the last five characters of the WWNN.
To display the WWNN using the CLI:
i. Use the command:
svcinfo lsnodevpd <node_name>
Here <node_name> is the name recorded in Step 1.
ii. Find the WWNN field in the output. Note the last five characters of the WWNN.
– Front panel ID: To display the front panel ID using the console:
i. From the Welcome window, select Work with Nodes -> Nodes.
ii. The failed node is offline. Click the name of the offline node.
iii. Select the Vital Product Data tab.
iv. The front panel ID is under the Front panel assembly section of the VPD. Note the
front panel ID.
To display the front panel ID using the CLI:
i. Use the command:
svcinfo lsnodevpd <node_name>
Here <node_name> is the name recorded in the step above.
ii. Find the front_panel_id field in the output. Note the front panel ID.
– The uninterruptible power supply serial number: To display the uninterruptible
power supply serial number using the console:
i. From the Welcome window, select Work with Nodes -> Nodes.
ii. The failed node is offline. Click the name of the offline node.
iii. Select the Vital Product Data tab.
iv. The uninterruptible power supply serial number is in the UPS section of the VPD.
Note the serial number.

To display the node VPD using the CLI:
i. Enter the command:
svcinfo lsnodevpd <node_name>
Here <node_name> is the name recorded in Step 1.
ii. Find the UPS_serial_number field in the output. Note the uninterruptible power
supply serial number.
2. Use the front panel ID to locate the failed node. Disconnect all four Fibre Channel cables
from the node.

Important: The cables must not be reconnected until the node is repaired and the
WWNN has been changed to match the default spare WWNN.

3. Connect the power and signal cables from the spare node to the uninterruptible power
supply with the serial number noted in Step 1.

Note: The signal cable can be plugged into any vacant position on the top row of the
serial connectors on the uninterruptible power supply. If no spare serial connectors are
available on the uninterruptible power supply, disconnect the cables from the failed
node.

4. Power on the spare node.


5. Display the node status on the service panel.
6. Complete the following steps to change the WWNN of the spare node:
a. With the node status displayed on the front panel, press and hold the Down button.
Press and release the Select button. Release the Down button.
The text "WWNN" is displayed on line one of the display. Line two of the display
contains the last five characters of the WWNN.
b. With the WWNN displayed on the service panel, press and hold the Down button.
Press and release the Select button. Release the Down button. This switches the
display into edit mode.
c. Change the displayed number to match the WWNN recorded in Step 1. To edit the
displayed number, use the Up and Down buttons to increase or decrease the numbers
displayed. Use the left and right buttons to move between fields. When the five
characters match the number recorded in Step 1, press the Select button twice to
accept the number.
7. Connect the four Fibre Channel cables that were disconnected from the failed node to the
spare node.
8. Depending on how the original node failed, the replacement node may or may not
automatically join the cluster. If the node does not rejoin the cluster, delete the offline node
on the master console. Then, add the spare node into the cluster on the master console
(a CLI sketch follows this list).
9. Use the Subsystem Device Driver (SDD) management tool on the host systems to verify
that all paths are now online.
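
If you prefer to use the CLI for step 8, a minimal sketch is shown here, where the node name,
front panel ID, and I/O group name are the values recorded in Step 1 at the start of this
procedure:

svctask rmnode <node_name>
svctask addnode -panelname <front_panel_id> -iogrp <io_group_name> -name <node_name>

The rmnode command deletes the offline node from the cluster, and addnode adds the spare
node, identified by its front panel ID, back into the original I/O group under the original node
name.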

When the failed node is repaired, do not connect the Fibre Channel cables to it. Connecting
the cables may cause data corruption. When the node repair returns the node to an
operational state, perform the following steps:

1. Display the node status on the service panel.
2. With the SVC status displayed on the front panel, press and hold the Down button. Press
and release the Select button. Release the Down button.
The text "WWNN" is displayed on line one of the display. Line two of the display contains
the last five characters of the WWNN.
3. With the WWNN displayed on the service panel, press and hold the Down button, press
and release the Select button, and release the Down button. This switches the display
into edit mode.
4. Change the displayed number to 00000. To edit the displayed number, use the Up and
Down buttons to increase or decrease the numbers displayed. Use the left and right
buttons to move between fields. When the number is set to 00000, press the Select button
twice to accept the number.

This SVC node can now be used as a spare node. If this node is no longer required as a
spare and is to be used for normal attachment to a cluster, the procedure described
previously must be used to change the WWNN to the number saved when the spare was
created. See “Prerequisites” on page 630. Using any other number may cause data
corruption.

Important: Never connect a node with a WWNN of "00000" to the cluster.

Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.

IBM Redbooks
For information about ordering these publications, see “How to get IBM Redbooks” on
page 636.
• IBM SAN Survival Guide, SG24-6143
• Introducing Hosts to the SAN Fabric, SG24-6411
• IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
• IBM TotalStorage: Introducing the SAN File System, SG24-7057
• Virtualization in a SAN, REDP-3633
• SAN: What is a VSAN?, TIPS0199

Other resources
These publications are also relevant as further information sources:
• IBM TotalStorage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
• IBM TotalStorage Master Console: Installation and User’s Guide, GC30-4090
• IBM TotalStorage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
• IBM TotalStorage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
• IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
• IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, SC26-7544
• IBM TotalStorage Open Software Family SAN Volume Controller: CIM Agent Developers
Reference, SC26-7545
• Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage
Server and the IBM TotalStorage SAN Volume Controller, SC26-7637
• IBM TotalStorage Open Software Family SAN Volume Controller: Host Attachment Guide,
SC26-7563

Referenced Web sites

These Web sites are also relevant as further information sources:
• IBM TotalStorage home page
http://www.storage.ibm.com
• SAN Volume Controller supported platforms
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
• Download site for Windows SSH freeware
http://www.chiark.greenend.org.uk/~sgtatham/putty
• IBM site to download SSH for AIX
http://oss.software.ibm.com/developerworks/projects/openssh
• Open source site for SSH for Windows and Mac
http://www.openssh.com/windows.html
• Cygwin Linux-like environment for Windows
http://www.cygwin.com
• IBM Tivoli Storage Area Network Manager site
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
• Microsoft Knowledge Base Article 131658
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
• Microsoft Knowledge Base Article 149927
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
• Sysinternals home page
http://www.sysinternals.com
• Subsystem Device Driver download site
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
• IBM TotalStorage Virtualization home page
http://www-1.ibm.com/servers/storage/software/virtualization/index.html

How to get IBM Redbooks


Hardcopy Redbooks can be ordered, and softcopy Redbooks viewed, downloaded, or
searched at the following Web site:
ibm.com/redbooks

Additional materials (code samples or diskette/CD-ROM images) can also be downloaded
from that site.

IBM Redbooks collections


Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web
site for information about all the CD-ROMs offered, as well as updates and formats.
