IBM TotalStorage Expert Reporting
How to Produce Built-In and Customized Reports
Learn to produce customized reports
Daniel Demer
David McFarlane
ibm.com/redbooks
International Technical Support Organization
October 2003
SG24-7016-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page xi.
This edition applies to Version 2, Release 1 of IBM TotalStorage Expert (product number
5648-TSE).
Tables  vii
Notices  xi
Trademarks  xii
Preface  xiii
The team that wrote this redbook  xiii
Become a published author  xiv
Comments welcome  xiv
Glossary  325
Index  351
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure
Electronic Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.
This IBM® Redbook provides the basic knowledge, tools, and samples to show
you how to extract report data from built-in reports, and create customized
reports from the IBM TotalStorage® ESS Expert application and database. This
book examines the fundamentals of the DB2® Universal Database™, the
Structured Query Language that is used as the basis for sample reports, and the
methodology to create customized ESS reports based on your enterprise
requirements.
This book has been written with a wide range of end-user knowledge and
capabilities in mind. It serves DB2 and SQL beginners who are just getting
started, and it also contains information and reference materials for readers who
are already comfortable with the capabilities of the Expert data management
components. We strive to present the information in a way that lets most users
derive the greatest value from it.
Maritza M. Dubec
Mary Lovelace
Bart Steegmans
International Technical Support Organization, San Jose Center
Richard Dow
Larry Mills
Ray Koehler
John Aschoff
Jake Kelly
Will Scott
Chris Katsura
IBM TotalStorage Expert development
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
The topics in this book are quite technical and can be the basis of years of
continuing education, use, and programming. To keep the publication
manageable and easily used by most end-users, we have deliberately limited its
scope. An extensive amount of information is already available on DB2
relational databases and the Structured Query Language (SQL). Since
TotalStorage Expert customized reports are the main focus of our efforts, we
present enough information to provide a foundation for creating and extracting
these reports.
The database used for examples in this book was taken from an IBM
TotalStorage Expert 2.1 host, which was monitoring IBM storage servers during
2002 and briefly in 2003. The database includes information from open systems
attached servers and OS/390® attached servers, with SCSI, FC, and ESCON®
connections to the F20 and Model 800 storage servers. We have also merged
ETL table information into the database so that you can access that type of
information as well. The database has been loaded and is now in use at the
TotalStorage demo Web site at the following URL, where you can set up and
view any of the built-in reports using this database:
http://storwatch.dfw.ibm.com/index2.html
Wherever possible, we provide URL locations for further information to assist you
with exploiting the TotalStorage Expert for your reporting needs. In addition to the
scripting provided, some instructions and examples of spreadsheet macro setups
are provided. The work done while creating the reports is documented
thoroughly and includes graphical examples, steps used, and tools incorporated
in producing the final reports. The scripts, tools, and other related documentation
have been made available for download from a File Transfer Protocol (FTP) site
for use in your environment.
Structure
The chapters contained herein are explained briefly in this section:
Chapter 1, “IBM TotalStorage Expert reporting overview” on page 1
The TotalStorage ESS Expert simplifies performance management for the ESS,
a disk storage system that provides industry-leading availability, performance,
and scalability. The ESS is ideal for businesses with multiple heterogeneous
servers including S/390®, UNIX, Windows NT, Windows 2000, Novell NetWare,
HP-UX, Sun Solaris, and AS/400® servers.
With Version 2.1, the TotalStorage ESS Expert is packaged with the TotalStorage
ETL Expert. The ETL Expert provides performance, asset, and capacity
management for IBM's three ETL solutions: the IBM TotalStorage Enterprise
Automated Tape Library, the IBM TotalStorage Virtual Tape Server, and the IBM
TotalStorage Peer-to-Peer Virtual Tape Server. There are also a variety of built-in
and customizable reports for the ETL, which are discussed later in this book.
The ESS and ETL features provide capabilities for performance, asset, volume,
and capacity management. The information provided within these categories
varies significantly because of the different characteristics of disk and tape
products, and is described in separate sections to differentiate the two more
clearly.
The TotalStorage Expert helps you do the following tasks, in addition to
providing several powerful new features:

- Performance management: the capability to customize and enable threshold
  events relating to ESS performance metrics, including disk utilization.
- Asset management: host disk mapping through a Logical Unit Number (LUN),
  providing complete information on logical volumes, including physical disks
  and adapters from the host perspective. Host mapping is available for
  AIX 4.3.3 and 5L, Windows NT, Windows 2000 Server and Advanced Server,
  HP-UX, and Sun Solaris.
- Capacity management: storage capacity, including storage that is assigned to
  application server hosts, storage that is formatted into volumes but not yet
  assigned to an application server host, and free space.
- Use of Simple Network Management Protocol (SNMP) alerts for ESS
  exception events that exceed customized threshold values.
- Support for Windows 2000 Server and Advanced Server operating systems.
- Enhanced usability characteristics.
Before exploring each task in detail, we explain how the TotalStorage Expert
product can help you perform these tasks. Figure 1-1 presents a typical
environment for TotalStorage Expert.
The TotalStorage Expert requests that your ESS and ETL send information about
capacity or performance. When the TotalStorage Expert receives this
IBM TotalStorage Expert Version 2.1 uses the security and auditability features of
the host operating system. The host administrator is responsible for evaluation,
selection, and implementation of security features, administrative procedures,
and appropriate controls in application systems and communication facilities.
Please note that there are no data preparation tasks for asset and capacity
management. Data preparation tasks are exclusive to ESS performance
management. The ETL reports do not require these data preparation tasks to be
defined. During the data preparation cycle, the TotalStorage Expert performs a
myriad of SQL queries and parsing activities that prepare the raw data into a
presentable format for the built-in Web user interface (WUI) reporting functions.
Since there are no calculations or further data manipulation done on the other
data types (asset, capacity, volume), there is no need to process this raw data
any further prior to presentation in the WUI report feature.
The TotalStorage Expert creates reports for viewing in the Web browser not only
in tabular view, but also as graphical charts, such as bar graphs and pie charts.
Later in this book, we will detail methods of capturing and exporting this graphical
and tabular information for further manipulation or reporting needs.
In this chapter, we list the co-requisites for TotalStorage Expert and explain the
reporting mechanism requirements. Then we provide information to help you
become acquainted with IBM DB2 Universal Database (UDB) through an
overview and brief history. We also describe the organization of the TotalStorage
Expert database, and list the tables used by TotalStorage Expert.
The data collecting tasks that include asset, capacity, and volume information are
performed by the TotalStorage Expert in a passive mode. There is no active
interaction with the connected hosts, ESS, or VTS environment subsystems.
For the performance data collection tasks, the TotalStorage Expert has more of a
“handshake” interaction with the ESS Specialist code layer. When a performance
task initiates, the Expert sends out information to the ESS, which responds after
security requirements are satisfied. An ODM object is set on the ESS to inform
the Specialist that the data collection "listening" mode is on, which port will be
opened for receiving the ESS data, the TCP/IP address of the TotalStorage
Expert host initiating the performance task, and the interval frequency (how
many seconds between data transmittals: 150, 300, or 600) to be used when
emitting data to the Expert host. The Expert host then reverts to a passive mode
while the ESS Specialist emits data on the defined intervals. When the task's
stopping time occurs, the Expert signals the ESS Specialist to discontinue data
transmittal; the ODM object for data transmission is set to OFF, and the ports
opened for receiving the emitted host data are closed until needed again.
You can also utilize the DB2 UDB database table data to create and produce
your own reports to present comprehensive analysis, trend, and forecasting
information for managing your unique storage environment.
The graphical interface for the TotalStorage built-in reports uses a combination of
WebSphere, Java, InfoSpace SpaceSQL, and HTML components to extract DB2
table data, and display it in a usable format to the end-user. Through the nested
layers of report options, the user can drill down to examine particular areas of
interest or concern regarding the ESS, ETL, or connected host subsystems.
Although graphical reports cannot be saved or printed directly from the
SpaceSQL icons in the Expert, the resulting graphics and tabular information can
be exported using methods described in Chapter 4, “Data extraction tools and
tips” on page 79. For further details about the Graphical Report Presentation,
refer to the TotalStorage Expert Hands-On User’s Guide, SG24-6012.
IBM’s DB2 database software is the worldwide market share leader in the
relational database industry. It is a multimedia, Web-ready relational database
management system that delivers leading capabilities in reliability, performance,
and scalability while requiring less skill and fewer resources.
for ease of access and sharing of information, and is the database of choice for
customers and partners developing and deploying critical solutions. There are
more than 60 million DB2 users from 400,000 companies worldwide relying on
IBM DB2 Information Management technology.
The TotalStorage Expert product utilizes the DB2 UDB as the backbone for its
data storage and reporting functions. It is important to understand how the
TotalStorage Expert allocates and uses DB2 resources so that you can efficiently
customize and use the information provided by the TotalStorage Expert.
Four years later, IBMers Don Chamberlin and Ray Boyce published SEQUEL: A
Structured English Query Language, which became the basis for the SQL
language standard. Questions written in the new SQL language became more
important than how the data was stored and organized on disk. New, more
powerful questions could be asked and answered. Applications could be built
much more quickly. The relational database system itself took on more of the
burden of managing the data, leaving applications more freedom to focus on
business logic.
A series of research projects have been a steady source of technology for the
DB2 family since the beginning. The System R project resulted in the first IBM
implementation of the relational model. The project called ARIES delivered
row-level locking technology used throughout the database industry today.
Cost-based query optimization has been an area of intense effort and innovation
ever since the System R days. Subsequent projects extended the relational
model to distributed system environments, focused on making the relational
model extensible to handle new forms of information and new kinds of
optimization strategies, and brought an emphasis on data federation, allowing
data in diverse systems, not just DB2 systems, to be managed together. Most
recently, a technical preview based on DB2 has demonstrated the integration of
information from Web services and the use of XQuery as an additional and
powerful query language for managing XML content.
Other data storage options, such as storing files on diskettes, hard drives, tapes,
and cartridges, have the same limitations that spreadsheets have. These file
systems do not allow multiple applications to access the same data at the same
time. As a result, each application must queue up for access to the data, and the
applications are executed serially.
This presents a problem for multiple users needing access to the data. One
solution is to replicate the data. This would solve the concurrency problem. Many
applications could produce reports against copies of the same data at the same
time. However, replicating data, which is often done in the mainframe
environment, can be problematic, especially if any of the applications need to
change data. Synchronization of the duplicated data is a formidable task and can
cause discrepancies. To compound this problem, the redundant data begins to
take on its own personality over time. Even though the data field names remain
the same, one application’s definition of net sales may not be the same as
another’s.
People work with common tables of data every day. A common example of table
data is a phone book. Many phone books have information stored in columns,
such as customer name and phone number. Every listing in the phone book is a
row in the table.
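The phone book analogy maps directly onto an SQL table. The following sketch uses Python's built-in sqlite3 module as a convenient stand-in for a relational engine; the table and column names are invented for illustration, and DB2 UDB syntax for these basic statements is essentially the same:

```python
import sqlite3

# An in-memory database stands in for a real DB2 UDB database.
conn = sqlite3.connect(":memory:")

# Each listing in the phone book becomes a row; each attribute a column.
conn.execute("CREATE TABLE phone_book (customer_name TEXT, phone_number TEXT)")
conn.executemany(
    "INSERT INTO phone_book VALUES (?, ?)",
    [("Ada Lovelace", "555-0100"), ("Grace Hopper", "555-0101")],
)

# A query states *what* data is wanted, not *how* to find it.
rows = conn.execute(
    "SELECT customer_name, phone_number FROM phone_book ORDER BY customer_name"
).fetchall()
for name, number in rows:
    print(name, number)
```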
When a user logs on the system, the application manager, which controls and
manages application programs, intercepts the user’s request. As a security
measure, the application manager ensures that only authorized IDs are allowed
to run specific programs. Then the application manager loads the correct
program into memory to process the user’s request. On larger servers in the
z/OS environment, the application manager can be Information Management
System (IMS™), Customer Information Control System (CICS®), Transaction
Server, or Time Sharing Option (TSO). On other platforms, the operating
systems, or a transaction manager such as CICS TS, can directly manage the
applications.
In addition to handling the requests from user terminals, the application manager
also communicates with the data manager on behalf of any programs requiring
data services. In a DB2 UDB environment, the application manager informs DB2
UDB who, or which ID, is requesting work. During program execution, a data
request is sent to DB2 UDB. The DB2 UDB retrieves the requested data and
returns it to the program. Data transfer operations recur as the program
continues to request additional data.
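From the program's point of view, this repeated "request data, receive data" cycle looks like a cursor loop. A hypothetical sketch, again with sqlite3 standing in for the data manager (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 9.5), (2, 20.0), (3, 3.25)])

# The program opens a cursor; the engine returns one row per request.
cur = conn.execute("SELECT id, amount FROM orders")
total = 0.0
while True:
    row = cur.fetchone()   # each fetch is another data request to the engine
    if row is None:        # no more rows: the data transfer cycle ends
        break
    total += row[1]
print(total)
```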
In order to view data (Figure 2-3) from the TotalStorage DB2 database, the user
logs on to the system and enters an SQL request to view or update the
TotalStorage ESS or ETL data. The user’s ID and the SQL request are passed to
DB2 UDB. By default, TotalStorage Expert uses the user name and password
db2admin for the authorization to connect to the SWDATA (default) database;
the user supplies these credentials manually when accessing the database
through a command line or the DB2 utilities.
Before processing the change request, DB2 UDB verifies that the user has the
authority to make the request. DB2 UDB maintains a catalog in which all
information about the data, such as the table and column names, data owner,
authorized users, and user’s privileges are stored (Figure 2-4).
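The catalog is itself queryable with SQL; DB2 UDB exposes it through views such as SYSCAT.TABLES. As a lightweight illustration of the same idea, sqlite3's catalog table sqlite_master can be queried like any other table (a sketch, not DB2 syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personnel (emp_id INTEGER, salary REAL)")

# The engine records table metadata in its catalog automatically.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
)]
print(names)
```

In DB2 UDB the analogous query would be against the catalog views, for example SELECT TABNAME FROM SYSCAT.TABLES.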
If the user is not authorized to view or update the database, DB2 UDB returns a
code indicating that it has refused the request. For this example, the user is
properly authorized and the request is accepted.
To ensure that a second requestor is not allowed to change the same data that
this user is changing, DB2 UDB locks the data. This lock keeps other
applications from accessing the data until the change is complete (Figure 2-5).
DB2 UDB reads the data the user wants to change into a special memory area in
DB2 UDB called the buffer. DB2 UDB does not make the change out in the
storage media; the change actually occurs in the buffer (Figure 2-6).
Using the data in the buffer, DB2 UDB writes a before-the-change copy of the
data to a log. It then updates the row in the buffer, and writes an
after-the-change copy of the row to the log (Figure 2-7). This log keeps track of
activities such as who updated or deleted data, or when a utility copied the data.
When the user tells DB2 UDB that the change is complete, which is done by
committing the change, the commit is also written to the log (Figure 2-8).
Since the user is finished with this data, DB2 UDB releases the lock on the row
and the data becomes available to other users. You may perceive the changed
data as being written to direct access storage devices (DASD) at this point, but in
reality DB2 UDB batches the writes of the buffer.
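The visible part of this buffer-log-commit sequence, from an application's side, is the commit/rollback contract. A minimal sqlite3 sketch of the same idea (DB2 UDB manages its buffer pool and logs internally; the table and values here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, balance REAL)")
conn.execute("INSERT INTO account VALUES (1, 100.0)")
conn.commit()

# Change the row: the update exists only in the engine's buffer/journal so far.
conn.execute("UPDATE account SET balance = balance - 40 WHERE id = 1")
conn.rollback()   # abandon the change: the before-image is restored
bal_after_rollback = conn.execute("SELECT balance FROM account").fetchone()[0]

conn.execute("UPDATE account SET balance = balance - 40 WHERE id = 1")
conn.commit()     # the commit is written to the log; the change is now durable
bal_after_commit = conn.execute("SELECT balance FROM account").fetchone()[0]
print(bal_after_rollback, bal_after_commit)
```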
In the z/OS environment, after DB2 UDB is installed, the install SYSADM can
grant system administration authority to other IDs. In larger shops, individuals
with SYSADM authority further define what will exist on DB2 UDB. In a
Note: SYSADM is not a granted authority in DB2 UDB for UNIX/Intel. In DB2
UDB, a group ID is designated as the SYSADM group, and members of that
group are system administrators for that environment.
After the DB2 UDB objects are created, the DBADM can then disperse privileges
for use of specific objects, such as reading data from a customer table, changing
salary values in the personnel table, or creating new indexes on specific tables.
When programmers finish writing and testing their application programs on a test
system, the DBADM, or a specially designated ID, takes the programs through
the program preparation process on the production system. This process
includes a precompile step, and compile and link-edit steps. The compiler checks
the host language code and converts it into a machine-readable form called an
object module. The DBADM takes the SQL portion of an application program
through a process called BIND.
The DBADM is also responsible for maintaining the database. The DBADM
populates the tables, gathers statistics about the data, and places those statistics
in the DB2 UDB catalog. The DBADM, on a regular basis, copies the data. In the
event of a data problem, the DBADM can recover the data using the last copy of
the data.
Operator
An operator has the authority to run many of the utilities that a DBADM and a
SYSADM can run such as loading and backing up the data, restoring a
database, or importing data into DB2 UDB. An operator relieves the SYSADM
and DBADM from some of the maintenance burdens. While operators have the
authority to run utilities, they do not have inherent authority to view the data
(Figure 2-10).
Programmer
The DBADM usually creates test environments for major production applications.
For many new applications, the programmer may have the authority to create the
test environment. Most programmers do not embed SQL statements into an
application until the statements have been tried out and proven to be
syntactically and logically correct. This process is called SQL prototyping.
Before a programmer embeds an SQL statement into an application that may run
hundreds or thousands of times a day, an hour, a minute, or a second, the
programmer should understand the steps DB2 UDB follows to execute the SQL
statement. DB2 UDB's outline of the steps involved in executing an SQL
statement is called explaining the SQL. After DB2 UDB explains the SQL, the programmer will
have an idea about how efficiently the SQL statement will access data. Of
course, the programmer is responsible for writing the programs and embedding
SQL into them. They may also write programs called stored procedures.
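The explain idea can be tried on any engine. DB2 UDB has its own EXPLAIN facility (the EXPLAIN statement populates explain tables you can then query); as a lightweight analogue, sqlite3 exposes the same concept through EXPLAIN QUERY PLAN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")

# Ask the engine how it *would* run the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM customer WHERE cust_id = 7"
).fetchall()
for row in plan:
    print(row)
```

Because the predicate matches the primary key, the plan reports an indexed SEARCH rather than a full table SCAN, which is exactly the kind of efficiency question explaining is meant to answer.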
When the program is ready for testing, it must undergo the program preparation
steps. The programmer follows all of the same program preparation steps that
the DBADM follows before the program is put into production. The programmer
precompiles, compiles, and link-edits the program. The programmer must also
take the SQL part of the program through the bind process. Figure 2-11
illustrates the DB2 programmer’s roles.
End-user
An end-user is anyone who runs an SQL statement usually included inside an
application. The applications may require input, use the input to perform an
operation, and return a result. Inside the application may be SQL statements that
are doing some or all of the work. End-users may not even know that they are
using DB2 UDB. The applications may read data and produce reports; they may
change data and produce reports; they may just change data. The application
executes SQL statements to perform end-user functions without an end-user
understanding of SQL.
Many end-users must write their own SQL to change data and produce reports.
Some of these end-users will use tools to write SQL statements for them, instead
of having to learn the SQL syntax.
The scope of this book contains information that will assist in writing SQL queries
and database macros to mine data from the TotalStorage Expert database in
your environment. This information can then be used for reporting, management
review, performance analysis, and other functions.
The basic elements of the database engine are database objects, system
catalogs, directories, and configuration files. All access to data takes place
through the SQL interface. You can run DB2 UDB as just the database server,
with no additional products required. For remote clients, additional products are
necessary.
The server products of DB2 UDB provide support for communication to the
database server using protocols such as TCP/IP, SNA, or IPX/SPX. This, then,
With the DB2 Administration Client component (Figure 2-13), you can remotely
administer DB2 UDB or DB2 Connect™ servers using the Administration Tools.
The Administration Tools are a part of the Administration Client installation. They
are a collection of graphical user interface (GUI) tools that help to manage and
administer databases. These tools include:
- Control Center, for configuration, backup and recovery, and directory
  management
- Command Center, for issuing commands and creating command scripts
- Script Center, for issuing SQL and creating SQL scripts
- Event Analyzer, for examining event information
- Journal, for analyzing the status of submitted jobs
- Alert Center, for identifying problem areas
- Tools Settings, for setting up replications, setting termination characters, and
  setting Alert Center options
The Run-Time Client supplies the Command Line Processor (CLP), which has
the capability to catalog and uncatalog node and database directories, and bind
packages. CLP is a character-based interface for entering SQL statements and
database manager commands. It may be used to access local workstation
databases, remote workstation databases, or remote Distributed Relational
Database Architecture™ (DRDA®) Application Server databases by means of
DB2 Connect.
Through a DB2 client, these applications can access all servers and, by using
DB2 Connect, they can also access DB2 for iSeries, DB2 for OS/390 and z/OS,
and DB2 for VSE and VM database servers.
The listing below is not an exhaustive list of the tables, but contains information
on the most widely used tables in the database. Appendix A contains the
entire list of the TotalStorage database tables for reference.
The logical organization for IBM TotalStorage Expert database tables associated
with IBM Enterprise Storage Server (ESS) data is as follows:
- Root table: VMPDX
  Contains the serial number of each ESS, a corresponding index, a
  user-defined nickname, and other ESS-level information.
- Asset and capacity tables (updated by the Expert asset/capacity data
  collection tasks; the indices in VMPDX identify the ESSs in these tables):
  – Historical tables (most tables are updated only when content changes;
    VMCAP, VHSTC, and VCLUC are updated each time the task runs)
    Asset data:
      VCLUA, VCLUL  Cluster IP and ESS microcode level information
      VMASI, VMASE  Expansion rack data
    Capacity data:
      VMCAP  Storage-server-level capacity data
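A customized report typically joins the root table VMPDX (to translate the internal index into a serial number or nickname) with a historical table such as VMCAP. A hypothetical sketch using sqlite3 with mock data; the table names and key columns (I_VSM_IDX, I_TASK_SEQ_IDX) come from the layouts in this chapter, but the other column names and values are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Mock root table: one row per ESS known to the Expert.
conn.execute("CREATE TABLE VMPDX (I_VSM_IDX INTEGER, C_NICKNAME TEXT)")
# Mock historical capacity table: one row per ESS per collection run.
conn.execute(
    "CREATE TABLE VMCAP (I_VSM_IDX INTEGER, I_TASK_SEQ_IDX INTEGER, N_CAP_GB REAL)"
)
conn.execute("INSERT INTO VMPDX VALUES (1, 'ESS-F20-LAB')")
conn.executemany("INSERT INTO VMCAP VALUES (?, ?, ?)",
                 [(1, 100, 420.0), (1, 101, 560.0)])

# Report the most recent capacity figure per storage server.
report = conn.execute("""
    SELECT p.C_NICKNAME, c.N_CAP_GB
    FROM VMPDX p JOIN VMCAP c ON p.I_VSM_IDX = c.I_VSM_IDX
    WHERE c.I_TASK_SEQ_IDX = (SELECT MAX(I_TASK_SEQ_IDX) FROM VMCAP
                              WHERE I_VSM_IDX = p.I_VSM_IDX)
""").fetchall()
print(report)
```

The same SELECT shape, run against the real SWDATA database, is the starting point for the trend and capacity reports developed later in this book.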
The table definitions set forth in this database are intended to be used with IBM
TotalStorage Expert only. The definitions are provided for informational purposes
only, and are not intended as a programming interface. IBM does not support
user modifications to the table definitions, and IBM does not recommend using
the definitions to make modifications to the IBM TotalStorage Expert
environment. Modification of the tables, or any program code that depends on
these table definitions, may produce unreliable results in future releases of the
IBM TotalStorage Expert Product.
Table 2-1 describes the tables by giving the tablespace name and a brief
description of the purpose. The contents of the database tables are fully
presented in Appendix B, “TotalStorage Expert database table layouts” on page 241.
CNODE Node names and IP addresses of hosts discovered by StorWatch [key: I_IP_ADDR,
I_NODE_NAME]
CNGDI IP address patterns that define the node groups [key: I_NODE_GROUP,
I_IP_ADDR]
CSIPA IP address patterns that make up the StorWatch management scope [key:
I_IP_ADDR]
CSSTY Service types that StorWatch will discover [key: C_SERVICE_TYPE, N_PORT]
CPARM General parameters that can be used by all StorWatch components [key: I_PRM_KEY,
C_PRM_COMP, I_PRM_SCOPE]
VTSEQ Historical list of asset/capacity task sequence numbers and the associated date
and time that each task ran. [key: I_TASK_SEQ_IDX]
VMPDX VMPDX contains the basic, seldom-changing attributes associated with each
storage server the ESS Expert has communicated with. One record exists for each
storage server “known” to the ESS Expert. [key: I_VSM_IDX]
VMASI Historical table containing the basic attributes associated with asset information for
each storage server. New row inserted whenever these attributes of a storage
server change. [key: I_VSM_IDX,I_TASK_SEQ_FIRST]
VMASE Historical table containing expansion feature attributes for each storage server. New
row inserted whenever these attributes of a storage server change. [key:
I_VSM_IDX,I_TASK_SEQ_FIRST,I_VSM_RACK_SN]
VCLUA Historical table containing the asset management type attributes for each cluster in
a storage server. New row inserted whenever these attributes of a cluster change.
[key: I_VSM_IDX,I_TASK_SEQ_FIRST,I_CLU_NO]
VCLUL Historical table containing licensed internal code attributes for each cluster in a
storage server. New row inserted whenever these attributes change. [key:
I_VSM_IDX,I_TASK_SEQ_FIRST,I_CLU_NO,I_CLU_LIC_SRC]
VMCAP Historical storage server capacity table. New rows inserted each time
asset/capacity collection runs. [key: I_VSM_IDX,I_TASK_SEQ_IDX]
VMDDM Historical list of the types of DDMs (disk drive modules, the actual physical disks) in
the storage server, and their quantity. When the quantity changes, a new row is added.
[key: I_VSM_IDX,I_DDM_TYPE,I_TASK_SEQ_FIRST]
VCLUC Historical cluster capacity table. New row added whenever the memory attributes of
a cluster change. [key: I_VSM_IDX,I_CLU_NO,I_TASK_SEQ_FIRST]
VHSTC Historical Open System host capacity table: records the capacity for all hosts, per
storage server, each time asset/capacity data is collected from the storage servers
[key: I_VSM_IDX,I_TASK_SEQ_IDX,I_HOST_IDX]
VHSTX Index table for Open System (SCSI-attached and FC-attached) hosts. New row
added whenever a new host or host attachment type is detected or when the
attributes of a host change. [key:
I_HOST_IDX,I_HOST_ATTACH,I_TASK_SEQ_FIRST]
VHSTV Host-Volume association table. Recreated each time asset/capacity collection runs.
[key: I_VSM_IDX,I_TASK_SEQ_IDX,I_HOST_IDX,I_VOL_IDX]
VVOLX Index table for fixed block logical volumes. New row added whenever a new fixed
block volume is detected or the attributes of an existing volume change. [key:
I_VSM_IDX,I_VOL_IDX,I_TASK_SEQ_FIRST]
VCUIC Historical table containing capacity data for logical control units in each storage
server. New rows inserted with each capacity collection run. [key:
I_VSM_IDX,I_CUI_IMAGE_NUM,I_TASK_SEQ_IDX]
VCUIV Historical table summarizing S/390 volumes of a given type and total capacity, per
logical control unit. New rows added each capacity collection run. [key:
I_VSM_IDX,I_CUI_IMAGE_NUM,I_CUI_VOL_TYPE,I_TASK_SEQ_IDX]
VCMTOP1 Most recently collected data for storage server capacity, logical control units, and
Open System hosts. Table is recreated each capacity collection run. [key:
I_VSM_IDX]
VCMTOP2 Most recently collected data for capacity-related hardware attributes in each
storage server. Table is recreated each capacity collection run. [key: I_VSM_IDX]
VCMDDM Most recently collected data for list of types of DDMs. Table is recreated each
capacity collection run. [key: I_VSM_IDX,I_DDM_GB_CAPACITY,I_DDM_RPM]
VCMCLUST Most recently collected data for cluster capacity. Table is recreated each capacity
collection run. [key: I_VSM_IDX,I_CLU_NO]
VCMCUISUM Most recently collected data for capacity values of logical control units in each
storage server. Table is recreated each capacity collection run. [key:
I_VSM_IDX,I_CUI_IMAGE_NUM]
VCMCUIVOL Most recently collected summary data for S/390 volumes of a given type, per
logical control unit. Table is recreated each capacity collection run.
[key: I_VSM_IDX,I_CUI_IMAGE_NUM,I_CUI_VOL_TYPE]
VCMCKD Most recently collected data for S/390 volumes in each storage server. Table is
recreated each capacity collection run. [key:
I_VSM_IDX,I_CUI_IMAGE_NUM,I_VOL_NUM]
VCMHOSTCAP Most recently collected data for the capacity for Open System hosts, per storage
server. Table is recreated each capacity collection run. [key:
I_VSM_IDX,I_HOST_IDX]
VCMHOSTVOL Most recently collected data for Open System hosts/fixed block volume connections
in each storage server. Table is recreated with each capacity collection run. [key:
I_VSM_IDX,I_HOST_IDX,I_VOL_IDX]
VSXDALVL Most recently collected data for the basic identifying information for each storage
server. Table is recreated each capacity collection run. [key: I_VSM_IDX]
VSXDALDT Most recently collected data for the active level of licensed internal code for each
cluster. Table is recreated each capacity collection run. [key:
I_VSM_IDX,I_CLU_NO]
VSXDATOP Most recently collected data for the basic attributes associated with asset
information for each storage server. Table is recreated each capacity collection
run.[key: I_VSM_IDX]
VSXDARCK Most recently collected data for the expansion feature attributes associated with
each storage server. Table is recreated each capacity collection run. [key:
I_VSM_IDX]
VSXDACLU Most recently collected data for the asset management type attributes for each
cluster in a storage server. Table is recreated each capacity collection run. [key:
I_VSM_IDX,I_CLU_NO]
VSXDALIC Most recently collected data for the licensed internal code attributes for each
cluster. Table is recreated each capacity collection run. [key:
I_VSM_IDX,I_CLU_NO,I_CLU_LIC_SRC]
VSXDSTYP Most recently collected data for storage server summary by type. Table is recreated
each capacity collection run. [key: I_VSM_TYPE]
VPCRK Logical array-level performance data (for subsystem requests issued to the lower
interface); updated by Performance Collection
[key:P_TASK,PC_INDEX,M_MACH_SN,M_CLUSTER_N,M_LSS_LA,M_ARRAY_
ID,PC_DATE_B,PC_TIME_B]
VPCCH Volume-level performance data (for I/O requests, or “command chains”, including
those causing cache/DASD transfers); updated by Performance Collection [key:
P_TASK,PC_INDEX,M_MACH_SN,M_CLUSTER_N,M_LSS_LA,M_ARRAY_ID,M
_VOL_NUM,PC_DATE_B,PC_TIME_B]
VPHVOL Hourly performance statistics for logical volumes (based on VPCCH); generated by
data preparation task
[key:I_PR_SEQ_IDX,I_MACH_IDX,I_CLUSTER_NO,I_CARD_NO,I_LOOP_ID,I_
DISK_GRP_NO,I_DISK_NUM,I_VOL_NUM,D_PR_DATE,I_PR_HOUR]
VPHARCAC Hourly performance statistics for logical arrays (based on VPCCH); generated by
data preparation task
[key:I_PR_SEQ_IDX,I_MACH_IDX,I_CLUSTER_NO,I_CARD_NO,I_LOOP_ID,I_
DISK_GRP_NO,I_DISK_NUM,D_PR_DATE,I_PR_HOUR]
VPHADCAC Hourly performance statistics for adapter/loops (from VPCCH data); generated by
data preparation task
[key:I_PR_SEQ_IDX,I_MACH_IDX,I_CLUSTER_NO,I_CARD_NO,D_PR_DATE,I
_PR_HOUR]
VPHCLCAC Hourly performance statistics for clusters (from VPCCH data); generated by data
preparation task [key:
I_PR_SEQ_IDX,I_MACH_IDX,I_CLUSTER_NO,D_PR_DATE,I_PR_HOUR]
VPSNX List of storage servers and their internal indices for which preparation of
performance data has occurred. Updated by Perf. Rollup [key: I_VSM_IDX]
VPCUT Container for the cutoff time for the most recently run data preparation task.
Updated by data preparation task [key: I_ITEM_NO]
VPHSS Hourly performance statistics for storage servers (from data for logical arrays,
mostly in VPCRK); generated by data preparation task [key:
I_PR_SEQ_IDX,I_VSM_PERF_IDX,D_PR_DATE,I_PR_HOUR]
VPHAR Hourly performance statistics for logical arrays (from data for logical arrays, mostly
in VPCRK); generated by data preparation task [key:
I_PR_SEQ_IDX,I_VSM_PERF_IDX,I_ARRAY_ID,D_PR_DATE,I_PR_HOUR]
VPHAD Hourly performance statistics for adapter/loops (from data for logical arrays, mostly
in VPCRK); generated by data preparation task [key:
I_PR_SEQ_IDX,I_VSM_PERF_IDX,I_LSS_LA,D_PR_DATE,I_PR_HOUR]
VSCHT Contains information to help the ESS Expert identify the storage servers that belong
to a scheduled data collection task. [key:
I_SCHD_TASK,I_USER,C_SCHD_TASK_TYPE,I_VSM_IDX]
VTSTATM Task completion table for ESS Expert tasks which initiate work concurrently on
multiple storage servers. Updated when such a task completes. [key:
I_SCHH_TASK_SEQ]
VTSTATS Task completion table for ESS Expert tasks which perform work procedurally for a
set of storage servers. [key: I_SCHH_TASK_SEQ]
VCMPORT Most recently collected data for Fibre Channel (FC) adapter ports and attached
open system hosts, if any. [key:
I_VSM_IDX,I_PORT_BAY,I_PORT_CARD,I_PORT_ID,I_HOST_EXISTS,I_HOST
_WWPN]
VHNICKX Container for all hnick-type indices created in this Expert. One unique index per
ESS and open system host nickname in the ESS. [key:
I_HNICK_IDX,I_VSM_IDX,I_HNICK_NAME]
VCMHNICK Contains the HNICK host indices (one index per ESS and host nickname) most
recently collected from each ESS [key: I_HNICK_IDX,I_VSM_IDX]
VCMHNICKV Contains the volume-host assignments and volume locations most recently
collected from the storage server [key: I_VSM_IDX,I_HNICK_IDX,I_VOL_IDX]
VHOSTDC Contains most recently entered network addresses of open systems host
nicknames [key: I_VSM_SN,I_HNICK_NAME,I_HOST_CONN_IP]
VSCHHDC Contains information that identifies the open systems hosts associated with a host
data collection task. [key:
I_SCHD_TASK,I_USER,C_SCHD_TASK_TYPE,I_HNICK_IDX,I_HOST_CONN_I
P]
VHLPATH The volume path information most recently collected from open systems hosts [key:
I_HOST_CONN_IP,I_VSM_IDX,I_VOL_SN,I_PATH_ID,I_TASK_SEQ_IDX]
VHLNICK The most recent set of host nicknames per ESS associated with an open systems
host [key:
I_HOST_CONN_IP,I_HNICK_IDX,I_VSM_IDX,I_VOL_SN,I_TASK_SEQ_IDX]
VHLHOST Certain host-specific information provided by the subsystem device driver (SDD),
which is updated each time data is collected from the host [key:
I_HOST_CONN_IP]
VHDCSTAT Task completion table for ESS Expert host data collection tasks [key:
I_SCHH_TASK_SEQ]
VTHRESHOLD A list of threshold definitions currently recognized by the ESS Expert and used to
detect and optionally issue alerts for threshold-overflow conditions. [key: METRIC,
SCOPE]
VOPEVENT A list of events currently recognized by the ESS Expert which are used to optionally
issue alerts if an ESS task experiences certain types of failures. [key: EVENTID]
This chapter discusses the basics of SQL. To illustrate the concepts, we have
included examples from the TotalStorage Expert database tables. After this
chapter, you should be familiar enough with the commands and syntax to begin
writing your own SQL query statements for extracting information from the
TotalStorage Expert database. If you are already familiar with SQL, you may want
to skip this chapter.
Historically, SQL has been the favorite query language for database
management systems running on minicomputers and mainframes. Increasingly,
however, SQL is being supported by PC database systems because it supports
distributed databases (databases that are spread out over several computer
systems). This enables several users on a local-area network to access the
same database simultaneously. A partitioned relational database is a relational
database where the data is managed across multiple partitions (also called
nodes). A simple way to think of partitions is to consider each partition as a
physical computer. In this book, we will focus our attention on the single partition
database used by the TotalStorage Expert product.
Tables are created in DB2 Universal Database (UDB) using the SQL CREATE
statement. However, in order to create tables in DB2 UDB, you must first have
the authority to do so. This authority is tied to the authorization ID, which is a
character string that designates a set of privileges. The default authorization ID
and password for the TotalStorage database (default name: SWDATA) are both
db2admin. During the configuration step of the TotalStorage Expert installation,
you created the tables and other structures of the database (Figure 3-2).
When defining table and column names, the first character of the name must
begin with an alphabetic character or one of the national symbols (@, #, and $).
Figure 3-4 illustrates the requirements for these DB2 object names.
DB2 UDB accepts names with characters outside of the standard NAME
character set, such as +, -, or a space. However, if these characters are used,
the names must be enclosed in double quotation marks ("").
Reserved words such as SELECT and INSERT can be used in table or column
names if the reserved word is enclosed in double quotation marks ("SELECT").
There are more than 100 reserved words in DB2 UDB, and they are listed in the
manuals IBM DB2 Universal Database SQL Reference Volume 1, SC09-2974,
and IBM DB2 Universal Database SQL Reference Volume 2, SC09-2975, for
each platform.
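To illustrate, the following sketch creates a table that uses a reserved word and a name containing a space as column names. The table and column names here are invented for illustration only; they are not part of the TotalStorage Expert schema.

```sql
-- Hypothetical table for illustration only; not part of the Expert schema.
-- "SELECT" is a reserved word and "MY COL" contains a space, so both
-- identifiers must be enclosed in double quotation marks.
CREATE TABLE REPORT_NOTES (
    "SELECT" CHAR(10) NOT NULL,
    "MY COL" VARCHAR(50)
);
```

Without the double quotation marks, DB2 UDB would reject both names.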
(Figure: client applications communicate through agents with the database buffer
pool in memory and with table spaces on hard disk.)
Character strings
A character string is a sequence of bytes. The length of the string is the number
of bytes in the sequence. If the length is zero, the value is called the empty string.
The most common character string data types are the following:
For CHAR(x), the x specifies the number of characters the column will contain for
each row in a table. If values shorter than the x are placed into that column, DB2
UDB will pad the short values with spaces (blanks) to fill up the fixed length. The
maximum value x can be is 254.
One of the rules DB2 UDB enforces is that a row of data cannot cross a page
boundary. Therefore, the maximum size the value of x in VARCHAR(x) can be is
the size of the page, less the size of all other columns within the row, less any
overhead bytes DB2 UDB uses to manage the page. For each VARCHAR
column, DB2 UDB builds a two-byte length field in front of the column. The length
field is used to keep track of the length of the value placed in the VARCHAR
column.
Important: The DATE and TIME data formats are very important in
TotalStorage Expert database table fields: they are stored in the manner
described below, but are derived from the system time of the host where the
product is installed.
The United States (USA) external date format is MM/DD/YYYY. When the month
portion of the date is listed first, the slash (/) must be used as the separation
character. The Europe (EUR) external date format is DD.MM.YYYY. When the
day portion of the date is listed first, the period (.) must be used as the separation
character. DB2 UDB allows you to use any of the following external date formats
within your SQL statements:
USA - MM/DD/YYYY
International Organization for Standardization (ISO) - YYYY-MM-DD
Europe (EUR) - DD.MM.YYYY
Japanese Industrial Standard (JIS) - YYYY-MM-DD
Without single quotes around dates, times, or timestamps, DB2 UDB might
register these values as an invalid numeric value. DB2 UDB might also try to
perform the math represented by the separation character, dividing where
slashes are encountered, and subtracting where there are dashes.
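For example, a query against the VPHSS table can compare the D_PR_DATE column to a date literal; the single quotes tell DB2 UDB to treat the value as a date rather than as arithmetic. The specific date used here is only an illustration:

```sql
-- The single quotes prevent DB2 UDB from interpreting the slashes
-- or dashes as division or subtraction operators.
SELECT D_PR_DATE, I_PR_HOUR
FROM VPHSS
WHERE D_PR_DATE = '2003-10-01';   -- ISO format; '10/01/2003' (USA) also works
```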
The table creator has three types of null characteristics from which to choose:
NOT NULL, NOT NULL WITH DEFAULT, and nullable.
NOT NULL tells DB2 UDB to enforce the rule that there must always be a value
in this column. If an attempt is made to add a row to this table and the row does
not have a value for the NOT NULL columns, the attempt fails and DB2 UDB
returns an error message. An example of a NOT NULL characteristic in the
TotalStorage tables is the T_TASK_TIME column, which indicates the time when
the ESS Expert most recently collected asset and capacity data from any storage server.
When a column’s null characteristic is nullable, DB2 UDB builds an extra byte in
front of the column. This extra byte is used as a flag to indicate whether or not the
column has a known value. This unknown value is not zero or blank; the value is
simply unknown. A middle-initial column is a good example of a column that
could be defined as nullable, since not everyone has a middle name.
NOT NULL WITH DEFAULT tells DB2 UDB that there must always be a value in
this column. However, if a row being added to the table does not include a value
for this column, DB2 UDB provides one based on the data type of the column.
The chart above shows the default value DB2 UDB will use for NOT NULL WITH
DEFAULT columns of a given data type; for example, numeric columns default to 0.
Within the CREATE TABLE statement, other attributes of the table may also be
defined. For a full list and complete description of other data types, refer
to the IBM DB2 Universal Database SQL Reference Volume 1, SC09-2974, and
IBM DB2 Universal Database SQL Reference Volume 2, SC09-2975 manuals.
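The three null characteristics can be sketched in a single CREATE TABLE statement. The table and column names below are invented for illustration and are not part of the TotalStorage Expert schema:

```sql
-- Hypothetical table showing the three null characteristics.
CREATE TABLE EMPLOYEE_DEMO (
    EMP_ID     INTEGER  NOT NULL,               -- a value is always required
    HIRE_COUNT SMALLINT NOT NULL WITH DEFAULT,  -- defaults to 0 if omitted
    MID_INIT   CHAR(1)                          -- nullable: value may be unknown
);
```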
3.2.2 Keys
The three types of keys used in DB2 UDB are the following:
Unique key
Primary key
Foreign key
Unique key
A unique key is a column or set of columns that contain unique values. Without a
unique key, you cannot find a specific row in a table. An example of a unique key
in the TotalStorage Expert database tables would be the Serial Number,
nickname, or TCP/IP address for each ESS or ETL known to Expert.
A table does not have to have a unique key, but you will find that most tables have
at least one unique key designed into them. A table can have as many unique
keys as it has columns. Keys can be simple or compound. The simple unique key
consists of a single column, whereas compound unique keys include multiple
table columns.
The VMPDX table (Figure 3-6) shown has three unique identifiers: I_VSM_IDX,
I_VSM_SN, and I_SHORT_NAME.
Primary key
A primary key is a designation given to the unique key that best identifies the
data being stored in the table. Only one of a table’s unique keys may be defined
to DB2 UDB as the primary key. The null characteristic of all columns that make
up the primary key must be NOT NULL.
In the TotalStorage Expert example in the graphic (Figure 3-7), although I_VSM_SN
and I_SHORT_NAME represent the table's data equally well, I_VSM_IDX is
designated as the primary key, in part because it is a shorter field than
the I_VSM_SN column. The I_VSM_IDX primary key for asset and capacity
related tables has a counterpart in the performance tables.
The primary key for the performance tables is I_MACH_IDX. This is the column
name for internally generated identifiers (index) for a storage server that has
performance summary data available in the database. The I_VSM_IDX column
Foreign key
A foreign key is a column or set of columns that contain values from some table’s
unique key. Foreign keys are designed into tables to define relationships between
rows.
Figure 3-8 shows examples of foreign keys. In the VPSNX table, the foreign key
is I_VSM_IDX, which refers to the primary key of the VPHARCAC table
I_MACH_IDX. In the VPHCAC table, M_MACH_SN contains a subset of the
values in the VPSNX table's primary key, I_CLUSTER_NO. In the table
VPHARCAC, I_CLUSTER_NO contains a subset of values in the VPSNX table’s
unique key, I_VSM_SN.
Each key type can be defined through an SQL statement. Primary and foreign
keys may be designated in the column description area of a CREATE TABLE
statement. Unique keys may be defined through a CREATE UNIQUE INDEX
statement. Example 3-2 shows an example of a CREATE TABLE statement for
the table CNODE in TotalStorage Expert. This is the table that contains
information about nodes discovered during a node discovery task.
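The full definition appears in Example 3-2. As a rough sketch, assuming only the two key columns listed for CNODE in Table 2-1 (the data types and lengths here are guesses for illustration), such a statement might look like:

```sql
-- Sketch only: the authoritative definition is in Example 3-2.
CREATE TABLE CNODE (
    I_IP_ADDR   VARCHAR(15) NOT NULL,   -- IP address of the discovered host
    I_NODE_NAME VARCHAR(64) NOT NULL,   -- node name of the discovered host
    PRIMARY KEY (I_IP_ADDR, I_NODE_NAME)
);
```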
Be sure to choose a user ID and password that are valid on your server system.
In this example, user ID is DB2ADMIN and password is also DB2ADMIN.
Example 3-4 shows the messages you can expect after the successful
connection has been established.
Once you are connected, you can start extracting information or executing SQL
statements against the database. For further details on connections, refer to the
CONNECT statement in the IBM DB2 Universal Database SQL Reference
Volume 1, SC09-2974, and IBM DB2 Universal Database SQL Reference
Volume 2, SC09-2975 manuals.
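Using the defaults mentioned above, a connection to the TotalStorage Expert database could be established with a CONNECT statement like the following; substitute a user ID and password that are valid on your server:

```sql
-- Connect to the default SWDATA database with the default credentials.
CONNECT TO SWDATA USER DB2ADMIN USING DB2ADMIN;
```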
Many information technology products have the ability to interact with DB2 UDB,
so that the end-user need not understand the syntax of an SQL statement in
order to work with DB2 UDB. Some of these products provide an editor in which
Tip: The TotalStorage product media includes the DB2 UDB 7.2 Workgroup
Edition, and it is installed by default. This includes a suite of tools available to
you for creating your own SQL query statements and saving them. This topic
is discussed in detail in a later chapter (4.3, “IBM DB2 Utilities Command
Center features” on page 83).
This section provides a foundation in ways to communicate with DB2 UDB and in
the syntax of several types of SQL statements as an end-user. Anyone issuing an
SQL statement is an end-user. For example, the system administrator
(SYSADM) and the database administrator (DBADM) are end-users while they
perform their system and database administration tasks.
At times the end-user is not the writer of the SQL statement, and may not realize
DB2 UDB is running SQL statements. End-users do not always directly use or
see SQL. However, they can run an application that requires input, such as in the
example in the graphic. They fill in the blanks and press Enter. The application
then populates the values into the SQL statements that are part of the program.
SQL statements (Figure 3-10) are similar to sentences except that they are more
compactly structured. SQL statements also contain clauses, each of which is
simply a distinct part of an SQL statement. The sequence in which the clauses
are written is the syntax, or structure, of the SQL language. Each clause begins
with a keyword. In some SQL statements, certain clauses are required while
others are optional and are used only when their services are required.
As with any table, each row contains entries of associated data, with the rows
identified by a unique key. In the VMPDX table shown (Figure 3-11), the
I_VSM_IDX column is the only column that will contain unique information. As
such, it is a unique key, and will be designated as the primary key. The table
columns are too wide for this graphic, so the lower table is a continuation of the
right side of the actual table view.
Example 3-6 shows another example of a table description. This is for a table
called VCMHOSTVOL.
VCMHOSTVOL table contains the most recently collected data for open system
hosts with fixed block volume connections in each storage server. This table is
recreated with each capacity collection task that is run. Notice that in the
descriptor area for the table, the keys of the table are also listed. At the end of each column
In this example, you can see that the storage server has several unique
identifiers. Although any of them could potentially be used as a primary key, other
considerations such as database performance and relationships to other keys
are taken into account when the database is being planned. In this example, the
primary key is the column I_VSM_IDX. This is an index number created for each
storage server and is generated when TotalStorage Expert discovers it for the
Although there are other keys in the tables that can be used, I_VSM_IDX and
I_MACH_IDX are the shorter, more concise fields, and are the better
choice to be the primary key for two reasons:
There will be potentially fewer typographical errors in SQL statements
referencing data values in that column if entered manually.
Less storage will be needed if values from the unique key are used as foreign
keys.
The sequences of steps we have just discussed are called mental joins. Many
times, if you can write down the steps toward answers to similar queries, you will
find that those steps translate relatively easily to clauses in an SQL SELECT
statement.
SELECT statement clauses are used when building queries in SQL. Every
SELECT statement must have a SELECT clause and a FROM clause, always in
this sequence, while the remaining clauses are optional and are mainly used to
refine and organize the returned data. Example 3-8 shows an example of a
complete SELECT statement.
This statement is, in fact, the shortest syntactically correct SELECT statement
that can be written, and it returns all information in the VPHSS table (hourly
performance statistics for storage servers extracted from data for logical arrays,
mostly in the table VPCRK).
The asterisk in the SELECT clause indicates that all columns should be returned
from the table. Alternatively, this clause could be written SELECT VPHSS.*;
however, it would still be necessary to include VPHSS in the FROM clause. In
some query editors, you may provide comments with your SQL. The comments
are prefixed by two consecutive hyphens, and do not affect the functionality of the
statement.
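Putting these points together, the shortest form of the statement, with an optional comment, looks like this:

```sql
-- Return every column of every row in VPHSS (comments like this one
-- do not affect the statement).
SELECT * FROM VPHSS;

-- Equivalent form naming the table explicitly in the select list:
SELECT VPHSS.* FROM VPHSS;
```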
In the absence of a WHERE clause, DB2 UDB will return data from every row in
the table. Since end-users rarely need all data in a given table, this statement is
not frequently used in the production environment or when extracting specific
information for reports.
Example 3-9 shows a query that returns two columns of data (D_PR_DATE and
I_PR_HOUR) from the VPHSS table, for the set of rows whose
Q_HR_MAX_IOIN value is greater than 99999999.
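Assembled from that description, the query might be sketched as follows (Example 3-9 in the book shows the actual statement):

```sql
SELECT D_PR_DATE, I_PR_HOUR
FROM VPHSS
WHERE Q_HR_MAX_IOIN > 99999999;
```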
The column values for Q_HR_MAX_IOIN indicate the maximum I/O intensity in
this hour (maximum of sample interval-level PC_IOR_AVG * PC_MSR_AVG *
1000 for all logical arrays in this storage server). The column value for
PC_IOR_AVG indicates the average subsystem I/O rate for all requests issued to
a specific logical array in a particular time period (total requests/interval seconds)
and comes from the VPCRK table. The column value PC_MSR_AVG indicates
the average millisecond time to satisfy all subsystem I/O requests issued to a
particular logical array in a specific time period (total millisecond time/total
requests).
DB2 UDB first invokes the FROM clause indicating the table from which data will
be returned, checking to ensure that the user ID issuing the query has
permission to execute it against the VPHSS table. Next, the WHERE clause
identifies all rows having a value greater than 99999999 in the Q_HR_MAX_IOIN column,
and copies them into an intermediate table. This intermediate table is
conceptual, in that DB2 UDB may or may not physically create an intermediate
table. The end result, however, is as though DB2 UDB had, in fact, taken the step
of creating this table.
Finally, DB2 UDB applies the SELECT clause against the intermediate table,
retrieving the D_PR_DATE and I_PR_HOUR values from every row in the
intermediate table. Note that these columns are retrieved, left to right, in the
sequence in which they appear in the SELECT clause. In the example, the
Q_HR_MAX_IOIN column has been sorted in descending order for viewing
(Figure 3-13).
You can use the following comparison operators in DB2 UDB SQL statements:
Comparison operators (=, >, <>, <, >=, <=)
Boolean operators (AND, OR)
Partial values (LIKE '_A%')
Value in row (RAID>NONRAID)
Calculated value (HOURS/2)
In some environments, != is accepted to mean “not equal”.
Example 3-10 shows the various ways to make comparisons in SQL WHERE
clauses.
In this example, all columns are to be retrieved from the VPHAR table. The
VPHAR table is generated from data in a TotalStorage Expert data preparation
task, and contains hourly performance statistics for logical arrays (from data for
logical arrays, mostly VPCRK). The columns will not be retrieved from every row,
but from the set of rows whose I_LOOP_ID values begin with the letter B, if and
only if the row’s Q_HR_DU_NO_EXCEPTS value divided by its Q_HR_INTERVALS
value exceeds 5%. Partial string searches using the LIKE keyword can only be
performed on character string data. Q_HR_DU_NO_EXCEPT is the column to
When used with the LIKE keyword, the percent sign (%) becomes a masking
character and functions like a wildcard, masking from zero to any number of
characters. One or more underscores may also be used as a masking character
in partial character string searches. Every underscore occurrence masks exactly
one character.
Tip: Use the LIKE predicate to search for strings that have certain patterns.
The pattern is specified through percentage signs and underscores:
The underscore character _ represents any single character.
The percent sign % represents a string of zero or more characters.
Any other character represents itself.
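Based on the VPHAR query described above, a WHERE clause combining a LIKE pattern with a calculated value might be sketched as follows. Example 3-10 in the book shows the actual statement; the decimal handling of the 5% threshold here is an assumption:

```sql
SELECT *
FROM VPHAR
WHERE I_LOOP_ID LIKE 'B%'                                 -- values beginning with B
  AND (Q_HR_DU_NO_EXCEPTS * 100.0) / Q_HR_INTERVALS > 5;  -- ratio exceeds 5%
```

Multiplying by 100.0 before dividing avoids integer division truncating the ratio to zero.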
SQL functions
The two types of SQL functions are column functions and scalar functions.
Column functions produce one summary row for each set of rows, whereas
scalar functions produce one value for each row within the set. The column
functions are sometimes also called summary functions, and they include:
SUM
AVG
MIN
MAX
COUNT(*)
COUNT(distinct column name)
The COUNT and SUM column functions are examples of the most commonly used
column functions. Example 3-11 shows a very simple example of the COUNT(*)
and SUM column functions.
The COUNT(*) counts rows that meet the WHERE clause conditions.
SUM(Q_VSM_FB_RAID) adds up the known values in the Q_VSM_FB_RAID
column (the amount of fixed-block storage that is defined as RAID storage, in gigabytes).
DB2 UDB will execute the statement by first finding the VMCAP table and
confirming end-user authorization. Next, it will extract into the conceptual
intermediate table the rows qualified by the WHERE clause, in this case those
whose I_VSM_IDX column (the internally generated identifier, or index, for a
storage server; see VMPDX for the storage server data associated with this
index) contains the value 4. DB2 UDB then applies the select list to the
intermediate table, creating a two-column result row by adding up the non-null
Q_VSM_FB_RAID values and counting the rows. For this query, DB2 UDB
produces a one-row, two-column result table (which could then be related to the
storage serial number in the VMPDX table).
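The statement walked through above can be sketched as:

```sql
-- Count the qualifying rows and total the non-null Q_VSM_FB_RAID values
-- for the storage server whose index is 4.
SELECT COUNT(*), SUM(Q_VSM_FB_RAID)
FROM VMCAP
WHERE I_VSM_IDX = 4;
```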
A scalar function is a function that, when applied to a row, returns a value. When
applied to a set of rows meeting the WHERE clause criteria, a scalar function
returns a value for each row in the set. There are more than 100 scalar functions.
Example 3-12 shows the DIGITS scalar function being applied to the SMALLINT
column I_VSM_IDX.
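A sketch of such a statement (the actual statement appears in Example 3-12):

```sql
-- DIGITS returns the character-string representation of the SMALLINT
-- value, padded with leading zeros.
SELECT DIGITS(I_VSM_IDX)
FROM VMPDX;
```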
Very often, data is stored with mixed-case characters, either on purpose or
because of data entry errors. Within TotalStorage Expert, the data is extracted
from the storage servers and attached hosts, so no erroneous user data is
injected into the tables. Such injection is possible only if the Expert data
tables are manipulated through commands that write directly to the tables.
For example, the UPPER scalar function allows searches that are not case
sensitive. The UPPER function converts data in the specified column to upper
case characters prior to doing the comparison to the constant. This function may
also be invoked under the name UCASE().
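For example, a case-insensitive search for a storage server nickname might be sketched as follows; the nickname value 'ESS01' is invented for illustration:

```sql
-- Fold the stored nickname to upper case before comparing, so the match
-- succeeds regardless of how the value was originally entered.
SELECT I_VSM_IDX, I_VSM_SN
FROM VMPDX
WHERE UPPER(I_SHORT_NAME) = 'ESS01';
```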
If there are column functions (SUM, AVG, MIN, MAX, COUNT) in a query select
list, along with columns not in a column function, the query must have a GROUP
BY clause. Example 3-13 shows the SELECT statement in its properly coded
sequence.
Every column in the select list that is not an argument of a column function must
appear in the GROUP BY clause. DB2 UDB evaluates and processes the
clauses in the following order:
1. FROM VPHARCAC
First, the FROM clause is applied: DB2 UDB locates the VPHARCAC
table and checks user authorization. VPHARCAC is the table containing
hourly performance statistics for logical arrays (based on VPCCH), generated
by a data preparation task in TotalStorage Expert.
2. WHERE Q_HR_CACHE_HITS <> 0
Assume that in the Q_HR_CACHE_HITS column, a value of 0 represents no
cache hits, and any non-zero number represents the total number of cache
hits occurring in this hour for this logical array (command chains that were
completed without requiring access to any DASD).
Next, the WHERE clause pulls a copy of all qualifying rows into an
intermediate table. In this case, all rows qualify except those with a zero
value. Essentially, the WHERE clause defines the set of rows that will be
processed further. Recall that this intermediate table is conceptual and
represents a complex series of steps executed by DB2 UDB.
3. GROUP BY I_MACH_IDX, I_DISK_NUM
The GROUP BY clause is next applied, sorting the intermediate table into
sets based on the I_MACH_IDX and I_DISK_NUM values. There is one group
for each unique set of values within the grouping columns.
4. HAVING Q_HR_CACHE_HIT_RR >10
DB2 UDB applies the HAVING clause. Like the WHERE clause, the HAVING
clause contains tests, conditions, or predicates. These tests are applied to
each group. In this example, the HAVING clause prompts DB2 UDB to further
process only groups that have a cache hit ratio greater than 10. The column
Q_HR_CACHE_HIT_RR indicates the Cache hit ratio * 1000 for read
requests (total read cache hits/read I/O requests * 1000).
5. SELECT Q_HR_CACHE_HIT_RR, I_MACH_IDX, I_DISK_NUM, Q_HR_CACHE_HITS
At this point, the SELECT list is applied to the qualified groups to create one
summary result row for each group, which contains the Cache Hit Ratio, the
storage server index number, the disk number of the disk group (the lowest
level identifier of a logical array), and the total number of cache hits occurring
in this hour.
6. ORDER BY Q_HR_CACHE_HITS DESC
Finally, the ORDER BY clause sorts the summary rows in descending order of
the total number of cache hits.
Figure 3-14 shows the sample output from the previous query. The script was
executed using the DB2 Utilities Command Center Script interactive function.
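The evaluation order above can be sketched with a runnable example. Python's sqlite3 stands in for DB2 UDB here; the VPHARCAC rows are invented, and aggregates (SUM, AVG) are applied so that every non-aggregated column in the select list also appears in the GROUP BY clause, as the rule stated earlier requires.

```python
import sqlite3

# Sketch of the clause evaluation order, with SQLite in place of DB2 UDB.
# Column names follow the text; the data values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE VPHARCAC (
    I_MACH_IDX INTEGER, I_DISK_NUM INTEGER,
    Q_HR_CACHE_HITS INTEGER, Q_HR_CACHE_HIT_RR INTEGER)""")
conn.executemany("INSERT INTO VPHARCAC VALUES (?, ?, ?, ?)", [
    (1, 100, 500, 250),   # machine 1, disk group 100
    (1, 100, 300, 150),
    (1, 101, 0,   0),     # zero cache hits: removed by the WHERE clause
    (2, 100, 40,  8),     # ratio <= 10: group removed by HAVING
])

rows = conn.execute("""
    SELECT I_MACH_IDX, I_DISK_NUM,
           SUM(Q_HR_CACHE_HITS) AS TOTAL_HITS,
           AVG(Q_HR_CACHE_HIT_RR) AS AVG_HIT_RR
    FROM VPHARCAC                      -- 1. FROM locates the table
    WHERE Q_HR_CACHE_HITS <> 0         -- 2. WHERE filters rows
    GROUP BY I_MACH_IDX, I_DISK_NUM    -- 3. GROUP BY forms groups
    HAVING AVG(Q_HR_CACHE_HIT_RR) > 10 -- 4. HAVING filters groups
    ORDER BY TOTAL_HITS DESC           -- 6. ORDER BY sorts the result
""").fetchall()
print(rows)
conn.close()
```

Only the group for machine 1, disk group 100 survives: its zero-hit row was removed in step 2, and the group with a ratio of 8 was removed in step 4.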
In the SELECT list, you can assign a label to a column name, expression, literal,
or function by using the AS clause followed by the label name (a character string
that follows the DB2 UDB naming convention). The AS keyword is optional.
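A minimal sketch of the AS clause follows, again with sqlite3 standing in for DB2 UDB; the expression and the label CACHE_HIT_RR are invented for illustration.

```python
import sqlite3

# Sketch of labeling a select-list expression with AS.
# SQLite stands in for DB2 UDB; the values are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.execute("SELECT 800 * 1000 / 4000 AS CACHE_HIT_RR")
label = cur.description[0][0]   # the result column carries the AS label
row = cur.fetchone()
print(label, row)
conn.close()
```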
Join predicates
Drawing data from multiple tables is a very easy procedure, and is primarily a
matter of naming the desired tables in the FROM clause. If you did no more than
that, however, DB2 UDB would take every row in the first table named and
logically connect it to each and every row of the second table named in the
FROM clause. Usually, every row in one table is not related to each and every
row in a second table.
When joining rows from one table to another, in addition to naming the tables to
be joined in the FROM clause, you must tell DB2 UDB exactly how the tables
are related. This is done in the WHERE clause, or in a clause
called an ON clause, which is an extension to the FROM clause. This part of a
WHERE clause or an ON clause is called a join predicate, which relates the
tables by columns containing common data. A join predicate is added to these
clauses to describe how rows in one table are related to rows in the other table.
Earlier in this chapter, you learned about the steps involved in extracting
information from tables. In this example, the column Q_VSM_TOTAL_CAP
contains the total amount of unformatted capacity, in gigabytes, in a particular
storage server. You could easily locate a row in the VMCAP table and then its
related row in the VMPDX table. It is possible to identify the rows that are
related because they have a column that contains a common value (I_VSM_IDX).
Writing an SQL join simply requires stating those steps as clauses in a query. In
order for DB2 UDB to recognize the relationship between the VMCAP and
VMPDX tables, however, it is necessary to incorporate join predicates into the
queries. It is very important to include the comma between the table names:
omitting it tells DB2 UDB to rename the first table by assigning it the name of
the second, just as custom column names are assigned in a SELECT list. Join
predicates may be formulated using a traditional syntax or an alternate syntax.
Example 3-15 shows how a traditional join statement might be written.
Figure 3-15 shows the results table when the statement is executed from the
DB2 UDB Command Center Interactive function.
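The two join syntaxes can be sketched as follows, with sqlite3 standing in for DB2 UDB. The VMCAP and VMPDX rows are invented for illustration; only the common column I_VSM_IDX follows the text.

```python
import sqlite3

# Sketch of a join predicate in the traditional syntax (predicate in the
# WHERE clause) and the alternate syntax (JOIN ... ON in the FROM clause).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE VMCAP (I_VSM_IDX INTEGER, Q_VSM_TOTAL_CAP INTEGER)")
conn.execute("CREATE TABLE VMPDX (I_VSM_IDX INTEGER, NICKNAME TEXT)")
conn.execute("INSERT INTO VMCAP VALUES (1, 420)")
conn.execute("INSERT INTO VMPDX VALUES (1, 'ESS-A')")

# Traditional syntax: tables separated by a comma, join predicate in WHERE.
trad = conn.execute("""
    SELECT P.NICKNAME, C.Q_VSM_TOTAL_CAP
    FROM VMCAP C, VMPDX P
    WHERE C.I_VSM_IDX = P.I_VSM_IDX
""").fetchall()

# Alternate syntax: the same predicate expressed in an ON clause.
alt = conn.execute("""
    SELECT P.NICKNAME, C.Q_VSM_TOTAL_CAP
    FROM VMCAP C JOIN VMPDX P ON C.I_VSM_IDX = P.I_VSM_IDX
""").fetchall()
print(trad, alt)
conn.close()
```

Both forms relate the tables through the common column and return the same rows.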
3.4 Summary
There are many ways to employ SQL in relation to the TotalStorage Expert
database or any compatible relational database. For brevity, this chapter does
not cover the myriad of other ways to mine, extract, and manipulate data. For
further information, refer to the IBM DB2 Universal Database SQL Reference
Volume 1, SC09-2974, and IBM DB2 Universal Database SQL Reference
Volume 2, SC09-2975, for each platform.
“How can I extract performance data from the TotalStorage Expert, so that I can
keep and use it outside of TotalStorage Expert?”
This chapter contains useful information about the different tools and methods
for extracting, manipulating, and exporting data from the TotalStorage Expert
database. We also examine the requirements and the safe database practices
that avoid unnecessary grief for you and your data. The goal of this chapter is
to prepare you to start quickly and efficiently, using your skills to get the most
out of your TotalStorage Expert storage server and attached host data.
For DB2 Universal Database (UDB) UNIX and Intel platforms, you can use the
Command Center or the Command Line Processor (CLP). There is a fully
compatible version loaded with the TotalStorage Expert product for your use. You
may be familiar with tools such as Query Management Facility (QMF™) for
Windows. It is a graphical user interface (GUI) that connects to any DB2 UDB.
There are numerous other tools and applications, such as IBM DB2 Intelligent
Miner™, IBM Object REXX, and LotusScript, which contain powerful scripting
and report-formatting capabilities and can access DB2 UDB on UNIX or Intel,
iSeries, and z/OS, as well as any database manager connected to DataJoiner®.
Please refer to the following Web sites for more information about these tools:
DB2 Intelligent Miner:
http://www.ibm.com/software/data/iminer
Object REXX
http://www.ibm.com/software/awdtools/obj-rexx/
LotusScript
http://www.ibm.com/software/data/db2/db2lotus/db2lscpt.htm
There is no direct way to print the built-in reports or save the report files
directly from TotalStorage Expert. However, you can issue standard SQL
statements to extract the data. All asset, capacity, and performance data is
available in the form of DB2 tables, and the DB2 UDB management tools help
you use that table data efficiently.
In this example, a connect DB2 UDB command was executed to connect to the
database named SWDATA (TotalStorage database alias). When this command
was executed, the end-user then entered a SELECT SQL statement against the
VPCUT table in the SWDATA database. The commands are not case sensitive.
The interactive mode is exited by typing QUIT and pressing Enter.
The DB2 UDB utilities also include a CLP that operates in a non-interactive
mode. It may be opened from the START --> IBM DB2 pull-down menu. SQL
queries are invoked by starting each SQL statement with the characters db2,
such as: db2 connect to database_a
Use the Control Center to add systems, instances, and databases to the object
tree. If you install a new computer system or create a new instance of the
TotalStorage Expert database, and you want to use the Control Center to
perform tasks on it, you must add it to the object tree as follows:
1. Open the Command Center window (see Figure 4-2).
2. Select the Script tab within Command Center (see Figure 4-3).
3. Within the Script window, you can input your SQL commands in the upper
window, and then save them as a script with an appropriate name. Figure 4-4
shows another example of an SQL query in the script window.
4. It is easy to save your new script as a new name after you have entered at
least one line in the upper window. Before executing the lines of script, go to
the menu bar at the top of the window, select Script -> Save Script As....
(see Figure 4-5).
5. After you have selected the Save Script As... option (Figure 4-5), the Save
As window will open with the parameters you can define for your new script
name (see Figure 4-6).
6. After clicking the OK button, the message DBA2061I appears indicating that
the script has been saved. If there were errors during the save process, you
will be provided a relevant message (Figure 4-7).
7. You can select the Help button for additional information and select OK when
you have completed your parameter definitions (Figure 4-8).
Interactive mode
Working within the DB2 UDB Command Center, you can run SQL statements,
DB2 UDB commands, and operating system commands in an interactive mode.
Like with most DB GUI tools, you will first connect to the database that you want
to run your queries against. From there, Command Center can display a list of
tables to which you have access. Command Center can also assist in writing the
query by allowing you to pick table names, column names, filters, conditions,
predicates, and other query elements from its windows.
You can also execute a stack of SQL statements within the Script tab portion of
the window (Figure 4-9). Multiple SQL statements can be executed as a unit of
work (UOW), which means every statement must complete successfully for the
unit of work to be committed. If any statement fails, the work done by all
previously completed statements is rolled back.
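The rollback behavior can be sketched as follows, with sqlite3 standing in for DB2 UDB; the table and the failing statement are invented for illustration.

```python
import sqlite3

# Sketch of unit-of-work behavior: several statements run as one
# transaction, and a failure rolls back the earlier statements' work.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.commit()

try:
    conn.execute("INSERT INTO t VALUES (1)")   # succeeds
    conn.execute("INSERT INTO t VALUES (1)")   # fails: duplicate key
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()                            # undoes the first INSERT too

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0: the whole unit of work was rolled back
conn.close()
```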
If you are more comfortable with an interactive mode of building your SQL query
statements, you can select the Interactive tab from within the Command Center
at any time. For the following examples, we will be discussing how to navigate
through the Interactive options, and what is available to you in the way of tools
and options to build your query statements. There are useful help screens
available to explain content and assist you through the tasks. You can select
them by selecting the Help tab in the menu bar at the top of the window.
Next, we introduce a new example of using the Interactive mode to create an
SQL statement, as follows:
1. After opening the Command Center, select the Interactive tab (Figure 4-10).
The types of statements that you can create might be limited, depending on
the application that you were using when you started SQL Assist. The
selections on the Start Screen are as follows:
a. Select: Creates a statement that returns rows that are based on criteria
that you specify on the SQL Assist notebook pages.
b. Insert: Adds rows, one at a time, to a table.
c. Update: Changes values in a table.
d. Delete: Removes rows from a table.
Once you have selected the Select radio button on the start screen, you can
select the Logon tab to connect to a database. You must be connected to a
database to use SQL Assist.
a. In the Database URL field, type the connection information. For example,
you might type jdbc:db2:SWDATA, where jdbc is the connection type, db2 is
the database type, and SWDATA is the database name (SWDATA is the
default name for TotalStorage Expert).
b. In the User ID field, type the user ID that you want to use to connect to the
database.
c. In the Password field, type the password for the user ID.
d. In the Driver identifier field, select the type of database to which you are
connecting.
e. In the Other field, type the location and name of the JDBC driver that you
want to use. This field might already contain a value, based on the
selection that you made in the Driver identifier field.
f. Connect: Connects to the server that is specified in the Database URL
field.
g. Disconnect: Disconnects from the server that is specified in the Database
URL field.
h. If you exit SQL Assist by clicking OK, the information that you enter on the
Logon page is saved and displayed the next time that you start SQL
Assist.
5. The next step in the SQL Assist task series is specifying the tables for an SQL
statement. Use the Tables page (Figure 4-14) to specify the tables that your
SQL statement will access.
a. The table names that you select in the Tables page are displayed in the
FROM clause of the SQL statement on the Review page. The Available
tables list displays the tables that you can use in your statement. These
tables are stored in the database to which you are currently connected. By
default, the tables listed are those whose schema is the user ID specified
for the database connection. If no tables in the database have that
schema, all the tables in the database are listed.
b. You can use the Filter schemas and Filter tables push buttons to limit the
number of tables that are displayed in the Available tables list
(Figure 4-15).
6. Use the Columns page to specify the columns that will be included in the
result set (Figure 4-16). The column names are displayed in the SELECT
clause of the SQL statement on the Review page. If you do not specify any
columns, all columns are selected, because the default SQL statement that is
generated is SELECT * FROM table name.
7. Use the Join page to join tables in an SQL statement (Figure 4-17). The Join
page displays the columns of each table selected on the Tables page. To
request a join:
a. Select a column in one of the tables. The tables are displayed in the order
that they are shown in the Selected tables list on the Tables page.
b. Select a column in another table.
c. If the columns have compatible data types, a grey line is displayed,
connecting the columns, and the Join button is available.
d. If the columns do not have compatible data types, an error message is
displayed in the status area at the bottom of the window.
e. Click Join to create the join. By default, a join is assumed to be an inner
join. You can also request other types of joins by clicking Join Type.
11.Use the Review page to display the SQL statement that was generated from
selections on the other pages (Figure 4-21). You can go to the Review page at
any time to see the current state of your SQL statement. After you display the
SQL statement, you can go to other pages in SQL Assist and change your
selections. When you are satisfied with the SQL statement, you can go to the
Review page and copy the statement to the clipboard, run it, or save it to a
file.
12.To complete the SQL Assist tasks, select the Finish tab (Figure 4-23).
Control Center
The DB2 UDB Utilities includes the Control Center, which provides an insight into
the database you are using. You can use the Control Center to manage systems,
DB2 Universal Database instances, DB2 Universal Database for OS/390
subsystems, databases, and database objects such as tables and views. In the
Control Center, you can display all of your systems, databases, and database
objects, and perform administration tasks on them. From the Control Center, you
can also open other centers and tools to help you optimize queries, jobs, and
scripts, perform data warehousing tasks, create stored procedures, and work
with DB2 commands. The following is a brief overview of how to discover useful
information about the TotalStorage Expert database, and how to use this as the
basis for your query statement creation.
Once you have opened the DB2 UDB Control Center, you can drill down to your
Expert database (SWDATA in this example) by using the Explorer window on the
left-hand side of the window (Figure 4-24).
On the right-hand side of the Control Center main window, you can view the
tables of the SWDATA database (since the Tables folder is highlighted on the
left-hand side). In the following graphic, we explore the CNODE table further by
viewing its columns (Figure 4-25). This is done by double-clicking the particular
table whose details you want to view. The column attributes are listed under the
window column headers.
From this window, you can further explore the table by selecting the tabs on the
upper portion of the window. We will now view the Primary Key(s) window for the
CNODE table (Figure 4-26). This is very useful information when you are
creating your own query statements. It will reduce the amount of research time
spent digging through hardcopy documentation.
Please refer to the following Web site for more information about IBM DataJoiner:
http://www.ibm.com/software/data/datajoiner/
DB2 QMF Version 8.1 transforms business data into a visual information platform
for the entire enterprise with visual data on-demand. Highlights of this release
include:
Support for DB2 Universal Database Version 8 functionality including IBM
DB2 Cube Views, long names, Unicode, and enhancements to SQL.
The ability to easily build OLAP analytics, SQL queries, pivot tables, and other
business analysis and reports with simple drag-and-drop actions.
For more information about QMF for Windows, refer to the following Web sites:
http://www.ibm.com/software/data/qmf/
http://www.rocketsoftware.com/qmf/
You can download the free QMF for Windows Try and Buy version from the
following Web site:
http://www-3.ibm.com/software/data/qmf/reporter/june98/downloads.html
After the nodes have been discovered, data collection tasks have been defined
and scheduled, and data has been collected successfully from the ESSs, you
can view, copy, and insert the data into your preferred spreadsheet package.
The TotalStorage Expert V2.1 provides the following ESS Performance Report
modifications:
Upgraded performance reports
The ESS performance reports show a sequential I/Os percentage column
within the disk utilization reports, and six additional columns are added into
cache reports.
New ranked performance reports
The ranked performance reports provide the data in the reports in descending
order for disk utilization, number of I/O requests, total cache hits, and NVS
cache full situations.
The following lists show the tree structure of these reports by storage type (ESS
versus ETL).
Capacity statistics:
View recent data
– Summary all storage servers
• Logical view
• Physical view
– Summary all SCSI hosts
– Summary all FC hosts
View historical data
– Capacity growth report
• Growth by storage server
• Growth by open system attached host
• Specify start date
• Specify end date
• Period Interval (month, quarter, every 6 months, year, week)
Volume statistics:
View Volumes by Single ESS volume by Logical Unit Number (LUN)
View Volumes by Group of volumes on a specific ESS
Performance statistics:
Storage Server with Available Performance Data
Capacity summary:
Server
Reporting period
System name/ID
Type:
– VTS
– Library
– Composite
Library
Name of composite
Active data
– Logical volumes
Performance summary:
Server
Reporting period
System name/ID
Type:
– VTS
– Library
– Composite
Library
Name of composite
Virtual mount time (sec)
– Overall:
• Maximum
• Average
– Cache-miss:
• Maximum
• Average
– Fast-ready:
• Maximum
Certain Excel (.XLS) workbooks are provided on the FTP site to cater for some
of the data extracted from the standard built-in reports; they can be customized
to provide data views not provided by the standard reports. This data can
subsequently be linked easily into office reports, either for formal presentations
or as part of regular management reporting.
We start by displaying the detailed performance built-in report for a Disk Cache
report at the device adapter level, where we selected the data for device
adapter 4 for both loops.
The following steps describe how to drill down to the particular report and
select the data:
1. Select View Reports.
5. Highlight the header row and right-click and select Copy to copy the
highlighted data to the clipboard.
6. Switch across to Microsoft Excel and open a new workbook. In all our
examples we have used Microsoft Excel 2002.
8. Next delete the column heading and column for Graph Device Adapter by
highlighting the column, and selecting the Delete function from the Edit menu.
9. Next go to the Expert display tables and extract the data required for the
graph.
10.This data is then copied and pasted into the spreadsheet worksheets in the
Built-in Example.xls workbook found in the following FTP site under the folder
Spreadsheet Examples:
ftp://www.redbooks.ibm.com/redbooks/SG247016
A graph is automatically created from the specific cell references. It is vital
that the sets of data match as the columns for the graphs are intermingled.
Once you become familiar with the Excel workbook, you can expand the scope of
your reports; these instructions are not intended to be comprehensive enough to
cover all the options you may wish to explore.
The data from this or any other ESS Expert report can be copied in the same
manner and then manipulated and graphed with Microsoft Excel to produce
graphs and reports. Simply find a spare set of cells in the same worksheet, and
reference the data that you want to see in your report.
The data can then be graphed using the Microsoft Graphing button to create
your own representation of the ESS Expert data. Figure 5-15 is an example of a
crude graph created without further customization within the Excel graphing
tool; with greater exposure to the tool, you will be able to enhance the
presentation of this data.
This report displays a summary of your storage server hardware assets. It allows
you to view information on all the ESSs discovered at your installation. For
example, you can receive the ESS serial numbers and the fibre connectivity
mode of your installation. If you would like to know the meaning of each field, you
can check it by using the online help.
This panel displays a summary of your storage server assets. This report shows
information like IP addresses assigned to the ESS and LIC level information. If
you need further information about the ESS, you can launch the ESS Specialist
for this ESS by using the Launch ESS Specialist hotlink beside the nickname
and serial number.
This report tells you the current LIC level on all ESSs at your installation. Certain
functions of the ESS, such as Copy Services, require a specific level of LIC. Even
if you do not use those advanced functions, IBM may provide you a LIC upgrade.
This report helps you to identify if you need to install a new LIC.
As you can see, Logical View gives you the overview of capacity distribution and
host connectivity.
Note that the Total Raw Capacity column includes both parity (if you configure
the RAID-5 portion) and emulation overhead (for logical volumes). The Fixed
Block column and the S390 (CKD) column do not. More precisely, the capacity
per DDM times the number of DDMs loaded on your ESS (excluding spare
DDMs) equals the number appearing in the Total Raw Capacity column.
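As a sketch of that arithmetic, the DDM size and counts below are purely illustrative, not taken from a real ESS configuration.

```python
# Sketch of the Total Raw Capacity relationship stated above: capacity
# per DDM times the number of non-spare DDMs. Figures are hypothetical.
GB_PER_DDM = 36.4        # hypothetical DDM size in GB
TOTAL_DDMS = 128         # hypothetical DDMs loaded on the ESS
SPARE_DDMS = 8           # spare DDMs are excluded from the calculation

total_raw_capacity = GB_PER_DDM * (TOTAL_DDMS - SPARE_DDMS)
print(total_raw_capacity)  # raw capacity in GB
```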
Numbers appearing in the Logical Control Unit column show the logical
address of each emulated storage control. This is also known as a CUADD
parameter in the CNTLUNIT macro on an IOCP, or the equivalent definition in
HCD.
Figure 5-22 shows the report that you get when you click the underscored
text Volume Summary by Logical Control Unit.
When you click a number appearing in the Total S/390 Volumes column (see
Figure 5-21), or a number appearing in the Total Volumes field (see
Figure 5-22), you will see a report like the one presented in Figure 5-23.
As you can see, this report shows you detailed information about logical
volumes, such as device addresses (or unit addresses) assigned to these logical
volumes, and the number of alias addresses each volume has. Refer to
Figure 5-42 on page 153 for a diagram that will help you interpret the location
information appearing in this report.
Just like an S/390 report, you can click a number appearing in either the SCSI
Hosts or FC Hosts column to see the capacity report for the respective
attachment type. Figure 5-24 shows an example of a SCSI hosts capacity report.
This report shows you how many logical volumes each SCSI host owns. If these
volumes are shared with another host system, it also gives you a breakdown of
how many volumes are shared across multiple hosts attached through either
SCSI or fibre connections.
This report gives you information on the ESSs from a hardware viewpoint, such
as fibre connection mode, number of host adapters and their types, and so on.
Two additional reports are available from this report: one contains Disk Drive
Module (DDM) information, and the other contains cluster information.
DDM information
When you click a number that appears in the Total DDMs column, the
TotalStorage Expert shows information on DDMs, which are loaded in the
corresponding ESS (see Figure 5-26). Note that the Quantity column of DDMs
does not include the number of spare DDMs on your ESS.
Cluster information
When you click the text VIEW, which appears in the Cluster Info column of the
Physical View report (refer back to Figure 5-25), you see a cluster information
report like the one presented in Figure 5-27. You can see the amount of cache and NVS
installed on the ESS you selected.
When you click Summary All FC Hosts, the TotalStorage Expert shows you
all host systems that have access to your ESSs through fibre connections (see
Figure 5-29).
On either report, you can click a number that appears in the Number of Storage
Servers column to have the TotalStorage Expert show you how much capacity is
assigned for the corresponding host (see Figure 5-30).
[Figure: diagram of hosts HOST1 through HOST4 attached to volumes VOL1
through VOL4 over SCSI paths and Fibre Channel paths, showing per-volume
SCSI, FC, and Any Type host counts]
The TotalStorage Expert cannot account for hosts that have access to the
ESS through Fibre Channel but do not have access profiles, since it has no
way to know whether your ESS is running under Access-Any mode.
The TotalStorage Expert shows you the earliest date available to create a report.
As you can see in this figure, the report shows the historical capacity report
by ESS, along with the Graph column. If a date cell in your report has an
asterisk (*), the data is a projected value; the TotalStorage Expert projects
values if at least three historical dates are available.
The TotalStorage Expert gives you a report on how much capacity is assigned to
a certain host, so when you have the following administration policy, capacity
reports will help you plan additional storage.
When you use this method, the TotalStorage Expert’s capacity growth report
would come close to the logical space utilization report. Thus, you can use the
historical report to analyze or project demand on capacity, and it will help you
plan to install additional capacity.
When you have defined all available space to logical volumes, and you have
assigned all fixed block storage to your open systems through the ESS
Specialist, the TotalStorage Expert shows you the same information, and the
historical report plot is flat, unless you have additional capacity features.
After you have scheduled data collection and data preparation, you can view
detailed information about disk utilization and Disk <> Cache transfer rates.
When you would like to see performance management reports, click Manage ESS ->
Manage Performance -> View Reports and the TotalStorage Expert prompts
you to specify how you want to create reports (see Figure 5-35).
After you have selected an ESS and reporting period, and either the Summary or
the Ranked report option, click the Show Reports button.
The Disk Utilization Summary report shows how busy the DDMs on an ESS were
during the reporting period. As you can see, dates appearing in the left-most
column indicate that a browser drill down hyperlink is available. If you would like
to see a detailed report for a specific day, click the date you want to see.
The Disk <> Cache report shows you how many I/O operations occurred
between DDMs and the Cluster cache, known as staging and destaging. Like the
Disk Utilization Summary report, if you would like to see a detailed report for a
specific day, click the date you want to see.
The Cache Summary report shows you how well the cluster cache works for your
application. Like the other two summary reports, if you would like to see a
detailed report for a specific day, click the date you want to see.
[Figure: ESS component diagram showing Cluster 1 and Cluster 2 with their
caches, device adapters DA2 through DA4, disk groups, and logical volumes]
Figure 5-42 Scope of each level of report, and ESS component location information
Note that Disk Utilization reports do not have cluster level reports, as Disk
Utilization reports deal with physical device performance.
Note: The ESS reports show a Disk Group and a Disk Number. If a rank is
used as JBODs, the TotalStorage Expert will show a Disk Number for each
disk. If a rank is in RAID-5, the TotalStorage Expert will show the Disk Number
as “N/A”.
Note that disk utilization reports do not have this level of report, as disk utilization
reports deal with physical device performance.
There is a collection of small boxes at the top right corner of the report, which we
refer to as the Performance Navigator Matrix. The following is an expanded
image of the Performance Navigator Matrix (see Figure 5-44).
Every performance detail report has a Performance Navigator Matrix at the top
right corner of the report, and you can simply click whichever one of the boxes
you would like to see.
The columns DU, DC, and CR stand for Disk Utilization, Disk to/from Cache, and
Cache Report respectively. So, these refer to the type of report. The rows C, DA,
DG, and LV stand for Cluster, Device Adapter, Disk Group, and Logical Volume
respectively.
The box in a light gray color indicates the report that you are currently looking at.
For example, in Figure 5-43 and Figure 5-45, it shows the cluster level of the
Cache Detail report. You can click the other boxes in a dark gray color to see the
other detail reports.
When you click the Hour of Day column on a detail report, you can see the exact
performance sample data taken for the respective hour. Figure 5-46 is an
example of the granular report.
When you click the graph icon, the TotalStorage Expert shows you graph charts
from the report. Figure 5-47 shows an example of a graph chart showing the
number of I/O request distributions.
Note that the TotalStorage Expert will not break this affinity even if performance
data is captured after failover. For example, if Cluster 1 fails and failover has
completed, Cluster 2 will control all of the disk groups (ranks). However, views
from the TotalStorage Expert will not be changed.
5.6.1 Interface for viewing host volume data: Host data collected
From the Expert navigation tree, select Manage ESS->Manage Volume
Data->View Recent Data to bring up this panel (see Figure 5-48).
5.6.2 Interface for viewing host volume data: No host data collected
If no host data has been collected and only asset and capacity collection has
been completed, the View Recent Data panel has only the contents in
Figure 5-49.
5.6.3 View data for a single volume by the LUN serial number
One way to view the report for a single volume is to enter a valid LUN serial
number (9 characters in length), as it is known by a storage server, and click OK
(see Figure 5-50).
Figure 5-50 View Data for a Single Volume (by specifying the LUN serial number)
Figure 5-51 View Data for a Single Volume (by specifying an invalid LUN serial number)
5.6.4 View data for a single volume by the device name on a host
If you choose to enter the Host Address first, select the radio button for Single
ESS volume by logical device name on a specific host on the main View
Recent Data panel (this automatically selects the Host Address radio button),
pick a host address from the drop-down list, and click OK (see Figure 5-52).
Figure 5-52 View Data for a Single Volume (by specifying device name on a host)
On the dialog that pops up (see Figure 5-53), select the logical device name to
which the volume maps and click OK to view the Storage Server Capacity -
Logical Volume Details report.
If you choose to enter the Logical Device Name first, select the Logical Device
Name option (see Figure 5-53), and enter the logical device name to which the
volume maps on the host, and click OK.
In the panel in Figure 5-54 select the host and click OK to view the Storage
Server Capacity - Logical Volume Details report for the volume identified by the
host and the logical device name.
If, on the main View Recent Data panel, you enter an invalid logical device name
(either you mistype it, or no host has this logical device name defined) and click
OK, a popup message appears indicating that there is no data in the
TotalStorage Expert database for this device (see Figure 5-55).
To view data for a group of open system volumes within an ESS (see
Figure 5-56), select the option for Group of volumes on a specific ESS on the
main View Recent Data panel, choose an ESS and click OK.
Subsequently, you will see a panel (Figure 5-57) containing disk group
information for the selected ESS. You can click the hyperlink for the Disk Group
of interest to view data for the volumes.
Starting off with simple statements and reports and then increasing the
complexity will aid in the report creation and in troubleshooting syntax and
calculations.
Important: We recommend that you back up your database prior to doing any
manipulation of it. We also strongly recommend that you refrain from editing
your original database tables. You should edit only copies of the database to
avoid severe corruption of the existing original database.
A) Export data to file format DEL (delimited), WorkSheet Format (WSF), Comma
Separated Variables (CSV), or Integrated Exchange Format (IXF). The
commands provided below use SQL. The examples shown are for the Microsoft
Windows platform; only the path syntax needs to change to make them work on
an AIX platform.
Create a folder for the output files of the data extract. For example:
c:\ibmout
1. Select Start-> Programs-> IBM DB2-> Command Line Processor (CLP).
2. Connect to the DB2 using the following command in the CLP window:
connect to swdata user db2admin using db2admin
The response should be a few lines saying that you are connected and that the
level of DB2 is 7.2. Replace the word db2admin (following using) with the
actual password you use for your database instance.
3. Issue the following command to extract the data from the VPVPD table
substituting the folder and replacing the mmdd with the date of the extract (The
VPVPD table contains cluster-level and storage server-level configuration
data generated at the start of Performance Data Collection):
export to c:\ibmout\vpvpdmmdd.txt of del select * from vpvpd
4. Issue the following command to extract a specific day’s worth of data from the
VPCRK table, substituting the date to be extracted and the same substitution
as in #3 for the filename (VPCRK table contains logical array-level
performance data):
export to c:\ibmout\vpcrkmmdd.txt of del select * from vpcrk where
pc_date_b = 'mm/dd/yyyy'
Please be patient while this process takes place. The prompt will return when
the process is complete.
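The DEL export writes one comma-delimited line per row, with character columns quoted. The following minimal sketch emulates that output in Python using SQLite and an invented three-column stand-in for the VPCRK table (the real table has many more columns; this is an illustration, not the product's export code):

```python
import sqlite3

# In-memory stand-in for the DB2 VPCRK table (columns invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vpcrk (m_mach_sn TEXT, pc_date_b TEXT, pc_ior_avg REAL)")
conn.executemany("INSERT INTO vpcrk VALUES (?, ?, ?)",
                 [("2105-12345", "06/06/2002", 152.4),
                  ("2105-12345", "06/07/2002", 98.1)])

def export_del(cur, day):
    """Return DEL-format lines (strings quoted, numbers plain) for one day's rows."""
    cur.execute("SELECT * FROM vpcrk WHERE pc_date_b = ?", (day,))
    lines = []
    for row in cur.fetchall():
        fields = ['"%s"' % v if isinstance(v, str) else str(v) for v in row]
        lines.append(",".join(fields))
    return lines

print(export_del(conn.cursor(), "06/06/2002"))
# ['"2105-12345","06/06/2002",152.4']
```

As with the db2 export command, the WHERE clause restricts the output to one day's worth of rows.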
The following can also be performed through the DB2 Utilities Command Line
Interface or Command Center. The commands are not case sensitive; they are
presented below with the following explanations:
1. First, we connected to the database with the following command, where
swdata is the TotalStorage Expert DB2 database, user db2admin, using the
password of db2admin:
connect to swdata user db2admin using db2admin
2. Select all (column) information from table CNODE; all rows are implied by the
asterisk (*):
select * from cnode
3. The result is stored as a table with columns (field names) and rows (field
values). The sample output is as shown in Example 6-1.
Select all (column) information from table CNODE, returning only the rows
where the I_NODE_ENTITY column matches the value 2. Example 6-2 shows
the sample statement and output for this.
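To see the effect of the WHERE clause, here is a small Python sketch using SQLite and an invented two-column stand-in for the CNODE table (the real table layout differs); the filter keeps only rows whose I_NODE_ENTITY value is 2:

```python
import sqlite3

# In-memory stand-in for the CNODE table (columns invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cnode (i_node_entity INTEGER, i_node_name TEXT)")
conn.executemany("INSERT INTO cnode VALUES (?, ?)",
                 [(1, "nodeA"), (2, "nodeB"), (2, "nodeC")])

# select * from cnode               -- every row, every column
all_rows = conn.execute("SELECT * FROM cnode").fetchall()

# select * from cnode where i_node_entity = 2   -- only the matching rows
matching = conn.execute(
    "SELECT * FROM cnode WHERE i_node_entity = 2").fetchall()

print(len(all_rows), len(matching))  # 3 2
```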
The following will give you the IP addresses and ESS serial numbers of each
cluster in your environment:
SELECT * FROM VCLUA
Important: The following scripts were used in the DB2 UDB Utilities
Command Center Interactive mode with SQL assist (unless noted otherwise).
If you save these scripts, or copy them to the script tool within the DB2 utilities,
you will need to use a semi-colon ; at the end of every SQL statement. For
example, you would place a semi-colon at the end of the connect statement in
the DB2 Utilities script tool, Command Line Processor, or Command Window.
You will need to connect to the database prior to issuing query statements
against it, or set up a script to do this.
After the Command Center is started, select the Interactive tab and then the SQL
Assist button on the right hand side of the window. Sign into the database and
select the tables and columns for your report. The tables VMPDX (asset/capacity
root table), VPSNX (performance root table), and VPHSS (hourly performance
statistics for storage servers) were selected to allow linking of the serial number
and ESS nickname to the cache transfer information.
Once the columns are selected, you can proceed to set up the JOIN clause of
your query. Figure 6-1 shows an example of the join defined for this example
query statement. You can select from any two or more tables to define your JOIN
clause.
Right-click on the first column ID you want to join and then on the second. When
you click on the second (or subsequent) column IDs, a line will be displayed to
indicate a join is selected. If a join has incompatible values between the column
IDs, the lines will appear in red. If the join is compatible, the link line will appear
as blue.
When you are satisfied with a JOIN linkage, click the JOIN button on the right
hand side of the window and the linkage line will appear in green designating the
most recent linkage selected and joined. You can define a special JOIN type
(outer, inner, and so on) by selecting the Join Type button on the right hand side
of the window, and selecting the appropriate radio button in the secondary
window that displays.
At any time, you can unjoin linkages one step at a time by clicking the Unjoin
button on the right hand side of the JOIN window. For further details and
examples, refer to the application help screens by selecting the Help button on
the Command Center menu bar.
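The join built in SQL Assist links the nickname in VMPDX to the hourly statistics through the shared serial-number and index columns. Below is a minimal sketch of the same linkage using SQLite, with invented column subsets (the actual DB2 tables contain many more columns, and the exact join-column names may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vmpdx (i_vsm_sn TEXT, i_short_name TEXT);          -- asset/capacity root
CREATE TABLE vpsnx (i_vsm_idx INTEGER, i_vsm_sn TEXT);          -- performance root
CREATE TABLE vphss (i_vsm_perf_idx INTEGER, q_cache_xfer REAL); -- hourly stats
INSERT INTO vmpdx VALUES ('12345', 'Silvertip1');
INSERT INTO vpsnx VALUES (7, '12345');
INSERT INTO vphss VALUES (7, 321.5);
""")

# Same JOIN shape as built in SQL Assist: nickname -> serial number -> perf index.
row = conn.execute("""
    SELECT m.i_short_name, h.q_cache_xfer
    FROM vmpdx m
    JOIN vpsnx p ON p.i_vsm_sn = m.i_vsm_sn
    JOIN vphss h ON h.i_vsm_perf_idx = p.i_vsm_idx
""").fetchone()
print(row)  # ('Silvertip1', 321.5)
```

An incompatible join (for example, joining a serial-number column to a numeric index) is exactly what SQL Assist flags with a red link line.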
Example 6-3 shows a query script that will extract data for a report; it selects
two weeks' worth of data from the VPHVOL table. This discretely selected data
(not all rows of the VPHSS table) is grouped by cluster, adapter card, and array,
for every hour of each day that performance data is collected.
This sample script can be found in this redbook’s FTP site in the folder Scripts
with the file name hour_perf_by_ess.txt.
In the Command Center Condition window, the condition BETWEEN was used to
specify what date range to search and extract data from the VPHSS table.
Figure 6-2 represents the sample output from the previous query. The number of
rows has been truncated for this graphic example.
1. Open the Command Line Processor by selecting Start --> Run --> Open: and
then type in db2cmd and click OK (Figure 6-3). You can also go to the DB2
UDB Utilities in Programs and then select Command Line Processor.
2. Change the directory to where you have the script file(s) stored. In this
example we are using the C:\TEMP\ directory. They may be stored anywhere
you prefer.
3. Type the command db2 -tf scriptname.xxx to output the results in an
unformatted fashion to your screen. This is only useful for small script outputs.
For larger output, redirect the script output to a text file. In this case, type the
following command (see also Figure 6-4):
db2 -tf scriptname.xxx > c:\TEMP\scriptname_output.xxx
where scriptname is the name of the script and xxx is the extension you prefer. A
plain text file output is useful. You can also export the data in DEL, CSV, or
WKS format for further manipulation.
The script executed in this example is provided here (Example 6-4). On this
redbook’s FTP site, this script can be found as a file sqlscript.scr in the folder
Scripts.
Note that the final line contains fetch first 10 rows only; this extracts only
the first 10 rows the script finds and returns them in the report (Figure 6-5). If you
use this technique to limit your test script output, remember to remove the
FETCH clause before using the script in your production environment.
It is possible to create a daily, weekly, and monthly view option similar to what
ESS Expert provides at higher levels within the ESSs. Here is how you currently
drill down to the disk level for a given database within ESS Expert:
1. Select a Storage Server, for example, Silvertip1.
2. Select the Start Date - 10/01/2002.
3. Select the End Date - 12/31/2002.
4. Select Adapter: 4, Loop: B.
5. Select Group: 2, Number: N/A.
6. Then select the upper right matrix box buttons LV-DC.
7. Select Disk Group Disk Number 2: N/A.
8. Then select Show Report.
This is where the Disk <> Cache Transfer Report at the Volume Level is
displayed. You then select the lower right box of the Level Report matrix (LV-CR)
and you can view the Cache Report at the Logical Volume Level. This data is
only displayed for the selected day. You want to get this data presented over
different periods of time like a week, a month, or six months.
The initial challenge for you is to gather data over a period of time to establish a
baseline performance level. After this is established, you are in a better position
to determine if you are experiencing a performance bottleneck for a given
incident, an application load, or some other root cause.
Currently, you have no baseline levels against which to compare the daily
numbers. It would be very time consuming to collect this data (for example, for
the past month) for every ESS or host disk or disk group you need to analyze.
This would require a significant amount of data extraction and massaging, a
task you rarely have the time to devote to.
Through the ESS, it appears that the volume ID (nnn-ESS serial number) is a
common name through which you can discuss specific performance aspects of
the physical components of storage with your open systems administrative
personnel. You are planning to use these names (volume IDs) to identify which
disks make up the databases you need to monitor and provide information
about.
We provide a sample script for this purpose in the appendix “Sample script for
extracting volume level performance data” on page 204.
In the example script, we use the cache report information from table VPHVOL.
In this script we select only one ESS serial number for two weeks' worth of data;
if the script is modified, you can also select multiple ESSs, or search by ESS
nickname and multiple dates. You could also filter for particular hours of the day
and then view trends through time.
Table 6-1 lists the tables and related columns that can be used to extract the
information you may want for a historical Volume ID-Host report. There are more
tables and columns you may want to use, depending on your particular
requirements.
This example report would use the VPCCH table column PC_DATE_B to collect
days or weeks of related data. You could join the required tables by date and
volume ID (and any other information you prefer to have in your report). You
can use time ranges that fall between certain dates for historical information,
and derive averages, peaks, and trends from that information.
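Once the rows for a date range are extracted, deriving averages and peaks is straightforward. Here is a small Python sketch with invented sample rows (the field names are illustrative stand-ins for the VPHVOL/VPCCH columns, not the real table layout):

```python
from statistics import mean

# Invented hourly samples: (date, volume ID, I/O rate).
hourly = [
    ("2002-10-01", "100-12345", 120.0),
    ("2002-10-01", "100-12345", 180.0),
    ("2002-10-02", "100-12345", 150.0),
]

def summarize(rows, vol_id, start, end):
    """Average and peak I/O rate for one volume over an inclusive date range."""
    rates = [r for d, v, r in rows if v == vol_id and start <= d <= end]
    return {"avg": mean(rates), "peak": max(rates), "samples": len(rates)}

print(summarize(hourly, "100-12345", "2002-10-01", "2002-10-31"))
# {'avg': 150.0, 'peak': 180.0, 'samples': 3}
```

The same summarization can of course be done directly in SQL with AVG and MAX, or in a spreadsheet after exporting the rows.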
You may have resources available to you, such as experience with DB2
administration and spreadsheets. Scripting can be used within the ESS Expert
DB2 Utilities to aid in the database table export process (see 4.3, “IBM DB2
Utilities Command Center features” on page 83, or use the TotalStorage Expert
SWExport and SWImport utilities), as well as within most commercial
spreadsheet products. The following section provides a brief overview of the
select SQL statement. The select command reads data from specific areas
within tables of your database and displays the results on your screen.
The file name, which follows the name of the batch file to be executed, can be
called anything meaningful that you prefer. The batch file collects all TotalStorage
Expert 2.1 volume performance data from the DB2 database and parses the
information into worksheet (WSF) file format. The utility will create a .zip file
named filename.zip in the current directory, and it includes separate WSF files for
selected information extracted from tables VCMHOSTVOL, VHLPATH, (VPCCH),
VPHVOL in the TotalStorage Expert DB2 database. The VPCCH file is quite
large, and is also incorporated into the script. The start and stop date variables
may be used to extract data from VPCCH, but as a default, the script has the
VPCCH data extraction commented (rem) out so it will not execute.
The script output will provide a specific range of open-system host volume
relationships; vpath and hard disk data for ESS volumes (VOLSER or volume
serial number); volume level sample interval statistics (if desired); and hourly
data volume level statistics. Variables for particular hours can also be
incorporated into the script if desired using the date example from the script.
Hourly data can be extracted from the VPCCH and VPHVOL tables if hourly
variables are used within the script where appropriate. Keep in mind that,
depending on your environment, this data extraction can become time- and
resource-consuming, and may inhibit TotalStorage Expert performance during
periods of high processor usage.
Note: Keep in mind that the restrictiveness of your queries, the number of
tables queried, and the extent of the data to be gathered all have an impact
on the performance of the host where TotalStorage Expert is running, on how
long the script execution will take, and on the performance degradation that
the TotalStorage Expert may experience with concurrent tasks such as
performance data collection and data preparation. Prototype your SQL
queries conservatively before incorporating them into scripts. Keep your
queries simple, and build more complex statements upon what is currently
working. This will simplify query and script troubleshooting.
Example 6-5 All hourly perf query by date, card, loop, cluster
SELECT
*
FROM
DB2ADMIN.VPHAD,
DB2ADMIN.VPSNX,
DB2ADMIN.VMPDX
WHERE
(
(
DB2ADMIN.VPSNX.I_VSM_IDX = DB2ADMIN.VPHAD.I_VSM_PERF_IDX AND
DB2ADMIN.VMPDX.I_VSM_SN = DB2ADMIN.VPSNX.I_VSM_SN
)
AND
(( DB2ADMIN.VPHAD.I_VSM_PERF_IDX = 2 ) and ( DB2ADMIN.VPHAD.D_PR_DATE=
'2002-06-06' )
AND ( DB2ADMIN.VPHAD.I_LOOP_ID = 'B' )
AND ( DB2ADMIN.VPHAD.I_CARD_NO = 1 )
AND ( DB2ADMIN.VPHAD.I_CLUSTER_NO = 1 ))
)
This sample script can also be found in this redbook’s FTP site in the folder
Scripts with the file name hour_perf_by_date_loop_card_cluster. Figure 6-6
shows an example of the output generated by the previous script. This was
generated within the DB2 Utilities Command Center. The variables for the loop,
card, date, and cluster are hard coded in this query example, but can be set by
the use of variables in a script if desired.
The events are logged in the CMSGS table. Queries such as the following assist
in identifying trouble areas in which to undertake a more granular analysis.
This sample script can also be found in this redbook’s FTP site in the folder
Scripts with the file name threshold.txt. Figure 6-7 shows an example of the
script output.
This sample script can also be found in this redbook’s FTP site in the folder
Scripts with the file name 200_hi_arrays_date_time.txt.
Example 6-8 shows an example of a simple report that shows the capacity (in
GB) for each ESS by nickname.
from vmpdx T1
,vsxdaldt T2
,vmcap T3
order by T1.I_SHORT_NAME;
This sample script can also be found in this redbook’s FTP site in the folder
Scripts with the file name total_gb_by_ess. Example 6-9 shows an example of
output from the previous SQL query statement.
This sample script can also be found in this redbook’s FTP site in the folder
Scripts with the file name host_vol_assign.txt. Figure 6-8 shows output from a
DB2 Utilities Command Center session for the previous script.
This sample script can also be found in this redbook’s FTP site in the folder
Scripts with the file name pers_vpcch.
Query: VPSNX
select I_VSM_IDX
from VPSNX
where I_VSM_SN='ESS serial number'
Now we have an index value that is associated with the ESS serial number. We
will call this value X for this example. Now for the next query:
Query: VPHCLCAC
select I_CLUSTER_NO,
MAX(Q_HR_ADAPTERS),
MAX(Q_HR_ARRAYS),
MAX(Q_HR_VOLUMES),
D_PR_DATE
from VPHCLCAC
where I_MACH_IDX = X
AND D_PR_DATE >='BeginDate'
GROUP BY I_CLUSTER_NO, D_PR_DATE
From each row, take the cluster, max adapter value, max array value, and max
volume value.
Query: VPHAD
select AVG(Q_HR_DEV_UTIL),
MAX(Q_HR_DEV_UTIL),
MAX(C_PR_CONFIG_CHG),
MAX(I_DU_THRESHOLD)
from VPHAD
where I_VSM_PERF_IDX = X
AND D_PR_DATE ='value of D_PR_DATE from VPHCLCAC query'
AND I_CLUSTER_NO ='value of I_CLUSTER_NO from VPHCLCAC query'
Now the code will parse all of the values taken from both tables and compile
them into a report.
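The three-query flow above can be sketched as a loop, with each query replaced by a stub that returns invented sample data (the column subsets and values here are illustrative only; the real queries run against the DB2 tables):

```python
def query_vpsnx(serial):
    """Step 1: map the ESS serial number to its index value X (stub)."""
    return {"ESS serial number": 7}[serial]

def query_vphclcac(x, begin_date):
    """Step 2: one row per cluster/date with max adapter/array/volume counts (stub)."""
    return [{"cluster": 1, "max_adapters": 4, "max_arrays": 8,
             "max_volumes": 32, "date": begin_date}]

def query_vphad(x, date, cluster):
    """Step 3: utilization statistics for that cluster and date (stub)."""
    return {"avg_util": 12.5, "max_util": 40.0, "threshold": 80}

def build_report(serial, begin_date):
    x = query_vpsnx(serial)                      # VPSNX query
    report = []
    for row in query_vphclcac(x, begin_date):    # one VPHCLCAC row per cluster/date
        stats = query_vphad(x, row["date"], row["cluster"])  # matching VPHAD stats
        report.append({**row, **stats})          # merge both tables' values
    return report

print(build_report("ESS serial number", "2002-06-06"))
```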
Query: VPSNX
select I_VSM_IDX
from VPSNX
where I_VSM_SN='ESS serial number'
Now we have an index value that is associated with the ESS serial number. We
will call this value ‘X’ for this example. For the next query, we will look in table
VPHVOL to get the top ten volumes ranked by I/O for each hour.
Now the code will loop through the VPHVOL table query results. For each row,
we will get the cache holding time for each cluster for this hour:
Query: VPHCLCAC
select I_CLUSTER_NO,
MAX(Q_HR_AVG_HOLD_TIME)
from VPHCLCAC
where I_MACH_IDX = X
and D_PR_DATE = 'input begin date'
and I_PR_HOUR = 'I_PR_HOUR result from VPHVOL query'
Group by I_CLUSTER_NO
Now, the code will loop through each row of results from the VPHCLCAC table
query, and take the cluster and max hold time:
Query: VPHVOL
select I_CLUSTER_NO,
I_CARD_NO,
I_LOOP_ID,
I_DISK_GRP_NO,
I_DISK_NUM,
I_VOL_NUM,
I_VOL_TYPE,
I_VOL_ADDR,
AVG(Q_HR_IO_RATE),
AVG(Q_HR_NIO_R),
AVG(Q_HR_SIO_R),
AVG(Q_HR_NIO_W),
AVG(Q_HR_SIO_W),
AVG(Q_HR_RMR),
AVG(Q_HR_NVS_DELAY / Q_HR_TOT_IO_REQS),
AVG(Q_HR_CACHE_HIT_R),
AVG(Q_HR_CACHE_HIT_RR),
AVG(Q_HR_CACHE_HIT_WR),
AVG(Q_HR_TOT_IO_REQS),
AVG(Q_HR_TOT_IO_R)
from VPHVOL
WHERE I_MACH_IDX =X
AND D_PR_DATE ='input begin date'
AND I_PR_HOUR = 'I_PR_HOUR result from VPHVOL query'
GROUP BY I_CLUSTER_NO, I_CARD_NO, I_LOOP_ID, I_DISK_GRP_NO,
I_DISK_NUM, I_VOL_NUM, I_VOL_TYPE, I_VOL_ADDR
Now the code will loop through each row of the results of the VPHVOL table
query, and use each of the row results for the report.
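The ranking-and-lookup step described above can be sketched as follows; the sample data and the hold-time lookup are invented stand-ins for the VPHVOL and VPHCLCAC query results:

```python
# Invented per-cluster MAX(Q_HR_AVG_HOLD_TIME) values for the hour in question.
hold_time = {1: 95, 2: 110}

# Invented VPHVOL rows for one hour: cluster, volume number, hourly I/O rate.
volume_rows = [{"cluster": (i % 2) + 1, "volume": i, "io_rate": i * 10.0}
               for i in range(15)]

# Keep the ten busiest volumes (the "top ten ranked IO" step).
top_ten = sorted(volume_rows, key=lambda r: r["io_rate"], reverse=True)[:10]

# Attach each row's cluster cache holding time, as the VPHCLCAC lookup does.
report = [dict(r, hold_time=hold_time[r["cluster"]]) for r in top_ten]
print(report[0])
```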
Script source
Filename: TSE21W2KEXT
@echo off
rem Usage: TSE21W2KEXT filename
rem RPTNAME: The Report Name as defined in command execution
rem (ex: filename = JUNE12345.perf)
rem
Filename: TSE21W2KPERF
rem Initialize
echo Initializing
:db2
Script source
Example A-2 Asset and performance volume statistics data collection script
@echo off
rem Initialize
echo Initializing
:db2
AIX platform
AIX script instructions
Filename: 21AIX Instructions
1. Login to the Expert 2.1 AIX host as root.
/* TABLE VPCRK: */
/* P_TASK Sequence number of the performance collection task*/
/* PC_INDEX Unique identifier for the sample stats gathered for one
interval */
/* M_MACH_SN Serial number of the storage server*/
/* M_CLUSTER_N Cluster number for this logical volume*/
/* M_LSS_LA An ESS internally generated logical subsystem
identifier */
/* M_ARRAY_ID An ESS internally generated logical array identifier
*/
/* M_DDM_NUM Number of DDMs in this logical array*/
/* PC_DATE_B Start date of this sample period*/
/* PC_TIME_B Start time of this sample period*/
/* */
/* PC_DATE_E End date of this sample period*/
/* PC_TIME_E End time of this sample period*/
/* PC_IO_WRITE Number of subsystem write requests issued to this
logical array */
/* PC_RT_READ Total time (ms) to satisfy all read requests to this
array */
/* PC_RT_WRITE Total time (ms) to satisfy all write requests issued
to this array */
/* PC_IOR_AVG Avg subsystem I/O rate for all requests issued to this
array */
/* Calculations */
%global key;
%global total_cache;
%global interval_lines;
%global lastline;
%global clus1_ranks;
%global clus2_ranks;
/* Gets the start time and serial number from the user */
%MACRO THE_GOODS3;
%window Query3 color=CYAN group=progInputs
#13 @28 'Enter the start time of the first interval of the
task.(tt:tt:tt)' attr=highlight
color=blue
#15 @28 t_start 8 attr=underline autoskip=yes //
#17 @28 'Enter the start time of the last interval of the
task.(tt:tt:tt)' attr=highlight
color=blue
#19 @28 t_stop 8 attr=underline autoskip=yes //
#21 @28 'Enter the serial number of the shark ' attr=highlight
color=blue
#23 @28 serial 9 attr=underline;
/**************************************************************************/
/* */
/* Generates the SAS table copies of the information in the DB2 database
*/
/* based on SQL queries comprised by the user's input. */
/* */
/**************************************************************************/
PROC SQL;
/* Calling macro GREETING */
%GREETING;
%THE_GOODS2;
%let Qstart_date=%nrbquote(')&start_date%nrbquote(');
%let sPC_DATE_B=&Qstart_date;
/* List all tasks for the day in question */
select * from connection to odbc (
select distinct p_task, m_mach_sn, pc_time_b from vpcch
where pc_date_b=&sPC_DATE_B);
/**************************************************************/
/* These tables are created to retrieve configuration info. */
/* These tables are only loaded when an ASSET data collection */
/* has been run. */
/**************************************************************/
/*******************************************************************/
/* Gets the total cache size for any number of clusters per ESS */
/*********************************************************************/
/* Get the number of lines in the sample data per interval */
/*********************************************************************/
PROC SQL noprint;
select count(P_TASK) into :interval_lines from Work.subset;
select count(P_TASK) into :lastline from Work.sas_VPCCH;
QUIT;
/****************************************/
/* Sorts the subset table by m_array_id */
/****************************************/
PROC SORT data = Work.subset;
by m_array_id;
/******************************************/
/* Calculates the number of LUNS per rank */
/******************************************/
PROC MEANS noprint DATA = Work.subset;
by m_array_id;
output out = Work.ranks N=NLUN;
QUIT;
DATA Work.SharkInfo;
merge Work.SharkInfo Work.sas_VMCAP;
by I_VSM_IDX;
DATA Work.SharkInfo;
merge Work.SharkInfo Work.sas_VMASI;
by I_VSM_IDX;
/* This IO_RATE is the total I/O rate. Store it for graphing later. */
if printheading =(D_interval-1) then do;
IO_RATE = Xiorate + (Xpc_n_io_r + Xpc_n_io_w + Xpc_s_io_r +
Xpc_s_io_w)/PC_INT_SECS;
end;
/******************************/
/* Separate the two clusters */
/* For cluster level plotting */
/*****************************/
PROC SQL;
create table ClusterOne as
select * from Summary
where M_CLUSTER_N = 1;
DATA ClusterTwo;
SET Work.ClusterTwo;
KEEP wrap iorate pc_time_b m_array_id;
/************************************/
/* Prepare the rank listing dataset */
/* This is done to produce the rank */
/* level plot of the iorate */
/************************************/
DATA RankLevel;
MERGE Work.Ranks Work.Summary;
by m_array_id;
KEEP m_array_id pc_time_b iorate;
/************************************/
/* Generate the iorate plots */
/* and create the html page exports */
/************************************/
goptions reset=all;
goptions device=gif;
/* Plot Rank Level I/O Rate Plot */
footnote1 justify=right 'RL_IO ';
PROC GPLOT data=Work.Ranklevel;
symbol i=line;
/* plot the iorate vs. time for each rank */
plot iorate * pc_time_b = m_array_id / grid caxis=black des='RL_IO-1'
name='rank';
title height=1 'Rank Level Plot of IO Rate Vs. Time';
run;
QUIT;
/* Close the HTML destinations. */
ods html close;
/* Open the listing destination. */
ods listing;
goptions reset=all;
FILENAME odsout clear;
QUIT;
D_PASSWORD_EXPIR Expiration date for password. The password is not valid starting on this
date. (Time portion of timestamp is ignored.) [TIMESTAMP, NOT NULL]
D_LAST_UPDATE Date and time of last update for this user [TIMESTAMP, NOT NULL]
I_LAST_SCHD_SEQ Last task sequence number used for this task [INTEGER]
D_SCHD_TASK_EXPIR Date that the task expires. The time portion of the timestamp is ignored. The
task will not run on the expiration date. [TIMESTAMP, NOT NULL]
F_PRIVATE Indicates that this task and its output are private, Y=yes, N=no [CHAR(1),
NOT NULL]
F_TRACE Indicates that trace should be turned on when this task is run, Y=yes, N=no
[CHAR(1), NOT NULL]
I_LAST_SCHD_SEQ last task sequence number used for this task [INTEGER]
D_SCHD_TASK_EXPIR date that task expires. time portion of timestamp is ignored. Task will not
run on expiration date. [TIMESTAMP, NOT NULL]
F_PRIVATE indicates that this task and its output are private, Y=yes, N=no [CHAR(1),
NOT NULL]
F_TRACE indicates that trace should be turned on when this task is run, Y=yes, N=no
[CHAR(1), NOT NULL]
F_PRIVATE indicates that this task and its output are private, Y=yes, N=no [CHAR(1),
NOT NULL]
F_TRACE indicates that trace should be turned on when this task is run,
Y=yes,N=no [CHAR(1), NOT NULL]
I_IP_ADDR Full IP address in dotted decimal notation or pattern with ranges and *, e.g.
9.113.42.250 [CHAR(30), NOT NULL]
C_SERVICE_TYPE Identifier of service, for example, AS, KS, and so on [CHAR(4), NOT NULL]
C_SERVICE_TYPE Identifier of the discovered service type, for example, KS, AS, and so on
[CHAR(4),NOT NULL]
I_SCHH_TASK_SEQ Sequence number of task that discovered this service [INTEGER, NOT
NULL]
N_PORT Port number where the service was discovered [INTEGER, NOT NULL]
I_SCHH_TASK_SEQ Sequence number of task that discovered this service [INTEGER, NOT
NULL]
N_LAST_USED_INDEX Last number used for this index. Assumed to be zero if the row is absent.
[INTEGER, NOT NULL]
D_ISSUED date and time message was issued [TIMESTAMP, NOT NULL]
I_COMPONENT component that issued the message, for example, CORE, VSX, RPTR
[CHAR(4),NOT NULL]
I_SCHH_TASK_SEQ sequence number of scheduled task that was running to produce record,
null if not produced while a scheduled task was running [INTEGER]
D_ISSUED date and time message was issued [TIMESTAMP, NOT NULL]
C_SERVICE_TYPE type of service, for example, AS, KS, and so on [CHAR(4), NOT NULL]
C_SERV_TYPE_STATUS status indicator for service, active=AC, inactive=IN [CHAR(2), NOT NULL]
N_FOUND number of services of the given type in the given status discovered during
the indicated task [INTEGER, NOT NULL]
C_SERVICE_TYPE identifier of service, for example, AS, KS, and so on [CHAR(4), NOT NULL]
I_META_TAG value in the IBM product meta tag which will indicate that a discovered
service is of this type [VARCHAR(128), NOT NULL]
C_SERVICE_TYPE identifier of service, for example, AS, KS, and so on [CHAR(4), NOT NULL]
N_PORT port number this service might be listening on [INTEGER, NOT NULL]
X_DISPLAY_NAME null
N_SNMP_PORT port number the manager runs on, default port is 162 [INTEGER, NOT
NULL]
X_PRM_VALUE the data value associated with the key [VARCHAR(1024), NOT NULL]
TOALERT "Y" for send an alert, "N" for not to alert [CHAR(1)]
TOALERT "C" for critical, "Y" for both warning and critical, "N" for not to alert
[CHAR(1)]
D_TASK_DATE The date this asset/capacity task ran. [DATE, NOT NULL]
T_TASK_TIME The time this asset/capacity task started to run. [TIME, NOT NULL]
I_VSM_IDX An index for each unique storage server, generated when the storage
server is first discovered by StorWatch. This index is used in many tables
related to asset/capacity data. [INTEGER, NOT NULL]
I_VSM_SN The serial number of the storage server. This field is filled in when the
storage server is first discovered by StorWatch. [CHAR(16), NOT NULL]
I_VSM_TYPE The higher level identifier for the storage server product, for example, 2105.
This field is filled in when the storage server is first discovered by
StorWatch. [CHAR(16)]
I_VSM_MODEL_NO The model number for the storage server, for example E10. This field is
filled in when the storage server is first discovered by StorWatch.
[CHAR(10)]
I_SHORT_NAME An alias name provided by an authorized end user for this storage server.
This field is empty until a user provides a name through the Web user
interface. [CHAR(16)]
I_VSM_MANFR_DATE The date of manufacture for this storage server. This field is filled in when
the ESS Expert first successfully collects asset and capacity data from the
storage server. [CHAR(32)]
I_TASK_SEQ_FIRST A numeric identifier corresponding to the date and time when the ESS
Expert first successfully collects asset and capacity data from the storage
server. [INTEGER]
D_TASK_DATE_FIRST The date when the ESS Expert first successfully collected asset and
capacity data from the storage server. [DATE]
T_TASK_TIME_FIRST The time of day when the ESS Expert first successfully collected asset and
capacity data from the storage server. [TIME]
I_TASK_SEQ_LATEST A numeric identifier corresponding to the date and time when the ESS
Expert most recently collected asset and capacity data from the storage
server. [INTEGER]
D_TASK_DATE_LATEST The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME_LATEST The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
I_DU_THRESHOLD The percentage (0-100) above which a disk utilization value is reported as
an exception. If this field is NULL, a default value is used by data
preparation. [SMALLINT]
I_AVH_THRESHOLD The integral value for average holding time threshold. A holding time below
this threshold is displayed as an exception. If this field is NULL, a default
value is used by data preparation. [SMALLINT]
I_VSM_FC_WWNN The World-Wide Node-Name for this storage server, where this storage
server is attached to a Fibre Channel fabric. Otherwise this field is blank.
[CHAR(16), NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of the storage
server. [INTEGER, NOT NULL]
Q_EXPN_RACKS Total number of expansion racks for this storage server. [INTEGER]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE, NOT NULL]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME, NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of the storage
server racks. [INTEGER, NOT NULL]
I_VSM_RACK_SN Serial number for this storage server rack. [CHAR(16), NOT NULL]
I_VSM_RACK_ID The type and model number for this storage server rack. [CHAR(16)]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE, NOT NULL]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME, NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of the
clusters. [INTEGER, NOT NULL]
I_CLU_PORT_NO The value for the port number assigned to the ESS Specialist installed on
the cluster controller. [CHAR(5)]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME]
I_CLU_USERID The userid assigned to the ESS Specialist installed on the cluster
controller. [CHAR(20)]
I_CLU_PASSWORD The password for the userid assigned to the ESS Specialist installed on
the cluster controller. (encrypted) [VARCHAR(254)]
I_CLU_USERID64 The userid assigned to the ESS Specialist installed on the cluster
controller. [CHAR(64)]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of the storage
server. [INTEGER, NOT NULL]
I_CLU_NO The cluster number (identifier for the cluster) [INTEGER, NOT NULL]
I_CLU_LIC_SRC A numeric identifier that represents a licensed internal code level such as
"active", "previous", "next", "cdrom", "diskette" and "unknown". [INTEGER,
NOT NULL]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE, NOT NULL]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME, NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_IDX Sequence number of the asset/capacity collection task that read and
summarized this data from the storage server. [INTEGER, NOT NULL]
Q_VSM_CUIS Number of logical control units defined in this storage server [INTEGER]
Q_VSM_UNDEFND_GB Amount of storage in this storage server that has not been defined as
either RAID or independent disks, in gigabytes [INTEGER]
Q_VSM_FB_ASSIGNED Amount of fixed block storage that is currently assigned (connected to)
hosts, in gigabytes [INTEGER]
Q_VSM_FB_PENDING Amount of fixed block storage that is defined as volumes, but not attached
to any host, in gigabytes [INTEGER]
Q_VSM_FB_FREE Amount of fixed block storage that is available for fixed block volume
definition, in gigabytes [INTEGER]
Q_VSM_FB_RAID Amount of fixed block storage that is defined as RAID storage, in gigabytes
[INTEGER]
Q_VSM_CKD_ASSIGNED Amount of logical control unit storage that has S/390 volumes assigned, in
gigabytes [INTEGER]
Q_VSM_CKD_FREE Amount of logical control unit storage that is available for S/390 volume
definition, in gigabytes [INTEGER]
Q_VSM_CKD_RAID Amount of logical control unit storage that is defined as RAID storage, in
gigabytes [INTEGER]
Q_VSM_RAID_GRPS Number of disk groups in this storage server defined as RAID storage
[INTEGER]
Q_VSM_NONRAID_GRPS Number of disk groups in this storage server defined as independent disk
groups [INTEGER]
Q_VSM_FREE_RANKS Number of disk groups in this storage server that have not yet been defined
as either RAID or independent disk type [INTEGER]
Q_VSM_DDMS Total number of DDMs that are installed in this storage server [INTEGER]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
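The fixed block capacity fields above split the defined storage into assigned, pending, and free space. As a minimal Python sketch for a custom report, a utilization percentage can be derived from them — treating assigned + pending + free as the defined fixed block total is an assumption, not something the table states:

```python
def fb_utilization_pct(row):
    """Percent of defined fixed block storage currently assigned to hosts.
    Field names follow the table above; treating assigned + pending + free
    as the defined total is an assumption, not stated by the table."""
    total = (row["Q_VSM_FB_ASSIGNED"]
             + row["Q_VSM_FB_PENDING"]
             + row["Q_VSM_FB_FREE"])
    return (100 * row["Q_VSM_FB_ASSIGNED"]) // total if total else 0

# For example, 300 GB assigned, 100 GB pending, 100 GB free -> 60
```

The integer truncation mirrors the INTEGER column types; a report that needs fractional percentages would use true division instead.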
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_DDM_TYPE An internally generated numeric value that uniquely identifies each type of
DDM based on capacity and revolutions per minute (RPMs) [SMALLINT,
NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected a change in the number of DDMs of a certain
type [INTEGER, NOT NULL]
I_DDM_RPM The number of revolutions per minute (RPMs) for this type of DDM
[SMALLINT]
Q_DDM_COUNT Total number of DDMs of this type (i.e., a fixed capacity size and RPM
speed), valid for the dates and times in the fields below [INTEGER]
D_TASK_DATE_FIRST The date the number (count) of DDMs of this type changed [DATE]
T_TASK_TIME_FIRST The time the number (count) of DDMs of this type changed [TIME]
I_TASK_SEQ_LATEST The task sequence index associated with the date and time of this change
[INTEGER]
D_TASK_DATE_LATEST The date the number (count) of DDMs of this type was most recently
compared to the current count and found to be the same. [DATE]
T_TASK_TIME_LATEST The time the number (count) of DDMs of this type was most recently
compared to the current count and found to be the same. [TIME]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CLU_NO The cluster number (identifier for the cluster) [INTEGER, NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of a cluster.
[INTEGER, NOT NULL]
Q_CLU_RAM Total amount of installed memory (RAM) for a cluster, as of the date and
time in this row, in megabytes [INTEGER]
Q_CLU_NVS Total amount of installed Non Volatile Storage (NVS) for a cluster, as of the
date and time in this row, in megabytes. [INTEGER]
Q_CLU_PS Total amount of installed PowerStore (PS) memory for a cluster, as of the
date and time in this row, in megabytes. [INTEGER]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_IDX Sequence number of the asset/capacity collection task that read and
summarized this data from the storage server. [INTEGER, NOT NULL]
Q_HOST_ASSIGN_CAP Amount of fixed block capacity assigned to this host in the given storage
server, in gigabytes [INTEGER]
Q_HOST_ASSIGN_SHR Amount of fixed block capacity assigned to this host in the given storage
server that is also assigned to at least one other host, in gigabytes
[INTEGER]
Q_HOST_VOLS Total number of volumes connected to this host within this storage server.
[INTEGER]
Q_HOST_VOLS_SHR Among the volumes connected to this host, this is the number of these
volumes connected to more than one SCSI-attached host. [INTEGER,
NOT NULL DEFAULT 0]
Q_HOST_VOLS_DAISY Total number of volumes where this host and another host are connected
to the volume on the same port, with different initiators. [SMALLINT]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
Q_HOST_FC_VOL_SHR Among the volumes connected to this host, this is the number of these
volumes connected to more than one FC-attached host. [INTEGER, NOT
NULL DEFAULT 0]
I_HOST_NAME The name of the host, as defined to the ESS Specialist [CHAR(254), NOT
NULL]
I_HOST_HW_TYPE Internally defined numeric indicator for the type of operating system of the
host [SMALLINT]
I_HOST_IP The IP address of the host (if available), otherwise zero [CHAR(254)]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of a host.
[INTEGER, NOT NULL]
D_TASK_DATE The date when a change in the host attributes was first detected by the
ESS Expert. [DATE]
T_TASK_TIME The time when a change in the host attributes was first detected by the
ESS Expert. [TIME]
I_HOST_ATTACH Flag for an attached host connected to one or more storage servers, 2 if FC
attached, 1 if SCSI attached. [INTEGER, NOT NULL DEFAULT 1]
I_HOST_FC_WWPN The World-Wide Port-Name for this host on the Fibre Channel fabric.
[CHAR(16), NOT NULL DEFAULT]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_TASK_SEQ_IDX Sequence number of the asset/capacity collection task that read and
summarized this data from the storage server. [INTEGER, NOT NULL]
I_VOL_IDX An internally generated identifier (index) for a fixed block, logical volume
assigned to at least one Open System host. See VVOLX for volume data
associated with this index. [INTEGER, NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_VOL_IDX An internally generated identifier (index) for a fixed block, logical volume
assigned to at least one Open System host. [INTEGER, NOT NULL]
I_TASK_SEQ_FIRST The task sequence index corresponding to the date and time when the
ESS Expert first detected this change in the attributes of a fixed block
volume. [INTEGER, NOT NULL]
I_VOL_SN Serial number of the fixed block volume (LUN serial number) [CHAR(16),
NOT NULL]
I_VOL_TYPE_ID Type of the fixed block volume (as defined by the ESS Specialist)
[CHAR(16)]
I_VOL_SLOT_NUM Card number of adapter associated with this fixed block volume
[SMALLINT]
I_VOL_SSALOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this fixed block volume [CHAR(1)]
I_VOL_DISK_GROUP Identifying number of the disk group containing this fixed block volume
[SMALLINT]
I_VOL_NUM Identifying number of this fixed block volume (and lowest level identifier of
the volume) [SMALLINT]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CUI_IMAGE_NUM The unique identifying number for this logical control unit in the storage
server [INTEGER, NOT NULL]
I_TASK_SEQ_IDX Sequence number of the asset/capacity collection task that read and
summarized this data from the storage server. [INTEGER, NOT NULL]
I_CUI_EMULATION The emulation type of this logical control unit (e.g., unique integer
associated with 3990-6, 3990-3, 3990-3 TPF) [SMALLINT]
Q_CUI_ASSIGNED_CYL Assigned capacity (space assigned to S/390 volumes) in the logical control
unit, in cylinders [INTEGER]
Q_CUI_ASSIGNED_GB Assigned capacity (space assigned to S/390 volumes) in the logical control
unit, in gigabytes [INTEGER]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CUI_IMAGE_NUM The unique identifying number for this logical control unit in the storage
server [INTEGER, NOT NULL]
I_CUI_VOL_TYPE One type of S/390 volume allocated in the storage server (for example,
3390-2 or 3390-3) [CHAR(8), NOT NULL]
I_TASK_SEQ_IDX Sequence number of the asset/capacity collection task that read and
summarized this data from the storage server. [INTEGER, NOT NULL]
Q_CUI_TOT_VOLS Number of S/390 volumes of this type in this logical control unit [INTEGER]
Q_CUI_TOT_CYLS Total cylinders for volumes of this type in the logical control unit [INTEGER]
Q_CUI_RAID_CYLS Total RAID cylinders for volumes of this type in the logical control unit
[INTEGER]
Q_CUI_NONRAID_CYLS Total non-RAID cylinders for volumes of this type in the logical control unit
[INTEGER]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_SHORT_NAME An alias name provided by an authorized end user for this storage server
(optional) [CHAR(16)]
I_VSM_SN The serial number of the storage server [CHAR(16), NOT NULL]
I_VSM_TYPE The higher level identifier for the storage server product, for example 2105
for the IBM Enterprise Storage Server 2105 [CHAR(16)]
Q_VSM_UNDEFND_GB Amount of storage in this storage server that has not been defined as either
RAID or independent disks, in gigabytes [INTEGER]
Q_VSM_CUIS Number of logical control units defined in this storage server [INTEGER]
Q_VSM_FB_ASSIGNED Amount of fixed block storage that is currently assigned (connected to)
hosts, in gigabytes [INTEGER]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
Q_S390_VOLS Total number of S/390 volumes (LCU devices) defined in the storage
server. [INTEGER NOT NULL DEFAULT 0]
Q_OS_VOLS Total number of open system volumes in the storage server (each volume
must be assigned to a host). [INTEGER NOT NULL DEFAULT 0]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_SHORT_NAME An alias name provided by an authorized end user for this storage server
(optional) [CHAR(16)]
I_VSM_SN The serial number of the storage server [CHAR(16), NOT NULL]
I_VSM_TYPE The higher level identifier for the storage server product, for example 2105
[CHAR(16)]
Q_VSM_SCSI_ADAPT Total number of SCSI adapters installed in this storage server [INTEGER]
Q_VSM_SSA_ADAPT Total number of SSA adapters installed in this storage server [INTEGER]
Q_VSM_RAID_GRPS Number of disk groups in this storage server defined as RAID storage
[INTEGER]
Q_VSM_NONRAID_GRPS Number of disk groups in this storage server defined as independent disk
groups [INTEGER]
Q_VSM_FREE_RANKS Number of disk groups in this storage server that have not yet been defined
as either RAID or independent disk type [INTEGER]
Q_VSM_DDMS Total number of DDMs that are installed in this storage server [INTEGER]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_DDM_GB_CAPACITY Capacity for this type of DDM, in gigabytes [CHAR(8), NOT NULL]
I_DDM_RPM Number of revolutions per minute (RPMs) for this type of DDM [SMALLINT,
NOT NULL]
Q_DDM_COUNT Total number of DDMs of this type (i.e., a fixed capacity size and RPM
speed) [INTEGER]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CLU_NO The cluster number (identifier for this cluster) [INTEGER, NOT NULL]
Q_CLU_RAM Total amount of installed memory (RAM) for this cluster, in megabytes
[INTEGER]
Q_CLU_NVS Total amount of installed Non Volatile Storage (NVS) for this cluster, in
megabytes [INTEGER]
Q_CLU_PS Total amount of installed PowerStore (PS) memory for this cluster, in
megabytes [INTEGER]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CUI_IMAGE_NUM The unique identifying number for this logical control unit in the storage
server [INTEGER, NOT NULL]
I_CUI_EMULATION The emulation type of this logical control unit (e.g., unique integer
associated with 3990-6, 3990-3, 3990-3 TPF) [SMALLINT]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CUI_IMAGE_NUM The unique identifying number for this logical control unit in the storage
server [INTEGER, NOT NULL]
I_CUI_VOL_TYPE One type of S/390 volume allocated in the storage server (for example,
3390-2 or 3390-3) [CHAR(8), NOT NULL]
Q_CUI_TOT_VOLS Number of S/390 volumes of this type in this logical control unit
[INTEGER]
Q_CUI_RAID_CYLS Total RAID cylinders for volumes of this type in the logical control unit
[INTEGER]
Q_CUI_NONRAID_CYLS Total non-RAID cylinders for volumes of this type in the logical control unit
[INTEGER]
Q_CUI_TOT_CYLS Total cylinders for volumes of this type in the logical control unit
[INTEGER]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CUI_IMAGE_NUM The unique identifying number for this logical control unit in the storage
server [INTEGER, NOT NULL]
I_VOL_BASE_ADDR Base device address value for this S/390 volume (value is 0-255)
[CHAR(4)]
Q_PAV_ADDR_NUM Number of PAV addresses for this S/390 volume (currently not used)
[SMALLINT]
I_VOL_TYPE The type of this volume (e.g., 3390-2, 3390-3, 3390-9) [CHAR(6)]
I_VOL_FORMAT The format for this volume (for example, 3380 or 3390 track format)
[CHAR(6)]
I_VOL_SLOT_NUM Card number of adapter associated with this S/390 volume [SMALLINT]
I_VOL_SSALOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this CKD volume [CHAR(1)]
I_VOL_DISK_GRP Identifying number of the disk group containing this S/390 volume
[SMALLINT]
I_VOL_NUM Identifying number of this S/390 volume within the disk group [SMALLINT,
NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_HOST_NAME The name of the host, as defined to the ESS Specialist [CHAR(254)]
I_HOST_HW_TYPE Internally defined numeric indicator for the type of operating system of the
host [SMALLINT]
I_HOST_IP The IP address of the host (if available), otherwise zero [CHAR(254)]
Q_HOST_ASSIGN_CAP Amount of fixed block capacity assigned to this host in the given storage
server, in gigabytes [INTEGER]
Q_HOST_ASSIGN_SHR Amount of fixed block capacity assigned to this host in the given storage
server that is also assigned to at least one other host, in gigabytes
[INTEGER]
Q_HOST_VOLS Total number of volumes connected to this host within this storage server.
[INTEGER]
Q_HOST_VOLS_SHR Among the volumes connected to this host, this is the number of these
volumes connected to more than one SCSI-attached host. [INTEGER,
NOT NULL DEFAULT 0]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME]
I_HOST_ATTACH Flag for an attached host connected to one or more storage servers, 2 if FC
attached, 1 if SCSI attached. [INTEGER, NOT NULL DEFAULT 1]
I_HOST_FC_WWPN The World-Wide Port-Name for this host on the Fibre Channel fabric.
[CHAR(16), NOT NULL DEFAULT]
Q_HOST_FC_VOL_SHR Among the volumes connected to this host, this is the number of these
volumes connected to more than one FC-attached host. [INTEGER, NOT
NULL DEFAULT 0]
Q_HOST_MIXD_VOLSHR Among the volumes connected to this host, this is the number of these
volumes connected to more than one host (either FC or SCSI attached).
[INTEGER, NOT NULL DEFAULT 0]
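The I_HOST_ATTACH flag and the capacity fields above are enough to label each host row in a report. A small, purely illustrative Python sketch — the function name and the output format are not part of the schema:

```python
HOST_ATTACH = {1: "SCSI", 2: "FC"}  # values of the I_HOST_ATTACH flag above

def describe_host(row):
    """One-line summary of a host row; field names follow the table above."""
    attach = HOST_ATTACH.get(row["I_HOST_ATTACH"], "unknown")
    return (f'{row["I_HOST_NAME"]}: {attach}-attached, '
            f'{row["Q_HOST_ASSIGN_CAP"]} GB assigned')

# describe_host({"I_HOST_NAME": "payroll1", "I_HOST_ATTACH": 2,
#                "Q_HOST_ASSIGN_CAP": 120})
# -> "payroll1: FC-attached, 120 GB assigned"
```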
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_VOL_IDX An internally generated identifier (index) for a fixed block, logical volume
assigned to at least one Open System host. See VVOLX for volume data
associated with this index. [INTEGER, NOT NULL]
I_VOL_SN Serial number of the fixed block volume (LUN serial number) [CHAR(16)]
I_VOL_TYPE_ID Type of the fixed block volume (as defined by the ESS Specialist). For
example, open systems and AS/400 are volume types. [CHAR(16)]
I_VOL_SLOT_NUM Card number of adapter associated with this fixed block volume
[SMALLINT]
I_VOL_SSALOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this fixed block volume [CHAR(1)]
I_VOL_DISK_GROUP Identifying number of the disk group containing this fixed block volume
[SMALLINT]
I_VOL_NUM Identifying number of this fixed block volume (and lowest level identifier of
the volume) [SMALLINT]
I_VOL_DAISY_HOST For this host/fixed block volume connection, value is 1 if this host shares a
port with another host to connect to this volume, value is 0 otherwise
[SMALLINT]
Q_VOL_HOST_PORTS For this host/fixed block volume connection, the number of ports used by
this host to connect to this volume [SMALLINT]
I_HOST_NAME The name of the host, as defined to the ESS Specialist [CHAR(254)]
I_HOST_HW_TYPE Internally defined numeric indicator for the type of operating system of the
host [SMALLINT]
I_HOST_ATTACH Flag for an attached host connected to one or more storage servers, 2 if FC
attached, 1 if SCSI attached. [INTEGER, NOT NULL DEFAULT 1]
I_HOST_FC_WWPN The World-Wide Port-Name for this host on the Fibre Channel fabric.
[CHAR(16), NOT NULL DEFAULT]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_VSM_TYPE The higher level identifier for the storage server product, for example 2105.
[CHAR(16), NOT NULL]
I_VSM_MODEL_NO The model number for the storage server, for example E20. [CHAR(10),
NOT NULL]
I_VSM_SHORT_NAME An alias name provided by an authorized end user for this storage server.
[CHAR(16), NOT NULL]
I_VSM_SN The serial number of the storage server. [CHAR(16), NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CLU_NO The cluster number (identifier for the cluster) [INTEGER, NOT NULL]
I_CLU_LIC_VRSN Version, release and modification level of the current active licensed
internal code. [CHAR(16)]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_VSM_TYPE The higher level identifier for the storage server product, for example 2105.
[CHAR(16), NOT NULL]
I_VSM_MODEL_NO The model number for the storage server, for example E20. [CHAR(10),
NOT NULL]
I_VSM_SHORT_NAME An alias name provided by an authorized end user for this storage server.
[CHAR(16), NOT NULL]
I_VSM_SN The serial number of the storage server. [CHAR(16), NOT NULL]
I_VSM_NBR_CLUS Total number of cluster controllers in this storage server. [INTEGER, NOT
NULL]
I_VSM_NBR_EXPN Total number of expansion racks for this storage server. [INTEGER, NOT
NULL]
I_VSM_MANFR_DATE The date of manufacture for this storage server. [CHAR(32), NOT NULL]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE, NOT NULL]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME, NOT NULL]
I_VSM_FC_WWNN The World-Wide Node-Name for this storage server, where this storage
server is attached to a Fibre Channel fabric. Otherwise this field is blank.
[CHAR(16), NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_VSM_RACK_MODEL The type and model number for this storage server rack. [CHAR(16), NOT
NULL]
I_VSM_RACK_SN Serial number for this storage server rack. [CHAR(16), NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CLU_NO The cluster number (identifier for the cluster) [INTEGER, NOT NULL]
I_CLU_LIC_SRC A numeric identifier that represents a licensed internal code level such as
"active", "previous", "next", "cdrom", "diskette" and "unknown". [INTEGER,
NOT NULL]
I_CLU_LIC_VRSN Version, release and modification level of this level of licensed internal
code. [CHAR(16)]
I_CLU_LIC_ACTVDAT Activation date for the associated level of licensed internal code.
[CHAR(32)]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from the storage server. [DATE, NOT NULL]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from the storage server. [TIME, NOT NULL]
I_VSM_TYPE The higher level identifier for the type of storage server product, for
example 2105. [CHAR(16), NOT NULL]
I_VSM_TYPE_CNT The number of storage servers of this type known to this ESS Expert.
[INTEGER, NOT NULL]
D_TASK_DATE The date when the ESS Expert most recently collected asset and capacity
data from any storage server. [DATE, NOT NULL]
T_TASK_TIME The time when the ESS Expert most recently collected asset and capacity
data from any storage server. [TIME, NOT NULL]
P_TASK Sequence number of the performance collection task that read this data
from the storage server. [INTEGER, NOT NULL]
M_MACH_TY The higher level identifier for the storage server product, for example 2105
for the IBM Enterprise Storage Server 2105. [CHAR(4)]
M_MODEL_N The model number for the storage server, for example E20. [CHAR(3)]
P_CDATE The date of this snapshot of the configuration for this storage server,
collected by the performance collector [DATE]
P_TASK Sequence number of the performance collection task that read this data
from the storage server. [INTEGER, NOT NULL]
M_CLUSTER_N Cluster number for this logical array [SMALLINT, NOT NULL]
M_CARD_NUM Card number of adapter associated with this logical array [SMALLINT,
NOT NULL]
M_ARRAY_ID An ESS internally generated logical array identifier [CHAR(8), NOT NULL]
M_LOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this logical array [CHAR(1)]
M_GRP_NUM Identifying number of the disk group containing this logical array
[SMALLINT]
M_DISK_NUM Disk number within the disk group (and final identifier of the logical array)
if an independent disk, 0 otherwise [SMALLINT]
M_DBL_WIDE Attribute of an array: S if single wide strip size (32K), D if double wide strip
size (64K) [CHAR(1), NOT NULL DEFAULT 'S']
P_TASK Sequence number of the performance collection task that read this data
from the storage server. [INTEGER, NOT NULL]
M_VOL_NUM Identifying number of this logical volume (and lowest level identifier of the
logical volume) [INTEGER, NOT NULL]
M_VOL_ADDR LUN serial number if the logical volume is an open systems (fixed block)
volume, SSID + Base device address if an S/390 volume [CHAR(8)]
P_TASK Sequence number of the performance collection task that read this data
from the storage server. [INTEGER, NOT NULL]
M_CLUSTER_N Cluster number for this logical array [SMALLINT, NOT NULL]
M_ARRAY_ID An ESS internally generated logical array identifier [CHAR(8), NOT NULL]
M_DDM_NUM Number of disk drive modules (DDMs) in this logical array [SMALLINT]
PC_DATE_B Date this sample time period began (i.e., performance counters were
collected) [DATE, NOT NULL]
PC_TIME_B The time of day this sample time period began (i.e., performance counters
were collected) [TIME, NOT NULL]
PC_DATE_E Date this sample time period ended (i.e., performance counters were
collected again) [DATE]
PC_TIME_E The time of day this sample time period ended (i.e., performance counters
were collected again) [TIME]
PC_IO_WRITE Number of subsystem write requests issued to this logical array in this time
period [INTEGER]
PC_IO_READ Number of subsystem read requests issued to this logical array in this time
period [INTEGER]
PC_RT_READ Total time, in milliseconds, to satisfy all read requests issued to this logical
array in this time period [INTEGER]
PC_RT_WRITE Total time, in milliseconds, to satisfy all write requests issued to this logical
array in this time period [INTEGER]
PC_IOR_AVG Average subsystem I/O rate for all requests issued to this logical array in
this time period (total requests/interval seconds) [INTEGER]
PC_MSR_AVG Average millisecond time to satisfy all subsystem I/O requests issued to
this logical array in this time period. (total millisecond time/total requsts)
[INTEGER]
PC_RBT_AVG Number of bytes read from this logical array / Number of seconds in this
time period [INTEGER]
PC_WBT_AVG Number of bytes written to this logical array / Number of seconds in this
time period [INTEGER]
Q_SAMP_DEV_UTIL Percent (0 - 100) of time this array is busy, for this time period, or negative
value if not available. Default: -1. [SMALLINT, NOT NULL DEFAULT -1]
PC_B_HR_PRCT Percent (0 - 100) of this sample's time period which is in the hour of the
start time. Default: 100. [SMALLINT, NOT NULL DEFAULT 100]
P_COMM Zero if normal, a negative value if the location of this logical array cannot
be identified using the VPCFG table contents [SMALLINT]
Q_IO_TOTAL Total I/O read and write requests issued to the volumes in this array in this
time period [DOUBLE NOT NULL DEFAULT -1]
Q_IO_SEQ Total sequential read and write requests issued to the volumes in this array
in this time period [DOUBLE NOT NULL DEFAULT -1]
Q_CL_AVG_HOLD_TIME Cluster-level average cache holding time for this time period, for the cluster
with affinity to this array [INTEGER NOT NULL DEFAULT -1]
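The averaged fields in this table are derived from the raw counters and the length of the sample period. A minimal Python sketch of the two request-based averages, using the field names above — the integer truncation is an assumption, since the table only declares the results as INTEGER:

```python
def derive_array_averages(row, interval_seconds):
    """Recompute the derived PC_*_AVG fields for one sample period from the
    raw counters (field names follow the table above). Integer truncation
    is an assumption; the table only declares the results as INTEGER."""
    total_io = row["PC_IO_READ"] + row["PC_IO_WRITE"]
    total_ms = row["PC_RT_READ"] + row["PC_RT_WRITE"]
    return {
        # average subsystem I/O rate: total requests / interval seconds
        "PC_IOR_AVG": total_io // interval_seconds if interval_seconds else 0,
        # average service time: total millisecond time / total requests
        "PC_MSR_AVG": total_ms // total_io if total_io else 0,
    }
```

For a 10-minute sample (600 seconds) with 1800 reads, 600 writes, and 9000 + 6000 ms of accumulated service time, this yields an I/O rate of 4 per second and an average service time of 6 ms.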
P_TASK Sequence number of the performance collection task that read this data
from the storage server. [INTEGER, NOT NULL]
M_CLUSTER_N Cluster number for this logical volume [SMALLINT, NOT NULL]
M_ARRAY_ID An ESS internally generated logical array identifier [CHAR(8), NOT NULL]
M_VOL_NUM Identifying number of this logical volume (and lowest level identifier of the
logical volume) [INTEGER, NOT NULL]
PC_DATE_B Date this sample time period began (i.e., performance counters were
collected) [DATE, NOT NULL]
PC_TIME_B The time of day this sample time period began (i.e., performance counters
were collected) [TIME, NOT NULL]
PC_DATE_E Date this sample time period ended (i.e., performance counters were
collected again) [DATE]
PC_TIME_E The time of day this sample time period ended (i.e., performance counters
were collected again) [TIME]
PC_N_CH_R Number of cache hits for normal (non-sequential) I/O read requests
("normal, read" command chains that were completed without requiring
access to any DASD). [INTEGER]
PC_N_CH_W Number of cache hits for normal (non-sequential) I/O write requests
("normal, write" command chains that were completed without requiring
access to any DASD). [INTEGER]
PC_S_CH_R Number of cache hits for sequential I/O read requests ("sequential mode,
read" command chains that were completed without requiring access to
any DASD). [INTEGER]
PC_S_CH_W Number of cache hits for sequential I/O write requests ("sequential mode,
write" command chains that were completed without requiring access to
any DASD). [INTEGER]
PC_D2C Number of disk to cache track transfers for non-sequential I/O requests
(number of tracks transferred successfully from DASD to cache excluding
sequential mode "next track" promotions). [INTEGER]
PC_SEQ_D2C Number of disk to cache track transfers for sequential I/O requests
(number of tracks transferred successfully from DASD to cache due to
sequential mode "next track" promotions) [INTEGER]
PC_C2D Number of cache to disk track transfers (number of tracks transferred from
cache to DASD asynchronous to transfers from the channel) [INTEGER]
PC_RHR_AVG Cache hit ratio for read I/Os (total number of cache hits for read requests
/ total number of read requests) [SMALLINT]
PC_WHR_AVG Cache hit ratio for write I/Os (total number of cache hits for write requests
/ total number of write requests) [SMALLINT]
PC_THR_AVG Overall cache hit ratio (total number of cache hits for all requests / total
number of requests) [SMALLINT]
PC_SHR_AVG Cache hit ratio for sequential I/Os (total number of cache hits for sequential
requests / total number of sequential requests) [SMALLINT]
PC_NHR_AVG Cache hit ratio for normal (non-sequential) I/Os (total number of cache hits
for non-sequential requests / total number of non-sequential requests)
[SMALLINT]
PC_RMR_IO Number of record mode read I/O requests (number of command chains
associated with a record access mode read operation, and the chain
contains no write commands) [INTEGER]
PC_RMR_CH Number of record mode read cache hits (number of record mode read
requests which were completed without requiring any access to DASD).
[INTEGER]
PC_RMRHR_AVG Cache hit ratio for record mode reads (number of record mode read cache
hits / number of record mode read requests) [SMALLINT]
PC_DFW_IO Number of DASD fast write I/O requests (same as normal write IO
requests). [INTEGER]
PC_DFW_DELAY Number of DASD fast write delayed requests (requests of this type
delayed due to NVS space constraints) [INTEGER]
PC_DFW_AVG (DASD fast write I/O requests / DASD fast writes delayed) * 100
[SMALLINT]
PC_B_HR_PRCT Percent (0 - 100) of this sample's time period which is in the hour of the
start time. Default: 100. [SMALLINT, NOT NULL DEFAULT 100]
P_COMM Zero if normal, a negative value if the location of this logical volume cannot
be identified using the VPCFG, VPVOL tables [SMALLINT]
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_MACH_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See
VPSNX)[INTEGER, NOT NULL]
I_CLUSTER_NO Cluster number for this logical volume [SMALLINT, NOT NULL]
I_CARD_NO Card number of adapter associated with this logical volume [SMALLINT,
NOT NULL]
I_LOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this logical volume [CHAR(1), NOT NULL]
I_DISK_GRP_NO Identifying number of the disk group containing this logical volume
[SMALLINT, NOT NULL]
I_VOL_NUM Identifying number of this logical volume within the disk group (and lowest
level identifier of the logical volume) [SMALLINT, NOT NULL]
I_VOL_ADDR LUN serial number if the logical volume is an open systems volume, SSID
+ Base device address if an S/390 volume [CHAR(8)]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_CACHE_HITS Total number of cache hits occurring in this hour for this logical volume
(command chains that were completed without requiring access to any
DASD) [DOUBLE]
Q_HR_TOT_IO_REQS Total number of I/O requests (command chains) occurring in this hour for
this logical volume [DOUBLE]
Q_HR_TOT_IO_R Total number of I/O read requests (command chains which contain at least
one search or read command but no write commands) occurring in this
hour for this logical volume [DOUBLE]
Q_HR_TOT_IO_W Total number of I/O write requests (command chains which contain at least
one write command) occurring in this hour for this logical volume
[DOUBLE]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCCH) in this hour for this
logical volume [INTEGER]
Q_HR_CACHE2DISK Number of cache to disk track transfers in this hour for this logical volume
(number of tracks transferred from cache to DASD asynchronous to
transfers from the channel) [DOUBLE]
Q_HR_DISK2CACHE Number of disk to cache track transfers in this hour for this logical volume
(number of tracks transferred successfully from DASD to cache)
[DOUBLE]
Q_HR_CACHE_HIT_R Cache hit ratio * 1000 (total cache hits/IO requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_WR Cache hit ratio * 1000 for write requests (total write cache hits/write I/O
requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_RR Cache hit ratio * 1000 for read requests (total read cache hits/read I/O
requests * 1000) [SMALLINT]
Q_HR_SIO_R Total number of sequential read requests occurring in this hour for this
volume [DOUBLE NOT NULL DEFAULT -1]
Q_HR_SIO_W Total number of sequential write requests occurring in this hour for this
volume [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_R Total number of normal (random) read requests occurring in this hour for
this volume [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_W Total number of normal (random) write requests occurring in this hour for
this volume [DOUBLE NOT NULL DEFAULT -1]
Q_HR_RMR Total number of record mode read requests occurring in this hour for this
volume [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NVS_DELAY Total number of IO requests delayed due to NVS space constraints in this
hour for this volume [DOUBLE NOT NULL DEFAULT -1]
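The Q_HR_CACHE_HIT_* columns above store a ratio scaled by 1000 in a SMALLINT. A minimal sketch of how such a value can be derived from the hourly totals and read back as a percentage; the column names come from the table above, but the helper functions are illustrative, not part of ESS Expert.

```python
def scale_hit_ratio(cache_hits: float, io_reqs: float) -> int:
    """Compute a Q_HR_CACHE_HIT_R-style value: (hits / requests) * 1000."""
    if io_reqs <= 0:
        return 0
    return int(round(cache_hits / io_reqs * 1000))

def hit_ratio_percent(stored: int) -> float:
    """Convert a stored ratio*1000 value back to a percentage."""
    return stored / 10.0

stored = scale_hit_ratio(cache_hits=4620, io_reqs=5000)
print(stored)                     # 924
print(hit_ratio_percent(stored))  # 92.4
```

For example, 4620 cache hits out of 5000 I/O requests (a 92.4% hit ratio) is stored as 924.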
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_MACH_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See VPSNX)
[INTEGER, NOT NULL]
I_CLUSTER_NO Cluster number for this logical array [SMALLINT, NOT NULL]
I_CARD_NO Card number of adapter associated with this logical array [SMALLINT,
NOT NULL]
I_LOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
the logical array [CHAR(1), NOT NULL]
I_DISK_GRP_NO Identifying number of the disk group containing the logical array
[SMALLINT, NOT NULL]
I_DISK_NUM Disk number of the disk group (and lowest level identifier of the logical
array), if an independent disk, 0 otherwise [SMALLINT, NOT NULL]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_CACHE_HITS Total number of cache hits occurring in this hour for this logical array
(command chains that were completed without requiring access to any
DASD) [DOUBLE]
Q_HR_TOT_IO_REQS Total number of I/O requests (command chains) occurring in this hour for
this logical array [DOUBLE]
Q_HR_TOT_IO_R Total number of I/O read requests (command chains which contain at least
one search or read command but no write commands) occurring in this
hour for this logical array [DOUBLE]
Q_HR_TOT_IO_W Total number of I/O write requests (command chains which contain at least
one write command) occurring in this hour for this logical array [DOUBLE]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCCH) in this hour for this
logical array [DOUBLE]
Q_HR_CACHE2DISK Number of cache to disk track transfers in this hour for this logical array
(number of tracks transferred from cache to DASD asynchronous to
transfers from the channel) [DOUBLE]
Q_HR_DISK2CACHE Number of disk to cache track transfers in this hour for this logical array
(number of tracks transferred successfully from DASD to cache)
[DOUBLE]
Q_HR_CACHE_HIT_R Cache hit ratio * 1000 (total cache hits/IO requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_WR Cache hit ratio * 1000 for write requests (total write cache hits/write I/O
requests * 1000) [SMALLINT]
Q_HR_SIO_R Total number of sequential read requests occurring in this hour for this
logical array [DOUBLE NOT NULL DEFAULT -1]
Q_HR_SIO_W Total number of sequential write requests occurring in this hour for this
logical array [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_R Total number of normal (random) read requests occurring in this hour for
this logical array [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_W Total number of normal (random) write requests occurring in this hour for
this logical array [DOUBLE NOT NULL DEFAULT -1]
Q_HR_RMR Total number of record mode read requests occurring in this hour for this
logical array [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NVS_DELAY Total number of IO requests delayed due to NVS space constraints in this
hour for this logical array [DOUBLE NOT NULL DEFAULT -1]
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_MACH_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See VPSNX)
[INTEGER, NOT NULL]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_CACHE_HITS Total number of cache hits occurring in this hour for this adapter/loop
(command chains that were completed without requiring access to any
DASD) [DOUBLE]
Q_HR_TOT_IO_REQS Total number of I/O requests (command chains) occurring in this hour for
this adapter/loop [DOUBLE]
Q_HR_TOT_IO_R Total number of I/O read requests (command chains which contain at least
one search or read command but no write commands) occurring in this
hour for this adapter/loop [DOUBLE]
Q_HR_TOT_IO_W Total number of I/O write requests (command chains which contain at least
one write command) occurring in this hour for this adapter/loop [DOUBLE]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCCH) in this hour for this
adapter/loop [DOUBLE]
Q_HR_CACHE2DISK Number of cache to disk track transfers in this hour for this adapter/loop
(number of tracks transferred from cache to DASD asynchronous to
transfers from the channel) [DOUBLE]
Q_HR_DISK2CACHE Number of disk to cache track transfers in this hour for this adapter/loop
(number of tracks transferred successfully from DASD to cache)
[DOUBLE]
Q_HR_CACHE_HIT_R Cache hit ratio * 1000 (total cache hits/IO requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_WR Cache hit ratio * 1000 for write requests (total write cache hits/write I/O
requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_RR Cache hit ratio * 1000 for read requests (total read cache hits/read I/O
requests * 1000) [SMALLINT]
Q_HR_SIO_R Total number of sequential read requests occurring in this hour for this
adapter/loop [DOUBLE NOT NULL DEFAULT -1]
Q_HR_SIO_W Total number of sequential write requests occurring in this hour for this
adapter/loop [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_R Total number of normal (random) read requests occurring in this hour for
this adapter/loop [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_W Total number of normal (random) write requests occurring in this hour for
this adapter/loop [DOUBLE NOT NULL DEFAULT -1]
Q_HR_RMR Total number of record mode read requests occurring in this hour for this
adapter/loop [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NVS_DELAY Total number of IO requests delayed due to NVS space constraints in this
hour for this adapter/loop [DOUBLE NOT NULL DEFAULT -1]
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_MACH_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See VPSNX)
[INTEGER, NOT NULL]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_CACHE_HITS Total number of cache hits occurring in this hour for this cluster (command
chains that were completed without requiring access to any DASD)
[DOUBLE]
Q_HR_TOT_IO_REQS Total number of I/O requests (command chains) occurring in this hour for
this cluster [DOUBLE]
Q_HR_TOT_IO_R Total number of I/O read requests (command chains which contain at least
one search or read command but no write commands) occurring in this
hour for this cluster [DOUBLE]
Q_HR_TOT_IO_W Total number of I/O write requests (command chains which contain at least
one write command) occurring in this hour for this cluster [DOUBLE]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCCH) in this hour for this
cluster [DOUBLE]
Q_HR_CACHE2DISK Number of cache to disk track transfers in this hour for this cluster (number
of tracks transferred from cache to DASD asynchronous to transfers from
the channel) [DOUBLE]
Q_HR_DISK2CACHE Number of disk to cache track transfers in this hour for this cluster (number
of tracks transferred successfully from DASD to cache) [DOUBLE]
Q_HR_CACHE_HIT_R Cache hit ratio * 1000 (total cache hits/IO requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_WR Cache hit ratio * 1000 for write requests (total write cache hits/write I/O
requests * 1000) [SMALLINT]
Q_HR_CACHE_HIT_RR Cache hit ratio * 1000 for read requests (total read cache hits/read I/O
requests * 1000) [SMALLINT]
Q_HR_AVG_HOLD_TIME Average holding time for this cluster and this hour [INTEGER]
I_AVH_THRESHOLD The integral value for average holding time threshold. A holding time below
this threshold is displayed as an exception. [SMALLINT]
Q_HR_SIO_R Total number of sequential read requests occurring in this hour for this
cluster [DOUBLE NOT NULL DEFAULT -1]
Q_HR_SIO_W Total number of sequential write requests occurring in this hour for this
cluster [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_R Total number of normal (random) read requests occurring in this hour for
this cluster [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NIO_W Total number of normal (random) write requests occurring in this hour for
this cluster [DOUBLE NOT NULL DEFAULT -1]
Q_HR_RMR Total number of record mode read requests occurring in this hour for this
cluster [DOUBLE NOT NULL DEFAULT -1]
Q_HR_NVS_DELAY Total number of IO requests delayed due to NVS space constraints in this
hour for this cluster [DOUBLE NOT NULL DEFAULT -1]
I_VSM_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. [INTEGER, NOT
NULL]
I_VSM_SN The serial number of the associated storage server [CHAR(9), NOT NULL]
I_ITEM_NO Identifier of the row number (currently one row, this value always zero)
[SMALLINT, NOT NULL]
D_PR_DATE Date of most recent set of sample records (VPCCH and VPCRK) the Data
Preparation task successfully processed (for all servers) [DATE, NOT
NULL]
I_PR_HOUR Hour of the day (0-23) the most recent set of sample records (VPCCH and
VPCRK) the Data Preparation task successfully processed [SMALLINT,
NOT NULL]
I_PERF_DB_LEVEL Level of the VPCCH and VPCRK tables; used for automatic upgrade to the
latest level. [CHAR(8), NOT NULL]
HRS_TO_RESUM Number of hours to reprocess the next time the Data Preparation task
executes. Updated internally each time the task runs. [SMALLINT, NOT NULL]
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_VSM_PERF_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See VPSNX)
[INTEGER, NOT NULL]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCRK) in this hour for this
storage server [INTEGER]
Q_HR_TOT_IOS Total number of subsystem I/O requests issued to logical arrays in this
storage server in this hour [DOUBLE]
Q_HR_TOT_RESP_TIME Total time, in milliseconds, to satisfy all subsystem I/O requests issued to
logical arrays in this storage server [DOUBLE]
Q_HR_MAX_IOR Maximum subsystem I/O rate for logical arrays in this storage server in this
hour (max of the sample interval-level, average subsystem I/O rates,
PC_IOR_AVG, for logical arrays in this storage server) [INTEGER]
Q_HR_AVG_IOR Average subsystem I/O rate for this storage server in this hour (number of
subsystem I/O requests/average number of sampling seconds)
[INTEGER]
Q_HR_AVG_MSR Average millisecond time to satisfy all subsystem I/O requests issued to
logical arrays in this storage server and in this hour (total time for all
subsystem I/O requests/number of subsystem I/O requests) [INTEGER]
Q_HR_AVG_IOIN Average I/O intensity for this storage server in this hour (Q_HR_AVG_IOR
* Q_HR_AVG_MSR * 1000) [DOUBLE]
C_PR_CONFIG_CHG Negative if a logical array cannot be found in the configuration snapshot for
this storage server, zero otherwise [SMALLINT]
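The server-level hourly columns above are defined in terms of each other: the average I/O rate divides total requests by sampling seconds, the average response time divides total milliseconds by total requests, and the I/O intensity multiplies the two and scales by 1000. A minimal sketch of those derivations as documented; the variable names mirror the columns, and the function itself is illustrative, not ESS Expert code.

```python
def summarize_hour(tot_ios: float, tot_secs: float, tot_resp_time_ms: float):
    """Derive the hourly averages from the documented totals."""
    avg_ior = tot_ios / tot_secs if tot_secs else 0          # Q_HR_AVG_IOR
    avg_msr = tot_resp_time_ms / tot_ios if tot_ios else 0   # Q_HR_AVG_MSR
    avg_ioin = avg_ior * avg_msr * 1000                      # Q_HR_AVG_IOIN
    return int(avg_ior), int(avg_msr), avg_ioin

ior, msr, ioin = summarize_hour(tot_ios=180000, tot_secs=3600,
                                tot_resp_time_ms=900000)
print(ior, msr, ioin)  # 50 5 250000.0
```

So 180,000 requests over 3600 sampling seconds taking 900,000 ms in total yields a 50/sec I/O rate and a 5 ms average response time.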
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_VSM_PERF_IDX An ESS internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See VPSNX)
[INTEGER, NOT NULL]
I_ARRAY_ID An ESS internally generated logical array identifier for a logical array
assigned to this disk group. (Some disk groups may contain more than one
logical array.) [CHAR(8), NOT NULL]
I_LOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this logical array. [CHAR(1)]
I_DISK_GRP_NUM Identifying number of the disk group containing the logical array.
[SMALLINT]
I_DISK_NUM Disk number of the disk group (and lowest level identifier of the logical
array), if an independent disk, 0 otherwise [SMALLINT]
I_CLUSTER_NO Cluster number for this logical array [SMALLINT, NOT NULL]
I_CARD_NUM Card number of adapter associated with this logical array [SMALLINT,
NOT NULL]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_SAMPLES Number of performance sample records, collected for this logical array,
used in this summary [SMALLINT]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCRK) for this logical array in
this hour [INTEGER]
Q_HR_TOT_IOS Total number of subsystem I/O requests issued to this logical array in this
hour [DOUBLE]
Q_HR_TOT_RESP_TIME Total time, in milliseconds, to satisfy all subsystem I/O requests issued to
this logical array [DOUBLE]
Q_HR_MAX_IOR Maximum subsystem I/O rate for this logical array and this hour (max of the
sample interval-level, average subsystem I/O rates, PC_IOR_AVG, for this
logical array) [INTEGER]
Q_HR_MAX_IOIN Maximum I/O intensity for this logical array in this hour (max of
PC_IOR_AVG * PC_MSR_AVG * 1000) [DOUBLE]
Q_HR_AVG_IOR Average subsystem I/O rate for all requests issued to this logical array in
this hour (number of subsystem I/O requests/number of sampling
seconds) [INTEGER]
Q_HR_AVG_MSR Average millisecond time to satisfy all subsystem I/O requests issued to
this logical array in this hour (total time for all subsystem I/O
requests/number of subsystem I/O requests) [INTEGER]
Q_HR_AVG_IOIN Average I/O intensity for this logical array in this hour (Q_HR_AVG_IOR *
Q_HR_AVG_MSR * 1000) [DOUBLE]
Q_HR_DEV_UTIL Average device utilization percent (value is 0-100) for the DDMs in this
logical array (totals for the hour used in formula) [SMALLINT]
Q_HR_DU_NO_EXCEPTS Number of sample time periods when the device utilization exceeded the
threshold (values for each sample time period are used in the formula)
[SMALLINT]
Q_HR_DU_MAX_EXCEPT Maximum disk utilization value (0-100), for all sample time period disk
utilization values exceeding the threshold (or zero, if threshold not
exceeded) [SMALLINT]
I_DU_THRESHOLD The percent (0-100) above which a disk utilization value is reported as an
exception. [SMALLINT]
Q_HR_TOT_VOL_IOS Total number of IO requests for volumes belonging to this logical array in
this hour [DOUBLE NOT NULL DEFAULT -1]
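The disk-utilization exception columns above relate as follows: given the per-sample utilization percentages and I_DU_THRESHOLD, count the samples above the threshold and record the worst value (or zero when the threshold is never exceeded). An illustrative sketch modeled on the column names; this is not ESS Expert code.

```python
def du_exceptions(samples, threshold):
    """Count threshold exceptions and find the maximum offending value."""
    over = [s for s in samples if s > threshold]
    no_excepts = len(over)                  # Q_HR_DU_NO_EXCEPTS
    max_except = max(over) if over else 0   # Q_HR_DU_MAX_EXCEPT
    return no_excepts, max_except

# Four sample periods against an 80% threshold: two exceptions, worst 91%.
print(du_exceptions([40, 72, 85, 91], threshold=80))  # (2, 91)
```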
I_PR_SEQ_IDX Sequence number of the Data Preparation task that created this summary
data. [INTEGER, NOT NULL]
I_VSM_PERF_IDX An internally generated identifier (index) for a storage server that has
performance summary data available in the database. (See VPSNX)
[INTEGER, NOT NULL]
D_PR_DATE Date to which this performance data applies [DATE, NOT NULL]
I_PR_HOUR Hour of the day (0-23) to which this performance data applies [SMALLINT,
NOT NULL]
Q_HR_TOT_SECS Total number of sampling seconds (from VPCRK) for this adapter/loop in
this hour [INTEGER]
Q_HR_TOT_IOS Total number of subsystem I/O requests issued to this adapter/loop in this
hour [DOUBLE]
Q_HR_TOT_RESP_TIME Total time, in milliseconds, to satisfy all subsystem I/O requests issued to
this adapter/loop [DOUBLE]
Q_HR_MAX_IOR Maximum subsystem I/O rate for this adapter/loop and this hour (max of
the sample interval-level, average subsystem I/O rates, PC_IOR_AVG, for
logical arrays associated with this adapter/loop) [INTEGER]
Q_HR_MAX_IOIN Maximum I/O intensity for this adapter/loop in this hour (max of
PC_IOR_AVG * PC_MSR_AVG * 1000 for all logical arrays and sample
intervals) [DOUBLE]
Q_HR_AVG_IOR Average subsystem I/O rate for all requests issued to this adapter/loop in
this hour (number of I/O requests/average number of sampling seconds)
[INTEGER]
Q_HR_AVG_MSR Average millisecond time to satisfy all subsystem I/O requests issued to
this adapter/loop in this hour (total time for all subsystem I/O
requests/number of subsystem I/O requests) [INTEGER]
Q_HR_AVG_IOIN Average I/O intensity for this adapter/loop in this hour (Q_HR_AVG_IOR *
Q_HR_AVG_MSR * 1000) [DOUBLE]
Q_HR_DEV_UTIL Average device utilization percent (value is 0-100) for the DDMs in this
adapter/loop (weighted average of hourly disk utilization values for logical
arrays in this adapter/loop) [SMALLINT]
Q_HR_DU_NO_EXCEPTS Number of sample time periods when one or more logical array device
utilization values, associated with this adapter/loop, exceeded the
threshold [SMALLINT]
Q_HR_DU_MAX_EXCEPT Maximum of all logical array disk utilization values (0-100) which exceeded
the threshold (or zero, if threshold not exceeded by any logical array in this
adapter/loop) [SMALLINT]
I_DU_THRESHOLD The percent (0-100) above which a disk utilization value is reported as an
exception. [SMALLINT]
Q_HR_SEQ_IOS Total number of sequential IO requests for volumes with affinity to this
adapter/loop in this hour [DOUBLE NOT NULL DEFAULT -1]
Q_HR_TOT_VOL_IOS Total number of IO requests for volumes with affinity to this adapter/loop in
this hour [DOUBLE NOT NULL DEFAULT -1]
I_SCHD_TASK A unique identifier for the scheduled task. (See CSCHD and CSCHH)
[CHAR(32), NOT NULL]
I_USER The userid of the creator of the scheduled task. [CHAR(20), NOT NULL]
C_SCHD_TASK_TYPE The scheduled task type. For example "VPD" for performance data
collection and "VAC" for asset and capacity data collection. [CHAR(4), NOT
NULL]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_CLU_NO The cluster number (identifier for the cluster) for the cluster controller on
the storage server this task will communicate with. [INTEGER, NOT NULL]
I_SCHH_TASK_SEQ Task sequence number of the task execution (see CSCHH). [INTEGER,
NOT NULL]
Q_MACH_TRIED Number of storage servers for which this task attempted to perform work
[SMALLINT]
Q_MACH_SUCCEEDED Number of storage servers for which this task completed successfully
[SMALLINT]
Q_MACH_FAILED Number of storage servers for which this task failed [SMALLINT]
I_SCHH_TASK_SEQ Task sequence number of the task execution (See CSCHH). [INTEGER,
NOT NULL]
Q_TOTAL_STEPS Number of execution steps that are being tracked for this task [SMALLINT]
D_MONTH_DATE First-of-month date for the month when data was collected. [DATE, NOT
NULL]
I_VSM_SN The serial number of the storage server. [CHAR(16), NOT NULL]
I_SHORT_NAME An alias name provided by an authorized end user for this storage server
(optional) [CHAR(16)]
I_VSM_TYPE The higher level identifier for the type of storage server product, for
example 2105. [CHAR(16), NOT NULL]
D_TASK_DATE The date when this record was inserted by an asset/capacity collection
task. [DATE]
I_VSM_IDX An internally generated identifier (index) for a storage server. See VMPDX
for storage server data associated with this index. [INTEGER, NOT NULL]
I_PORT_BAY The host adapter bay for this port. [SMALLINT, NOT NULL]
I_PORT_CARD The host adapter card (or slot) for this port. [SMALLINT, NOT NULL]
I_CLUST_AFF Cluster affinity of the FC adapter port (1 or 2). If there is no affinity, the
value is -1. [SMALLINT, NOT NULL]
I_PORT_TOPOLOGY The FC topology of the port (0 - not yet defined; 1 - point to point; 2 -
arbitrated loop). [SMALLINT, NOT NULL]
I_PORT_WWPN The world wide port name of this FC port (a string of 16 hexadecimal digits).
[CHAR(16), NOT NULL]
I_HOST_EXISTS Indicator of whether at least one FC host is attached to this adapter port
(value is 1); if no FC host is attached to this port, the value is 0. [SMALLINT,
NOT NULL]
I_HOST_WWPN The world wide port name of a host assigned to this FC port (a string of 16
hexadecimal digits). If no host is assigned to this port, this field contains an
empty string. [CHAR(16), NOT NULL]
I_PORT_HOST The name of the host, as defined to the ESS Specialist (the Nickname). If
no host is assigned to this port, this field contains an empty string.
[CHAR(254), NOT NULL]
I_VSM_IDX An internally generated identifier (index) for the storage server. See
VMPDX for storage server data associated with this index. [INTEGER NOT
NULL]
I_HNICK_NAME The name of the host on the ESS, as defined to the ESS Specialist
[CHAR(254) NOT NULL]
I_HNICK_HW_TYPE Internally defined numeric indicator for the type of operating system of the
host [SMALLINT]
I_HNICK_IP The IP address of the host (if available), otherwise null [CHAR(254)]
D_TASK_DATE The date when a change in the attributes was first detected by the ESS
Expert. [DATE]
T_TASK_TIME The time when a change in the attributes was first detected by the ESS
Expert. [TIME]
I_HNICK_ATTACH Flag for an attached host connected to one or more storage servers: 2 if FC
attached, 1 if SCSI attached. [SMALLINT]
I_VSM_IDX An internally generated identifier (index) for the storage server. See
VMPDX for storage server data associated with this index. [INTEGER NOT
NULL]
I_VSM_IDX An internally generated identifier (index) for the storage server. See
VMPDX for storage server data associated with this index. [INTEGER NOT
NULL]
I_VOL_IDX An internally generated identifier (index) for a fixed block, logical volume
assigned to at least one open system host. [INTEGER NOT NULL]
I_VOL_SN Serial number of the fixed block volume (LUN serial number) [CHAR(16)]
I_VOL_SLOT_NUM Card number of adapter associated with this fixed block volume
[SMALLINT]
I_VOL_SSALOOP_ID SSA Loop Identifier (e.g., A or B) associated with the disk group containing
this fixed block volume [CHAR(1)]
I_VOL_DISK_GROUP Identifying number of the disk group containing this fixed block volume
[SMALLINT]
I_VOL_NUM Identifying number of this fixed block volume (and lowest level identifier of
the volume) [SMALLINT]
I_VSM_SN The serial number of a storage server (ESS) [CHAR(16) NOT NULL]
I_HNICK_NAME The nickname of an open systems host defined on the storage server (as
defined to the ESS Specialist) [CHAR(254) NOT NULL]
I_HOST_CONN_IP The DNS network name or IP address of the host, or an empty string
[CHAR(254) NOT NULL DEFAULT '']
I_HOST_SOURCE Value is "U" if I_HOST_CONN_IP was manually entered by the end user; "I"
if ESS Expert obtains the value from its asset/capacity data [CHAR(1) NOT
NULL DEFAULT 'U']
I_VSM_IDX An internally generated identifier (index) for the storage server [CHAR(1)]
I_SCHD_TASK A unique identifier for the scheduled task. (See CSCHD and CSCHH)
[CHAR(32) NOT NULL]
I_USER The userid of the creator of the scheduled task. [CHAR(20) NOT NULL]
C_SCHD_TASK_TYPE The scheduled task type, which is "VHD" for host data collection. [CHAR(4)
NOT NULL]
I_VSM_IDX An internally generated identifier (index) for the storage server [INTEGER
NOT NULL]
I_HOST_CONN_ALT The IP address of the host, if the user specified a network (DNS) name
[CHAR(19) NOT NULL]
I_VSM_IDX An internally generated identifier (index) for the storage server [INTEGER
NOT NULL]
I_VOL_SN Serial number of the fixed block volume (LUN serial number) [CHAR(16)
NOT NULL]
I_SDD_DEV_NAME The name used when requesting access through SDD for a virtual path to
the ESS volume, or null if no SDD path has been configured for the volume.
[CHAR(128)]
I_PATH_ID A numeric value associated with this path on the host to the ESS volume.
[SMALLINT NOT NULL]
I_PATH_ADP_NAME The name of the adapter within the host server to which the path is
attached. [CHAR(64) NOT NULL]
I_PATH_DSK_NAME The name of the ESS volume as known to SDD. This is the logical device
to which this path is bound. [CHAR(64) NOT NULL]
I_PATH_MODE The mode of the path as reported by SDD: 0 - closed, 1 - open, 2 - dead,
3 - invalid, 4 - unknown [SMALLINT NOT NULL]
D_TASK_DATE The date when the most recent successful data collection occurred for this
volume on this host. [DATE NOT NULL]
T_TASK_TIME The time when the most recent successful data collection occurred for this
volume on this host. [TIME NOT NULL]
I_TASK_SEQ_IDX Task sequence number of the host data collection task (See CSCHH)
[INTEGER NOT NULL]
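The I_PATH_MODE codes in the table above can be decoded with a simple mapping. A minimal sketch; the mode values and names come from the column description, while the mapping and function are ours for illustration.

```python
# Mode codes as documented for I_PATH_MODE.
PATH_MODE = {0: "closed", 1: "open", 2: "dead", 3: "invalid", 4: "unknown"}

def decode_path_mode(mode: int) -> str:
    """Map an I_PATH_MODE code to its documented meaning."""
    return PATH_MODE.get(mode, "unknown")

print(decode_path_mode(2))  # dead
```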
I_VSM_IDX An internally generated identifier (index) for the storage server [INTEGER
NOT NULL]
I_VOL_SN Serial number of the fixed block volume (LUN serial number) [CHAR(16)
NOT NULL]
I_TASK_SEQ_IDX Task sequence number of the host data collection task (See CSCHH)
[INTEGER NOT NULL]
I_SCHH_TASK_SEQ Task sequence number of the host data collection task (See CSCHH)
[INTEGER NOT NULL]
Q_HOST_FAILED_SD Number of open systems hosts that failed because the subsystem device
driver does not respond on the host (SDD may not be installed) [SMALLINT]
METRIC The metric to which this threshold definition applies. Current ESS metrics:
"CHT" (cache holding time), "NVS" (percent of time NVS cache is full), and
"DU" (percent of time disks at the ESS lower interface are utilized) [CHAR(30)
NOT NULL]
SCOPE The level of granularity to which this threshold definition applies. Current
levels: "IBM Recommended", "General", or a specific ESS serial number.
[CHAR(40) NOT NULL]
TYPE Indicator for whether the values ascend to critical ("U") or descend to
critical ("L") [CHAR(1) WITH DEFAULT 'U']
CVALUE The critical value for this metric and this scope. [REAL]
WVALUE The warning value for this metric and this scope. [REAL]
TOALERT Indicates which threshold level must be reached in order to issue an alert:
"W" issue an alert when the warning threshold is reached; "C" when the
critical threshold is reached; "N" do not issue any alert. [CHAR(1)]
SUPPRESSINTERVAL The time, in seconds, during which a threshold overflow condition will not
result in a new alert after an alert for the same component has already been
issued. [INTEGER]
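The threshold columns above describe a small decision rule: TYPE "U" means values ascend toward critical (for example, NVS-full percentage), while "L" means they descend toward critical (for example, cache holding time, where lower is worse). A minimal sketch of evaluating one sample against such a row, assuming this straightforward comparison semantics; the row layout mirrors the columns, but the evaluation logic is our illustration, not the ESS Expert implementation.

```python
def evaluate(value, row):
    """Classify a metric sample against one threshold-definition row."""
    ttype, cvalue, wvalue = row["TYPE"], row["CVALUE"], row["WVALUE"]
    if ttype == "U":           # values ascend to critical
        if value >= cvalue:
            return "critical"
        if value >= wvalue:
            return "warning"
    else:                      # "L": values descend to critical
        if value <= cvalue:
            return "critical"
        if value <= wvalue:
            return "warning"
    return "normal"

# Hypothetical rows: NVS-full percent ascends, cache holding time descends.
nvs = {"METRIC": "NVS", "TYPE": "U", "CVALUE": 90.0, "WVALUE": 75.0}
cht = {"METRIC": "CHT", "TYPE": "L", "CVALUE": 30.0, "WVALUE": 90.0}
print(evaluate(80.0, nvs))  # warning
print(evaluate(20.0, cht))  # critical
```

Whether the resulting state is alerted at all is then governed by TOALERT, with SUPPRESSINTERVAL suppressing repeat alerts for the same component.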
EVENTID An identifier for nature of a failing ESS processing event. Current events
are "essTaskFailed", "essConnectionFailed", "essDataProcessingFailed",
"essPerfDataCollectFailed", "essHostServerFailed" [CHAR(30) NOT
NULL]
TOALERT Indicates whether an SNMP alert should be issued when this failing event
occurs. Values are "Y" to issue an alert, "N" to not issue an alert.
[CHAR(1)]
SUPPRESSINTERVAL The time, in minutes, during which a failing event will not result in a new
alert after an alert has been issued for the identical failing event. [INTEGER]
IP IP address
SYSTYPE Server type: "LS" for Library Manager, "GS" for Gemini Controller
ACTIVE Flag for whether data collection should collect from this node
REALINTERVAL Real-time data collection interval, in minutes, at which Expert collects data
from this node; 0 for Gemini
SCOPE Additional part of the key, for narrowing applicability. "SYSTEM" means no
narrowing.
USERID User ID
SEVERITY Severity
SYSID System ID; uses SNO, the 5-character EBCDIC library sequence number
that uniquely identifies an ATL
SLT Library type number. For example, "003495" represents the IBM 3495 Automated Tape Library Dataserver.
SLM Library model number. For example, "L30" represents model L30
SPL Library plant of manufacture. For example, "13" represents San Jose,
California, and "77" represents Valencia, Spain
SPAN null
ENDTIMESTAMP null
MT2 Minimum amount of time, in seconds, required to perform any single mount
operation
DIN Index demounts. An index demount moves a volume from the feed station
to the output stack of the automatic cartridge loader of a 3490 tape drive
EPR Number of eject requests currently pending. An eject operation moves one
volume from the ATL to an output station for an operator to remove
ET1 Maximum amount of time, in seconds, required to perform any single eject
operation
ET2 Minimum amount of time, in seconds, required to perform any single eject
operation
APR Number of audit requests currently pending. When the host requests an
audit operation, the accessor moves to a shelf location and ensures that a
volume is present
AT1 Maximum amount of time, in seconds, required to perform any single audit
operation
AT2 Minimum amount of time, in seconds, required to perform any single audit
operation
INS Number of insert stores. This number is the number of volumes moved
from an input station to a location inside the ATL
SYSID System ID; uses the VLS, the 5-character EBCDIC library sequence
number, for the library segment for which VTS statistics are being reported
SPAN null
ENDTIMESTAMP null
LEASTVCA For hourly records, same as VCA, above. For daily, weekly, and longer spans, the minimum of the averages
EX1 Number of physical volumes that contain the successfully exported logical
volumes exported during the last hour
EX2 Number of logical volumes successfully exported for export operations that
completed during the last hour
IM3 Megabytes of data imported for import operations that completed in the
last hour
EX3 Megabytes of data exported during export operations that completed in the
last hour
IM4 Megabytes of data that was moved from one physical stacked volume to
another as part of the import operations that completed in the last hour
EX4 Megabytes moved from one physical stacked volume to another as part of
the export operations completed in the last hour
ADV00 Number of volumes containing greater than 95 to 100 percent active data
SPAN null
SPAN Time duration between the collection time of this record and the previous
record. If SPAN is too long compared with the collection interval, it is
adjusted down to the collection interval by the factor REF, under the
assumption that one or more collections were lost
REF Factor used to normalize metrics when the span from the last record is too
long
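The SPAN/REF adjustment described above can be sketched as follows. The exact formula and tolerance Expert uses are not documented here, so this is one plausible reading: when the recorded span greatly exceeds the collection interval, scale interval-accumulated counters down proportionally.

```python
def normalize_span(span, collect_interval, tolerance=1.5):
    """If the recorded SPAN is much longer than the collection interval
    (suggesting one or more lost collections), clamp it to the interval
    and return (adjusted_span, ref), where ref is the factor to apply to
    the metrics. Tolerance and formula are illustrative assumptions."""
    if span > collect_interval * tolerance:
        ref = collect_interval / span     # scale metrics down proportionally
        return collect_interval, ref
    return span, 1.0

def normalize_metric(value, ref):
    """Scale an interval-accumulated counter by REF."""
    return value * ref
```

For example, a record spanning 3600 seconds against a 900-second interval would have its counters scaled by 0.25.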
PTPENABLED PTPEnabled
EXPIMPENABLED ExportImport
OPMODE OpMode
RECENABLED RecEnabled
RECINHIBITED RecInhibited
RECING RecInProgress
RECONCILEING ReconcileInProgress
EXPORTING ExportInProgress
IMPORTING ImportInProgress
RAIDREBUILDING RAIDRebuildInProgress
ROVRECOVERYING ROVRecoveryInProgress
NUMROVOL NumberROVolumes
ROVOLPROCESSED ROVolumeBeingProcessed
VBW ChannelWriteBytes
VBR ChannelReadBytes
RECALLSQUEUED RecallsQdOrInProg
MBTOCOPY MBToCopyToBackstore
VPM NumDrivesForMigration
VPS NumDrivesForRecall
VPR NumDrivesForReclamation
VPIX NumDrivesForImport
VPEX NumDrivesForExport
THRTRECALLPCNT RecallPredominate
THRTWRITEPCNT WriteOverrun
THRTRECALL AverageRecall
THRTWRITE AverageWrite
THRTALL Overall
VOLTOEXPORT TotalValidVolumesToExport
NUMVOLEXPORTED NumberVolumesExported
VOLTOIMPORT TotalValidVolumesToImport
NUMVOLIMPORTED NumberVolumesImported
HOSTCHANNELADAPTER HostChannelAdapter
RAIDARRAYADAPTER RAIDArrayAdapter
BACKDATAPATH BackstoreDataPath
POWERSUPPLY RedundantPowerSupply
ALLRAIDHDDS AllRAIDHDDs
SPAREHDDS SpareHDDs
NUMEMPTYPHYVOL NumberEmptyPhysicalVols
F1 Field 1: GEM ID
F2 Field 2: VTS ID
F3 Field 3
REPID report id
REPSOURCE null
Select the Additional materials and open the directory that corresponds to
the redbook form number, SG247016.
alert. A message or log that a storage facility generates as the result of error event collection and analysis. An alert indicates that you need to perform some service action.

ALTER. An SQL statement used to change the definition of an existing DB2 UDB object.

American National Standard Code for Information Interchange (ASCII). A coding scheme that is defined by ANSI X3.4-1977. Programmers use it to represent various alphabetic, numeric, and special symbols with a seven-bit code.

array. An arrangement of related disk drive modules that you have assigned to a group.

ASCII (American Standard Code for Information Interchange). An interchange code in which code pages use 7-bit coded characters.

asset management. The organization and arrangement of items, such as storage devices, into useful and logical units. ESRM, for example, identifies storage resources by parameters that include type, model, serial number, features, location, acquisition date, price, and maintenance information.
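The 7-bit property of ASCII noted above can be checked in Python; the helper below is illustrative and not part of any product discussed in this book.

```python
def is_seven_bit_ascii(text):
    """True if every character fits in the 7-bit ASCII code described
    above, that is, code points 0 through 127."""
    return all(ord(ch) < 128 for ch in text)
```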
availability. For a storage subsystem, the degree to which a data set can be accessed when requested by a user.

backup. The process of creating a copy of a data set or object to be used in case of accidental loss.

base table. 1. A table created by the SQL CREATE TABLE statement that is used to hold persistent data. Contrast with result table. 2. A table containing a LOB column definition. The actual LOB column data is not stored along with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. 3. The set of columns and rows represented by the table or view name in an SQL FROM clause.

bay. Physical space on an IBM Enterprise Storage Server rack. A bay contains SCSI interface cards and SSA device interface cards.

bind. The process by which the output from the DB2 precompiler is converted to a usable control structure called a package or an application plan (OS/390). During the process, access paths to the data are selected and some authorization checking is performed.

bindfile. A DB2 UDB UNIX/Intel file that contains all the SQL statements and other information needed to generate a package for a program. The precompiler generates this file when the BINDFILE option of the PREP command is chosen. A package can then be generated from the bindfile and stored in the database by the BIND command.

business logic. The code that implements a business application.

business model. The major business entities of a company and their relationship to each other.

cache hit. An event that occurs when a read operation is sent to the cluster, and the requested data is found in cache.

cache hit ratio. The percentage of read and write requests serviced out of cache. Indicates the workload and effectiveness of cache. The higher the ratio, the more data found in cache, and the less work the controller has to do to find data. Note: in open systems a low ratio is normal. The hit ratio can be used to assess the locality present in the workload. Hit ratios tend to vary widely, ranging from almost zero for truly random requests to almost 100 percent for large sequential transfers. This field should be used primarily to gain insight into workload characteristics. To check whether sufficient cache has been configured, see “cache holding time”.

cache holding time. The average holding time (in seconds), averaged over the observed time period. This indicates the average time data remained in cache. Data should remain in cache at least 30 seconds. Action to take for problem: get more cache.

cache miss. An event that occurs when a read operation is sent to the cluster, but the data is not found in cache.

call level interface (CLI). A callable application program interface (API) for database access, which is an alternative to using embedded SQL. In contrast to embedded SQL, DB2 CLI does not require the user to precompile or bind applications, but instead provides a standard set of functions to process SQL statements and related services at run time.

catalog. In DB2 UDB, a collection of tables used by DB2 UDB to manage itself. Among other things, the catalog contains descriptions of objects such as tables, views, and indexes.

channel. In the ESA/390 architecture, the part of a channel subsystem that manages a single I/O interface between a channel subsystem and a set of controllers.

channel command retries (CCR). In the ESA/390 architecture, the protocol used between a channel and a controller that allows the controller to request that the channel reissue the current command.

client. A computer system or process that requests a service of another computer system or process that is typically referred to as a server. Multiple clients may share access to a common server.
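The cache hit ratio defined above is a simple percentage of hits over total requests. As a sketch (the function is illustrative, not an Expert API):

```python
def cache_hit_ratio(hits, misses):
    """Percentage of read and write requests serviced out of cache,
    per the 'cache hit ratio' definition above."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0
```

For example, 90 hits against 10 misses gives a 90 percent hit ratio, which for a mainframe workload would suggest good locality; as noted above, much lower ratios are normal in open systems.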
Glossary 327
cluster. 1. A partition of a storage server that is capable of performing all functions of a storage server. When there are multiple clusters in a storage server, any remaining clusters in the configuration can take over the processing of any failing clusters. 2. On an AIX platform, a cluster is a group of nodes within a complex. 3. One of the functional components in ESS. A Cluster plays a major role required for Storage Control. ESS has two Clusters, each of which works independently.

column. The vertical component of a table. A column has a name and a particular data type (for example, character, decimal, or integer).

column function. An SQL operation that derives its result from a collection of values across one or more rows. Contrast with scalar function.

command. 1. A control signal. 2. In a conceptual schema language, the order or trigger for an action or permissible action to take place. 3. Loosely, a mathematical or logical operator. 4. A statement used to request a function of the system. 5. A request from a terminal for the performance of an operation or the execution of a particular program.

commit. The operation that ends a unit of work by releasing locks so that the database changes made by that unit of work can be perceived by other processes.

comparison operator. A token (such as =, >, <) used to specify a relationship between two values.

compress. 1. To reduce the amount of storage required for a given data set by having the system replace identical words, phrases, or data patterns with a shorter token associated with that word, phrase, or data pattern. 2. To reclaim the unused and unavailable space in a partitioned data set that results from deleting or modifying members by moving all unused space to the end of the data set.

compression. 1. The process of eliminating gaps, empty fields, redundancies, and unnecessary data to reduce the length of records or blocks of data. 2. Any encoding to reduce the number of bits used to represent a given message or record.

concurrency. The shared use of resources by more than one application process at the same time.

configuration. 1. The manner in which the hardware and software of an information processing system are organized and interconnected. 2. The physical and logical arrangement of devices and programs that make up a computing system. 3. The devices and programs that make up a system, subsystem, or network.

configuration management. The control of information necessary to identify both physical and logical information system resources and their relationship to one another. ESRM, for example, provides a way to view and modify hardware configurations for host adapters, caches, and storage devices at the licensed internal code, operating system, and software application level.

configure. To define the logical and physical configuration of the input/output (I/O) subsystem via the user interface provided for this function on the storage facility.

constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable.

count field. The first field of a CKD record. This eight-byte field contains a four-byte track address (CCHH). It defines the cylinder and head that are associated with the track, and a one-byte record number (R) that identifies the record on the track. It defines a one-byte key length that specifies the length of the record’s key field (0 means no key field). It defines a two-byte data length that specifies the length of the record’s data field (0 means no data field). Only the end-of-file record has a data length of zero.
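The count field layout described above (CCHH track address, record number R, key length, data length) can be unpacked with Python's standard struct module. The field widths follow the definition; the helper itself and its dictionary keys are illustrative assumptions.

```python
import struct

def parse_count_field(count_bytes):
    """Unpack the eight-byte CKD count field described above:
    cylinder CC (2 bytes), head HH (2), record number R (1),
    key length KL (1), data length DL (2), big-endian.
    A data length of zero marks the end-of-file record."""
    cc, hh, r, kl, dl = struct.unpack(">HHBBH", count_bytes)
    return {"cylinder": cc, "head": hh, "record": r,
            "key_length": kl, "data_length": dl,
            "end_of_file": dl == 0}
```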
CREATE. An SQL statement used to define an object such as a table, index, view, or database to DB2 UDB.

CSV file. Comma separated variable file. A standard flat file that contains all the values from a report separated by commas. You can use this file with spreadsheet programs such as Lotus 1-2-3® or Excel.

CU. See “control unit (CU)”.

cylinder. A unit of storage on a CKD device. A cylinder has a fixed number of tracks.

DA. See “device adapter (DA)”.

DASD (direct access storage device). A data storage device in which access time is independent of the location of the data.

DASD fast write. A function of a storage controller that allows caching of active write data without exposure of data loss by journaling of the active write data in NVS.

data collection task. A long-running, asynchronous task which is scheduled to run at specific times. This task collects inventory and storage capacity data from machines within the customer’s network that is managed by Expert.

data compression. A technique or algorithm that you use to encode data such that you can store the encoded result in less space than the original data. This algorithm allows you to recover the original data from the encoded result through a reverse technique or reverse algorithm. See also “compression”.

data field. The third (optional) field of a CKD record. You determine the field length by the data length that is specified in the count field. The data field contains data that the program writes.

data record. A subsystem stores data records on a track by following the track-descriptor record. The subsystem numbers the data records consecutively, starting with 1. A track can store a maximum of 255 data records. Each data record consists of a count field, a key field (optional), and a data field (optional).

data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions.

data warehouse. An implementation of an informational database used to store sharable data sourced from an operational database. A collection of a company's persistent asset data, made available to those requiring access.

database. A collection of tables, or a collection of tablespaces and index spaces.

Database 2™ (DB2). A relational database management system. DB2 Universal Database is the relational database management system that is Web-enabled with Java support.

DataJoiner. A separately available product that provides client applications integrated access to distributed data and provides a single database image of a heterogeneous environment. With DataJoiner, a client application can join data (using a single SQL statement) that is distributed across multiple database management systems or update a single remote data source as if the data were local.
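A CSV report file of the kind defined above can be produced and consumed with Python's standard csv module. The report column names and values below are made up for illustration.

```python
import csv
import io

# Hypothetical report values: one header row plus one row per subsystem.
rows = [["Subsystem", "CacheHitRatio"],
        ["ESS-01", "87.5"],
        ["ESS-02", "42.0"]]

# Write the rows as comma-separated text (in practice, to a .csv file
# that a spreadsheet program can open).
buf = io.StringIO()
csv.writer(buf).writerows(rows)
csv_text = buf.getvalue()

# Read the same text back into a list of rows.
parsed = list(csv.reader(io.StringIO(csv_text)))
```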
DB2. See DB2 UDB.

DB2 catalog. Tables maintained by DB2 that contain descriptions of DB2 objects such as tables, views, and indexes.

DB2 UDB. An IBM relational database management system that is available as a licensed program on several operating systems. Programmers and users of DB2 can create, access, modify, and delete data in relational tables using a variety of interfaces.

DBA (database administrator). An individual responsible for the design, development, operation, safeguarding, maintenance, and use of a database.

DBADM (database administration or administrator). A set of privileges conveyed over a database. Sometimes a term used to refer to the person having this set of privileges. (See DBA)

DBMS (database management system). A software system that controls the creation, organization, and modification of a database and access to the data stored within it.

DBRM (database request module). A data set member created by the DB2 (OS/390) precompiler that contains information about SQL statements. DBRMs are used in the bind process.

DDM. See “disk drive module (DDM)”.

DDM group. See “disk drive module group (DDM group)”.

default value. A predetermined value, attribute, or option that is assumed when no other is explicitly specified.

DELETE. An SQL statement that will remove zero, one, or more rows from a table.

delimiter. A character or flag that groups or separates items of data.

device. 1. A mechanical, electrical, or electronic contrivance with a specific purpose. 2. In the AIX operating system, a valuator, button, or the keyboard. Buttons have values of 0 or 1 (up or down); valuators return values in a range, and the keyboard returns ASCII values. 3. The ESA/390 term for the field of an ESCON device-level frame that selects a specific device on a control-unit image.

device adapter (DA). A physical sub-unit of a storage controller that provides the ability to attach to one or more interfaces used to communicate with the associated storage devices.

device address. 1. On OEMI interfaces, the unit address specifies a controller and device pair on the interface. 2. The ESA/390 term for the field of an ESCON device-level frame that selects a specific device on a control-unit image.

device driver. A program that controls a device. A driver acts like a translator between the device and programs that use the device. Each device has its own set of specialized commands that only its driver knows. The driver, therefore, accepts generic commands from a program and translates them into specialized commands for the device.

device interface card. A physical sub-unit of a storage cluster that provides the communication with the attached disk drive modules (DDM).

device number. ESA/390 term for a four-hexadecimal-character identifier, for example 13A0, that you associate with a device to facilitate communication between the program and the host operator. The device number that you associate with a subchannel.

DFSMS/MVS. See data facility storage management subsystem.

direct access storage device (DASD). A mass storage medium on which a computer stores data. See also “disk drive module (DDM)”.

distributed system. A system that is spread out across a network, be it a LAN or WAN.

DNS. See “Domain Name System (DNS)”.

domain. 1. That part of a computer network in which the data processing resources are under common control. 2. In TCP/IP, the naming system used in hierarchical networks.

Domain Name System (DNS). In the Internet suite of protocols, the distributed database system used to map domain names to IP addresses.

double-precision floating point number. A 64-bit approximate representation of a real number.

drawer. A unit that contains multiple DDMs, and provides power, cooling, and related interconnection logic to make the DDMs accessible to attached host systems.

ECKD™. Enhanced CKD. Usually it refers to DASDs (or storage control) which support the ECKD I/O command set. The ECKD I/O command set is a superset of the CKD command set, and it is designed for DASDs which perform non-synchronous I/O operation. Since ECKD does not refer to data format, there are no ECKD format volumes. The ECKD I/O operation is performed against CKD format volumes. See also “extended count-key-data (ECKD)”.

eject. To remove or force out. To move a data cartridge from a storage slot to a cartridge access station.

embedded SQL. SQL statements coded within an application program. See static SQL.

enterprise storage resource management (ESRM). A family of products that provides a single view of storage hardware and software resources within an enterprise.

Enterprise Storage Server. A disk storage system that provides storage sharing for all major types of servers.
enterprise systems architecture/390 (ESA/390). An IBM architecture for mainframe computers and peripherals. Processor systems that follow this architecture include the ES/9000® family.

enterprise systems connection architecture (ESCON). An ESA/390 computer peripheral interface. The I/O interface utilizes ESA/390 logical protocols over a serial interface that configures attached units to a communication fabric.

ESA/390. See “enterprise systems architecture/390 (ESA/390)”.

ESCON. See “enterprise systems connection architecture (ESCON)”.

ESRM. See “enterprise storage resource management (ESRM)”.

ESS. See “Enterprise Storage Server”.

ESS Specialist. The Web-based management interface to the IBM TotalStorage Enterprise Storage Server.

ETL. Enterprise Tape Library.

EUR. IBM European Standards.

event. 1. A representation of a change that occurs to a part. The change enables other interested parts to receive notification when something about the part changes. For example, a push button generates an event by signalling that it has been clicked, which may cause another part to display a window. 2. Any significant change in the state of a system resource, network resource, or network application. An event can be generated for a problem, for the resolution of a problem, or for the successful completion of a task. Examples of events are: the normal starting and stopping of a process, the abnormal termination of a process, and the malfunctioning of a server. 3. The enqueueing or dequeueing of an element. 4. In computer graphics, information generated either asynchronously from a device or as the side-effect of a client request. Events are grouped into types and are not sent to a client by the server unless the client has issued a specific request for information of that type. Events are usually reported relative to a window. 5. An occurrence of significance to a task or system, such as the completion or failure of an operation. 6. In OSI, the occurrence of a well-defined situation. Events may be planned (for example, transactions), or they may be spontaneous or unplanned (for example, faults). An agent reports events to its managers. 7. A data link control command and response passed between adjacent nodes that allows the two nodes to exchange identification and other information necessary for operation over the data link.
GRANT. An SQL statement used to give explicit privileges or privilege sets to one or more users.

granularity. The size of the units under consideration in a context. The term generally refers to the level of detail being considered (column, row, table, tablespace), (employee, job, department, division, company).

HA. See “home address (HA)” or “host adapter (HA)”.

hard disk drive. 1. A storage medium within a storage server used to maintain information that the storage server requires. 2. A mass storage medium for computers that is typically available as a fixed disk (such as the disks used in system units of personal computers or in drives that are external to a personal computer) or a removable cartridge. 3. The entire drive unit including the HDA.

hard drive. A storage medium within a storage server used to maintain information that the storage server requires.

HDA. See “head and disk assembly (HDA)”.

HDD. See “hard disk drive”.

hdisk. An AIX term for storage space.

head and disk assembly (HDA). The portion of an HDD associated with the medium and the read/write head.

hex, hexadecimal. 1. A selection, choice, or condition that has 16 possible values or states. 2. Pertaining to a numeration system with a radix of 16. 3. Pertaining to a system of numbers to the base 16; hexadecimal digits range from 0 to 9 and A through F, where A represents 10 and F represents 15.

hierarchical storage management (HSM). A program that runs on a workstation or file server to provide space management services. HSM can automatically migrate eligible files to storage to maintain specific levels of free space on local file systems. Automatic recalls are made for migrated files when they are accessed. Users can also migrate and recall specific files.

home address (HA). A nine-byte field at the beginning of a track that contains information that identifies the physical track and its association with a cylinder.

host adapter (HA). A physical sub-unit of a storage controller that provides the ability to attach to one or more host I/O interfaces.

host language. A programming language in which you can embed SQL statements.

host name. In the Internet suite of protocols, the name given to a computer. Sometimes, “host name” is used to mean fully qualified domain name; other times, it is used to mean the most specific subname of a fully qualified domain name. For example, if mycomputer.city.company.com is the fully qualified domain name, either mycomputer.city.company.com or mycomputer may be considered the host name.

host processor. A processor that controls all or part of a user application network. In a network, the processing unit in which the data communication access method resides. See also “host system”.

host system. 1. A computer system that is connected to the ESS. The ESS supports both mainframe (System/390 or zSeries) hosts as well as open-systems hosts. System/390 or zSeries hosts are connected to the ESS through ESCON interfaces. Open-system hosts are connected to the ESS by SCSI or fibre-channel interfaces. 2. The data processing system to which a network is connected and with which the system can communicate. 3. The controlling or highest level system in a data communication configuration.

host variable. In an application program, an application variable referenced by embedded SQL statements.

HSM. See “hierarchical storage management (HSM)”.

HTML (HyperText Markup Language). A markup language that is specified by an SGML document type definition (DTD) and is understood by all Web servers. It was designed primarily to support the online display of textual and graphical information that includes hypertext links.

HTTP (HyperText Transfer Protocol). In the Internet suite of protocols, the protocol that is used to transfer and display hypertext documents. It shares rules between Web browsers and servers to provide multimedia documents to the Web browser.

Hypertext markup language (HTML). See “HTML”.

Hypertext Transfer Protocol (HTTP). The primary protocol in use on the Web. See “HTTP”.

I/O. See “input/output (I/O)”.

I/O intensity. An indicator of activity on a subsystem, volume, or data set. The value is the product of the I/O rate times the millisecond response time (MSR).

I/O interface. An interface that you define in order to allow a host to perform read and write operations with its associated peripheral devices.

IBM. International Business Machines.

IMS. Information Management System.

index key. The set of columns in a table used to determine the order of index entries.

initiator. A SCSI term for the part of a host computer that communicates with its attached targets.

inner join. The result of a join operation that includes only the matched rows of both tables being joined. See also join.

input/output (I/O). Pertaining to a device, process, or channel involved in data input, data output, or both.

INSERT. An SQL statement that will add one or more rows to a table.

interface. 1. A shared boundary between two functional units, defined by functional characteristics, signal characteristics, or other characteristics, as appropriate. The concept includes the specification of the connection of two devices having different functions. 2. Hardware, software, or both, that links systems, programs, or devices.

Internet. A wide area network connecting thousands of disparate networks in industry, education, government, and research. The Internet network uses TCP/IP as the standard for transmitting information.

Internet Protocol (IP). A protocol used to route data from its source to its destination in the Internet computing network environment.

intranet. A private network that integrates Internet standards and applications (such as Web browsers) with an organization's existing computer networking infrastructure; a TCP/IP network inside a company firewall.
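The inner join defined above (only the matched rows of both tables) can be illustrated outside SQL with plain Python data structures. The function and sample tables are hypothetical, for illustration only.

```python
def inner_join(left, right, key):
    """Return only the rows whose key values match in both tables,
    mirroring the 'inner join' definition above. Tables are lists of
    dicts; matched row pairs are merged into one dict."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    joined = []
    for lrow in left:
        for rrow in index.get(lrow[key], []):
            joined.append({**lrow, **rrow})
    return joined
```

In SQL terms, rows of `left` with no matching key in `right` (and vice versa) are simply absent from the result, unlike an outer join.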
IP address. A group of four decimal numbers that JDBC. Java Database Connectivity. Part of the
provides a unique address for the computer. Java Development Kit which defines an application
program interface for Java to provide standard SQL
IP. See “Internet Protocol (IP)”. access to databases from Java programs.
IRLM. Internal resource lock manager (OS/390). JDK. See “Java Developers Kit (JDK)”.
isolation level. A bind parameter that controls how join. A relational operation that allows retrieval of
long read locks are held. data from two or more tables based on matching
column values. See also full outer join, inner join, left
ITSO. International Technical Support outer join, outer join, and right outer join.
Organization
KB. Kilobyte (1024 bytes).
JAR file. A JAR (Java ARchive) file collection of
Java classes and other files packaged into a single key field. The second (optional) field of a CKD
file. By using a JAR file, the browser makes only one record. The key length is specified in the count field.
connection to the server rather than several. By The key length determines the field length. The
reducing the number of files that the browser needs program writes the data in the key field. The
to load from the server, you can download and run subsystem uses this data to identify or locate a given
your applet that much faster. JAR files can also be record.
compressed, making the overall file size smaller and
therefore faster to download. key. A column or an ordered collection of columns
identified in the description of a table, index, or
Java applet. Java code that is compiled into a referential constraint.
compact and optimized program.
keyword. In SQL, a name that identifies an option
Java database connectivity. An application used in an SQL statement.
program interface specification for connecting
programs written in Java to the data in popular LAN (local area network). 1. A computer network
database. located on a user's premises within a limited
geographical area. Communication within a local
Java Developers Kit (JDK). The Java Developers area network is not subject to external regulations;
Kit is provided free of charge from the JavaSoft however, communication across the LAN boundary
division of Sun Microsystems. The Java Developers may be subject to some form of regulation. 2. A
…Kit provides a convenient framework for writing and debugging Java code.

Java. A programming language that enables application developers to create object-oriented programs that are very secure and portable across different machine and operating system platforms. Java is also dynamic enough to allow for easy expansion.

LCU. Logical control unit.

LIC. See “licensed internal code (LIC)”.

local. Refers to any object maintained by the local DB2 subsystem or instance. A local table, for example, is a table maintained by the local DB2 subsystem or instance. Contrast with remote.

local area network (LAN). A network in which a set of devices are connected to one another for communication and that can be connected to a larger network.

location. Any place in which data can be stored.

lock. A means of controlling concurrent events or access to data. OS/390 DB2 locking is performed by the IRLM.

lock size. The amount of data controlled by a DB2 lock; the value can be a row, a page, a LOB, a partition, a table, or a table space.

locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data.

logical address. On an ESCON interface, the portion of a source or destination address in a frame used to select a specific channel-subsystem or control-unit image.

logical device. The functions of a logical subsystem with which the host communicates when performing I/O operations to a single addressable unit over an I/O interface. The same device may be accessible over more than one I/O interface.

logical subsystem (LSS). The logical functions of a storage controller that allow one or more host I/O interfaces to access a set of devices. The controller aggregates the devices according to the addressing mechanisms of the associated I/O interfaces. One or more logical subsystems exist on a storage controller. In general, the controller associates a given set of devices with only one logical subsystem.

logical unit number (LUN). 1. The SCSI term for the field in an identifying message that is used to select a logical unit on a given target. 2. A number associated with the target address of a drive within the library. The host uses the number to identify the address of the drive.
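The lock and locking entries describe concurrency control in DB2 terms. As a rough illustration only (Python's threading.Lock standing in for the database lock manager, not DB2's IRLM), serializing access to shared data prevents concurrent updates from being lost:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    """Increment the shared counter n times, one locked update at a time."""
    global counter
    for _ in range(n):
        with lock:          # only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=deposit, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: no updates were lost
```

Without the lock, two threads could read the same old value and overwrite each other's increment — the "inconsistent data" the glossary entry warns about.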
LONG VARCHAR. Same as VARCHAR, except that DB2 UDB determines the maximum length of the column.

LSS. See “logical subsystem (LSS)”.

LUN. See “logical unit number (LUN)”.

mainframe. A computer, usually in a computer center, with extensive capabilities and resources to which other computers may be connected so that they can share facilities.

management scope. The definition of storage resources that TotalStorage Expert requires before performing tasks. This consists of ranges of Internet Protocol (IP) addresses and port numbers that TotalStorage Expert checks for the presence of a TotalStorage Expert agent.

MB. 1. For processor storage and real and virtual memory, 1,048,576 bytes. 2. For disk storage capacity and transmission rates, 1,000,000 bytes.

megabyte (MB). See MB.

method. A subroutine within an object. Methods can be private or public. A private method deals with data manipulation within an object, and the data cannot be accessed by other objects outside the class of this method. A public method allows data within the object to be accessed by objects outside the class of this method.

middleware. The term middleware is used to describe separate products that serve as the glue between two applications. It is, therefore, distinct from import and export features that may be built into one of the applications. Middleware connects two sides of an application and passes data between them.

mid-range systems. A set of multi-use servers with hard disk capacity of 50 GB to 250 GB.

migration. The process of moving unused data to lower-cost storage in order to make space for high-availability data. The data must be recalled to be used again.

mount. The act of making a tape available for processing by a specific tape drive. A mount consists of removing the cartridge from a drive, returning it to its storage slot, collecting another cartridge from a storage slot, moving it to the drive, and loading that cartridge into the drive.

mounted. The state of a tape while it is available for processing by a specific tape device.

multiple virtual storage (MVS). Consisting of MVS/System Product Version 1 and MVS/370 Data Facility Product operating on an IBM System/370™ processor.

MVS. See “multiple virtual storage (MVS)”.

MVS/ESA SP. An IBM licensed program used to control the MVS operating system. MVS/ESA SP together with DFSMS/MVS compose the base MVS/ESA operating environment.

MVS/ESA. An MVS operating system environment that supports ESA/390.

network. 1. A configuration of data processing devices and software connected for information interchange. 2. A group of nodes and the links interconnecting them.

node discovery task. A long-running, asynchronous task responsible for discovering the machines in a customer’s network that are within the TotalStorage Expert management scope. Each such machine also has a service running on it that is recognized by TotalStorage Expert (such as the Tivoli® Storage Manager Web Administration interface).

node group. A definition of nodes in TotalStorage Expert that allows an administrator another level of granularity in reporting. When you assign a name to the group and use that name when generating reports, the reports are restricted to the nodes defined to the group. For example, if you specify group DEPTA for the storage resources associated with a specific department, you can report on just that department.
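The MB entry above gives two conventions: binary (1,048,576 bytes) for memory, decimal (1,000,000 bytes) for disk capacity and transmission rates. A small sketch makes the difference concrete (bytes_to_mb is a hypothetical helper for illustration, not part of TotalStorage Expert):

```python
# The glossary's two meanings of "MB".
MEMORY_MB = 1_048_576   # 2**20 bytes: processor storage, real/virtual memory
DISK_MB = 1_000_000     # 10**6 bytes: disk capacity and transmission rates

def bytes_to_mb(n_bytes, kind="disk"):
    """Convert a byte count to MB using the convention for the given kind."""
    unit = MEMORY_MB if kind == "memory" else DISK_MB
    return n_bytes / unit

same_bytes = 8_589_934_592
print(bytes_to_mb(same_bytes, kind="memory"))  # 8192.0
print(bytes_to_mb(same_bytes, kind="disk"))    # 8589.934592
```

The same byte count therefore reports as different "MB" figures depending on which convention a tool applies — worth remembering when comparing capacity reports.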
optimizer. A component of the SQL compiler that chooses an access plan for a DML statement by modeling the execution cost of many alternative access plans and choosing the one with the minimal estimated cost.

page. A unit of storage within a table space (4 KB, 8 KB, 16 KB, or 32 KB) or index space (4 KB). In a table space, a page contains one or more rows of a table. In a LOB table space, a LOB value can span more than one page, but no more than one LOB value is stored on a page.

parallel access volume (PAV). Created by associating multiple devices of a single control-unit image with a single logical device. Up to 8 device addresses can be assigned to a parallel access volume.

platform. An ambiguous term that may refer to the hardware, the operating system, or the combination of hardware and operating system on which software programs run.

platform-independent. Code that is platform-independent can run on multiple combinations of operating systems and hardware.

primary key. The unique key that best identifies occurrences of the business entity. A table can have only one defined primary key. A unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key.

privilege set. For the installation SYSADM ID, the set of all possible privileges. For any other authorization ID, the set of all privileges recorded for that ID in the DB2 catalog.

privilege. The capability of performing a specific function, sometimes on a specific object. The term includes: 1. EXPLICIT PRIVILEGES, which have names and are held as the result of SQL GRANT and REVOKE statements; for example, the SELECT privilege. 2. IMPLICIT PRIVILEGES, which accompany the ownership of an object, such as the privilege to drop a synonym one owns, or the holding of an authority, such as the privilege of SYSADM authority to terminate any utility job.

promotion. The operation that validates data in cache. If the data image to be promoted is not yet in cache, a staging operation occurs.

RACF®. See “resource access control facility (RACF)”.

rack. A unit that houses the components of a storage subsystem, such as controllers, disk drives, and power.

RAID. See “redundant array of independent disks (RAID)”.

rank. A unit of DDMs managed by a DA. There are two types of rank: RAID and non-RAID. A rank consists of six DDMs or seven DDMs in an SSA loop. A non-RAID rank is called a JBOD, and each DDM is a rank in the JBOD rank. See also “array”.

RDB (relational database). A database that can be perceived as a set of tables and manipulated in accordance with the relational model of data.

RDBM. Relational database manager.

RDBMS (relational database management system). A relational database manager that operates consistently across supported IBM systems.
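The primary key entry above says a table can have only one defined primary key and that its values must be unique and nonnull. The rule can be seen in any relational database; this sketch uses SQLite (via Python's standard sqlite3 module) rather than DB2, and the dept table is purely illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO dept VALUES (10, 'Storage')")

# A second row with the same key value violates the primary-key constraint.
try:
    conn.execute("INSERT INTO dept VALUES (10, 'Reporting')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

print(duplicate_rejected)  # True: only one row per key value is allowed
```

DB2 enforces the same uniqueness at INSERT and UPDATE time; only the error class differs by product.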
rebind. To create a new application plan or package for an application program that has been bound previously. If, for example, you have added an index for a table accessed by your application, you must rebind the application in order to take advantage of that index.

recovery. The process of rebuilding databases after a system failure.

redundant array of independent disks (RAID). A collection of disk drives that operate independently. The IBM Enterprise Storage Server protects all storage with redundant arrays of independent disks (RAID). The ESS attaches serial storage architecture (SSA) disk drive modules (DDMs) in RAID-5 configurations. The implementation of RAID-5 configurations distributes (stripes) parity across all DDMs in the array. See also “array”.

referential constraint. The limiting of a set of foreign key values to a set of unique key values. The requirement that nonnull values of a designated foreign key are valid only if they equal values of the unique key of a designated table.

referential integrity. The automatic enforcement of referential constraints. The condition that exists when all intended references from data in one column of a table to data in another column of the same or a different table are valid. Maintaining referential integrity requires enforcing referential constraints on all LOAD, RECOVER, INSERT, UPDATE, and DELETE operations.

related tables. Tables are related when a row in one table contains a value of a unique key from another table.

relation. See table.

relational connect. DB2 Relational Connect enhances the distributed request functionality included with DB2 Universal Database by allowing users and applications to access data stored in Oracle databases.

relationship. A defined connection between the rows of a table or the rows of two tables. A relationship is the internal representation of a referential constraint.

remote. Refers to any object maintained by a remote DB2 subsystem or instance, that is, by a DB2 subsystem or instance other than the local one.

replication. The process of taking changes that are stored in the database log or journal at the source server and applying them to the target server.

resource access control facility (RACF). An IBM licensed program or a base element of OS/390 that provides for access control by identifying and verifying the users to the system, authorizing access to protected resources, logging the detected unauthorized attempts to enter the system, and logging the detected accesses to protected resources.

resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition.

result set. 1. For OS/390, the set of rows returned to a client application by a stored procedure. 2. The set of rows specified by a SELECT statement.

result table. The set of rows specified by a SELECT statement.

REVOKE. An SQL statement used to remove specific privileges or privilege sets from one or more users.

right outer join. The result of a join operation that includes the matched rows of both tables being joined and preserves the unmatched rows of the second join operand. See also join.

rollback. The process of restoring data changed by SQL statements to the state at its last commit point. All locks are freed. Contrast with commit.

row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table.
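The referential constraint and rollback entries above can be illustrated together: an insert whose foreign key has no matching unique-key value is rejected, and rollback returns the data to its last commit point. SQLite again stands in for DB2 here, and the dept/emp tables are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY,"
             " deptno INTEGER REFERENCES dept(deptno))")
conn.execute("INSERT INTO dept VALUES (10)")
conn.execute("INSERT INTO emp VALUES (1, 10)")   # 10 exists in dept: valid
conn.commit()

try:
    conn.execute("INSERT INTO emp VALUES (2, 99)")  # 99 has no parent row
except sqlite3.IntegrityError:
    conn.rollback()  # restore the data to its last commit point

emp_count = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(emp_count)  # 1: the invalid insert left no trace
```

In DB2 terms, the failed INSERT and the rollback form one unit of work whose locks are freed when the rollback completes.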
software. All or part of the programs, procedures, rules, and associated documentation of a computer processing system. Software is an intellectual creation that is independent of the medium on which it is recorded.

Solaris. The Sun Microsystems UNIX operating system.

spare. A disk drive that is used to receive data from a device that has experienced a failure requiring disruptive service. A spare can be pre-designated to allow automatic dynamic sparing. Any data on a disk drive that you use as a spare is destroyed by the dynamic sparing copy process.

special register. A storage area that is defined for a process by DB2 and is used to store information that can be referenced in SQL statements. Examples of special registers are USER, CURRENT DATE, and CURRENT TIME.

SQL (Structured Query Language). A standardized language for defining and manipulating data in a relational database.

SQL/DS. SQL/Data System. Also known as DB2 for VSE & VM.

SSA adapter. The adapter that connects stored data on devices, such as disk drive modules, for access and control by the storage server. See also serial storage architecture.

SSA. See “serial storage architecture (SSA)”.

SSID. See “subsystem identification (SSID)”.

stage. 1. The process of reading data into cache from a disk drive module. 2. The action of moving data from an offline or low-priority device back to an online or higher-priority device, usually on demand of the system or on request of the user.

staging. The data transfer operation from disk to cache.

storage administrator. A person in the data processing center who is responsible for defining, carrying out, and maintaining storage management policies.

storage area network (SAN). A high-speed subnetwork of shared storage devices. A SAN’s architecture makes all storage devices available to all servers on a LAN or WAN. As more storage devices are added, they too become accessible from any server in the larger network. Because stored data does not reside directly on a network’s servers, server power is used for applications, and network capacity is released to the end user.

storage controller. A physical unit that provides an interface between one or more storage devices and a host computer by providing the function of one or more logical subsystems. The storage controller may provide functions that are not provided by the storage device. The storage controller has one or more clusters.

storage device. A physical unit that provides a mechanism to store data on a given medium such that it can be subsequently retrieved. See also “disk drive module (DDM)”.

storage facility. 1. A physical unit that consists of a storage controller integrated with one or more storage devices to provide storage capability to a host computer. 2. A storage server and its attached storage devices.

storage management subsystem (SMS). A DFSMS/MVS facility used to automate and centralize the management of storage. Using SMS, a storage administrator describes the following to the system:
- Data allocation characteristics
- Performance and availability goals
- Backup and retention requirements
- Storage requirements
This information is described to the system through data class, storage class, management class, storage group, and ACS routine definitions.
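The SQL entry above calls it a standardized language for defining and manipulating data in a relational database. A minimal end-to-end example — define a table, insert rows, query them — using SQLite in place of DB2 UDB (the volume table and its values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volume (volser TEXT, capacity_gb REAL)")   # define
conn.executemany("INSERT INTO volume VALUES (?, ?)",
                 [("VOL001", 8.5), ("VOL002", 17.0)])                 # manipulate
rows = conn.execute(
    "SELECT volser FROM volume WHERE capacity_gb > 10").fetchall()   # query
print(rows)  # [('VOL002',)]
```

The CREATE/INSERT/SELECT statements shown are the same pattern the book's customized ESS reports build on, just issued against DB2 instead of SQLite.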
UDB. Universal Database.

Uniform Resource Locator (URL). The address convention that indicates the location of an item on the World Wide Web. It includes the protocol, followed by the fully-qualified host name, and the request. The server typically maps the request portion of the URL to a path and file name. For example: http://www.ibm.com/index.html.

unique index. An index that ensures that no identical key values are stored in a table.

unit address. The ESA/390 term for the address associated with a device on a given controller. On ESCON interfaces, the unit address is the same as the device address.

UNIX operating system. An operating system created by Bell Laboratories that features, among other things, multiprogramming in a multi-user environment.

UOW (unit of work). A recoverable sequence of operations within an application process. At any time, an application process is a single unit of work, but the life of an application process can involve many units of work as a result of commit or rollback operations. In a multi-site update operation, a single unit of work can include several units of recovery.

UPDATE. An SQL statement that changes column values in zero, one, or more table rows.

URL. See “Uniform Resource Locator (URL)”.

User ID. The unique string of characters that identifies any person or device (the user) that may issue or receive commands and messages to or from the information processing system.

value. 1. The smallest unit of data manipulated in SQL. 2. A specific occurrence of an attribute. 3. A quantity assigned to a constant, a variable, a parameter, or a symbol.

VARCHAR (varying-length string). A character or graphic string whose length varies within set limits. Contrast with fixed-length string.

VARIABLE. A data element that specifies a value that can be changed. Contrast with constant.

Viador Sage. A Java-based query, reporting, and charting tool for Web access to relational databases.

view. An alternative representation of data from one or more tables. A view can include all or some of the columns contained in the tables on which it is defined.

virtual tape server. A virtual tape server (VTS) is a new breed of storage solution that combines a high-speed disk cache with tape automation, tape drives, and intelligent storage management software running on a server.

VOLSER. See “volume serial number”.

volume serial number. A number assigned to a tape cartridge by the system when it prepares the cartridge for use.

volume. Logical storage space on disk. A volume is known as a drive on Windows 2000 SE platforms and as a file system on UNIX platforms.

VTS. See virtual tape server.

WAN. Wide area network.

Web. The World Wide Web. The network of HTTP servers that contain programs and files, such as hypertext documents that contain links to other documents on HTTP servers.

wide area network (WAN). A network that provides communication services to a geographic area larger than that served by a local area network or a metropolitan area network, and that may use or provide public communication facilities.

Windows 2000 SE. A Microsoft distributed operating system that is used for client/server systems.

World Wide Web. A global network of servers containing programs and files, accessible by the public.
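The Uniform Resource Locator entry above describes three parts — the protocol, the fully-qualified host name, and the request. Python's standard urllib.parse splits the glossary's own example URL along exactly those lines:

```python
from urllib.parse import urlparse

url = "http://www.ibm.com/index.html"  # the glossary's example
parts = urlparse(url)

print(parts.scheme)  # 'http'        -> the protocol
print(parts.netloc)  # 'www.ibm.com' -> the fully-qualified host name
print(parts.path)    # '/index.html' -> the request, mapped to a path/file name
```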
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 350. Note that some of the documents referenced here may be available
in softcopy only:
TotalStorage Expert Hands-On Usage Guide, SG24-6102
Other publications
These publications are also relevant as further information sources:
IBM TotalStorage Expert Installation Guide, GC26-7436
IBM DB2 Universal Database SQL Reference Volume 1, SC09-2974
IBM DB2 Universal Database SQL Reference Volume 2, SC09-2975
Online resources
These Web sites and URLs are also relevant as further information sources:
TotalStorage Demo site:
http://storwatch.dfw.ibm.com/index2.html
QMF Web sites:
http://www.ibm.com/software/data/qmf/
http://www-3.ibm.com/software/data/qmf/reporter/june98/downloads.html
http://www.rocketsoftware.com/qmf/
LotusScript:
http://www.ibm.com/software/data/db2/db2lotus/db2lscpt.htm
DB2 information management software home page:
http://www.ibm.com/software/data/
DB2 Intelligent Miner
Q
QMF 111
QMF for Windows 110–111
query 35
Query Management Facility (QMF) 82
query optimization 13

R
RAID-5 154
rank 154
ranked reports 145
Ray Boyce 13
Redbooks Web site 350
   Contact us xv
relational database management system (RDBMS) 12–13, 16
relational model 13
replicating data 15
reporting tools
   customer report example 171
   historical Volume ID-Host report 180
   SQL commands 168
Resource Management Facility (RMF) 4
result table 35
REVOKE statement 59
REXX 111
right outer join 100
root table 35
row 34
row locking 17
row-level locking 13
Run-Time Client 31–32

S
S/390 capacity report 131
saving the SQL statement 106
scalar functions 70–71
schema 33
Script Center 31
SCSI 2, 134, 140
SCSI connections 138
secure sockets layer (SSL) 10
security 16
SELECT clause 68
SELECT statement 67
Simple Network Management Protocol (SNMP) 5
SMALLINT 51
SNA 30
SNMP alerts 11
Spreadsheet 15
SQL 35, 46
SQL Assist 94–95, 104, 107, 171, 176
SQL CREATE statement 47
SQL functions 70
SQL prototyping 28
SQL SELECT statement 67
Storage Server Performance Statistics 146
StorWatch Expert 4
Structured Query Language 1, 3, 17
Structured Query Language (SQL) 12, 45, 82
SUM 70, 72
SWDATA 20, 26, 47, 58, 84, 95, 108
SWExport 38, 72, 182
SWImport 38, 182
SYSADM 25, 27, 60
system administrator 25–26, 60
System R 13

T
T_TASK_TIME 53
table 34, 46
tablespace 50
TALARMDISPLAY 44, 307
TCP/IP 30
TDATA 44, 306
TDATASRC 44, 305
testing the SQL statement 106
TGPERF 44, 315
three-tier architecture 30
threshold exceptions 146
Thresholds Summary 146
TIME 52
Time Sharing Option (TSO) 19
TIMESTAMP 52
TotalStorage Demo Web site 2
TotalStorage ESS Expert 4
TotalStorage ETL Expert 4
TotalStorage Expert 2–4, 9, 12, 20, 26, 35, 38, 45–47, 52, 54, 57, 59, 61, 65, 71–73, 79–81, 93–95, 107–108, 111, 113, 117, 127–131, 136, 138, 140–142, 144, 149–150, 152, 157–160, 168, 180, 183, 185, 193, 198
TotalStorage Expert database 34
tracing 16–17
TREPINFO 44, 321
TSASSET 44, 307
VSXDSTYP 42, 274
VTHRESHOLD 37, 44, 304
VTS 11
VTSEQ 37, 39, 252
VTSTATM 43, 298
VTSTATS 43, 298
VVOLX 36, 40, 261
W
Web services 13–14
WebSphere 12
WorkSheet Format 168
WSF 198
WSF format 171
X
XML 13
XQuery 13
Get familiar with SQL and the tools you need for reporting

See how to create built-in reports

Learn to produce customized reports

This IBM Redbook provides the basic knowledge, tools, and samples to show you how to extract report data from built-in reports, and create customized reports from the IBM TotalStorage ESS Expert application and database. This redbook examines the fundamentals of the DB2 Universal Database and the Structured Query Language, which is used as the basis for sample reports, and the methodology to create customized ESS reports based on your enterprise requirements.

This book complements the redbook TotalStorage Expert Hands-On Usage Guide, SG24-6102, by providing comprehensive documentation and tools to most effectively use the IBM TotalStorage Expert for enterprise storage resource management.

This book has been written with a wide range of end-user knowledge and capabilities in mind. It addresses DB2 and SQL beginners just getting started, and also contains information and reference materials for those much more comfortable with the capabilities of the TotalStorage Expert data management components. We strive to provide enough information that most users will be able to derive the greatest value from the information presented.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.
For more information:
ibm.com/redbooks