
Table of Contents

1. Logical Architecture
1.1 Unit Objectives
1.2 Introduction
1.3 Database
1.3.1 Storage Group
1.3.2 Address Space
1.3.3 Exercise
1.4 Table Space
1.4.1 Simple Table Space
1.4.2 Partitioned Table Space
1.4.3 Segmented Table Space
1.4.4 Large Object (LOB) Table Spaces
1.4.5 Exercise
1.5 Table
1.6 View
1.7 Index
1.7.1 Partitioning Index
1.7.2 Non-partitioning Index
1.7.3 Exercise
1.8 DB2 Packages and Plans
1.8.1 Package Versioning
1.8.2 Exercise
1.9 Aliases and Synonyms
1.10 Schema
1.10.1 Exercise
1.11 LOB
1.12 System Catalog Tables
1.13 Catalog Contention
1.13.1 Contention within Table Space SYSDBASE
1.13.2 Contention Independent of Databases
1.14 Database Directories
1.14.1 Exercise
1.15 Review Questions
1.16 Reference
2. Triggers
2.1 Unit Objectives
2.2 Introduction
2.3 Definitions
2.3.1 Creating and Adding Triggers
2.3.2 Description
2.4 Triggers vs Table Check Constraints
2.5 Triggers and Declarative RI
2.6 Performance Issues
2.7 Monitoring and Controlling Triggers
2.7.1 Catalog Information
2.7.2 Exercise
2.8 User Defined Functions (UDF)
2.8.1 Sourced Scalar Functions
2.8.2 External Functions
2.8.3 UDF Restrictions
2.8.4 UDF Performance Considerations
2.8.5 Monitoring and Controlling UDFs
2.8.6 Stopping UDFs
2.8.7 Exercise
2.9 Review Questions
2.10 Reference
3. Physical Architecture
3.1 Unit Objectives
3.2 Introduction
3.3 Boot Strap Dataset
3.3.1 Introduction
3.3.2 Recovery of BSDS
3.3.3 Naming Convention
3.3.4 Exercise
3.4 Active and Archive Logs
3.4.1 Introduction
3.4.2 Unit of Recovery
3.4.3 Rolling Back Work
3.4.4 Creation of Log Records
3.4.5 Retrieval of Log Records
3.4.6 Writing the Active Log
3.4.7 Writing the Archive Log
3.4.8 Triggering Log Offload
3.4.9 The Offloading Process
3.4.10 Archive Log Datasets
3.4.11 Archiving the Log
3.4.12 Naming Convention
3.4.13 Performance Considerations
3.4.14 Exercise
3.5 DSNZPARMs
3.5.1 IRLMRWT
3.5.2 RECURHL
3.5.3 XLKUPDLT
3.5.4 NUMLKTS
3.5.5 NUMLKUS
3.5.6 LOGLOAD
3.5.7 Other Zparms
3.5.8 Exercise
3.6 Storage Groups
3.6.1 Retrieving Catalog Info about DB2 Storage Groups
3.6.2 Designing Storage Groups and Managing DB2 Data
3.6.3 Managing Your Own DB2 Data Sets
3.6.4 Requirements for Your Own Data Sets
3.6.5 Implementing Your Storage Groups
3.6.6 Exercise
3.7 Review Questions
3.8 Reference
4. Data Services
4.1 Unit Objectives
4.2 Introduction
4.3 Buffer Pools
4.3.1 Introduction
4.3.2 Tuning Buffer Pools
4.3.3 Write Operations
4.3.4 Exercise
4.4 EDM Pools
4.4.1 Introduction
4.4.2 Tuning the EDM Pool
4.4.3 Exercise
4.5 RID Pools
4.5.1 Introduction
4.5.2 Increasing RID Pool Size
4.5.3 Exercise
4.6 Sort Pools
4.6.1 Introduction
4.6.2 Controlling Sort Pool Size and Sort Processing
4.6.3 Understanding How Sort Work Files Are Allocated
4.6.4 Factors That Influence Sort Processing
4.6.5 Exercise
4.7 DB2 Directory
4.7.1 Introduction
4.7.2 Contents for This Directory
4.7.3 Exercise
4.8 DB2 Catalog Tables
4.8.1 Introduction
4.8.2 Examples
4.8.3 Exercise
4.9 Review Questions
4.10 Reference
5. Locking, IRLM and Concurrency
5.1 Unit Objectives
5.2 Introduction
5.3 IRLM – Controlling the IRLM
5.3.1 Starting the IRLM
5.3.2 Monitoring the IRLM Connection
5.3.3 Stopping the IRLM
5.4 Lock Compatibility
5.4.1 Modes of Page and Row Locks
5.4.2 Modes of Table, Partition, and Table Space Locks
5.4.3 Exercise
5.5 Lock Conversion / Lock Promotion
5.6 Suspension
5.7 Lock Duration
5.7.1 ACQUIRE (ALLOCATE) vs ACQUIRE (USE)
5.7.2 RELEASE (COMMIT) vs RELEASE (DEALLOCATE)
5.7.3 ISOLATION
5.7.4 CURRENTDATA
5.7.5 Exercise
5.8 Locksize
5.8.1 Tablespace Lock
5.8.2 Table Lock
5.8.3 Page Lock
5.8.4 Row Lock
5.8.5 Page Lock vs Row Lock
5.8.6 Row Level Locking vs Maxrows = 1
5.8.7 Lockmax
5.8.8 Lock Escalation
5.8.9 Exercise
5.9 Claims and Drains
5.9.1 Claims
5.9.2 Drains
5.9.3 Compatibility Rules for Claims and Drains
5.10 Lock Tuning
5.10.1 Deadlock
5.10.2 Resource Timeout
5.10.3 Idle Thread Timeout
5.10.4 Utility Timeout
5.10.5 Lock Wait Time
5.10.6 Monitoring Locking
5.10.7 Exercise
5.11 Concurrency
5.11.1 Introduction to Concurrency
5.11.2 ISOLATION Level
5.11.3 Concurrency vs Lock Size
5.11.4 Deadlock
5.11.5 Lock Compatibility
5.11.6 Lock Conversion
5.11.7 Lock Escalation
5.11.8 Basic Recommendations to Promote Concurrency
5.11.9 Exercise
5.12 DB2 Subsystem Object Locking
5.12.1 Locks on the DB2 Catalog and Directory
5.12.2 Locks on Skeleton Cursor Tables (SKCT)
5.12.3 Locks on Database Descriptors
5.13 Review Questions
5.14 Reference
6. Dynamic SQL
6.1 Unit Objectives
6.2 Introduction
6.3 Coding Dynamic SQL in Application Program
6.3.1 Choosing between Static and Dynamic SQL
6.3.2 Performance of Static and Dynamic SQL
6.3.3 Caching Dynamic SQL Statements and KEEPDYNAMIC
6.3.4 Dynamic SQL with Resource Limit Facility
6.3.5 Dynamic SQL for Non-SELECT Statements
6.3.6 Dynamic SQL for Fixed-List SELECT Statements
6.3.7 Dynamic SQL for Varying-List SELECT Statements
6.3.8 Exercise
6.4 Review Questions
6.5 Reference
7. Stored Procedures
7.1 Unit Objectives
7.2 Introduction
7.2.1 What Are Stored Procedures?
7.2.2 When Do We Use Them?
7.2.3 Advantages
7.2.4 Disadvantages
7.2.5 Types of Stored Procedures
7.2.6 Exercise
7.3 SP Related Terminology
7.4 Example of Stored Procedure
7.4.1 A Few Salient Points about the Stored Procedure
7.5 Running Your SP (the Calling Program)
7.5.1 When the Result Set Concept Is Used
7.5.2 When the Result Set Concept Is Not Used
7.5.3 Exercise
7.6 SP Address Space
7.6.1 Changing the RACF Started Procedures Table
7.6.2 Guidelines for Effective Use of Address Space
7.6.3 Dynamically Extending Load Libraries
7.6.4 Exercise
7.7 SP Runtime Environments
7.8 SP Builder
7.8.1 Overview of DB2 Stored Procedure Builder
7.8.2 How Has DB2 SP Builder Changed the Process of Creating SPs?
7.8.3 Exercise
7.9 Performance Considerations
7.9.1 Reentrant Code
7.9.2 Fenced and Non-fenced Procedures
7.9.3 Limiting Resources Used
7.9.4 Workload Manager
7.9.5 CICS EXCI
7.9.6 Exercise
7.10 DB2-Delivered SPs
7.10.1 DSNWZP
7.10.2 DSNWSPM
7.10.3 DSNACCMG
7.10.4 DSNACCAV
7.10.5 DSNUTILS
7.11 Do's and Don'ts
7.11.1 Do's
7.11.2 Don'ts
7.12 Review Questions
7.13 Reference
Glossary
Appendix A - List of DSNZPARMs
Appendix B - Package Bind Parameters
Appendix C - Lock Compatibility Matrix
Appendix D - Claims and Drains Compatibility Matrices
Appendix E - DB2 Address Space IDs and Associated RACF User IDs and Group Names
Appendix F - Sample Job to Reassemble the RACF Started Procedures
Appendix G - Comparing WLM-Established and DB2-Established Stored Procedures
Appendix H - Sample Stored Procedure

UNIT - I
1. Logical Architecture
1.1 Unit Objectives

This unit provides an overview of some of the key DB2 database objects.

1.2 Introduction

DB2 is a relational database management system. In a relational database, data is
perceived to exist in one or more tables, each containing a specific number of
columns and a number of unordered rows. Each column in a row is related in some
way to the other columns.

A DB2 database involves more than just a collection of tables. It also includes table
spaces, storage groups, views, indexes, and other items. These are all collectively
referred to as DB2 structures.

Figure 1: DB2 Data Objects Hierarchy (a storage group and database contain table
spaces; a table space contains tables; tables have columns and can have indexes,
views, aliases, and synonyms defined on them)

1.3 Database

In DB2, a database is a set of DB2 structures. When you define a DB2 database, you
give a name to an eventual collection of tables and associated indexes, as well as to
the table spaces in which they reside. A single database, for example, can contain all

the data associated with one application or with a group of related applications.
Collecting that data into one database allows you to start or stop access to all the
data in one operation and grant authorization for access to all the data as a single
unit.

If you create a table space or a table and do not specify a database, the table or
table space is created in the default database, DSNDB04, which is defined for you at
installation time. The default database's DB2 storage group is SYSDEFLT, and you
can specify its default buffer pool at installation time. All users initially have the
authority to create table spaces and tables in database DSNDB04; the system
administrator can revoke those privileges and grant them only to particular users as
necessary.
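
For illustration, a database can also be created explicitly and assigned its own
storage group and buffer pool. A minimal sketch (the names DSN8D61A, DSN8G610,
and BP1 are illustrative, not objects that necessarily exist on your system):

   CREATE DATABASE DSN8D61A
      STOGROUP DSN8G610
      BUFFERPOOL BP1;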

Figure 2: Database and Table Space structure

1.3.1 Storage Group


A DB2 storage group is a set of volumes on direct access storage devices (DASD).
The volumes hold the data sets in which tables and indexes are actually stored. The
description of a storage group names the group and identifies its volumes and the
VSAM catalog that records the data sets. Storage group SYSDEFLT, the default
storage group, is created when you install DB2.
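
A minimal sketch of defining a storage group (the volume serials and VCAT name
are illustrative):

   CREATE STOGROUP DSN8G610
      VOLUMES (DSNV01, DSNV02)
      VCAT DSNC610;

DB2 then allocates and extends the underlying VSAM data sets on those volumes as
the tables and indexes assigned to the storage group grow.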

1.3.2 Address Space


An address space is a range of virtual storage pages identified by a number (ASID)
and a collection of segment and page tables that map the virtual pages to real
pages of the computer's memory.

DB2 requires several different address spaces for the following purposes:

 One for database services, DSN1DBM1, which manipulates most of the
structures in user-created databases.

 One for system services, DSN1MSTR, which performs a variety of
system-related functions.

 One for the distributed data facility, DSN1DIST, which provides support for
remote requests.

 One for the internal resource lock manager (IRLM), IRLMPROC, which
controls DB2 locking.

 One for DB2-established stored procedures, DSN1SPAS, which provides an
isolated execution environment for user-written SQL programs at a DB2
server.

 Zero to many for WLM-established stored procedures, which are handled in
order of priority and isolated from stored procedures running in other
address spaces.

Figure 3: DB2 Address Space

1.3.3 Exercise

Questions:
1. When a user creates a table space or table without specifying a database, in
which database is it created by default?
2. What is the default storage group for a DB2 installation?

Answers:
1. DSNDB04
2. SYSDEFLT

1.4 Table Space

A table space is one or more data sets in which one or more tables are stored. The
data sets are VSAM linear data sets (LDSs). Table spaces are divided into
equal-sized units, called pages, which

are written to or read from DASD in one operation. You can specify page sizes for the
data; the default page size is 4 KB.

A table space can be either system-managed space (SMS) or database-managed
space (DMS). For an SMS table space, each container is a directory in the file space
of the operating system, and the operating system's file manager controls the
storage space. For a DMS table space, each container is either a fixed-size
pre-allocated file or a physical device such as a disk, and the database manager
controls the storage space.

Tables containing user data exist in regular table spaces. The default user table
space is called USERSPACE1. Indexes are also stored in regular table spaces. The
system catalog tables exist in a regular table space. The default system catalog table
space is called SYSCATSPACE.

When you create a table space, you can specify the database to which the table
space belongs and the storage group it uses. If you do not specify the database and
storage group, DB2 assigns the table space to the default database and the default
storage group. You also determine what kind of table space is created: simple,
segmented, partitioned, or a table space for LOBs.
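
A hedged sketch of creating a segmented table space in an explicit database and
storage group (all names and space quantities are illustrative; leaving out both
SEGSIZE and NUMPARTS would create a simple table space instead):

   CREATE TABLESPACE DSN8S61D
      IN DSN8D61A
      USING STOGROUP DSN8G610
         PRIQTY 52
         SECQTY 20
      SEGSIZE 32
      BUFFERPOOL BP0;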

1.4.1 Simple Table Space


A simple table space can contain more than one table, but the rows of different
tables are not kept separate (unlike segmented table spaces).

1.4.2 Partitioned Table Space


With a partitioned table space, the available space is divided into separate units of
storage called partitions, each containing one data set of one table. (That is, you
cannot store more than one table in a partitioned table space.) You assign the
number of partitions (from 1 to 254), and you can assign partitions independently to
different storage groups.
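
A sketch under the same naming assumptions: NUMPARTS fixes the number of
partitions, and individual partitions can optionally be placed on their own storage
groups:

   CREATE TABLESPACE SALESTS
      IN DSN8D61A
      USING STOGROUP DSN8G610
      NUMPARTS 4
         (PART 1 USING STOGROUP SGEAST,
          PART 4 USING STOGROUP SGWEST)
      BUFFERPOOL BP0;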

1.4.3 Segmented Table Space


With segmented table spaces, the available space is divided into groups of pages
called segments, each the same size. Each segment contains rows from only one
table. To search all the rows for one table, it is not necessary to scan the entire table
space, but only the segments that contain that table.

1.4.4 Large Object (LOB) Table Spaces


A LOB table space is required to hold large object data such as graphics, video, or
very large text strings. A LOB table space is always associated with the table space
that contains the logical LOB column values. The table space that contains the table
with the LOB columns is called, in this context, the base table space.

1.4.5 Exercise
Questions:
1. What is the default system catalog table space?

2. What is the default user table space?

Answers:
1. The default system catalog table space is SYSCATSPACE.
2. USERSPACE1

1.5 Table

A relational database presents data as a collection of tables. A table consists of
data logically arranged in columns and rows. All database and table data is assigned
to table spaces. The data in a table is logically related, and relationships can be
defined between tables. Data can be viewed and manipulated based on mathematical
principles and operations called relations.

Table data is accessed through Structured Query Language (SQL), a standardized
language for defining and manipulating data in a relational database. Queries are
embedded in application programs to retrieve data from a database.
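
A minimal sketch of a table definition placed in an explicit database and table
space (the names echo the DB2 sample database but are illustrative):

   CREATE TABLE DSN8610.DEPT
      (DEPTNO   CHAR(3)     NOT NULL,
       DEPTNAME VARCHAR(36) NOT NULL,
       MGRNO    CHAR(6),
       PRIMARY KEY (DEPTNO))
      IN DSN8D61A.DSN8S61D;

Because of the PRIMARY KEY clause, DB2 treats the definition as incomplete until a
unique index is created on DEPTNO.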

1.6 View

A view is an efficient way of representing data without needing to maintain it
separately. A view is not an actual table and requires no permanent storage; a
"virtual table" is created when the view is referenced.

A view can include all or some of the columns or rows contained in the tables on
which it is based. For example, you can join a department table and an employee
table in a view, so that you can list all employees in a particular department.
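
That department/employee example might look like the following sketch, assuming
sample-style tables in which WORKDEPT holds an employee's department number:

   CREATE VIEW DSN8610.VDEPTEMP AS
      SELECT E.EMPNO, E.LASTNAME, D.DEPTNAME
      FROM DSN8610.EMP E, DSN8610.DEPT D
      WHERE E.WORKDEPT = D.DEPTNO;

   SELECT * FROM DSN8610.VDEPTEMP
      WHERE DEPTNAME = 'OPERATIONS';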

1.7 Index

An index is an ordered set of pointers to the data in a DB2 table. The index is stored
separately from the table. Each index is based on the values of data in one or more
columns of a table. After you create an index, DB2 maintains it, but you can perform
maintenance such as reorganizing or recovering it as necessary. The main purposes
of indexes are:

 To improve performance. In many cases, access to data is faster with an


index than without. If DB2 can use an ordered index to find a row in a
table, it is likely to be much faster than scanning an entire table to find
the row.
 To ensure that a row is unique. A table with a unique index cannot have
two rows with the same values in the column or columns that form the
index key. For example, if payroll applications use employee numbers, it
is essential that there not be two employees with the same employee
number.

Except for changes in performance, users of the table are unaware that an index is
being used. DB2 decides whether or not to use the index to access the table.
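
A sketch that serves both purposes at once, enforcing one row per employee number
while giving DB2 an ordered access path (the names are illustrative):

   CREATE UNIQUE INDEX DSN8610.XEMP1
      ON DSN8610.EMP (EMPNO ASC)
      USING STOGROUP DSN8G610
      BUFFERPOOL BP0;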

Indexes take up physical storage in what are called index spaces. Each index
occupies its own index space. The maximum size of an index space depends on the
type of index (partitioning or non-partitioning) and the type of table on which the
index is created. When you create an index, an index space is automatically defined
in the same database as the table.

The physical structure of an index depends on whether the index is partitioning,
which is the focus of this section. However, when you calculate storage for an index,
the more important issue is whether the index is unique or non-unique (that is,
whether the index can contain duplicate values). And when considering the order in
which rows are stored, you need to consider which index is the clustering index.

1.7.1 Partitioning Index


Use a partitioning index to tell DB2 how to divide data in a partitioned table space
among the partitions. For example, you can apportion data by last names, maybe
using one partition for each letter of the alphabet. Your choice of a partitioning
scheme is based on how an application accesses data, how much data you have, and
how large you expect the total amount of data to grow. It is always a good idea to
plan for growth when determining a partitioning scheme. You can change your
partitioning scheme by altering the limit keys of the partitioning index.
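
As a hedged sketch of the last-name scheme just described (the ACCT table and the
four-way split are invented; real limit keys would come from the data's actual
distribution):

   CREATE INDEX DSN8610.XACCT
      ON DSN8610.ACCT (LASTNAME)
      USING STOGROUP DSN8G610
      CLUSTER
         (PART 1 VALUES ('F'),
          PART 2 VALUES ('M'),
          PART 3 VALUES ('S'),
          PART 4 VALUES ('Z'));

Altering a limit key later takes the form ALTER INDEX ... PART n VALUES (...).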

1.7.2 Non-partitioning Index


When used with partitioned table spaces, a non-partitioning index is any index on a
partitioned table space that is not the partitioning index. That is the context in which
the term non-partitioning index is most often used in DB2. For non-partitioned table
spaces, any index on a table in that table space is, by definition, a non-partitioning
index.

1.7.3 Exercise

Questions:
1. What is the difference between a table and a view?
2. List the two main reasons for creating an index on a table.

Answers:
1. A view is an efficient way of representing data without needing to maintain it
separately. A view is not an actual table and requires no permanent storage.
2. To improve performance, and to ensure that rows are unique.

1.8 DB2 Packages and Plans

The first step in DB2 program preparation is writing a program that contains
embedded SQL statements. You could write the program to run on the system on
which the DB2 for OS/390 subsystem resides, or to access the DB2 database from a
remote client via distributed relational database architecture (DRDA).

Once you've written the program, the DB2 precompiler processes it and generates
two outputs:

I. A modified program source module. The precompiler comments out each of
the program's embedded SQL statements and inserts a call to DB2 for each
statement.

II. A database request module (DBRM). A DBRM contains the SQL statements
found in the program source.

The precompiler places a unique identifier, called a consistency token, into each of
these outputs.

Following the precompile process, you compile and link-edit the modified source
program into an executable load module and bind the associated DBRM. In the DB2
for OS/390 bind process, such tasks as access path selection (optimization), access
authorization, and database object validation are performed. The output of the bind
process is a control structure that DB2 will use to execute the SQL statements when
the application program is run. The control structure will either be part of a plan (if
the DBRM is bound directly into a plan) or contained within a package that will be
executed via a plan.

If you use the package bind process, you have to bind the package into what is
called a collection. You can create a collection by binding a package into it.

Of course, even if you're using packages, you still need to bind one or more plans if
the program in question will run on the local or a remote DB2 for OS/390 subsystem.
Programs that run on other remote clients and access DB2 via DRDA use a default
plan called DISTSERV. You can execute a particular package using a plan if the
collection into which you've bound the package appears in what is called the plan's
package list (a list of one or more collections specified via the PKLIST option of the
BIND PLAN command). The program, in turn, invokes the plan through a
specification in the resource control table (or a DB2ENTRY if you're using resource
definition online) for a CICS transaction, via the application program load module
name for an IMS transaction, or with a control statement in the job control language
(JCL) for a batch job.
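
To make the flow concrete, here is a hedged sketch of the bind steps as DSN
subcommands (the subsystem, collection, package, and plan names are all invented
for illustration):

   DSN SYSTEM(DSN1)
   BIND PACKAGE(COLL1) MEMBER(PROGABC) ACTION(REPLACE) VALIDATE(BIND)
   BIND PLAN(PLANABC) PKLIST(COLL1.*) ACTION(REPLACE)
   END

The DBRM member PROGABC is bound into collection COLL1, and the plan's PKLIST
entry COLL1.* makes every package in that collection eligible at run time.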

When you execute the application program, each call to DB2 directs the database
manager to execute the corresponding prebound SQL statement in the package
associated with the program. (Recall that the precompiler comments out SQL
statements in the source program and adds calls to DB2.) DB2 searches for the
package in one or more collections using as search criteria the package name (same
as the program name) and the consistency token accompanying the call. (Recall that
the consistency token, generated at precompile time, is carried in both the
application program and the related package.) When a match is found, the statement
is executed and control passes back to the application program until the next DB2
call is issued.

Given this database and application architecture, accessing the right data means
executing the right package; in other words, the package bound into the collection
associated with the database segment of interest. Given a plan with a multi collection
package list, how does DB2 know where to look for the package when executing an

SQL statement? The search process is as follows (assuming that programs are not
bound directly into the plan):

 DB2 first checks to see whether a special register called CURRENT


PACKAGESET (one such register is maintained for each DB2 thread)
contains a nonblank value. If it does, DB2 will search for the package in
the collection specified (and will find the package, assuming - as is
probably true - that each segment-related collection contains the same set
of packages, distinguished only by the high-level qualifier specified at bind
time). The value of CURRENT PACKAGESET is blank at the beginning of a
transaction or batch job and can be updated by way of the SQL statement
SET CURRENT PACKAGESET. Thus, if a program needs to access data in
the REGION2 database segment, it can do so by issuing the statement SET
CURRENT PACKAGESET = 'REGION2'.
 If the value of CURRENT PACKAGESET is blank, DB2 will check to see
whether the package is already allocated to the thread. This could be the
case if, for example, the thread is reused by multiple transactions (an
example being a CICS-DB2 protected thread) and the package in question
was bound with RELEASE (DEALLOCATE). If the package is already
allocated to the thread, DB2 will use that package.
 If the value of CURRENT PACKAGESET is blank and the package is not
already allocated to the thread, DB2 will search for the package in the
collections listed in the plan's package list - searching in the order in which
the collections are listed - until the package is found.

Given a situation in which multiple collections appear in a plan's package list and all
packages are bound into each collection, some people assume that the first collection
listed will serve, in effect, as the default collection. In other words, if the REGION1
collection is listed first, the package from that collection will be selected if the value
of CURRENT PACKAGESET is blank when an application program issues a call to DB2.
The problem with this assumption is that it does not take into account the possibility
that the package might already be allocated to the thread, as I mentioned earlier. If,
for example, DB2 needs to find package PROGABC, and if the version of that
package bound into the REGION3 collection is already allocated to the program's
DB2 thread, that version of the PROGABC package - and not the version in the
REGION1 collection - will be used for SQL statement execution if CURRENT
PACKAGESET is blank, even though the REGION1 collection is listed first in the plan's
package list. If, for performance reasons, you drive thread reuse and bind your most
frequently executed programs with RELEASE (DEALLOCATE), you'd better not think
of any collection in a multi collection package list as the default. Instead, explicitly
direct DB2 to the desired collection by way of SET CURRENT PACKAGESET. Even if all
of your packages are bound with RELEASE (COMMIT), you should probably use SET
CURRENT PACKAGESET to get where you want to go, in terms of collections.
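
A hedged sketch of the scenario above, reusing the REGION collection names from
the text (the plan name REGPLAN is invented): the plan's package list names all
three collections, and the program selects one explicitly before issuing other SQL.

   BIND PLAN(REGPLAN) PKLIST(REGION1.*, REGION2.*, REGION3.*) ACTION(REPLACE)

and, in the application program (COBOL shown):

   EXEC SQL SET CURRENT PACKAGESET = 'REGION2' END-EXEC.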

1.8.1 Package Versioning

Allowing more than one version of a package (say, the newest and the next-most-
recent versions) to exist within the package list of a plan can help in providing a
quick backout capability in the event that a new version of a package causes
problems. If such a situation occurs, you simply free the just-added version of the
package (the one associated with a new and problematic version of the application
program). When the previous (and well-behaved) version of the program is

executed, the right package will be found because the previous version of the
package is already in the collection list. If the previous version of the package had
been removed from the collection list upon the bind of the new version, falling back
to the old version would have required a rebind - an additional step that could delay
the backout process and perhaps make the process more error-prone.

If you want to use package versioning in this way, you have at least two choices:

1. You could keep the two versions of the package in two separate
collections, with the newer version going in the collection listed first in the
package list of the plan. (I will refer to the two collections as CURRENT and
FALLBACK.) Before a new version of a package is bound, the version of the
package in the FALLBACK collection is freed. Then, you move the version
of the package in the CURRENT collection from that collection to the
FALLBACK collection. Then you bind the new version of the package into
the CURRENT collection. If a problem necessitating a fallback occurs, the
version of the package in the CURRENT collection is freed. When you
execute the previous version of the program, DB2 will search for the
package in the CURRENT collection, since it is listed first in the plan's
package list. On not finding the package in the collection (because it was
freed), DB2 will search for the package in the FALLBACK collection,
resulting in the previous version of the package being found and utilized
for SQL statement execution. (A command sketch of this rotation follows the list.)
2. Alternatively, you could keep multiple versions of the package in the same
collection, with the version ID (specified at precompile time) serving to
distinguish one version of the package from another. (Version ID is intended
primarily for user management of package versions; DB2 for OS/390 uses the
consistency token to find the correct version of a package at program
execution time, as I mentioned previously.) In this case, you'd want to ensure
that you periodically remove old and unneeded package versions (those older
than the current and next-most-recent versions) from the collection. You
could accomplish this goal with a user-written program, in which case you
could opt for a DB2-supplied version ID. (Specifying VERSION AUTO at
precompile time causes DB2 to use a timestamp value as the package version
ID.) The program would use the version ID to identify and free packages
older than the current and next-most-recent versions. If you want to remove
old package versions manually, you might want to go for a version ID that is
shorter and easier to specify than the timestamp value generated by DB2
with VERSION AUTO. (You can specify your own version ID when a program is
precompiled.)
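
A sketch of the collection rotation in choice 1, using the CURRENT and FALLBACK
collection names from the text and PROGABC as the package; BIND PACKAGE with
the COPY option copies the existing package image into the FALLBACK collection
before the new DBRM replaces it in CURRENT:

   FREE PACKAGE(FALLBACK.PROGABC)
   BIND PACKAGE(FALLBACK) COPY(CURRENT.PROGABC) ACTION(REPLACE)
   BIND PACKAGE(CURRENT) MEMBER(PROGABC) ACTION(REPLACE)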

1.8.2 Exercise
Questions:
1. What are the outputs from the DB2 pre-compiler?
2. What are consistency tokens?
3. What is the advantage of multiple package versioning?

Answers:
1. Modified Program source module and Database request Module.
2. A consistency token is a unique identifier that the precompiler places into both
the modified source module and the DBRM; DB2 uses it at run time to find the
correct package for the program.

3. Allowing more than one version of a package to exist within the package list of a
plan can help in providing a quick backout capability in the event that a new
version of a package causes problems.

1.9 Aliases and Synonyms

A table or view can be referred to in an SQL statement by its name, by an alias that
has been defined for its name, or by a synonym that has been defined for its name.
Thus, aliases and synonyms can be thought of as alternate names for tables and
views.

The option of referencing a table or view by an alias or a synonym is not explicitly
shown in the syntax diagrams or mentioned in the description of SQL statements.
Nevertheless, an alias or a synonym can be used wherever a table or view can be
referred to in an SQL statement, with two exceptions: a local alias cannot be used in
CREATE ALIAS, and a synonym cannot be used in CREATE SYNONYM. If an alias is
used in CREATE SYNONYM, it must identify a table or view at the current server. The
synonym is defined on the name of that table or view. If a synonym is used in
CREATE ALIAS, the alias is defined on the name of the table or view identified by the
synonym.

The effect of using an alias or a synonym in an SQL statement is that of text
substitution. For example, if A is an alias for table Q.T, one of the steps involved in
the preparation of SELECT * FROM A is the replacement of 'A' by 'Q.T'. Likewise, if S
is a synonym for Q.T, one of the steps involved in the preparation of SELECT * FROM
S is the replacement of 'S' by 'Q.T'.
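
Using the names from this example, the definitions and the equivalent queries
would be (a sketch; Q.T stands for any existing table):

   CREATE ALIAS A FOR Q.T;
   CREATE SYNONYM S FOR Q.T;

   SELECT * FROM A;
   SELECT * FROM S;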

The differences between aliases and synonyms are as follows:

 SYSADM or SYSCTRL authority or the CREATE ALIAS privilege is required to
define an alias. No authorization is required to define a synonym.

 An alias can be defined on the name of a table or view, including tables and
views that are not at the current server. A synonym can only be defined on
the name of a table or view at the current server.

 An alias can be defined on an undefined name. A synonym can only be
defined on the name of an existing table or view.

 Dropping a table or view has no effect on its aliases. But dropping a table or
view does drop its synonyms.

 An alias is a qualified name that can be used by any authorization ID. A
synonym is an unqualified name that can only be used by the authorization
ID that created it.

 An alias defined at one DB2 subsystem can be used at another DB2
subsystem. A synonym can only be used at the DB2 subsystem where it is
defined.

 When an alias is used, an error occurs if the name that it designates is
undefined or is the name of an alias at the current server. (The alias can
designate an alias defined at another server if that alias represents a table
or view at the other server.) When a synonym is used, this error cannot
occur.

1.10 Schema

A schema is a collection of named objects. The objects that a schema can contain
include distinct types, functions, stored procedures, and triggers. An object is
assigned to a schema when it is created.

When a distinct type, function, stored procedure, or trigger is created, it is given a
qualified, two-part name. The first part is the schema name (or the qualifier), which
is either implicitly or explicitly specified. The default schema is the authorization ID
of the owner of the plan or package. The second part is the name of the object.

Schemas extend the concept of qualifiers for tables, views, indexes, and aliases to
enable the qualifiers for distinct types, functions, stored procedures, and triggers to
be called schema names.

You can create a schema with the schema processor by using the CREATE SCHEMA
statement. CREATE SCHEMA cannot be embedded in a host program or executed
interactively. The ability to process schema definitions is provided for conformance to
ISO/ANSI standards. The result of processing a schema definition is identical to the
result of executing the SQL statements without a schema definition.

Outside of the schema processor, the order of statements is important. They must be
arranged so that all referenced objects have been previously created. This restriction
is relaxed when the schema processor processes the statements if the object table is
created within the same CREATE SCHEMA. The requirement that all referenced
objects have been previously created is not checked until all of the statements have
been processed. For example, within the context of the schema processor, you can
define a constraint that references a table that does not exist yet or GRANT an
authorization on a table that does not exist yet.
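
A sketch of a schema definition as the schema processor accepts it; note that the
GRANT precedes the CREATE TABLE it refers to, which is permitted only within
CREATE SCHEMA (the names are illustrative, and the embedded statements are not
separated by semicolons):

   CREATE SCHEMA AUTHORIZATION SMITH
      GRANT SELECT ON TESTSTUFF TO PUBLIC
      CREATE TABLE TESTSTUFF
         (TESTNO CHAR(4),
          RESULT CHAR(4))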

1.10.1 Exercise
Questions:

1. Aliases and synonyms can be thought of as alternate names for _____ and
_____.
2. What authorization is required to define a synonym?
3. An alias defined at one DB2 subsystem can be used at another DB2 subsystem.
(TRUE/FALSE)

4. How can a schema be created?

Answers:
1. Tables and views
2. None
3. TRUE
4. A schema can be created with the schema processor by using the CREATE
SCHEMA statement.

1.11 LOB

A LOB table space is required to hold large object data such as graphics, video, or
very large text strings. A LOB table space is always associated with the table space
that contains the logical LOB column values. The table space that contains the table
with the LOB columns is called, in this context, the base table space.

The LOB data is logically associated with the base table, but it is physically stored in
an auxiliary table that resides in a LOB table space. There can only be one auxiliary
table in a large object table space. A LOB value can span several pages; however,
only one LOB value is stored per page.

You must have a LOB table space for each LOB column that exists in the table. For
example, if your table has LOB columns for both resumes and photographs, you
must have one LOB table space (and one auxiliary table) for each of those columns.
If the base table space is partitioned, you must have one LOB table space for each
partition.
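
A hedged sketch of the objects involved for one LOB column (the base table
EMP_PHOTO with LOB column PHOTO, and all other names, are invented): a LOB
table space, the auxiliary table that stores the LOB values, and the index that is
required on every auxiliary table.

   CREATE LOB TABLESPACE PHOTOLTS
      IN DSN8D61A
      BUFFERPOOL BP32K;

   CREATE AUX TABLE DSN8610.AUX_EMP_PHOTO
      IN DSN8D61A.PHOTOLTS
      STORES DSN8610.EMP_PHOTO COLUMN PHOTO;

   CREATE UNIQUE INDEX DSN8610.XAUX_EMP_PHOTO
      ON DSN8610.AUX_EMP_PHOTO;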

1.12 System Catalog Tables

Each DB2 maintains a set of tables that contain information about the data under its
control. These tables are collectively known as the catalog. The catalog tables
contain information about DB2 objects such as tables, views, and indexes. In this
document, "catalog" refers to a DB2 catalog unless otherwise indicated. In contrast,
the
catalogs maintained by access method services are known as "integrated catalog
facility catalogs."

Each database includes a set of system catalog tables, which describe the logical
and physical structure of the data. DB2 creates and maintains an extensive set of
system catalog tables for each database. These tables contain information about the
definitions of database objects such as user tables, views, and indexes, as well as
security information about the authority that users have on these objects. They are
created when the database is created, and are updated during the course of normal
operation. You cannot explicitly create or drop them, but you can query and view
their contents using the catalog views.

Tables in the catalog are like any other database tables with respect to retrieval. If
you have authorization, you can use SQL statements to look at data in the catalog
tables in the same way that you retrieve data from any other table in the system.
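
For example, a sketch of a catalog query (the CREATOR value is illustrative):

   SELECT NAME, TYPE, DBNAME, TSNAME
      FROM SYSIBM.SYSTABLES
      WHERE CREATOR = 'DSN8610'
      ORDER BY NAME;

This lists each table, view, and alias qualified by DSN8610, along with its type and
the database and table space that contain it.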

Each DB2 ensures that the catalog contains accurate descriptions of the objects that
the DB2 controls. DB2 for OS/390 maintains a set of tables (in database DSNDB06)
called the DB2 catalog.

The DB2 catalog consists of tables of data about everything defined to the DB2
system, including table spaces, indexes, tables, copies of table spaces and indexes,
storage groups, and so forth. The DB2 catalog is contained in system database
DSNDB06. When you create, alter, or drop any structure, DB2 inserts, updates, or
deletes rows of the catalog that describe the structure and tell how the structure
relates to other structures.

To illustrate the use of the catalog, here is a brief description of some of what
happens when the employee table is created:
 To record the name of the structure, its owner, its creator, its type (alias,
table, or view), the name of its table space, and the name of its database,
DB2 inserts a row into the catalog table SYSIBM.SYSTABLES.
 To record the name of the table to which the column belongs, its length, its
data type, and its sequence number in the table, DB2 inserts rows into
SYSIBM.SYSCOLUMNS for each column of the table.
 To increase by one the number of tables in the sample table space
DSN8S61E, DB2 updates the row in the catalog table
SYSIBM.SYSTABLESPACE.
 To record that the owner (DSN8610) of the table has all privileges on the
table, DB2 inserts a row into table SYSIBM.SYSTABAUTH.
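
As a sketch of how these rows can be queried back, assuming the sample employee table DSN8610.EMP mentioned above, a query such as the following returns the catalog row recorded for the table:

SELECT NAME, TYPE, DBNAME, TSNAME
  FROM SYSIBM.SYSTABLES
 WHERE NAME = 'EMP'
   AND CREATOR = 'DSN8610';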

1.13 Catalog Contention

SQL data definition statements, GRANT statements, and REVOKE statements require
locks on the DB2 catalog. If different application processes are issuing these types of
statements, catalog contention can occur.

1.13.1 Contention within table space SYSDBASE

SQL statements that update the catalog table space SYSDBASE contend with each
other when those statements are on the same table space. Those statements are:
 CREATE, ALTER, and DROP TABLESPACE, TABLE, and INDEX
 CREATE and DROP VIEW, SYNONYM, and ALIAS
 COMMENT ON and LABEL ON
 GRANT and REVOKE of table privileges

Reduce the concurrent use of statements that update SYSDBASE for the same table
space.

1.13.2 Contention independent of databases

The following limitations on concurrency are independent of the referenced database:


 CREATE and DROP statements for a table space or index that uses a storage
group contend significantly with other such statements.


 CREATE, ALTER, and DROP DATABASE, and GRANT and REVOKE database
privileges all contend with each other and with any other function that
requires a database privilege.
 CREATE, ALTER, and DROP STOGROUP contend with any SQL statements that
refer to a storage group, and with extensions to table spaces and indexes that
use a storage group.
 GRANT and REVOKE for plan, package, system, or use privileges contend with
other GRANT and REVOKE statements for the same type of privilege, and with
data definition statements that require the same type of privilege.

1.14 Database Directories

The DB2 directory contains information used by DB2 during normal operation. You
cannot access the directory using SQL, although much of the same information is
contained in the DB2 catalog, for which you can submit queries. The structures in the
directory are not described in the DB2 catalog.

The directory consists of a set of DB2 tables stored in five table spaces in system
database DSNDB01. Each of the following table spaces is contained in a VSAM linear
data set:
 SCT02 is the skeleton cursor table space (SKCT), which contains the internal
form of SQL statements contained in an application. When you bind a plan,
DB2 creates a skeleton cursor table in SCT02.
 SPT01 is the skeleton package table space, which is similar to SCT02 except
that the skeleton package table is created when you bind a package.
 SYSLGRNX is the log range table space, used to track the opening and closing
of table spaces, indexes, or partitions. By tracking this information and
associating it with relative byte addresses (RBAs) as contained in the DB2
log, DB2 can reduce recovery time by reducing the amount of log that must
be scanned for a particular table space, index, or partition.
 SYSUTILX is the system utilities table space, which contains a row for every
utility job that is running. The row stays until the utility is finished. If the
utility terminates without completing, DB2 uses the information in the row
when you restart the utility.
 DBD01 is the database descriptor (DBD) table space, which contains internal
information, called database descriptors (DBDs), about the databases existing
within DB2.

Each database has exactly one corresponding DBD that describes the database,
table spaces, tables, table check constraints, indexes, and referential
relationships. A DBD also contains other information about accessing tables in
the database. DB2 creates and updates DBDs whenever their corresponding
databases are created or updated.


Figure 4: Contents of Database Descriptor Table

1.14.1 Exercise
Questions:
1. The table space that contains the table with the LOB columns is called, in
this context, the ________.
2. The DB2 catalog tables are contained in which database?
3. You can access the DB2 directory using SQL. True/False
4. The DB2 directory consists of _______ in system database _______.

Answers:
1. Base table space
2. DSNDB06
3. False
4. Five table spaces, DSNDB01

1.15 Review Questions

1. What are identity columns?
2. What are surrogate-key indexes?
3. What is the main difference between an alias and a synonym?
4. Explain the catalog contention concept.

1.16 Reference

 www.ibm.com
 www.db2azine.com
 www.db2mag.com
 www.idug.org
 IBM DB2 V6 Administration Guide


UNIT - II
2. Triggers
2.1 Unit Objectives

This unit will acquaint the reader with the following concepts:


1. Triggers
2. Various considerations for Triggers
3. Performance issues
4. User Defined Functions and associated considerations

2.2 Introduction

DB2’s support for triggers and user-defined functions (UDFs) gives users a vast
capacity to enhance applications in ways limited only by their imagination. But with
any extension of user code into a database, many considerations are required to
achieve good performance: user code embedded through these facilities can
completely destroy a service-level agreement. This chapter covers the performance
and implementation issues surrounding these features.

2.3 Definitions

A trigger defines a set of actions that are executed when a delete, insert, or update
operation occurs on a specified table. When such an SQL operation is executed, the
trigger is activated. Triggers can be used along with referential constraints and check
constraints to enforce data integrity rules. Triggers are more powerful than
constraints because they can also be used to cause updates to other tables,
automatically generate or transform values for inserted or updated rows, or invoke
functions that perform operations both inside and outside of DB2. For example,
instead of preventing an update to a column if the new value exceeds a certain
amount, a trigger can substitute a valid value and send a notice to an administrator
about the invalid update.

Triggers are useful for defining and enforcing business rules that involve different
states of the data, for example, limiting a salary increase to 10%. Such a limit
requires comparing the value of a salary before and after an increase. For rules that
do not involve more than one state of the data, consider using referential and check
constraints.

Triggers also move the application logic that is required to enforce business rules
into the database, which can result in faster application development and easier
maintenance. With the logic in the database, for example, the previously mentioned
limit on increases to the salary column of a table, DB2 checks the validity of the
changes that any application makes to the salary column. In addition, the application
programs do not need to be changed when the logic changes.
Triggers are optional and are defined using the CREATE TRIGGER statement.


2.3.1 Creating and Adding Triggers


The CREATE TRIGGER statement defines a trigger in a schema and builds a
trigger package at the current server.

Syntax:

CREATE TRIGGER trigger-name
    { NO CASCADE BEFORE | AFTER }
    { INSERT | DELETE | UPDATE [ OF column-name,... ] }
    ON table-name
    [ REFERENCING [ OLD AS correlation-name ]
                  [ NEW AS correlation-name ]
                  [ OLD TABLE (2) AS identifier ]
                  [ NEW TABLE (3) AS identifier ] ]
    [ FOR EACH ROW | FOR EACH STATEMENT (4) ]
    MODE DB2SQL triggered-action

Notes:
1. The same clause must not be specified more than once. OLD TABLE and NEW
TABLE must be specified only for AFTER triggers.
2. OLD_TABLE is a synonym for OLD TABLE.
3. NEW_TABLE is a synonym for NEW TABLE.
4. FOR EACH STATEMENT must not be specified for BEFORE triggers.
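
As a minimal sketch of the syntax in use, assuming hypothetical tables EMP and COMPANY_STATS (with a numeric column NBEMP), the following trigger keeps an employee count current after every insert:

CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END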

2.3.2 Description
trigger-name
Names the trigger. A schema implicitly or explicitly qualifies the name. The name,
including the implicit or explicit schema name, must not identify a trigger that exists
at the current server. The name is also used to create the trigger package;
therefore, the name must also not identify a package that is already described in the
catalog. The schema name becomes the collection-id of the trigger package. The
unqualified form of trigger-name is a short SQL identifier. The unqualified name is
implicitly qualified with a schema name according to the following rules:

 If the statement is embedded in a program, the schema name of the trigger is the
authorization ID in the QUALIFIER bind option when the plan or package was
created or last rebound. If QUALIFIER was not used, the schema name of the
trigger is the owner of the package or plan.
 If the statement is dynamically prepared, the schema name of the trigger is the
SQL authorization ID of the process.

The qualified form of trigger-name is a short SQL identifier (the schema name)
followed by a period and a short SQL identifier. The schema name must not begin
with 'SYS' unless the name is 'SYSADM'. The schema name that qualifies the trigger
name is the trigger's owner.

The owner of the trigger is determined by how the CREATE TRIGGER statement is
invoked:
 If the statement is embedded in a program, the owner is the authorization ID
of the owner of the plan or package.

 If the statement is dynamically prepared, the owner is the SQL authorization
ID in the CURRENT SQLID special register.

NO CASCADE BEFORE
Specifies that the trigger is a before trigger. DB2 executes the triggered action
before it applies any changes caused by an insert, delete, or update operation on the
triggering table. It also specifies that the triggered action does not activate other
triggers because the triggered action of a before trigger cannot contain any updates.

AFTER
Specifies that the trigger is an after trigger. DB2 executes the triggered action after
it applies any changes caused by an insert, delete, or update operation on the
triggering table.

INSERT
Specifies that the trigger is an insert trigger. DB2 executes the triggered action
whenever there is an insert operation on the triggering table. However, if the insert
trigger is defined on PLAN_TABLE, DSN_STATEMNT_TABLE, or
DSN_FUNCTION_TABLE, and the insert operation was caused by DB2 adding a row
to the table, the triggered action is not executed.

DELETE
Specifies that the trigger is a delete trigger. DB2 executes the triggered action
whenever there is a delete operation on the triggering table.

UPDATE
Specifies that the trigger is an update trigger. DB2 executes the triggered action
whenever there is an update operation on the triggering table. If you do not specify
a list of column names, an update operation on any column of the triggering table,
including columns that are subsequently added with the ALTER TABLE statement,
activates the triggered action.

OF column-name,...
Each column-name that you specify must be a column of the subject table and must
appear in the list only once. An update operation on any of the listed columns
activates the triggered action.

ON table-name
Identifies the subject table with which the trigger is associated. The name must
identify a base table at the current server. It must not identify a temporary table, an
auxiliary table, an alias, a synonym, or a catalog table.

REFERENCING
Specifies the correlation names for the transition variables and the table names for
the transition tables. For the rows in the subject table that are modified by the
triggering SQL operation (insert, delete, or update), a correlation name identifies the
columns of a specific row. A table name identifies the complete set of modified rows.
Each row that is modified by the triggering operation is available to the triggered
action by using column names that are qualified with correlation names that are
specified as follows:

OLD AS correlation-name


Specifies the correlation name that identifies the state of the row prior to the
triggering SQL operation.

NEW AS correlation-name
Specifies the correlation name that identifies the state of the row as modified by the
triggering SQL operation and by any SET statement in a before trigger that has
already been executed.

The complete set of rows that is modified by the triggering operation is available to
the triggered action by using a temporary table name that is specified as follows:

OLD TABLE AS identifier
Specifies the name of a temporary table that identifies the state of the complete set
of rows modified by the triggering SQL operation prior to any actual changes.
identifier is a long SQL identifier.

NEW TABLE AS identifier
Specifies the name of a temporary table that identifies the state of the complete set
of rows as modified by the triggering SQL operation and by any SET statement in a
before trigger that has already been executed. identifier is a long SQL identifier.

At most, the trigger definition can include two correlation names (OLD and NEW) and
two table names (OLD TABLE and NEW TABLE). All the names must be unique from
one another.

OLD and OLD TABLE are valid only if the triggering SQL operation is a delete or an
update. For a delete operation, OLD captures the values of the columns in the
deleted row, and OLD TABLE captures the values in the set of deleted rows. For an
update operation, OLD captures the values of the columns of a row before the
update, and OLD TABLE captures the values in the set of updated rows.

NEW and NEW TABLE are valid only if the triggering SQL operation is an insert or an
update. For both operations, NEW captures the values of the columns in the inserted
or updated row. For before triggers, the values of the updated rows include the
changes from any SET statement in the triggered action.

OLD and NEW are valid only if you also specify FOR EACH ROW, and OLD TABLE and
NEW TABLE are valid only if you specify AFTER.

2.4 Triggers v/s Table Check Constraints

Generally, use constraints rather than triggers to enforce database rules. Use
triggers when a constraint cannot be used to enforce the rule. Check constraints and
referential integrity constraints are usually better suited for rules that involve
only one state. Constraints offer these advantages over triggers:

 Constraints are written in a less procedural way than triggers and give the
system more opportunities for optimization.
 Constraints are enforced when they are created for existing data in the
database.


 Constraints protect data against being placed into an invalid state by any kind
of statement, but each trigger applies only to a specific kind of statement
such as an update or delete.

Triggers are more powerful because they can enforce many rules that constraints
cannot. Use triggers to capture rules that involve different states of data. For
example, a rule that salaries cannot increase more than ten percent requires
knowledge of the before and after state of the data. Constraints cannot enforce such
a rule.
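
As a sketch of such a rule, the before trigger below (assuming a hypothetical table EMP with a SALARY column) compares the two states and rejects an increase of more than ten percent by signaling an SQLSTATE:

CREATE TRIGGER SAL_CHK
  NO CASCADE BEFORE UPDATE OF SALARY ON EMP
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.SALARY > O.SALARY * 1.10)
  BEGIN ATOMIC
    SIGNAL SQLSTATE '75001' ('Salary increase exceeds 10%');
  END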

2.5 Triggers and Declarative RI

Trigger operations may result from changes brought about by DB2-enforced
referential constraints. For example, if you delete a row from the EMPLOYEE table
and that delete propagates DELETEs to the PAYROLL table through referential
constraints, the delete triggers defined on the PAYROLL table are subsequently
executed. The delete triggers are activated as a result of the referential constraint
defined on the EMPLOYEE table.

2.6 Performance Issues

A recursive trigger performs an update that causes the same trigger to fire again.
Recursive triggers can easily lead to loops and can involve very complex statements,
although some applications may require them for related rows. The triggered action
must include logic that stops the recursion.

Ordering multiple triggers can be an issue because triggers on the same table are
activated in the order in which they were created (identified by the creation time
stamp). The interaction among triggers and referential constraints can also be an
issue because the order of processing can significantly affect the results produced.

Invoking stored procedures and UDFs from triggers presents some performance and
manageability concerns. Triggers can include only SQL, but they can call stored
procedures and UDFs, which are user written and therefore have many implications
for integrity and performance. Transition tables can also be passed to stored
procedures and UDFs.

Trigger cascading occurs when a trigger modifies the triggering table or another
table. Triggers can be activated at the same level or at different levels; when AFTER
triggers are activated at different levels, cascading occurs. Cascading can occur for
UDFs, stored procedures, and triggers.

2.7 Monitoring and Controlling Triggers

There are various ways to monitor the actions of triggers. The DB2PM
statistics and accounting reports include these statistics:

 The number of times a trigger was activated
 The number of times a row trigger was activated
 The number of times an SQL error occurred during the execution of a
triggered action

Other details can be found in the traces. For example, in IFCID 16 you can find
information about the materialization of a work file in support of a transition table,
where TR indicates a transition table for triggers. Other information in IFCID 16
includes the depth of the trigger (0-16), where 0 indicates that there are no triggers.
You can also find the type of SQL that invoked the trigger: I = INSERT, U =
INSERT into a transition table because of an update, D = INSERT into a transition
table because of a delete. The type of referential integrity that caused an insert into
a transition table for a trigger is also indicated, with an S for SET NULL (which can
occur when the SQL type is U) or a C for CASCADE DELETE (which can occur when
the SQL type is D).

If a transition table needs to be scanned for a trigger, IFCID 17 contains TR
(transition table scan for a trigger).

2.7.1 Catalog Information


The SYSIBM.SYSTRIGGERS catalog table contains information about the triggers
defined in your databases. To find all the triggers defined on a particular table, the
characteristics of each trigger, and the order in which they are executed, you can
issue the following query:

SELECT DISTINCT SCHEMA, NAME, TRIGTIME, TRIGEVENT,
       GRANULARITY, CREATEDTS
  FROM SYSIBM.SYSTRIGGERS
 WHERE TBNAME = table-name
   AND TBOWNER = table-owner
 ORDER BY CREATEDTS

You can get the actual text of the trigger with the following statement:

SELECT TEXT, SEQNO
  FROM SYSIBM.SYSTRIGGERS
 WHERE SCHEMA = schema-name
   AND NAME = trigger-name
 ORDER BY SEQNO

2.7.2 Exercise
1. It is possible to activate a trigger without executing an SQL statement. True/False?
2. For rules that involve multiple states, it is better to use constraints rather
than triggers. True/False?
3. Triggers on the same table are executed in the _____ of their definition.
4. Is it possible to invoke SPs from triggers?
5. What SYSIBM table gives you information about triggers?

Answers:
1. False
2. False
3. ORDER
4. Yes


5. SYSIBM.SYSTRIGGERS

2.8 User Defined Functions (UDF)

User-defined functions (UDFs) allow users and developers to extend the functionality
of the DBMS by adding their own function definitions to the database engine itself.
This provides more synergy between application and database and helps with the
development cycle because it is more object-oriented; it can be used for application
improvements in many areas. This ability leads to a new breed of developers: the
database procedural programmers.

UDFs are functions created by the user through DDL using the CREATE FUNCTION
statement. This statement can be issued in an interactive query interface or in an
application program. UDFs can be simple or complex, and can operate on data inside
or outside the database. They can provide application-specific functions or
business-specific functions that cross applications. They provide a performance
advantage because they execute on the server, not the client. Also, they are stored
in one location, so change control issues need to be revisited but are less
problematic.

Use UDFs with LOBs (large objects) for both searching and analysis, and with UDTs
(user-defined data types) for object processing unique to your business and the
complex needs of your applications. These functions can provide anything from
simple data transformations to financial calculations, limited only by imagination
and the performance guidelines discussed later in the chapter.

UDFs are created with the CREATE FUNCTION statement and are used in SQL just
like built-in functions. They can also define the behavior of a user-defined data
type, acting as its methods and providing encapsulation.

There are three types of UDFs: sourced functions, external scalar functions, and
external table functions. Basically, sourced functions are based on existing built-in
functions, while external functions are written in a host language.
 Sourced functions mimic other functions. They can be column functions that
work on a collection of values and return a single value (e.g., MAX, SUM,
AVG), or scalar functions that work on individual values and return a single
value (CHAR, CONCAT), or operator functions such as >, <, or =.
 External scalar functions are written in a programming language, such as C,
C++, or Java, and return a scalar value. External scalar functions cannot
contain SQL, cannot be column functions, cannot access or modify the
database, and can perform calculations only on parameters.
 External table functions can return a table and can be written in C or Java.
These functions work on scalar values that may be of different data types
with different meanings and return a table. For example:
o CREATE FUNCTION EXM ()
o RETURNS TABLE (COL1 DATE, ...)

Just like user-defined data types, UDFs, which are user-created SQL functions, play a
role in both normal data types and object types.

2.8.1 Sourced Scalar Functions


Scalar functions return a single-value answer each time they are called, and table
functions simply return a table. Table functions as a rule are not distributed as part
of the extenders, but large libraries of scalar functions are included. Extenders
include user-defined functions required for each of the distinct data types supported.
These UDFs perform operations unique to image, audio, video, XML, and spatial
objects. Any developer can create additional UDFs for the objects supported by the
extenders as well as for any other reason. UDFs are created with the SQL CREATE
FUNCTION statement. The parameter list is rather large but easily understood. In the
following example, a UDF is created that specifies the data type to which the UDF
can be applied, and it performs a financial calculation on the data.

CREATE FUNCTION something_financial (EURO_DOLLAR)
  RETURNS EURO_DOLLAR
  EXTERNAL NAME 'SOME_FIN'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION

The previous statement defines the data type of the parameter and the data type of
the returned information, both as UDTs. It also states that it is an external function,
written in the C language, and contains no SQL. UDFs can be used in SQL statements
the same way as built-in functions. For example:

SELECT something_financial (OZ_ANNUAL)
  FROM places
 WHERE location = :whatever

UDFs open up a world of options for highly specialized processing for a wide variety
of applications. UDFs are basically subroutines that can be invoked through SQL
statements and can range from simple SQL statements to large COBOL programs.
We could create a UDF called EURO_TO_DOLLAR that takes EURO amounts or
columns as a parameter and returns the result as a dollar amount. The UDF could
get conversion data in real time from another source, but the UDF has to be written
only once and can be used everywhere. Some UDFs can search and index LOB data,
while others define the behavior of a UDT.

There are about 100 built-in functions, some of which are supplied as examples of
UDFs. Casting functions are helpful when dealing with UDTs. Other very important
built-in functions are ROUND, JULIAN_DAY, LOCATE, LTRIM, RTRIM, RAISE_ERROR,
TRIM, and TRUNCATE. The list is long, but these functions help keep more
program-code processing inside the engine via SQL, where it belongs and is better
optimized. Among the sample UDFs are ALTDATE, which can return the current date
in any of 34 possible formats, DAYNAME for the weekday name, and MONTHNAME
for the name of the month. Watch for many others coming from IBM and elsewhere.

2.8.2 External Functions

2.8.2.1 Scalar Functions


External scalar functions provide additional functionality written in an application
program. They are user-written in a host language and are defined to DB2 with a
reference to an MVS load module, which contains the object code of the function.
The module is loaded when the function is invoked. The following example shows
how to create and invoke an external scalar function.


--Creating an external scalar function

CREATE FUNCTION ADD_IT (INTEGER, INTEGER)
  RETURNS INTEGER
  EXTERNAL NAME 'ADDIT'
  LANGUAGE COBOL;

--Invoking an external scalar function

UPDATE INVENTORY
   SET ITEM_COUNT = ADD_IT (INSTOCK, ON_ORDER);

2.8.2.2 Table Functions


A table function takes individual scalar values as parameters and returns a table to
the SQL statement. This type of function can be specified only within the FROM
clause of a SELECT. Table functions are external functions and are used, for example,
to retrieve data from a non-DB2 source, pass it to the SQL statement, and perhaps
participate in a join. This is a way to build a table from non-DB2 data; it can be
used in a table expression containing a SELECT that is a subquery of an INSERT
statement.

These are some characteristics of table functions:

 They are written in normal programming languages.
 They can perform operating system calls.
 They can read data from files.
 They can read data across a network.
 They let SQL process any kind of data from anywhere.
 The data from the table function can be joined to another table.
 They are like scalar functions (which return one value), except that they
return rows of columns.

Following is an example of how a table function might be used.

SELECT MONTH (people.birthdate) AS MONTH, EMP.LASTNAME
  FROM EMPLOYEE AS EMP,
       TABLE (people(CURRENT DATE)) AS people
 ORDER BY MONTH, EMP.LASTNAME

Basically, this example invokes a UDF that gets data from somewhere else and
returns it as columns and rows to the SQL statement. There are many implications,
such as how the optimizer can determine which table is the inner and which is the
outer for the join process.
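
For completeness, the table function used above might be defined along the following lines. This is only a sketch: the external program name PEOPLEPG and the result columns are assumptions, not part of the example above:

CREATE FUNCTION PEOPLE (DATE)
  RETURNS TABLE (BIRTHDATE DATE,
                 LASTNAME VARCHAR(30))
  EXTERNAL NAME 'PEOPLEPG'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  DISALLOW PARALLEL;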

2.8.2.3 Examples of UDFs that come with DB2


Following are some of the UDFs that come with DB2. Some are written in C and
some in C++.

 ALTDATE 1 converts the current date to a user-specified format.
 ALTDATE 2 converts a date from one format to another.
 ALTTIME 3 converts the current time to a user-specified format.
 ALTTIME 4 converts a time from one format to another.
 DAYNAME returns the day of the week for a user-specified date.
 MONTHNAME returns the month for a user-specified date.


 CURRENCY formats a floating-point number as a currency value.
 TABLE_NAME returns the unqualified table name for a table, view, or alias.
 TABLE_QUALIF returns the qualifier for a table, view, or alias.
 WEATHER returns a table of weather information from an EBCDIC data set.

2.8.3 UDF Restrictions


The implementation and use of user-defined functions have a few restrictions. DB2
uses the Recoverable Resource Manager Services attachment facility (RRSAF) as its
interface with your user-defined function. You cannot include RRSAF calls in your
UDF; DB2 will reject any RRSAF calls that it finds. If the UDF is not defined with
SCRATCHPAD or EXTERNAL ACTION, it is not guaranteed to execute under the same
task each time it is invoked. You cannot execute COMMIT or ROLLBACK statements
in your UDF, and you must close all open cursors in a UDF scalar function. DB2
returns an SQL error if cursors are not closed before it completes.

The number of parameters that can be passed to a routine is restricted in each
language. User-defined table functions in particular can require large numbers of
parameters.

2.8.4 UDF Performance Considerations


UDFs have the same basic set of performance considerations that stored procedures
have. These considerations are basically common sense: how much work is done,
where the work is done, and how long it takes. A UDF needs to have a very specific
function and to do that one thing well.

2.8.5 Monitoring and Controlling UDFs


You can invoke user-defined functions in an SQL statement wherever you can use
expressions or built-in functions. User-defined functions, like stored procedures, run
in WLM-established address spaces. DB2 user-defined functions are controlled by the
following commands.

The START FUNCTION SPECIFIC command activates an external function that has
been stopped. You cannot start built-in functions or user-defined functions that are
sourced on another function. You can use the START FUNCTION SPECIFIC command
to activate all or a specific set of stopped external functions.

To activate an external function that is stopped, issue the following command.

START FUNCTION SPECIFIC (function-name)

The SCOPE(GROUP) option can also be used on the START FUNCTION SPECIFIC
command to allow you to start a UDF on all subsystems in a data sharing group.

The DB2 command DISPLAY FUNCTION SPECIFIC displays statistics about external
user-defined functions that are accessed by DB2 applications. This command displays
an output line for each function that a DB2 application has accessed. The information
that is returned by this command reflects a dynamic status for a point in time that
may change before another DISPLAY is issued. This command does not display
information about built-in functions or user-defined functions that are sourced on
another function.


To display statistics about an external user-defined function accessed by DB2
applications, issue the following command:

DISPLAY FUNCTION SPECIFIC (function-name)

2.8.6 Stopping UDFs


The DB2 command STOP FUNCTION SPECIFIC prevents DB2 from accepting SQL
statements with invocations of the specified functions. This command does not
prevent SQL statements with invocations of the functions from running if they have
already been queued or scheduled by DB2. Built-in functions or user-defined
functions that are sourced on another function cannot be explicitly stopped. While
the STOP FUNCTION SPECIFIC command is in effect, any attempt to execute the
stopped functions is queued. You can use the STOP FUNCTION SPECIFIC command
to stop access to all or a specific set of external functions.

Use the START FUNCTION SPECIFIC command to activate all or a specific set of
stopped external functions.

To prevent DB2 from accepting SQL statements with invocations of the specified
functions, issue the following statement:

STOP FUNCTION SPECIFIC (function-name)

2.8.7 Exercise
1. UDFs are created through DDL using the ______ ______ statement.
2. Sourced functions are written using a host language like C, C++, etc.
True/False?
3. Table functions return a _____.
4. Data from non-DB2 objects could be used in DB2 SQLs. _____ functions are
used for this purpose.
5. The DB2 command _____ ______ _______ prevents DB2 from accepting SQL
statements with invocations of UDFs.

Answers:
1. CREATE FUNCTION
2. False
3. Table
4. Table
5. STOP FUNCTION SPECIFIC

2.9 Review Questions

1. How does a trigger compare vis-à-vis constraints?
2. What are the performance considerations for triggers?
3. Explain in brief the various types of UDFs.
4. How can one monitor UDFs?

2.10 Reference

 DB2 High Performance Design and Tuning by Richard Yevich and Susan
Lawson.


UNIT - III
3. Physical Architecture
3.1 Unit Objectives

This unit introduces the physical architecture of DB2. It deals with the bootstrap
data set, active and archive logs, DSNZPARMs, and storage groups.

3.2 Introduction

Understanding the journalizing (logging) process in DB2 gives much insight into the
architecture of DB2; more or less the same concept is adopted in all RDBMSs.
Maintenance of the active and archive logs falls within the purview of the system
DBA, but understanding it helps a great deal in tuning performance-critical
applications. The DSNZPARMs are the DB2 installation parameters, most of which
could not traditionally be changed without shutting down DB2; recent versions,
however, provide the flexibility to change some of these parameters while DB2 is
running, for performance improvement.

3.3 Bootstrap data set

3.3.1 Introduction

The bootstrap data set (BSDS) is a VSAM key-sequenced data set (KSDS) that
contains information critical to DB2. Specifically, the BSDS contains:

 An inventory of all active and archive log data sets known to DB2. DB2 uses
this information to track the active and archive log data sets. DB2 also uses
this information to locate log records to satisfy log read requests during
normal DB2 system activity and during restart and recovery processing.
 A wrap-around inventory of all recent DB2 checkpoint activity. DB2 uses this
information during restart processing.
 The distributed data facility (DDF) communication record, which contains
information necessary to use DB2 as a distributed server or requester.
 Information about buffer pools.

3.3.2 Recovery of BSDS


Because the BSDS is essential to recovery in the event of subsystem failure, during
installation DB2 automatically creates two copies of the BSDS and, if space permits,
places them on separate volumes. If a copy fails while DB2 is running, DB2 sends a
warning message and continues operating with a single BSDS. It is the responsibility
of operations to monitor this circumstance and restore the BSDS duality as soon as
possible.

To recover a lost BSDS when DB2 is executing:

1. The failing BSDS must be deleted.
2. The failing BSDS must be redefined, or alternatively, an existing spare BSDS copy
must be renamed.


3. The BSDS is rebuilt with a -RECOVER BSDS command.

If a BSDS copy fails while DB2 is starting, the startup does not complete.

To recover a lost BSDS when DB2 is stopped:

1. The failing BSDS must be deleted.
2. The failing BSDS must be redefined, or alternatively, an existing spare BSDS copy
must be renamed.
3. The BSDS is rebuilt from the good copy with an IDCAMS REPRO.
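
For example, if copy 2 failed while DB2 was running, the data set would be deleted and redefined outside DB2 (for instance, with IDCAMS), after which the command below rebuilds it. The command is as described above; the scenario itself is illustrative:

-RECOVER BSDS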

3.3.3 Naming Convention


The default names for BSDSs have the following structure:

hlq.BSDS0n

hlq - VSAM catalog high level qualifier
BSDS0 - Standard part of the name
n - BSDS copy, 1 or 2

3.3.4 Exercise

Questions
1. What are the contents of the BSDS?
2. Is the BSDS updated while DB2 is running?
3. Does the BSDS give information about the active log data set names?

Answers
1. An inventory of all active and archive log data sets known to DB2, a wrap-around
inventory of all recent DB2 checkpoint activity, the distributed data facility (DDF)
communication record, and information about buffer pools
2. Yes
3. Yes

3.4 Active and archive logs

3.4.1 Introduction

DB2 records all data changes and significant events in a log as they occur. In the
case of failure, DB2 uses this data to recover.

DB2 writes each log record to a disk data set called the active log. When the active
log is full, DB2 copies the contents of the active log to a disk or magnetic tape data
set called the archive log.

You can choose either single logging or dual logging.

 A single active log contains between 2 and 31 active log data sets.
 With dual logging, the active log has the capacity for 4 to 62 active log data
sets, because two identical copies of the log records are kept.
Each active log data set is a single-volume, single-extent VSAM LDS.


Before you can fully understand how logging works, you need to be familiar with how
database changes are made to ensure consistency. In this section, we discuss units
of recovery and rollbacks.

3.4.2 Unit of Recovery


A unit of recovery is the work, done by a single DB2 DBMS for an application, that
changes DB2 data from one point of consistency to another. A point of consistency
(also, sync point or commit point) is a time when all recoverable data that an
application program accesses is consistent with other data. A unit of recovery begins
with the first change to the data after the beginning of the job or following the last
point of consistency and ends at a later point of consistency. An example of units of
recovery within an application program is shown in here in Figure 1.

Figure 1: A unit of recovery within an application process (time line: the application
begins, SQL transactions 1 and 2 execute within one unit of recovery, and a commit
establishes the point of consistency before the application ends)

In this example, the application process makes changes to databases at SQL
transactions 1 and 2. The application process can include a number of units of
recovery or just one, but any complete unit of recovery ends with a commit point.

For example, a bank transaction might transfer funds from account A to account B.
First, the program subtracts the amount from account A. Next, it adds the amount to
account B. After subtracting the amount from account A, the two accounts are
inconsistent. These accounts are inconsistent until the amount is added to account B.
When both steps are complete, the program can announce a point of consistency and
thereby make the changes visible to other application programs.

Normal termination of an application program automatically causes a point of
consistency. The SQL COMMIT statement causes a point of consistency during
program execution under TSO.
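
A minimal sketch of the funds-transfer unit of recovery, assuming a hypothetical ACCOUNTS table, might look as follows; the COMMIT ends the unit of recovery and establishes the point of consistency:

UPDATE ACCOUNTS SET BALANCE = BALANCE - 100
 WHERE ACCTNO = 'A';

UPDATE ACCOUNTS SET BALANCE = BALANCE + 100
 WHERE ACCTNO = 'B';

COMMIT;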

3.4.3 Rolling back Work

If failure occurs within a unit of recovery, DB2 backs out any changes to data,
returning the data to its state at the start of the unit of recovery; that is, DB2
undoes the work. The events are shown in Figure 2. The SQL ROLLBACK statement,


and deadlocks and timeouts (reported as SQLCODE -911, SQLSTATE 40001), cause
the same events.

Figure 2: Unit of recovery (rollback) (time line: one unit of recovery begins at a
point of consistency, database updates are made, a rollback backs out the updates,
and the data is returned to its initial state at a new point of consistency)

The effects of inserts, updates, and deletes to large object (LOB) values are backed
out along with all the other changes made during the unit of work being rolled back,
even if the LOB values that were changed reside in a LOB table space with the LOG
NO attribute.

An operator or an application can issue the CANCEL THREAD command with the
NOBACKOUT option to cancel long-running threads without backing out data changes.
As a result, DB2 does not read the log records and does not write or apply the
compensation log records. After CANCEL THREAD NOBACKOUT processing, DB2
marks all objects associated with the thread as refresh pending (REFP) and puts the
objects in a logical page list (LPL). The NOBACKOUT request might fail for either of
the following two reasons:

 DB2 does not completely back out updates of the catalog or directory
(message DSNI032I with reason 00C900CC).
 The thread is part of a global transaction (message DSNV439I).
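
For example, assuming a thread token of 123 obtained from a -DISPLAY THREAD command (the token value is illustrative), the cancellation might be issued as follows:

-CANCEL THREAD(123) NOBACKOUT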

3.4.4 Creation of log records

Log records typically go through the following cycle:

1. DB2 registers changes to data and significant events in recovery log records.
2. DB2 processes recovery log records and breaks them into segments if
necessary.
3. Log records are placed sequentially in output log buffers, which are formatted
as VSAM control intervals (CIs). Each log record is identified by a continuously


increasing RBA in the range 0 to 2^48 - 1. (In a data sharing environment, a log
record sequence number (LRSN) is used to identify log records.)
4. The CIs are written to a set of predefined disk active log data sets, which are
used sequentially and recycled.
5. As each active log data set becomes full, its contents are automatically
offloaded to a new archive log data set.
If you change or create data that is compressed, the data logged is also compressed.
Changes to compressed rows like inserts, updates, and deletes are also logged as
compressed data.

3.4.5 Retrieval of Log records

Log records are retrieved through the following events:

1. A log record is requested using its RBA.
2. DB2 searches for the log record in the locations listed below, in the order
given:
a. The log buffers.
b. The active logs. The bootstrap data set registers which log RBAs apply
to each active or archive log data set. If the record is in an active log,
DB2 dynamically acquires a buffer, reads one or more CIs, and returns
one record for each request.
c. The archive logs. DB2 determines which archive volume contains the
CIs, dynamically allocates the archive volume, acquires a buffer, and
reads the CIs.

3.4.6 Writing the Active log

The log buffers are written to an active log data set when they become full, when the
write threshold is reached (as specified on the DSNTIPL panel), or, more often, when
the DB2 subsystem forces the log buffer to be written (such as at commit time). In
the last case, the same control interval can be written several times to the same
location. The use of dual active logs increases the reliability of recovery.

When DB2 is initialized, the active log data sets named in the BSDS are dynamically
allocated for exclusive use by DB2 and remain allocated exclusively to DB2 (the data
sets were allocated as DISP=OLD) until DB2 terminates. Those active log data sets
cannot be replaced, nor can new ones be added, without terminating and restarting
DB2. The size and number of log data sets are determined by the values specified
on installation panel DSNTIPL.

3.4.7 Writing the archive log

The process of copying active logs to archive logs is called offloading. The relation of
offloading to other logging events is shown schematically in Figure 3.


Figure 3: The offloading process (a triggering event starts the offload process, which
writes the active log to the archive log and records the offload in the BSDS)
3.4.8 Triggering log offload

An offload of an active log to an archive log can be triggered by several events. The
most common are when:

 An active log data set is full
 Starting DB2 when an active log data set is full
 The command ARCHIVE LOG is issued

An offload is also triggered by two uncommon events:

 An error occurring while writing to an active log data set. The data set is
truncated before the point of failure, and the record that failed to write becomes
the first record of the next data set. An offload is triggered for the truncated data
set as in normal end-of-file. If there are dual active logs, both copies are
truncated so the two copies remain synchronized.
 Filling of the last unarchived active log data set. Message DSNJ110E is issued,
stating the percentage of its capacity in use; IFCID trace record 0330 is also
issued if statistics class 3 is active. If all active logs become full, DB2 stops
processing until offloading occurs and issues this message:

DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS

3.4.9 The offloading process

During the process, DB2 determines which data set to offload. Using the last log RBA
offloaded, as registered in the BSDS, DB2 calculates the log RBA at which to start.
DB2 also determines the log RBA at which to end, from the RBA of the last log record
in the data set, and registers that RBA in the BSDS.

When all active logs become full, the DB2 subsystem runs an offload and halts
processing until the offload is completed. If the offload processing fails when the
active logs are full, then DB2 cannot continue doing any work that requires writing to
the log.

When an active log is ready to be offloaded, a request can be sent to the MVS
console operator to mount a tape or prepare a disk unit. The value of the field WRITE
TO OPER of the DSNTIPA installation panel determines whether the request is


received. If the value is YES, the request is preceded by a WTOR (message number
DSNJ008E) informing the operator to prepare an archive log data set for allocating.

The operator need not respond to message DSNJ008E immediately. However,
delaying the response delays the offload process. It does not affect DB2 performance
unless the operator delays response for so long that DB2 runs out of active logs.

The operator can respond by canceling the offload. In that case, if the allocation is
for the first copy of dual archive data sets, the offload is merely delayed until the
next active log data set becomes full. If the allocation is for the second copy, the
archive process switches to single copy mode, but for the one data set only.

3.4.9.1 Interruptions and errors while offloading


Here is how DB2 handles the following interruptions in the offloading process:
 The command STOP DB2 does not take effect until offloading is finished.
 A DB2 failure during offload causes offload to begin again from the previous start
RBA when DB2 is restarted.
 An unknown problem that causes the offload task to hang means that DB2 cannot
continue processing the log. This problem might be resolved by retrying the
offload, which you can do by using the option CANCEL OFFLOAD of the command
ARCHIVE LOG.

3.4.10 Archive log data sets

Archive log data sets can be placed on standard label tapes or disks and can be
managed by DFSMShsm (Data Facility Hierarchical Storage Manager). They are
always written by QSAM. Archive logs on tape are read by BSAM; those on disk are
read by BDAM. Each MVS logical record in an archive log data set is a VSAM CI from
the active log data set. The block size is a multiple of 4 KB.

Output archive log data sets are dynamically allocated, with names chosen by DB2.
The data set name prefix, block size, unit name, and disk sizes needed for allocation
are specified when DB2 is installed, and recorded in the DSNZPxxx module. You can
also choose, at installation time, to have DB2 add a date and time to the archive log
data set name.

It is not possible to specify specific volumes for new archive logs. If allocation errors
occur, offloading is postponed until the next time offloading is triggered.

3.4.10.1 Using dual archive logging

If you specify dual archive logs at installation time, each log CI retrieved from the
active log is written to two archive log data sets. The log records that are contained
on a pair of dual archive log data sets are identical, but end-of-volumes are not
synchronized for multivolume data sets.

Archiving to disk offers faster recoverability but is more expensive than archiving to
tape. If you use dual logging, you can specify on installation panel DSNTIPA that the
primary copy of the archive log go to disk and the secondary copy go to tape.


This feature increases recovery speed without using as much disk. The second tape
is intended as a backup or can be sent to a remote site in preparation for disaster
recovery. To make recovering from the COPY2 archive tape faster at the remote site,
use the new installation parameter, ARC2FRST, to specify that COPY2 archive log
should be read first. Otherwise, DB2 always attempts to read the primary copy of the
archive log data set first.

3.4.10.2 Archiving to a tape

If the unit name reflects a tape device, DB2 can extend to a maximum of twenty
volumes. DB2 passes a file sequence number of 1 on the catalog request for the first
file on the next volume. Though that might appear to be an error in the integrated
catalog facility catalog, it causes no problems in DB2 processing.

If you choose to offload to tape, consider adjusting the size of your active log data
sets such that each set contains the amount of space that can be stored on a nearly
full tape volume. That adjustment minimizes tape handling and volume mounts and
maximizes the use of tape resources. However, such an adjustment is not always
necessary.

If you want the active log data set to fit on one tape volume, consider placing a copy
of the BSDS on the same tape volume as the copy of the active log data set. Adjust
the size of the active log data set downward to offset the space required for the
BSDS.

3.4.10.3 Archiving to disk

All archive log data sets allocated on disk must be cataloged. If you choose to
archive to disk, then the field CATALOG DATA of installation panel DSNTIPA must
contain YES. If this field contains NO, and you decide to place archive log data sets
on disk, you receive message DSNJ072E each time an archive log data set is
allocated, although the DB2 subsystem still catalogs the data set.

If you use disk storage, be sure that the primary and secondary space quantities and
block size and allocation unit are large enough so that the disk archive log data set
does not attempt to extend beyond 15 volumes. That minimizes the possibility of
unwanted MVS B37 or E37 abends during the offload process. Primary space
allocation is set with the PRIMARY QUANTITY field of the DSNTIPA installation panel.
The primary space quantity must be less than 64K tracks because of the DFSMS
Direct Access Device Space Management limit of 64K tracks on a single volume when
allocating a sequential disk data set.

3.4.11 Archiving the log

A properly authorized operator can archive the current DB2 active log data sets,
whenever required, by issuing the ARCHIVE LOG command. Using ARCHIVE LOG can
help with diagnosis by allowing you to quickly offload the active log to the archive log
where you can use DSN1LOGP to further analyze the problem.

To issue this command, you must have either SYSADM authority, or have been
granted the ARCHIVE privilege.


-ARCHIVE LOG

When you issue the above command, DB2 truncates the current active log data sets,
then runs an asynchronous offload, and updates the BSDS with a record of the
offload. The RBA that is recorded in the BSDS is the beginning of the last complete
log record written in the active log data set being truncated.

You could use the ARCHIVE LOG command as follows to capture a point of
consistency for the MSTR01 and XUSR17 databases:

-STOP DATABASE (MSTR01,XUSR17)
-ARCHIVE LOG
-START DATABASE (MSTR01,XUSR17)

In this simple example, the STOP command stops activity for the databases before
archiving the log.

3.4.11.1 Quiescing activity before offloading

Another method of ensuring that activity has stopped before the log is archived is
the MODE(QUIESCE) option of ARCHIVE LOG. With this option, DB2 users are
quiesced after a commit point, and the resulting point of consistency is captured in
the current active log before it is offloaded. Unlike the QUIESCE utility, ARCHIVE LOG
MODE(QUIESCE) does not force all changed buffers to be written to disk and does
not record the log RBA in SYSIBM.SYSCOPY. It does record the log RBA in the
bootstrap data set.

Consider using MODE(QUIESCE) when planning for offsite recovery. It creates a
system-wide point of consistency, which can minimize the number of data
inconsistencies when the archive log is used with the most current image copy during
recovery.

In a data sharing group, ARCHIVE LOG MODE(QUIESCE) might result in a delay
before activity on all members has stopped. If this delay is unacceptable to you,
consider using ARCHIVE LOG SCOPE(GROUP) instead. This command causes
truncation and offload of the logs for each active member of a data sharing group.
Although the resulting archive log data sets do not reflect a point of consistency, all
the archive logs are made at nearly the same time and have similar LRSN values in
their last log records. When you use this set of archive logs to recover the data
sharing group, you can use the ENDLRSN option in the CRESTART statement of the
change log inventory utility (DSNJU003) to truncate all the logs in the group to the
same point in time.
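
As an illustrative sketch, the change log inventory control statement might look like the following, where the ENDLRSN value is a placeholder for the truncation point chosen from the last log records:

CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn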

The MODE(QUIESCE) option suspends all new update activity on DB2 up to the
maximum period of time specified on the installation panel DSNTIPA. If the time
needed to quiesce is less than the time specified, then the command completes
successfully; otherwise, the command fails when the time period expires. This time
amount can be overridden when you issue the command, by using the TIME option:

-ARCHIVE LOG MODE(QUIESCE) TIME(60)


The above command allows for a quiesce period of up to 60 seconds before archive
log processing occurs.

3.4.11.2 Changing the checkpoint frequency dynamically

Use the LOGLOAD or CHKTIME option of the SET LOG command to dynamically
change the checkpoint frequency without recycling DB2. The LOGLOAD value
specifies the number of log records that DB2 writes between checkpoints. The
CHKTIME value specifies the number of minutes between checkpoints. Either value
affects the restart time for DB2.

For example, during prime shift, your DB2 shop might have a low logging rate, but
require that DB2 restart quickly if it terminates abnormally. To meet this restart
requirement, you can decrease the LOGLOAD value to force a higher checkpoint
frequency. In addition, during off-shift hours the logging rate might increase as
batch updates are processed, but the restart time for DB2 might not be as critical. In
that case, you can increase the LOGLOAD value which lowers the checkpoint
frequency.
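
For example (the values are illustrative only), the following commands would set a checkpoint every 500,000 log records or every 10 minutes, respectively:

-SET LOG LOGLOAD(500000)
-SET LOG CHKTIME(10)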

You can also use the LOGLOAD or CHKTIME option to initiate an immediate system
checkpoint:

-SET LOG LOGLOAD(0)
or
-SET LOG CHKTIME(0)

The CHKFREQ value that is altered by the SET LOG command persists only while DB2
is active. On restart, DB2 uses the CHKFREQ value in the DB2 subsystem parameter
load module.

3.4.11.3 Setting the limit for archive log tape units

Use the DB2 command SET ARCHIVE to set the upper limit on the number of tape
units for the archive log and on their deallocation time. This command overrules the
values specified during installation or in a previous invocation of the SET ARCHIVE
command. The changes initiated by SET ARCHIVE are temporary; at restart, DB2
uses the values that were set during installation.
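For example (the values are illustrative), the following command limits DB2 to two
tape units for archive log processing and deallocates an idle unit after 10 minutes:

-SET ARCHIVE COUNT(2) TIME(10)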

3.4.11.4 Displaying log information

Use the DISPLAY LOG command to display the current checkpoint frequency (either
the number of log records or the number of minutes between checkpoints). To obtain
additional information about log data sets and checkpoints, use the Print Log Map
utility (DSNJU004).
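For example:

-DISPLAY LOG

This command reports the current checkpoint frequency along with status
information about the active log data sets.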

3.4.12 Naming convention

3.4.12.1 Active log names


The default names for active log data sets have the following structure:


hlq.LOGCOPYn.DSmm

hlq      VSAM catalog high level qualifier
LOGCOPY  Standard part of the name
n        Active log copy, 1 or 2
mm       Active log number, 01 to 31
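For example, assuming a hypothetical high level qualifier of DSNC910, the first copy
of the third active log data set would be named DSNC910.LOGCOPY1.DS03.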

3.4.12.2 Archive log names


The default names for archive log and BSDS backup data sets have the following
structure (some qualifiers are optional):

hlq.ARCHLOGn.Dyyddd.Thhmmsst.axxxxxx

hlq       VSAM catalog high level qualifier
ARCHLOG   Standard part of the name
n         Archive log copy, 1 or 2
Dyyddd    Date, yy=year (2 or 4 digits), ddd=day of year
Thhmmsst  Time, hh=hour, mm=minute, ss=seconds, t=tenths
a         A=Archive log, B=BSDS backup
xxxxxx    File sequence

Dyyddd and Thhmmsst are optional qualifiers controlled in DSNZPARM by the
TIMESTAMP ARCHIVES option (YES or NO) of the DSNTIPH panel, and Dyyddd can
assume the format Dyyyyddd if the TIMESTAMP ARCHIVES option specifies the
extended (4-digit year) format.
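For example, again assuming the hypothetical qualifier DSNC910 and timestamped
archive names, the first archive log copy created on day 123 of 1999 at 12:34:56.7
would be named DSNC910.ARCHLOG1.D99123.T1234567.A000001.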

3.4.13 Performance considerations

3.4.13.1 Data write operations


When an application updates data, the updated pages are kept in the virtual buffer
pool. Eventually, the updated data pages in the virtual buffer pool have to be written
to disk. Write operations can be either asynchronous or synchronous with respect to
the execution of the unit of work.

3.4.13.2 Asynchronous Writes


Most DB2 writes are done asynchronously from the application program and chained
whenever possible. This helps performance and implies that the application may
have long since finished by the time its data updates are written to disk. Updated
pages are kept in the virtual buffer pool for possible reuse; the reuse ratio can be
obtained from the DB2 PM statistics report. Updated pages are written
asynchronously when:

 A checkpoint is taken, which happens whenever:


 The DB2 parameter LOGLOAD limit is reached.
 An active log is switched.
 The DB2 subsystem stops executing normally.
 The percentage of updated pages in a virtual buffer pool for a single data set
exceeds a preset limit called the vertical deferred write threshold (VDWQT).
 The percentage of unavailable pages in a virtual buffer pool exceeds a preset
limit called the deferred write threshold (DWQT).


Because these operations are independent from the application program, the DB2
accounting trace cannot show these writes. The DB2 PM statistics report is required
to see the asynchronous writes.

3.4.13.3 Synchronous writes

Synchronous writes occur exceptionally, when:

 The virtual buffer pool is too small and the immediate write threshold (IWTH) is
exceeded.
 More than two DB2 checkpoints have been taken during the execution of a unit of
work, and an updated page has not been written out to disk.

When the conditions for synchronous write occur, the updated page is written to disk
as soon as the update completes. The write is synchronous with the application
program's SQL request; that is, the application program waits until the write has
been completed. These writes are shown in the DB2 accounting trace and in the
DB2 PM statistics report.

3.4.13.4 Immediate write threshold


The immediate write threshold is reached when 97.5% of all pages in the virtual
buffer pool are unavailable; this value cannot be changed. Monitoring buffer pool
usage includes checking how often this threshold is reached. Generally, you want to
set virtual buffer pool sizes large enough to avoid reaching this threshold, because
reaching it has a significant effect on processor usage and I/O resource consumption.
For example, updating three rows per page in 10 sequential pages ordinarily requires
one or two asynchronous write operations; when IWTH is exceeded, the updates
require 30 synchronous writes.

3.4.13.5 Write quantity


DB2 writes a variable number of pages in each I/O operation. Figure 4 below shows
the maximum number of pages DB2 can write in a single asynchronous I/O
operation. Some utilities can write twice the amount shown. The actual number of
pages written in a time interval can be obtained from the DB2 PM statistics report.

Page Size   Maximum Pages
4 KB        32
8 KB        16
16 KB       8
32 KB       4

Figure 4: Number of pages written in one I/O

3.4.13.6 Tuning Write frequency


Large virtual buffer pools benefit DB2 by keeping data pages longer in storage, thus
avoiding I/O operations. With large buffer pools and very high write thresholds,
DB2 can write large amounts of data at system checkpoint time and impact
performance. DB2 administrators can tune virtual buffer pool parameters to cause
more frequent writes to disk and reduce the impact of the writes at system
checkpoint. The tuning parameters are the DWQT and the VDWQT. The DWQT works
at the virtual buffer pool level, while the VDWQT works at the data set level.


Table spaces containing pages that are frequently reread and updated should have a
high threshold: place them in a virtual buffer pool with a high DWQT or a high
VDWQT. This ensures that pages are reused in storage; the higher this rate, the
better the page reuse for writes in this virtual buffer pool. Large table spaces, where
updates are very scattered and page reuse is infrequent or improbable, can have
their threshold set low, even to zero. A zero threshold means that updated pages are
written to disk very frequently. In this case, the probability of finding the updated
page still in the disk cache is higher (a cache hit), helping disk performance. A low
threshold also reduces the write impact at checkpoint time.

Figure 5 below gives an extract of a DB2 PM accounting trace buffer pool report.
Care must be taken when trying to tune write efficiency with the LOGLOAD value:
DB2 checkpoint performance can be adversely impacted by a LOGLOAD value set
too high. LOGLOAD is the installation parameter that establishes the number of log
control intervals generated before taking a checkpoint. If this value is excessive, a
large amount of disk writing takes place at checkpoint, and the DB2 restart time in
case of failure is also impacted. With DB2 V6 the LOGLOAD value can be changed
dynamically to reflect changes in the workload.

TOT4K TOTAL
--------------------- --------
BPOOL HIT RATIO (%) 2
GETPAGES 6135875
BUFFER UPDATES 48
SYNCHRONOUS WRITE 0 F
SYNCHRONOUS READ 19559 A
SEQ. PREFETCH REQS 164649 B
LIST PREFETCH REQS 0 C
DYN. PREFETCH REQS 26065 D
PAGES READ ASYNCHR. 5943947 E
HPOOL WRITES 0
HPOOL WRITES-FAILED 0
PAGES READ ASYN-HPOOL 0
HPOOL READS 0
HPOOL READS-FAILED 0
Figure 5: DB2PM accounting trace

BP4 READ OPERATIONS QUANTITY /MINUTE /THREAD /COMMIT


--------------------------- -------- ------- ------- -------
BPOOL HIT RATIO (%) 55.12
GETPAGE REQUEST 221.8K 6534.43 N/C 110.9K
GETPAGE REQUEST-SEQUENTIAL 18427.00 542.99 N/C 9213.50
GETPAGE REQUEST-RANDOM 203.3K 5991.43 N/C 101.7K
SYNCHRONOUS READS 613.00 18.06 N/C 306.50
SYNCHRON. READS-SEQUENTIAL 64.00 1.89 N/C 32.00
SYNCHRON. READS-RANDOM 549.00 16.18 N/C 274.50
GETPAGE PER SYN.READ-RANDOM 370.36
SEQUENTIAL PREFETCH REQUEST 577.00 17.00 N/C 288.50
SEQUENTIAL PREFETCH READS 577.00 17.00 N/C 288.50
PAGES READ VIA SEQ.PREFETCH 18440.00 543.38 N/C 9220.00
S.PRF.PAGES READ/S.PRF.READ 31.96 K
LIST PREFETCH REQUESTS 0.00 0.00 N/C 0.00
LIST PREFETCH READS 0.00 0.00 N/C 0.00
PAGES READ VIA LIST PREFTCH 0.00 0.00 N/C 0.00
L.PRF.PAGES READ/L.PRF.READ N/C L
DYNAMIC PREFETCH REQUESTED 2515.00 74.11 N/C 1257.50
DYNAMIC PREFETCH READS 2515.00 74.11 N/C 1257.50
PAGES READ VIA DYN.PREFETCH 80470.00 2371.23 N/C 40.2K


D.PRF.PAGES READ/D.PRF.READ 32.00 M


Figure 6: DB2PM stats for buffer pool reads

BP1 WRITE OPERATIONS QUANTITY /MINUTE /THREAD /COMMIT
-------------------------- -------- ------- ------- -------
BUFFER UPDATES 15179.00 1517.84 410.24 0.67
PAGES WRITTEN 4608.00 0.00 N/C N/C
BUFF.UPDATES/PAGES WRITTEN J 3.29
SYNCHRONOUS WRITES G 0.00 0.00 N/C N/C
ASYNCHRONOUS WRITES H 187.00 18.70 5.05 0.01
PAGES WRITTEN PER WRITE I/O I 24.64
HORIZ.DEF.WRITE THRESHOLD 0.00 0.00 N/C N/C
VERTI.DEF.WRITE THRESHOLD 0.00 0.00 N/C N/C
DM CRITICAL THRESHOLD 0.00 0.00 N/C N/C
WRITE ENGINE NOT AVAILABLE 0.00 0.00 N/C N/C
Figure 7: DB2PM stats for buffer pool writes

3.4.13.7 Log Writes

Application programs create log records when data is updated. Each data update
requires two log records, one with the data before the update, and another with the
data after the update, generally combined into one physical record. The application
program uses two methods to move log records to the log output buffer:

 NOWAIT
 FORCE
3.4.13.7.1 No Wait
Most log records are moved to the log output buffer, and control is immediately
returned to the application program. These moves are the most common. If no log
buffer is available, the application must wait for one to become available. Log
records moved into the output buffer by an application program appear in a DB2 PM
statistics report as the number of NOWAIT requests.
3.4.13.7.2 Force
At commit time, the application must wait to ensure that all changes have been
written to the log. In this case, the application forces a write of the current and
previous unwritten buffers to disk. Because the application waits for this to be
completed, it is also called a synchronous write.
3.4.13.7.3 Physical writes
The log records in the log output buffer are written from the output buffer to disk.
DB2 uses two types of log writes, asynchronous and synchronous, which are
explained below.

DB2 writes the log records (the control intervals) from the output buffer to the active
log data set when the number of log buffers used reaches the value set at installation
in the WRITE THRESHOLD field of installation panel DSNTIPL. The application is
not aware of these writes.

Synchronous writes usually occur at commit time when an application has updated
data. This write is called forcing the log, because the application must wait for DB2
to write the log buffers to disk before control is returned to the application. If the log
data set is not busy, all log buffers are written to disk. If the log data set is busy, the
requests are queued until it is freed.
3.4.13.7.4 Writing to two logs
If there are two logs (recommended for availability), the write to the first log must,
in general, complete before the write to the second log begins. The first time a log
control interval is written to disk, the write I/Os to the log data sets are done in
parallel. However, if the same 4 KB log control interval is written to disk again, the
write I/Os to the log data sets must be done serially to prevent any possibility of
losing log data should I/O errors occur on both copies simultaneously. This method
improves system integrity. I/O overlap in dual logging occurs whenever multiple log
control intervals have to be written; for example, when the WRITE THRESHOLD
value is reached, or when log records accumulate because of a log device busy
condition.
3.4.13.7.5 Two phase commit log writes
IMS applications with DB2, and CICS and RRS applications with additional resources
besides DB2 to manage, use the two-phase commit protocol. Because they use two-
phase commit, these applications force writes to the log twice. The first write forces
all the log records of changes to be written (if they have not been written previously
because the write threshold was reached). The second write writes a log record that
takes the unit of recovery into an in-commit state.

Figure 8 depicts the log record path to the disk.

Figure 8: Log record path to the disk (the application program moves log records to
the log output buffer by No Wait or Force; from the output buffer, records are written
asynchronously or synchronously to the active log data set)


Figure 9: Unit of recovery in-commit state (at commit, the log is forced twice, at the
end of phase 1 and at the beginning of phase 2; each force drives I/O to Log1 and
Log2 while the application waits for logging)

3.4.13.7.6 Improving log write performance


In this section we present some considerations on choices to improve log write
performance.

The OUTPUT BUFFER field of installation panel DSNTIPL lets the system administrator
specify the size of the output buffer used for writing active log data sets. With DB2
V6, the maximum size of this buffer (OUTBUFF) is 400000 KB. Choose as large a size
as the MVS system can support without incurring additional paging; a large buffer
size improves both log read and log write performance. If the DB2 PM statistics
report shows a non-zero value for UNAVAILABLE OUTPUT LOG BUFF (counter B in
Figure 10), the log output buffer is too small.

The WRITE THRESHOLD field of installation panel DSNTIPL indicates the number of
contiguous 4KB output buffer pages that are allowed to fill before data is written to
the active log data set. The default is 20 buffers, and this is recommended. Never
choose a value that is greater than 20% of the number of buffers in the output
buffer.

The devices assigned to the active log data sets must be fast. In a transactional
environment, the DB2 log may have a very high write I/O rate and will have direct
impact on the transaction response time. In general, log data sets can make
effective use of the DASD Fast Write feature of IBM's 3990 cache.


To avoid contention on the disks containing active log data sets, place the data sets
so that the following objectives are achieved:

 Define log data sets on dedicated volumes


 If dual logging is used, separate the access path for primary and secondary log
data sets
 Separate the access path of the primary log data sets from the next log
data set pair

Do not place any other data sets on disks containing active log data sets. Place the
copy of the bootstrap data set and, if using dual active logging, the copy of the
active log data sets, on volumes that are accessible on a path different from that of
their primary counterparts. Place sequential sets of active log data sets on different
access paths to avoid contention while archiving. To achieve all this, a minimum of
three volumes on separate access paths is required for the log data sets.

3.4.13.8 Log Reads


It is during a rollback, restart, and database recovery that the performance impact of
log reads becomes evident. DB2 must read from the log and apply changes to the
data on disk. Every process that requests a log read has an input buffer dedicated to
that process. DB2 optimizes the log reads searching for log records in the following
order:

1. Log output buffer


2. Active log data set
3. Archive log data set

If the log records are in the output buffer, DB2 reads the records directly from that
buffer. If the log records are in the active or archive log, DB2 moves those log
records into the input buffer used by the reading process (such as a recovery job or
a rollback). From a performance point of view, it is always best for DB2 to obtain the
log records from the output buffer; these accesses are reported by DB2 PM. The
next fastest access for DB2 is the active log. Access to the archive log is not
desirable; it can be delayed for a considerable length of time. For example, tape
drives may not be available, or a tape mount may be required.
3.4.13.8.1 Improving log read performance
In this section we present some considerations on choices to improve log read
performance.

Active Log Size: Active logs should be large enough to avoid reading the archives,
especially during restart, rollback, and recovery. When data is backed out,
performance is optimal if the data is available from the output buffer or from the
active log. If the data is no longer available from the active log, the active log is
probably too small.

Log Input Buffer: The default size for the input buffer is 60 KB. It is specified in the
INPUT BUFFER field of installation panel DSNTIPL. The default value is
recommended.

Avoid Device Contention: Avoid device contention on the log data sets.


Archive to Disk or Tape: If the archive log data set resides on disk, it can be
shared by many log readers. In contrast, an archive on tape cannot be shared
among log readers. Although it is always best to avoid reading archives altogether, if
a process must read the archive, that process is serialized with anyone else who
must read the archive tape volume. For example, every rollback that accesses the
archive log must wait for any previous rollback work that accesses the same archive
tape volume to complete.

Archiving to disk offers several advantages:

 Recovery times can be reduced by eliminating tape mounts and rewind time for
archive logs kept on tape.
 Multiple RECOVER utilities can be run in parallel.
 DB2 log data can span a greater length of time than what is currently kept in
your active log data sets.
 Need for tape drives during DB2 archive log creation is eliminated. If DB2 needs
to obtain a tape drive on which to create the archive logs and it cannot allocate
one, all activity will stop until DB2 can create the archive log data sets.

If you allow DB2 to create the archive log data sets on RVA disks, you can take
advantage of the compression capability offered by the device. Depending on the
type of application data DB2 is processing and storing in the log data sets, you could
obtain a very good reduction in DASD occupancy with RVA and achieve good
recoverability at a reasonable price.

Archive to Disk and Tape: DB2 V5 introduced the option to archive one copy of
the log to disk and the other to tape. This allows more flexibility than archiving only
to tape, and saves disk space compared with archiving only to disk.

 In case of unavailability of tape units, you can, in fact, cancel the request for
allocation (having previously set the WRITE TO OPER parameter to YES in the
Archive Log) and let DB2 continue with a single archiving.
 Disk space utilization is improved by reducing the number of data sets for the
dual copy of active logs to one copy of the archive log data set on disk and one
on tape.

Active Log Size: The capacity the system administrator specifies for the active log
can affect DB2 performance significantly. If the capacity is too small, DB2 might
need to access data in the archive log during rollback, restart, and recovery.
Accessing an archive log generally takes longer than accessing an active log. An
active log that is too small is indicated by a non-zero value in READS
SATISFIED-ARCHIVE LOG (counter A in Figure 10).

Log Sizing Parameters: The following DB2 parameters affect the capacity of the
active log. In each case, increasing the value the system administrator specifies for
the parameter increases the capacity of the active log. The parameters are:

 The NUMBER OF LOGS field on installation panel DSNTIPL controls the number
of active log data sets.
 The ARCHIVE LOG FREQ field on installation panel DSNTIPL controls how often
active log data sets are copied to the archive log.
 The UPDATE RATE field on installation panel DSNTIPL is an estimate of how
many database changes (inserts, updates, and deletes) are expected per hour.
 The CHECKPOINT FREQ field on installation panel DSNTIPN specifies the number
of log records that DB2 writes between checkpoints.

The DB2 installation CLIST uses UPDATE RATE and ARCHIVE LOG FREQ to calculate
the data set size of each active log data set.

Calculating Average Log Record Size: One way to determine how much log
volume is needed is to calculate the average size in bytes of log records written. To
do this, the DB2 system administrator needs values from the statistics report: the
NOWAIT counter C, and the number of control intervals created in the active log,
counter D. Use the following formula:

avg size of log record in bytes = (D * 4096) / C

Using this value to estimate logging needs, plus considering the available device
sizes, the DB2 system administrator can update the output of the installation CLIST
to modify the calculated values for active log data set sizes.
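For instance, using the sample values from Figure 10 below (C = WRITE-NOWAIT =
2019.6K, that is 2,019,600 requests, and D = CONTR.INTERV.CREATED-ACTIVE =
59,442 control intervals):

avg size of log record in bytes = (59442 * 4096) / 2019600 = approximately 121 bytes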

Log Statistics in a Sample DB2 PM Statistics Report

LOG ACTIVITY QUANTITY /MINUTE /THREAD /COMMIT


--------------------------- -------- ------- ------- -------
READS SATISFIED-OUTPUT BUFF 15756.00 F 11.21 2.82 0.07
READS SATISFIED-OUTP.BUF(%) 100.00
READS SATISFIED-ACTIVE LOG 0.00 G 0.00 0.00 0.00
READS SATISFIED-ACTV.LOG(%) 0.00
READS SATISFIED-ARCHIVE LOG 0.00 A 0.00 0.00 0.00
READS SATISFIED-ARCH.LOG(%) 0.00
TAPE VOLUME CONTENTION WAIT 0.00 0.00 0.00 0.00
WRITE-NOWAIT C 2019.6K 1437.45 361.15 8.70
WRITE OUTPUT LOG BUFFERS E 250.3K 178.17 44.76 1.08
BSDS ACCESS REQUESTS 2041.00 1.45 0.36 0.01
UNAVAILABLE OUTPUT LOG BUFF B 0.00 0.00 0.00 0.00
CONTR.INTERV.CREATED-ACTIVE 59442.00 D 42.31 10.63 0.26
ARCHIVE LOG READ ALLOCATION 0.00 0.00 0.00 0.00
ARCHIVE LOG WRITE ALLOCAT. 2.00 0.00 0.00 0.00
CONTR.INTERV.OFFLOADED-ARCH 65023.00 46.28 11.63 0.28
READ DELAYED-UNAVAIL.RESOUR 0.00 0.00 0.00 0.00
LOOK-AHEAD MOUNT ATTEMPTED 0.00 0.00 0.00 0.00
LOOK-AHEAD MOUNT SUCCESSFUL 0.00 0.00 0.00 0.00

Figure 10: DB2PM sample report

3.4.14 Exercise
Questions
1. List down the points which affect log read and log write performance.
2. What is the log data set name of your installation?
Answers
1. Log write performance: OUTPUT BUFFER, WRITE THRESHOLD, data sets on
dedicated volumes, separate access paths for primary and secondary log data sets.
Log read performance: INPUT BUFFER, archive to disk or tape, NUMBER OF LOGS,
average log record size.
2. Depends on your installation.


3.5 DSNZPARMS

The subsystem parameter module is generated by job DSNTIJUZ each time you
install, migrate, or update DB2. Seven macros expand to form this data-only
subsystem parameter load module. It contains the DB2 execution-time parameters
that you selected using the ISPF panels. These seven macros are DSN6ARVP,
DSN6ENV, DSN6FAC, DSN6LOGP, DSN6SPRM, DSN6SYSP, and DSN6GRP. These
parameters provide the DB2 subsystem with much of the control information it
needs for its functioning. Here we will discuss just a few of them.

3.5.1 IRLMRWT
This is the number of seconds that a transaction will wait for a lock before a timeout
is detected. The IRLM uses this value for timeout and deadlock detection. Most shops
take the default of 60 seconds, but in more and more high-performance situations
where detection should occur sooner (so that applications do not incur excessive
lock wait times) it is set lower. If this threshold is exceeded, the application is
often reviewed and tuned. The simple philosophy in practice is that in high-
performance applications, those that issue 10,000 to 20,000 SQL statements per
second, waiting more than 5 seconds for a lock signals that something has really
gone wrong.

3.5.2 RECURHL
The use of this parameter can help concurrency. When it is set to YES, it allows DB2
to release the locks that are held by a cursor defined WITH HOLD, while still
maintaining the position of the open cursor.

3.5.3 XLKUPDLT
This parameter is new in Version 6. It allows you to specify the locking method to
use when a searched UPDATE or DELETE is performed. The default is NO, which is
best for concurrency: DB2 uses an S or U lock when scanning qualifying rows and
upgrades to an X lock when a qualifying row is found. The value YES is useful in
data-sharing environments when the search involves an index, because DB2 takes
an X lock on the qualifying rows or pages. During the scan of the index no data rows
are touched, but when a qualifying row is found an immediate request for an X lock
is issued, thus assuring that the update or delete completes rapidly once the lock is
acquired.

3.5.4 NUMLKTS
It represents the maximum number of locks on an object. If you turn off lock
escalation (LOCKMAX 0 on the table space), you need to increase this number. If you
use LOCKMAX SYSTEM on any individual table space, the NUMLKTS value defined
here is used as the system value.

3.5.5 NUMLKUS
It is the maximum number of page or row locks that a single application can hold
concurrently on all table spaces. This includes data pages, index pages, index
subpages (Type 1 indexes only), and rows. If you specify 0, there is no limit on the
number of locks. You should be careful with 0, because if you turn off lock escalation
and do not commit frequently enough, you could run into storage problems (DB2
uses approximately 250 bytes for each lock). These storage problems can completely
consume all available storage and potentially shut down the system.

3.5.6 LOGLOAD
The DB2 checkpoint interval is the factor that has the most influence on the speed
of DB2 startup. It is the number of log records that DB2 writes between successive
checkpoints, and it is controlled by the DB2 subsystem parameter LOGLOAD.
Choosing a reasonable LOGLOAD value involves balancing the speed of restart with
the overhead of taking more frequent checkpoints. Generally it is best to set
LOGLOAD so that a checkpoint is taken every 10 to 15 minutes during periods of
peak activity. In a data-sharing environment, carefully evaluate the value of
LOGLOAD for each member separately. If one member does frequent updates, its
LOGLOAD may be lower than that of a member that mainly issues queries. It is not
recommended to have a LOGLOAD value higher than 500,000.

3.5.7 Other Zparms


There is an exhaustive list of Zparms in Appendix A.

3.5.8 Exercise
Questions
1. List down the DSNZPARMs used by the IRLM.
2. What are the parameters for DB2 thread control?
Answers
1. IRLMRWT, RECURHL, XLKUPDLT, NUMLKTS, NUMLKUS
2. CMTSTAT, CTHREADS, IDTHTOIN, MAXTYPE1, POOLINAC

3.6 Storage Groups

A DB2 storage group is a set of volumes on direct access storage devices (DASD).
The volumes hold the data sets in which tables and indexes are actually stored. The
description of a storage group names the group and identifies its volumes and the
VSAM catalog that records the data sets.

All volumes of a given storage group must have the same device type; however,
parts of a single database can be stored in different storage groups.

It is a list of DASD volumes you specify to hold your DB2 objects. Storage is then
allocated from these volumes as your tables are loaded with data.

3.6.1 Retrieving Catalog Info about DB2 Storage Groups


SYSIBM.SYSSTOGROUP and SYSIBM.SYSVOLUMES contain information about DB2
storage groups and the volumes in those storage groups. The following query shows
what volumes are in a DB2 storage group, how much space is used, and when that
space was last calculated.

SELECT SGNAME, VOLID, SPACE, SPCDATE
  FROM SYSIBM.SYSVOLUMES, SYSIBM.SYSSTOGROUP
  WHERE SGNAME = NAME
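To report on a single storage group, the same join can simply be qualified further;
the storage group name used here (DSN8G510, the sample-database storage group)
is illustrative only:

SELECT SGNAME, VOLID, SPACE, SPCDATE
  FROM SYSIBM.SYSVOLUMES, SYSIBM.SYSSTOGROUP
  WHERE SGNAME = NAME
    AND SGNAME = 'DSN8G510'
  ORDER BY VOLID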


3.6.2 Designing Storage Groups and Managing DB2 Data

DB2 manages the auxiliary storage requirements of a DB2 database by using DB2
storage groups. Data sets in these DB2 storage groups are DB2-managed data sets.
These DB2 storage groups are not the same as storage groups defined by
DFSMS/MVS's storage management subsystem (DFSMS). A DB2 storage group is a
named set of DASD volumes, in which DB2 does the following:

 Allocates storage for table spaces and indexes


 Defines the necessary VSAM data sets
 Extends and deletes the VSAM data sets
 Alters VSAM data sets
The name of a DB2 storage group or database is an unqualified identifier of up to
eight characters. A DB2 storage group name must not be the same as the name of
any other storage group in the DB2 catalog, and a DB2 database name must not be
the same as the name of any other DB2 database.

3.6.3 Managing Your Own DB2 Data Sets


You might choose to manage your own VSAM data sets for reasons such as these:
 You have a large linear table space on several data sets. If you manage your own
data sets, you can better control the placement of the individual data sets on the
volumes. (Although you can do a similar type of control by using single-volume
DB2 storage groups.)
 You want to prevent deleting a data set within a specified time period, by using
the TO and FOR options of the access method services DEFINE and ALTER
commands. You can create and manage the data set yourself, or you can create
the data set with DB2 and use the ALTER command of access method services to
change the TO and FOR options.
 You are concerned about recovering dropped table spaces. Your own data set is
not automatically deleted when a table space is dropped, making it easier to
reclaim the data if the table space is dropped.
Managing DB2 auxiliary storage on your own involves using access method services
directly. To define the required data sets, use DEFINE CLUSTER; to add secondary
volumes to expanding data sets, use ALTER ADDVOLUMES; and to delete data sets,
use DELETE CLUSTER.
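As a minimal sketch (the catalog alias DSNCAT, database TESTDB, table space
TESTTS, volume serial VOL001, and the space values are all hypothetical), an access
method services DEFINE CLUSTER for a user-managed table space data set, anticipating
the naming and attribute requirements described in the next section, might look like:

DEFINE CLUSTER -
  ( NAME(DSNCAT.DSNDBC.TESTDB.TESTTS.I0001.A001) -
    LINEAR -
    REUSE -
    VOLUMES(VOL001) -
    KILOBYTES(720 720) -
    SHAREOPTIONS(3 3) ) -
  DATA -
  ( NAME(DSNCAT.DSNDBD.TESTDB.TESTTS.I0001.A001) ) -
  CATALOG(DSNCAT)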

You can define a data set for each of these items:


 A simple or segmented table space
 A partition of a partitioned table space
 A nonpartitioned index
 A partition of a partitioned index.
Furthermore, as table spaces and index spaces expand, you might need to provide
additional data sets. To take advantage of parallel I/O streams when doing certain
read-only queries, consider spreading large table spaces over different DASD
volumes that are attached on separate channel paths.


3.6.4 Requirements for Your Own Data Sets


DB2 checks whether you have defined your data sets correctly. If you plan to define
and manage VSAM data sets yourself, you must meet these requirements:
1. Define the data sets before you issue the CREATE TABLESPACE or CREATE
INDEX statement.
2. Give each data set a name with this format:
catname.DSNDBx.dbname.psname.I0001.Annn

The name variables are defined below:

catname
Integrated catalog name or alias (up to eight characters). Use the same name
or alias here as in the USING VCAT clause of the CREATE TABLESPACE and
CREATE INDEX statements.

x
C (for VSAM clusters) or D (for VSAM data components).

dbname
DB2 database name. If the data set is for a table space, dbname must be the
name given in the CREATE TABLESPACE statement. If the data set is for an
index, dbname must be the name of the database containing the base table.
If you are using the default database, dbname must be DSNDB04.

psname
Tablespace name or index name. This name must be unique within the
database. You will use this name on the CREATE TABLESPACE or CREATE
INDEX statement. (You can use a name longer than eight characters on the
CREATE INDEX statement, but the first eight characters of that name must be
the same as in the data set's psname.)

nnn
Data set number. For partitioned table spaces, the number is 001 for the first
partition, 002 for the second, and so forth, up to the maximum of 254
partitions. For a non-partitioning index on a partitioned table space that you
defined using the LARGE option, the maximum data set number is 128. For
simple or segmented table spaces, the number is 001 for the first data set.
When space runs short, DB2 issues a warning message. If the size of the data
set for a simple or segmented table space approaches 2 gigabytes, define
another data set. Give it the same name as the first data set, and the number
002. The next data set will be 003, and so forth. You might eventually need
up to 32 data sets (the maximum) for a simple or segmented table space. For
table spaces, it is possible to reach the 119-extent limit for the data set
before reaching the 2-gigabyte limit for a nonpartitioned table space or the
4-gigabyte limit for a partitioned table space. If this happens, DB2 does not
extend the data set.
3. You must use the DEFINE CLUSTER command to define the size of the
primary and secondary extents of the VSAM cluster. If you specify zero for
the secondary extent size, data set extension does not occur.


4. If you specify passwords when defining a VSAM data set, give your highest-
level password in the DSETPASS clause (in the CREATE TABLESPACE or
CREATE INDEX statement).
5. Define the data sets as LINEAR. Do not use RECORDSIZE or
CONTROLINTERVALSIZE; these attributes are invalid, and are replaced by the
specification LINEAR.
6. Use the REUSE option. You must define the data set as REUSE in order to use
the DSN1COPY utility.
7. Use SHAREOPTIONS (3,3).

The DEFINE CLUSTER command has many optional parameters that do not apply
when the data set is used by DB2. If you use the parameters SPANNED,
EXCEPTIONEXIT, SPEED, BUFFERSPACE, or WRITECHECK, VSAM applies them to
your data set, but DB2 ignores them when it accesses the data set. The OWNER
parameter value for clusters defined for storage groups is the first SYSADM
authorization ID specified at installation. When you drop indexes or table spaces for
which you defined the data sets, you must delete the data sets yourself unless you
want to reuse them. To reuse a data set, first commit, and then create a new table
space or index with the same name. When DB2 uses the new object, it overwrites
the old information with new information, destroying the old data.

Likewise, if you delete data sets, you must drop the corresponding table spaces and
indexes; DB2 does not do it automatically.

3.6.5 Implementing Your Storage Groups


When you create table spaces and indexes, you name the storage group from which
you want space to be allocated. Try to assign frequently accessed objects (indexes,
for instance) to fast devices and seldom-used tables to slower devices; that choice of
storage groups improves performance.
Here are some of the things that DB2 does for you in managing your auxiliary
storage requirements:
 When a table space is created, DB2 defines the necessary VSAM data sets using
VSAM access method services. After the data sets have been created, you can
process them with access method service commands that support VSAM control-
interval (CI) processing (for example, IMPORT and EXPORT).
 When a table space is dropped, DB2 automatically deletes the associated data
sets.
 When a data set in a simple table space reaches its maximum size of 2 GB, DB2
might automatically create a new data set. The primary data set allocation is
obtained for each new data set.
When needed, DB2 can extend individual data sets. When creating or reorganizing a
table space whose associated data sets already exist, DB2 deletes and then redefines
them.

When you want to move data sets to a new volume, you can alter the volumes list in
your storage group. DB2 will automatically relocate your data sets during utility
operations that build or rebuild a data set (LOAD REPLACE, REORG, and RECOVER).
With user-defined data sets, on the other hand, you must delete and redefine your
data sets in order to move them.


After you define a storage group, DB2 stores information about it in the DB2 catalog.
(This catalog is not the same as the integrated catalog facility catalog that describes
DB2 VSAM data sets). The catalog table SYSIBM.SYSSTOGROUP has a row for each
storage group and SYSIBM.SYSVOLUMES has a row for each volume. With the proper
authorization, you can display the catalog information about DB2 storage groups by
using SQL statements. Use storage groups whenever you can, either specifically or
by default. However, if you want to maintain closer control over the physical storage
of your tables and indexes, you can define and manage your own VSAM data sets
using VSAM access method services. For both user-managed and DB2-managed
data sets, you need at least one integrated catalog facility catalog, either user or
master, created with the integrated catalog facility. You must identify the integrated
catalog facility catalog (the "integrated catalog") when you create a storage group or
when you create a table space that does not use storage groups.

3.6.6 Exercise
Questions:
1. Given a table space, how do we find out the tables that reside in it?
2. Write an SQL query to find out the information about a specific storage group.

Answers:
1. Using SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLES.
2. SELECT SGNAME, VOLID, SPACE, SPCDATE
     FROM SYSIBM.SYSVOLUMES, SYSIBM.SYSSTOGROUP
     WHERE SGNAME = NAME

3.7 Review Questions

1. How does the architecture of DB2 differ from that of Oracle?

2. What are the steps involved if rollback operations need to be done from the
archive log?

3.8 Reference

 DB2 Admin guide (Manual)


 Red book on storage administration
 Other IDUG material


UNIT - IV
4. Data Services
4.1 Unit Objectives

This unit deals with the various memory handling details of the DB2 subsystem,
covering the main memory areas that DB2 requires for its functioning.

4.2 Introduction

DB2 requires a plethora of memory structures – commonly referred to as "pools" –
to manage, modify, and access the data it maintains. DB2 storage pools reside in the
DBM1 address space, which is 2 GB in size; but in most MVS environments, once
system and common storage areas are accounted for, less than 1.6 GB remains
available.

There are four pool types used by DB2 to cache information in memory as it
operates: buffer, EDM, RID and sort. The longer information can be cached in
memory, the better the chance the data can be reused by other processes – without
reading it from disk again. When disk I/O can be avoided, database performance is
enhanced.

However, constant vigilance is required to keep these pools optimally configured so
that DB2 applications can achieve high performance and deliver required service
levels. There are many factors to take into consideration for each type of pool, and
each must be examined and tuned without causing negative impact on any other.
This can be a time-consuming and error-prone job.

4.3 Buffer Pools

4.3.1 Introduction
Buffer pools are areas of virtual storage that temporarily store pages of table spaces
or indexes. When an application program accesses a row of a table, DB2 places the
page containing that row in a buffer. If the requested data is already in a buffer, the
application program does not have to wait for it to be retrieved from DASD. Avoiding
the need to retrieve data from DASD results in faster performance.
If the row is changed, the data in the buffer must be written back to DASD
eventually. But that write operation might be delayed until DB2 takes a checkpoint,
or until one of the related write thresholds is reached.
The data remains in the buffer until DB2 decides to use the space for another page.
Until that time, the data can be read or changed without a DASD I/O operation.
DB2 allows you to use up to 50 buffer pools that contain 4KB buffers and 10 buffer
pools that contain 32KB buffers. You can set the size of each of those buffer pools
separately when installing DB2. You can change the sizes and other characteristics of
a buffer pool at any time while DB2 is running, by using the ALTER BUFFERPOOL
command.
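For example (the buffer pool and size are illustrative), the following command
changes the size of a 4KB buffer pool to 4000 buffers while DB2 is running:

-ALTER BUFFERPOOL(BP1) VPSIZE(4000)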

4.3.2 Tuning Buffer Pools


DB2 buffer pools are the first of the four pools. DB2 includes a built-in caching
mechanism for data pages read from DASD to satisfy SQL statements from
applications. When data is read from disk, DB2 places it into a buffer pool. This
happens for both data pages and index pages. Whenever a user or program accesses
a particular piece of data, DB2 first checks for it in the buffer pools. If it exists there,
DB2 can avoid an expensive I/O to disk, thereby enhancing performance.

DB2 buffer pools provide one of the most productive areas for performance tuning
and optimization. In general, DB2 loves large buffer pools. But buffer pools are
backed by memory, and memory is expensive. So tuning the size of the DB2 buffer
pools appropriately, based on application workload, is important. However, it is also
complex: there are 80 different buffer pools used by DB2, each of which can have
multiple DB2 tablespaces and indexes assigned to them. And there are tuning knobs
for each of the 80 buffer pools that control things such as how the buffer pool is
optimized for random versus sequential access and how parallel operations are
handled.

Tuning considerations include determining how many buffer pools and hiperpools to
define, the size and settings of each pool, which objects should be grouped together
into which pools, and how the configuration should be impacted as workloads
change. Each of these tuning options can have a tremendous impact on the
performance of database applications.

Monitoring activity in the buffer pools for multiple, concurrent applications is often
too difficult for resource-constrained DBA groups. As such, buffer pool tuning is
usually accomplished only when a problem occurs. And therefore, resources may be
wasted (because buffer pools are over-allocated) or applications may be running
slower (if buffer pools are under-allocated).

Without Pool Advisor, the task of properly configuring buffer pools is typically too
great to be performed. First, the data required to classify and group similar objects is
not readily available without expensive, high-volume buffer traces. Second, data
objects at a typical DB2 site now number in the thousands or tens of thousands, as
seen in an ERP shop. The primary tuning methodology that Pool Advisor uses is
based on standard industry techniques advocated by IBM® and tuning professionals
for years. This methodology involves:

 Classifying the access characteristics of all database objects
 Grouping objects into pools so that those with similar characteristics are
together (and separated from objects with very different access characteristics)
 Configuring those pools to optimize their performance for the kind of access
techniques their assigned objects use the most

4.3.2.1 Buffer Pools and Hiperpools


If your DB2 subsystem is running under MVS Version 4 Release 3 or later, and the
Asynchronous Data Mover Facility of MVS is installed, you have the option of using
hiperspaces to extend DB2’s virtual buffer pools. A hiperspace is a storage space of
up to 2GB that a program can use as a data buffer. Hiperspace is addressable in 4KB
blocks; in other words, it is page addressable. For more information on hiperspace,
see MVS/ESA Programming: Extended Addressability Guide.

DB2 cannot directly manipulate data that resides in hiperspace, but it can transfer
the data from hiperspace into a regular DB2 buffer pool much faster than it could get
it from DASD. To distinguish between hiperpools and buffer pools, we now refer to
the regular DB2 buffer pools as virtual buffer pools.
On systems that have the prerequisite hardware and software, DB2 maintains two
levels of storage for each buffer pool:

 The first level of storage, the virtual buffer pool, is allocated from
DB2’s ssnmDBM1 address space. A virtual buffer pool is backed by
central storage, expanded storage, or auxiliary storage. The sum of all
DB2 virtual buffer pools cannot exceed 1.6GB.
 The second level of storage, the hiperpool, uses the MVS/ESA
hiperspace facility to utilize expanded storage only (ESO) hiperspace.
The sum of all hiperpools cannot exceed 8GB. Hiperpools are optional.

Virtual buffer pools hold the most frequently accessed data, while hiperpools serve
as a cache for data that is accessed less frequently. When a row of data is needed
from a page in a hiperpool, the entire page is read into the corresponding virtual
buffer pool. If the row is changed, the page is not written back to the hiperpool until
it has been written to DASD: all read and write operations to data in the page, and
all DASD I/O operations, take place in the virtual buffer pool. The hiperpool holds
only pages that have been read into the virtual buffer pool and might have been
discarded; they are kept in case they are needed again.

Because DASD read operations are not required for accessing data that resides in
hiperspace, response time is shorter than for DASD retrieval. Retrieving pages
cached in hiperpools takes only microseconds, rather than the milliseconds needed
for retrieving a page from DASD, which reduces transaction and query response
time.

A hiperpool is an extension to a virtual buffer pool and must always be associated
with a virtual buffer pool. You can define a hiperpool to be larger than its
corresponding virtual buffer pool.

Reducing the size of your virtual buffer pools and allocating hiperpools provides
better control over the use of central storage and can reduce overall contention for
central storage. The maximum expanded storage available on ES/9000 processors is
8GB.

A virtual buffer pool and its corresponding hiperpool, if defined, are built dynamically
when the first page set that references those buffer pools is opened.
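As an illustrative sketch (the pool name and sizes are hypothetical), a hiperpool is
defined by giving a virtual buffer pool a non-zero HPSIZE:

-ALTER BUFFERPOOL(BP2) VPSIZE(2000) HPSIZE(10000) CASTOUT(YES)

CASTOUT(YES) lets the system discard hiperpool pages when expanded storage
becomes constrained.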

4.3.2.2 Buffer Pool Pages


At any moment, a database virtual buffer pool can have three types of pages:

In-use pages: These are pages that are currently being read or updated. The data
they contain is available for use by other applications.

Updated pages: These are pages whose data has been changed but has not yet
been written to DASD. After an updated page has been written to DASD, it remains
in the virtual buffer pool, available for migration to the corresponding hiperpool; in
this case, the page is not considered to be “updated” until it is changed again.

Available pages: These pages can be considered for new use, to be overwritten by an
incoming page of new data. Both in-use pages and updated pages are unavailable in
this sense; they are not considered for new use.


4.3.2.3 Read Operations


DB2 uses three read mechanisms: normal read, sequential prefetch, and list
sequential prefetch.

Normal Read: Normal read is used when just one or a few consecutive pages are
retrieved. The unit of transfer for a normal read is one page.

Sequential Prefetch: Sequential prefetch is performed concurrently with other
operations of the originating application program. It brings pages into the virtual
buffer pool before they are required and reads several pages with a single I/O
operation. Sequential prefetch can be used to read data pages, by table space scans
or index scans with clustered data reference, and to read index pages in an index
scan. Sequential prefetch allows CP and I/O operations to be overlapped.

List Sequential Prefetch: This mechanism is used to prefetch data pages that are not
contiguous (such as through non-clustered indexes). List prefetch can also be used
by incremental image copy.

4.3.3 Write operations


Write operations are usually performed concurrently with user requests. Updated
pages are queued by data set until they are written when:

 A checkpoint is taken
 The percentage of updated pages in a virtual buffer pool for a single data set
exceeds a preset limit called the vertical deferred write threshold (VDWQT)
 The percentage of unavailable pages in a virtual buffer pool exceeds a preset
limit called the deferred write threshold (DWQT)

Up to 32 4KB pages, or 4 32KB pages, can be written in a single I/O operation.

4.3.3.1 Assigning a Table Space or Index to a Virtual Buffer Pool


You assign a table space or an index to a particular virtual buffer pool by a clause of
the following SQL statements: CREATE TABLESPACE, ALTER TABLESPACE, CREATE
INDEX, ALTER INDEX. The virtual buffer pool is actually allocated the first time a
table space or index assigned to it is opened.
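For example (the database, table space, table, index, and storage group names are
hypothetical), the assignment is made with the BUFFERPOOL clause:

CREATE TABLESPACE TESTTS IN TESTDB
  USING STOGROUP DSN8G510 PRIQTY 720 SECQTY 720
  BUFFERPOOL BP2

CREATE INDEX TESTIX ON TESTTAB (COL1)
  USING STOGROUP DSN8G510
  BUFFERPOOL BP3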

The table spaces and indexes of the directory (DSNDB01) and catalog (DSNDB06)
are assigned to BP0; you cannot change that assignment. BP0 is also the default
virtual buffer pool for sorting. It has a default size of 2000 buffers, and a minimum
of 56 buffers.

4.3.3.2 Buffer Pool Thresholds


DB2’s use of a virtual buffer pool or hiperpool is governed by several preset values
called thresholds. Each threshold is a level of use which, when exceeded, causes DB2
to take some action. When you reach some thresholds, it indicates a problem, while
reaching other thresholds merely indicates normal buffer management. The level of
use is usually expressed as a percentage of the total size of the virtual buffer pool or
hiperpool. For example, the “immediate write threshold” of a virtual buffer pool
(described in more detail later) is set at 97.5%; when the percentage of unavailable
pages in a virtual buffer pool exceeds that value, DB2 writes pages to DASD when
updates are completed.


4.3.3.2.1 Fixed Thresholds

Figure: Fixed thresholds of a virtual buffer pool, SPTH (90%), DMTH (95%), and
IWTH (97.5%), expressed as percentages of the pool occupied by unavailable pages
(updated pages, queued per data set, and in-use pages being handled); the
remaining available pages serve the normal read queue and the sequential prefetch
queue.

Some thresholds, like the immediate write threshold, you cannot change. Monitoring
buffer pool usage includes noting how often those thresholds are reached. If they are
reached too often, the remedy is to increase the size of the virtual buffer pool or
hiperpool, which you can do with the ALTER BUFFERPOOL command. Increasing the
size, though, can affect other buffer pools, depending on the total amount of central
and expanded storage available for your buffers.

The fixed thresholds are more critical for performance than the variable thresholds.
Generally, you want to set virtual buffer pool sizes large enough to avoid reaching
any of these thresholds, except occasionally. Each of the fixed thresholds is
expressed as a percentage of the buffer pool that might be occupied by unavailable
pages.
4.3.3.2.2 The fixed thresholds are (from highest to lowest value):

 Immediate Write Threshold (IWTH)--97.5%. This threshold is checked
whenever a page is to be updated. If it has been exceeded, the updated page is
written to DASD as soon as the update completes. The write is synchronous with
the SQL request; that is, the request waits until the write has been completed,
and the two operations are not carried out concurrently. Reaching this threshold
has a significant effect on processor usage and I/O resource consumption. For
example, updating three rows per page in 10 sequential pages ordinarily requires
one or two write operations. When IWTH is exceeded, however, the updates
require 30 synchronous writes.

Sometimes DB2 uses synchronous writes even when the IWTH is not exceeded; for
example, when more than two checkpoints pass without a page being written.
Situations such as these do not indicate a buffer shortage.


 Data Management Threshold (DMTH)--95%. This threshold is checked before a
page is read or updated. If the threshold has not been exceeded, DB2 accesses
the page in the virtual buffer pool once for each page, no matter how many rows
are retrieved or updated in that page. If the threshold has been exceeded, DB2
accesses the page in the virtual buffer pool once for each row that is retrieved or
updated in that page. In other words, retrieving or updating several rows in one
page causes several page access operations. Avoid reaching this threshold,
because it has a significant effect on processor usage. The DMTH is maintained
for each individual virtual buffer pool. When the DMTH is reached in one virtual
buffer pool, DB2 does not release pages from other virtual buffer pools.

 Sequential Prefetch Threshold (SPTH)--90%. This threshold is checked at two
different times: before scheduling a prefetch operation (if the threshold has been
exceeded, the prefetch is not scheduled) and during buffer allocation for an
already-scheduled prefetch operation (if the threshold has been exceeded, the
prefetch is canceled). When the sequential prefetch threshold is reached,
sequential prefetch is inhibited until more buffers become available. Operations
that use sequential prefetch, such as those using large and frequent scans, are
adversely affected.

4.3.3.2.3 Variable Thresholds

Changing a threshold in one virtual buffer pool or hiperpool has no effect on any
other virtual buffer pool or hiperpool. The variable thresholds are (from highest to
lowest default value):

 Sequential Steal Threshold (VPSEQT)
This threshold is a percentage of the virtual buffer pool that might be occupied by
sequentially accessed pages. These pages can be in any state: updated, in-use, or
available. Hence, any page might or might not count toward exceeding any other
buffer pool threshold. The default value for this threshold is 80%. You can change
that to any value from 0% to 100% by using the VPSEQT option of the ALTER
BUFFERPOOL command. This threshold is checked before stealing a buffer for a
sequentially accessed page instead of accessing the page in the virtual buffer pool.
If the threshold has been exceeded, DB2 tries to steal a buffer holding a
sequentially accessed page rather than one holding a randomly accessed page.
Setting the threshold to 0% would prevent any sequential pages from taking up
space in the virtual buffer pool; in this case, prefetch is disabled, and any
sequentially accessed pages are discarded as soon as they are released. Setting
the threshold to 100% would allow sequential pages to monopolize the entire
virtual buffer pool.

 Hiperpool Sequential Steal Threshold (HPSEQT)
This threshold is a percentage of the hiperpool that might be occupied by
sequentially accessed pages. The effect of this threshold on the hiperpool is
essentially the same as that of the sequential steal threshold on the virtual pool.
The default value for this threshold is 80%. You can change that to any value from
0% to 100% by using the HPSEQT option of the ALTER BUFFERPOOL command.
Because changed pages are not written to the hiperpool, HPSEQT is the only
threshold for hiperpools.

• Virtual Buffer Pool Parallel Sequential Threshold (VPPSEQT)
This threshold is a portion of the virtual buffer pool that might be used
to support parallel operations. It is measured as a percentage of the
sequential steal threshold (VPSEQT). Setting VPPSEQT to zero disables
parallel operation. The default value for this threshold is 50% of the
sequential steal threshold (VPSEQT). You can change that to any value from
0% to 100% by using the VPPSEQT option on the ALTER BUFFERPOOL command.

• Virtual Buffer Pool Assisting Parallel Sequential Threshold (VPXPSEQT)
This threshold is a portion of the virtual buffer pool that might be used
to assist with parallel operations initiated from another DB2 in the
data-sharing group. It is measured as a percentage of VPPSEQT. Setting
VPXPSEQT to zero disallows this DB2 from assisting with Sysplex query
parallelism at run time for queries that use this buffer pool. The default
value for this threshold is 0% of the parallel sequential threshold
(VPPSEQT). You can change that to any value from 0% to 100% by using the
VPXPSEQT option on the ALTER BUFFERPOOL command.
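
Because VPPSEQT is a percentage of VPSEQT, and VPXPSEQT is in turn a
percentage of VPPSEQT, the number of buffers eligible for each use is the
product of the ratios. A minimal sketch of that arithmetic in Python (the
function name and pool size are illustrative, not DB2 parameters):

def eligible_buffers(vpsize, vpseqt=80, vppseqt=50, vpxpseqt=0):
    # Buffers that sequentially accessed pages might occupy.
    sequential = vpsize * vpseqt / 100
    # Portion of the sequential share usable for parallel operations.
    parallel = sequential * vppseqt / 100
    # Portion of the parallel share usable to assist other DB2 members.
    assisting = parallel * vpxpseqt / 100
    return sequential, parallel, assisting

# A 10000-buffer pool with the default thresholds:
print(eligible_buffers(10000))  # (8000.0, 4000.0, 0.0)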

• Deferred Write Threshold (DWQT)
This threshold is a percentage of the virtual buffer pool that might be
occupied by unavailable pages, including both updated pages and pages in
use. The default value for this threshold is 50%. You can change that to
any value from 0% to 90% by using the DWQT option on the ALTER BUFFERPOOL
command. DB2 checks this threshold when an update to a page is completed.
If the percentage of unavailable pages in the virtual buffer pool exceeds
the threshold, write operations are scheduled for enough data sets (at up
to 128 pages per data set) to decrease the number of unavailable buffers
to 10% below the threshold; for example, if the threshold is 50%, the
number of unavailable buffers is reduced to 40%. When the deferred write
threshold is reached, the data sets with the oldest updated pages are
written asynchronously, and DB2 continues writing pages until the ratio
goes below the threshold.
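
A rough sketch of the write-down arithmetic just described (an
illustrative helper, not a DB2 interface):

def dwqt_write_target(vpsize, dwqt=50):
    # DB2 schedules writes until unavailable buffers fall 10 percentage
    # points below the threshold: 50% of the pool becomes 40%.
    return vpsize * (dwqt - 10) / 100

# For a 10000-buffer pool at the default DWQT of 50%, writes are
# scheduled until at most 4000 buffers are unavailable.
print(dwqt_write_target(10000))  # 4000.0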

• Vertical Deferred Write Threshold (VDWQT)
This threshold is expressed as a percentage of the virtual buffer pool
that might be occupied by updated pages from a single data set. The
default value for this threshold is 10%. You can change that to any value
from 0% to 90% by using the VDWQT keyword on the ALTER BUFFERPOOL command.
This threshold is checked whenever an update to a page is completed. If
the percentage of updated pages for the data set exceeds the threshold,
writes are scheduled for that data set. Because any buffers that count
toward VDWQT also count toward DWQT, setting VDWQT higher than DWQT has no
effect: DWQT is reached first, write operations are scheduled, and VDWQT
is never reached. Therefore, the ALTER BUFFERPOOL command does not allow
you to set VDWQT to a value greater than DWQT. This threshold is
overridden by certain DB2 utilities, which use a constant limit of 64
pages rather than a percentage of the virtual buffer pool size; LOAD,
REORG, and RECOVER use a constant limit of 128 pages.
4.3.3.2.4 Guidelines for Setting Buffer Pool Thresholds
Because increasing DWQT and VDWQT allows updated pages to use a larger portion
of the virtual buffer pool, setting DWQT and VDWQT to large values can have a
significant effect on the other thresholds. For example, for a work load in which
pages are frequently updated, and the set of pages updated exceeds the size of the
virtual buffer pool, setting both DWQT and VDWQT to 90% would probably cause the
sequential prefetch threshold (and possibly the data management threshold and the
immediate write threshold) to be reached frequently.

If a virtual buffer pool is large enough, it is unlikely that the default values of either
DWQT or VDWQT will ever be reached. In this case, there tend to be surges of write
I/Os as deferred writes are triggered by DB2 checkpoints. Lowering the VDWQT and
the DWQT could improve performance by distributing the write I/Os more evenly
over time.

If you set VPSEQT to 0%, the value of HPSEQT is essentially meaningless:
when sequential pages are not kept in the virtual buffer pool, they have
no chance of ever going to the hiperpool. There is, however, no
restriction against having a non-zero value for HPSEQT with a zero value
for VPSEQT.

Buffer pools used for queries and transactions: For a buffer pool used
exclusively for query processing, it is reasonable to set VPSEQT and
HPSEQT to 100%. For a buffer pool used for both query and transaction
processing, the values you set for VPSEQT and HPSEQT should depend on the
respective priority of the two types of processing. The higher you set
VPSEQT and HPSEQT, the better queries tend to perform, at the expense of
transactions.

4.3.3.3 Determining Size and Number of Buffer Pools


Within the limits of the real storage and expanded storage available to
DB2, making the virtual buffer pools large enough to increase the buffer
hit ratio can help your applications and queries. The buffer hit ratio is
a measure of how often a page access (a getpage) is satisfied without
requiring an I/O operation.

Do not automatically assume a low buffer pool hit ratio is bad. The hit ratio is a
relative value, based on the type of application. For example, an application that
browses huge amounts of data using table space scans might very well have a buffer
pool hit ratio of 0. What you want to watch for is those cases where the hit ratio
drops significantly for the same application. In those cases, it might be helpful to
investigate further.

4.3.3.4 Calculating the Buffer Pool Hit Ratio


To determine the approximate buffer hit ratio, you first need to determine how many
getpage operations did not require an I/O operation. To do this, subtract the number
of pages read from DASD (both synchronously and using prefetch) from the total
number of getpage operations. Then divide this number by the total number of
getpage operations to determine the hit ratio.

The highest possible value for the hit ratio is 1.0, which is achieved
when every page requested is always in the buffer pool. The lowest values
occur when requested pages are rarely found in the buffer pool; in that
case, the hit ratio can be 0 or even negative. When the hit


ratio is negative, it means that prefetch has brought pages into the buffer pool that
are not subsequently referenced, either because the query stops before it reaches
the end of the table space, or because the prefetched pages are stolen by DB2 for
reuse before the query can access them.

Hit ratios for additional processes: The hit ratio measurement becomes
less meaningful if additional processes, such as work files or utilities,
are using the buffer pool. Some utilities use a special type of getpage
request that reserves an empty buffer without requiring that the page be
read from DASD. For work files, there is always a hit when DB2 reads the
empty buffer for the input to sort, and then there is a read for the
output. The hit ratio can be calculated if the work files are isolated in
their own buffer pools; if they are, then the number of getpages used for
the hit ratio formula is divided in half, as follows:

((getpages / 2) - I/O) / (getpages / 2)
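
Both calculations are easy to express in a few lines of Python; a minimal
sketch (the function names are illustrative):

def hit_ratio(getpages, pages_read):
    # pages_read: pages read from DASD, synchronously and via prefetch.
    return (getpages - pages_read) / getpages

def workfile_hit_ratio(getpages, pages_read):
    # For isolated work-file buffer pools, halve the getpage count first.
    looks = getpages / 2
    return (looks - pages_read) / looks

print(hit_ratio(1000, 100))           # 0.9
print(workfile_hit_ratio(1000, 100))  # 0.8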

Measure the looks and the reads with the DB2 Server. Calculate the hit
ratio using LPAGBUFF (the number of requests) and PAGEREAD (the number of
misses): divide the hits (LPAGBUFF - PAGEREAD) by the number of looks
(LPAGBUFF):

(LPAGBUFF - PAGEREAD) / LPAGBUFF

Then increase the number of buffers. After a comparable workload has been
run, re-measure the looks and reads and recalculate the hit ratio.

As an example, assume you have the following COUNTER values:

Looks in the page buffers: LPAGBUFF = 14876 (requests)
DASD page reads: PAGEREAD = 1231 (misses)
Looks minus reads: LPAGBUFF - PAGEREAD = 13645 (hits)
Hit ratio: hits / requests = 13645 / 14876 = 0.917, or 91.7%

As a rule of thumb, anything below 90% is poor, and more buffers should be
used.

Your hit ratio has improved if it has increased in value; you are then
using fewer I/Os to do the same processing. Repeat this process until your
hit ratio no longer increases or until your system paging begins to
increase.

There are two methods for tuning buffers:

Method 1:
You should increase your number of buffers by a small amount at a time and monitor
the change in hit ratio, repeating this until you find your peak hit ratio. If you
increase your buffers by a large number, you may exceed the optimal number of
buffers without knowing it. This is because you may see an increase in the hit ratio,
but you will not know if a smaller number of buffers would give you the same
increase in hit ratio. To be sure, you would need to start decreasing the number of
buffers to see if the hit ratio decreases. If it does not decrease, you may still have


more than the optimal number of buffers. If it does decrease, then you
have less than the optimal number of buffers. It is better to have
slightly more than the optimal number of buffers than slightly less, and
by increasing the number of buffers in small increments, you will probably
be able to determine the optimal number of buffers more quickly.
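
Method 1 amounts to a simple hill climb. A sketch of the loop, assuming
set_buffers and measure_hit_ratio stand in for your site's configuration
and monitoring interfaces (both names are assumptions, not DB2 APIs):

def find_peak_buffers(set_buffers, measure_hit_ratio, start, step=50):
    # Increase the buffer count in small steps; stop at the first step
    # that no longer improves the hit ratio and keep the previous value.
    buffers, best_ratio = start, 0.0
    while True:
        set_buffers(buffers + step)
        ratio = measure_hit_ratio()  # run a comparable workload first
        if ratio <= best_ratio:
            set_buffers(buffers)     # back off to the peak setting
            return buffers
        buffers, best_ratio = buffers + step, ratio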

Method 2:
An alternative is the following, knowing that more buffers will almost
always reduce database I/O:

1. Work out how much storage you can spend without hitting paging
   problems.
2. Give all that space to the database.
3. Define 500 directory buffers and give all the remaining space to page
   buffers. NPAGBUF should be at least 1000 to take advantage of blocking
   (Multi-Block *BLOCKIO in VM or extended user buffering in VSE).
4. Monitor the COUNTER values for DASD I/O and calculate the buffer hit
   ratios.
5. Increase the directory buffers, decreasing the page buffers, until you
   get the best results, monitoring as described in step 4. Add eight
   directory buffers for every page buffer removed.
6. Decrease the page buffers until you notice that the results get worse.
   This determines how much buffer space you really exploit.

4.3.3.5 Buffer Pool Size Guidelines


DB2 handles large virtual buffer pools very efficiently. Searching in large virtual
buffer pools (100MB or more) does not use any more of the processor’s resources
than searching in smaller pools.

For processors dedicated to DB2, start with the default buffer pool sizes. You can
increase the buffer pool size as long as the number of I/Os continues to decrease, or
until paging becomes a problem. If your application uses random I/Os to access the
data, the number of I/Os might not decrease significantly unless the buffer pool is
larger than the table, and other applications require little concurrent buffer pool
usage.

Problems with Paging: When the buffer pool size requirements are excessive (real
storage plus expanded storage), the oldest buffer pool pages migrate to auxiliary
paging storage. Subsequent access to these pages results in a page fault. I/O must
bring the data into real storage. Paging of buffer pool storage impacts DB2
performance. The statistics for PAGE-INS REQUIRED FOR WRITE and PAGE-INS
REQUIRED FOR READ are useful in determining if the buffer pool size setting is too
large for available real storage.

4.3.3.6 Advantages of Large Buffer Pools


4.3.3.6.1 In general, larger buffer pool sizes can:
 Result in a higher buffer pool hit ratio, which can reduce the number of
I/O operations. Fewer I/O operations can reduce I/O contention, which
can provide better response time and reduce the processor resource
needed for I/O operations.
 Give an opportunity to achieve higher transaction rates with the same
response time. For any given response time, the transaction rate
depends greatly on buffer pool size.


 Prevent I/O contention for the most frequently used DASD devices,
particularly the catalog tables and frequently referenced user tables
and indexes. In addition, a large buffer pool is beneficial when a DB2
sort is used during a query, because I/O contention on the devices
containing the work file table spaces is reduced.

Watch for Storage Paging: If the large buffer pool size results in excessive real
storage paging to expanded storage, consider using hiperpools.

4.3.3.7 Choosing One or Many Buffer Pools


Reasons to Choose a Single Buffer Pool: If your system has any or all of the following
conditions, it is probably best to choose a single 4KB buffer pool:
 Not enough total buffer space for more than 10 000 4KB buffers.
 No people with the application knowledge necessary to do more
specialized tuning.
 It is a test system.

Reasons to Choose More than One Buffer Pool: The following are some
advantages to having more than one buffer pool:
 You can isolate data in separate buffer pools to favor certain
applications, data, and indexes.

For example, if you have large buffer pools, putting indexes into separate
pools from data might improve performance. You might want to put tables
and indexes that are updated frequently into a buffer pool with different
characteristics from those that are frequently accessed but infrequently
updated.

 You can put work files into a separate buffer pool. This can provide
better performance for sort-intensive queries. Applications that use
temporary tables use work files for those temporary tables. Keeping
work files separate allows you to monitor temporary table activity
more easily.

This process of segregating different activities and data into separate buffer
pools has the advantage of providing good and relatively inexpensive
performance diagnosis data from statistics and accounting traces.

4.3.3.8 Using the 32KB Buffer Pool


Though defining at least one 32KB buffer pool with the default storage
value is recommended, in general the use of a 32KB buffer pool should be
carefully considered. Data in table spaces that use a 32KB buffer pool is
stored and allocated as eight physical records, each 4KB in size.
Inefficiencies can occur if small records are stored in table spaces that
use a 32KB buffer pool. On the other hand, a 32KB page can be very good
for predominantly sequential processing where the record size is greater
than 1KB. For example, only one 2100-byte record can be stored in a 4KB
page, wasting almost half of the space, but storing the record in a 32KB
page can significantly reduce this waste.

4.3.3.9 Buffer Pool Queue Management


Pages used in buffer pools are of two types:

1. Random – read one at a time.
2. Sequential – read via prefetch.

These pages are queued separately on an LRU (random least recently used)
queue or an SLRU (sequential least recently used) queue. The VPSEQT
(sequential steal threshold) parameter decides the percentage of each
queue. This threshold normally requires two settings, one for online and
another for batch processes: batch processing is more sequential in nature
(more SLRU), whereas online processing is random in nature (more LRU).

Since Version 5, DB2 breaks these queues up into multiple LRU chains,
reducing the overhead of queue management: because each queue is smaller,
the latch taken at the head of the queue is held less often. Multiple
subpools are created for a large virtual buffer pool; their size is
controlled by DB2 and does not exceed 4000 VBP buffers in each subpool.
The LRU queue is managed within each of the subpools in order to reduce
buffer-pool latch contention when concurrency is high. Buffers are stolen
in round-robin fashion through the subpools.

In Version 6, a new page-stealing option was introduced with a new method
of queue management: FIFO can now be used instead of the default LRU. With
this method, the oldest pages are always moved out. This decreases the
cost of GETPAGE operations and reduces internal latch contention under
high concurrency. It should be used only where there is little or no I/O
and where the table space or index is resident in the buffer pool.
Separate buffer pools for LRU and FIFO objects can be set via the ALTER
BUFFERPOOL command with the new PGSTEAL option; FIFO selects the new
behavior, and LRU is the PGSTEAL default.

4.3.3.10 Monitoring and Tuning Buffer Pools Using Online Commands


The DISPLAY BUFFERPOOL and ALTER BUFFERPOOL commands allow you to monitor
and tune buffer pools and hiperpools on line, while DB2 is running, without the
overhead of running traces. You can use the ALTER BUFFERPOOL command to
change the size of a virtual buffer pool or hiperpool, some of the threshold values, or
the hiperpool CASTOUT attribute for active or inactive virtual buffer pools or
hiperpools. You can use the DISPLAY BUFFERPOOL command to display the current
status of one or more active or inactive buffer pools. For example, you
can display detailed statistics for buffer pool BP1 with the following
command:

DISPLAY BUFFERPOOL(BP1) DETAIL


4.3.3.10.1 Output of this display command includes the following details
• SYNC READ I/O (S) (A) shows the number of sequential synchronous read
I/O operations. Sequential synchronous read I/Os occur when prefetch is
disabled or when the requested pages are not consecutive. One way to
decrease this value (326 in the sample output, which might be high for
this application) is to increase the buffer pool size until the number of
read I/Os decreases, while avoiding paging between real storage and
expanded storage. To determine the total number of synchronous read I/Os,
add SYNC READ I/O (S) and SYNC READ I/O (R). In message DSNB412I, REQUESTS
(B) shows the number of times that sequential prefetch was triggered, and
PREFETCH I/O (C) shows the number of times that sequential prefetch
occurred. PAGES READ (D) shows the number of pages read using sequential
prefetch. If you divide the PAGES READ value by the PREFETCH I/O value,
you get 7.99; this is because the prefetch quantity for sort work files is
8 pages. For operations other than sorts, the prefetch quantity could be
up to 32 pages, depending on the application. SYS PAGE UPDATES (E)
corresponds to the number of buffer updates. SYS PAGES WRITTEN (F) is the
number of pages written to DASD. DWT HIT (G) is the number of times the
deferred write threshold (DWQT) was reached. This number is workload
dependent.
• VERTICAL DWT HIT (H) is the number of times the vertical deferred write
threshold (VDWQT) was reached. This value is per data set, and it is
related to the number of asynchronous writes.

If the number of synchronous read I/Os (A) and the number of sequential
prefetch I/Os (C) are relatively high, you would want to tune the buffer
pools by changing the buffer pool specifications. For example, you could
make the buffer operations more efficient by defining a hiperpool if you
have expanded storage on your machine. To do that, enter the following
command:

ALTER BUFFERPOOL(BP1) VPSIZE(20000) HPSIZE(20000) CASTOUT(NO)

To obtain buffer pool information on a specific data set, you can use the
LSTATS option of the DISPLAY BUFFERPOOL command. For example, you can use
the LSTATS option to provide page count statistics for a certain index;
with this information, you could determine whether a query used the index
in question, and perhaps drop the index if it was not used. You can also
use it to monitor the response times on a particular data set; if you
determine that I/O contention is occurring, you could redistribute the
data sets across your available DASD.

4.3.4 Exercise
Questions
1. List the different types of buffer pool thresholds.
2. If you have 1000 getpages and 100 pages were read from the DASD, what
would be the hit ratio?
Answers
1. Fixed thresholds: Immediate Write Threshold (IWTH), Data Management
Threshold (DMTH), Sequential Prefetch Threshold (SPTH)
Variable thresholds: Sequential Steal Threshold (VPSEQT), Hiperpool
Sequential Steal Threshold (HPSEQT), Virtual Buffer Pool Parallel
Sequential Threshold (VPPSEQT), Virtual Buffer Pool Assisting Parallel
Sequential Threshold (VPXPSEQT), Deferred Write Threshold (DWQT), Vertical
Deferred Write Threshold (VDWQT)
2. (1000 - 100) / 1000 = 0.9; the hit ratio in this case is 0.9.

4.4 EDM pools

4.4.1 Introduction

The second of DB2’s four pools is the EDM pool. The EDM pool is used by DB2 to
control programs as they execute against DB2. It contains structures that house the


access paths of the SQL statements for running programs. The EDM pool also
contains database information (DBDs) for the databases being accessed. Managing
the size and efficiency of the EDM pool is vitally important because if there is no
room in the EDM pool, a critical application may not be allowed to run.

The EDM pool is also used to cache prepared dynamic SQL statements. Caching can
help when statements are recalled from memory, using far fewer resources than
statements that need to be prepared again. However, DB2 caches all prepared
dynamic statements, not just those that are used repetitively.

4.4.2 Tuning the EDM Pool


During the installation process, DSNTINST CLIST calculates the size of the EDM pool,
based on parameters specified on the DSNTIPD and DSNTIP panels. The EDM pool
contains:

• Database descriptors (DBDs)
• Skeleton cursor tables (SKCTs)
• Cursor tables (CTs), or copies of the SKCTs
• Skeleton package tables (SKPTs)
• Package tables (PTs), or copies of the SKPTs
• An authorization cache block for each plan, excluding plans that you
created specifying CACHESIZE(0)
• Skeletons of dynamic SQL, if your installation has YES for the CACHE
DYNAMIC SQL field of installation panel DSNTIP4

4.4.2.1 Using Packages to Aid EDM Pool Storage Management


By using multiple packages you can increase the effectiveness of EDM pool storage
management by decreasing the number of large objects in the pool.

4.4.2.2 Releasing thread storage


If your EDM pool storage grows continually, consider having DB2 periodically free
unused thread storage. To do this, specify YES for the CONTSTOR subsystem
parameter and then reassemble DSNTIJUZ. This option can affect performance and is
best used when your system has many long-running threads and your EDM storage
is constrained.

4.4.2.3 EDM Pool Space Handling


When pages are needed for the EDM pool, any pages that are available are allocated
first. If the available pages do not provide enough space to satisfy the request,
pages are “stolen” from an inactive SKCT, SKPT, DBD, or dynamic SQL skeleton. If
there is still not enough space, an SQL error code is sent to the application program.
You should design the EDM pool to contain:

• The CTs, PTs, and DBDs in use
• The SKCTs for the most frequently used applications
• The SKPTs for the most frequently used applications
• The DBDs referred to by these applications
• The cache blocks for your plans that have caches
• The skeletons of the most frequently used dynamic SQL statements, if
your system has enabled the dynamic statement cache


By designing the EDM pool this way, you can avoid allocation I/Os, which can
represent a significant part of the total number of I/Os for a transaction. You can
also reduce the processing time necessary to check whether users attempting to
execute a plan are authorized to do so.

An EDM pool that is too small causes:

• Increased I/O activity in DSNDB01.SCT02, DSNDB01.SPT01, and
DSNDB01.DBD01
• Increased response times, due to loading the SKCTs, SKPTs, and DBDs; if
caching of dynamic SQL is used, and the needed SQL statement is not in the
EDM pool, that statement has to be prepared again
• Fewer threads used concurrently, due to a lack of storage

An EDM pool that is too large might use more virtual storage than
necessary.

Implications for database design: When you design your databases, be
aware that a very large number of objects in your database means a larger
DBD for that database. When you drop objects, storage is not automatically
reclaimed in that DBD, which can mean that DB2 must take more locks for
the DBD. To reclaim storage in the DBD, use the MODIFY utility. The DB2
statistics record provides information on the EDM pool.

EDM POOL                       QUANTITY
-----------------------------  --------
PAGES IN EDM POOL          A   16218.00
% PAGES IN USE                     6.07
FREE PAGES IN FREE CHAIN   B   15233.96
PAGES USED FOR CT                 36.16
PAGES USED FOR DBD               136.36
PAGES USED FOR SKCT              755.71
PAGES USED FOR PT                  4.41
PAGES USED FOR SKPT               51.40
FAILS DUE TO POOL FULL             0.00
REQUESTS FOR CT SECTIONS         135.1K
CT NOT IN EDM POOL               984.00
CT REQUESTS/CT NOT IN EDM  C     137.31
REQUESTS FOR PT SECTIONS       28302.00
PT NOT IN EDM POOL               134.00
PT REQUESTS/PT NOT IN EDM  D     211.21
REQUESTS FOR DBD SECTIONS      45799.00
DBD NOT IN EDM POOL               38.00
DBD REQUESTS/DBD NOT IN EDM E   1205.24
PREP_STMT_HIT_RATIO        F       0.67
PREP_STMT_CACHE_INSERTS            0.30
PREP_STMT_CACHE_REQUESTS           0.90
PREP_STMT_CACHE_PAGES_USED        47.11

Figure 1: EDM pool utilization in the DB2 PM statistics report

The important values to monitor are:


Efficiency of the Pool: You can measure the efficiency of the EDM pool by using the
following ratios:

 CT REQUESTS/CT NOT IN EDM C


 PT REQUESTS/PT NOT IN EDM D
 DBD REQUESTS/DBD NOT IN EDM E



These ratios for the EDM pool depend upon your location's workload. In
most DB2 subsystems, a value of 5 or more is acceptable; this means that
at least 80% of the requests were satisfied without I/O. The number of
free pages is shown in FREE PAGES IN FREE CHAIN (B); if this value is more
than 20% of PAGES IN EDM POOL (A) during peak periods, the EDM pool size
is probably too large, and you can reduce it without affecting the
efficiency ratios significantly.

Calculating the EDM pool hit ratio for cached dynamic SQL: If you have
caching turned on for dynamic SQL, the EDM pool statistics have
information that can help you determine how successful your applications
are at finding statements in the cache. See mapping macro DSNDQISE for
descriptions of these fields. QISEDSG records the number of requests to
search the cache. QISEDSI records the number of times that a statement was
inserted into the cache, which can be interpreted as the number of times a
statement was not found in the cache. Use the following calculation to
determine how often the dynamic statement was used from the cache:

(QISEDSG - QISEDSI) / QISEDSG = hit ratio
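
A minimal sketch of this calculation; the argument names follow the
DSNDQISE fields described above:

def dynamic_cache_hit_ratio(qisedsg, qisedsi):
    # qisedsg: cache search requests; qisedsi: inserts (i.e., misses).
    return (qisedsg - qisedsi) / qisedsg

# With the PREP_STMT rates shown in Figure 1 (0.90 requests and 0.30
# inserts), the result is about 0.67, matching PREP_STMT_HIT_RATIO.
print(dynamic_cache_hit_ratio(0.90, 0.30))  # 0.666...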

4.4.3 Exercise
Questions
1. List the problems that are caused when the EDM pool is too small or
too large.
2. What are the ratios that indicate the efficiency of the pool?
Answers
1. Too small: increased I/O activity in DSNDB01.SCT02, DSNDB01.SPT01, and
DSNDB01.DBD01; increased response times, due to loading the SKCTs, SKPTs,
and DBDs; fewer threads used concurrently, due to a lack of storage. Too
large: more virtual storage used than necessary.
2. CT REQUESTS/CT NOT IN EDM (C), PT REQUESTS/PT NOT IN EDM (D), DBD
REQUESTS/DBD NOT IN EDM (E)

4.5 RID pools

4.5.1 Introduction
The RID pool, third of the four pools, is used by DB2 to sort RIDs for
list prefetch, multiple index access, and hybrid join access paths. RID
pool failures can cause performance degradation as alternate access paths,
such as scans, are invoked, and the CPU time invested up to the point of
the failure is wasted.

4.5.2 Increasing RID Pool Size


The RID pool is used for all record identifier (RID) processing. It is used for sorting
RIDs during the following operations:

• List prefetch, including single index list prefetch
• Access via multiple indexes
• Hybrid joins

RID pool storage is also used when DB2 enforces unique keys while updating
multiple rows. SQL statements that use those methods of access can benefit from
using the RID pool. RID pool processing can help reduce I/O resource consumption
and elapsed time. However, if there is not enough RID pool storage, it is possible
that the statement might revert to a table space scan. To determine if a transaction
used the RID pool, see the RID Pool Processing section of the DB2 PM accounting


trace record. The RID pool, which concurrent processes share, is limited to a
maximum of 1000MB. The RID pool is created at system initialization, but no space
is allocated until RID storage is needed. It is then allocated above the 16MB line in
16KB blocks as needed, until the maximum size you specified on installation panel
DSNTIPC is reached.

The general formula for computing RID pool size is:

number of concurrent RID processing activities * average number of RIDs
* 2 * 5 bytes per RID

For example, three concurrent RID processing activities, with an average
of 4000 RIDs each, would require 120KB of storage, because:

3 * 4000 * 2 * 5 = 120KB
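
The same formula as a small helper (a sketch; the names are illustrative):

def rid_pool_bytes(concurrent_activities, avg_rids):
    # formula: activities * average RIDs * 2 * 5 bytes per RID
    return concurrent_activities * avg_rids * 2 * 5

print(rid_pool_bytes(3, 4000))  # 120000 bytes, that is, 120KB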

Whether your SQL statements that use RID processing complete efficiently or not
depends on other concurrent work using the RID pool.
When the DSNTINST CLIST calculates the value for RID POOL SIZE on panel
DSNTIPC, the default is calculated as 50% of the sum of virtual buffer pools BP0,
BP1, BP2, and BP32K.

You can modify the maximum RID pool size that you specified on
installation panel DSNTIPC by using the installation panels in UPDATE
mode, as follows:

• To favor the selection and efficient completion of list prefetch,
multiple index access, or hybrid join, you can increase the maximum RID
pool size.
• To disable list prefetch, multiple index access, and hybrid join,
specify a RID pool size of 0. If you do this, plans or packages that were
previously bound with a non-zero RID pool size might experience
significant performance degradation; rebind any plan or package that
includes SQL statements that use RID processing.

4.5.3 Exercise
Questions
1. How much storage would three concurrent RID processing activities,
with an average of 4000 RIDs each, require?

Answers
1. 3 * 4000 * 2 * 5 = 120KB

4.6 Sort pools

4.6.1 Introduction
The sort pool is used by DB2 to perform highly efficient internal sorts
of data. Sort pool performance degradation can impact elapsed times
dramatically, and sort failures can terminate a statement.

4.6.2 Controlling Sort Pool Size and Sort Processing


Sort is invoked when a cursor is opened for a SELECT statement that requires
sorting. The maximum size of the sort work area allocated for each concurrent sort
user depends on the value you specified for the SORT POOL SIZE field on installation
panel DSNTIPC.


When the DSNTINST CLIST calculates the value for SORT POOL SIZE on panel
DSNTIPC, the default is calculated as 10% of the sum of virtual BP0, BP1, BP2, and
BP32K. The default is limited as follows:

MINIMUM = 240KB
MAXIMUM = 64000KB

You can change this value by using the installation panels in UPDATE mode. A rough
formula for determining the maximum sort pool size is as follows:

16000 * (12 + sort key length + sort data length + 4 (if hardware sort))

For sort key length and sort data length, use values that represent the maximum
values for the queries you run. To determine these values, refer to fields QW0096KL
(key length) and QW0096DL (data length) in IFCID 0096, as mapped by macro
DSNDQW01. You can also determine these values from an SQL activity trace.

For example, if you wanted to run the following query:


SELECT C1, C2, C3
FROM tablex
ORDER BY C1;

and C1, C2, and C3 are each 10 bytes in length, for an MVS/ESA system you could
estimate the sort pool size as follows:
16000 * (12 + 4 + 10 + (10 + 10 + 10)) = 896000 bytes

where:
16000    = maximum number of sort nodes
12       = size (in bytes) of each node
4        = number of bytes added for each node if the sort facility
           hardware is used
10       = sort key length (ORDER BY C1)
10+10+10 = sort data length (each column is 10 bytes in length)
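
The same estimate as a small helper (a sketch; the argument names are
illustrative):

def sort_pool_bytes(key_len, data_len, hardware_sort=True, nodes=16000):
    # 12 bytes per node, plus 4 more if the sort facility hardware is used.
    node_size = 12 + (4 if hardware_sort else 0) + key_len + data_len
    return nodes * node_size

# ORDER BY C1 with three 10-byte columns selected:
print(sort_pool_bytes(key_len=10, data_len=30))  # 896000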

4.6.3 Understanding How Sort Work Files Are Allocated


The sort begins with the input phase when ordered sets of rows are written to work
files. At the end of the input phase, when all the rows have been sorted and inserted
into the work files, the work files are merged together, if necessary, into one work
file containing the sorted data. The merge phase is skipped if there is only one work
file at the end of the input phase. In some cases, intermediate merging might be
needed if the maximum number of sort work files has been allocated.

The work files used in sort are logical work files, which reside in work file table
spaces in your work file database (which is DSNDB07 in a non data-sharing
environment). DB2 uses the buffer pool when writing to the logical work file. The
number of work files that can be used for sorting is limited only by the buffer pool
size when you have the sort assist hardware. If you do not have the sort hardware,
up to 140 logical work files can be allocated per sort, and up to 255 work files can be
allocated per user.

It is possible for a sort to complete in the buffer pool without I/Os. This is the ideal
situation, but it might be unlikely, especially if the amount of data being sorted is
large. The sort row size is actually made up of the columns being sorted (the sort
key length) and the columns the user selects (the sort data length). When your


application needs to sort data, the work files are allocated on a least recently used
basis for a particular sort. For example, if five logical work files (LWFs) are to be
used in the sort, and the installation has three work file table spaces (WFTSs)
allocated, then:

LWF 1 would be on WFTS 1.
LWF 2 would be on WFTS 2.
LWF 3 would be on WFTS 3.
LWF 4 would be on WFTS 1.
LWF 5 would be on WFTS 2.
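
The assignment is a simple round robin over the available work file table
spaces; a sketch of the pattern (purely illustrative, not DB2 internals):

def assign_workfiles(num_lwfs, num_wftss):
    # Logical work file n goes to table space ((n - 1) mod count) + 1.
    return {n: (n - 1) % num_wftss + 1 for n in range(1, num_lwfs + 1)}

print(assign_workfiles(5, 3))  # {1: 1, 2: 2, 3: 3, 4: 1, 5: 2}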

To support large sorts, DB2 can allocate a single logical work file to several physical
work file table spaces.

4.6.4 Factors That Influence Sort Processing


You can influence the following factors that affect the performance of
DB2 sort processing. Design your configuration to ensure minimal I/O
contention on the I/O paths to the physical work files. Also, make sure
that physical work files are allocated on different I/O paths and packs to
minimize I/O contention.

Allocate additional physical work files in excess of the defaults, and put those work
files in their own buffer pool. Segregating work file activity enables you to better
monitor and tune sort performance. It also allows DB2 to handle sorts more
efficiently because these buffers are available only for sort without interference from
other DB2 work. Applications using temporary tables use work file space until a
COMMIT or ROLLBACK occurs. (If a cursor is defined WITH HOLD, then the data is
held past the COMMIT.) If sorts are happening concurrently with the temporary
table's existence, then you probably need more space to handle the additional use of
the work files.

The size of the sort pool affects the performance of the sort: the larger
the work area, the more efficient the sort. When the sort occurs, the sort
row size depends on the data fields that need to be sorted. Therefore,
your applications should sort only those columns that need to be sorted,
as the key fields appear twice in the sort row. The smaller the sort row
size, the more rows can fit. VARCHARs are padded to their maximum length;
therefore, if VARCHAR columns are not required, your application should
not select them. This will reduce the sort row size.

Other factors that influence sort performance include the following:

The better sorted the data is, the more efficient the sort will be. If
the buffer pool deferred write threshold (DWQT) or the data set deferred
write threshold (VDWQT) is reached, writes are scheduled. For a large sort
using many logical work files, this is difficult to avoid, even if a very
large buffer pool is specified.

If I/Os occur in the sorting process, in the merge phase DB2 uses sequential prefetch
to bring pages into the buffer pool with a prefetch quantity of one, two, four, or eight
pages. However, if the buffer pool is constrained, then prefetch could be disabled
because not enough pages are available.

If your DB2 subsystem is running on a processor that has the sort facility hardware
instructions, you will see an improvement in the performance of SQL statements that


contain any of the following: ORDER BY clause, GROUP BY clause, CREATE INDEX
statement, DISTINCT clause of subselect, and joins and queries that use sort.

For any SQL statement that initiates sort activity, the DB2 PM SQL activity reports
provide information on the efficiency of the sort involved.

4.6.5 Exercise

Questions:
1. If you wanted to run the query SELECT C1, C2, C3 FROM tablex ORDER BY
C1, and C1, C2, and C3 are each 10 bytes in length, estimate the sort
pool size for an MVS/ESA system.

Answers:
1. 16000 * (12 + 4 + 10 + (10 + 10 + 10)) = 896000 bytes
where:
16000    = maximum number of sort nodes
12       = size (in bytes) of each node
4        = number of bytes added for each node if the sort facility
           hardware is used
10       = sort key length (ORDER BY C1)
10+10+10 = sort data length (each column is 10 bytes in length)

4.7 DB2 Directory

4.7.1 Introduction
The DB2 directory contains information required to start DB2, and DB2 uses the
directory during normal operation. You cannot access the directory using SQL. The
structures in the directory are not described in the DB2 catalog.

The directory consists of a set of DB2 tables stored in five table spaces in system
database DSNDB01. Each of the following table spaces is contained in a VSAM linear
data set:
 SCT02 is the skeleton cursor table space (SKCT).
 SPT01 is the skeleton package table space.
 SYSLGRNX is the log range table space.
 SYSUTILX is the system utilities table space.
 DBD01 is the database descriptor (DBD) table space.

4.7.2 Contents of the Directory


The directory contains DSNSCT02, an index space for SCT02; DSNSPT01 and
DSNSPT02, index spaces for SPT01; DSNLLX01 and DSNLLX02, indexes for
SYSLGRNX; and DSNLUX01 and DSNLUX02, the indexes for SYSUTILX.

Skeleton Cursor Table: The skeleton cursor table space (SCT02) contains a table
that describes the internal form of SQL statements of application programs. When
you bind a plan, DB2 creates a skeleton cursor table in SCT02. The index space for
the skeleton cursor table is DSNSCT02.

Skeleton Package Table: The skeleton package table space (SPT01) contains a
table that describes the internal form of SQL statements in application programs.


When you bind a package, DB2 creates a skeleton package table in SPT01. The index
spaces for the skeleton package table are DSNSPT01 and DSNSPT02.

Log Range: DB2 inserts a row in the log range table space (SYSLGRNX) every time
a table space or partition is opened and updated, and it updates SYSLGRNX
whenever that structure is closed. The row contains the opening log relative byte
address (RBA), the closing log RBA, or both for the structure. The log RBA is the
relative byte address in the log data set where open and close information about the
structure is contained. The use of SYSLGRNX speeds up recovery by limiting the log
information that must be scanned to apply changes to a table space or partition
being recovered.
The two indexes defined on SYSLGRNX are DSNLLX01 (a clustered index) and
DSNLLX02.

System Utilities: DB2 inserts a row in the system utilities table space (SYSUTILX)
for every utility job that is run. The row remains there until the utility completes its
full processing. If the utility terminates without completing, DB2 uses the information
in the row to restart the utility. The indexes defined on SYSUTILX are DSNLUX01 and
DSNLUX02.

Database Descriptor: The database descriptor table space (DBD01) contains
internal control blocks, called database descriptors (DBDs), that describe
the databases existing within DB2. Each database has exactly one
corresponding DBD that describes the database, table spaces, tables, table
check constraints, indexes, and referential relationships. A DBD also
contains other information about accessing tables in the database. DB2
creates and updates DBDs whenever their corresponding databases are
created or updated.

Among the contents of the database descriptor table space (DBD01), the
DBDs DBD02, DBD04, and DBD06 are shipped with DB2; other DBDs are produced
when databases are created.

4.7.3 Exercise
Questions
1. List down the contents of the DB2 directory.

Answers
1. SCT02 is the skeleton cursor table space (SKCT), SPT01 is the skeleton
package table space, SYSLGRNX is the log range table space, SYSUTILX is
the system utilities table space, DBD01 is the database descriptor (DBD)
table space

4.8 DB2 Catalog tables

4.8.1 Introduction
The DB2 catalog consists of tables of data about everything defined to
the DB2 system. The DB2 catalog is contained in system database DSNDB06.
When you create, alter, or drop any structure, DB2 inserts, updates, or
deletes rows of the catalog that describe the structure and tell how the
structure relates to other structures. For Version 5, the communications
database (CDB) is moved into the catalog.


4.8.2 Examples
DB2 has extensive support to help move your applications into the next millennium.
The Version 5 catalog supports timestamps generated both before and after the year
2000. To illustrate the use of the catalog, here is a brief description of some of what
happens when the employee table is created:
 To record the name of the structure, its owner, its creator, its type (alias,
table, or view), the name of its table space, and the name of its database,
DB2 inserts a row into the catalog table SYSIBM.SYSTABLES.
 To record the name of the table to which the column belongs, its length, its
data type, and its sequence number in the table, DB2 inserts rows into
SYSIBM.SYSCOLUMNS for each column of the table.
 To increase by one the number of tables in the table space DSN8S51E, DB2
updates the row in the catalog table SYSIBM.SYSTABLESPACE.
 To record that the owner (DSN8510) of the table has all privileges on the
table, DB2 inserts a row into table SYSIBM.SYSTABAUTH.

Because the catalog consists of DB2 tables in a DB2 database, you can use
SQL statements to retrieve information from it, as the sketch below
illustrates. There are many more catalog tables, with the creator SYSIBM;
for more information on these catalog tables, refer to the SQL Reference
manual.
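
For example, a minimal sketch of such a query from Python; it assumes
`conn` is an already-open DB-API connection to DB2 (the driver and
connection details are site-specific and not shown):

# Assuming `conn` is an open DB-API connection to DB2 (driver-specific):
sql = (
    "SELECT NAME, DBNAME, TSNAME FROM SYSIBM.SYSTABLES "
    "WHERE CREATOR = 'DSN8510' AND TYPE = 'T'"
)
cur = conn.cursor()
cur.execute(sql)  # list the creator's tables with database and table space
for name, dbname, tsname in cur.fetchall():
    print(name, dbname, tsname)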

4.8.3 Exercise
Questions:
1. How can we find out the storage group on which a particular table resides?
2. List the catalog tables for finding out the plan and package privileges?
3. How do we find the keys (primary) of a table?

Answers:
1. From the SYSIBM.SYSTABLESPACE
2. From SYSIBM.SYSPLANAUTH (plan privileges) and SYSIBM.SYSPACKAUTH
(package privileges)
3. From the SYSIBM.SYSINDEXES and SYSIBM.SYSKEYS

4.9 Review Questions

1. What are the various tools available for tuning buffer pools and EDM pools and
how are they effective for performance tuning?

4.10 Reference

 www.ibm.com
 IBM Redbook - Storage Management with DB2 for OS/390 SG24-5462-00


UNIT - V
5. Locking, IRLM and Concurrency
5.1 Unit Objectives

This unit is meant to provide aspiring DBAs with an in-depth knowledge of
DB2 locking and its management. It also covers the various system and
application parameters that affect DB2 locking, and how to use the
available resources properly to provide maximum value addition to the
database system.

5.2 Introduction

Locking is a mechanism by which DB2 maintains the integrity of data by
enforcing different locking strategies. Depending on the environment, the
database, and the application, the DBA must choose the locking option from
the various ones available.

In this chapter, we are going to see the different types of locks
available, how they are implemented by DB2, and the criteria for choosing
the locking options. We shall also have a look at the different problems
normally associated with locks, and how to tackle them.

Locks And Latches

DB2 locking is managed by the internal resource lock manager (IRLM), an
external subsystem described in the next section. Often, however, DB2
handles the serialization itself; this type of call is called a latch.
Because a latch eliminates the cross-memory service calls to the IRLM and
avoids the overhead of calling an external address space, latches are
more efficient than locks, requiring about one-third the number of
instructions. Latches are mainly used to lock index pages and internal
DB2 resources, and also to lock data pages for very small durations.

5.3 IRLM – Controlling the IRLM

The internal resource lock manager (IRLM) subsystem manages DB2 locks. The
particular IRLM to which DB2 is connected is specified in DB2's load module for
subsystem parameters. It is also identified as an MVS subsystem in the
SYS1.PARMLIB member IEFSSNxx. That name is used as the IRLM procedure name
(irlmproc) in MVS commands.

IMS and DB2 must use separate instances of IRLM.

Data Sharing: In a data sharing environment, the IRLM handles global locking, and
each DB2 member has its own corresponding IRLM. See Data Sharing: Planning and
Administration for more information about configuring IRLM in a data sharing
environment.


The following MVS commands can be used to monitor and control the IRLM:

 MODIFY irlmproc,ABEND,DUMP
Abends the IRLM and generates a dump.
 MODIFY irlmproc,ABEND,NODUMP
Abends the IRLM but does not generate a dump.
 MODIFY irlmproc,DIAG,DELAY
Initiates diagnostic dumps for IRLM subsystems in a data sharing group when there is
a delay in the child-lock propagation process.
 MODIFY irlmproc,SET
Sets dynamically the maximum amount of CSA storage or the number of trace
buffers used for this IRLM.
 MODIFY irlmproc,SET,CSA=nnn
Sets dynamically the maximum amount of CSA storage that this IRLM can use for
lock control structures.
 MODIFY irlmproc,SET,TRACE=nnn
Sets dynamically the maximum number of trace buffers used for this IRLM.
 MODIFY irlmproc,STATUS
Displays the status for the subsystems on this IRLM.
 MODIFY irlmproc,STATUS,irlmx
Displays the status of a specific IRLM.
 MODIFY irlmproc,STATUS,ALLD
Displays the status of all subsystems known to this IRLM in the data sharing group.
 MODIFY irlmproc,STATUS,ALLI
Displays the status of all IRLMs known to this IRLM in the data sharing group.
 MODIFY irlmproc,STATUS,STOR
Displays the current and "high water" allocation for CSA and ECSA storage.
 MODIFY irlmproc,STATUS,TRACE
Displays information about trace types of IRLM subcomponents.
 START irlmproc
Starts the IRLM.
 STOP irlmproc
Stops the IRLM normally.
 TRACE CT,OFF,COMP=irlmx
Stops IRLM tracing.


 TRACE CT,ON,COMP=irlmx
Starts IRLM tracing for all subtypes (DBM, SLM, XIT, XCF).
 TRACE CT,ON,COMP=irlmx,SUB=(subname)
Starts IRLM tracing for a single subtype.

5.3.1 Starting the IRLM


The IRLM must be available when DB2 starts, or DB2 abends with reason code
X'00E30079'.

When DB2 is installed, you normally specify that the IRLM be started automatically.
Then, if the IRLM is not available when DB2 is started, DB2 starts it, and periodically
checks whether it is up before attempting to connect. If the attempt to start the
IRLM fails, DB2 terminates.

If an automatic IRLM start has not been specified, start the IRLM before starting
DB2, using the MVS START irlmproc command.

When started, the IRLM issues this message to the MVS console:

DXR117I irlmx INITIALIZATION COMPLETE

Consider starting the IRLM manually if you are having problems starting
DB2 for either of these reasons:

• An IDENTIFY or CONNECT to a data sharing group fails.
• DB2 experiences a failure that involves the IRLM.

When you start the IRLM manually, you can generate a dump to collect diagnostic
information.

5.3.2 Monitoring the IRLM Connection


To display the status of all subsystems connected to an IRLM, use this MVS
command:

MODIFY irlmproc,STATUS

In MVS, MODIFY is abbreviated by F; you can enter F irlmproc,STATUS.

5.3.3 Stopping the IRLM


If the IRLM is started automatically by DB2, it stops automatically when DB2 is
stopped. If the IRLM is not started automatically, you must stop it after DB2 stops.

If you try to stop the IRLM while DB2 or IMS is still using it, the following message
appears:

DXR105E irlmx STOP COMMAND REJECTED. AN IDENTIFIED SUBSYSTEM IS STILL
ACTIVE


If that happens, issue the STOP irlmproc command again, when the subsystems are
finished with the IRLM.

Or, if you must stop the IRLM immediately, enter the following command to force the
stop:

MODIFY irlmproc,ABEND

The system responds with this message:

DXR124E irlmx ABENDED VIA MODIFY COMMAND

DB2 abends. An IMS subsystem using the IRLM does not abend, and can be
reconnected.

IRLM does exploit the MVS Automatic Restart Manager (ARM) services. However, it
de-registers from ARM for normal shutdowns. IRLM registers with ARM during
initialization and provides ARM with an event exit. The event exit must be in linklist.
It is part of the IRLM DXRRL183 load module. The event exit will make sure that the
IRLM name is defined to MVS when ARM restarts IRLM on a target MVS that is
different from the failing MVS. The IRLM element name used for the ARM registration
depends on the IRLM mode. For local mode IRLM, the element name is a
concatenation of the IRLM subsystem name and the IRLM ID. For global mode IRLM,
the element name is a concatenation of the IRLM data sharing group name, IRLM
subsystem name, and the IRLM ID.

IRLM will de-register from ARM during normal shutdowns using:

• a STOP command
• a MODIFY irlmproc,ABEND,NODUMP command
• auto-stop of IRLM

Use the MODIFY command listed above to force down the DBMSs using the
IRLM and stop the IRLM without having it automatically restarted by ARM.
IRLM will de-register DB2 from ARM before DB2 abends, to prevent ARM from
restarting DB2 and IRLM when the auto-start feature is used.

5.4 Lock compatibility

Before discussing the compatibility between different modes of locks, let
us go through the different lock modes in brief.

5.4.1 Modes of Page and Row Locks


The following are the different lock modes for page and row locks.

5.4.1.1 S (SHARE)
The lock owner and any concurrent processes can read, but not change, the locked
page or row. Concurrent processes can acquire S or U locks on the page or row or
might read data without acquiring a page or row lock.


5.4.1.2 U (UPDATE)
The lock owner can read, but not change, the locked page or row. Concurrent
processes can acquire S locks or might read data without acquiring a page or row
lock, but no concurrent process can acquire a U lock.

U locks reduce the chance of deadlocks when the lock owner is reading a page or row
to determine whether to change it, because the owner can start with the U lock and
then promote the lock to an X lock to change the page or row.

5.4.1.3 X (EXCLUSIVE)
The lock owner can read or change the locked page or row. A concurrent process can
access the data if the process runs with UR isolation. (A concurrent process that is
bound with cursor stability and CURRENTDATA(NO) can also read X-locked data if
DB2 can tell that the data is committed.)

5.4.2 Modes of table, partition, and table space locks


The table and tablespace lock modes are more complex than the page or row lock
modes. The following are the different types of the table and tablespace lock modes.

5.4.2.1 IS (INTENT SHARE)


The lock owner can read data in the table, partition, or table space, but not change
it. Concurrent processes can both read and change the data. The lock owner might
acquire a page or row lock on any data it reads.

5.4.2.2 IX (INTENT EXCLUSIVE)


The lock owner and concurrent processes can read and change data in the
table, partition, or table space. The lock owner might acquire a page or
row lock on any data it reads or changes.

5.4.2.3 S (SHARE)
The lock owner and any concurrent processes can read, but not change, data in the
table, partition, or table space. The lock owner does not need page or row locks on
data it reads.

5.4.2.4 U (UPDATE)
The lock owner can read, but not change, the locked data; however, the owner can
promote the lock to an X lock and then can change the data. Processes concurrent
with the U lock can acquire S locks and read the data, but no concurrent process can
acquire a U lock. The lock owner does not need page or row locks.

U locks reduce the chance of deadlocks when the lock owner is reading
data to determine whether to change it. U locks are acquired on a table
space when the lock size is TABLESPACE and the statement is SELECT with
FOR UPDATE OF; similarly, U locks are acquired on a table when the lock
size is TABLE and the statement is SELECT with FOR UPDATE OF.

5.4.2.5 SIX (SHARE with INTENT EXCLUSIVE)


The lock owner can read and change data in the table, partition, or table space.
Concurrent processes can read data in the table, partition, or table space, but not
change it. Only when the lock owner changes data does it acquire page or row locks.


5.4.2.6 X (EXCLUSIVE)
The lock owner can read or change data in the table, partition, or table space. A
concurrent process can access the data if the process runs with UR isolation or if
data in a LOCKPART (YES) table space is running with CS isolation and
CURRENTDATA (NO). The lock owner does not need page or row locks.

If the state of one lock placed on a data resource enables another lock to be placed
on the same resource, the two locks (or states) are said to be compatible. Whenever
one transaction holds a lock on data resource and a second transaction requests a
lock on the same resource, DB2 examines the two lock states to determine whether
they are compatible. If the locks are compatible, the lock is granted to the second
transaction (as long as no other transaction is waiting for the data resource). If the
locks are incompatible, however, the second transaction must wait until the first
transaction releases its lock. In fact, the second transaction must wait until all
existing incompatible locks are released.

Appendix C shows the compatibility matrices for page and tablespace locks.

5.4.3 Exercise
1. Latches are managed by IRLM. True/False?
2. If transaction A holds a U lock on a page, transaction B can acquire a S lock
on the same. True/False?

Answers:
1. False
2. True

5.5 Lock Conversion / Lock Promotion

Lock conversion / Lock promotion is the action of exchanging one lock on a resource
for a more restrictive lock on the same resource, held by the same application
process.

When a transaction accesses a data resource on which the transaction already holds
a lock, and the mode of access requires a more restrictive lock than the one the
transaction already holds, the state of the lock is changed to the more restrictive
state. The operation of changing the state of a lock already held to a more restrictive
state is called Lock conversion / Lock Promotion. Lock conversion occurs because a
transaction can hold one lock on a data resource at a time.

Example: An application reads data, which requires an IS lock on a table space.
Based on further calculation, the application updates the same data, which requires
an IX lock on the table space. The application is said to promote the table space lock
from mode IS to mode IX.
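
In SQL terms, the sequence might look like the following sketch (table and column
names hypothetical); the SELECT takes the IS lock, and the UPDATE promotes it to IX:

SELECT QTYONHAND
FROM INVENTORY
WHERE ITEMNO = '000300'

UPDATE INVENTORY
SET QTYONHAND = QTYONHAND - 1
WHERE ITEMNO = '000300'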

Effects: When promoting the lock, DB2 first waits until any incompatible locks held
by other processes are released. When locks are promoted, it is in the direction of
increasing control over resources: from IS to IX, S, or X; from IX to SIX or X; from S
to X; from U to X; and from SIX to X.


5.6 Suspension

Definition: An application process is suspended when it requests a lock that is
already held by another application process and cannot be shared. The suspended
process temporarily stops running.

Order of precedence for lock requests: Incoming lock requests are queued.
Requests for lock promotion, and requests for a lock by an application process that
already holds a lock on the same object, precede requests for locks by new
applications. Within those groups, the request order is "first in, first out."

Example: Using an application for inventory control, two users attempt to reduce
the quantity on hand of the same item at the same time. The two lock requests are
queued. The second request in the queue is suspended and waits until the first
request releases its lock.

Effects: The suspended process resumes running when:

 All processes that hold the conflicting lock release it.
 The requesting process times out or deadlocks, and the process resumes to deal
with the error condition.

5.7 Lock Duration

The time during which a lock is maintained in DB2 is called Lock Duration. This has a
tremendous effect on various aspects of an application – performance, integrity,
concurrency, etc. It is mostly controlled by the BIND options of the plan or the
package (refer to Appendix B).

We shall now discuss the BIND parameters affecting locking.

5.7.1 ACQUIRE (ALLOCATE) Vs ACQUIRE (USE)


The ACQUIRE parameter specifies when the transaction acquires its locks – at the
execution of the first SQL statement, or just in time, SQL statement by SQL
statement. This parameter works at the tablespace level and affects the tablespace
level lock.

ACQUIRE (ALLOCATE) specifies that the locks will be acquired at the time the plan is
allocated, i.e. when the first SQL statement is executed. The merit of this option is
that all the required resources are allocated to the transaction at once; hence, at no
point during execution does the transaction need to worry about the availability of
the resources. The demerit is that this might lead to longer wait times for other
jobs / transactions which might require the same resources. ACQUIRE (USE)
indicates that the locks will be acquired only at the time of the execution of the SQL
statement that needs them. This increases the chance of deadlock / timeout, but
gives better concurrency, and is preferred in a multi-user environment.

The default option for DB2 (in case no parameter is mentioned) is ACQUIRE (USE).

5.7.2 RELEASE (COMMIT) Vs RELEASE (DEALLOCATE)


This is the parameter which controls when the locks are to be released after / during
the execution of a plan. This parameter also works at the tablespace level.


RELEASE (DEALLOCATE) indicates that the locks will be released when the plan is
terminated. This gives better performance for a stand-alone job / transaction.
RELEASE (COMMIT) releases all locks after a syncpoint (COMMIT or ROLLBACK) is
issued. This is preferred for a multi-user environment.

The default option for DB2 (in case no parameter is mentioned) is RELEASE
(COMMIT).
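
A typical bind card combining these parameters might look like the following sketch
(plan and member names hypothetical):

BIND PLAN(INVPLAN) -
     MEMBER(INVDBRM) -
     ACQUIRE(USE) -
     RELEASE(COMMIT)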

5.7.3 ISOLATION
This is a very important parameter, which significantly affects how a program
processes page locks; it acts at the page level. This parameter specifies the
isolation level of a package or a plan by controlling the mode of page locking
implemented by the program when it runs.

The following are the options for the ISOLATION parameter.

5.7.3.1 Uncommitted Read (UR)


If this option is mentioned, the program can read pages that have been updated by a
different transaction but not yet committed. Note that this isolation level works only
for read-only operations (SELECT and FETCH), not in update mode. If a program
bound with UR as the isolation level contains UPDATE statements, these will assume
the CS option.

Even if a program is bound with any other option, the isolation can be overridden at
the SQL level by mentioning the WITH UR clause, as demonstrated below.

SELECT EMPNO, LASTNAME
FROM EMPTABLE
WITH UR

This option gives good concurrency and performance, but data integrity is put at
risk. It is advisable not to use this when accurate data is necessary for the
transaction. However, it can be acceptable for applications that can tolerate
uncommitted data, such as read-only statistical or reporting queries where
approximate results suffice.

5.7.3.2 Repeatable Read (RR)


This option leaves locks in place on pages already read as the program proceeds to
read other pages. Thus, all the pages read are kept locked until a syncpoint is issued
or the plan is terminated (depending on the RELEASE parameter used).

This gives the highest level of data integrity, but is associated with concurrency and
performance issues, and has a potential for lock escalation. It is normally not used in
a busy multi-user environment. However, when the audience of the database is
limited and data integrity is of utmost importance, it might be preferred. It might
also be considered, for the sake of data integrity, when a particular transaction / job
has the possibility of reading the same pages more than once.

5.7.3.3 Cursor Stability (CS)


If this option is used, locks are released from read-only pages as soon as they have
been read. This is the DB2 default if nothing is explicitly mentioned in the bind card.
It is preferred by most shops as it gives a high degree of concurrency and
performance, but it can be a threat to data integrity in case the same data can
potentially be read twice. However, such situations are rare, and CS is the favourite
ISOLATION option among most DBAs.

5.7.3.4 Read Stability (RS)


This is similar in functionality to the RR option, but allows other transactions to
insert rows on the pages already read. It can be used when data integrity is
required, but the program must be equipped to handle the different sets of rows
that may be retrieved between reads.

5.7.4 CURRENTDATA
The CURRENTDATA option has different effects, depending on whether access is local
or remote:

 For local access, the option tells whether the data upon which your cursor
is positioned must remain identical to (or "current with") the data in the
local base table. For cursors positioned on data in a work file, the
CURRENTDATA option has no effect. This effect only applies to read-only
or ambiguous cursors in plans or packages bound with CS isolation.

A cursor is "ambiguous" if DB2 cannot tell whether it is used for update or


read-only purposes. If the cursor appears to be used only for read-only,
but dynamic SQL could modify data through the cursor, then the cursor is
ambiguous. If CURRENTDATA is used to indicate an ambiguous cursor is
read-only when it is actually targeted by dynamic SQL for modification, an
error will be thrown.

 For a request to a remote system, CURRENTDATA has an effect for
ambiguous cursors using isolation levels RR, RS, or CS. For ambiguous
cursors, it turns block fetching on or off. (Read-only cursors and UR
isolation always use block fetch.) Turning on block fetch offers the best
performance, but it means the cursor is not current with the base table at
the remote site.

Local access: Locally, CURRENTDATA(YES) means that the data upon which the
cursor is positioned cannot change while the cursor is positioned on it. If the cursor
is positioned on data in a local base table or index, then the data returned with the
cursor is current with the contents of that table or index. If the cursor is positioned
on data in a work file, the data returned with the cursor is current only with the
contents of the work file; it is not necessarily current with the contents of the
underlying table or index.

Figure 1 shows locking with CURRENTDATA (YES).

Figure 1: How an application using isolation CS with CURRENTDATA (YES) acquires locks. This
figure shows access to the base table. The L2 and L4 locks are released after DB2 moves to the
next row or page. When the application commits, the last lock is released.

As with work files, if a cursor uses query parallelism, data is not necessarily current
with the contents of the table or index, regardless of whether a work file is used.
Therefore, for work file access or for parallelism on read-only queries, the
CURRENTDATA option has no effect.

If parallelism is required along with maintenance of currency with the data, the
following options can be explored:



- Disable parallelism (use SET CURRENT DEGREE = '1' or bind with DEGREE(1)).

- Use isolation RR or RS (parallelism can still be used).

- Use the LOCK TABLE statement (parallelism can still be used).

For local access, CURRENTDATA(NO) is similar to CURRENTDATA(YES) except for the
case where a cursor is accessing a base table rather than a result table in a work
file. In those cases, although CURRENTDATA(YES) can guarantee that the cursor and
the base table are current, CURRENTDATA(NO) makes no such guarantee.

Remote access: For access to a remote table or index, CURRENTDATA(YES) turns off
block fetching for ambiguous cursors. The data returned with the cursor is then
current with the contents of the remote table or index for ambiguous cursors.

Lock avoidance: With CURRENTDATA(NO), the opportunity for avoiding locks is much
higher. DB2 can test whether a row or page contains committed data; if it does, DB2
does not have to obtain a lock on the data at all. Unlocked data is returned to the
application, and the data can be changed while the cursor is positioned on the row.
(For SELECT statements in which no cursor is used, such as those that return a
single row, a lock is not held on the row unless WITH RS or WITH RR is specified on
the statement.)

To take the best advantage of this method of avoiding locks, the DBA should
make sure all applications that are accessing data concurrently issue
COMMITs frequently.

Figure 2 shows how DB2 can avoid taking locks and Figure 3 summarizes the
factors that influence lock avoidance.


Figure 2: Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This figure
shows access to the base table. If DB2 must take a lock, then locks are released when DB2
moves to the next row or page, or when the application commits (the same as
CURRENTDATA(YES)).

_________ ____________________ ____________________ __________ _________


| | | | Avoid | Avoid |
| | | | locks on | locks |
|Isolation| CURRENTDATA | Cursor type | returned | on |
| | | | data? | rejected|
| | | | | data? |
|_________|____________________|____________________|__________|_________|
| UR | N/A | Read-only | N/A | N/A |
|_________|____________________|____________________|__________|_________|
| | | Read-only | | |
| | |____________________| | |
| | YES | Updatable | No | |
| | |____________________| | |
| | | Ambiguous | | |
| CS      |____________________|____________________|__________| Yes     |
| | | Read-only | Yes | |
| | |____________________|__________| |
| | NO | Updatable | No | |
| | |____________________|__________| |
| | | Ambiguous | Yes | |
|_________|____________________|____________________|__________|_________|
| | | Read-only | | |
| | |____________________| | |
| RS | N/A | Updatable | No | Yes |
| | |____________________| | |
| | | Ambiguous | | |
|_________|____________________|____________________|__________|_________|
| | | Read-only | | |
| | |____________________| | |
| RR | N/A | Updatable | No | No |
| | |____________________| | |
| | | Ambiguous | | |
|_________|____________________|____________________|__________|_________|

Figure 3: Lock avoidance factors. "Returned data" means data that satisfies the predicate.
"Rejected data" is that which does not satisfy the predicate.


5.7.5 Exercise
1. When Transaction A holds an S lock on a particular page, and then needs
to update one row in the page, the process that comes into the picture is called
a) Lock Escalation
b) Lock Promotion
c) Deadlock
d) Timeout

2. A suspended process gets activated when the transaction holding the lock
on the page releases the lock. True/False?

Answers:
1. b
2. True

5.8 Locksize

Locks can be of various sizes – Row, Page, Table and Tablespace, giving different
levels of concurrency. Here the trade-off is between concurrency and amount of
resources to be allocated to the lock.

The size of a lock is determined by the LOCKSIZE parameter in the DDL of the
object. The DB2 default is LOCKSIZE ANY, which lets DB2 choose the lock size itself,
by looking at the request at execution time. However, this can be overridden by
specifying LOCKSIZE ROW or LOCKSIZE PAGE, depending on the need of the
application.

Catalog record: Column LOCKRULE of table SYSIBM.SYSTABLESPACE.


A -> Any
P -> Page
R –> Row
S -> Tablespace
T -> Table
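
As a sketch (database and tablespace names hypothetical), the lock size can be
changed with DDL and verified from the catalog as follows:

ALTER TABLESPACE DBX.TSX LOCKSIZE ROW;

SELECT NAME, LOCKRULE
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'DBX';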

5.8.1 Tablespace Lock


A Tablespace lock is acquired when a DB2 table or index is accessed. The type of
lock acquired depends on the DDL parameters (mainly LOCKSIZE) and the bind
parameters for the package / plan.

The Lock Compatibility Matrix for tablespace locks is shown in Appendix C.

5.8.2 Table Lock


Table locks come into the picture only in the case of segmented tablespaces, where
data pertaining to different tables resides physically in different pages. They are
similar in nature to tablespace locks, and the types of locks available and the
compatibility between different locks are the same as those for tablespace locks.

5.8.3 Page Lock


Depending on the installation, the page lock is, in most cases, the most granular lock
in a particular database, when row locking is not preferred. It can be taken when the
DDL specifies LOCKSIZE PAGE or LOCKSIZE ANY for the object in question.


The Lock Compatibility Matrix for Page locks is shown in Appendix C.

Page locks can be promoted from one type to another based on the processing
taking place. For example, when a FOR UPDATE cursor is fetched for a row, a U lock
is taken on the page. When the actual update takes place, the U lock is promoted to
the X lock.

5.8.4 Row Lock


Row level locks are the most granular level of locking available in DB2. This kind of
lock covers only the row in question; all the other rows in the same page remain free
for any operation by any other transaction.

Row level locks are taken when LOCKSIZE ROW is mentioned in the DDL. LOCKSIZE
ANY can also theoretically give rise to this kind of lock, but the possibility of this
happening is very small.

The lock compatibility matrix for Row locking is the same as that for Page locking.

Row level locks can also be promoted similar to page locks.

5.8.5 Page Lock Vs Row Lock


The choice between page and row level locking needs to be made during the
creation of the object in question. The selection depends on various factors,
including the number of rows per page, the locking resources available and the
extent of concurrency needed by the application.

If the application needs a very high level of concurrency, then LOCKSIZE ROW may
be preferred, as it allows a program to read / update a row while another row in the
same page is locked by some other transaction. But it should be kept in mind that
LOCKSIZE ROW consumes far more locking resources than LOCKSIZE PAGE, and this
can lead to degradation of performance as well.

Also, in certain cases where the same data is read in different orders by two
transactions running at the same time, row locking can actually increase contention,
as each transaction may hold a lock on a row in the same page that is required by
the other transaction.

In general, if there is no particular demand from the application side for a particular
lock size, it seems wise to leave the decision to DB2 by specifying LOCKSIZE ANY in
the DDL.

Note: LOCKSIZE PAGE or LOCKSIZE ROW can be used more efficiently when you
commit your data more frequently or when you use cursor stability with
CURRENTDATA NO.

5.8.6 Row Level Locking Vs MAXROWS = 1


DB2 allows the DBA to set a maximum limit on the number of rows that can reside in
a particular page, by specifying the MAXROWS option. The maximum number of
rows residing in a particular page is given by:


Max no. of rows = MIN (MAXROWS, FLOOR, 255)

where FLOOR = usable page size / average record size

Catalog record: Column MAXROWS of table SYSIBM.SYSTABLESPACE.
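
As a worked example (figures assumed): with roughly 4000 usable bytes in a 4 KB
page and an average record size of 100 bytes, FLOOR is 40; with MAXROWS left at
its maximum of 255, the maximum is MIN(255, 40, 255) = 40 rows per page.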

MAXROWS = 1 indicates that only one row can exist per page for that particular
tablespace.

This provides the same level of concurrency as row-level locking. However, it can
cause over-allocation of space, as only one row can reside in a page. Still, it is
preferred to row-level locking in certain cases because, in a data sharing
environment, row-level locking can lead to the propagation of many additional page
P-locks to the coupling facility.

The MAXROWS parameter can be altered for a tablespace, and this takes effect
immediately. However, a REORG should be done after this kind of alteration.
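
A sketch of such an alteration (object names hypothetical):

ALTER TABLESPACE DBX.TSX MAXROWS 1;
-- then run the REORG utility against DBX.TSX to rearrange the existing rows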

5.8.7 Lockmax
This tablespace DDL parameter specifies the maximum number of page or row locks
that a single application process can hold on the tablespace before lock escalation
takes place.

The DBA can specify one of three values for this parameter:
LOCKMAX <n>: Maximum n number of locks can be held by a single application
process on the tablespace.
LOCKMAX SYSTEM: Specifies that n is effectively equal to the system default set by
the field LOCKS PER TABLE(SPACE) of installation panel DSNTIPJ.
LOCKMAX 0: Disables lock escalation entirely.

The default value for this parameter depends on the LOCKSIZE parameter, as
follows:
LOCKSIZE ANY -> LOCKMAX SYSTEM
LOCKSIZE any other value -> LOCKMAX 0

Catalog record: Column LOCKMAX of table SYSIBM.SYSTABLESPACE.


0 -> 0
<n> -> <n>
-1 -> SYSTEM
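
For example (object names hypothetical), to let a process hold at most 1000 page or
row locks on a tablespace before escalation, and to verify the setting:

ALTER TABLESPACE DBX.TSX LOCKMAX 1000;

SELECT NAME, LOCKMAX
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'DBX';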

5.8.8 Lock Escalation

5.8.8.1 What Is Lock Escalation


All locks require storage (roughly 250 bytes each, as a rule of thumb suggested by
IBM), and because this space is finite, DB2 limits the amount of space the system
can use for locks. Furthermore, a limit is placed on the space each transaction can
use for its own locks. A process known as Lock Escalation occurs when too many
row / page locks are issued in the database and one of the space limitations is
exceeded. Lock Escalation is the process of converting several locks on individual
rows / pages in a table into a single table or tablespace level lock. When a
transaction requests a lock after the lock space is full, one of its tables / tablespaces
is selected, and lock escalation takes place to create space in the lock list data
structure. If all possible lock escalations are still not sufficient to free enough space
for the transaction to continue, the transaction is asked to issue a COMMIT /
ROLLBACK.

Lock escalation takes place when a limit is encountered by the transaction. However,
this limit can also be at the system level, caused by some other transaction. Say
transaction A has taken a lot of locks on tablespace T1, and transaction B has taken
only a few. Now the number of locks reaches its limit at the system level, though the
limit might not have been reached for the individual transactions. In such a case,
one of the transactions will be asked to escalate its locks. If B, which is holding only
a few locks, is asked to do so, it might attempt the escalation, fail, and terminate
abnormally, while the main offending transaction is A. Thus, offending transactions
holding many locks over a long period of time can cause other transactions to
terminate abnormally.

5.8.8.2 Monitoring Lock Escalation


From DB2 V6 onwards, DB2 issues message DSNI031I, which identifies the table
space for which lock escalation occurred, along with information to help identify
which plan or package was running when the escalation occurred.

The statistics and accounting trace records contain information on locking. The IBM
licensed program, DATABASE 2 Performance Monitor (DB2 PM), provides one way to
view the trace results.

Statistics Trace tells how many suspensions, deadlocks, timeouts, and lock
escalations occur in the trace record.

Accounting Trace gives the same information for a particular application. It also
shows the maximum number of concurrent page locks held and acquired during the
trace. Review applications with a large number to see if this value can be lowered.
This number is the basis for the proper setting of LOCKS PER USER and, indirectly,
LOCKS PER TABLE(SPACE).

5.8.8.3 Actions To Be Taken:


As a rule of thumb, if a quarter or more of the lock escalations cause deadlocks /
timeouts, then lock escalation is not effective for the application system. The
following methods can be used to solve the problem:

 The number of locks enabled in the database configuration can be increased, by
changing the LOCKMAX parameter to a higher value or to 0, which prevents lock
escalation altogether. However, this can put more overhead on the processing
and can degrade performance.
 To save a transaction such as B above, a LOCK TABLE statement can be issued
explicitly to take the gross lock up front (see the sketch after this list).
 The degree of transaction isolation can be changed.
 The commit frequency can be increased for the offending transactions.
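
A minimal sketch of such an explicit gross lock (table name hypothetical):

LOCK TABLE PAY.EMPTABLE IN EXCLUSIVE MODE;

-- or, to continue allowing concurrent readers:
LOCK TABLE PAY.EMPTABLE IN SHARE MODE;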

5.8.8.4 Lock Escalation for different types of tablespaces


Lock escalation takes place differently for different types of tablespaces.


For Simple tablespaces, where rows corresponding to different tables may reside in
the same pages, a row or page lock is escalated directly to a tablespace lock, which
locks all the tables in the tablespace.

For Segmented tablespaces, where rows from different tables reside in different
pages, row / page locks are escalated to the individual table level, allowing other
applications to access data in the other tables of the same tablespace.

For Partitioned tablespaces, where different partitions of a single table reside in
different physical partitions, lock escalation can be restricted to the affected
partitions by mentioning LOCKPART YES in the DDL for the tablespace. This is
termed Selective Partition Locking.

Figure 4 shows the difference in lock escalations for different types of tablespaces.

Figure 4: Lock escalation in various tablespaces (diagram not reproduced here). The
figure contrasts the three cases: SIMPLE (row / page locks escalate to a lock on the
tablespace containing all tables), SEGMENTED (escalation to the individual table),
and PARTITIONED (escalation restricted to the locked partitions).

5.8.9 Exercise

1. Which field in the catalog mentions the size of the lock to be taken?
2. Page lock gives higher level of concurrency than row lock. True / false?
3. Lock Escalation takes place in order to save resources. True / False?
4. What is the field that contains maximum number of locks to be taken, before lock
escalation?

Answers:
1. LOCKRULE of SYSIBM.SYSTABLESPACE
2. False
3. True
4. LOCKMAX

5.9 Claims and Drains

As of DB2 V3, resource serialization has been augmented to include claims and
drains in addition to transaction locking. The claim and drain process enables DB2 to
perform concurrent operations on multiple partitions of the same tablespace.


Claims and drains provide a new “locking” mechanism to control concurrency for
resources between SQL statements, utilities and commands. As with transaction
locks, claims and drains can time out while waiting for a resource.

5.9.1 Claims
DB2 uses a claim to register that a resource is being accessed. Claims can be
considered usage indicators. The following resources can be claimed:
 Simple tablespaces
 Segmented tablespaces
 A single data partition of a partitioned tablespace
 A non-partitioned index space
 A single index partition of a partitioned index

Claims prevent drains from acquiring a resource. A claim is acquired when a resource
is first accessed, regardless of the ACQUIRE parameter specified (USE or
ALLOCATE). Claims are released at commit time, except for cursors declared using
the WITH HOLD clause or when the claimer is a utility.

Multiple agents can claim a single resource. Claims on objects are acquired by the
following:
 SQL statements (SELECT, INSERT, UPDATE, DELETE)
 DB2 restart on INDOUBT objects
 Some utilities (e.g. COPY SHRLEVEL CHANGE, RUNSTATS SHRLEVEL CHANGE
and REPORT)

5.9.1.1 Claim Classes


Every claim has a claim class associated with it. The claim class is based on the type
of access being requested, as follows:
 A CS claim is acquired when data is read from a package or a plan bound
specifying ISOLATION (CS).
 An RR claim is acquired when data is read from a package or a plan bound
specifying ISOLATION (RR).
 A Write claim is acquired when data is deleted, inserted or updated.

5.9.1.2 Effects of a Claim


 Unlike a transaction lock, a claim normally does not persist past the commit
point. To access the object in the next unit of work, the application must make a
new claim.
However, there is an exception. If a cursor defined with the clause WITH HOLD is
positioned on the claimed object, the claim is not released at a commit point.

 A claim indicates to DB2 that there is activity on or interest in a particular page
set or partition. Claims prevent drains from occurring until the claim is released.

5.9.2 Drains
Drain is the act of acquiring a locked resource by quiescing access to that object.
Like claims, drains are also acquired when a resource is first accessed. A drain
acquires a resource by quiescing claims against that resource. Drains can be
requested by commands and utilities.

Multiple drainers can access a single resource. However, a process that drains all
claim classes cannot drain an object concurrently with any other process.


The following resources can be drained:
 Simple tablespaces
 Segmented tablespaces
 A single data partition of a partitioned tablespace
 A non-partitioned index space
 A single index partition of a partitioned index

The process of quiescing a claim class and prohibiting new claims from being
acquired for the resource is called draining. Draining allows DB2 utilities and
commands to acquire partial or full control of a specific object with a minimal impact
on concurrent access.

5.9.2.1 Drain Locks


A drain places drain locks on a resource. A drain lock prevents conflicting processes
from trying to drain the same object at the same time. Processes that drain only
writers can run concurrently; but a process that drains all claim classes cannot drain
an object concurrently with any other process. To drain an object, a drainer first
acquires one or more drain locks on the object, one for each claim class it needs to
drain. When the locks are in place, the drainer can begin at the next commit point or
after the release of all held cursors.

A drain lock also prevents new claimers from accessing an object while a drainer has
control of it.
5.9.2.1.1 Types of Drain Locks
Three types of drain locks on an object correspond to the three claim classes:
 Write
 Repeatable read
 Cursor stability read

A drain requires either partial control of a resource, in which case a write drain lock
is taken, or complete control of a resource, accomplished by placing a CS drain lock,
an RR drain lock, and a write drain lock on an object.

In general, after an initial claim has been made on an object by a user, no other user
in the system needs a drain lock. When the drain lock is granted, no drains on the
object are in process for the claim class needed, and the claimer can proceed.

Drain locks are released when the utility or command completes. When the resource
has been drained of all appropriate claim classes, the drainer acquires sole access to
the resource.

5.9.2.2 Wait time for drains


A process that requests a drain might wait for two events:

I) Acquiring the drain lock. If another user holds the needed drain lock in an
incompatible lock mode, then the drainer waits.

II) Releasing all claims on the object. Even after the drain lock is acquired, the
drainer waits until all claims are released before beginning to process.


If the process drains more than one claim class, it must wait for those events to
occur for each claim class it drains.

Hence, to calculate the maximum amount of wait time:

i) Start with the wait time for a drain lock.
ii) Add the wait time for claim release.
iii) Multiply the result by the number of claim classes drained.
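
As a worked example (values assumed): with a drain lock wait of 60 seconds and a
claim release wait of 60 seconds, a drain of all three claim classes could wait up to
(60 + 60) x 3 = 360 seconds.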

5.9.3 Compatibility Rules for Claims and Drains


As with transaction locks, concurrent claims and drains can be taken, but only if they
are compatible with one another. Appendix D shows which drains are compatible
with existing claims and drains.

5.10 Lock Tuning

Locking in DB2 is a very efficient mechanism and can do wonders if used properly.
The implementation of locking is a very important part of a database set-up, and
extreme care should be taken over it. For a proper implementation, a DBA needs to
know not only how DB2 and the database function, but also needs a good
understanding of the application from the database usage perspective.

5.10.1Deadlock

5.10.1.1 What is Deadlock


When a lock is requested, a series of operations is performed to ensure that the
requested lock can be acquired. Two conditions can cause the lock acquisition to fail
– a deadlock or a timeout.

When two or more transactions are contending for locks, a situation called deadlock
can occur. The example in figure 5 illustrates the scenario.

Because at least two transactions are involved in a deadlock cycle, one might
assume that two data objects are always involved in the deadlock. However, this is
not true. A certain type of deadlock, known as a conversion deadlock, can occur on a
single data object. A conversion deadlock occurs when two or more transactions
already hold compatible locks on an object, and then each transaction requests a
new, incompatible lock mode on that same object. This often takes place on index
pages.

5.10.1.2 Detection and Resolution of Deadlock by DB2


When a deadlock cycle occurs, all the transactions involved in the deadlock will wait
indefinitely, unless some outside agent performs some action to end the deadlock
cycle. Because of this, DB2 contains an asynchronous system background process
associated with each active database that is responsible for finding and resolving
deadlocks in the locking subsystem. This background process is called the deadlock
detector. When a database becomes active, the deadlock detector is started as part
of the process that initializes the database for use. The deadlock detector stays
“asleep” most of the time, but “wakes up” at preset intervals to look for the presence
of deadlocks between transactions using the database. If a deadlock is discovered,
the detector selects one of the transactions in the cycle to roll back and terminate
with an SQLCODE of -911, releasing all its locks.

1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG accesses
table M, and acquires an exclusive lock for page B, which contains record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A, which
contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock on
page B of table M. The job is suspended, because job PROJNCHG is holding an
exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the lock on
page A of table N. The job is suspended, because job EMPLJCHG is holding an
exclusive lock on page B. The situation is a deadlock.
Figure 5: A deadlock example

5.10.1.3 Monitoring of Deadlock


Deadlocks need to be monitored on a continual basis, especially when new programs
or additional users are added to the existing workload mix. They can be monitored
in the statistics and accounting reports.

DB2 trace record IFCID 172 (statistics class 3) contains information about deadlocks.
The DB2 PM report Locking Trace formats this information to outline all the resources
and agents involved in a deadlock and the significant locking parameters, such as
lock state and duration, related to their requests, for a given interval of time. Also,
an accompanying message DSNT375I helps identify the members on which deadlock
has been encountered.

5.10.1.4 Actions to be taken


Many deadlocks can prevent the system from working efficiently. So deadlocks need
to be monitored regularly, and if some transactions are found to be offenders, action
should be taken at the database or application level to reduce the number of
deadlocks. The following are the normal courses of action in case of excessive
deadlocks:
5.10.1.4.1 Choosing a Deadlock time:


Setting the proper deadlock time (the interval for the deadlock detector) in the
database configuration is necessary to ensure good concurrent application
performance. An interval that is too short causes unnecessary overhead, and an
interval that is too long allows a deadlock cycle to delay processes for an
unacceptable amount of time. One must balance the delay in resolving deadlocks
against the overhead of detecting them.
5.10.1.4.2 Access data in a consistent order:
When different applications access the same data, an attempt should be made to
make them do so in the same sequence. For example, make both applications access
rows 1, 2, 3, 5 in that order. Then the first application to access the data delays the
second, but the two applications cannot deadlock. For the same reason, different
applications should access the same tables in the same order.
5.10.1.4.3 Commit work as soon as is practical
To avoid unnecessary lock contention, a COMMIT statement should be issued as
soon as possible after reaching a point of consistency, even in read-only
applications. To prevent unsuccessful SQL statements (such as PREPARE) from
holding locks, issue a ROLLBACK statement after a failure. The SPUFI autocommit
feature can commit statements issued through SPUFI immediately.

Taking commit points frequently in a long running unit of recovery (UR) has the
following benefits:

 Reduces lock contention
 Improves the effectiveness of lock avoidance, especially in a data sharing
environment
 Reduces the elapsed time for DB2 system restart following a system failure
 Reduces the elapsed time for a unit of recovery to roll back following an
application failure or an explicit rollback request by the application
 Provides more opportunity for utilities, such as online REORG, to break in

Consider using the UR CHECK FREQ field or the UR LOG WRITE CHECK field of
installation panel DSNTIPN to help identify applications that are not committing
frequently. UR CHECK FREQ, which identifies when too many checkpoints have
occurred without a UR issuing a commit, is helpful in monitoring overall system
activity. UR LOG WRITE CHECK enables one to detect applications that might write
too many log records between commit points, potentially creating a lengthy recovery
situation for critical tables.

Even though an application might conform to the commit frequency standards of the
installation under normal operational conditions, variation can occur based on system
workload fluctuations. For example, a low-priority application might issue a commit
frequently on a system that is lightly loaded. However, under a heavy system load,
the use of the CPU by the application may be pre-empted, and, as a result, the
application may violate the rule set by the UR CHECK FREQ parameter. For this
reason, one should add logic to one’s application to commit based on time elapsed
since last commit, and not solely based on the amount of SQL processing performed.
In addition, take frequent commit points in a long running unit of work that is read-
only to reduce lock contention and to provide opportunities for utilities, such as
online REORG, to access the data.


5.10.1.4.4 Retry an application after deadlock or timeout


One should include logic in a batch program so that it retries an operation after a
deadlock or timeout. Such a method could help one recover from the situation
without assistance from operations personnel. Field SQLERRD (3) in the SQLCA
returns a reason code that indicates whether a deadlock or timeout occurred.
5.10.1.4.5 Close cursors
If one defines a cursor using the WITH HOLD option, the locks it needs can be held
past a commit point. One should use the CLOSE CURSOR statement as soon as
possible in the program to cause those locks to be released and the resources they
hold to be freed at the first commit point that follows the CLOSE CURSOR statement.
Whether page or row level locks are held for WITH HOLD cursors is controlled by the
RELEASE LOCKS parameter on panel DSNTIP4.
5.10.1.4.6 Bind plans with ACQUIRE (USE)
ACQUIRE (USE), which indicates that DB2 will acquire table and table space locks
when the objects are first used and not when the plan is allocated, is the best choice
for concurrency.

Packages are always bound with ACQUIRE (USE). ACQUIRE (ALLOCATE)
can provide better protection against timeouts. One should consider ACQUIRE
(ALLOCATE) for applications that need gross locks instead of intent locks or that run
with other applications that may request gross locks instead of intent locks.
Acquiring the locks at plan allocation also prevents any one transaction in the
application from incurring the cost of acquiring the table and table space locks. If one
needs ACQUIRE (ALLOCATE), one might want to bind all DBRMs directly to the plan.
5.10.1.4.7 Use ISOLATION (CS) and CURRENTDATA(NO)
ISOLATION (CS) lets DB2 release acquired row and page locks as soon as possible.
CURRENTDATA (NO) lets DB2 avoid acquiring row and page locks as often as
possible. After that, in order of decreasing preference for concurrency, these bind
options should be used:

ISOLATION (CS) with CURRENTDATA (YES), when data returned to the application
must not be changed before the next FETCH operation.

ISOLATION (RS), when data returned to the application must not be changed before
the application commits or rolls back. However, one does not care if other application
processes insert additional rows.

ISOLATION (RR), when data evaluated as the result of a query must not be changed
before the application commits or rolls back. New rows cannot be inserted into the
answer set.

For updateable scrollable cursors, ISOLATION (CS) provides the additional advantage
of letting DB2 use optimistic concurrency control to further reduce the amount of
time that locks are held.
5.10.1.4.8 Use ISOLATION (UR) cautiously
UR isolation acquires almost no locks on rows or pages. It is fast and causes little
contention, but it reads uncommitted data. Do not use it unless you are sure that
your applications and end users can accept the logical inconsistencies that can occur.


5.10.1.4.9 Use global transactions


The Recoverable Resource Manager Services attachment facility (RRSAF) relies on an
OS/390 component called OS/390 Transaction Management and Recoverable
Resource Manager Services (OS/390 RRS). OS/390 RRS provides system-wide
services for coordinating two-phase commit operations across MVS products. For
RRSAF applications and IMS transactions that run under OS/390 RRS, one can group
together a number of DB2 agents into a single global transaction. A global
transaction allows multiple DB2 agents to participate in a single global transaction
and thus share the same locks and access the same data. When two agents that are
in a global transaction access the same DB2 object within a unit of work, those
agents will not deadlock with each other. The following restrictions apply:

There is no Parallel Sysplex support for global transactions.

Because the "branches" of a global transaction share locks, uncommitted updates
issued by one branch of the transaction are visible to the other branches.

Claim/drain processing is not supported across the branches of a global transaction,
which means that attempts to issue CREATE, DROP, ALTER, GRANT, or REVOKE may
deadlock or timeout if they are requested from different branches of the same global
transaction.

Attempts to update a partitioning key may deadlock or timeout because of the same
restrictions on claim/drain processing.

LOCK TABLE may deadlock or timeout across the branches of a global transaction.

5.10.2Resource Timeout

5.10.2.1 What is Resource Timeout


The maximum amount of time in seconds to wait for an unavailable resource to
become available before timing out is called Resource Timeout, and is controlled by
the DSNZPARM value, IRLMRWT. When one user has a lock on a DB2 resource that
another user needs, DB2 waits for the time specified by IRLMRWT and then issues an
SQLCODE of –911 or –913.

5.10.2.2 Monitoring Resource Timeout


Timeouts can be monitored in the following ways:

 The accompanying message DSNT376I helps identify the members that have
timed out.

 To detect deadlocks and timeouts, the DISPLAY DATABASE command with the
LOCKS keyword can be used. This displays DB2 status on lock objects across all
the members of the group.

 To obtain more detailed information about the locks in a data sharing
environment, the following trace classes and IFCIDs can be examined:

 CLASS 3 (Statistics) trace contains information regarding deadlocks, timeouts,
connects and disconnects from GBPs, and long-running URs in IFCIDs 172, 196,
250, 261, 262 and 213.
 CLASS 6 (Performance) trace contains summary lock information with details in
IFCIDs 20, 44, 45, 105, 106, 107, 172, 192, 213, 214 and 218.
 CLASS 7 (Performance) trace contains detail lock information in IFCIDs 21, 105,
106, 107 and 223.
 The -START TRACE command can be issued on the problem member and,
depending on the trace used, DB2 can collect data from the group (for group
buffer pools or global locking) or from each member. The refresh of DB2
Version 6 introduced the SCOPE (GROUP) option on the START TRACE command
to support global traces (see the example after this list).
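
For instance, a statistics trace including the locking class might be started
group-wide as follows (a sketch; the classes traced depend on the need):

-START TRACE(STAT) CLASS(3) SCOPE(GROUP)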

5.10.2.3 Actions to be taken in case of a Resource Timeout


 Sometimes it is impossible to find a compromise value for IRLMRWT.
Online transactions wait too long before timing out, while batch jobs
time out too frequently. If this is the case, one should consider starting
DB2 in the morning for online activity with a modest IRLMRWT value
(45 to 60 seconds) and restarting it in the evening for batch jobs with
a larger IRLMRWT value (90 to 120 seconds). For this, DB2 must go
down and come up during the day, which may not be possible in a
24x7 shop.

 Using Retry logic in the application

 Binding plan using ACQUIRE (USE) option

 Using Global transaction

5.10.3Idle Thread Timeout


Active server threads that have remained idle for a specified period of time (in
seconds) are called Idle Threads, and can be cancelled by DB2. When DB2 is
installed, a maximum IDLE THREAD TIMEOUT period is chosen, from 0 to 9999
seconds. The timeout period is an approximation. If a server thread has been waiting
for a request from the requesting site for this period of time, it is cancelled unless it
is an inactive or an in doubt thread. A value of 0, the default, means that the server
threads cannot be canceled because of an idle thread timeout.

5.10.3.1 Idle Thread Timeout On Installation Panel


Effect: Specifies a period for which an active distributed thread can hold locks
without doing any processing. After that period, a regular scan (at 3-minute
intervals) detects that the thread has been idle for the specified period, and DB2
cancels the thread.

The cancellation applies only to active threads. If the installation permits distributed
threads to be inactive and hold no resources, those threads are allowed to remain
idle indefinitely.

Default: 0. That value disables the scan to time out idle threads. The threads can
then remain idle indefinitely.


Recommendation: If distributed users have been observed leaving an application
idle while it holds locks, an appropriate value other than 0 should be picked for this
period. Because the scan occurs only at 3-minute intervals, idle threads will
generally remain idle for somewhat longer than the value specified.

5.10.4Utility Timeout
In most cases, it is not practical to have the same timeout for a utility as for an SQL
application. Utility timeout is controlled by the UTIMOUT parameter on installation
panel DSNTIPI, which acts as an operation multiplier for utilities waiting for a drain
lock, for a transaction lock, or for claims to be released.

Default value: 6.

5.10.5Lock Wait Time


Not seeing deadlocks or timeouts in an application does not necessarily mean that
there are no locking issues. If the lock timeout period is 60 seconds (the default),
then monitoring timeouts will never reveal a situation where the application has
been waiting 30, 45 or 59 seconds, yet such waits are definitely an impediment to
good performance. So lock wait time needs to be monitored along with deadlocks
and timeouts.

5.10.5.1 Guidelines to Detect Lock Wait Problems


Periodic monitoring of locking issues, such as suspension and time-outs, can help
determine whether one has potential locking problems. Some guidelines:

 If the lock suspension time for all DB2 programs is greater than 1% of
the total elapsed execution time for the programs, then the applications
are waiting too long for locks. One needs to get down to the individual
program level to determine the root cause.

 If the number of transactions with more than 5 seconds of suspension
time is greater than 0.01% of total transactions, there is a lock wait
problem.

 If the number of TSO/batch programs with more than 60 seconds of
lock suspension time is more than 0.01% of the total number of
TSO/batch programs, one needs to investigate who is holding the
locks and why.

These numbers can be obtained from an accounting or statistics report and from the
lock suspension report in DB2 PM.

5.10.6Monitoring Locking
DB2 provides many features to monitor locking as well as the IRLM connection.

5.10.6.1 Monitoring IRLM Connection


The following MVS commands can be used to monitor the IRLM connection:

 MODIFY irlmproc,STATUS,irlmnm
Displays the status of a specific IRLM.


 MODIFY irlmproc,STATUS,ALLD
Displays the status of all subsystems known to this IRLM in the data sharing
group.

 MODIFY irlmproc,STATUS,ALLI
Displays the status of all IRLMs known to this IRLM in the data sharing group.

 MODIFY irlmproc,STATUS,MAINT
Displays the maintenance levels of IRLM load module CSECTs for the specified
IRLM instance.

 MODIFY irlmproc,STATUS,STOR
Displays the current and high water allocation for CSA and ECSA storage.

 MODIFY irlmproc,STATUS,TRACE
Displays information about trace types of IRLM subcomponents.

5.10.6.2 Monitoring of DB2 Locking


If one has problems with suspensions, timeouts or deadlocks, monitoring of DB2
locks is needed.
5.10.6.2.1 DISPLAY DATABASE
Using the DB2 command DISPLAY DATABASE, one can find out what locks are held
or waiting at any moment on any tablespace, partition or index. The report can
include Claim and Drain locks on logical partitions or indexes.
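
For example, to display the locks held on all spaces of a database (database name
hypothetical):

-DISPLAY DATABASE(DBX) SPACENAM(*) LOCKS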

If applicable, the output also includes information about physical I/O errors for those
objects. The use is as follows:

-DISPLAY DATABASE (dbname)

This results in the following messages, as shown in Figure 6:

11:44:32 DSNT360I - ***************************************


11:44:32 DSNT361I - * DISPLAY DATABASE SUMMARY
11:44:32 * report_type_list
11:44:32 DSNT360I - ***************************************
11:44:32 DSNT362I - DATABASE = dbname STATUS = xx
DBD LENGTH = yyyy

11:44:32 DSNT397I -

NAME TYPE PART STATUS PHYERRLO PHYERRHI CATALOG PIECE


-------- ---- ---- ---------------- --------- -------- -------- -----
D1 TS RW,UTRO
D2 TS RW
D3 TS STOP
D4 IX RO
D5 IX STOP
D6 IX UT
LOB1 LS RW
******* DISPLAY OF DATABASE dbname ENDED **********************
11:45:15 DSN9022I - DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
Figure 6: DISPLAY DATABASE Messages I

In the preceding messages:


 report_type_list indicates which options were included when the DISPLAY
DATABASE command was issued.

 dbname is an 8-byte character string indicating the database name. The pattern-
matching character, *, is allowed at the beginning, middle, and end of dbname.

 STATUS is a combination of one or more status codes delimited by a comma. The


maximum length of the string is 18 characters. If the status exceeds 18
characters, those characters are wrapped onto the next status line. Anything that
exceeds 18 characters on the second status line is truncated.

The pattern-matching character, *, can be used in the commands DISPLAY
DATABASE, START DATABASE, and STOP DATABASE, at the beginning, middle, and
end of the database and table space names. The keyword ONLY can be added to the
command DISPLAY DATABASE. When ONLY is specified with the DATABASE keyword
but not the SPACENAM keyword, all other keywords except RESTRICT, LIMIT, and
AFTER are ignored. Use DISPLAY DATABASE as follows:

-DISPLAY DATABASE (*S*DB*) ONLY

This results in the following messages, as shown in Figure 7:


11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT361I - * DISPLAY DATABASE SUMMARY
11:44:32 * GLOBAL
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT362I - DATABASE = DSNDB01 STATUS = RW
DBD LENGTH = 8066
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT362I - DATABASE = DSNDB04 STATUS = RW
DBD LENGTH = 21294
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT362I - DATABASE = DSNDB06 STATUS = RW
DBD LENGTH = 32985
11:45:15 DSN9022I - DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION

Figure 7: DISPLAY DATABASE Message II

In the preceding messages:

DATABASE (*S*DB*) displays databases that begin with any letter, have the letter S
followed by any letters, then the letters DB followed by any letters.

ONLY restricts the display to database names that fit the criteria.

The RESTRICT(REFP) option of the DISPLAY DATABASE command can be used to
limit the display to table spaces or partitions in refresh pending (REFP) status.

The ADVISORY option on the DISPLAY DATABASE command can be used to limit the
display to table spaces or indexes that require some corrective action. The DISPLAY
DATABASE ADVISORY command without the RESTRICT option can be used to
determine when:


 An index space is in the informational copy pending (ICOPY) advisory status

 A base table space is in the auxiliary warning (AUXW) advisory status


5.10.6.2.2 Use EXPLAIN to tell which locks DB2 chooses
Procedure:

 The EXPLAIN statement, or the EXPLAIN option of the BIND and REBIND
subcommands, can be used to determine which modes of table and table space
locks DB2 initially assigns for an SQL statement.

 EXPLAIN stores its results in a table called PLAN_TABLE. To review the results,
query PLAN_TABLE (a sample query follows this list). After running EXPLAIN,
each row of PLAN_TABLE describes the processing for a single table, either one
named explicitly in the SQL statement being explained or an intermediate table
that DB2 has to create. The column TSLOCKMODE of PLAN_TABLE shows the
initial lock mode for that table. The lock mode applies to the table or the table
space, depending on the value of LOCKSIZE and whether the table space is
segmented or nonsegmented.

 In Figure 8, one can find which table or table space lock is used, and whether
page or row locks are used as well, for the particular combination of lock mode
and LOCKSIZE one is interested in.
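
A minimal query for this purpose (the plan name in the predicate is a placeholder):

SELECT QUERYNO, TNAME, TSLOCKMODE
FROM PLAN_TABLE
WHERE APPLNAME = 'INVPLAN'
ORDER BY QUERYNO;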

For statements executed remotely: EXPLAIN gathers information only about data
access in the DBMS where the statement is run or the bind operation is carried out.
To analyze the locks obtained at a remote DB2 location, one must run EXPLAIN at
that location.
___________________________________ ____________________________________
| | Lock mode from EXPLAIN |
| |_______ ______ _______ ______ ______|
| Table space structure | IS | S | IX | U | X |
|___________________________________|_______|______|_______|______|______|
| For nonsegmented table spaces: | | | | | |
| Table space lock acquired is: | IS | S | IX | U | X |
| Page or row locks acquired? | Yes | No | Yes | No | No |
|___________________________________|_______|______|_______|______|______|
| Note: For partitioned table spaces defined with LOCKPART YES and for |
| which selective partition locking is used, the lock mode |
| applies only to those partitions that are locked. Lock modes |
| for LOB table spaces are not reported with EXPLAIN. |
|___________________________________ _______ ______ _______ ______ ______|
| For segmented table spaces with: | | | | | |
| LOCKSIZE ANY, ROW, or PAGE | | | | | |
| Table space lock acquired is: | IS | IS | IX | n/a | IX |
| Table lock acquired is: | IS | S | IX | n/a | X |
| Page or row locks acquired? | Yes | No | Yes | No | No |
|___________________________________|_______|______|_______|______|______|
| LOCKSIZE TABLE | | | | | |
| Table space lock acquired is: | n/a | IS | n/a | IX | IX |
| Table lock acquired is: | n/a | S | n/a | U | X |
| Page or row locks acquired? | No | No | No | No | No |
|___________________________________|_______|______|_______|______|______|
| LOCKSIZE TABLESPACE | | | | | |
| Table space lock acquired is: | n/a | S | n/a | U | X |
| Table lock acquired is: | n/a | n/a | n/a | n/a | n/a |
| Page or row locks acquired? | No | No | No | No | No |
|___________________________________|_______|______|_______|______|______|


Figure 8: Which locks DB2 chooses. N/A = Not applicable; Yes = Page or row locks are
acquired; No = No page or row locks are acquired.
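
As a quick sketch (the plan name MYPLAN is hypothetical), the initial lock mode that
EXPLAIN records can be retrieved from PLAN_TABLE with a query such as:

SELECT QUERYNO, QBLOCKNO, PLANNO, TNAME, TSLOCKMODE
  FROM PLAN_TABLE
 WHERE APPLNAME = 'MYPLAN'
 ORDER BY QUERYNO, QBLOCKNO, PLANNO;

Each row shows the initial lock mode (TSLOCKMODE) for one table accessed by the
statement, which can then be interpreted using Figure 8.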

5.10.6.2.3 Using the statistics and accounting traces


The statistics and accounting trace records contain information on locking. The IBM
licensed program, DATABASE 2 Performance Monitor (DB2 PM), provides one way to
view the trace results. Figure 9 contains extracts from the DB2 PM reports
Accounting Trace and Statistics Trace. Each of those corresponds to a single DB2
trace record. (Details of those reports are subject to change without notification from
DB2 and are available in the appropriate DB2 PM documentation). As the figure
shows:

 Statistics Trace tells how many suspensions, deadlocks, timeouts, and lock
escalations occur in the trace record.

 Accounting Trace gives the same information for a particular application. It also
shows the maximum number of concurrent page locks held and acquired during
the trace. Review applications with a large number to see if this value can be
lowered. This number is the basis for the proper setting of LOCKS PER USER and,
indirectly, LOCKS PER TABLE(SPACE).

Recommendations: One should check the results of the statistics and accounting
traces for the following possibilities:

 Lock escalations are generally undesirable and are caused by processes that use
a large number of page, row, or LOB locks. In some cases, it is possible to
improve system performance by using table or table space locks.

 Timeouts can be caused by a small value of RESOURCE TIMEOUT. If there are
many timeouts, check whether a low value for RESOURCE TIMEOUT is causing
them. Sometimes the problem suggests a need for some change in database
design.

LOCKING ACTIVITY QUANTITY /MINUTE /THREAD /COMMIT || LOCKING TOTAL


--------------------------- -------- ------- ------- ------- || ------------------- --------
SUSPENSIONS (ALL) 2 1.28 1.00 0.40 || TIMEOUTS 0
SUSPENSIONS (LOCK ONLY) 2 1.28 1.00 0.40 || DEADLOCKS 0
SUSPENSIONS (IRLM LATCH) 0 0.00 0.00 0.00 || ESCAL.(SHARED) 0
SUSPENSIONS (OTHER) 0 0.00 0.00 0.00 || ESCAL.(EXCLUS) 0
|| MAX PG/ROW LCK HELD 2
TIMEOUTS 0 0.00 0.00 0.00 || LOCK REQUEST 8
DEADLOCKS 1 0.64 0.50 0.20 || UNLOCK REQUEST 2
|| QUERY REQUEST 0
LOCK REQUESTS 17 10.92 8.50 3.40 || CHANGE REQUEST 5
UNLOCK REQUESTS 12 7.71 6.00 2.40 || OTHER REQUEST 0
QUERY REQUESTS 0 0.00 0.00 0.00 || LOCK SUSPENSIONS 1
CHANGE REQUESTS 5 3.21 2.50 1.00 || IRLM LATCH SUSPENS. 0
OTHER REQUESTS 0 0.00 0.00 0.00 || OTHER SUSPENSIONS 0
|| TOTAL SUSPENSIONS 1
LOCK ESCALATION (SHARED) 0 0.00 0.00 0.00 ||
LOCK ESCALATION (EXCLUSIVE) 0 0.00 0.00 0.00 || DRAIN/CLAIM TOTAL
|| ------------ --------
DRAIN REQUESTS 0 0.00 0.00 0.00 || DRAIN REQST 0
DRAIN REQUESTS FAILED 0 0.00 0.00 0.00 || DRAIN FAILED 0
CLAIM REQUESTS 7 4.50 3.50 1.40 || CLAIM REQST 4
CLAIM REQUESTS FAILED 0 0.00 0.00 0.00 || CLAIM FAILED 0

Figure 9: Locking activity blocks from statistics trace and accounting trace


5.10.7 Exercise
1. Which DB2 background process checks for deadlocks at regular intervals?
2. Commit frequency affects locking. True/False?
3. Which DSNZPARM controls the timeout duration?
4. What DB2 command shows the locks held on the database?

Answers:
1. Deadlock Detector
2. True
3. IRLMRWT
4. DISPLAY DATABASE

5.11 Concurrency

5.11.1 Introduction to Concurrency
Definition: Concurrency is the ability of more than one application process to access
the same data at essentially the same time.

Example: An application for order entry is used by many transactions
simultaneously. Each transaction makes inserts in tables of invoices and invoice
items, reads a table of data about customers, and reads and updates data about
items on hand. Two operations on the same data, by two simultaneous transactions,
might be separated only by microseconds. To the users, the operations appear
concurrent.

Conceptual background: Concurrency must be controlled to prevent lost updates
and such possibly undesirable effects as unrepeatable reads and access to
uncommitted data.

 Lost updates: Without concurrency control, two processes, A and B, might both
read the same row from the database, and both calculate new values for one of
its columns, based on what they read. If A updates the row with its new value,
and then B updates the same row, A's update is lost.

 Access to uncommitted data: Also without concurrency control, process A
might update a value in the database, and process B might read that value
before it was committed. Then, if A's value is not later committed, but backed
out, B's calculations are based on uncommitted (and presumably incorrect) data.

 Unrepeatable reads: Some processes require the following sequence of events:
A reads a row from the database and then goes on to process other SQL
requests. Later, A reads the first row again and must find the same values it read
the first time. Without control, process B could have changed the row between
the two read operations.

To prevent those situations from occurring unless they are specifically allowed, DB2
might use locks to control concurrency.

What do locks do? A lock associates a DB2 resource with an application process in
a way that affects how other processes can access the same resource. The process
associated with the resource is said to "hold" or "own" the lock. DB2 uses locks to
ensure that no process accesses data that has been changed, but not yet committed,
by another process.

What do you do about locks? To preserve data integrity, the application process
acquires locks implicitly, that is, under DB2 control. It is not necessary for a process
to request a lock explicitly to conceal uncommitted data. Therefore, sometimes one
need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid
acquiring, locks based on certain general parameters. One can make better use of
the resources and improve concurrency by understanding the effects of those
parameters.

Concurrency normally goes hand-in-hand with the performance of an application
and is, in most cases, a trade-off with data integrity: to ensure the best data
integrity, the locks need to be restrictive, which reduces the concurrency level of
the application. So, while designing a database for an application, and the
application itself, one needs to balance these two aspects. There is no
hard-and-fast rule for arriving at an optimized, balanced design by looking at the
database alone. It is a function of the requirements of the application, the size of
the database, the amount of processing that goes into the application, the extent
of data integrity required, and many other factors.

Here are a few items that affect concurrency, and an idea as to how to use them to
get results demanded by the application.

5.11.2 ISOLATION Level
We have already discussed the different levels of isolation in Section 6.3. Now
we shall see how they affect concurrency.

The various isolation levels offer less or more concurrency at the cost of more or less
protection from other application processes. The values one chooses should be based
primarily on the needs of the application. This section presents the isolation levels in
order from the one offering the least concurrency (RR) to that offering the most
(UR).

5.11.2.1 Repeatable Read (RR)


Allows the application to read the same pages or rows more than once without
allowing any UPDATE, INSERT, or DELETE by another process. All accessed rows
or pages are locked, even if they do not satisfy the predicate.

Figure 10 shows that all locks are held until the application commits. In the
following example, the rows held by locks L2 and L4 satisfy the predicate.


Figure 10: How an application using RR isolation acquires locks. All locks are held until the
application commits.

Applications that use repeatable read can leave rows or pages locked for longer
periods, especially in a distributed environment, and they can claim more logical
partitions than similar applications using cursor stability.

Applications that use repeatable read and access a nonpartitioning index cannot
run concurrently with utility operations that drain all claim classes of the
nonpartitioning index, even if they are accessing different logical partitions. For
example, an application bound with ISOLATION(RR) cannot update partition 1
while the LOAD utility loads data into partition 2. Concurrency is restricted
because the utility needs to drain all the repeatable-read applications from the
nonpartitioning index to protect the repeatability of the reads by the application.

Because so many locks can be taken, lock escalation might take place. Frequent
commits release the locks and can help avoid lock escalation.

With repeatable read, lock promotion occurs for a table space scan to prevent the
insertion of rows that might qualify for the predicate. (If access is via index, DB2
locks the key range. If access is via table space scans, DB2 locks the table, partition,
or table space.)

An installation option determines the mode of lock chosen for a cursor defined with
the clause FOR UPDATE OF and bound with repeatable read.

5.11.2.2 Read Stability (RS)


 Allows the application to read the same pages or rows more than once without
allowing qualifying rows to be updated or deleted by another process. It offers
possibly greater concurrency than repeatable read, because although other
applications cannot change rows that are returned to the original application,
they can insert new rows or update rows that did not satisfy the original
application's search condition. Only those rows or pages that satisfy the stage 1
predicate (and all rows or pages evaluated during stage 2 processing) are locked
until the application commits. Figure 11 illustrates this. In the example, the rows
held by locks L2 and L4 satisfy the predicate.


Figure 11: How an application using RS isolation acquires locks when no lock avoidance
techniques are used. Locks L2 and L4 are held until the application commits. The other locks
aren't held.

Applications using read stability can leave rows or pages locked for long periods,
especially in a distributed environment. If read-stability is used, one should plan
for frequent commit points.

 An installation option determines the mode of lock chosen for a cursor defined
with the clause FOR UPDATE OF and bound with read stability.

5.11.2.3 Cursor Stability (CS)


Allows maximum concurrency with data integrity. However, after the process leaves
a row or page, another process can change the data. With CURRENTDATA(NO), the
process doesn't have to leave a row or page to allow another process to change the
data. If the first process returns to read the same row or page, the data is not
necessarily the same. Consider these consequences of that possibility:

 For table spaces created with LOCKSIZE ROW, PAGE, or ANY, a change can occur
even while executing a single SQL statement, if the statement reads the same
row more than once. In the following example, data read by the inner SELECT
can be changed by another transaction before it is read by the outer SELECT.
Therefore, the information returned by this query might be from a row that is no
longer the one with the maximum value for COL1.

SELECT * FROM T1
WHERE COL1 = (SELECT MAX(COL1) FROM T1);

 In another case, if your process reads a row and returns later to update it, that
row might no longer exist or might not exist in the state that it did when your
application process originally read it. That is, another application might have
deleted or updated the row. If your application is doing non-cursor operations on
a row under the cursor, make sure the application can tolerate "not found"
conditions.
Similarly, assume another application updates a row after you read it. If your
process returns later to update it based on the value you originally read, you are,
in effect, erasing the update made by the other process. If you use isolation (CS)
with update, your process might need to lock out concurrent updates. One
method is to declare a cursor with the clause FOR UPDATE OF.
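
A minimal sketch of such a cursor follows; the ACCOUNTS table, its columns, and the
host variable are hypothetical:

EXEC SQL DECLARE C1 CURSOR FOR
  SELECT ACCTNO, BALANCE
    FROM ACCOUNTS
   WHERE BRANCH = :BRANCHID
  FOR UPDATE OF BALANCE;

The lock acquired on each fetched row (typically a U lock) then blocks concurrent
updaters until the cursor moves on or the application commits.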


For packages and plans that contain updatable scrollable cursors, ISOLATION(CS)
lets DB2 use optimistic concurrency control. DB2 can use optimistic concurrency
control to shorten the amount of time that locks are held in the following situations:

 Between consecutive fetch operations

 Between fetch operations and subsequent positioned update or delete operations

Figure 12 and Figure 13 show processing of positioned update and delete operations
without optimistic concurrency control and with optimistic concurrency control.

Figure 12: Positioned updates and deletes without optimistic concurrency control

Figure 13: Positioned updates and deletes with optimistic concurrency control

Optimistic concurrency control consists of the following steps:


I) When the application requests a fetch operation to position the cursor on a row,
DB2 locks that row, executes the FETCH, and releases the lock.

II) When the application requests a positioned update or delete operation on the
row, DB2 performs the following steps:

a) Locks the row.


b) Reevaluates the predicate to ensure that the row still qualifies for the result
table.
c) For columns that are in the result table, compares current values in the row
to the values of the row when step I was executed. Performs the positioned
update or delete operation only if the values match.
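
An updatable scrollable cursor of the kind that qualifies for optimistic concurrency
control might be declared as follows (table and column names hypothetical):

EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT ACCTNO, BALANCE
    FROM ACCOUNTS
  FOR UPDATE OF BALANCE;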

5.11.2.4 Uncommitted Read (UR)


Allows the application to read while acquiring few locks, at the risk of reading
uncommitted data. UR isolation applies only to read-only operations: SELECT,
SELECT INTO, or FETCH from a read-only result table.

There is an element of uncertainty about reading uncommitted data.

Example: An application tracks the movement of work from station to station along
an assembly line. As items move from one station to another, the application
subtracts from the count of items at the first station and adds to the count of items
at the second. Assume one wants to query the count of items at all the stations,
while the application is running concurrently.

What can happen if the query reads data that the application has changed
but has not committed?

 If the application subtracts an amount from one record before adding it to
another, the query could miss the amount entirely.

 If the application adds first and then subtracts, the query could add the amount
twice.

If those situations can occur and are unacceptable, it is not advisable to use UR
isolation.

Restrictions: One cannot use UR isolation for the types of statement listed below. If
one binds with ISOLATION(UR), and the statement does not specify WITH RR or
WITH RS, then DB2 uses CS isolation for:
- INSERT, UPDATE, and DELETE
- Any cursor defined with FOR UPDATE OF
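
Note that isolation can also be chosen per statement with the WITH clause; for
example (table name hypothetical), a query can read through locks as follows:

SELECT SUM(ITEM_COUNT)
  FROM STATION_INVENTORY
  WITH UR;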

When can one use uncommitted read (UR)? You can probably use UR isolation
in cases like the following ones:

 When errors cannot occur.

Example: A reference table, like a table of descriptions of parts by part number.
It is rarely updated, and reading an uncommitted update is probably no more
damaging than reading the table 5 seconds earlier. Go ahead and read it with
ISOLATION(UR).

Example: The employee table of Spiffy Computer, our hypothetical user. For
security reasons, updates can be made to the table only by members of a single
department. And that department is also the only one that can query the entire
table. It is easy to restrict queries to times when no updates are being made and
then run with UR isolation.

 When an error is acceptable.

Example: Spiffy wants to do some statistical analysis on employee data. A typical
question is, "What is the average salary by sex within education level?" Because
reading an occasional uncommitted record cannot affect the averages much, UR
isolation can be used.

 When the data already contains inconsistent information.

Example: Spiffy gets sales leads from various sources. The data is often
inconsistent or wrong, and end users of the data are accustomed to dealing with
that. Inconsistent access to a table of data on sales leads does not add to the
problem.

It is recommended not to use uncommitted read (UR):

 When the computations must balance
 When the answer must be accurate
 When you are not sure it can do no damage

Restrictions on concurrent access: An application using UR isolation cannot run
concurrently with a utility that drains all claim classes. Also, the application must
acquire the following locks:

A special mass delete lock acquired in S mode on the target table or table space. A
"mass delete" is a DELETE statement without a WHERE clause; that operation must
acquire the lock in X mode and thus cannot run concurrently.

An IX lock on any table space used in the work file database. That lock prevents
dropping the table space while the application is running.

5.11.3 Concurrency vs Lock Size


As long as multiple transactions access tables for the purpose of reading data,
concurrency should be only a minor concern. What becomes more of an issue is the
situation in which at least one transaction writes to a table. Unless an appropriate
index is defined on a table, there is almost no concurrent write access to the table.
Concurrent updates are only possible with Intent Share or Intent Exclusive locks. If no
index exists for the locked table, the entire table must be scanned for the
appropriate data row (table scan). In this case, the transaction must hold either a
share or an exclusive lock on the table. Simply creating indexes on all tables does
not guarantee concurrency. The DB2 optimizer decides whether indexes are used in
processing the SQL statements, so even if indexes are defined, the optimizer might
choose to perform a table scan for any of several reasons:


 No index is defined for the search criteria (WHERE clause). The index key must
match the columns used in the WHERE clause in order for the optimizer to use
the index to help locate the desired rows. If one chooses to optimize for high
concurrency, one should make sure the table design includes a primary key for
each table that will be updated. These primary keys should then be used
whenever these tables are referenced with an UPDATE SQL statement.

 Direct access might be faster than via the index. The table must be large enough
so the optimizer thinks it is worthwhile to take the extra step of going through
the index, rather than just searching all the rows in the table. For example, the
optimizer would probably not use any index defined on a table with only four
rows of data.

 A large number of row / page locks will be acquired. If many rows in the table are
going to be accessed by a transaction, the optimizer will probably acquire a table
or tablespace lock.

Any time one transaction holds a lock on a table or row, other transactions might be
denied access until the owner transaction has terminated. To optimize for maximum
concurrency, a small, row-level lock is usually better than a large table lock. However,
because locks require storage space (to keep) and processing time (to manage), one
can minimize both these factors by using one large lock rather than many small ones.

5.11.4 Deadlock
Please refer to section 9.1 for details about deadlock.

Application designers need to watch out for deadlock scenarios when designing high-
concurrency applications that are to be run by multiple concurrent users. In
situations where the same set of rows will likely be read and then updated by
multiple copies of the same application program, the program should be designed to
roll back and retry any transactions that might be terminated as a result of a
deadlock situation. As a general rule, the shorter the transaction, the less likely the
transaction will be to get into a deadlock cycle. Setting the proper interval for the
deadlock detector (in the database configuration file) is also necessary to ensure
good concurrent application performance. An interval that is too short will cause
unnecessary overhead, and an interval that is too long will allow a deadlock cycle to
delay a process for an unacceptable amount of time. One must balance the possible
delays in resolving deadlocks against the overhead of deadlock detection.

5.11.5 Lock Compatibility
If a transaction A is running, holding a certain lock on resource X, then transaction B,
which also requires a lock on resource X, can run concurrently with transaction A
only if the lock requested by transaction B is compatible with the lock that
transaction A holds on resource X. For example, two S locks on the same resource
are compatible, but an X lock is compatible with no other lock.

5.11.6 Lock Conversion
Lock conversion often prevents transactions from running concurrently by promoting
the lock one of them is holding on a particular resource to a more restrictive type.
This can prevent the other transaction from doing its own updates, thereby causing
waits, timeouts, or deadlocks in some cases.


5.11.7 Lock Escalation
Lock escalation takes place when the number of page or row locks held by an
application on a table space exceeds the LOCKMAX parameter for that table space.
Escalation enhances the performance of the transaction in question by reducing the
overhead of managing an excessive number of page or row locks, but it comes at
the cost of concurrency.
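
The threshold is set per table space. As a hedged example (database and table
space names hypothetical), the following sets page locking with an escalation point
of 1000 locks:

ALTER TABLESPACE PAYROLLDB.ACCTSPC
      LOCKSIZE PAGE
      LOCKMAX 1000;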

5.11.8 Basic recommendations to promote concurrency

Recommendations are grouped roughly by their scope, as follows:

5.11.8.1 Recommendations for system options


 Reduce swapping: If a task is waiting or is swapped out and the unit of work
has not been committed, then it still holds locks. When a system is heavily
loaded, contention for processing, I/O, and storage can cause waiting. One
should consider reducing the number of initiators, increasing the priority for the
DB2 tasks, and providing more processing, I/O, or storage resources.

 Make way for the IRLM: One should make sure that the IRLM has a high MVS
dispatching priority or is assigned to the SYSSTC service class. It should come
next after VTAM and before DB2.

 If more ECSA can be defined, then one should start the IRLM with PC=NO rather
than PC=YES. One can make this change without changing the application
process. This change can also reduce processing time.

 Restrict updating of partitioning key columns: In systems with high
concurrency and long running transactions, allowing updating of partitioning key
columns when the update moves the row from one partition to another can cause
concurrency problems. One should allow updating only when the row stays in the
same partition by setting the UPDATE PART KEY COLS field in DSNTIP4 to SAME.

5.11.8.2 Recommendations for database design


 Keep like things together: One should cluster tables relevant to the same
application into the same database, and give each application process that
creates private tables a private database in which to do it. In the ideal model,
each application process uses as few databases as possible.

 Keep unlike things apart: One should give users different authorization IDs for
work with different databases; for example, one ID for work with a shared
database and another for work with a private database. This effectively adds to
the number of possible (but not concurrent) application processes while
minimizing the number of databases each application process can access.

 Plan for batch inserts: If the application does sequential batch insertions,
excessive contention on the space map pages for the table space can occur. This
problem is especially apparent in data sharing, where contention on the space
map means the added overhead of page P-lock negotiation. For these types of
applications, one should consider using the MEMBER CLUSTER option of CREATE
TABLESPACE. This option causes DB2 to disregard the clustering index (or
implicit clustering index) when assigning space for the SQL INSERT statement.

 Use LOCKSIZE ANY until you have reason not to: LOCKSIZE ANY is the
default for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2
usually chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB table
spaces.

For LOB table spaces, it chooses LOCKSIZE LOB and LOCKMAX SYSTEM. One
should use LOCKSIZE TABLESPACE or LOCKSIZE TABLE only for read-only
tablespaces or tables, or when concurrent access to the object is not needed.
Before one chooses LOCKSIZE ROW, one should estimate whether there will be
an increase in overhead for locking and weigh that against the increase in
concurrency.

 Examine small tables: For small tables with high concurrency requirements,
one should estimate the number of pages in the data and in the index. If the
index entries are short or they have many duplicates, then the entire index can
be one root page and a few leaf pages. In this case, one should spread out the
data to improve concurrency, or consider it a reason to use row locks.

 Partition the data: Online queries typically make few data changes, but they
occur often. Batch jobs are just the opposite; they run for a long time and
change many rows, but occur infrequently. The two do not run well together. One
might be able to separate online applications from batch, or two batch jobs from
each other. To separate online and batch applications, one should provide
separate partitions. Partitioning can also effectively separate batch jobs from
each other.

 Fewer rows of data per page: By using the MAXROWS clause of CREATE or
ALTER TABLESPACE, one can specify the maximum number of rows that can be
on a page. For example, if one uses MAXROWS 1, each row occupies a whole
page, and a page lock is confined to a single row. One should consider this option
if one has a reason to avoid using row locking, such as in a data sharing
environment where row locking overhead can be excessive.

Fewer rows per page can also be achieved by properly using the PCTFREE
parameter for a tablespace.
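
For example (names hypothetical, reusing the table space from the earlier sketch),
the following limits each page to a single row:

ALTER TABLESPACE PAYROLLDB.ACCTSPC
      MAXROWS 1;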

5.11.8.3 Recommendations for application design


 Access data in a consistent order: When different applications access the
same data, one should try to make them do so in the same sequence. For
example, one should make both access rows 1,2,3,5 in that order. In that case,
the first application to access the data delays the second, but the two
applications cannot deadlock. For the same reason, one should try to make
different applications access the same tables in the same order.

 Commit work as soon as is practical: To avoid unnecessary lock contention,
one should issue a COMMIT statement as soon as possible after reaching a point
of consistency, even in read-only applications. To prevent unsuccessful SQL
statements (such as PREPARE) from holding locks, one should issue a ROLLBACK
statement after a failure. Statements issued through SPUFI can be committed
immediately by the SPUFI autocommit feature.

Taking commit points frequently in a long running unit of recovery (UR) has the
following benefits:

 Reduces lock contention.


 Improves the effectiveness of lock avoidance, especially in a data
sharing environment.
 Reduces the elapsed time for DB2 system restart following a system
failure.
 Reduces the elapsed time for a unit of recovery to rollback following an
application failure or an explicit rollback request by the application.
 Provides more opportunity for utilities, such as online REORG, to break
in.

One should consider using the UR CHECK FREQ field or the UR LOG WRITE
CHECK field of installation panel DSNTIPN to help identify those applications that
are not committing frequently. UR CHECK FREQ, which identifies when too many
checkpoints have occurred without a UR issuing a commit, is helpful in
monitoring overall system activity. UR LOG WRITE CHECK enables one to detect
applications that might write too many log records between commit points,
potentially creating a lengthy recovery situation for critical tables.

Even though an application might conform to the commit frequency standards of
the installation under normal operational conditions, variation can occur based on
system workload fluctuations. For example, a low-priority application might issue
system workload fluctuations. For example, a low-priority application might issue
a commit frequently on a system that is lightly loaded. However, under a heavy
system load, the use of the CPU by the application may be pre-empted, and, as a
result, the application may violate the rule set by the UR CHECK FREQ
parameter. For this reason, one should add logic to the application to commit
based on time elapsed since last commit, and not solely based on the amount of
SQL processing performed. In addition, one should take frequent commit points
in a long running unit of work that is read-only to reduce lock contention and to
provide opportunities for utilities, such as online REORG, to access the data.

 Retry an application after deadlock or timeout: One should include logic in a
batch program so that it retries an operation after a deadlock or timeout. Such a
method could help recover from the situation without assistance from operations
personnel. Field SQLERRD (3) in the SQLCA returns a reason code that indicates
whether a deadlock or timeout occurred.

 Close cursors: If a cursor is defined using the WITH HOLD option, the locks it
needs could be held past a commit point. One should use the CLOSE CURSOR
statement as soon as possible in the program to cause those locks to be released
and the resources they hold to be freed at the first commit point that follows the
CLOSE CURSOR statement. Whether page or row level locks are held for WITH
HOLD cursors is controlled by the RELEASE LOCKS parameter on panel DSNTIP4.

 Bind plans with ACQUIRE (USE): ACQUIRE (USE), which indicates that DB2
will acquire table and table space locks when the objects are first used and not
when the plan is allocated, is the best choice for concurrency.

Packages are always bound with ACQUIRE (USE), by default. ACQUIRE
(ALLOCATE) can provide better protection against timeouts. One should consider
ACQUIRE (ALLOCATE) for applications that need gross locks instead of intent
locks or that run with other applications that may request gross locks instead of
intent locks. Acquiring the locks at plan allocation also prevents any one
transaction in the application from incurring the cost of acquiring the table and
table space locks. If one needs ACQUIRE (ALLOCATE), one might want to bind all
DBRMs directly to the plan.

 Bind with ISOLATION(CS) and CURRENTDATA(NO) typically:
ISOLATION(CS) lets DB2 release acquired row and page locks as soon as
possible. CURRENTDATA(NO) lets DB2 avoid acquiring row and page locks as
often as possible. After that, in order of decreasing preference for concurrency,
one should use these bind options:

- ISOLATION(CS) with CURRENTDATA(YES), when data returned to the
  application must not be changed before the next FETCH operation.

- ISOLATION(RS), when data returned to the application must not be changed
  before the application commits or rolls back. However, one does not care if
  other application processes insert additional rows.

- ISOLATION(RR), when data evaluated as the result of a query must not be
  changed before the application commits or rolls back. New rows cannot be
  inserted into the answer set.

For updateable scrollable cursors, ISOLATION(CS) provides the additional
advantage of letting DB2 use optimistic concurrency control to further reduce the
amount of time that locks are held.

 Use ISOLATION(UR) cautiously: UR isolation acquires almost no locks on rows
or pages. It is fast and causes little contention, but it reads uncommitted data.
One should not use it unless one is sure that the application and end users can
accept the logical inconsistencies that can occur.

 Use global transactions: The Recoverable Resource Manager Services
attachment facility (RRSAF) relies on an OS/390 component called OS/390
Transaction Management and Recoverable Resource Manager Services (OS/390
RRS). OS/390 RRS provides system-wide services for coordinating two-phase
commit operations across MVS products. For RRSAF applications and IMS
transactions that run under OS/390 RRS, one can group together a number of
DB2 agents into a single global transaction. A global transaction allows multiple
DB2 agents to participate in a single global transaction and thus share the same
locks and access the same data. When two agents that are in a global transaction
access the same DB2 object within a unit of work, those agents will not deadlock
with each other. The following restrictions apply:

- There is no Parallel Sysplex support for global transactions.

- Because all the "branches" of a global transaction share locks,
  uncommitted updates issued by one branch of the transaction are
  visible to other branches of the transaction.

- Claim/drain processing is not supported across the branches of a
  global transaction, which means that attempts to issue CREATE,
  DROP, ALTER, GRANT, or REVOKE may deadlock or timeout if they
  are requested from different branches of the same global
  transaction.


- Attempts to update a partitioning key may deadlock or timeout
  because of the same restrictions on claim/drain processing.

- LOCK TABLE may deadlock or timeout across the branches of a
  global transaction.
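
Putting several of the preceding bind recommendations together, a plan might be
bound as sketched below; the plan and DBRM names are hypothetical, and the exact
options should follow the installation's standards:

BIND PLAN(ORDERPLN) -
     MEMBER(ORDERPGM) -
     ACQUIRE(USE) -
     RELEASE(COMMIT) -
     ISOLATION(CS) -
     CURRENTDATA(NO)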

5.11.9 Exercise
1. During BIND, which ISOLATION option gives maximum concurrency?
2. Which ACQUIRE and RELEASE combination would give maximum concurrency?

Answers:
1. UR
2. USE and COMMIT

5.12 DB2 Subsystem Object Locking

There is some locking activity that is not related to user data but takes place in
shared DB2 subsystem objects. User application programs can cause this locking
activity; so can the processing of DB2 system plans. One is never aware of most of
this locking activity, although in some cases it can produce suspension problems in
your system. The DB2 system plans that generate locking activity are these:

BCT, the plan used to perform service tasks and handle requests from DB2
resource managers. A lock issued by this plan is usually of short duration.

ACT, the authorization plan used in the process of validating a user's
authority to access a specified plan. A lock issued by this plan is of short
duration, but ACT is used frequently, with every create thread.

DSNBIND, the plan used for all binds. It is a major contributor to the length of
time it takes to complete a bind.

DSNUTIL, the plan used by DB2 utility control program DSNUTILB. The
duration of locks held by this plan varies according to the specific DB2 utility
function being invoked.

The DB2 subsystem objects involved in locking activity are:


 DB2 catalog and directory
 Skeleton cursor table (SKCT) and skeleton package table (SKPT)
 Database descriptors (DBDs).

5.12.1 Locks on the DB2 Catalog and Directory


The DB2 Catalog, which consists of tables of data about everything that is defined to
the DB2 system, and the DB2 directory, whose tables contain information that DB2
uses to control normal operation, are subject to update activity that must be
serialized.


Although the catalog and directory are designed to minimize contention among
application processes, update activity may lead to locking problems in concurrency
situations.

Five main activities can cause suspension problems in the DB2 catalog and
directory:

 DB2 tasks that update DB2 catalog and directory tables. An example is
writing to SYSLGRNG every time a table space or partition is opened and
updated.

 The BIND, REBIND, and FREE process, which reads some DB2 catalog
tables, such as SYSIBM.SYSTABLES, and inserts and/or updates others,
such as SYSIBM.SYSPACKAGE.

 Data definition processes, such as CREATE, ALTER and DROP, which also
insert, update, or delete catalog and directory entries (for example,
DBD01).

 Data control processes, such as GRANT and REVOKE, which affect the
concurrency of the catalog authorization tables (for example,
SYSIBM.SYSUSERAUTH).

 Utility processing, which may also lock some directory tables (for
example, SYSUTILX).

To avoid most of these concurrency problems, one should process all the data
definition, data control and BIND activities in a dedicated window outside
production hours. One can also greatly relieve limitations on concurrency by
converting all catalog and directory indexes to Type 2, thus avoiding all index
locking.

5.12.2 Locks on Skeleton Cursor Tables (SKCT)


The SKCT is located in the SCT02 table space in the directory and describes the
structure of SQL statements in application plans. The SKPT is located in the SPT01
table space in the directory and applies to packages.

When one binds a plan, DB2 creates an SKCT in SCT02. When one binds a package,
DB2 creates an SKPT in SPT01.

The following operations require exclusive control of the related SKCT or SKPT
because they update it:

 Using BIND, REBIND, and FREE for the plan or package.


 Dropping a resource or authority on which the plan or package depends.
 In some cases, altering a resource or authority on which the plan or package
depends.

When one runs a plan or package, DB2 takes a shared lock on it, so that changes
cannot be made to the object while it is being executed in an application program.
Thus, the execution of a plan or package contends with operations listed above.


5.12.3 Locks on Database Descriptors


The database descriptors (DBDs) describe the DB2 databases. Each DBD fully
describes a database and all of its objects, such as table spaces, tables, indexes,
and referential relationships, and contains other access information. DBDs are
located in the table space DBD01 in the directory.

Two main processes have to read and lock a DBD in shared mode:
 Dynamic SQL statements
 Active utilities.

Static SQL does not take any locks on the DBD if the DBD is cached in the
environment descriptor management (EDM) pool. Therefore DB2 does not have to go
to the DB2 directory table space to retrieve the DBD.

Each time a database or a dependent object definition is modified, the DBD object in
the directory must be updated. This update process requires an exclusive lock on
the DBD which is incompatible with dynamic SQL and utility execution. If the DBD
is in use for a plan or package, dynamic SQL and utility execution are suspended.

Figure 14 summarizes the DB2 subsystem locking activity.

__________________________________________________________________________________
| DB2 Process |
|________ ________________ ___________ ________ ___________ ___________ ___________|
| | | | | | | Drop | | |
| Object of | Static| | | Create | Alter | Table | | |
| Locking | SQL |Dynamic SQL| BIND | Table | Table | Space | Grant | Revoke|
|___________|_______|___________|_______|_________|_______|_______|_______|________|
| Catalog | IS | IS | IX | IX | IX | IX | IX | IX |
| Table | (a) | (b) | | | | | | |
| spaces | | | | | | | | |
|___________|_______|___________|_______|_________|_______|_______|_______|________|
| SKCT or | S | S | X | - | X | X | - | X |
| SKPT | | | | | (c) | (d) | | |
|___________|_______|___________|_______|_________|_______|_______|_______|________|
| DBD | - | S | S | X | X | X | - | - |
| | (e) | | | | | | | |
|___________|_______|___________|_______|_________|_______|_______|_______|________|
| Notes: |
| |
| (a) IS locks on the catalog table spaces are held only for a short time to check|
| EXECUTE authority if the plan or package is not public or the authorization |
| list is not cached in the EDM pool. |
| |
| (b) Except when checking EXECUTE authority (see Note a), IS locks on the |
| catalog table spaces are held until the COMMIT point. |
| |
| (c) SKCT or SKPT is marked invalid if a referential constraint (such as a new |
| primary key or foreign key) is added or changed, or the AUDIT attribute is |
| changed in the table. |
| |
| (d) SKCT or SKPT is marked invalid as a result of a drop table space operation. |
| |
| (e) If the DBD is not in the EDM pool, S-locks are acquired on the DBD table |
| space, which effectively locks the DBD. |
__________________________________________________________________________________|

Figure 14: DB2 Subsystem Locking Activity


5.13 Review Questions

1. What is the difference between a gross lock and an intent lock?

2. What is the difference between lock and latch?

3. What is the difference between lock promotion and lock escalation?

4. What are the different values for ISOLATION, the BIND parameter? Arrange
them in the ascending order of restrictiveness as far as locking is concerned.

5. What is the difference between deadlock and timeout?

6. How does DB2 resolve deadlock?

7. What is UTIMOUT?

8. When is a process said to be in suspended state?

9. What are the LOCKSIZE and LOCKMAX parameters?

5.14 Reference

 http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DSNAG0F4/CCONTENTS
 DB2 by Craig Mullins


UNIT - VI
6. Dynamic SQL
6.1 Unit Objectives

This section describes Dynamic SQL in detail and its usage in application
programs.

6.2 Introduction

Static SQL is hard coded, and only the values of host variables in predicates can
change. Dynamic SQL is characterized by its capability to change columns, tables,
and predicates during a program’s execution. This flexibility requires different
techniques for embedding dynamic SQL in an application program.

Programs containing embedded dynamic SQL statements must be precompiled like
those containing static SQL, but unlike static SQL, the dynamic SQL statements are
constructed and prepared at run time. The SQL statement text is prepared and
executed using either the PREPARE and EXECUTE statements, or the EXECUTE
IMMEDIATE statement. The statement can also be executed with the cursor
operations if it is a SELECT statement.

6.3 Coding Dynamic SQL in Application Program

For most DB2 users, static SQL (embedded in a host language program and bound
before the program runs) provides a straightforward, efficient path to DB2 data. You
can use static SQL when you know before run time what SQL statements your
application needs to execute.

Dynamic SQL prepares and executes the SQL statements within a program, while the
program is running. There are four types of dynamic SQL:

 Embedded Dynamic SQL: The application puts the SQL source in host
variables and includes PREPARE and EXECUTE statements that tell DB2
to prepare and run the contents of those host variables at run time.
The programs that include embedded dynamic SQL must go through
pre-compile and bind.

 Interactive SQL: A user enters SQL statements through SPUFI. DB2
prepares and executes those statements as dynamic SQL statements.

 Deferred embedded SQL: Deferred embedded SQL statements are
neither fully static nor fully dynamic. Like static statements, deferred
embedded SQL statements are embedded within applications, but like
dynamic statements, they are prepared at run time. DB2 processes
deferred embedded SQL statements with bind-time rules. For example,
DB2 uses the authorization ID and qualifier determined at the bind
time as the plan or package owner. Deferred embedded SQL
statements are used for DB2 private protocol access to remote data.


 Dynamic SQL through ODBC functions: The application contains
ODBC function calls that pass dynamic SQL statements as arguments.
Programs that use ODBC function calls do not need to be precompiled
and bound.

6.3.1 Choosing between static and dynamic SQL


When you use static SQL, you cannot change the form of SQL statements unless you
make changes to the program. However, you can increase the flexibility of those
statements by using host variables.

A program that provides for dynamic SQL accepts as input, or generates, an SQL
statement in the form of a character string. You can simplify the programming if you
can plan the program not to use SELECT statements, or to use only those that return
a known number of values of known types. In the most general case, in which you
do not know in advance about the SQL statements that will execute, the program
typically takes these steps:
1. Translates the input data, including any parameter markers, into an SQL
statement
2. Prepares the SQL statement to execute and acquires a description of the
result table
3. Obtains, for SELECT statements, enough main storage to contain retrieved
data
4. Executes the statement or fetches the rows of data
5. Processes the information returned
6. Handles SQL return codes.
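
In the simpler fixed-list case, where the SELECT is built at run time but always
returns a known number of values of known types, the flow reduces to a sketch like
the following (the statement name, cursor, and host variables are hypothetical):

EXEC SQL DECLARE C1 CURSOR FOR S1;
EXEC SQL PREPARE S1 FROM :STMTSTR;
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :EMPNAME, :EMPSAL;
EXEC SQL CLOSE C1;

(FETCH is normally repeated until SQLCODE +100 indicates the end of the result
table.)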

6.3.2 Performance of Static and Dynamic SQL


To access DB2 data, an SQL statement requires an access path. Two big factors in
the performance of an SQL statement are the amount of time that DB2 uses to
determine the access path at run time and whether the access path is efficient. DB2
determines the access path for a statement at either of these times:
 When you bind the plan or package that contains the SQL statement
 When the SQL statement executes

The time at which DB2 determines the access path depends on these factors:
 Whether the statement is executed statically or dynamically
 Whether the statement contains input host variables
For dynamic SQL statements, DB2 determines the access path at run time, when the
statement is prepared. This can make the performance worse than that of static SQL
statements. However, if you execute the same SQL statement often, you can use the
dynamic statement cache to decrease the number of times that those dynamic
statements must be prepared.

6.3.3 Caching Dynamic SQL statements and KEEPDYNAMIC


As DB2's ability to optimize SQL has improved, the cost of preparing a dynamic SQL
statement has grown. Applications that use dynamic SQL might be forced to pay this
cost more than once. When an application performs a commit operation, it must
issue another PREPARE statement if that SQL statement is to be executed again. For
a SELECT statement, the ability to declare a cursor WITH HOLD provides some relief
but requires that the cursor be open at the commit point. WITH HOLD also causes
some locks to be held for any objects that the prepared statement is dependent on.


Also, WITH HOLD offers no relief for SQL statements that are not SELECT
statements.

DB2 can save prepared dynamic statements in a cache. The cache is a DB2-wide
cache in the EDM pool that all application processes can use to store and retrieve
prepared dynamic statements. After an SQL statement has been prepared and is
automatically stored in the cache, subsequent prepare requests for that same SQL
statement can avoid the costly preparation process by using the statement in the
cache. Cached statements can be shared among different threads, plans, or
packages.

Eligible Statements: The following statements are eligible for caching.

 SELECT
 UPDATE
 INSERT
 DELETE

Distributed and local SQL statements are eligible. Prepared, dynamic statements
using DB2 private protocol access are eligible.

6.3.3.1 Keeping prepared statements after commit points


The bind option KEEPDYNAMIC(YES) lets you hold dynamic statements past a
commit point for an application process. An application can issue a PREPARE for a
statement once and omit subsequent PREPAREs for that statement.

PREPARE STMT1 FROM ... Statement is prepared.


EXECUTE STMT1
COMMIT
.
.
.
EXECUTE STMT1 Application does not issue PREPARE.
COMMIT
.
.
.
EXECUTE STMT1 Again, no PREPARE needed.
COMMIT

Figure 1: Dynamic SQL with bind option KEEPDYNAMIC(YES)

Relationship between KEEPDYNAMIC(YES) & statement caching:


When the dynamic statement cache is not active, and you run an application bound
with KEEPDYNAMIC(YES), DB2 saves only the statement string for a prepared
statement after a commit operation. On a subsequent OPEN, EXECUTE, or
DESCRIBE, DB2 must prepare the statement again before performing the requested
operation.

When the dynamic statement cache is active, and you run an application bound with
KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement and the
statement string. The prepared statement is cached locally for the application
process. It is likely that the statement is globally cached in the EDM pool, to benefit
other application processes. If the application issues an OPEN, EXECUTE, or
DESCRIBE after a commit operation, the application process uses its local copy of the
prepared statement to avoid a prepare and a search of the cache.

PREPARE STMT1 FROM ... Statement is prepared and put in memory.


EXECUTE STMT1
COMMIT
.
.
.
EXECUTE STMT1 Application does not issue PREPARE.
COMMIT DB2 uses the prepared statement in
Memory.
.
.
EXECUTE STMT1 Again, no PREPARE needed.
COMMIT DB2 uses the prepared statement in
Memory.
.
.
.
PREPARE STMT1 FROM ... Statement is prepared and put in
Memory.

Figure 2: Using KEEPDYNAMIC(YES) when dynamic statement cache is active

The local instance of the prepared SQL statement is kept in ssnmDBM1 storage until
one of the following occurs:
 The application process ends.
 A rollback operation occurs.
 The application issues an explicit PREPARE statement with the same
statement name. (If the application does issue a PREPARE for the same
SQL statement name that has a kept dynamic statement associated with
it, the kept statement is discarded and DB2 prepares the new statement.)
 The statement is removed from memory because the statement has
not been used recently, and the number of kept dynamic SQL
statements reaches a limit set at installation time.

If an application requester does not issue a PREPARE after a COMMIT, the package at
the DB2 for OS/390 server must be bound with KEEPDYNAMIC(YES). If both
requester and server are DB2 for OS/390 subsystems, the DB2 requester assumes
that the KEEPDYNAMIC value for the package at the server is the same as the value
for the plan at the requester.

The KEEPDYNAMIC option has performance implications for DRDA clients that specify
WITH HOLD on their cursors:
 If KEEPDYNAMIC (NO) is specified, a separate network message is
required when the DRDA client issues the SQL CLOSE for the cursor.
 If KEEPDYNAMIC (YES) is specified, the DB2 for OS/390 server
automatically closes the cursor when SQLCODE +100 is detected,
which means that the client does not have to send a separate message
to close the held cursor. This reduces network traffic for DRDA
applications that use held cursors. It also reduces the duration of locks
that are associated with the held cursor.

6.3.4 Dynamic SQL with resource limit facility


The resource limit facility (or governor) limits the amount of CPU time an SQL
statement can take, which prevents SQL statements from making excessive
requests. The predictive governing function of the resource limit facility provides an
estimate of the processing cost of SQL statements before they run. To predict the
cost of an SQL statement, you execute EXPLAIN to put information about the
statement cost in DSN_STATEMNT_TABLE.

The governor controls only the dynamic SQL manipulative statements SELECT,
UPDATE, DELETE, and INSERT. Each dynamic SQL statement used in a program is
subject to the same limits. The limit can be a reactive governing limit or a predictive
governing limit. If the statement exceeds a reactive governing limit, the statement
receives an error SQL code. If the statement exceeds a predictive governing limit, it
receives a warning or error SQL code.
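
After EXPLAIN has populated DSN_STATEMNT_TABLE, the cost estimates can be
inspected with a query such as the following (the plan name is hypothetical):

SELECT QUERYNO, COST_CATEGORY, PROCMS, PROCSU
  FROM DSN_STATEMNT_TABLE
 WHERE APPLNAME = 'MYPLAN'
 ORDER BY QUERYNO;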

6.3.4.1 Writing an application to handle reactive governing


When a dynamic SQL statement exceeds a reactive governing threshold, the
application program receives SQLCODE -905. The application must then determine
what to do next.

If the failed statement involves an SQL cursor, the cursor's position remains
unchanged. The application can then close that cursor. All other operations with the
cursor do not run and the same SQL error code occurs.

If the failed SQL statement does not involve a cursor, then all changes that the
statement made are undone before the error code returns to the application. The
application can either issue another SQL statement or commit all work done so far.
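
A minimal COBOL sketch of such handling (the paragraph names
REPORT-LIMIT-EXCEEDED and SQL-ERROR-ROUTINE are illustrative):

EXEC SQL EXECUTE STMT1 END-EXEC.
EVALUATE TRUE
    WHEN SQLCODE = 0
        CONTINUE
    WHEN SQLCODE = -905
*       Resource limit exceeded: the statement's changes were
*       already undone, so commit prior work and report it.
        EXEC SQL COMMIT END-EXEC
        PERFORM REPORT-LIMIT-EXCEEDED
    WHEN OTHER
        PERFORM SQL-ERROR-ROUTINE
END-EVALUATE.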

6.3.4.2 Writing an application to handle predictive governing


If the installation uses predictive governing, you need to modify your applications to
check for the +495 and -495 SQLCODEs that predictive governing can generate after
a PREPARE statement executes. The +495 SQLCODE in combination with deferred
prepare requires that DB2 do some special processing to ensure that existing
applications are not affected by this new warning SQLCODE.
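
A hedged COBOL sketch of that check (the paragraph names are again illustrative):

EXEC SQL PREPARE STMT1 FROM :DSTRING END-EXEC.
EVALUATE TRUE
    WHEN SQLCODE = +495
*       Warning threshold exceeded: let the user decide whether
*       the potentially expensive statement should still run.
        PERFORM CONFIRM-WITH-USER
    WHEN SQLCODE = -495
*       Error threshold exceeded: the statement must not run.
        PERFORM REPORT-LIMIT-EXCEEDED
    WHEN SQLCODE < 0
        PERFORM SQL-ERROR-ROUTINE
END-EVALUATE.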

6.3.5 Dynamic SQL for non-SELECT statements


The easiest way to use dynamic SQL is not to use SELECT statements dynamically.
Because you do not need to dynamically allocate any main storage, you can write
your program in any host language, including OS/VS COBOL and FORTRAN.

The application program must take the following steps:


1. Include an SQLCA. The requirements for an SQL communications area
(SQLCA) are the same as for static SQL statements. For REXX, DB2 includes
the SQLCA automatically.
2. Load the input SQL statement into a data area. The procedure for building or
reading the input SQL statement is not discussed here; the statement
depends on your environment and sources of information. You can read in
complete SQL statements, or you can get information to build the statement
from data sets, a user at a terminal, previously set program variables, or
tables in the database.
3. Execute the statement.
4. Handle any errors that might result. The requirements are the same as those
for static SQL statements. The return code from the most recently executed
SQL statement appears in the host variables SQLCODE and SQLSTATE or
corresponding fields of the SQLCA.

6.3.5.1 Dynamic execution using EXECUTE IMMEDIATE


To execute a DELETE statement that has been read into the host variable
DSTRING, use:

EXEC SQL
EXECUTE IMMEDIATE :DSTRING;

DSTRING is a character-string host variable. EXECUTE IMMEDIATE causes the
DELETE statement to be prepared and executed immediately.

DSTRING is the name of a host variable, and is not a DB2 reserved word. In
assembler, COBOL and C, you must declare it as a varying-length string variable. In
FORTRAN, it must be a fixed-length string variable. In PL/I, it can be a fixed- or
varying-length character string variable, or any PL/I expression that evaluates to a
character string.
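
As a sketch, the declaration and call might look like this in COBOL (the
length 200 and the statement text are illustrative):

WORKING-STORAGE SECTION.
01  DSTRING.
    49  DSTRING-LEN  PIC S9(4) COMP.
    49  DSTRING-TXT  PIC X(200).
...
MOVE 'DELETE FROM DSN8610.EMP WHERE EMPNO = ''000010''' TO DSTRING-TXT.
*   46 is the length of the statement text just moved in.
MOVE 46 TO DSTRING-LEN.
EXEC SQL EXECUTE IMMEDIATE :DSTRING END-EXEC.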

6.3.5.2 Dynamic execution using PREPARE and EXECUTE


We can think of PREPARE and EXECUTE as an EXECUTE IMMEDIATE done in two
steps. The first step, PREPARE, turns a character string into an SQL statement, and
then assigns it a name of our choice.

For example, let the host variable DSTRING have the value "DELETE FROM
DSN8610.EMP WHERE EMPNO = ?". To prepare an SQL statement from that string
and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;

The prepared statement still contains a parameter marker, for which you must
supply a value when the statement executes. After the statement is prepared, the
table name is fixed, but the parameter marker allows you to execute the same
statement many times with different values of the employee number.

EXECUTE executes a prepared SQL statement, naming a list of one or more host
variables, or a host structure, that supplies values for all of the parameter markers.

After you prepare a statement, you can execute it many times within the same unit
of work. In most cases, COMMIT or ROLLBACK destroys statements prepared in a
unit of work. Then, you must prepare them again before you can execute them
again. However, if you declare a cursor for a dynamic statement and use the option
WITH HOLD, a commit operation does not destroy the prepared statement if the
cursor is still open. You can execute the statement in the next unit of work without
preparing it again.

Example:

A DO loop executing a static SQL statement:

DO UNTIL (EMP = 0);
   EXEC SQL
   DELETE FROM DSN8610.EMP WHERE EMPNO = :EMP ;
   < Read a value for EMP from the list. >
END;

The equivalent dynamic SQL statements:

< Read a statement containing parameter markers into DSTRING. >
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
   EXEC SQL EXECUTE S1 USING :EMP;
   < Read a value for EMP from the list. >
END;

The PREPARE statement prepares the SQL statement and calls it S1. The EXECUTE
statement executes S1 repeatedly, using different values for EMP.

6.3.6 Dynamic SQL for fixed-list SELECT statements


The term "fixed-list" does not imply that you must know in advance how many rows
of data will return; however, you must know the number of columns and the data
types of those columns. A fixed-list SELECT statement returns a result table that can
contain any number of rows; your program looks at those rows one at a time, using
the FETCH statement. Each successive fetch returns the same number of values as
the last, and the values have the same data types each time. Therefore, you can
specify host variables as you do for static SQL.

An advantage of the fixed-list SELECT is that you can write it in any of the
programming languages that DB2 supports. Varying-list dynamic SELECT statements
require assembler, C, PL/I, and versions of COBOL other than OS/VS COBOL.

To execute a fixed-list SELECT statement dynamically, your program must:


1. Include an SQLCA
2. Load the input SQL statement into a data area
3. Declare a cursor for the statement name.

Dynamic SELECT statements cannot use INTO; hence, you must use a cursor to put
the results into host variables. In declaring the cursor, use the statement name (call
it STMT), and give the cursor itself a name (for example, C1):
EXEC SQL DECLARE C1 CURSOR FOR STMT;
4. Prepare the statement

Prepare a statement (STMT) from DSTRING. Here is one possible PREPARE
statement:
EXEC SQL PREPARE STMT FROM :DSTRING;

As with non-SELECT statements, the fixed-list SELECT could contain
parameter markers. However, this example does not need them.
To execute STMT, your program must open the cursor, fetch rows from the result
table, and close the cursor. The following sections describe how to do those
steps.
5. Open the cursor

The OPEN statement evaluates the SELECT statement named STMT. For
example:

Without parameter markers: EXEC SQL OPEN C1;

If STMT contains parameter markers, then you must use the USING clause of
OPEN to provide values for all of the parameter markers in STMT. If there are
four parameter markers in STMT, you need:

EXEC SQL OPEN C1 USING :PARM1, :PARM2, :PARM3, :PARM4;


6. Fetch rows from result table.

Your program could repeatedly execute a statement such as this:


EXEC SQL FETCH C1 INTO :NAME, :PHONE;

The key feature of this statement is the use of a list of host variables to
receive the values returned by FETCH. The list has a known number of items
(two--:NAME and :PHONE) of known data types (both are character strings,
of lengths 15 and 4, respectively).

It is possible to use this list in the FETCH statement only because you planned
the program to use only fixed-list SELECTs. Every row that cursor C1 points
to must contain exactly two character values of appropriate length.

7. Close the cursor
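
Pulling steps 3 through 7 together, a minimal COBOL sketch might read as
follows (PROCESS-ROW is an illustrative paragraph name):

EXEC SQL DECLARE C1 CURSOR FOR STMT END-EXEC.
EXEC SQL PREPARE STMT FROM :DSTRING END-EXEC.
EXEC SQL OPEN C1 END-EXEC.
PERFORM UNTIL SQLCODE NOT = 0
    EXEC SQL FETCH C1 INTO :NAME, :PHONE END-EXEC
    IF SQLCODE = 0
        PERFORM PROCESS-ROW
    END-IF
END-PERFORM.
*   SQLCODE is +100 when the last row has been fetched.
EXEC SQL CLOSE C1 END-EXEC.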

6.3.7 Dynamic SQL for varying-list SELECT statements


A varying-list SELECT statement returns rows containing an unknown number of
values of unknown type. When you use one, you do not know in advance exactly
what kinds of host variables you need to declare in order to store the results.
Because the varying-list SELECT statement requires pointer variables for the SQL
descriptor area, you cannot issue it from a FORTRAN or an OS/VS COBOL program. A
FORTRAN or OS/VS COBOL program can call a subroutine written in a language that
supports pointer variables (such as PL/I or assembler), if you need to use a varying-
list SELECT statement.

Suppose your program dynamically executes SQL statements, but this time without
any limits on their form. Your program reads the statements from a terminal, and
you know nothing about them in advance. They might not even be SELECT
statements.

As with non-SELECT statements, your program puts the statements into a varying-
length character variable; call it DSTRING. Your program goes on to prepare a
statement from the variable and then give the statement a name; call it STMT.

Now there is a new wrinkle. The program must find out whether the statement is a
SELECT. If it is, the program must also find out how many values are in each row,
and what their data types are. The information comes from an SQL descriptor area
(SQLDA).

6.3.7.1 SQL Descriptor Area (SQLDA)


The SQLDA is a structure used to communicate with your program, and storage for it is
usually allocated dynamically at run time.

To include the SQLDA in a PL/I or C program, use:

EXEC SQL INCLUDE SQLDA;

For assembler, use this in the storage definition area of a CSECT:

EXEC SQL INCLUDE SQLDA

For COBOL, except for OS/VS COBOL, use:

EXEC SQL INCLUDE SQLDA END-EXEC.

You cannot include an SQLDA in an OS/VS COBOL, FORTRAN, or REXX program.
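
Once the SQLDA is included, the program can ask DB2 to describe a prepared
statement into it. A minimal sketch, assuming the SQLDA has been allocated
with enough SQLVAR entries:

EXEC SQL PREPARE STMT FROM :DSTRING END-EXEC.
EXEC SQL DESCRIBE STMT INTO :SQLDA END-EXEC.
*   After DESCRIBE, SQLD is 0 if STMT is not a SELECT;
*   otherwise SQLD gives the number of result columns, and
*   the SQLVAR entries describe their data types and lengths.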

6.3.7.2 How bind option REOPT(VARS) affects dynamic SQL


When you specify the bind option REOPT(VARS), DB2 reoptimizes the access path at
run time for SQL statements that contain host variables, parameter markers, or
special registers. The option REOPT(VARS) has the following effects on dynamic SQL
statements:
 When you specify the option REOPT(VARS), DB2 automatically uses
DEFER(PREPARE), which means that DB2 waits to prepare a statement
until it encounters an OPEN or EXECUTE statement.
 When you execute a DESCRIBE statement and then an EXECUTE
statement on a non-SELECT statement, DB2 prepares the statement
twice: Once for the DESCRIBE statement and once for the EXECUTE
statement. DB2 uses the values in the input variables only during the
second PREPARE. These multiple PREPAREs can cause performance to
degrade if your program contains many dynamic non-SELECT
statements. To improve performance, consider putting the code that
contains those statements in a separate package and then binding that
package with the option NOREOPT(VARS).
 If you execute a DESCRIBE statement before you open a cursor for
that statement, DB2 prepares the statement twice. If, however, you
execute a DESCRIBE statement after you open the cursor, DB2
prepares the statement only once. To improve the performance of a
program bound with the option REOPT(VARS), execute the DESCRIBE
statement after you open the cursor. To prevent an automatic
DESCRIBE before a cursor is opened, do not use a PREPARE statement
with the INTO clause.
 If you use predictive governing for applications bound with
REOPT(VARS), DB2 does not return a warning SQL code when dynamic
SQL statements exceed the predictive governing warning threshold.
DB2 does return an error SQLCODE when dynamic SQL statements
exceed the predictive governing error threshold. DB2 returns the error
SQL code for an EXECUTE or OPEN statement.
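
As a sketch, the package split suggested above might be bound like this
(collection and member names are illustrative); the first package holds the
statements that benefit from run-time reoptimization, the second the dynamic
non-SELECT statements:

BIND PACKAGE(DEVCOLL) MEMBER(PGMMAIN) REOPT(VARS)
BIND PACKAGE(DEVCOLL) MEMBER(PGMNSEL) NOREOPT(VARS)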

6.3.8 Exercise
Questions:
1. List down the different types of Dynamic SQL.
2. What is the dynamic SQL cache?
3. Which are the eligible statements for caching?
4. How can we predict the cost of an SQL statement?
5. What is the SQLDA?

Answers:
1. Embedded Dynamic SQL, Interactive SQL, Deferred embedded SQL,
Dynamic SQL through ODBC functions
2. DB2 can save prepared dynamic statements in a cache. The cache is a
DB2-wide cache in the EDM pool that all application processes can use to
store and retrieve prepared dynamic statements. After an SQL statement
has been prepared and is automatically stored in the cache, subsequent
prepare requests for that same SQL statement can avoid the costly
preparation process by using the statement in the cache.
3. SELECT, UPDATE, INSERT, and DELETE
4. To predict the cost of an SQL statement, you execute EXPLAIN to put
information about the statement cost in DSN_STATEMNT_TABLE.
5. The SQLDA is a structure used to communicate with your program,
and storage for it is usually allocated dynamically at run time.

6.4 Review Questions

1. How can we monitor dynamic SQL?
2. How does the EDM pool size vary with respect to dynamic SQL?
3. How does REOPT(VARS) affect dynamic SQL?

6.5 Reference

 IBM DB2 Manual (V6 Reference)
 www.db2mag.com
 www.idug.org
 www.db2azine.com

UNIT - VII
7. Stored Procedures
7.1 Unit Objectives

This unit will acquaint readers with the following topics:

1. Introduction to Stored Procedures (SP)
2. How to run SP
3. Types of address spaces available to SPs
4. Runtime Environments of SPs
5. Introduction to SP Builder
6. Various performance considerations for SPs
7. Do’s and Don’ts for SPs

7.2 Introduction

7.2.1 What are Stored Procedures?


DB2's stored procedures function offers significant time and overhead savings for
applications using DRDA. Stored procedures can also be accessed locally. Using a
stored procedure, the client application issues a single network operation to run the
stored procedure. The stored procedure can issue many SQL statements using a
single thread (the same thread used by the calling application) and returns a
parameter list in another single operation.

A stored procedure is simply a DB2 application program, containing SQL
statements, that runs in a DB2-managed or WLM-managed stored procedures
address space. With a single operation, a series of SQL statements are executed in
the stored procedure, thus significantly decreasing the costs of distributed SQL
statement processing.

[Figure: Sequence of steps in the execution of a user stored procedure.
User input flows to the calling program, which calls the stored procedure;
the stored procedure accesses the database and returns a result set and
output to the calling program.]

7.2.2 When do we use them?


The question now is when we should put our SQL statements in a Stored
Procedure instead of having them in our program. The circumstances that call
for the use of Stored Procedures are as follows:
 When the number of SQL statements is large.
 When the SQL has a number of joins which might affect performance of the
application.
 When a single query is being used by a number of programs or applications.
 In a distributed system when the network traffic affects the performance of
the application.
 While using a dynamic SQL when we need to cut down on authorization
checks that are done each time the database is accessed.
 When we need to minimize the locking done.
 When security of the SQL statements and the host variables used is an
important issue.


7.2.3 Advantages
 Reusability
 Consistency
 Data Integrity
 Performance
 Security

7.2.4 Disadvantages
 It is more cumbersome to change a Stored Procedure as compared to a
normal program due to the higher levels of security involved.
 Inclusion of business logic within the Stored Procedure makes it less
generic and hence should be avoided.
 CICS transactions should not be called from a Stored Procedure.

7.2.5 Types of Stored Procedures


As has been mentioned earlier the Stored Procedure is a compiled piece of code
written in a host language and contains one or more SQL statements. The SQL
statements in the Stored Procedure can be in the form of a cursor. Parameters can be
passed to the Stored Procedure or back to the calling program using host variables,
which should be declared appropriately, as for example, in the Linkage Section in
COBOL programs. Stored Procedures in general can be classified into ones that use
the Result Set concept and ones that don’t. Let me elaborate slightly on both these
types:
 
With Result Set – A Result Set is, in layman’s terms, the set of rows returned to
the calling program by the Stored Procedure. An important point to be kept in mind is
that while we are using the Result Set concept in our Stored Procedure, we need to
call the Stored Procedure only once. This concept requires us to be familiar with a
few other terms. The Result Set Locator is a 4-byte value that uniquely identifies a
Result Set returned by a Stored Procedure in the calling program. The Result Set
Locator should be defined as ‘USAGE SQL TYPE IS RESULT-SET-LOCATOR VARYING.’
in the Working Storage Section of the calling program in COBOL. The Result Set
concept should be used in the situation when we are not sure about the maximum
number of rows that can be returned by the Stored Procedure. When we use the
Result Set concept we should put the SQL statements, which would be returning the
rows of the Result Set, in a cursor.
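
As a sketch, such a cursor might be declared like this in COBOL (the table is
the DB2 sample table; the host variable name is illustrative):

EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
    SELECT EMPNO, LASTNAME
    FROM   DSN8610.EMP
    WHERE  WORKDEPT = :WS-DEPT
END-EXEC.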
 
Without Result Set – When we don’t use the Result Set concept it is not necessary
that the SQL statements should be in a cursor. But the catch here is that we should
be sure of the maximum number of rows that can be returned by the Stored
Procedure. Thus the calling program should have ample provision for the same. In
this case the calling program should call the Stored Procedure repeatedly, until
SQLCODE = +100. It is convenient in this case to declare an array for receiving the
values returned by the Stored Procedure if more than one row might be returned.
Also note that the size of the array should be equal to the maximum number of rows
that can be returned by the Stored Procedure.
 
Comparison – Let us see how using the Result Set concept stands against not
using it:
 
 Using the Result Set concept will free us from the pain of deciding upon the
maximum number of rows that can be returned by the Stored Procedure.

 If we do not use the Result Set concept we have to declare an array to
accommodate the maximum number of rows, if the Stored Procedure returns
more than one row.
 However, if the number of rows that are to be returned by the Stored
Procedure is not large (say, about 25), it would be more convenient not to use
the Result Set concept as it might affect the performance of the application.
 The non-Result Set Stored Procedure is usually more efficient than the Stored
Procedure that uses the Result Set concept in terms of performance when the
number of rows selected is less.
 However, we need to use the non-Result Set Stored Procedure if we need to
use multiple queries to fetch data within the Stored Procedure.
 Also, when we need to include some business logic within the Stored
Procedure, we should use the non-Result Set type of Stored Procedure.

7.2.6 Exercise
1. When network traffic is a concern, it is advised not to use SPs. True/False?
2. Maintaining SPs is easier compared to embedded SQL. True/False?
3. Security is better for SPs as compared to embedded SQL. True/False?
4. What are two types of SPs?

Answers: 1. False 2. False 3. True 4. a. SPs using Result Sets and b. SPs that
don’t use Result Sets

7.3 SP Related Terminology

Here we would take a look at a few terms that get used in connection with SPs.

The Load Module for the Stored Procedure is always present in a different Load
Library. In most cases the name of this Load Library would be ‘TEST.SP.LOADLIB’,
unless the System Administrator has modified this. Before the Stored Procedure can
be executed it needs to be declared in the SYSIBM.SYSPROCEDURES table. This is one of
the system tables that is maintained by DB2 and can be updated only by the DBA.
Some of the columns that are of interest are enumerated below:
 
 PROCEDURE – This column contains the name of the Stored Procedure.
 LOADMOD – This is the name of the Load Module. This name may be different
from the Procedure name.
 LINKAGE – This column specifies whether the input parameters for the Stored
Procedure can be nulls or not. In case the Stored Procedure is defined with
‘input characters can be null’, then there should be a check at the beginning to
determine whether the input parameters are nulls.
 LANGUAGE – This column specifies the host language used for coding the
Stored Procedure.
 COLLID – This column contains the collection name for the stored procedure
package. If COLLID is blank, the client application collection name is used.
 PGM_TYPE – This column specifies whether the Stored Procedure is a Main
program or a Sub program. If the Stored Procedure is declared as a Main
program then it should not have the GOBACK statement at the end.
 RESULT_SETS – This column specifies the maximum number of Result Sets
that the Stored Procedure can return to the calling program.

 PARMLIST – The input and output parameters being used by the Stored
Procedure are declared in this column along with their size and type.
Parameters can be of three types, namely, input, output and inout types.
 COMMIT_ON_RETURN – This column specifies whether the unit of work done
by the Stored Procedure should be committed as soon as the Stored
Procedure ends successfully and control is returned back to the calling
program.
 STAYRESIDENT – This column determines whether the Stored Procedure is
deleted from memory when execution is over. If this column is defined as ‘Y’
then it means that the Stored Procedure remains in memory even after
execution is over. If, however, this column is blank, then the Stored Procedure is
deleted from memory once it ends.
 
There are a few other columns in the SYSIBM.SYSPROCEDURES table, which contain
other attributes of the Stored Procedure. The Online Manuals contain detailed
descriptions of all the columns in the SYSIBM.SYSPROCEDURES table.

7.4 Example of Stored Procedure

Now let us take a look at a Stored Procedure that uses the Result Set concept with
the host language as COBOL. Please refer to Appendix D.

7.4.1 A few salient points about the Stored Procedure:

7.4.1.1 Stored Procedure using Result Set concept


 It uses a cursor to fetch the Result Sets from the database.
 The cursor should be declared with the ‘WITH RETURN FOR’ clause.
 In Stored Procedures that use the Result Set concept the cursor should
be opened and then left open. The cursor is not closed within the Stored
Procedure.
 The cursor will be closed automatically once the calling program completes
execution.
 The variables being used for passing values should be in the linkage
section.
 Validation of the input parameters is important before we use them in the
cursor.
 The Stored Procedure should terminate with the GOBACK statement.
 In some of the systems the Stored Procedure should be compiled with the
Stored Procedure option as ‘Y’ (this was applicable to the Mainframe I
worked on). This however can vary from one system to another. So,
please check with your System Administrator or DBA before compiling the
Stored Procedure.
 In some of the systems, only a user with SYSADM privileges can compile
the Stored Procedure.
 It can be bound as any other COBOL program.
 It is possible for a Stored Procedure to call another Stored Procedure in
DB2 Ver-6 only if the calling Stored Procedure has been defined as a Main
Program in the SYSIBM.SYSPROCEDURES table and not as a Sub Program.
This is however not possible in DB2 Ver-5.
 The DRDA client needs to support level 3 result sets.
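
Pulling these points together, a skeletal result-set Stored Procedure in COBOL
might look as follows (the names and the sample table are illustrative;
Appendix D contains the complete example):

LINKAGE SECTION.
01  LK-DEPT    PIC X(3).
01  LK-STATUS  PIC S9(9) COMP.

PROCEDURE DIVISION USING LK-DEPT, LK-STATUS.
*   Validate the input parameter before using it in the cursor.
    IF LK-DEPT = SPACES
        MOVE -1 TO LK-STATUS
        GOBACK
    END-IF.
    EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
        SELECT EMPNO, LASTNAME
        FROM   DSN8610.EMP
        WHERE  WORKDEPT = :LK-DEPT
    END-EXEC.
*   Open the cursor and leave it open; the calling program
*   fetches the result set.
    EXEC SQL OPEN C1 END-EXEC.
    MOVE SQLCODE TO LK-STATUS.
    GOBACK.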
 

7.4.1.2 When the Result Set concept is not used


 Values are passed to the Stored Procedure and results back to the calling
program using input-output parameters.
 A copybook should be used for the purpose of passing parameters between
the Stored Procedure and the calling program (an example of the copybook
structure is given later in the document).
 A cursor can be used to fetch data in this kind of a Stored Procedure, but it is
not necessary to declare it with the ‘WITH RETURN FOR’ clause.
 This kind of a Stored Procedure can have multiple SQL statements.
 Business logic can also be included within the Stored Procedure.
 All cursors opened during the execution of the Stored Procedure need to be
explicitly closed before control is returned to the calling program.
 A point of great importance is that we should always validate the input data
being passed by the calling program to the Stored Procedure and pass back
appropriate error messages to the calling program in case exceptions occur.
 If we retrieve data within the Stored Procedure using an SQL statement that
doesn’t use a primary key, we should handle SQLCODE = -811 for that query.
 Null indicators should be used while fetching data and appropriate values
moved to the output variables in case any of the fetched values are null.

7.5 Running your SP (The calling program)

Now we come to the structure of the calling program. A few salient points about the
calling program:

7.5.1 When the Result Set concept is used


 All the input and output parameters used for passing values should be
properly initialized.
 The EXEC SQL CALL statement used to call the Stored Procedure should use
the name of the Stored Procedure and not the Load Module name.
 SQLCODE = +466 for the call to the Stored Procedure implies that the call
has been successful.
 The Result Set Locators need to be associated before they can be allocated to
a Result Set.
 For the Associate and Allocate operations an SQLCODE of zero would indicate
success.
 A Fetch statement is required in the calling program to fetch the Result Sets.
 The Fetch operation should be continued till SQLCODE = +100.
 Null indicators should be used appropriately during the Fetch operation.
 In case the input and output parameters have a number of sub-variables,
then the Redefines clause should be used so that the number of input and
output parameters used match the numbers defined in the
SYSIBM.SYSPROCEDURES table.
 Commit should be done in the calling program if the COMMIT_ON_RETURN =
‘N’ in the SYSIBM.SYSPROCEDURES table.
 Closing the cursor is not necessary as it is automatically done once the calling
program commits or rolls back.
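
A minimal COBOL sketch of this sequence (the procedure, cursor, and variable
names are illustrative):

WORKING-STORAGE SECTION.
01  WS-LOC1  USAGE SQL TYPE IS RESULT-SET-LOCATOR VARYING.
...
EXEC SQL CALL SP1 (:WS-DEPT, :WS-STATUS) END-EXEC.
IF SQLCODE = +466
    EXEC SQL ASSOCIATE LOCATORS (:WS-LOC1)
             WITH PROCEDURE SP1 END-EXEC
    EXEC SQL ALLOCATE C1 CURSOR
             FOR RESULT SET :WS-LOC1 END-EXEC
*   Fetch until SQLCODE = +100 signals the end of the result set.
    PERFORM UNTIL SQLCODE NOT = 0
        EXEC SQL FETCH C1 INTO :WS-EMPNO, :WS-NAME END-EXEC
        IF SQLCODE = 0
            PERFORM PROCESS-ROW
        END-IF
    END-PERFORM
END-IF.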

7.5.2 When the Result Set concept is not used


 All the input and output parameters used for passing values should be properly
initialized.

 The EXEC SQL CALL statement used to call the Stored Procedure should use the
name of the Stored Procedure and not the Load Module name.
 SQLCODE = +466 for the call to the Stored Procedure implies that the call has
been successful.
 In this case we need not use Locators as they are used only when Result Sets are
used.
 A Fetch statement is not required to retrieve data pulled out by the Stored
Procedure, as we use input-output parameters for this. All data would be fetched
within the Stored Procedure and moved to the appropriate output variables and
made accessible to the calling program.
 In case the input and output parameters have a number of sub-variables, then
the Redefines clause should be used so that the number of input and output
parameters used match the numbers defined in the SYSIBM.SYSPROCEDURES
table.
 We should use a Copybook to pass parameters to the Stored Procedure and back
to the calling program for a non Result Set Stored Procedure.
 Commit should be done in the calling program if the COMMIT_ON_RETURN = ‘N’
in the SYSIBM.SYSPROCEDURES table.

7.5.3  Exercise
1. What is the significance of LINKAGE column in SYSIBM.SYSPROCEDURES
table?
2. What is the significance of STAYRESIDENT column in
SYSIBM.SYSPROCEDURES table?
3. When a SP is using the Result Set concept, what clause is a MUST in the
declaration of a cursor?
4. SQLCODE ____ implies that a call to SP is successful.
5. When a SP uses Result Set concept, it is mandatory to close all cursors.
True/False?
6. A FETCH statement is a MUST when not using Result Set concept. True/False?

Answers:
1. LINKAGE column specifies whether parameters passed to SP could be nulls.
2. The STAYRESIDENT column value decides whether the SP remains in memory
after execution.
3. WITH RETURN FOR
4. +466
5. False
6. False

7.6 SP Address Space

A DB2 subsystem has the following started-task address spaces:


 ssnmDBM1 for database services,
 ssnmMSTR for system services,
 ssnmDIST for the distributed data facility
 ssnmSPAS for the DB2-established stored procedures address space
 Names for your WLM-established address spaces for stored procedures

You must associate each of these address spaces with a RACF user ID. Each of them
can also be assigned a RACF group name. The DB2 address spaces cannot be started
with batch jobs.

For stored procedures, stored procedures address space entries are required in the
RACF started procedures table. The associated RACF user ID and group name do not
have to match those used for the DB2 address spaces, but they must be authorized
to run the call attachment facility (for the DB2-established stored procedure address
space) or Recoverable Resource Manager Services attachment facility (for WLM-
established stored procedure address spaces).

7.6.1 Changing the RACF Started Procedures Table


To change the RACF started procedures table (ICHRIN03), change, reassemble, and
link edit the resulting object code to MVS. The IDs and group names associated with
the address spaces are shown in Appendix A. Please refer to Appendix B for a
Sample Job to Reassemble the RACF Started Procedures.

7.6.2 Guidelines for effective use of address space


To maximize the number of stored procedures that can run concurrently in a stored
procedures address space, use the following guidelines:
 Set REGION size for the stored procedures address spaces to REGION=0 to
obtain the largest possible amount of storage below the 16MB line.
 Limit storage required by application programs below the 16MB line by:
o Link editing programs above the line with AMODE(31) and RMODE(ANY)
attributes
o Using the RES and DATA(31) compiler options for COBOL programs
 Limit storage required by IBM Language Environment for MVS & VM by using
these runtime options:
o HEAP(,,ANY) to allocate program heap storage above the 16MB line
o STACK(,,ANY,) to allocate program stack storage above the 16MB line
o STORAGE(,,,4K) to reduce reserve storage area below the line to 4KB
o BELOWHEAP(4K,,) to reduce the heap storage below the line to 4KB
o LIBSTACK(4K,,) to reduce the library stack below the line to 4KB
o ALL31(ON) to indicate all programs contained in the stored procedure run
with AMODE(31) and RMODE(ANY)

If you follow these guidelines, each TCB that runs in the DB2-established stored
procedures address space requires approximately 100KB below the 16MB line. Each
TCB that runs in a WLM-established stored procedures address space uses
approximately 200KB.

DB2 needs extra storage for stored procedures in the WLM-established address
space because you can create both main and sub programs, and DB2 must create an
environment for each.

You must have Language Environment to run stored procedures. Your requirements
can differ significantly depending on your release of Language Environment.

7.6.3 Dynamically Extending Load Libraries


It is recommended to use partitioned data sets extended (PDSEs) for load libraries
containing stored procedures. Using PDSEs may eliminate your need to stop and
start the stored procedures address space due to growth of the load libraries. If a
load library grows from additions or replacements, the library may have to be
extended.

If you use PDSEs for the load libraries, the new extent information is dynamically
updated and you do not need to stop and start the address space. If PDSs are used,
load failures may occur because the new extent information is not available.

7.6.3.1 Assigning Stored Procedures to WLM Application Environments


Workload manager routes work to stored procedure address spaces based on the
environment name and service class associated with the stored procedure. You must
use WLM panels to associate an application environment name with the JCL
procedure used to start an address space. See MVS/ESA Planning: Workload
Management for details about workload management panels.

There are other tasks that must be completed before a stored procedure can run in a
WLM-established stored procedures address space. Here is a summary of those
tasks:
1. Make sure you have a numeric value specified in the TIMEOUT VALUE field of
installation panel DSNTIPX. If you have problems with setting up the
environment, this timeout value ensures that your stored procedures will not
hang for an unlimited amount of time.
2. If you want to migrate any stored procedures that use the DB2-established
stored procedure address space (ssnmSPAS), you must link edit them or code
them so that they use the Recoverable Resource Manager Services
attachment facility (RRSAF) instead of the call attachment facility. Use the JCL
startup procedure for the WLM-established stored procedures address space
that was created when you installed or migrated, as a model. (The default
name is ssnmWLM.)

Unless a particular environment or service class goes unused for a long time,
WLM creates on demand at least one address space for each combination of
WLM environment name and service class that is encountered in the
workload. For example, if there are five environment names that each have
six possible service classes, and all those combinations are in demand, it is
possible to have 30 stored procedure address spaces.

To prevent creating too many address spaces, create a relatively small
number of WLM environments and MVS service classes.

3. Use the WLM application environment panels to associate the environment
name with the JCL procedure. Figure 1 is an example of this panel. You can
also use the variable &IWMSSNM for the DB2SSN parameter
(DB2SSN=&IWMSSNM). This variable represents the name of the subsystem
for which you are starting this address space. This variable is useful for using
the same JCL procedure for multiple DB2 subsystems.

______________________________________________________________________________________________________________
| |
| Application-Environment Notes Options Help |
| ------------------------------------------------------------------------ |
| Create an Application Environment |
| Command ===> ___________________________________________________________ |

| |
| Application Environment Name . : WLMENV2 |
| Description . . . . . . . . . . Large Stored Proc Env. |
| Subsystem Type . . . . . . . . . DB2 |
| Procedure Name . . . . . . . . . WLMENV2 |
| Start Parameters . . . . . . . . DB2SSN=DB2A,NUMTCB=2,APPLENV=WLMENV2 |
| _______________________________________ |
| ___________________________________ |
| |
| Select one of the following options. |
| 1 1. Multiple server address spaces are allowed. |
| 2. Only 1 server address space per MVS system is allowed. |
| |
| |
|__________________________________________________________________________________|
Figure 1: WLM Panel to Create an Application Environment.
4. Update the WLM_ENV column of SYSIBM.SYSPROCEDURES to associate a
stored procedure with an application environment.

UPDATE SYSIBM.SYSPROCEDURES
SET WLM_ENV='WLMENV2'
WHERE PROCEDURE='BIGPROC';
5. Using the WLM install utility, install the WLM service definition that contains
information about this application environment into the couple data set.
6. Activate a WLM policy from the installed service definition.
7. Issue STOP PROCEDURE and START PROCEDURE for any stored procedures
that run in the ssnmSPAS address space. This allows those procedures to pick
up the application environment from the WLM_ENV column of
SYSIBM.SYSPROCEDURES.
8. Begin running stored procedures.

7.6.3.2 Accounting Trace


With a stored procedure, one SQL statement (the CALL) generates other SQL statements
under the same thread. The processing done by the stored procedure is included in
DB2's class 1 and class 2 times for accounting.

The accounting report on the server has several fields that specifically relate to
stored procedures processing, as shown in figure 2.

PRIMAUTH: USRT001 PLANNAME: IT8EC

AVERAGE APPL (CLASS 1) DB2 (CLASS 2) IFI (CLASS 5) CLASS 3 SUSP. AVERAGE TIME AV.EVENT
------------ -------------- -------------- -------------- -------------- ------------ --------
ELAPSED TIME 0.123676 0.053400 N/P LOCK/LATCH 0.000000 0.00
CPU TIME 0.012648 0.009332 N/P SYNCHRON. I/O 0.040742 1.00
TCB 0.004097 0.001719 N/P OTHER READ I/O 0.000000 0.00
TCB-STPROC A 0.008551 0.007613 N/A OTHER WRTE I/O 0.000000 0.00
CPU-PARALL 0.000000 0.000000 N/A SER.TASK SWTCH 0.000000 0.00
SUSPEND TIME N/A 0.040742 N/A ARC.LOG(QUIES) 0.000000 0.00
TCB N/A 0.040742 N/A ARC.LOG READ 0.000000 0.00
CPU-PARALL N/A 0.000000 N/A DRAIN LOCK 0.000000 0.00
NOT ACCOUNT. N/A 0.003327 N/A CLAIM RELEASE 0.000000 0.00
DB2 ENT/EXIT N/A 8.00 N/A PAGE LATCH 0.000000 0.00
EN/EX-STPROC N/A 36.00 N/A STORED PROC. B 0.000000 0.00
DCAPT.DESCR. N/A N/A N/P NOTIFY MSGS. 0.000000 0.00
LOG EXTRACT. N/A N/A N/P GLOBAL CONT. 0.000000 0.00
NOT NULL 1 1 0 TOTAL CLASS 3 0.040742 1.00

.
.
.

STORED PROCEDURES AVERAGE TOTAL
----------------- -------- --------
CALL STATEMENTS C 1.00 1
PROCEDURE ABENDS 0.00 0

CALL TIMEOUT D 0.00 0
CALL REJECT 0.00 0

.
.
.

Figure 2: Partial Long Accounting Report, Server - Stored Procedures

Descriptions of Fields:
 The number of calls to stored procedures is indicated in C.
 The part of the total CPU time that was spent satisfying stored procedures
requests is indicated in A.
 The amount of time spent waiting for a stored procedure to be scheduled is
indicated in B.
 The number of times a stored procedure timed out waiting to be scheduled is
shown in D.

7.6.3.3 What to Do for Excessive Timeouts or Wait Time


If you have excessive wait time (B) or timeouts (D), there are several possible
causes.

For stored procedures in a DB2-established address space, the causes for
excessive wait time include:
 Someone issued the DB2 command STOP PROCEDURE
ACTION(QUEUE) that caused requests to queue up for a long time and
time out.
 The stored procedures are hanging onto the ssnmSPAS TCBs for too
long. In this case, you need to find out why this is happening.
 If you are getting many DB2 lock suspensions, maybe you have too
many ssnmSPAS TCBs, causing them to encounter too many lock
conflicts with one another. Or, maybe you just need to make code
changes to your application. Or, you might need to change your
database design to reduce the number of lock suspensions.
 If the stored procedures are getting in and out quickly, maybe you
don't have enough ssnmSPAS TCBs to handle the workload. In this
case, increase the number on field NUMBER OF TCBS on installation
panel DSNTIPX.

For stored procedures in a WLM-established address space, the causes for
excessive wait time include:
 The priority of the service class that is running the stored procedure is
not high enough.
 You are running in compatibility mode, which means you might have
to manually start more address spaces.
 If you are using goal mode, make sure that the application
environment is available by using the MVS command DISPLAY
WLM,APPLENV=applenv. If the application environment is quiesced,
WLM does not start any address spaces for that environment; CALL
statements are queued or rejected.

7.6.4 Exercise
1. A DB2 subsystem has _______ address space for the distributed data facility.
2. It is recommended to use _______ for load libraries containing stored
procedures.

3. Workload manager routes work to stored procedure address spaces based on
the ____________ and ___________ associated with the stored procedure.

Answers:
1. ssnmDIST
2. Partitioned Data Set Extended (PDSEs)
3. environment name and service class

7.7 SP Runtime Environments

Stored Procedures can run in two types of environments: WLM (Workload
Manager)-established stored procedures address spaces and the DB2-established
stored procedure address space.

Appendix C summarizes the differences between stored procedures that run in WLM-
established stored procedures address spaces and those that run in DB2-established
stored procedure address space.

7.8 SP Builder

7.8.1 Overview of DB2 Stored Procedure Builder


DB2 Stored Procedure Builder assists you with creating a stored procedure that runs
on a database server. You must write the client application separately.

The database administrator (DBA) or developer who builds the stored procedure
must have the database privileges the stored procedure requires, but the users of
the client applications that call the stored procedure do not need such privileges.

Reduced development cost and increased reliability from reusing common
routines in multiple applications
In a database application environment, many situations are recurrent, such as
returning a fixed set of data, or performing the same set of multiple requests to a
database. Stored procedures provide a highly efficient way to address these
recurrent situations by reusing one common procedure.

Centralized security, administration, and maintenance for common routines
You can simplify security, administration, and maintenance by managing shared logic
in one place at the server, instead of managing multiple copies of the same business
logic on client workstations. Client applications can call stored procedures that run
SQL queries with little or no additional processing. By using stored procedures, you
gain centralized maintenance and authorization and can potentially encapsulate the
database tables.

DB2 Stored Procedure Builder is a graphical application that supports the rapid
development of DB2 stored procedures. Using Stored Procedure Builder, you can
perform the following tasks:
 Create stored procedures
 Build stored procedures on local and remote DB2 servers
 Modify and rebuild existing stored procedures
 Run stored procedures for testing and debugging the execution of installed
stored procedures

Tip: Using Stored Procedure Builder, you can also run stored procedures that you
created in another editor and that use a language other than Java or the SQL
procedure language. You might want to test the execution of existing installed stored
procedures.

Stored Procedure Builder provides a single development environment that supports
the entire DB2 family ranging from the workstation to OS/390. You can launch
Stored Procedure Builder as a separate application from the DB2 Universal Database
program group or from any of the following development applications:
 IBM VisualAge for Java Version 3.0 or later
 Microsoft Visual C++ Version 5 and Version 6
 Microsoft Visual Basic Version 5 and Version 6
 IBM DB2 Control Center

Stored Procedure Builder is implemented with Java and all database connections are
managed by using a Java Database Connectivity (JDBC) API. Using a JDBC driver,
you can connect to any DB2 database by using a local alias.

7.8.2 How has DB2 SP Builder changed the process of creating SPs?
DB2 Stored Procedure Builder provides an easy-to-use development environment for
creating, installing, and testing stored procedures. Using Stored Procedure Builder
allows you to focus on creating your stored procedure logic rather than the details of
registering, building, and installing stored procedures on a DB2 server. Additionally,
with Stored Procedure Builder, you can develop stored procedures on one operating
system and build them on other server operating systems.

7.8.2.1 Creating Stored Procedure Builder projects


Stored Procedure Builder manages your work by using projects. Each Stored
Procedure Builder project saves the following information:
 Your connections to specific databases.
 The filters you created to display subsets of the stored procedures on each
database. When opening a new or existing Stored Procedure Builder project,
you can filter stored procedures so that you view stored procedures based on
their name, schema, language, or collection ID (for OS/390 only).
 Stored procedure objects that have not been successfully built to a target
database.

You can identify Stored Procedure Builder project files by their SPP file extension.
The Project window shows all of the stored procedures on the DB2 database to which
you are connected.

Depending on their state and language, stored procedures have different icons:
 Java stored procedure
 SQL stored procedure
 Stored procedure created in a language other than Java or SQL
 Java stored procedure that has not been built to the database or that has
changed since it was last built to the database
 SQL stored procedure that has not been built to the database or that has
changed since it was last built to the database

7.8.2.2 Creating stored procedures


Stored Procedure Builder greatly simplifies the process of creating and installing
stored procedures on a DB2 database server. When you build a stored procedure on
a target database, you no longer have to manually register the stored procedure with
DB2 by using the CREATE PROCEDURE statement.
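
For comparison, manual registration (on DB2 servers that support it; on DB2 for
OS/390 this arrived with Version 6) would involve a statement along these lines.
All names here are illustrative, and the exact clause names vary by DB2 version
and platform:

CREATE PROCEDURE SYSPROC.BIGPROC
    (IN DEPTNO CHAR(3), OUT STATUS INTEGER)
    LANGUAGE COBOL
    EXTERNAL NAME BIGPROC
    PARAMETER STYLE GENERAL
    RESULT SETS 1
    WLM ENVIRONMENT WLMENV2;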

With Stored Procedure Builder, you create stored procedures in Java and the SQL
procedure language. Creating stored procedures in Java and the SQL procedure
language produces stored procedures that are highly portable among operating
systems.

Using a stored procedure wizard and SQL Assistant facilitates the development of
stored procedures. You launch a stored procedure wizard from the Project window by
selecting the Stored Procedures folder; then, use the Selected menu to create a new
stored procedure. You can use a stored procedure wizard to create your basic SQL
structure; then, you can use the code editor to add highly sophisticated stored
procedure logic.

You can also create stored procedures without using a stored procedure wizard. You
can create stored procedures by using a template that creates the stored procedure
structure but does not include any SQL statements. After generating the basic stored
procedure structure, you can use the code editor to further modify the generated
code to add logic and SQL statements.

When creating a stored procedure, you can choose to return a single result set,
multiple result sets, or output parameters only. You might choose not to return a
result set or output parameters when your stored procedure creates or updates
database tables.

You can use a stored procedure wizard to define input and output parameters for a
stored procedure so that it receives values for host variables from the client
application. Additionally, you can create multiple SQL statements in a stored
procedure; the stored procedure receives a case expression and selects one of a
number of queries.

7.8.2.3 Editing stored procedures


Stored Procedure Builder includes a fully functional code editor that is language
sensitive depending on whether the stored procedure was written in Java or the SQL
procedure language. Once you have created a stored procedure, you can easily
modify it by opening the source code in the Stored Procedure Builder code editor.
When you open a stored procedure, the source code is displayed in the editor pane
on the right side.

Using the code editor, you can add parameters, result sets, and additional stored
procedure logic. By modifying a Java stored procedure, you can add methods to the
code to include sophisticated stored procedure logic. For Java stored procedures, you
can add logic to the method that is called when the stored procedure starts. In Java
stored procedures and SQL stored procedures, you can add calls to other stored
procedures and run other SQL statements.

See the following topics for detailed information about how you can use the code
editor in Stored Procedure Builder:
 Modifying existing Java stored procedure source code
 Modifying existing SQL stored procedure source code
 Editing generated code

7.8.2.4 Working with existing stored procedures


After you successfully build a stored procedure on a database server, you are ready
to rebuild, run, and test the procedure. Running a stored procedure within Stored
Procedure Builder allows you to ensure that the stored procedure is correctly
installed and that the stored procedure logic is working as expected.

When you run a stored procedure, depending on how you set it up, it can return
result sets and output parameters based on test input parameter values that you
enter. When you run a stored procedure that has input parameters, you are
prompted for any required parameters.

7.8.3 Exercise
1. Stored Procedure could be run in ___ types of environments.
2. Can we use SP Builder to run SPs?

Answers:
1. Two
2. Yes

7.9 Performance Considerations

7.9.1 Reentrant Code


SPs need to be coded as reentrant for the following reasons:
1. They need not be loaded each time they are called.
2. Several threads can share a single copy of the SP, thus requiring less virtual
storage.

You should use the “stay resident” option with reentrant programs. For any program
that is not reentrant, you must turn the “stay resident” option off.

7.9.2 Fenced and Non-fenced Procedures


A fenced SP executes in a separate process from the database agent. Non-fenced
SPs execute in the same process as the database agent and can increase application
performance because less overhead is needed for communication between the
application and the DB2 coordinating agent. However, non-fenced SPs can overwrite
DB2 control blocks. A non-fenced SP is one that is considered safe to run in the
process or address space of the database manager’s operating environment.

On DB2 for OS/390, SPs are fenced. On other platforms they can be fenced or non-
fenced.

7.9.3 Limiting Resources Used


SPs are designed for high volume online transactions. You can limit resources used
by SPs by setting a processor limit for each SP. You can do this by updating the
ASUTIME column in SYSIBM.SYSROUTINES (as of version 6) to allow DB2 to cancel
SPs that are in a loop.
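
A sketch of such an update; the procedure name and the limit (in CPU service
units) are illustrative:

UPDATE SYSIBM.SYSROUTINES
   SET ASUTIME = 50000
 WHERE SCHEMA = 'SYSPROC'
   AND NAME   = 'BIGPROC';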

The routine that checks for ASUTIME overage runs once a minute. Thus, it does not
provide very strict control over how much CPU time an SP can use. We can use the
priorities and service goals of WLM to gain tighter control.

You can also specify a limit for the number of times SPs can abnormally terminate. On
installation panel DSNTIPX, a field called MAX ABEND COUNT is used for this. This
helps prevent overloading the system with ABEND dumps.

7.9.4 Workload Manager


A simple performance guideline is to use WLM in goal mode if you use SPs.

Advantages of using a WLM address space are as follows:

1. WLM established address space provides multiple isolated environments for
SPs so that failures do not affect other SPs.
2. A WLM address space reduces demand for storage below the 16MB line,
removing one limitation on the number of SPs that can run concurrently.
3. These address spaces inherit the dispatching priority of the DB2 thread that
issues the CALL statement. This allows high-priority work to have its SPs
executed ahead of low-priority work and its SPs.
Compared to the DB2-established address space, this is a big advantage, as
you cannot prioritize SPs there. Also, you are limited by storage availability.
4. The real benefits to SPs using WLM are static priority assignment and
dynamic workload balancing.
5. SPs designated a high priority to WLM achieve very consistent response
times.
6. WLM provides dynamic workload balancing and distribution by routing
incoming requests to the SP address spaces with the least workload, or by
starting new address spaces if required. This is fully automatic and does not
require monitoring and tuning.

7.9.5 CICS EXCI


CICS EXCI is an interface that allows an OS/390 application program executing in an
OS/390 address space to link to a CICS program running in a CICS address space. DB2
SPs run in an OS/390 address space. By using EXCI commands, you can call a CICS
transaction from non-CICS client programs. These clients could reside in an OS/390
address space or on other platforms.

Using EXCI gives the CICS program the ability both to run as an online transaction in
CICS and to be called via an SP by a client application. So when it comes to
existing system integration, SPs come in really handy.

7.9.6 Exercise
1. For any program that is not reentrant, you must turn off the
_____ column in the SYSIBM.SYSPROCEDURES table.
2. On DB2 for OS/390, SPs are fenced. True/False?
3. _____ column in SYSIBM.SYSROUTINES is set to allow DB2 to cancel SPs that
are in a loop.
4. For better performance, guideline is to use WLM in ____ mode if you use SPs.

5. DB2 SPs run in _____ address space.

Answers:
1. STAYRESIDENT
2. True
3. ASUTIME
4. Goal
5. OS/390

7.10 DB2-Delivered SPs

Some additional SPs that come with DB2 are not documented. They were included
starting with Version 5. It’s possible that some are no longer included. Here we
would just introduce you to these SPs.

7.10.1 DSNWZP
It returns all values of the normal and hidden DSNZPARMs and the Version 6
DSNHDECP values.

7.10.2 DSNWSPM
Retrieves many performance statistics, like SQL CPU time.

7.10.3 DSNACCMG
Formats the SQLCA, as DSNTIAR did.

7.10.4 DSNACCAV
Gives partition information for table spaces and index spaces to show which need the
REORG, RUNSTATS, or image copy utilities executed.

7.10.5 DSNUTILS
Executes DB2 utilities from any process that can issue an SQL CALL statement.

7.11 DOs and DON’Ts

7.11.1 Do’s
 Defining the Stored Procedure properly in the SYSIBM.SYSPROCEDURES table
should always be done at the beginning by the DBA.
 Check whether the user compiling the Stored Procedure requires any special
authorities to execute this operation.
 Use the clause ‘WITH RETURN FOR’ while declaring the cursor within the Stored
Procedure if it uses the Result Set concept.
 Do not initialize the Input parameters within the Stored Procedure.
 If the Stored Procedure has been defined as a Main Program in the
SYSIBM.SYSPROCEDURES table, do not use the GOBACK statement in the
Stored Procedure. This is mentioned in the Manual. However, from personal
experience I found that it does not make a significant difference.
 The number of input and output parameters being used by the Stored
Procedure should tally with the values defined in the
SYSIBM.SYSPROCEDURES table. In case sub-variables are being used, they
must be grouped together using the REDEFINES clause.

 Always validate the input parameters before using them in SQL statements
in the Stored Procedure, and also check whether any of the input
parameters are null.
 The Result Set concept should be used in the Stored Procedure if the
number of rows that can be returned is large, or if one is not sure of the
maximum number of rows that might be returned.
 Always pass appropriate error messages back to the calling program in case
of exceptional conditions.
 Using a copybook to pass parameters makes things a lot easier.
 Once the Stored Procedure executes successfully, make sure that the Load
Module gets updated into the 'TEST.SP.LOADLIB' library every time it is
compiled.
 Always check for SQLCODE = +466 on the call to the Stored Procedure to
determine whether the call has been successful (see the sketch after this
list).
 Any changes to the columns in the SYSIBM.SYSPROCEDURES table for the
Stored Procedure should call for corresponding changes in both the Stored
Procedure and the calling program.
 Use the COMMIT ON RETURN option for SPs to reduce network traffic,
contention, etc. However, note that this option is not useful for nested SPs
and is ignored there if specified.
 Set the DESCSTAT parm of DSNZPARM on the host DB2 to YES if you want
column names to be returned from an SP. For dynamic SELECT statements,
column names are returned by default.
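
To make the result-set points above concrete, here is a minimal COBOL sketch
of a calling program that checks for SQLCODE = +466 and then picks up the
result set. The procedure name BKSPPOS1 and the parameter areas are taken
from the sample in Appendix H; the cursor name and FETCH host variables are
illustrative assumptions.

       01  WS-RS-LOC  SQL TYPE IS RESULT-SET-LOCATOR VARYING.

           EXEC SQL
               CALL BKSPPOS1 (:BKSPPOS1-INPUT-AREA,
                              :BKSPPOS1-OUTPUT-AREA)
           END-EXEC

      *    +466 means the call worked and result sets were returned
           IF SQLCODE = +466
               EXEC SQL
                   ASSOCIATE LOCATORS (:WS-RS-LOC)
                       WITH PROCEDURE BKSPPOS1
               END-EXEC
               EXEC SQL
                   ALLOCATE RS_CSR CURSOR FOR RESULT SET :WS-RS-LOC
               END-EXEC
      *        Fetch until +100 (end of data) or an error code
               PERFORM WITH TEST AFTER
                       UNTIL SQLCODE NOT = 0
                   EXEC SQL
                       FETCH RS_CSR
                        INTO :WS-SUBACCT-CD,
                             :WS-SETTLE-DATE-QTY,
                             :WS-TRADE-DATE-QTY
                   END-EXEC
               END-PERFORM
           END-IF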

7.11.2 DON'Ts
 Do not call any other program from the Stored Procedure if the version of
DB2 is 5. This is possible in version 6.
 Do not put any DISPLAY statements in the Stored Procedure. All error
messages should be displayed by the calling program as per the status code
passed back by the Stored Procedure.
 Do not issue any CICS calls from within the Stored Procedure.
 Do not issue any 'COMMIT' statements within the Stored Procedure.
 Do not create a Stored Procedure that does not contain any SQL statements.
 Do not close the cursor within the Stored Procedure if it uses the Result Set
concept, as this will make the Result Set unavailable to the calling program.
 It is better not to include business logic within the Stored Procedure.
 Be careful in the use of nested SPs. Nested SPs are ones where an SP can
invoke User Defined Functions (UDFs) and subsequently call other SPs (local
or remote), any of which can invoke triggers. If not planned and
implemented properly, there can be looping issues, unnecessary table
updates resulting in unnecessary trigger invocations, etc.

7.12 Review Questions

1. What would be the considerations in deciding whether an SP should use the
result-set concept or not?
2. What are the advantages of a WLM-controlled environment over a
DB2-established one?
3. In which situations would you recommend use of the CICS EXCI interface?
4. Why are we not supposed to issue any CICS calls from within a Stored
Procedure?
5. What precautions should one take while creating nested SPs?

7.13 Reference

 Redbook on SP Builder
 http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DSNAG0F4/CCONTENTS
 DB2 High Performance Design and Tuning by Richard Yevich and Susan
Lawson

Glossary

Active Log - The portion of the DB2 log to which log records are written as they are
generated. The active log always contains the most recent log records, whereas the
archive log holds those records that are older and no longer will fit on the active log.

Address Space - A range of virtual storage pages identified by a number (ASID)
and a collection of segment and page tables, which map the virtual pages to
real pages of the computer's memory.

Caching Dynamic SQL statements - DB2 can save prepared dynamic statements
in a cache. The cache is a DB2-wide cache in the EDM pool.

Checkpoint - A point at which DB2 records internal status information on the DB2
log that would be used in the recovery process if DB2 should abend.

Check Pending - A state of a table space or partition that prevents its use by some
utilities and some SQL statements, because it can contain rows that violate
referential constraints, table check constraints, or both.

CICS EXCI - An interface that allows an OS/390 application program executing in
an OS/390 address space to link to a CICS program running in a CICS address
space.

Claims – To register to DB2 that an object is being accessed. This registration is also
called a claim. A claim is used to ensure that an object cannot be drained until a
commit is reached. Contrast with drain

Commit - The operation that ends a unit of work by releasing locks so that the
database changes made by that unit of work can be perceived by other processes.

Commit Point - A point in time when data is considered consistent.

Concurrency – The shared use of resources by more than one application process
at the same time

Control Interval (CI) - A fixed-length area of direct access storage in which VSAM
stores records and creates distributed free space. Also, in a key-sequenced data set
or file, the set of records pointed to by an entry in the sequence-set index record.
The control interval is the unit of information that VSAM transmits to or from direct
access storage. A control interval always includes an integral number of physical
records.

Database Descriptor (DBD) - An internal representation of a DB2 database
definition, which reflects the data definition found in the DB2 catalog. The
objects defined in a database descriptor are table spaces, tables, indexes,
index spaces, and relationships.

Deadlock – Unresolvable contention for the use of a resource such as a table or
an index.

Deferred embedded SQL - Like static statements, deferred embedded SQL
statements are embedded within applications, but like dynamic statements, they
are prepared at run time. Deferred embedded SQL statements are used for DB2
private protocol access to remote data.

Drain – To acquire a locked resource by quiescing access to that object.

Drain Lock - A lock on a claim class, which prevents a claim from occurring.

Dynamic SQL - SQL that is characterized by its capability to change columns,
tables, and predicates during a program's execution.

Dynamic SQL through ODBC functions - When the application contains ODBC
function calls that pass dynamic SQL statements as arguments.

EDM pool - A pool of main storage used for database descriptors and application
plans.

Embedded Dynamic SQL - The application puts the SQL source in host variables
and includes PREPARE and EXECUTE statements that tell DB2 to prepare and run the
contents of those host variables at run time.

Fenced Stored Procedure – A Stored Procedure that executes in a separate
process from that of the database agent.

Fixed-list SELECT statements - The user must know the number of columns and
the data types of those columns.

Forward Log Recovery - The third phase of restart processing during which DB2
processes the log in a forward direction to apply all REDO log records.

Interactive SQL - A user enters SQL statements through SPUFI.

Internal Resource Lock Manager (IRLM) - An MVS subsystem used by DB2 to
control communication and database locking.

Latch – A DB2 internal mechanism for controlling concurrent events or the use of
system resources

Lock - A means of controlling concurrent events or access to data. DB2 locking
is performed by the IRLM.

Lock Conversion – The process of changing the size or mode of a DB2 lock to a
higher level

Lock Duration - The interval over which a DB2 lock is held

Lock Escalation - The change in the size of a lock from a row or page lock to a
table space lock because the number of page locks concurrently held on a given
resource exceeds a preset limit

Lock Mode – A representation for the type of access concurrently running programs
can have to a resource held by a DB2 lock.

Lock Promotion – The process of changing the size or mode of a DB2 lock to a
higher level

Locking - The process by which the integrity of data is ensured. Locking prevents
concurrent users from accessing inconsistent data

Locksize - The amount of data controlled by a DB2 lock on table data; the value can
be a row, a page, a table, or a table space.

Logical Unit - An access point through which an application program accesses the
SNA network in order to communicate with another application program.

Log Initialization - The first phase of restart processing during which DB2 attempts to
locate the current end of the log.

Log Record Sequence Number (LRSN) - A number DB2 generates and associates
with each log record. DB2 also uses the LRSN for page versioning. The LRSNs
generated by a given DB2 data sharing group form a strictly increasing sequence for
each DB2 log and a strictly increasing sequence for each page across the DB2 group

Log Truncation - A process by which an explicit starting RBA is established.
This RBA is the point at which the next byte of log data will be written.

Non-Partitioned Index (NPI) - When used with partitioned table spaces, a non-
partitioning index is any index on a partitioned table space that is not the partitioning
index.

Predictive Governing Limit - If the statement exceeds a predictive governing
limit, it receives a warning or error SQL code.

Reactive Governing Limit - If the statement exceeds a reactive governing limit,
the statement receives an error SQL code.

Resource Limit Facility (or Governor) - It limits the amount of CPU time an SQL
statement can take, which prevents SQL statements from making excessive
requests.

Result set - The set of rows returned to the calling program by the Stored
Procedure.

Sequential Prefetch - A mechanism that triggers consecutive asynchronous I/O
operations. Pages are fetched before they are required, and several pages are
read with a single I/O operation.

SQL Descriptor Area (SQLDA) - A structure used to communicate with your
program; storage for it is usually allocated dynamically at run time.

Static SQL - SQL that is hard coded; only the values of host variables in
predicates can change.

Storage Group - A DB2 storage group is a set of volumes on direct access storage
devices (DASD).

Sysplex - A set of MVS systems that communicate and cooperate with each other
through certain multisystem hardware components and software services to process
customer workloads.

Table Space - A table space is one or more data sets in which one or more tables
are stored.

Task Control Block (TCB) - A control block used to communicate information about
tasks within an address space that are connected to DB2. An address space can
support many task connections (as many as one per task), but only one address
space connection.

Timeout – Abnormal termination of either the DB2 subsystem or of an application
because of the unavailability of resources.

Varying-list SELECT statements - A varying-list SELECT statement returns rows
containing an unknown number of values of unknown type.

Appendix A - List of DSNZPARMs

Sl No  Macro parameter  Macro name  Panel  Acceptable values  Description
1 ABEXP DSN6SPRM DSNTIPO YES,NO Explain processing
2 ABIND DSN6SPRM DSNTIPO YES,NO,COEXIST Invalid plan or
package
automatically
rebound
3 ALCUNIT DSN6ARVP DSNTIPA BLK,TRK,CYL Allocation units
4 ALL/dbname DSN6SPRM DSNTIPS ALL, Space names Start names
5 ARCPFX1 DSN6ARVP DSNTIPH DSNCAT.ARCHLOG1, Archive log1 prefix
DSNCAT.DSN1.ARCLG1,
or valid dataset
name prefix
6 ARCPFX2 DSN6ARVP DSNTIPH DSNCAT.ARCHLOG2, Archive log2 prefix
DSNCAT.DSN1.ARCLG2,
or valid dataset
name prefix
7 ARCRETN DSN6ARVP DSNTIPA 0 thru 9999 Retention period
8 ARCWRTC DSN6ARVP DSNTIPA Specify from 1 to 14 WTOR route code
route codes with
values 1 to 16;
default 1,3,4
9 ARCWTOR DSN6ARVP DSNTIPA NO, YES Write to operator
10 ARC2FRST DSN6LOGP DSNTIPO NO, YES Read COPY2
archives first for
subsystem restart
11 ASSIST DSN6GRP DSNTIPK NO, YES This DB2 can assist
a parallelism
coordinator
12 AUDITST DSN6SYSP DSNTIPN NO, YES (Class 1), Audit trace
list of audit classes
(1-9), or "*"
13 AUTHCACH DSN6SPRM DSNTIPP 0 to 4096 in Plan authorization
multiples of 256; cache
default is 1024
14 BACKODUR DSN6SYSP DSNTIPN 0-255; default is 5 Multiplier indicating
how much log to
process when
LBACKOUT=YES or AUTO
15 BINDNV DSN6SPRM DSNTIPP BINDADD, BIND Authority to bind
new package
16 BLKSIZE DSN6ARVP DSNTIPA 8192 to 28672 Block size of the
archive log dataset
17 BMPTOUT DSN6SPRM DSNTIPI 1-254; default is 4 Number of resource
timeouts an IMS BMP
connection waits for

lock release
18 CACHEDYN DSN6SPRM DSNTIP4 NO, YES Save dynamic
prepared SQL
19 CACHEPAC DSN6SPRM DSNTIPP 0-2M; default is Storage allocation
32K for caching package
authorizations
20 CACHERAC DSN6SPRM DSNTIPP 0-2M; default is Storage allocation
32K for caching routine
authorizations
21 CATALOG DSN6ARVP DSNTIPA NO, YES Catalog data
22 CATALOG DSN6SPRM DSNTIPA2 DSNCAT or ICF user Catalog alias;
catalog name of 1-8 cannot be updated
char
23 CDSSRDEF DSN6SPRM DSNTIP4 1, ANY Default value for the
CURRENT DEGREE
special register
24 CHGDC DSN6SPRM DSNTIPO 1,2,3 DPROP support
25 CMTSTAT DSN6FAC DSNTIPR ACTIVE, INACTIVE DDF threads
26 COMPACT DSN6ARVP DSNTIPA NO, YES Compact data
27 CONDBAT DSN6SYSP DSNTIPE 0-25000; default is Max remote
64 connected
28 CONTSTOR DSN6SPRM DSNTIPE NO, YES Contract threads
working storage area
29 COORDNTR DSN6GRP DSNTIPK NO, YES Coordinate parallel
processing
30 CTHREAD DSN6SYSP DSNTIPE Acceptable values Max users
are 1-2000;
default is 30
31 DBPROTCL DSN6SYSP DSNTIP5 DRDA, PRIVATE Default protocol
32 DDF DSN6FAC DSNTIPR NO, AUTO, DB2 start up option
COMMAND
33 DEALLCT DSN6LOGP DSNTIPA 0-1439 (min), Dealloc period
0-59 (sec), or
NOLIMIT; default is
0
34 DECDIV3 DSN6SPRM DSNTIPF NO, YES Min divide scale
35 DEFIXTP DSN6SPRM DSNTIPE 1,2 Default index type
36 DEFLTID DSN6SPRM DSNTIPP IBMUSER or auth id Unknown AUTHID
if RACF is not
available
37 DESCSTAT DSN6SPRM DSNTIPF NO, YES Build a describe
SQLDA when
binding static SQL
38 DSHARE DSN6GRP DSNTIPA1 NO, YES, or blank Data sharing
for upgrade
39 DSMAX DSN6SPRM DSNTIPC 1-32767 Max available open
datasets
40 DLDFREQ DSN6SYSP None 0-32767; default is Check points
5 between level ID
updates
41 DLITOUT DSN6SPRM DSNTIPI 1-254; default is 6 Number of resource
timeouts a

DL/1 batch
connection waits for
lock release
42 DSSTIME DSN6SYSP DSNTIPN 1-1440; default is 5 Time interval in
minutes for
restarting dataset
stats for online
performance
monitors
43 EDMDSPAC DSN6SPRM DSNTIPC 1-2097152; default Size in kilobytes of
is 5 the dataspace for
the environment
description manager
pool
44 EDMPOOL DSN6SPRM DSNTIPC 1K-2097152K Environmental
description manager
45 EDPROP DSN6SPRM DSNTIPO 1,2,3 DPROP support
46 EXTRAREQ DSN6SYSP DSNTIPS 0-100; default is Upper limit of extra
100 DRDA query blocks
from remote server
47 EXTRASRV DSN6SYSP DSNTIPS 0-100; default is Upper limit of DRDA
100 query blocks to a
DRDA client
48 EXTSEC DSN6SYSP DSNTIPR NO, YES Control security
options
49 GRPNAME DSN6GRP DSNTIPK 1-8 char, DSNCAT DB2 group name
50 HOPAUTH DSN6SPRM DSNTIP5 BOTH, RUNNER Package owner from
non DB2 server
51 IDBACK DSN6SYSP DSNTIPE Acceptable values Max batch connect
are 1-2000; default
is 20
52 IDFORE DSN6SYSP DSNTIPE Acceptable values Max TSO connect
are 1-2000; default
is 40
53 IDTHTOIN DSN6FAC DSNTIPR 0-9999 Idle thread time out
54 IDXBPOOL DSN6SYSP DSNTIP1 Any 4K BP name; Default BP for user
default is BP0 indexes
55 INBUFF DSN6LOGP DSNTIPL 28K to 60K Input buffer size
56 IRLMAUT DSN6SPRM DSNTIPI YES, NO Auto start for IRLM
57 IRLMPRC DSN6SPRM DSNTIPI IRLMPROC or the Proc name
name of the IRLM
proc that MVS
invokes if IRLM
auto start = YES
58 IRLMRWT DSN6SPRM DSNTIPI 1-3600; default is Resource time out
60
59 IRLMSID DSN6SPRM DSNTIPI IRLM or name of the Subsystem name
IRLM subsystem
60 IRLMSWT DSN6SPRM DSNTIPI Acceptable values Time to autostart
are 1-3600; default
is 300
61 LBACKOUT DSN6SYSP DSNTIPN AUTO, YES, NO Backward log
processing
postponed

62 LEMAX DSN6SPRM DSNTIP7 0-50; default is 20 Max number of LE
tokens active at one
time
63 LOBVALA DSN6SYSP DSNTIP7 1-2097152; default Upper limit of
is 2048 storage for LOB
values per user
64 LOBVALS DSN6SYSP DSNTIP7 1-51200; default is Upper limit of
2048 storage for LOB
values per system
65 LOGAPSTG DSN6SYSP DSNTIPL 0-100M; default is Maximum
0M ssnmDBM1 storage
for fast log apply
66 LOGLOAD DSN6SYSP DSNTIPN Acceptable values Check point
are 200-160000000; frequency
default is 5000
67 MAXARCH DSN6LOGP DSNTIPA Acceptable values Recording max in
are 10 to 1000; BSDS
default is 1000
68 MAXDBAT DSN6SYSP DSNTIPE 0-1999; default is Max remote active
64
69 MAXKEEPD DSN6SPRM DSNTIPE 0-65535; default is Total number of
500 prepared SQL
statements to save
past commit
70 MAXRBLK DSN6SPRM DSNTIPC 0, 16K-1000000K; RID pool size
default is 4MB
71 MAXRTU DSN6LOGP DSNTIPA 1-99; default is 2 Read tape units
72 MAXTYPE1 DSN6FAC DSNTIPR 0-MAX remote Number of type 1
connected inactive threads
allowed
73 MEMBNAME DSN6GRP DSNTIPK 1-8 char, DSN1 DB2 member name
74 MON DSN6SYSP DSNTIPN NO, YES (Class 1), Monitor trace
list of classes
75 MONSIZE DSN6SYSP DSNTIPN 8K to 1M Monitor size
76 NUMLKTS DSN6SPRM DSNTIPJ 0-5000; default is Locks per table or
1000 tablespace
77 NUMLKUS DSN6SPRM DSNTIPJ 0-100000; default Locks per user
is 10000
78 OPTHINTS DSN6SPRM DSNTIP4 NO, YES Allows info to be
passed to the
optimizer that may
influence the access
path
79 OUTBUFF DSN6LOGP DSNTIPL 40K-4000K; default Output buffer
is 400K
80 PCLOSEN DSN6SYSP DSNTIPN 1-32767; default is Number of
5 consecutive DB2
check points since a
dataset or partition
was last updated
81 PCLOSET DSN6SYSP DSNTIPN 1-32767; default is Amt of elapsed time
10 in minutes since a

dataset or a partition
was last updated
82 POOLINAC DSN6FAC DSNTIP5 0-9999; default is Time in seconds a
120 database access
thread (DBAT)
remains idle in pool
before terminated
83 PRIQTY DSN6ARVP DSNTIPA Blank (clist Primary allocation
calculates using
blksize and log size)
or 1-9999999 units
84 PROTECT DSN6ARVP DSNTIPP NO, YES Archive log RACF
85 PTASKROL DSN6SYSP ------ YES, NO Roll up query
parallel task
accounting trace
records into
originating task's
accounting records
86 QUIESCE DSN6ARVP DSNTIPA 0-999; default is 5 Max quiesce time
87 RECALL DSN6SPRM DSNTIPO YES, NO Recall database
88 RECALLD DSN6SPRM DSNTIPO 0-32767 Recall delay
89 RELCURHL DSN6SPRM DSNTIP4 YES, NO At commit, release
data page or row
lock for cursor with
hold
90 RESTART/DEFER DSN6SPRM DSNTIPS Restart/defer Restart or defer
processing objects
91 RESYNC DSN6FAC DSNTIPR 1, 2-99 Resync interval
92 RETLWAIT DSN6SPRM DSNTIPI 0-254; default is 0 Data sharing only;
how long trans waits
for an incompatible
retained lock
93 RETVLCFK DSN6SPRM DSNTIP4 NO, YES Varchar retrieved
from index
94 RGFCOLID DSN6SPRM DSNTIPZ 1-8 chars; default Owner of
is DSNRGCOL registration tables
95 RGFDBNAM DSN6SPRM DSNTIPZ 1-8 chars; default Name of the dbase
is DSNRGFDB that contains
registration tables
96 RGFDEDPL DSN6SPRM DSNTIPZ NO, YES DB2 subsystem
controlled by a
closed application
thru registration
97 RGFDEFTL DSN6SPRM DSNTIPZ ACCEPT, REJECT, Action for
APPL unregistered DDL
98 RGFESCP DSN6SPRM DSNTIPZ Any non- ART/ORT escape
alphanumeric char char
99 RGFFULLQ DSN6SPRM DSNTIPZ YES, NO Registered objects
require fully
qualified names
100 RGFINSTL DSN6SPRM DSNTIPZ NO, YES Install data
definition control

support
101 RGFNMORT DSN6SPRM DSNTIPZ 1-17 char; default Name of the object
is DSN_REGISTER_OBJT registration table
102 RGFNMPRT DSN6SPRM DSNTIPZ 1-17 char; default Name of the
is DSN_REGISTER_APPL application
registration table
103 RLF DSN6SYSP DSNTIPO NO, YES RLF auto start
104 RLFAUTH DSN6SYSP DSNTIPP SYSIBM or the auth Resource AUTHID
id used if RLF is
automatically
started
105 RLFERR DSN6SYSP DSNTIPO NOLIMIT, NORUN, RLST access error
1-50000
106 RLFERRD DSN6FAC DSNTIPR NOLIMIT, NORUN, RLST access error
1-50000
107 RLFTBL DSN6SYSP DSNTIPO 01 or any 2 RLST name suffix
alphanumeric chars
108 ROUTCDE DSN6SYSP DSNTIPI 1, or 1-14 route WTO route codes
codes with values
from 1 to 16
109 RRULOCK DSN6SPRM DSNTIPI NO, YES Use page level U
(Update) locks
when using RR
isolation
110 SECQTY DSN6ARVP DSNTIPA Blank (clist Secondary space
calculates using allocation
blksize and logsize)
or 1-9999999 units
111 SEQCACH DSN6SPRM DSNTIPE BYPASS, SEQ Use 3990-3 seq
cache
112 SEQPRES DSN6SPRM DSNTIPE NO, YES For utility scanning
non partition indexes
followed by update,
to keep data in 3990
cache longer when
reading data
113 SITETYP DSN6SPRM DSNTIPO LOCALSITE or Location of current
RECOVERYSITE system
114 SMFACCT DSN6SYSP DSNTIPN 1 or NO, YES SMF accounting
(Class 1), list of
classes (1-5, 7, 8),
or *
115 SMFSTAT DSN6SYSP DSNTIPN YES, NO SMF stats
116 SRTPOOL DSN6SPRM DSNTIPC 240K-64000K; Sort pool size
default is 1MB
117 STATIME DSN6SYSP DSNTIPN 1-1440 min; default Stats time
is 30
118 STORMXAB DSN6SYSP DSNTIPX 0-255; default is 0 Maximum abend
count
119 STORPROC DSN6SYSP DSNTIPX 1-8 chars; default MVS procedure
is ssnmSPAS name

120 STORTIME DSN6SYSP DSNTIPX 5-1800; default is Timeout value
180
121 SYSADM DSN6SPRM DSNTIPP SYSADM auth id 1 System admin 1
122 SYSADM2 DSN6SPRM DSNTIPP SYSADM auth id 2 System admin 2
123 SYSOPR1 DSN6SPRM DSNTIPP SYSOPR id 1 System operator 1
124 SYSOPR2 DSN6SPRM DSNTIPP SYSOPR id 2 System operator 2
125 TBSBPOOL DSN6SYSP DSNTIP1 Any 4K buffer pool Default buffer pool
name; default is BP0 for user data
126 TCPALVER DSN6FAC DSNTIP5 NO, YES User id only TCP/IP
connections are
accepted
127 TCPKPALV DSN6FAC DSNTIP5 ENABLE, DISABLE; Overrides
default is ENABLE inappropriate TCP/IP
keep alive values
128 TRACSTR DSN6SYSP DSNTIPN NO, YES Trace auto start
129 TRACTBL DSN6SYSP DSNTIPN 4K-396K; default is Trace size
64K
130 TRKRSITE DSN6SPRM DSNTIPO NO, YES Indicates remote
tracker site in case
of disaster
131 TSTAMP DSN6ARVP DSNTIPH NO, YES, EXT Time stamp archives
132 TWOACTV DSN6LOGP DSNTIPH 2 or 1 Number of copies of
active log
133 TWOARCH DSN6LOGP DSNTIPH 2 or 1 Number of copies of
archive log
134 TWOBSDS DSN6LOGP None YES, NO Number of BSDS
135 UNIT DSN6ARVP DSNTIPA Tape or any other Device type
device name
136 UNIT2 DSN6ARVP DSNTIPA Device type or unit Device type 2
name
137 URCHKTH DSN6SYSP DSNTIPN 0-255; default is 0 Number of check
point cycles before
warning of
uncommitted unit of
recovery
138 UTIMOUT DSN6SPRM DSNTIPI 1-254; default is 6 Utility timeout
139 WLMENV DSN6SYSP DSNTIPX 1-18 char, any Default
valid name WLM_ENVIRONMENT
for user defined
function and stored
procedure during
create
140 WRTHRSH DSN6LOGP DSNTIPL 1-256; default is Write threshold
20
141 XLKUPDLT DSN6SPRM DSNTIPI NO, YES Locking method for
searched UPDATE or
DELETE

Appendix B - Package BIND Parameters

ISOLATION (CS)
VALIDATE (BIND)
ACTION (REPLACE)
SQLERROR (NOPACKAGE)
FLAG (I)
ACQUIRE (USE)
RELEASE (COMMIT)
DEGREE (ANY)
CURRENTDATA (NO)
EXPLAIN (YES)

Appendix C - Lock Compatibility Matrix

Tablespace and Table Lock Compatibility Matrix

       S   U   X   IS  IX  SIX
S      Y   Y   N   Y   N   N
U      Y   N   N   Y   N   N
X      N   N   N   N   N   N
IS     Y   Y   N   Y   Y   Y
IX     N   N   N   Y   Y   N
SIX    N   N   N   Y   N   N

Page and Row Lock Compatibility Matrix

       S   U   X
S      Y   Y   N
U      Y   N   N
X      N   N   N

Appendix D - Claims and Drains Compatibility Matrices

Claim/Drain Compatibility Matrix

Existing Claim       Drain required by PGM1
for PGM2             Write    RR     CS
Write                No       No     No
RR                   Yes      No     No
CS                   Yes      No     No

Drain/Drain Compatibility Matrix

Existing Drain       Drain required by PGM1
for PGM2             Write    RR     CS
Write                Yes      No     No
RR                   No       No     No
CS                   No       No     No

Appendix E - DB2 Address Space IDs and Associated RACF User IDs and Group Names
_______________________ ________________________ _______________________
| Address Space | RACF User ID | RACF Group Name |
|_______________________|________________________|_______________________|
| DSNMSTR | SYSDSP | DB2SYS |
|_______________________|________________________|_______________________|
| DSNDBM1 | SYSDSP | DB2SYS |
|_______________________|________________________|_______________________|
| DSNDIST | SYSDSP | DB2SYS |
|_______________________|________________________|_______________________|
| DSNSPAS | SYSDSP | DB2SYS |
|_______________________|________________________|_______________________|
| DSNWLM | SYSDSP | DB2SYS |
|_______________________|________________________|_______________________|
| DB2TMSTR | SYSDSPT | DB2TEST |
|_______________________|________________________|_______________________|
| DB2TDBM1 | SYSDSPT | DB2TEST |
|_______________________|________________________|_______________________|
| DB2TDIST | SYSDSPT | DB2TEST |
|_______________________|________________________|_______________________|
| DB2TSPAS | SYSDSPT | DB2TEST |
|_______________________|________________________|_______________________|
| DB2PMSTR | SYSDSPD | DB2PROD |
|_______________________|________________________|_______________________|
| DB2PDBM1 | SYSDSPD | DB2PROD |
|_______________________|________________________|_______________________|
| DB2PDIST | SYSDSPD | DB2PROD |
|_______________________|________________________|_______________________|
| DB2PSPAS | SYSDSPD | DB2PROD |
|_______________________|________________________|_______________________|
| CICSSYS | CICS | CICSGRP |
|_______________________|________________________|_______________________|
| IMSCNTL | IMS | IMSGRP |
|_______________________|________________________|_______________________|

Appendix F - Sample Job to Reassemble the RACF Started Procedures
________________________________________________________________________
| |
| //* |
| //* REASSEMBLE AND LINKEDIT THE RACF STARTED PROCEDURES |
| //* TABLE ICHRIN03 TO INCLUDE USERIDS AND GROUP NAMES |
| //* FOR EACH DB2 CATALOGED PROCEDURE. OPTIONALLY, ENTRIES |
| //* FOR AN IMS OR CICS SYSTEM MIGHT BE INCLUDED. |
| //* |
| //* AN IPL WITH A CLPA (OR AN MLPA SPECIFYING THE LOAD |
| //* MODULE) IS REQUIRED FOR THESE CHANGES TO TAKE EFFECT. |
| //* |
| |
| ENTCOUNT DC AL2(((ENDTABLE-BEGTABLE)/ENTLNGTH)+32768) |
| * NUMBER OF ENTRIES AND INDICATE RACF FORMAT |
| * |
| * PROVIDE FOUR ENTRIES FOR EACH DB2 SUBSYSTEM NAME. |
| * |
| |
| |
| |
| BEGTABLE DS 0H |
| * ENTRIES FOR SUBSYSTEM NAME "DSN" |
| DC CL8'DSNMSTR' SYSTEM SERVICES PROCEDURE |
| DC CL8'SYSDSP' USERID |
| DC CL8'DB2SYS' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| ENTLNGTH EQU *-BEGTABLE CALCULATE LENGTH OF EACH ENTRY |
| DC CL8'DSNDBM1' DATABASE SERVICES PROCEDURE |
| DC CL8'SYSDSP' USERID |
| DC CL8'DB2SYS' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DSNDIST' DDF PROCEDURE |
| DC CL8'SYSDSP' USERID |
| DC CL8'DB2SYS' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DSNSPAS' STORED PROCEDURES PROCEDURE |
| DC CL8'SYSDSP' USERID |
| DC CL8'DB2SYS' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DSNWLM' WLM-ESTABLISHED S.P. ADDRESS SPACE|
| DC CL8'SYSDSP' USERID |
| DC CL8'DB2SYS' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| |
| |
| |
| * ENTRIES FOR SUBSYSTEM NAME "DB2T" |
| DC CL8'DB2TMSTR' SYSTEM SERVICES PROCEDURE |
| DC CL8'SYSDSPT' USERID |
| DC CL8'DB2TEST' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DB2TDBM1' DATABASE SERVICES PROCEDURE |
| DC CL8'SYSDSPT' USERID |
| DC CL8'DB2TEST' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DB2TDIST' DDF PROCEDURE |
| DC CL8'SYSDSPT' USERID |

| DC CL8'DB2TEST' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DB2TSPAS' STORED PROCEDURES PROCEDURE |
| DC CL8'SYSDSPT' USERID |
| DC CL8'DB2TEST' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| |
| |
| |
| * ENTRIES FOR SUBSYSTEM NAME "DB2P" |
| DC CL8'DB2PMSTR' SYSTEM SERVICES PROCEDURE |
| DC CL8'SYSDSPD' USERID |
| DC CL8'DB2PROD' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DB2PDBM1' DATABASE SERVICES PROCEDURE |
| DC CL8'SYSDSPD' USERID |
| DC CL8'DB2PROD' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DB2PDIST' DDF PROCEDURE |
| DC CL8'SYSDSPD' USERID |
| DC CL8'DB2PROD' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'DB2PSPAS' STORED PROCEDURES PROCEDURE |
| DC CL8'SYSDSPD' USERID |
| DC CL8'DB2PROD' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| |
| |
| |
| * OPTIONAL ENTRIES FOR CICS AND IMS CONTROL REGION |
| DC CL8'CICSSYS' CICS PROCEDURE NAME |
| DC CL8'CICS' USERID |
| DC CL8'CICSGRP' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| DC CL8'IMSCNTL' IMS CONTROL REGION PROCEDURE |
| DC CL8'IMS' USERID |
| DC CL8'IMSGRP' GROUP NAME |
| DC X'00' NO PRIVILEGED ATTRIBUTE |
| DC XL7'00' RESERVED BYTES |
| ENDTABLE DS 0D |
| END |
| |
|________________________________________________________________________|

Appendix G - Comparing WLM-established and DB2-established Stored Procedures
_______________________ ________________________
| DB2-established | WLM-established |
|_______________________|________________________|
| There is a single | There can be many |
| stored procedure | stored procedures |
| address space: | address spaces: |
| | |
| ° A failure in one | ° It is possible to |
| stored procedure | isolate stored |
| can affect other | procedures from |
| stored procedures | one another so |
| that are running | that failures do |
| in that address | not affect other |
| space. | stored procedures. |
| | |
| ° Because of | ° Reduces demand for |
| storage that | storage below the |
| language products | 16MB line and |
| need below the | thereby removes |
| 16MB line, it can | the limitation on |
| be difficult to | the number of |
| support more than | stored procedures |
| 50 stored | that can run |
| procedures | concurrently. |
| running at the | |
| same time. | |
|_______________________|________________________|
| Incoming requests for | Requests are handled |
| stored procedures are | in priority order. |
| handled in a | |
| first-in, first-out | |
| order. | |
|_______________________|________________________|
| Stored procedures run | Stored procedures |
| at the priority of | inherit the MVS |
| the stored procedures | dispatching priority |
| address space. | of the DB2 thread that |
| | issues the CALL |
| | statement. |
|_______________________|________________________|
| No ability to | Each stored procedures |
| customize the | address space is |
| environment. | associated with a WLM |
| | environment that you |
| | specify. An |
| | environment is an |
| | attribute associated |
| | with one or more |
| | stored procedures. The |
| | environment determines |
| | which JCL procedure is |
| | used to run a |
| | particular stored |
| | procedure. |
|_______________________|________________________|
| Must run as a MAIN | Can run as a MAIN or |
| program. | SUB program. SUB |
| | programs can run |
| | significantly faster, |
| | but the subprogram |
| | must do more |
| | initialization and |
| | cleanup processing |
| | itself rather than |
| | relying on LE/370 to |
| | handle that. |
|_______________________|________________________|
| You can access | You can access |
| non-relational data, | non-relational data. |
| but that data is not | If the non-relational |
| included in your SQL | data is managed by |
| unit of work. It is a | OS/390 RRS, the |
| separate unit of | updates to that data |
| work. | are part of your SQL |
| | unit of work. |
|_______________________|________________________|
| Stored procedures | Stored procedures can |
| access protected MVS | access protected MVS |
| resources with the | resources with the SQL |
| authority of the | user's RACF authority. |
| stored procedures | |
| address space. | |
|_______________________|________________________|

Appendix H - Sample Stored Procedure

IDENTIFICATION DIVISION.
PROGRAM-ID. BKSPPOS1.
AUTHOR. INFOSYS.
DATE-WRITTEN. 08-14-2001.
DATE-COMPILED.
 
ENVIRONMENT DIVISION.
 
DATA DIVISION.
 
WORKING-STORAGE SECTION.
 
01 PROGRAM-WS-START PIC X(50) VALUE
'*** BKSPPOS1 WORKING STORAGE STARTS HERE ***'.
 
01 WS-WORK-FIELDS.
05 WS-PRSHG-ACCT-OFC-CD PIC X(03) VALUE SPACES.
05 WS-PRSHG-ACCT-NUM PIC X(06) VALUE SPACES.
 
01 WS-MESSAGE PIC X(80).
88 WS-MSG-1 VALUE
'CURSOR OPEN FAILED IN BKSPPOS1 !'.
88 WS-MSG-2 VALUE
'INVALID INPUT PARAMETERS !'.
88 WS-MSG-3 VALUE
'NO MATCHING RECORDS FOUND !'.
88 WS-MSG-4 VALUE
'CURSOR OPEN SUCCESSFUL!'.
88 WS-MSG-5 VALUE
'VALIDATION OF INPUT VALUES FAILED!'.
 
******************************************************************
* COPYBOOK AREA
*
******************************************************************
/
COPY SQLCODES.
 
******************************************************************
* DCLGEN AREA
*
******************************************************************
/
EXEC SQL INCLUDE SQLCA END-EXEC.
/
* DCLGEN FOR TB_AC_ACCT
EXEC SQL INCLUDE TBACACCT END-EXEC.
/
 
******************************************************************
* CURSOR DECLARATION FOR INSTR-QTY CURSOR
*
******************************************************************
 
EXEC SQL DECLARE INSTR_QTY_CSR CURSOR WITH RETURN FOR
 
SELECT
TB_AC_SUBACCT_PSTN.SUBACCT_CD,
TB_AC_SUBACCT_PSTN.SETTLE_DATE_QTY,
TB_AC_SUBACCT_PSTN.TRADE_DATE_QTY
FROM
TB_AC_SUBACCT_PSTN, TB_AC_ACCT
WHERE
TB_AC_ACCT.PRSHG_ACCT_OFC_CD =
:DCLTB-AC-ACCT.PRSHG-ACCT-OFC-CD AND
TB_AC_ACCT.PRSHG_ACCT_NUM =
:DCLTB-AC-ACCT.PRSHG-ACCT-NUM AND
TB_AC_ACCT.ACCT_ID =
TB_AC_SUBACCT_PSTN.ACCT_ID AND
TB_AC_SUBACCT_PSTN.TRADE_DATE_QTY <> 0

END-EXEC.
/
LINKAGE SECTION.
 
COPY BKCPPOS1.
 
******************************************************************
* PROCEDURE DIVISION STARTS HERE
*
******************************************************************
/
PROCEDURE DIVISION USING BKSPPOS1-INPUT-AREA
BKSPPOS1-OUTPUT-AREA.
 
0001-MAIN.
PERFORM 1000-INITIALIZE
THRU 1000-INITIALIZE-EXIT
 
PERFORM 1500-CHECK-INPUT
THRU 1500-CHECK-INPUT-EXIT
 
PERFORM 2000-OPEN-CURSOR
THRU 2000-OPEN-CURSOR-EXIT
 
MOVE SQLCODE TO BKSPPOS1-SQLCODE
 
PERFORM 3000-EXIT-ROUTINE
THRU 3000-EXIT-ROUTINE-EXIT
.
 
0001-MAIN-EXIT.
EXIT
.
 
******************************************************************
* INTIALIZES VARIABLES USED IN THE PROGRAM
*
******************************************************************
1000-INITIALIZE.
 
INITIALIZE BKSPPOS1-OUTPUT-AREA
SET BKSPPOS1-STATUS-OK TO TRUE
.
 
1000-INITIALIZE-EXIT.
EXIT
.
 
/
******************************************************************
* VALIDATES THE INPUT PARAMETERS
*
******************************************************************
1500-CHECK-INPUT.
MOVE BKSPPOS1-PRSHG-ACCT-OFC-CD TO
PRSHG-ACCT-OFC-CD OF DCLTB-AC-ACCT
 
MOVE BKSPPOS1-PRSHG-ACCT-NUM TO
PRSHG-ACCT-NUM OF DCLTB-AC-ACCT
 
EXEC SQL
SELECT
ACCT_ID
INTO
:DCLTB-AC-ACCT.ACCT-ID
FROM
TB_AC_ACCT
WHERE
PRSHG_ACCT_OFC_CD = :DCLTB-AC-ACCT.PRSHG-ACCT-OFC-CD
AND
PRSHG_ACCT_NUM = :DCLTB-AC-ACCT.PRSHG-ACCT-NUM
END-EXEC
 
EVALUATE SQLCODE
WHEN 0
CONTINUE
WHEN -811
CONTINUE
WHEN +100
SET WS-MSG-2 TO TRUE
PERFORM 3000-EXIT-ROUTINE
THRU 3000-EXIT-ROUTINE-EXIT
WHEN OTHER
SET WS-MSG-5 TO TRUE
PERFORM 3000-EXIT-ROUTINE
THRU 3000-EXIT-ROUTINE-EXIT
END-EVALUATE
.
 
1500-CHECK-INPUT-EXIT.
EXIT
.

/
******************************************************************
* OPEN THE INSTR_QTY_CSR CURSOR
*
******************************************************************
2000-OPEN-CURSOR.

EXEC SQL
OPEN INSTR_QTY_CSR
END-EXEC
 
EVALUATE SQLCODE
WHEN +0
SET BKSPPOS1-STATUS-OK TO TRUE
SET WS-MSG-4 TO TRUE
WHEN +100
SET BKSPPOS1-NOT-FOUND TO TRUE
SET WS-MSG-3 TO TRUE
WHEN OTHER
SET BKSPPOS1-ERROR TO TRUE
SET WS-MSG-1 TO TRUE
PERFORM 3000-EXIT-ROUTINE
THRU 3000-EXIT-ROUTINE-EXIT
END-EVALUATE
.
 
2000-OPEN-CURSOR-EXIT.
EXIT
.
 
******************************************************************
* THIS PARA PERFORMS THE EXIT ROUTINE
*
******************************************************************
3000-EXIT-ROUTINE.
MOVE WS-MESSAGE TO BKSPPOS1-ERROR-MSG
.
 
3000-EXIT-ROUTINE-EXIT.
GOBACK
.
