
IBM Tivoli Workload Scheduler
Version 8.2 (Revised December 2004)

Planning and Installation Guide

SC32-1273-02
Note

Before using this information and the product it supports, read the information in “Notices” on page 159.

Third Edition (December 2004)


This edition applies to version 8, release 2, modification level 0 of IBM Tivoli Workload Scheduler (program number
5698-WSH) and to all subsequent releases and modifications until otherwise indicated in new editions.
This edition replaces SC32-1273-01.
© Copyright International Business Machines Corporation 1991, 2004. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

List of figures  vii

List of tables  ix

About this guide  xi
  What is new in this guide  xi
  Who should read this guide  xi
  What this guide contains  xi
  Publications  xii
    Tivoli Workload Scheduler library  xii
    Related publications  xv
    Accessing publications online  xv
    Ordering publications  xvi
  Accessibility  xvi
  Support information  xvii
  Conventions used in this guide  xvii
    Typeface conventions  xvii
    Operating system-dependent variables and paths  xvii
    Command syntax  xviii

Part 1. Introduction  1

Chapter 1. Introduction  3
  Tivoli Workload Scheduler overview  3
  Processes  4
  Network communications  5
  Network operation  6
  Extended agents  6
    Local UNIX access method  7
    Remote UNIX access method  7
    UNIX extended agents  7
  Product instances  8
    Registry file  8
    Components file  9
  Quick start guide  9

Part 2. Planning  11

Chapter 2. Getting started  13
  Network planning  13
    Tivoli Workload Scheduler network overview  13
    Domain functionality  13
    Localized processing in your domain  13
    Network considerations  14
    A single domain network  15
    A multiple domain network  16
    Switching to a backup domain manager  17
    Fault-tolerant switch mechanism  17
    Expanded databases  19
    Workstation names  19
    Connector installation  19
    Time zone considerations  20
  Planning installations  20
    Selecting workstation types  21
    Selecting your installation method  21
    Selecting tier installation type  23
  Checking user authorization requirements  24
    Authorization roles for running the install wizard  24
    Authorization roles for running the twsinst script  24
    Authorization roles for software distribution  24
    Authorization roles for running the customize script  25
    Authorization roles for running an upgrade  25
  Before you install  25
    Information about the Tivoli Workload Scheduler user  26
    Installation element validation criteria  26
    Installing for end-to-end scheduling  27
    Installation information  27
  Before you upgrade  30
    Unlinking and stopping Tivoli Workload Scheduler  30
    Stopping the connector  31
    Backup files  32
    Using the new configuration files  32
    Expanding your database  33
    Tivoli Management Framework implications  33

Part 3. Installing and upgrading  35

Chapter 3. Installing using the installation wizard  37
  Install a new instance of Tivoli Workload Scheduler  37
    Typical installation sequence  38
    Full installation sequence  39
    Custom installation sequence  40
  Add a new feature to an existing installation  43
  Promote an existing installation  45
  Performing a silent installation  45
    Installation procedure  46

Chapter 4. Installing and promoting using twsinst  47
  Install and promote  47

Chapter 5. Installing using Software Distribution  51
  Software packages and parameters  51
  Installation procedure  53
  Installing language packs  54

Chapter 6. Installing using customize  57
  The customize script  57
  Installing the Tivoli Workload Scheduler engine  58

Chapter 7. Upgrading to Tivoli Workload Scheduler  61
  Upgrade scenarios  61
  Upgrading Tivoli Workload Scheduler  62
    Using the installation wizard  62
    Using twsinst  63
    Using the migrationInstall response file  67
    Using Software Distribution  67
    Using customize  68

Part 4. Configuring  71

Chapter 8. After you install  73
  Netman  73
  Configuring a master domain manager  73
  Configuring a fault-tolerant switch manager  74
  Configuring a fault-tolerant or standard agent  74
  Updating the security file  75
  Configuration steps for UNIX Tier 1 and 2 installations  76
  Configuration steps for Tier 2 installations  77
    Configuring a fault-tolerant agent after installation  77
  Enabling the time zone feature  78
    Enabling the time zone in an end-to-end network  79

Chapter 9. Optional customization  81
  Global options  81
    Setting the global options  81
    Carry forward options  85
  Local options  86
    Setting local options  86
  Setting up decentralized administration  94
    Sharing the master directories  94
    Sharing Tivoli Workload Scheduler parameters  94
    Using a single share  95
    Setting local options  95
    Setting local options on the master  96
  Tivoli Workload Scheduler console messages and prompts  96
    Setting sysloglocal on UNIX  96
    console command  97
  Automating the production cycle  97
    Customizing the final job stream  98
    Starting a production cycle  98
  Managing the production environment  98
    Choosing the start of day  98
    Changing the start of day  99
    Creating a plan for future or past dates  99
  Using the configuration scripts  100
    Jobman environment variables  100
    Standard configuration script - jobmanrc  101
    Local configuration script - .jobmanrc  103
  Tivoli Workload Scheduler and Tivoli Management Framework  104
    The Tivoli Management Framework for non-Tivoli users  104
    Adding Tivoli administrators  105
    Backup master considerations  107
    Masters that do not support Tivoli Management Framework  108

Chapter 10. Integration with other IBM Tivoli products  111
  Integration with IBM Tivoli Enterprise Data Warehouse  111
  Integration with IBM Tivoli NetView  111
    General  111
    Installing the integration software  113
    Setting up  116
    Objects, symbols, and submaps  117
    Menu actions  119
    Tivoli Workload Scheduler/NetView events  121
    Tivoli Workload Scheduler/NetView configuration files  123
    Tivoli Workload Scheduler/NetView configuration options  126
    Unison software MIB  127
    Tivoli Workload Scheduler/NetView program reference  130
  Integration with IBM Tivoli Business Systems Manager  132
    General  132
    Using the key flag mechanism  133
    Installing and configuring the common listener agent  134
    Customizing the configuration files  135
    Starting and stopping the common listener agent  136
    Tivoli Workload Scheduler/IBM Tivoli Business Systems Manager events  136

Chapter 11. Setting security  139
  Setting strong authentication and encryption  139
    Key SSL concepts  140
    Planning for SSL support in Tivoli Workload Scheduler  141
    Configuring SSL support in Tivoli Workload Scheduler  143
  Working across firewalls  149

Chapter 12. Uninstalling Tivoli Workload Scheduler  151
  Using the uninstall wizard  151
  Using the twsinst script  151
  Using the Software Distribution CLI  152
  Using the customize script  152

Appendix. Support information  155
  Searching knowledge bases  155
    Search the information center on your local system or network  155
    Search the information center at the IBM support Web site  155
    Search the Internet  155
  Obtaining fixes  156
  Contacting IBM Software Support  156
    Determine the business impact of your problem  157
    Describe your problem and gather background information  157
    Submit your problem to IBM Software Support  157

Notices  159
  Open source: test  160
  Open source: OpenSSL  161
    LICENSE ISSUES  161
    OpenSSL license  161
    Original SSLeay license  162
  Open source: SNMP library  163
  Open source: time zone library  163
  Toolkit of Tivoli Internationalization Services  164
  Trademarks  164

Glossary  165

Index  169
List of figures

1. Process flows  5
2. Processes in the network operation  6
3. Single domain topology  15
4. Internetwork dependencies  16
5. Multiple domain topology  16
6. Multiple inbound connections architecture  18
7. Common listener agent architecture  132
List of tables

1. Command Syntax  xviii
2. Registry file attributes  8
3. Workstation installation selection  21
4. Installation Methods, Components and Features  22
5. Tivoli Workload Scheduler installation options  23
6. Required authorization roles for running the install wizard  24
7. Required authorization roles for running twsinst  24
8. Required authorization roles for Software Distribution  25
9. Required Authorization Roles for running customize  25
10. Required authorization roles for running an upgrade  25
11. Installation element validation criteria  26
12. ISMP features  37
13. CPU data  38
14. Tivoli Workload Scheduler connector information  39
15. Tivoli Management Framework installation panel  39
16. Tivoli Plus Module information panel  41
17. Tivoli Management Framework version installation panel  42
18. Optional installable features and components  43
19. Response files  46
20. SPBs to install Tivoli Workload Scheduler  51
21. SPB installation parameters  52
22. List of parameters to install language packs  54
23. Upgrading to Tivoli Workload Scheduler, Version 8.2  61
24. Using the twsinst backup options  64
25. Globalopts syntax  81
26. Localopts syntax  86
27. Shortcuts for encryption ciphers  91
28. Jobman environment variables  100
29. Variables of jobmanrc  101
30. Tivoli Workload Scheduler/NetView objects and symbols  117
31. Tivoli Workload Scheduler/NetView status  118
32. Tivoli Workload Scheduler/NetView events  121
33. Enterprise-specific traps  127
34. Forwarded events for key and non-key scheduling objects  133
35. Tivoli Workload Scheduler events for Tivoli Business Systems Manager  137
36. Files for Local Options  146
37. Type of communication depending on the securitylevel value  147
38. Shortcuts for encryption ciphers  149
About this guide
The IBM Tivoli Workload Scheduler Planning and Installation Guide provides
information on installing an IBM® Tivoli Workload Scheduler 8.2 network. This
includes information on how to plan a network, install the Tivoli Workload
Scheduler engine, connector software, and the graphical user interface. It also
provides instructions to customize Tivoli Workload Scheduler options and security
in order to start a network. Finally, it gives tips and information on migrating from
previous versions of the product.

What is new in this guide


This chapter describes the modifications made to this guide.

The book is now divided into the following parts:


v Planning: this part of the book provides you with the information necessary to
plan the installation of your network. It describes the architecture, installation
prerequisites, and pre-installation tasks.
v Installing: this part of the book provides you with the information necessary for
you to install your network. It describes the different installation methods.
v Configuring: this part of the book provides you with the information necessary
for you to configure your network after installation.
The previous version of the book was not divided into parts.

In addition to the restructuring of the book, new information relating to the fix
packs issued since the previous release has also been added. This information
relates to new functionality added in fix pack 3 for the backup facility, and new
fault-tolerant switch management functionality added in fix pack 5.

Who should read this guide


This guide is intended for the following audience:
v Tivoli Workload Scheduler administrators - those who plan the layout of the
Tivoli Workload Scheduler network
v Installers - those who install the various software packages on the computers
that make up the Tivoli Workload Scheduler network

What this guide contains


This guide contains the following chapters:
v Chapter 1, “Introduction,” on page 3
Describes the architecture of the product.
v Chapter 2, “Getting started,” on page 13
Describes all you need to know in order to get started with your installation.
v Chapter 3, “Installing using the installation wizard,” on page 37
Describes installation using the install wizard.
v Chapter 4, “Installing and promoting using twsinst,” on page 47
Describes installation using the twsinst script.
v Chapter 5, “Installing using Software Distribution,” on page 51
Describes installation using software package blocks.


v Chapter 6, “Installing using customize,” on page 57
Describes installation using the customize script.
v Chapter 7, “Upgrading to Tivoli Workload Scheduler,” on page 61
Describes how you migrate and promote components.
v Chapter 8, “After you install,” on page 73
Describes the configuration you need to do when you have finished installing.
v Chapter 9, “Optional customization,” on page 81
Describes tuning of your installation.
v Chapter 11, “Setting security,” on page 139
Describes how you set your security parameters.

Publications
This section lists publications in the Tivoli Workload Scheduler library and any other
related documents. It also describes how to access Tivoli publications online and
how to order Tivoli publications.

Tivoli Workload Scheduler library


Tivoli Workload Scheduler comprises several separate products available on a
variety of platforms, and the library is similarly divided:
IBM® Tivoli Workload Scheduling suite library
This library contains all cross-platform and cross-product publications for
Tivoli Workload Scheduler.
IBM Tivoli Workload Scheduler distributed library
This library contains all of the publications that refer to using Tivoli
Workload Scheduler on platforms other than z/OS®.
IBM Tivoli Workload Scheduler for z/OS library
This library contains all publications that apply only to IBM Tivoli
Workload Scheduler for z/OS.
IBM Tivoli Workload Scheduler for Applications library
This library contains all publications that apply only to IBM Tivoli
Workload Scheduler for Applications.
IBM Tivoli Workload Scheduler for Virtualized Data Centers library
This library contains all publications that apply only to IBM Tivoli
Workload Scheduler for Virtualized Data Centers.

IBM Tivoli Workload Scheduling suite library


The following publications are available in the IBM Tivoli Workload Scheduling
suite library. This includes publications which are common to all products,
platforms, and components.
v IBM Tivoli Workload Scheduler: General Information, SC32-1256
Provides general information about all Tivoli Workload Scheduler products. It
gives an overview of how they can be used together to provide workload
management solutions for your whole enterprise.
v IBM Tivoli Workload Scheduler: Job Scheduling Console User’s Guide, SC32-1257
Describes how to work with Tivoli Workload Scheduler, regardless of platform,
using a common GUI called the job scheduling console.
v IBM Tivoli Workload Scheduler: Job Scheduling Console Release Notes, SC32-1258
Provides late-breaking information about the job scheduling console.


v IBM Tivoli Workload Scheduler: Warehouse Enablement Pack Version 1.1.0
Implementation Guide for Tivoli Enterprise Data Warehouse, Version 1.1,
Provides information about enabling Tivoli Workload Scheduler for Tivoli Data
Warehouse.

Note: This guide is only available on the product CD. It is not possible to access
it online, as you can the other books (see “Accessing publications online”
on page xv).

IBM Tivoli Workload Scheduler distributed library


The following publications are available in the IBM Tivoli Workload Scheduler
distributed library. This includes publications which refer to using the product on
all platforms except z/OS.
v IBM Tivoli Workload Scheduler: Release Notes, SC32-1277
Provides late-breaking information about Tivoli Workload Scheduler on
platforms other than z/OS.
v IBM Tivoli Workload Scheduler: Planning and Installation Guide, SC32-1273
Describes how to plan for and install IBM Tivoli Workload Scheduler on
platforms other than z/OS, and how to integrate Tivoli Workload Scheduler
with NetView®, Tivoli Data Warehouse, and IBM Tivoli Business Systems
Manager.
v IBM Tivoli Workload Scheduler: Reference Guide, SC32-1274
Describes the Tivoli Workload Scheduler command line used on platforms other
than z/OS, and how extended and network agents work.
v IBM Tivoli Workload Scheduler: Administration and Troubleshooting, SC32-1275
Provides information about how to administer Tivoli Workload Scheduler on
platforms other than z/OS, and what to do if things go wrong. It includes help
on many messages generated by the main components of Tivoli Workload
Scheduler.
v IBM Tivoli Workload Scheduler: Limited Fault-tolerant Agent for OS/400®, SC32-1280
Describes how to install, configure, and use Tivoli Workload Scheduler limited
fault-tolerant agents on AS/400®.
v IBM Tivoli Workload Scheduler: Plus Module User’s Guide, SC32-1276
Describes how to set up and use the Tivoli Workload Scheduler Plus module.

See http://www.ibm.com/software/tivoli/products/scheduler/ for an introduction to the product.

IBM Tivoli Workload Scheduler for z/OS library


The following documents are available in the Tivoli Workload Scheduler for z/OS
library:
v IBM Tivoli Workload Scheduler for z/OS: Getting Started, SC32-1262
Discusses how to define your installation data for Tivoli Workload Scheduler for
z/OS and how to create and modify plans.
v IBM Tivoli Workload Scheduler for z/OS: Installation Guide
Describes how to install Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Customization and Tuning, SC32-1265
Describes how to customize Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Managing the Workload, SC32-1263
Explains how to plan and schedule the workload and how to control and
monitor the current plan.
v IBM Tivoli Workload Scheduler for z/OS: Quick Reference, SC32-1268
Provides a quick and easy consultation reference to operate Tivoli Workload
Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Diagnosis Guide and Reference, SC32-1261
Provides information to help diagnose and correct possible problems when using
Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Messages and Codes, SC32-1267
Explains messages and codes in Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Programming Interfaces, SC32-1266
Provides information to write application programs for Tivoli Workload
Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Licensed Program Specifications, GI11-4208
Provides planning information about Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Memo for program 5697-WSZ, GI11-4209
Provides a summary of changes for the current release of the product.
v IBM Tivoli Workload Scheduler for z/OS: Program Directory for program 5697-WSZ,
GI11-4203
Provided with the installation tape for Tivoli Workload Scheduler for z/OS
(program 5697-WSZ), describes all of the installation materials and gives
installation instructions specific to the product release level or feature number.
v IBM Tivoli Workload Scheduler for z/OS: Program Directory for program 5698-WSZ,
GI11-4207
Provided with the installation tape for Tivoli Workload Scheduler for z/OS
(program 5698-WSC), describes all of the installation materials and gives
installation instructions specific to the product release level or feature number.

See http://www.ibm.com/software/tivoli/products/scheduler-zos/ for an introduction to the product.

IBM Tivoli Workload Scheduler for Applications library


The following manuals are available in the IBM Tivoli Workload Scheduler for
Applications library:
v IBM Tivoli Workload Scheduler for Applications: Release Notes, SC32-1279
Provides late-breaking information about the Tivoli Workload Scheduler
extended agents.
v IBM Tivoli Workload Scheduler for Applications: User’s Guide, SC32-1278
Describes how to install, use, and troubleshoot the Tivoli Workload Scheduler
extended agents.

See http://www.ibm.com/software/tivoli/products/scheduler-apps/ for an introduction to the product.

IBM Tivoli Workload Scheduler for Virtualized Data Centers library
The following manuals are available in the IBM Tivoli Workload Scheduler for
Virtualized Data Centers library:
v IBM Tivoli Workload Scheduler for Virtualized Data Centers: Release Notes, SC32-1453
Provides late-breaking information about Tivoli Workload Scheduler for
Virtualized Data Centers.

v IBM Tivoli Workload Scheduler for Virtualized Data Centers: User’s Guide, SC32-1454
Describes how to extend the scheduling capabilities of Tivoli Workload
Scheduler to workload optimization and grid computing by enabling the control
of IBM LoadLeveler® and IBM Grid Toolbox jobs.
See http://www.ibm.com/software/info/ecatalog/en_US/products/Y614224T20392S50.html for an introduction to the product.

Related publications
The following documents provide additional information:
v IBM Redbooks™: High Availability Scenarios with IBM Tivoli Workload Scheduler and
IBM Tivoli Framework
This IBM Redbook shows you how to design and create highly available IBM
Tivoli Workload Scheduler and IBM Tivoli Management Framework (TMR
server, Managed Nodes and Endpoints) environments. It presents High
Availability Cluster Multiprocessing (HACMP™) for AIX® and Microsoft®
Windows® Cluster Service (MSCS) case studies.
This Redbook can be found on the Redbooks Web site at
http://www.redbooks.ibm.com/abstracts/sg246632.html
v IBM Redbooks: Customizing IBM Tivoli Workload Scheduler for z/OS V8.2 to Improve
Performance
This IBM Redbook covers the techniques that can be used to improve the
performance of Tivoli Workload Scheduler for z/OS (including end-to-end
scheduling).
This Redbook can be found on the Redbooks Web site at
http://www.redbooks.ibm.com/abstracts/sg246352.html
v IBM Redbooks: End-to-End Scheduling with IBM Tivoli Workload Scheduler Version 8.2
This IBM Redbook considers how best to provide end-to-end scheduling using
Tivoli Workload Scheduler Version 8.2, both distributed (previously known as
Maestro™) and mainframe (previously known as OPC) components.
This Redbook can be found on the Redbooks Web site at
http://www.redbooks.ibm.com/abstracts/sg246624.html

The Tivoli Software Glossary includes definitions for many of the technical terms
related to Tivoli software. The Tivoli Software Glossary is available at the following
Tivoli software library Web site:

http://publib.boulder.ibm.com/tividd/glossary/tivoliglossarymst.htm

Accessing publications online


The product CD contains the publications that are in the product library. The
format of the publications is PDF, HTML, or both. To access the publications using
a Web browser, open the infocenter.html file. The file is in the appropriate
publications directory on the product CD.

IBM posts publications for this and all other Tivoli products, as they become
available and whenever they are updated, to the Tivoli software information center
Web site. Access the Tivoli software information center by first going to the Tivoli
software library at the following Web address:

http://www.ibm.com/software/tivoli/library/

Scroll down and click the Product manuals link. In the Tivoli Technical Product
Documents Alphabetical Listing window, click the appropriate Tivoli Workload
Scheduler product link to access the product’s libraries at the Tivoli software
information center. All publications in the Tivoli Workload Scheduler suite library,
distributed library and z/OS library can be found under the entry Tivoli Workload
Scheduler.

Note: If you print PDF documents on other than letter-sized paper, set the option
in the File → Print window that allows Adobe Reader to print letter-sized
pages on your local paper.

Tivoli Workload Scheduler online books


All the books in the Tivoli Workload Scheduler for z/OS library are available in
displayable softcopy form on CD-ROM in the IBM Online Library: z/OS Software
Products Collection Kit, SK3T-4270. You can read the softcopy books on CD-ROMs
using these IBM licensed programs:
v BookManager® READ/2 (program number 5601-454)
v BookManager READ/DOS (program number 5601-453)
v BookManager READ/6000 (program number 5765-086)
All the BookManager programs need a personal computer equipped with a
CD-ROM disk drive (capable of reading disks formatted in the ISO 9660 standard)
and a matching adapter and cable. For additional hardware and software
information, refer to the documentation for the specific BookManager product you
are using.

Updates to books between releases are provided in softcopy only.

Ordering publications
You can order many Tivoli publications online at the following Web site:
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi

You can also order by telephone by calling one of these numbers:


v In the United States: 800-879-2755
v In Canada: 800-426-4968

In other countries, see the following Web site for a list of telephone numbers:

http://www.ibm.com/software/tivoli/order-lit/

Accessibility
Accessibility features help users with a physical disability, such as restricted
mobility or limited vision, to use software products successfully. With this product,
you can use assistive technologies to hear and navigate the interface. You can also
use the keyboard instead of the mouse to operate all features of the graphical user
interface.

For additional information, see the Accessibility Appendix in the Tivoli Workload
Scheduler Job Scheduling Console User’s Guide.

Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM
provides the following ways for you to obtain the support you need:
v Searching knowledge bases: You can search across a large collection of known
problems and workarounds, Technotes, and other information.
v Obtaining fixes: You can locate the latest fixes that are already available for your
product.
v Contacting IBM Software Support: If you still cannot solve your problem, and
you need to work with someone from IBM, you can use a variety of ways to
contact IBM Software Support.

For more information about these three ways of resolving problems, see “Support
information,” on page 155.

Conventions used in this guide


This guide uses several conventions for special terms and actions, operating
system-dependent commands and paths, command syntax, and margin graphics.

Typeface conventions
This guide uses the following typeface conventions:
Bold
v Lowercase commands and mixed case commands that are otherwise
difficult to distinguish from surrounding text
v Interface controls (check boxes, push buttons, radio buttons, spin
buttons, fields, folders, icons, list boxes, items inside list boxes,
multicolumn lists, containers, menu choices, menu names, tabs, property
sheets), labels (such as Tip:, and Operating system considerations:)
v Keywords and parameters in text
Italic
v Words defined in text
v Emphasis of words (words as words)
v New terms in text (except in a definition list)
v Variables and values you must provide
Monospace
v Examples and code examples
v File names, programming keywords, and other elements that are difficult
to distinguish from surrounding text
v Message text and prompts addressed to the user
v Text that the user must type
v Values for arguments or command options

Operating system-dependent variables and paths


This guide uses the UNIX® convention for specifying environment variables and
for directory notation.

When using the Windows command line, replace $variable with %variable% for
environment variables and replace each forward slash (/) with a backslash (\) in

directory paths. The names of environment variables are not always the same in
Windows and UNIX. For example, %TEMP% in Windows is equivalent to $tmp in
UNIX.

Note: If you are using the bash shell on a Windows system, you can use the UNIX
conventions.
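
For example, a path that this guide writes as $TWShome/mozart/globalopts (where TWShome stands for the installation directory of the product) is entered as follows on each platform:

UNIX:     $TWShome/mozart/globalopts
Windows:  %TWShome%\mozart\globalopts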

Command syntax
This guide uses the following syntax wherever it describes commands:
Table 1. Command Syntax

Syntax convention    Description

Brackets ([ ])       The information enclosed in brackets ([ ]) is optional.
                     Anything not enclosed in brackets must be specified.

Braces ({ })         Braces ({ }) identify a set of mutually exclusive options,
                     when one option is required.

Underscore ( _ )     An underscore (_) connects multiple words in a variable.

Vertical bar ( | )   Mutually exclusive options are separated by a vertical bar
                     (|). You can enter one of the options separated by the
                     vertical bar, but you cannot enter multiple options in a
                     single use of the command. A vertical bar can be used to
                     separate optional or required options.

Bold                 Bold text designates literal information that must be
                     entered on the command line exactly as shown. This applies
                     to command names and non-variable options.

Italic               Italic text is variable and must be replaced by whatever
                     it represents.
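
As an illustration of these conventions, consider the following hypothetical syntax line (the command and its options are invented for this example only):

mycmd -file file_name [-verbose] {-full | -delta}

Here mycmd and -file are entered literally, file_name must be replaced with a real value, -verbose can be omitted, and exactly one of -full or -delta must be specified.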



Part 1. Introduction
Chapter 1. Introduction . . . . . . . . . . 3
Tivoli Workload Scheduler overview . . . . . . 3
Processes . . . . . . . . . . . . . . . 4
Network communications . . . . . . . . . . 5
Network operation . . . . . . . . . . . . 6
Extended agents . . . . . . . . . . . . . 6
Local UNIX access method . . . . . . . . 7
Remote UNIX access method . . . . . . . . 7
UNIX extended agents . . . . . . . . . . 7
Managing production for extended agents . . 7
Product instances . . . . . . . . . . . . . 8
Registry file . . . . . . . . . . . . . 8
Components file . . . . . . . . . . . . 9
Viewing the components file . . . . . . . 9
Quick start guide . . . . . . . . . . . . . 9

Chapter 1. Introduction
This guide contains instructions for installing, configuring, and upgrading to IBM
Tivoli Workload Scheduler, Version 8.2.

Tivoli Workload Scheduler overview


Tivoli Workload Scheduler is composed of three parts:
Tivoli Workload Scheduler Engine
The engine runs on several types of Microsoft Windows and UNIX
operating systems. Installing the engine in this guide refers to installing
either a master, backup master, fault-tolerant agent, or standard agent and
their associated binaries.
Tivoli Workload Scheduler Connector
Maps Job Scheduling Console commands to the Tivoli Workload Scheduler
engine. Install the connector on the master and on any of the fault-tolerant
agents (FTA) that you will use as backup machines for the master CPU.
The connector requires that the Tivoli Management Framework first be
configured for a Tivoli server or managed node.
Job Scheduling Console
A graphical user interface (GUI) based on Java™ for Tivoli Workload
Scheduler. Install the Job Scheduling Console on any machine from which
you want to manage plan and database objects. The Job Scheduling
Console does not require the Tivoli Workload Scheduler engine or
connector to be installed on the same workstation. You can use the Job
Scheduling Console from any machine as long as it has a TCP/IP link with
the machine running the Tivoli Workload Scheduler connector. For more
detailed information on the Job Scheduling Console, refer to IBM Tivoli
Workload Scheduler Job Scheduling Console, User’s Guide.

A Tivoli Workload Scheduler network is made up of the workstations on which jobs and job streams are run.

Primarily, workstation definitions refer to physical workstations. However, in the case of extended and network agents the workstations are logical definitions that
must be hosted by a physical Tivoli Workload Scheduler workstation.

When you are installing a Tivoli Workload Scheduler network, you can choose
from the following types of workstation:
Master Domain Manager (MDM)
The master domain manager is the domain manager of the topmost domain in a
Tivoli Workload Scheduler network. It contains the centralized database files used to
document scheduling objects. It creates the Production plan, distributes it
to all the agents in the network at the start of each day, and performs all
logging and reporting for the network.
Backup Master
A fault-tolerant agent capable of assuming the responsibilities of the master
domain manager.


Fault-tolerant Agent (FTA)
A workstation capable of resolving local dependencies and launching its
jobs in the absence of a domain manager.
Standard Agent
A workstation that launches jobs only under the direction of its domain
manager.

In addition to the roles that you can select during installation, fault-tolerant
agents can assume one of the following roles:
Domain Manager
The management hub in a domain. All communications to and from the agents
in a domain are routed through the domain manager.
Host
The scheduling function required by extended agents. It can be performed by
any Tivoli Workload Scheduler workstation, except another extended agent.
Extended Agent
A logical workstation definition that enables you to launch and control jobs on
other systems and applications.
Network Agent
A logical workstation definition for creating dependencies between jobs and
job streams in separate Tivoli Workload Scheduler networks.

Processes
Netman is started by the StartUp script. The order of process creation is Netman,
Mailman, Batchman, and Jobman. On standard agent workstations, Batchman does
not run. All processes, except Jobman, run as the TWS user. Jobman runs as root.

As network activity begins, Netman receives requests from remote Mailman processes. Upon receiving a request, Netman spawns a Writer process and passes
the connection off to it. Writer receives the message and passes it to the local
Mailman. The Writer processes (there may be more than one on a domain
manager) are started by link requests and are stopped by unlink requests (or when
the communicating Mailman terminates).

Domain managers, including the master domain manager, can communicate with a
large number of agents and subordinate domain managers. For improved
efficiency, you can define Mailman servers on a domain manager to distribute the
communications load (see the section that explains how to manage workstations in
the database in the IBM Tivoli Workload Scheduler Job Scheduling Console User’s
Guide).
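
For example, on a UNIX workstation you can start netman and check the process tree from the command line as follows (the installation path is an assumption for this example):

cd /opt/tws/maestro     # the TWShome directory of this instance
./StartUp               # starts netman
ps -ef | grep -E "netman|mailman|batchman|jobman"

Immediately after StartUp, typically only netman is listed; mailman, batchman, and jobman appear once the workstation is linked and production processing begins.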


Figure 1. Process flows. (The figure shows the StartUp script starting Netman on both a master/domain manager and a fault-tolerant agent; on each workstation the flow continues through Writer, Mailman, Batchman, and Jobman, and a method connects the manager to an extended agent.)

Network communications
In a Tivoli Workload Scheduler network, agents communicate with their domain
managers, and domain managers communicate with their parent domain
managers. There are basically two types of communication that take place:
v Start-of-day initialization
v Scheduling change-of-state event messages during the processing day

Before the start of each new day, the master domain manager creates a production
control file called Symphony. Then, Tivoli Workload Scheduler is restarted in the
network, and the master domain manager sends a copy of the new Symphony file to
each of its automatically-linked agents and subordinate domain managers. The
domain managers, in turn, send copies to their automatically-linked agents and
subordinate domain managers. Agents and domain managers that are not set up to
link automatically are initialized with a copy of Symphony as soon as a link
operation is run in Tivoli Workload Scheduler. The autolink flag is set by default
when a workstation is created in the Job Scheduling Console.
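
For example, a workstation that does not have the autolink flag set can be initialized manually from the master domain manager with the conman link command (CPUA is a hypothetical workstation name):

conman "link CPUA"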

After the network is started, scheduling messages, like job starts and completions,
are passed from the agents to their domain managers, through parent domain
managers to the master domain manager. The master domain manager then

broadcasts the messages throughout the hierarchical tree to update the domain
managers and all fault-tolerant agents running in full status mode.

Network operation
The Batchman process on each domain manager and fault-tolerant agent
workstation operates autonomously, scanning its Symphony file to resolve
dependencies and launch jobs. Batchman launches jobs via the Jobman process. On
a standard agent, the Jobman process responds to launch requests from the domain
manager’s Batchman.

The master domain manager is continuously informed of job launches and completions and is responsible for broadcasting the information to domain
managers and fault-tolerant agents so they can resolve any interworkstation
dependencies.

The degree of synchronization among the Symphony files depends on the setting of
Full Status and Resolve Dependencies modes in a workstation’s definition.
Assuming that these modes are turned on, a fault-tolerant agent’s Symphony file
contains the same information as the master domain manager’s (see the IBM Tivoli
Workload Scheduler Job Scheduling Console User’s Guide).
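
You can check the distribution of the Symphony file and the link status of the workstations with the conman showcpus command (sc is its abbreviation), run on a domain manager or fault-tolerant agent:

conman sc

The output lists each workstation in the plan together with, among other information, its run state and link status.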

Figure 2. Processes in the network operation. (The figure shows Batchman reading the Symphony file and launching jobs through Jobman on both a fault-tolerant agent and the master/domain manager; on a standard agent, Jobman launches jobs without a local Batchman.)

Extended agents
An extended agent serves as an interface to an external, non-Tivoli Workload
Scheduler system or application. It is defined as a Tivoli Workload Scheduler
workstation with an access method and a host. The access method communicates
with the external system or application to launch and monitor jobs and to check
file dependencies (a job or job stream that must verify the existence of one or
more files before it can begin running is said to have a file dependency). The host
is another Tivoli Workload Scheduler workstation (except another extended agent)
that resolves dependencies and issues job launch requests via the method.

Jobs are defined for an x-agent in the same manner as for other Tivoli Workload
Scheduler workstations, except that job attributes are dictated by the external
system or application.

Extended agent software is available for several systems and applications. The
UNIX extended agents, included with Tivoli Workload Scheduler, are described in
the following section.

Local UNIX access method


The Local UNIX method can be used to define multiple Tivoli Workload Scheduler
workstations on one computer: the host workstation and one or more extended
agents. When Tivoli Workload Scheduler sends a job to a local UNIX extended
agent, the access method, unixlocl, is invoked by the host to run the job. The
method starts by running the standard configuration script on the host workstation
(TWShome/jobmanrc). If the job’s logon user is permitted to use a local configuration
script and the script exists as $HOME/.jobmanrc, the local configuration script is also
run. The job itself is then run either by the standard or the local configuration
script. If neither configuration script exists, the method starts the job.

The launching of the configuration scripts, jobmanrc and .jobmanrc, is configurable in the method script. The method runs the configuration scripts by default, if they
exist. To disable this feature, you must comment out a set of lines in the method
script. For more information, examine the script file TWShome/methods/unixlocl on
the x-agent’s host.
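
As a sketch, a local UNIX extended agent might be defined with a composer workstation definition similar to the following, where the workstation name, node, and host name are invented for the example:

cpuname UX-LOCAL
  os unix
  node lab1234.example.com
  for maestro
    host MASTER
    access unixlocl
  end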

Remote UNIX access method


The Remote UNIX access method can be used to designate a non-Tivoli Workload
Scheduler computer to run Tivoli Workload Scheduler-scheduled jobs. When Tivoli
Workload Scheduler sends a job to a remote UNIX extended agent, the access
method, unixrsh, creates a /tmp/maestro directory on the non-Tivoli Workload
Scheduler computer. It then transfers a wrapper script to the directory and runs it.
The wrapper then runs the scheduled job. The wrapper is created only once, unless
it is deleted, moved, or is outdated.

To run jobs via the x-agent, the job logon users must be given appropriate access
on the non-Tivoli Workload Scheduler UNIX computer. To do this, a .rhosts,
/etc/hosts.equiv, or equivalent file should be set up on the computer. If Opens file
dependencies are to be checked, root access must also be permitted. Contact your
system administrator for help. For more information about the access method,
examine the script file TWShome/methods/unixrsh on an x-agent’s host.
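
For example, to allow a job logon user to run jobs on the remote computer, an entry similar to the following could be added to /etc/hosts.equiv (or to the logon user's $HOME/.rhosts) on that computer, naming the x-agent's host workstation and the user (both names here are invented):

twshost.example.com  jobuser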

UNIX extended agents


Tivoli Workload Scheduler includes access methods for two types of UNIX
extended agents. The Local UNIX method allows a single UNIX computer to
operate as two Tivoli Workload Scheduler workstations, both of which can run
Tivoli Workload Scheduler scheduled jobs. The Remote UNIX access method
allows you to designate a remote UNIX computer to run Tivoli Workload
Scheduler scheduled jobs without having Tivoli Workload Scheduler installed on it.

Information about a job's execution is sent to Tivoli Workload Scheduler from an extended agent via the job's stdlist file. A Method Options file can specify alternate
logons to launch jobs and check Opens file dependencies. For more information,
see the IBM Tivoli Workload Scheduler Reference Guide.

Managing production for extended agents


In general, jobs that run on x-agents behave like other Tivoli Workload Scheduler
jobs. Tivoli Workload Scheduler tracks a job’s status and records output in the job’s
stdlist files. These files are stored on the x-agent's host workstation. For more
information on managing jobs, see the section that describes Tivoli Workload
Scheduler plan tasks in the IBM Tivoli Workload Scheduler Job Scheduling Console
User’s Guide.

Product instances
Multiple copies of the product can be installed on a single computer provided that
a unique name and installation path are used for each instance. Instances are
recorded in the registry file for Tier 1 platforms and in the components file for Tier
2 platforms. Former versions of Tivoli Workload Scheduler were also registered in
the components file.

Registry file
On Tier 1 platforms, when you install Tivoli Workload Scheduler using the ISMP
installation program or the twsinst script, a check is performed to determine
whether there are other Tivoli Workload Scheduler instances already installed. The
TWSRegistry.dat file records the history of all installed instances; this is its sole
purpose. On Windows platforms, this file is stored under the system
drive directory, for example, c:\winnt\system32. On UNIX platforms, this file is
stored in the /etc/TWS path. The file contains the values of the following
attributes that define a Tivoli Workload Scheduler installation:
Table 2. Registry file attributes
Attribute Value
ProductID TWS_ENGINE
PackageName The name of the software package used to
perform the installation.
InstallationPath The absolute path of the Tivoli Workload
Scheduler instance.
UserOwner The owner of the installation.
MajorVersion Tivoli Workload Scheduler release number.
MinorVersion Tivoli Workload Scheduler version number.
MaintenanceVersion Tivoli Workload Scheduler maintenance
version number.
PatchVersion The latest product patch number installed.
Agent Any one of the following: standard agent,
fault-tolerant agent, master domain manager.
FeatureList The list of optional features installed.
LPName The name of the software package block that
installs the language pack.
LPList A list of all languages installed for the
instance installed.

The following is an example of a TWSRegistry.dat file on a master domain manager:
/Tivoli/Workload_Scheduler/tws_nord_DN_objectClass=OU
/Tivoli/Workload_Scheduler/tws_nord_DN_PackageName=TWS_NT_tws_nord.8.2
/Tivoli/Workload_Scheduler/tws_nord_DN_MajorVersion=8
/Tivoli/Workload_Scheduler/tws_nord_DN_MinorVersion=2
/Tivoli/Workload_Scheduler/tws_nord_DN_PatchVersion=
/Tivoli/Workload_Scheduler/tws_nord_DN_FeatureList=TBSM
/Tivoli/Workload_Scheduler/tws_nord_DN_ProductID=TWS_ENGINE
/Tivoli/Workload_Scheduler/tws_nord_DN_ou=tws_nord

/Tivoli/Workload_Scheduler/tws_nord_DN_InstallationPath=c:\TWS\tws_nord
/Tivoli/Workload_Scheduler/tws_nord_DN_UserOwner=tws_nord
/Tivoli/Workload_Scheduler/tws_nord_DN_MaintenanceVersion=
/Tivoli/Workload_Scheduler/tws_nord_DN_Agent=MDM

Components file
For product installations on Tier 2 platforms and for Tivoli Workload Scheduler
version 7.0 and 8.1 installations, product groups are defined in the components file.
This file permits multiple copies of a product to be installed on a single computer
by designating a different user for each copy. If the file does not exist prior to
installation, it is created by the customize script. For example:

<product>   <version>   <home directory>           <product group>
Maestro     7.0         /opt/maestro               production
maestro     (8.2)       /data/maestro8/maestro     TWS_maestro8_8.2

Entries in this file are automatically made and updated by the customize script.

On UNIX, the file name of the components file is defined in the variable:
UNISON_COMPONENT_FILE

If the variable is not set, customize uses the file name:


/usr/unison/components
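
For example, to have customize record instances in a components file in a non-default location, you could export the variable before running the script (the path shown is only an illustration):

UNISON_COMPONENT_FILE=/opt/unison/components
export UNISON_COMPONENT_FILE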

Viewing the components file


Following installation or an upgrade, you can view the contents of the components
file on a Tier 2 platform by running the ucomp program as follows:
ucomp -l

Quick start guide


To install Tivoli Workload Scheduler, perform the following tasks:
v Chapter 2, “Getting started,” on page 13 for planning your installation before
you begin
v Part 3, “Installing and upgrading,” on page 35 for installing Tivoli Workload
Scheduler and the connector components
v Chapter 8, “After you install,” on page 73 for configuring your installation

You can also perform the following optional tasks:


v Chapter 11, “Setting security,” on page 139 for setting Tivoli Workload Scheduler
security parameters
v Chapter 9, “Optional customization,” on page 81 for setting local and global
options
v “Promote an existing installation” on page 45 for upgrading an existing
installation
v “Add a new feature to an existing installation” on page 43 for adding a feature
or component to an existing installation



Part 2. Planning
Chapter 2. Getting started . . . . . . . . . 13
Network planning . . . . . . . . . . . . 13
Tivoli Workload Scheduler network overview . . 13
Domain functionality . . . . . . . . . . 13
Localized processing in your domain . . . . . 13
Network considerations . . . . . . . . . 14
A single domain network . . . . . . . . . 15
A multiple domain network . . . . . . . . 16
Switching to a backup domain manager . . . . 17
Fault-tolerant switch mechanism . . . . . . 17
Expanded databases . . . . . . . . . . 19
Workstation names . . . . . . . . . . . 19
Connector installation . . . . . . . . . . 19
Distributed connectors on fault-tolerant agents 20
Time zone considerations . . . . . . . . . 20
Planning installations . . . . . . . . . . . 20
Selecting workstation types . . . . . . . . 21
Selecting your installation method . . . . . . 21
Selecting tier installation type . . . . . . . . 23
Checking user authorization requirements . . . . 24
Authorization roles for running the install wizard 24
Authorization roles for running the twsinst script 24
Authorization roles for software distribution . . 24
Authorization roles for running the customize
script . . . . . . . . . . . . . . . 25
Authorization roles for running an upgrade . . 25
Before you install . . . . . . . . . . . . 25
Information about the Tivoli Workload Scheduler
user . . . . . . . . . . . . . . . . 26
Creating a user account on Windows
operating systems . . . . . . . . . . 26
Creating a user account on UNIX systems . . 26
Installation element validation criteria . . . . 26
Installing for end-to-end scheduling . . . . . 27
Installation information . . . . . . . . . 27
The installation CDs . . . . . . . . . 27
Installation log files . . . . . . . . . . 29
Windows services . . . . . . . . . . 30
Modifying the jobmon service rights for
Windows . . . . . . . . . . . . . 30
Before you upgrade . . . . . . . . . . . 30
Unlinking and stopping Tivoli Workload
Scheduler . . . . . . . . . . . . . . 30
Stopping the connector . . . . . . . . . 31
Backup files . . . . . . . . . . . . . 32
Using the new configuration files . . . . . . 32
Expanding your database . . . . . . . . . 33
Tivoli Management Framework implications . . 33
When a supported Tivoli Management
Framework version is already installed . . . 33

Chapter 2. Getting started
This chapter describes the information you need to prepare for installation.

Network planning
Before you begin installing Tivoli Workload Scheduler, determine the answers to
the following questions.
1. Will you use multiple domains or a single domain network structure?
2. If you use multiple domains, how will you divide your domains:
v By geographical locations, for example, London and Paris domains?
v By time zone, for example Pacific Standard Time (PST) and Eastern Standard
Time (EST)?
v By business unit, for example marketing and accounting?
3. Will you activate the time zone feature?
4. Will your environment contain firewalls?

Tivoli Workload Scheduler network overview


A Workload Scheduler network contains at least one Workload Scheduler domain,
the master domain, in which the master domain manager is the management hub.
Additional domains can be used to divide a widely distributed network into
smaller, locally managed groups.

Using multiple domains reduces the amount of network traffic by reducing the
communications between the master domain manager and other computers.

In a single domain configuration, the master domain manager maintains communications with all of the workstations in the Workload Scheduler network.

In a multi-domain configuration, the master domain manager communicates with the workstations in its domain, and with subordinate domain managers. The
subordinate domain managers, in turn, communicate with the workstations in their
domains and subordinate domain managers. Multiple domains also provide
fault-tolerance by limiting the problems caused by losing a domain manager in a
single domain. To limit the effects further, you can designate backup domain
managers to take over if their domain managers fail.

Domain functionality
When you define a new domain, you must identify the parent domain and the
domain manager. The parent domain is the domain directly above the new domain
in the domain hierarchy. All communications to and from a domain are routed
through the parent domain manager.

Localized processing in your domain


A key to choosing how to set up your Tivoli Workload Scheduler domains is the
concept of localized processing. The idea is to separate or localize your scheduling
needs based on a common set of characteristics.


Common characteristics are things such as geographical locations, business functions, and application groupings. Grouping related processing can limit the
amount of interdependency information that needs to be communicated between
domains. The benefits of localizing processing in domains are:
v Decreased network traffic. Keeping processing localized to domains eliminates
the need for frequent interdomain communications.
v Provides a convenient way to tighten security and simplify administration.
Security and administration can be defined at, and limited to, the domain level.
Instead of network-wide or workstation-specific administration, you can have
domain administration.
v Network and workstation fault tolerance can be optimized. In a multiple domain
Tivoli Workload Scheduler network, you can define backups for each domain
manager, so that problems in one domain do not disrupt operations in other
domains.

Network considerations
The following questions will help in making decisions about how to set up your
Tivoli Workload Scheduler network. Some questions involve aspects of your
network, and others involve the applications controlled by Tivoli Workload
Scheduler.
v How large is your Tivoli Workload Scheduler network? How many computers
does it hold? How many applications and jobs does it run?
The size of your network will help you decide whether to use a single domain
or the multiple domain architecture. If you have a small number of computers,
or a small number of applications to control with Tivoli Workload Scheduler,
there may not be a need for multiple domains.
v How many geographic locations will be covered in your Tivoli Workload
Scheduler network? How reliable and efficient is the communication between
locations?
This is one of the primary reasons for choosing a multiple domain architecture.
One domain for each geographical location is a common configuration. If you
choose single domain architecture, you will be more reliant on the network to
maintain continuous processing.
v Do you need centralized or decentralized management of Tivoli Workload
Scheduler?
A Tivoli Workload Scheduler network, with either a single domain or multiple
domains, gives you the ability to manage Tivoli Workload Scheduler from a
single node, the master domain manager. If you want to manage multiple
locations separately, you can consider the installation of a separate Tivoli
Workload Scheduler network at each location. Note that some degree of
decentralized management is possible in a standalone Tivoli Workload Scheduler
network by mounting or sharing file systems.
v Do you have multiple physical or logical entities at a single site? Are there
different buildings, and several floors in each building? Are there different
departments or business functions? Are there different applications?
These may be reasons for choosing a multi-domain configuration. For example, a
domain for each building, department, business function, or each application
(manufacturing, financial, engineering, and so on).
v Do you run applications that will operate with Tivoli Workload Scheduler?
If they are discrete and separate from other applications, you may choose to put
them in a separate Tivoli Workload Scheduler domain.

v Would you like your Tivoli Workload Scheduler domains to mirror your
Windows domains?
This is not required, but may be useful.
v Do you want to isolate or differentiate a set of systems based on performance or
other criteria?
This may provide another reason to define multiple Tivoli Workload Scheduler
domains to localize systems based on performance or platform type.
v How much network traffic do you have now?
If your network traffic is manageable, the need for multiple domains is less
important.
v Do your job dependencies cross-system boundaries, geographical boundaries, or
application boundaries? For example, does the start of Job1 on workstation1
depend on the completion of Job2 running on workstation2?
The degree of interdependence between jobs is an important consideration when
laying out your Tivoli Workload Scheduler network. If you use multiple
domains, you should try to keep interdependent objects in the same domain.
This will decrease network traffic and take better advantage of the domain
architecture.
v What level of fault-tolerance do you require?
An obvious disadvantage of the single domain configuration is the reliance on a
single domain manager. In a multi-domain network, the loss of a single domain
manager affects only the agents in its domain.

A single domain network


A single domain Tivoli Workload Scheduler network consists of a master domain
manager and any number of agents. Figure 3 shows an example of a single domain
network. A single domain network is well suited to companies that have few
locations and business functions. All communication in the network is routed
through the master domain manager. With a single location, you are concerned
only with the reliability of your local network and the amount of traffic it can
handle.

Figure 3. Single domain topology

Single domain networks can be combined with other networks, single or multiple
domain, to meet multiple site requirements. Tivoli Workload Scheduler supports
internetwork dependencies between jobs running on different Tivoli Workload
Scheduler networks.

Figure 4. Internetwork dependencies

The first example shows a single domain network. The master domain manager is
located in Atlanta, along with several agents. There are also agents located in
Denver. The agents in Denver depend on the master domain manager in Atlanta to
resolve all interagent dependencies, even though the dependencies may be on jobs
that run in Denver. An alternative would be to create separate single domain Tivoli
Workload Scheduler networks in Atlanta and Denver, as shown in the second
example.

A multiple domain network


Multiple domain networks are especially suited to companies that span multiple
locations, departments, or business functions. A multiple domain Tivoli Workload
Scheduler network consists of a master domain manager, any number of lower tier
domain managers, and any number of agents in each domain. Agents
communicate only with their domain managers, and domain managers
communicate with their parent domain managers.

Figure 5. Multiple domain topology

As Figure 5 illustrates, the master domain manager is located in Atlanta; it contains
the database files used to document the scheduling objects, and distributes the
Symphony file to its agents and the domain managers in Denver and Los Angeles.
The Denver and Los Angeles domain managers then distribute the Symphony file
to their agents and subordinate domain managers in Boulder, Aurora and Burbank.
The master domain manager in Atlanta is responsible for broadcasting
inter-domain information throughout the network.

All communications to and from the Boulder domain manager are routed through
its parent domain manager in Denver. If there are schedules or jobs in the Boulder
domain that are dependent on schedules or jobs in the Aurora domain, those
dependencies are resolved by the Denver domain manager. Most interagent
dependencies are handled locally by the lower tier domain managers, greatly
reducing traffic on the WAN (Wide Area Network).

Switching to a backup domain manager


Each domain has a domain manager and, optionally, one or more backup domain
managers. A backup domain manager must be in the same domain as the domain
manager it is backing up. Backup domain managers must be fault-tolerant
agents running the same product version as the domain manager they are
intended to replace, and must have the Resolve Dependencies and Full Status
options enabled in their workstation definitions.

If a domain manager fails during the production day, you can use either the Job
Scheduling Console, or the switchmgr command in the conman command line, to
switch to a backup domain manager. A Switch Manager action can be run by
anyone with start and stop access to the domain manager and backup domain
manager workstations.

A switch manager operation stops the backup manager, then restarts it as the new
domain manager, and converts the old domain manager to a fault-tolerant agent.
The identities of the current domain managers are carried forward in the
Symphony file from one processing day to the next, so any switch remains in effect
until you switch back to the original domain manager.
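
For example, the following conman command switches management of a domain to
its backup domain manager; the domain name MASTERDM and the workstation name
BDM1 are illustrative placeholders only:

   conman "switchmgr MASTERDM;BDM1"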

Fault-tolerant switch mechanism


The optional fault-tolerance mechanism is based on multiple inbound
connections, and on switching roles between the domain manager and the backup
domain manager. Inside a domain, events are no longer routed through the
primary domain manager, but arrive directly from the originating fault-tolerant
agents. When a fault-tolerant agent sends an event to the primary domain
manager, it also sends the same event to all the full-status fault-tolerant
agents in that domain. If it is unable to deliver the event to any of them, the
event is buffered in the corresponding pobox file on the fault-tolerant agent.
Limit the number of fault-tolerant agents that have full status, because each
additional full-status agent increases network traffic.

Figure 6 on page 18 shows the multiple inbound connections architecture.

Figure 6. Multiple inbound connections architecture

The plain arrows represent the connections that are created in a Tivoli
Workload Scheduler network without the multiple inbound connections
architecture. The dashed arrows represent the additional inbound connections
that are created to the full-status fault-tolerant agent in a domain with the
multiple inbound connections architecture.

When the fault-tolerant switch is active, the link and unlink commands issued
from the primary domain manager act on both the primary and the secondary
connections.

When a full-status fault-tolerant agent receives an event, it processes it and
does not route it further, but buffers it locally in a cyclical queue called
ftbox. ftbox acts as a recovery queue.

The multiple inbound connections architecture ensures that all events received
and processed by the primary domain manager are also received and processed by
the full-status fault-tolerant agent (or will be received and processed later
if the events are still in a fault-tolerant agent pobox). If the primary domain
manager fails, you can use the switchmgr command to switch the domain manager
functionality from the primary domain manager to a selected full-status
fault-tolerant agent.

When the switchmgr command is received, all the fault-tolerant agents in that
domain disconnect from the primary domain manager and connect to the
full-status fault-tolerant agent. During the link establishment phase, the new
manager re-synchronizes with each connecting workstation by resending and
regenerating the delta of the events that were buffered on the ftboxes, ensuring
that none of the events still in the primary domain manager message boxes are lost
or duplicated.

The full-status fault-tolerant agents are always updated with the latest status
information, and all the unprocessed or partially processed events are stored
on at least two machines (the originating fault-tolerant agent, if it was not
able to deliver the event, and the domain manager or the full-status
fault-tolerant agent). The events are then ready to be resent and reprocessed,
thus eliminating the single point of failure in the communication between the
primary domain manager and the backup domain manager.

Note: This approach applies both to top-down and bottom-up traffic. Inbound
does not depend on the direction of the traffic, but is domain-centric, and is
repeated the same way for each domain where at least one full-status
fault-tolerant agent resides.

Expanded databases
With Tivoli Workload Scheduler, Version 8.2, databases are created expanded on
the master domain manager the first time you run the Tivoli Workload Scheduler
engine.

If you are upgrading from version 7.0 or 8.1 to version 8.2, ensure that you expand
your databases before using them with Tivoli Workload Scheduler version 8.2. See
“Expanding your database” on page 33 for more information.

Workstation names
Job scheduling in a Tivoli Workload Scheduler network is distributed across multiple
computers. To accurately track jobs, schedules, and other objects, each computer is
given a unique workstation name. The names can be the same as network node
names, as long as they comply with the naming rules of Tivoli Workload
Scheduler. The maximum permitted length of a workstation name is sixteen
alphanumeric, dash (-), and underscore (_) characters starting with a letter.
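
For example, assuming these purely illustrative names:

   ATLANTA_MDM                  valid
   denver-fta2                  valid
   1DENVER                      not valid (must start with a letter)
   LA PAYROLL                   not valid (contains a space)
   ACCOUNTSRECEIVABLEDENVER2    not valid (longer than sixteen characters)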

Connector installation
The connector is a Tivoli Management Framework service that enables Job
Scheduling Console clients to communicate with the Tivoli Workload Scheduler
engine. To install the connector you must have Tivoli Management Framework
version 3.7.1 or later. The connector must be installed on a system that is
either a Tivoli server or a managed node.

If you want to install the connector in your Tivoli Workload Scheduler domain,
but you have no existing Tivoli regions and are not interested in implementing
a full Tivoli management environment, install the Tivoli Management Framework
as a separate region (that is, as a Tivoli server) on each node that will run
the connector.

You can install connectors on workstations other than the master domain
manager. This allows you to view the version of the Symphony file on that
particular workstation, which can be important if you use the Job Scheduling
Console to manage the local parameters database or to submit commands directly
to the workstation rather than through the master. The workstation on which
you install the connector must be either a managed node or a Tivoli server in
the Tivoli management region. However, to manage scheduling objects in the Tivoli
Workload Scheduler database, you must install the Connector on the master
domain manager configured as a Tivoli server or managed node.

See Chapter 3, “Installing using the installation wizard,” on page 37 for
information about installing the connector as an additional feature.

Distributed connectors on fault-tolerant agents


The installation of distributed connectors on fault-tolerant agents depends on
the type of user.

Non-Tivoli environment users:


v When you create a fault-tolerant agent, be sure you install the Tivoli Workload
Scheduler engine before you install the connector.
v Follow the installation instructions described in the chapter that explains how to
install the connector in the Tivoli Workload Scheduler Job Scheduling Console User’s
Guide or follow the procedure in Chapter 3, “Installing using the installation
wizard,” on page 37.

Tivoli environment users:


v Typically, the master domain manager resides on a managed node. You must
first install the Job Scheduling Services/connector classes on the Tivoli server.
v Be careful not to create an instance on a managed node that does not have the
Tivoli Workload Scheduler engine installed.

All users:
v Be aware that during the connector installation process you will be prompted
for a Tivoli Workload Scheduler instance name. This name will be displayed in the
Job Scheduling tree of the Job Scheduling Console. To avoid confusion, you
should use a name that includes the name of the fault-tolerant agent.
v If you are installing the connector on several fault-tolerant agents within a
network, keep in mind that the instance names must be unique both within the
Tivoli Workload Scheduler network and the Tivoli management region.
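
For example, you can check which connector instance names are already
registered in the Tivoli management region before choosing a new one. After
setting the Tivoli environment, run the following query; MaestroEngine is the
resource type under which connector instances are registered:

   wlookup -ar MaestroEngine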

Time zone considerations


Time zone support is an optional feature that is disabled by default. When
enabled, time zone support allows you to manage workloads at a global level.

Enabling time zones removes the dead time in your global network. The dead time
is the period between the Tivoli Workload Scheduler start of day on the master
domain and the corresponding local time on a fault-tolerant agent in another
time zone. For example, if a master in an eastern time zone has a start of day
at 6 a.m. and initializes a fault-tolerant agent in a western time zone with a
3-hour time difference, the dead time for the fault-tolerant agent is between
3 a.m. and 6 a.m. local time.

For a description of how the time zone works, refer to IBM Tivoli Workload
Scheduler: Reference Guide.

Planning installations
This section describes the things you need to take into consideration before you
start to install Tivoli Workload Scheduler.

Selecting workstation types


Before you start installing, decide which type of workstation you are going to
install. Read “Tivoli Workload Scheduler overview” on page 3 for a description
of the various workstation types.

Table 3 summarizes the type of workstation on which to install the components:


Table 3. Workstation installation selection
Workstation type Connector
Master Domain Manager Yes
Fault-tolerant Agent Optional
Standard Agent No

Note: Also install the connector on your backup domain manager.

Selecting your installation method


There are several methods for installing Tivoli Workload Scheduler:
InstallShield Multi-Platform (ISMP) wizard
When you are installing Tivoli Workload Scheduler on a single
workstation, you can use the installation wizard in interactive or silent
mode. In interactive mode, the wizard guides you through the installation
steps. In silent mode, a response file provides the relevant information to
the install process, which runs in the background without user intervention
(see the example after this list). This method of installation uses a Java
Virtual Machine and therefore has specific system requirements; refer to the
Tivoli Workload Scheduler Release Notes for more information. See Chapter 3,
“Installing using the installation wizard,” on page 37.
twsinst script for Tier 1 platforms
Running this script installs Tivoli Workload Scheduler on UNIX Tier 1
platforms. This method does not use a Java Virtual Machine and can be
used instead of the ISMP wizard. See Chapter 4, “Installing and promoting
using twsinst,” on page 47.
Software Distribution software package blocks (SPBs)
The Tivoli Workload Scheduler engine can be installed using the Software
Distribution component of IBM Tivoli Configuration Manager, Versions 4.2
or 4.2.1 by distributing software package blocks. See Chapter 5, “Installing
using Software Distribution,” on page 51.
Customize shell script for Tier 2 platforms
Running this script installs Tivoli Workload Scheduler on Tier 2 platforms.
Some configuration steps are required before and after running the script.
See Chapter 6, “Installing using customize,” on page 57.
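
For example, a silent installation with the ISMP wizard is typically launched
with a command like the following; the response file name is an illustrative
assumption (see “Performing a silent installation” on page 45 for details):

   ./SETUP.bin -options responsefile.txt -silent

On Windows, run SETUP.exe with the same arguments.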

The following sections describe why you would choose one way over another.

Table 4 on page 22 lists the available installation methods and the components
and features each method installs, depending on whether you are installing on a
Tier 1 or a Tier 2 platform.

Table 4. Installation methods, components, and features

Tier 1 platforms:

ISMP wizard
       Agents: fault-tolerant agent, master domain manager, backup domain
       manager, or standard agent.
       Optional features: connector + Tivoli Management Framework; Tivoli Plus
       Module + Tivoli Management Framework; language packs (for a standard
       agent, language packs only).
       Refer to: Chapter 3, “Installing using the installation wizard,” on
       page 37.

Silent install
       Agents: fault-tolerant agent, master domain manager, backup domain
       manager, or standard agent.
       Optional features: connector + Tivoli Management Framework; Tivoli Plus
       Module + Tivoli Management Framework; language packs (for a standard
       agent, language packs only).
       Refer to: “Performing a silent installation” on page 45.

twsinst script (UNIX platforms only)
       Agents: fault-tolerant agent, master domain manager, backup domain
       manager, or standard agent.
       Optional features: none; the language packs are installed
       automatically.
       Refer to: Chapter 4, “Installing and promoting using twsinst,” on
       page 47.

Software Distribution
       Agents: fault-tolerant agent, master domain manager, backup domain
       manager, or standard agent.
       Optional features: none. Language packs are installed as a separate
       package; see “Installing language packs” on page 54.
       Refer to: “Software packages and parameters” on page 51.

Tivoli Management Framework
       Agents: none.
       Optional features: connector and Tivoli Plus Module.
       Refer to: Tivoli Workload Scheduler Job Scheduling Console User’s Guide
       and Tivoli Workload Scheduler Plus Module User’s Guide.

Tier 2 platforms:

customize
       Agents: agent.
       Refer to: Chapter 6, “Installing using customize,” on page 57.

Note: Refer to Tivoli Workload Scheduler Release Notes for a list of supported Tier 1
and Tier 2 platforms.

Selecting tier installation type


Table 5 provides the information you need to decide which type of installation
to perform.
Table 5. Tivoli Workload Scheduler installation options

Typical
       Type of agent: fault-tolerant agent.
       Features: language pack for the operating system locale.
       Description: automatically installs a fault-tolerant agent with its
       associated binaries, as well as the language pack for the operating
       system locale.

Full
       Type of agent: master domain manager.
       Features: connector; all language packs and the corresponding Tivoli
       Management Framework language packs.
       Description: installs a master domain manager and its associated
       binaries, the connector and Tivoli Management Framework (if not
       present), and all supported language packs.

Custom
       Type of agent: standard agent, fault-tolerant agent, backup master, or
       master domain manager.
       Features: connector, Tivoli Plus Module, and selected additional
       language packs together with the Tivoli Management Framework language
       packs of the same selected languages.
       Description: installs the binaries related to the type of agent
       selected and, optionally, the connector, the Tivoli Plus Module, and
       additional language packs. For a standard agent, only additional
       language packs can be selected. The Tivoli Management Framework
       version 3.7.1 or 4.1 is a prerequisite for the connector and the
       Tivoli Plus Module. See “Tivoli Management Framework implications” on
       page 33 for more information.

Before running the installation program, decide on the type of installation you
want to perform:
v “Typical installation sequence” on page 38
v “Full installation sequence” on page 39
v “Custom installation sequence” on page 40

Checking user authorization requirements


Depending on the installation method you choose, you need to check the
authorization roles before beginning the installation procedure.

Authorization roles for running the install wizard


Table 6 provides the authorization roles required to use the ISMP method of
installation.
Table 6. Required authorization roles for running the install wizard

Custom or typical installation using the ISMP wizard
       Context: machine.
       Required role:
       v Windows: your login account must be a member of the Windows
         Administrators group.
       v UNIX: root access.

ISMP wizard installations involving the Tivoli Management Framework, that is,
a custom installation that installs either the connector or the Tivoli Plus
Module*, or adding a feature that includes the connector or the Tivoli Plus
Module* to an existing installation
       Context: machine.
       Required role:
       v Windows: your login account must be the local Administrator.
       v UNIX: root access.
       Context: Tivoli Management Region.
       Required role: super, admin, or install_product.

* If a supported version of the Tivoli Management Framework is already
installed, ensure that you perform the product installation in one of the
following ways:
v Log on as the same user that installed the Tivoli Management Framework.
v If you log on as a different user than the one that installed the Tivoli
Management Framework, edit the Tivoli Administrator logins to include this
user login name.
v Create a new Tivoli Administrator and edit the logins with the user name
that will perform the installation.

Authorization roles for running the twsinst script


Table 7 provides the authorization roles required to use the twsinst method of
installation.
Table 7. Required authorization roles for running twsinst
Activity Context Required role
Running the twsinst script Machine root access

Authorization roles for software distribution


Table 8 on page 25 provides the authorization roles required to use the Software
Distribution method of installation.


Table 8. Required authorization roles for Software Distribution

Using Software Distribution to install a software package block
       Context: TMR.
       Required role: admin, senior, or super.
       Context: machine.
       Required role:
       v Windows: your login account must be a member of the Windows
         Administrators group.
       v UNIX: root access.

Authorization roles for running the customize script


Table 9 provides the authorization roles required to use the customize method of
installation.
Table 9. Required Authorization Roles for running customize
Activity Context Required role
Running the customize script Machine root access

Authorization roles for running an upgrade


Table 10 provides the context and authorization roles required for running an
upgrade.
Table 10. Required authorization roles for running an upgrade

Upgrading using the installation program or the silent installation
       Context: machine.
       Required role:
       v Windows: your login account must be a member of the Windows
         Administrators group. If the upgrade process involves the Tivoli
         Management Framework, log in as the local Administrator.
       v UNIX: root access.
       Context: TMR.
       Required role: super, admin, or install_product.

Upgrading using the twsinst script
       Context: machine.
       Required role: root access.

Upgrading using Software Distribution
       Context: TMR.
       Required role: admin, senior, or super.

Upgrading using the customize script
       Context: machine.
       Required role: root access.

Before you install


This section describes optional configuration tasks and information about what
changes are made to your system when you install the product. Topics include the
following:
v “Information about the Tivoli Workload Scheduler user” on page 26
v “Installing for end-to-end scheduling” on page 27
v “Installation information” on page 27

Information about the Tivoli Workload Scheduler user


New Tivoli Workload Scheduler installations require a user account with specific
permissions to install the software. For Windows platforms, the installation
methods described in this guide create this user automatically or can use an
existing user provided the user has the required rights. On UNIX platforms,
regardless of the method of installation you choose, the Tivoli Workload Scheduler
user must be created manually before running the installation. The following
sections describe creating the user account manually, on both Windows and UNIX
operating systems.

Creating a user account on Windows operating systems


On Windows operating systems, the installation automatically creates the Tivoli
Workload Scheduler user with the appropriate rights, if the user does not already
exist. However, if you encounter problems with the creation of the user, you can
perform the following steps.
1. Create a local user account named TWS on the computer where you will install
Tivoli Workload Scheduler.

Note: You can also use an existing user account. Ensure, however, that this
user is a member of the Windows Administrators group.
2. Grant the TWSuser the following advanced user rights:
   Act as part of the operating system
   Increase quotas
   Log on as batch job
   Log on as a service
   Log on locally
   Replace a process level token

Creating a user account on UNIX systems


Installations on UNIX systems require that a valid user account already exist on
the workstation. Tivoli Workload Scheduler is installed in the home directory of
this user. Use the appropriate operating system commands to create this user.
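
For example, on Linux or Solaris you might create the user as follows; the
user name twsuser and the home directory are illustrative assumptions, and the
equivalent command on AIX is mkuser:

   useradd -d /home/twsuser -m twsuser
   passwd twsuser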

Installation element validation criteria


Table 11 lists the criteria you must follow when installing. When you install
using the installation wizard, most of these criteria are checked by the
system. When you install with another method they are not checked, and the
installation fails if they are not met. Read these criteria before you begin
the installation.
Table 11. Installation element validation criteria

Installation Path
       Spaces: no.
       Default: C:\win32app\TWS\$(tws_user) (Windows); the user’s home
       directory (UNIX).

Cpu Name
       Spaces: no. Maximum length: 16 characters.
       Valid characters: a-z, A-Z, 0-9, dash (-), and underscore (_); the
       first character must be alphabetic.
       Default: the host name.

Master Cpu Name
       Spaces: no. Maximum length: 16 characters.
       Valid characters: a-z, A-Z, 0-9, dash (-), and underscore (_); the
       first character must be alphabetic.
       Default: MASTER.

TCP Port Number
       Spaces: no. Maximum length: 5 digits.
       Valid values: 0 – 65535.

TWS User Name
       Spaces: no. Maximum length: 16 characters.
       Valid characters: a-z, A-Z, 0-9, dash (-), and underscore (_); the
       first character must be alphabetic.

TWS User Password
       Spaces: no.

TWS User Domain (Windows)
       Spaces: no. Maximum length: 16 characters.

TWS Agent Type
       Spaces: no.

Company Name
       Spaces: no. Maximum length: 40 characters.

Installing for end-to-end scheduling


If you are installing Tivoli Workload Scheduler on a workstation that will be used
as a distributed agent for end-to-end scheduling (a domain manager, a
fault-tolerant agent, or a standard agent), you must specify OPCMASTER as the
name of the master domain manager during the installation process.

Installation information
The installation installs Tivoli Workload Scheduler files for the TWSuser in
TWShome, where:
TWSuser
Is the user for which Tivoli Workload Scheduler is installed. On Windows
systems, if you specify a user name that is already defined on the
workstation, the installation automatically assigns the user the necessary
rights to perform the installation. On UNIX workstations only, you must
create the user login account for which you are installing the product prior
to running the installation, if it does not already exist.
TWShome
The installation location. On Windows systems, the default installation
location is defined as system_drive\win32app\TWS\TWSuser, but you can
specify a different location. On UNIX systems, the product is installed in
the user’s home directory.

The installation CDs


The following CDs are required to start the installation process:

Tivoli Workload Scheduler Disk 1
       Includes images for AIX, Solaris, HP-UX, and Windows.

Tivoli Workload Scheduler Disk 2
       Includes images for Linux and Tier 2 platforms.

For Windows platforms, the SETUP.exe file is located in the Windows folder on
IBM Tivoli Workload Scheduler Installation Disk 1.

When you copy the image of a specific platform onto the workstation for
installation using the wizard, in addition to the specific image you must also copy
the following files:
v media.inf
v SETUP.jar
v Tivoli_TWS_LP.SPB
v TWS_size.txt

Installation log files


Details of the installation process are logged in various log files located in
the temporary directory set on the local machine, or in the temporary_directory
specified when you install the product using the following command:
./SETUP.bin [-is:tempdir temporary_directory]. You can check the following log
files for information regarding the installation.
TWSIsmp.log
The file to which the installation wizard program writes.
TWSInstall.log
The file to which executable programs defined in the software package
blocks write.

TWS_$(operating_system)_$(TWS_user)^8.2.log
The file to which Software Distribution writes.
twsinst_<ID>.log
The file to which twsinst writes. <ID> indicates the type of installation
process used.

For more information about log files, refer to the Tivoli Workload Scheduler
Administration and Troubleshooting guide.
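
For example, to have the wizard write its log files under a specific temporary
directory, you might launch it as follows; the directory path is an
illustrative assumption:

   ./SETUP.bin -is:tempdir /var/tmp/tws82install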

Windows services
An installation on Windows operating systems registers the following services with
the Windows Service Control Manager:
v Tivoli Workload Scheduler (for TWSuser)
v Tivoli Netman (for TWSuser)
v Tivoli Token Service (for TWSuser)
v Autotrace Runtime

The Service Control Manager maintains its own user password database. Therefore,
if the TWSuser password is changed following installation, you must use the
Services applet in the Control Panel to assign the new password for the Tivoli
Token Service and Tivoli Workload Scheduler (for TWSuser).

Modifying the jobmon service rights for Windows


On Windows systems, the Tivoli Workload Scheduler jobmon service runs in the
SYSTEM account with the right Allow Service to Interact with Desktop granted to
it. You can remove the right for security reasons. However, this will prevent the
service from launching interactive jobs that run in a window on the user’s desktop.
These jobs are not accessible and do not have access to desktop resources. As a
result, they may run forever or abend due to lack of resources.

Before you upgrade


Upgrades of Tivoli Workload Scheduler version 7.0 and 8.1 to version 8.2 are
supported. Before you upgrade your installation, read the topics in this section.
Topics include:
v “Unlinking and stopping Tivoli Workload Scheduler”
v “Stopping the connector” on page 31
v “Backup files” on page 32
v “Using the new configuration files” on page 32
v “Expanding your database” on page 33
v “Tivoli Management Framework implications” on page 33

Unlinking and stopping Tivoli Workload Scheduler


Before you perform an upgrade, promote, or uninstall, ensure that all Tivoli
Workload Scheduler processes and services are stopped. If you have jobs that are
currently running, the related processes must be stopped manually. Follow these
steps:
1. From the Job Scheduling Console, stop the target workstation. Otherwise, from
the command line of the master domain manager, while logged in as the
TWSuser, use the following command:
conman "stop;wait"

2. From the Job Scheduling Console, unlink the target workstation from the other
workstations in the network. Otherwise, from the command line of the master
domain manager, use the following command:
conman "unlink workstationname;noask"
3. From the command line (UNIX) or command prompt (Windows), stop the
netman process as follows:
v On UNIX, run:
conman "shut;wait"
v On Windows, run the shutdown.cmd command from the Tivoli Workload
Scheduler home directory.
4. If you are updating an agent, remove (unmount) any NFS mounted directories
from the master domain manager.
5. If you are upgrading an installation that includes the Connector, ensure that
you stop the Connector as well. See the next section for reference.

To verify whether there are services and processes still running, complete the
following steps:
v On UNIX, type the following command, where TWSuser is the name of the Tivoli
Workload Scheduler user:
ps -u TWSuser

Verify that the following processes are not running: netman, mailman, batchman,
writer, jobman, JOBMAN, stageman.
v On Windows, run the command:
TWShome\unsupported\listproc.exe

Verify that the following processes are not running: netman, mailman, batchman,
writer, jobman, stageman, JOBMON, tokensrv, batchup.
Also, ensure that no system programs are accessing the directory or anything
below it, including the Command prompt and Windows Explorer.
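
For example, on UNIX you can confirm that no scheduler processes remain with a
standard pipeline such as the following, where TWSuser is the name of the
Tivoli Workload Scheduler user; if the command returns no lines, the processes
are stopped:

   ps -u TWSuser | grep -E "netman|mailman|batchman|writer|jobman|stageman"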

Stopping the connector


If you are upgrading an installation that includes the connector, ensure you stop
the connector before starting the upgrade process. Only the ISMP installation
wizard and silent install methods are able to upgrade existing connector
installations. You use the wmaeutil command to stop the connector.

To run the wmaeutil command, follow these steps:


1. Set the Tivoli environment:
v From a UNIX command line:
– For ksh:
. /etc/Tivoli/setup_env.sh
– For csh:
source /etc/Tivoli/setup_env.csh
v From a Windows command line:
%SYSTEMROOT%\system32\drivers\etc\Tivoli\setup_env.cmd
2. Enter the following command:
v On UNIX
wmaeutil.sh ALL -stop
v On Microsoft Windows
wmaeutil.cmd ALL -stop

Chapter 2. Getting started 31


Before you upgrade

Backup files
The upgrade procedure on Tier 1 platforms backs up the entire Tivoli Workload
Scheduler Version 7.0 or 8.1 installation to a directory named:
TWShome_backup_TWSuser

Note: The backup files are moved to the same file system where you originally
installed the product. A check is performed to ensure that there is enough
space on the file system, otherwise, the upgrade procedure cannot start. If
you do not have the required disk space to perform the upgrade, back up the
mozart database and all your customized configuration files, and install a
new instance of Tivoli Workload Scheduler, Version 8.2. Then, transfer the
saved files and the mozart database to the new installation.

Configuration files, namely localopts, globalopts, and tbsmadapter.config, are
often customized to meet your specific needs, and you can use the saved copies
to incorporate your changes following the upgrade. The installation program
does not overwrite any files in the mozart, stdlist, or unison directories that
were modified after Tivoli Workload Scheduler was installed.

If there are any other files you want to protect during an upgrade, copy or
rename them now. As an added precaution, you should also back up the entire
TWShome directory.

Note also that if you have placed any personal files or directories in the
TWShome directory, these will be lost during the upgrade process, because all
the files in TWShome that do not belong to the IBM Tivoli Workload Scheduler
installation are not migrated. Back up these files or directories before
starting the upgrade, and restore them when the upgrade has completed.

Using the new configuration files


During the upgrade process, the configuration file templates of the previous
IBM Tivoli Workload Scheduler versions are overwritten by the new ones in the
TWShome/config directory. Working copies of these templates are installed with the
.TWS82 file extension. They carry default values in addition to some of the
information you provided during installation. This process applies to the following
configuration files:
v TWShome/mozart/globalopts
v TWShome/localopts
v TWShome/Tbsm/TbsmAdapter/adapter.config

Note: The .TWS82 files are no longer installed if you upgrade your installation with
the twsinst.sh script distributed with the APAR IY48550 fix (contained in
the fixpack 3 package).

Following the upgrade process, you can continue to use the configuration files you
were using in the previous installation. For example, after upgrading you can find
three copies of the global options file distributed as follows:
v TWShome/config/globalopts
  This is the file template.
v TWShome/mozart/globalopts
  This is the old global options file.
v TWShome/mozart/globalopts.TWS82
  This is the working copy of the new Version 8.2 global options file.
You can choose one of the following options:
v Continue to use your old global options file. In this case, you need to do
nothing.
v Use the new global options file. In this case, you must:
1. Rename globalopts as globalopts.old
2. Rename globalopts.TWS82 as globalopts
v Use the new global options file with the addition of some of the values you had
in your older file. You can then do as in the preceding option and then manually
edit the file.

Remember that if you want to activate the new optional features available with
Version 8.2 (such as, for example, SSL), either use the new .TWS82 options file, or
manually add the corresponding options to the older version.
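
For example, on UNIX you can switch to the new global options file with
standard commands, run as the TWSuser from the TWShome/mozart directory:

   mv globalopts globalopts.old
   mv globalopts.TWS82 globalopts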

Expanding your database


If you are upgrading from version 7.0 or 8.1 to version 8.2, ensure that you expand
your databases before running the upgrade process. Also, if your network includes
backup masters with copies of database files, expand them before running the
upgrade process. Expanded databases is the default setting for installations of
Tivoli Workload Scheduler, Version 8.2. It permits the use of long names for
scheduling objects -- for example, job names can contain up to forty characters
instead of eight as in earlier versions. To expand the databases, run the dbexpand
command on the master domain manager. The program sets the Global Option
expanded version to yes, makes backup copies of the existing databases, and
expands the databases to accept long object names. Run dbexpand only one time;
otherwise, the backup copies of the existing databases are overwritten and
lost. The backup copies are placed in a directory named mozart.old in the
mozart directory.
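
For example, logged in as the TWSuser on the master domain manager, the
expansion is run as follows; this is a minimal sketch, and you should check the
IBM Tivoli Workload Scheduler: Reference Guide for any options that apply to
your environment:

   dbexpand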

Tivoli Management Framework implications


The Tivoli Workload Scheduler engine is not a Tivoli Management Framework
application. However, Tivoli Management Framework, Version 3.7.1 or 4.1 is a
prerequisite of the Tivoli Workload Scheduler connector and the Tivoli Plus
Module. The connector is required if you want to use the Job Scheduling Console.

When you install or upgrade to Tivoli Workload Scheduler, Version 8.2, optional
features such as the Tivoli Workload Scheduler connector and the Tivoli Plus
Module both require the presence of Tivoli Management Framework, Version 3.7.1
or 4.1. These features must be installed on the Tivoli server. Upgrades on managed
nodes are not supported using the installation program, but can be performed
using the Tivoli desktop. The installation program automatically installs the Tivoli
Management Framework server if it is not detected during the installation. If an
installation is detected, the installation program verifies the version, and if a
supported version is not detected, the upgrade is not performed. You must
manually upgrade the Tivoli Management Framework to either version 3.7.1 or 4.1
and then begin the upgrade process.

When a supported Tivoli Management Framework version is already installed
The installation program is able to detect the presence of a Tivoli Management
Framework installation, as well as verify the version. If the version that is detected
is supported, there are a number of prerequisites that must be met, otherwise, the
installation results in error:

Administrator roles
Verify that the Tivoli Management Framework Administrator has the
install_product authorization role assigned.
Tivoli Management Framework up and running
Verify that the Tivoli Management Framework server is up and running.
Tivoli Management Server, not managed node
The Tivoli Workload Scheduler connector and the Tivoli Plus Module must
be installed on a Tivoli Management Framework server. Upgrades on
managed nodes are not supported using the installation program, but can
be performed using the Tivoli desktop.
No prior versions of the Tivoli Workload Scheduler connector or the Tivoli Plus
Module on managed nodes in the region
To upgrade the Tivoli Workload Scheduler connector and the Tivoli Plus
Module on the Tivoli server using the installation program, no prior
versions of these features must exist on managed nodes in the Tivoli
region. In an environment where you have connectors or the Tivoli Plus
Module on the Tivoli server and managed nodes, you must perform the
upgrade in two phases: 1) use the installation program to upgrade the
Tivoli Workload Scheduler engine only on the Tivoli server and then, 2)
use the Tivoli desktop to upgrade the Connectors on the managed nodes
and Tivoli server.



Part 3. Installing and upgrading
Chapter 3. Installing using the installation wizard 37
Install a new instance of Tivoli Workload Scheduler 37
Typical installation sequence . . . . . . . . 38
Full installation sequence . . . . . . . . . 39
Custom installation sequence . . . . . . . 40
Add a new feature to an existing installation . . . 43
Promote an existing installation . . . . . . . 45
Performing a silent installation . . . . . . . . 45
Installation procedure . . . . . . . . . . 46

Chapter 4. Installing and promoting using twsinst . . . . . . . . . . . . . 47
Install and promote . . . . . . . . . . . . 47

Chapter 5. Installing using Software Distribution 51
Software packages and parameters . . . . . . 51
Installation procedure . . . . . . . . . . . 53
Installing language packs . . . . . . . . . . 54

Chapter 6. Installing using customize . . . . . 57
The customize script . . . . . . . . . . . 57
Installing the Tivoli Workload Scheduler engine . . 58

Chapter 7. Upgrading to Tivoli Workload Scheduler . . . . . . . . . . . . . 61
Upgrade scenarios . . . . . . . . . . . . 61
Upgrading Tivoli Workload Scheduler . . . . . 62
Using the installation wizard . . . . . . . 62
Using twsinst . . . . . . . . . . . . . 63
Backing up before running the script . . . . 63
Running twsinst . . . . . . . . . . . 65
Using the migrationInstall response file . . . . 67
Using Software Distribution . . . . . . . . 67
Using customize . . . . . . . . . . . . 68

Chapter 3. Installing using the installation wizard
This chapter describes how to install, add a feature, promote and uninstall Tivoli
Workload Scheduler using the installation wizard. The installation wizard runs
only on Tier 1 platforms. Refer to the Tivoli Workload Scheduler Release Notes for a
list of supported Tier 1 platforms. The chapter includes the following sections:
v “Install a new instance of Tivoli Workload Scheduler”
v “Add a new feature to an existing installation” on page 43
v “Promote an existing installation” on page 45
v “Performing a silent installation” on page 45
For information about upgrading to Tivoli Workload Scheduler, Version 8.2, see
Chapter 7, “Upgrading to Tivoli Workload Scheduler,” on page 61.

Install a new instance of Tivoli Workload Scheduler


A new installation has a typical, full, and custom option. Table 12 details what is
installed for each installation option.
Table 12. ISMP features

Agent type
       Typical: fault-tolerant agent. Custom: user’s choice. Full: master
       domain manager. Upgrade: user’s choice.
Tivoli Plus Module
       Typical: not installed. Custom: user’s choice. Full: not installed.
       Upgrade: not installed.
Connector
       Typical: not installed. Custom: user’s choice. Full: installed.
       Upgrade: not installed.
Tivoli Framework
       Typical: not installed. Custom: installed when the Tivoli Plus Module
       or the connector is selected. Full: installed. Upgrade: not installed.
Languages
       Typical: system locale. Custom: user’s choice. Full: all languages.
       Upgrade: not installed.

To install a new instance of Tivoli Workload Scheduler perform the following steps:
1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
If you are installing on a Linux workstation, insert IBM Tivoli Workload
Scheduler Installation Disk 2.
2. Run the setup program for the operating system on which you are installing.
v On Windows platforms, the SETUP.exe file is located in the Windows
folder.
v On UNIX platforms, the SETUP.bin file is located in the root directory of
the installation CD. Only use the SETUP.bin file on the CD for a full install.
For custom or typical installs, use the system SETUP.bin located in the /bin
directory.
3. The installation wizard is launched. Select the language of the installation
wizard. Click OK.
4. Read the welcome information and click Next.
5. Read and accept the license agreement. Click Next.

6. The Install a new Tivoli Workload Scheduler Agent option is selected by
   default. Click Next.
7. Specify the Tivoli Workload Scheduler user name. Spaces are not permitted.
v On Windows systems, if this user account does not already exist, it is
automatically created by the installation program. If you specify a domain
user, specify the name as domain_name\user_name. If you specify a local user
with the same name as a domain user, the local user must first be created
manually by an administrator and then specified as system_name\user_name.
Type and confirm the password.

Note: The password must comply with the password policy in your Local
Security Settings; otherwise, the installation fails.
v On UNIX systems, this user account must be created manually before
running the installation program. Create a user with a home directory. IBM
Tivoli Workload Scheduler will be installed under the HOME directory of
the selected user.
Click Next.
8. On Windows systems, if you specified a user name that does not already
exist, an information panel is displayed. Review the information and click
Next.
9. On Windows systems only, specify the installation directory under which the
product will be installed. The directory cannot contain spaces. The directory
must be located on an NTFS file system. Click Browse to select a different
destination directory, and click Next.
10. Select the type of installation:
v Typical. See “Typical installation sequence.”
v Full. See “Full installation sequence” on page 39.
v Custom. See “Custom installation sequence” on page 40.

Typical installation sequence


For a typical installation, perform the following steps:
1. Provide the following information. Refer to the Internationalization Notes in
the IBM Tivoli Workload Scheduler Release Notes for restrictions.
Table 13. CPU data
Field Value
Company Type the company name. This name appears
in program headers and reports. Spaces are
permitted, provided that the name is not
enclosed in double quotation marks.
This CPU Type the Tivoli Workload Scheduler name of
this workstation. This name cannot exceed
16 characters and cannot contain spaces.
Master CPU Type the name of the master domain
manager. This name cannot exceed 16
characters and cannot contain spaces.
TCP Port Number The TCP port number used by the instance
being installed. It must be a value in the
range 1–65535. The default is 31111. When
installing more than one instance on the
same workstation, use different port
numbers for each instance.


Click Next.
2. Review the installation settings and click Next. A progress bar indicates that the
installation has started.
3. When the installation completes, a panel displays a successful installation or
indicates the location of the log file if the installation was unsuccessful. Click
Finish.
To configure a fault-tolerant agent, see “Configuring a fault-tolerant or standard
agent” on page 74. For UNIX installations, see also “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.

Full installation sequence


The following steps help guide you through a full installation.
1. Provide the data described in Table 13 on page 38. Refer to the
Internationalization Notes in the IBM Tivoli Workload Scheduler Release Notes for
restrictions. Click Next.
2. Complete the panel according to the following.
Table 14. Tivoli Workload Scheduler connector information
Field Value
Connector Instance Name Type the name that identifies the instance in
the Job Scheduling Console window. The
name must be unique within the scheduler
network.
Tivoli Workload Scheduler Home Directory Displays the location of the Tivoli Workload
Scheduler installation which you specified in
a previous panel.

Click Next.
3. The connector requires the Tivoli Management Framework. If no version of
Tivoli Management Framework is detected, you can install it now by providing
the information in Table 15. If a version of Tivoli Management Framework that
is not supported is detected, exit the installation and upgrade the Tivoli
Management Framework version as described in the Tivoli Enterprise Installation
Guide.
Table 15. Tivoli Management Framework installation panel
Field Value
Remote Access Account Type the Tivoli remote access account
name that allows Tivoli programs to access
remote file systems.
Password Type the password for the remote access
account.
Installation Password Specify an installation password if you
want a password to be used for
subsequent managed node installations.

The remaining fields are optional and apply if you intend to deploy Tivoli
programs or managed nodes in your Tivoli Management Framework
environment. Click Next.


Note: On Windows, the Tivoli Desktop must be installed separately. For more
information, see the Tivoli Management Framework Planning and Installation
Guide.
4. Review the installation settings and click Next. A progress bar indicates that the
installation has started. To determine the next step to be performed, check the
following list for the situation that best describes your environment:
v An installation of Tivoli Management Framework was not required because a
supported version was detected on your workstation, proceed to step 5.
v The Tivoli Management Framework server version 4.1 will be installed
because it was not detected. Complete the following steps:
a. You are prompted with a Locate the Installation Image window for the
location of the Tivoli Management Framework images. If you did not
copy the images to the local machine or do not have them accessible on
an NFS mounted drive, unmount the installation CD and mount the
Tivoli Management Framework CD. Navigate to the directory that
contains the images. Click OK to continue the installation. A progress bar
indicates the Tivoli server is being installed.
b. Next, you are prompted for the Tivoli job scheduling services images
required to install the connector. These images are located on the
installation CD in the TWS_CONN directory. Navigate to the directory
and click OK. The installation program installs the connector.

Note: On Windows, if the Tivoli Management Framework has never been
installed on your workstation, you may be prompted to reboot
before being prompted for the job scheduling services images to
complete the installation. Click Now to reboot. After the reboot, the
installation program is relaunched and you are prompted for the
images required to install the connector. If you do not reboot
immediately, only the Tivoli Workload Scheduler engine is
installed, without the connector. The connector will be installed the
next time you reboot the workstation.
5. When the installation completes, a panel displays a successful installation or
indicates the location of the log file if the installation was unsuccessful. Click
Finish.
To configure a master domain manager, see “Configuring a master domain
manager” on page 73. For UNIX installations, see “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.

If you installed the connector, you must configure the security file as described in
“Updating the security file” on page 75.

Custom installation sequence


The following steps help guide you through a custom installation.

Note: If the installation includes the installation of the Tivoli Management
Framework (custom including connector or Tivoli Plus), ensure that you are
logged on as administrator.
1. Provide the data described in Table 13 on page 38. Refer to the
Internationalization Notes in the IBM Tivoli Workload Scheduler Release Notes for
restrictions. Click Next.
2. To determine the next step to be performed, follow the steps outlined for the
type of agent you selected to install:
v Standard Agent


a. Select additional languages to install, and click Next.
b. Review the installation settings and click Next. A progress bar indicates
that the installation has started.
c. When the installation completes, a panel displays a successful installation
or indicates the location of the log file if the installation was unsuccessful.
Click Finish.
d. To configure the standard agent you just installed, see “Configuring a
fault-tolerant or standard agent” on page 74. For UNIX installations, see
also “Configuration steps for UNIX Tier 1 and 2 installations” on page 76.
v Fault-tolerant Agent, Backup Master, Master Domain Manager. If you select
one or both optional features, proceed to step 3. If you did not select any of
the optional features, complete the following steps:
a. Select additional languages to install, and click Next.
b. Review the installation settings and click Next. A progress bar indicates
that the installation has started.
c. When the installation completes, a panel displays a successful installation
or indicates the location of the log file if the installation was unsuccessful.
Click Finish.
d. To configure the agent or master you just installed, see “Configuring a
fault-tolerant or standard agent” on page 74, or “Configuring a master
domain manager” on page 73. For UNIX installations, see also
“Configuration steps for UNIX Tier 1 and 2 installations” on page 76.
3. If you selected to install the connector, provide the information described in
Table 14 on page 39. Otherwise, proceed to the next step. If the Connector is
already installed, a new instance will be created. Click Next.
4. If you selected the Tivoli Plus Module optional feature, provide the following
information. Otherwise, proceed to the next step:
Table 16. Tivoli Plus Module information panel

User Name
       Displays the user for which Tivoli Workload Scheduler is being
       installed. You specified this user in a previous panel.
Tivoli Workload Scheduler Home Directory
       Displays the location of the Tivoli Workload Scheduler installation.
       You specified this path in a previous panel.
Job Scheduling Console Installation Directory
       Optionally, specify the location of the Job Scheduling Console
       installation. A task is created that enables you to launch the Job
       Scheduling Console from the Tivoli desktop.

Specify this information in the appropriate fields and click Next.


5. Select additional languages to install, and click Next. If you installed languages
in a previous installation, they are disabled from the language selection panel.
6. The Connector and Tivoli Plus Module features require the Tivoli Management
Framework. To determine the next step to be performed, check the following
list for the situation that best describes your environment:
v If a supported version of Tivoli Management Framework (Version 3.7.1 or 4.1)
exists on your workstation, proceed to step 7 on page 42.
v If a version not supported by Tivoli Workload Scheduler is detected, exit the
installation program, upgrade your Tivoli Management Framework installation,
and relaunch the installation program. Refer to the Tivoli Enterprise
Installation Guide for upgrade instructions. Or, go back and select a different
type of installation.
v If Tivoli Management Framework is not installed on your workstation, the
installation program can install the Tivoli Management Framework server,
Version 4.1. Specify the directory where you want to install it, or use the
Browse button to locate it. The remaining fields are optional and apply if you
intend to deploy Tivoli programs or managed nodes in your Tivoli Management
Framework environment. Optionally specify the following information:
Table 17. Tivoli Management Framework version installation panel

Remote Access Account
    Type the Tivoli remote access account name that allows Tivoli programs
    to access remote file systems.
Password
    Type the password for the remote access account.
Installation Password
    Specify an installation password if you want a password to be used for
    subsequent managed node installations.

Click Next.

Note: On Windows, the Tivoli Desktop must be installed separately. For more
information, see the Tivoli Management Framework Planning and Installation
Guide.
7. Review the installation settings and click Next. A progress bar indicates that the
installation has started. To determine the next step to be performed, check the
following list for the situation that best describes your environment:
v An installation of Tivoli Management Framework was not required because a
supported version was detected on your workstation. If you selected
additional languages, you may be prompted for the Tivoli Management
Framework language support images. Locate the images and click OK.
Proceed to step 8 on page 43.
v The Tivoli Management Framework server version 4.1 will be installed
because it was not detected. Depending on the optional features you selected,
you may or may not have to complete all of the following steps:
a. You are prompted with a Locate the Installation Image window for the
location of the Tivoli Management Framework images. If you did not
copy the images to the local machine or do not have them accessible on
an NFS mounted drive, unmount the installation CD and mount the
Tivoli Management Framework CD. Navigate to the directory that
contains the images. Click OK to continue the installation. A progress bar
indicates the Tivoli server is being installed.
b. Next, if you selected to install additional language packs, you are
prompted for the Tivoli Management Framework language pack images.
Navigate to the directory indicated and click OK.
c. Next, you are prompted for the Tivoli Job Scheduling Services images
required to install the connector. These images are located on the
installation CD in the TWS_CONN directory. Navigate to the directory
and click OK. The installation program installs the connector.

Note: On Windows, if the Tivoli Management Framework has never been installed on your workstation, you may be prompted to reboot before being prompted for the Tivoli Job Scheduling Services images to complete the installation. Click Now to reboot immediately. After the reboot, the installation program is relaunched and you are prompted for the images required to install the connector. Navigate to the directory that contains the images and click Next. If you do not reboot immediately, only the Tivoli Workload Scheduler engine is installed, without the connector. The connector will be installed the next time you reboot the workstation. If you installed the connector, you must update the security file. For instructions, see "Updating the security file" on page 75.
d. If you selected to install the Tivoli Plus Module, the installation images
are located on the installation CD in the folder TWSPLUS. Navigate to
the folder and click OK.
8. When the installation completes, a panel displays a successful installation, or
indicates the location of the log file if the installation was unsuccessful. Click
Finish.
To configure the agent or master you just installed, see “Configuring a
fault-tolerant or standard agent” on page 74 or “Configuring a master domain
manager” on page 73. For UNIX installations, see also “Configuration steps for
UNIX Tier 1 and 2 installations” on page 76.

If you installed the connector, you must configure the security file as described in
“Updating the security file” on page 75.

Add a new feature to an existing installation


Using the installation program, you can install the following optional components or features that were not installed during a previous Tivoli Workload Scheduler Version 8.2 installation:
Table 18. Optional installable features and components

Tivoli Plus Module
    Integrates Tivoli Workload Scheduler with Tivoli Management Framework,
    Tivoli Enterprise Console, and Distributed Monitoring. The Tivoli
    Management Framework Version 3.7.1 or 4.1 is a prerequisite for this
    component. If a version earlier than 3.7.1 is found, this feature cannot
    be installed. If an installation is not detected, Version 4.1 is
    automatically installed. See "Tivoli Management Framework implications"
    on page 33 for more information.
Tivoli Workload Scheduler connector
    The Job Scheduling Console communicates with the Tivoli Workload
    Scheduler system through the connector. It translates instructions
    entered through the Console into scheduler commands. The Tivoli
    Management Framework Version 3.7.1 or 4.1 is a prerequisite for this
    component. If a version earlier than 3.7.1 is found, this feature cannot
    be installed. If an installation is not detected, Version 4.1 is
    automatically installed. See "Tivoli Management Framework implications"
    on page 33 for more information.

Language Packs
    The English language pack and the language locale of the operating
    system are installed by default. The installation program enables users
    to select any of the supported languages.

Installation of the Tivoli Plus Module and the connector is described in "Custom installation sequence" on page 40.

To install additional language packs, perform the following steps:


1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
If you are installing on a Linux workstation, insert IBM Tivoli Workload
Scheduler Installation Disk 2.
2. Run the setup program for the operating system on which you are installing.
v On Windows platforms, the SETUP.exe file is located in the directory of the
platform on which you want to install Tivoli Workload Scheduler.
v On UNIX platforms, the SETUP.bin file is located in the root directory of
the installation CD.
3. The wizard is launched. Select the installation language, and click OK.
4. Read the welcome information. Click Next.
5. Read and accept the license agreement. Click Next.
6. From the drop-down list, select an existing Tivoli Workload Scheduler, Version
8.2 installation. Version 8.2 installations are identified by the type of agent and
the user name for which the agent was installed. Versions 7.0 and 8.1
installations are identified by the group name assigned.
7. The Add a feature to the selected instance option is selected by default. Click Next.
8. Review the user information and click Next.
9. Review the destination directory and click Next.
10. Review the workstation configuration information.
11. Select the optional features you want to install.
v If you selected one or more optional features, click Next.
v If you did not select optional features on this panel, click Next and follow
these steps:
a. Select additional languages to install, and click Next. If you installed
languages in a previous installation, they are disabled from the language
selection panel.
b. Review the installation settings and click Next. A progress bar indicates
that the installation has started.
c. When the installation completes, a panel displays a successful
installation or indicates the location of the log file if the installation was
unsuccessful. Click Finish.
12. Select additional languages to install, and click Next. If you installed
languages in a previous installation, they are disabled from the language
selection panel.
13. Review the installation settings and click Next. A progress bar indicates that
the installation has started.
14. When the installation completes, a panel displays a successful installation or
indicates the location of the log file if the installation was unsuccessful. Click
Finish.


Promote an existing installation


You can reconfigure a Tivoli Workload Scheduler Version 8.2 agent as a different
type of agent. You can perform the following operations:
v Promote a standard agent to a master domain manager/backup master
v Promote a standard agent to a fault-tolerant agent
v Promote a fault-tolerant agent to a master domain manager/backup master

Before you perform a promote, ensure that all Tivoli Workload Scheduler processes
and services are stopped. For information about stopping the processes and
services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.

To promote a Tivoli Workload Scheduler Version 8.2 agent to a different type of
agent, perform the following steps:
1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
If you are installing on a Linux workstation, insert IBM Tivoli Workload
Scheduler Installation Disk 2.
2. Run the setup program for the operating system on which you are installing.
v On Windows platforms, the SETUP.exe file is located in the directory of the
platform on which you want to install Tivoli Workload Scheduler.
v On UNIX platforms, the SETUP.bin file is located in the root directory of
the installation CD.
3. The wizard is launched. Select the installation wizard language. Click OK.
4. Read the welcome information and click Next.
5. Read and accept the license agreement. Click Next.
6. From the drop-down list, select an existing Tivoli Workload Scheduler, Version
8.2 installation. The installation is identified by the type of agent and by the
user name under which the agent was installed.
7. Select Promote the selected instance and click Next.
8. Review the user information and click Next.
9. Review the installation destination directory and click Next.
10. Select the type of Tivoli Workload Scheduler agent to which you want to
promote the selected instance and click Next.
11. Review the workstation configuration settings and click Next.
12. Review the installation settings and click Next. A progress bar indicates that
the installation has started.
13. When the installation completes, a panel displays a successful installation. In
case of an unsuccessful installation, consult the log file indicated. Click Finish.

Performing a silent installation


Use the response file templates provided on IBM Tivoli Workload Scheduler
Installation Disk 1 (in the \RESPONSE_FILE\ directory) to perform a silent
installation. The files include all the information required by the
installation program to run without user intervention. Instructions for
customizing the files are included in the files as commented text.

The following table lists the response files available and the type of installation
each performs:

Table 19. Response files

Type of installation                                   Response file to use
First-time installation                                freshInstall.txt
Add a feature to an existing installation              updateInstall.txt
Promote an agent of an existing installation           updateInstall.txt
Upgrade from Tivoli Workload Scheduler, Version 7.0    migrationInstall.txt
  or 8.1 to Version 8.2

Note: If your installation requires the Tivoli Management Framework (the connector and Tivoli Plus Module features require it), then you must copy all images to the local machine or have them accessible on an NFS mounted drive.

Installation procedure
For a silent installation, perform the following steps:
1. Copy the relevant response file to a local directory and edit it to meet the needs
of your environment (an example of steps 1 through 3 follows this procedure).
2. Save the file with your changes.
3. Enter the following command:
v On UNIX,
./SETUP.bin -options <local_dir>/response_file.txt

The SETUP.bin file is located in the root directory of IBM Tivoli Workload
Scheduler Installation Disk 1. For the Linux platform only, the SETUP.bin file
is located in the root directory of IBM Tivoli Workload Scheduler Installation
Disk 2.
v On Windows,
SETUP.exe -options <local_dir>\response_file.txt

The SETUP.exe file is located on IBM Tivoli Workload Scheduler Installation
Disk 1, in the Windows folder.
4. At the end of the installation, perform one of the following configuration tasks
depending on the type of agent you installed:
v “Configuring a master domain manager” on page 73
v “Configuring a fault-tolerant or standard agent” on page 74
v If you used the updateInstall.txt file to add a connector, you must update
the security file. For instructions, see “Updating the security file” on page 75.
v For UNIX installations, see also “Configuration steps for UNIX Tier 1 and 2
installations” on page 76.
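
As an illustration of steps 1 through 3 on UNIX, assuming the CD is mounted at /cdrom and using an example local directory (both paths are illustrative only):

   cp /cdrom/RESPONSE_FILE/freshInstall.txt /tmp/tws/freshInstall.txt
   vi /tmp/tws/freshInstall.txt
   ./SETUP.bin -options /tmp/tws/freshInstall.txt

Edit the copied file following the commented instructions it contains before running SETUP.bin.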



Chapter 4. Installing and promoting using twsinst
This section explains the command line method of installing using the twsinst
script on UNIX Tier 1 platforms. Refer to Tivoli Workload Scheduler Release Notes for
a list of supported platforms. For Tier 2 installations using the command line, see
Chapter 6, “Installing using customize,” on page 57.

Install and promote


On UNIX Tier 1 platforms, you use the twsinst script to install a:
v Master domain manager
v Backup master
v Fault-tolerant agent
v Standard agent

You can also use the twsinst script to upgrade from versions 7.0 and 8.1, uninstall
a version 8.2 instance, and promote an existing version 8.2 agent to a different
type of agent. For information about upgrading, see "Running twsinst" on page 65.

Refer to Tivoli Workload Scheduler Release Notes for a list of supported Tier 1
platforms.

To install or promote an instance of Tivoli Workload Scheduler, perform the
following steps:
1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
2. Create the Tivoli Workload Scheduler user (see the example after this
procedure). The software is installed by default in the user's home directory,
referred to as TWShome.
User: TWSuser
Home: TWShome (for example: /opt/TWS)
3. Log in as root, locate the directory of the platform on which you want to
run the script, and then run the twsinst script.
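
For example, on many UNIX systems you can create the user with the useradd command. This is a sketch only: the command and its flags vary by platform, and the user name and home directory shown are examples.

   useradd -d /opt/TWS -m twsuser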

Synopsis
Show command usage and version
twsinst -u | -v
Install a new instance
twsinst -new -uname <username>
[-cputype {master | bkm_agent | ft_agent | st_agent} ]
[-thiscpu <cpuname>]
[-master <master_cpuname>]
[-port <port_number>]
[-company <company_name>]
[-inst_dir <install_dir>]
[-lang <lang_id>]
Promote an instance
twsinst -promote -uname <username>
[-cputype {master | bkm_agent | ft_agent} ]
[-inst_dir <install_dir>]
[-lang <lang_id>]


Parameters
-u Displays command usage information and exits.
-v Displays the command version and exits.
-new | -promote
Specifies the type of installation to perform:
-new A fresh installation of Tivoli Workload Scheduler, Version 8.2.
Installs an agent or master and all supported language packs.
-promote
For existing installations of Tivoli Workload Scheduler, version 8.2,
you can perform the following operations:
v Promote a standard agent to a fault-tolerant agent, master
domain manager or backup master
v Promote a fault-tolerant agent to a master domain manager or
backup master
Before you perform a promote, ensure that all Tivoli Workload
Scheduler processes and services are stopped. For information
about stopping the processes and services, see “Unlinking and
stopping Tivoli Workload Scheduler” on page 30.
-uname <username>
The name of the user for which Tivoli Workload Scheduler is installed,
updated, promoted, or uninstalled. The software is installed or updated in
this user’s home directory. This user name is not to be confused with the
user performing the installation logged on as root. For a new installation,
this user account must be created manually before running the installation.
Create a user with a home directory. Tivoli Workload Scheduler will be
installed under the HOME directory of the specified user.
-cputype
Specifies the type of Tivoli Workload Scheduler agent to install. Valid
values are as follows:
v master
v bkm_agent (backup master)
v ft_agent (fault-tolerant agent, domain manager, backup domain
manager)
v st_agent (standard agent)
If not specified, the default value is ft_agent. When -cputype=master,
-master is set by default to the same value as -thiscpu.
-thiscpu <cpuname>
The name of the Tivoli Workload Scheduler workstation of this installation.
The name cannot exceed 16 characters. This name is registered in the
localopts file. If not specified, the default value is the hostname of the
workstation. Refer to the Internationalization Notes in the IBM Tivoli
Workload Scheduler Release Notes for restrictions.
-master <master_cpuname>
The workstation name of the master domain manager. This name cannot
exceed 16 characters and cannot contain spaces. This name is registered in
the globalopts file. If not specified, the default value is MASTER. Refer to
the Internationalization Notes in the IBM Tivoli Workload Scheduler Release
Notes for restrictions.


-port <port_number>
The TCP port number. This number is registered in the localopts file. If not
specified, it is set by default to 31111.
-company <company_name>
The name of the company. The company name cannot contain blank
characters. The name appears in program headers and reports. If not
specified, the default name is COMPANY.
Before you perform an upgrade or a promote of an existing CPU, ensure
that the company name does not contain blank characters. You can verify
the existence of blank characters and remove them from the company
name by modifying the related entry in the TWShome/mozart/globalopts
file.
-inst_dir <install_dir>
The directory of the Tivoli Workload Scheduler installation. This path
cannot contain blanks. If not specified, the path is set to the username home
directory.
-lang <lang_id>
The language in which the twsinst messages are displayed. If not
specified, the system LANG is used. If the related catalog is missing, the
default C language catalog is used.

Note: The -lang option is not to be confused with the Tivoli Workload
Scheduler supported language packs. By default, all supported
language packs are installed when you install using the twsinst
script.

Examples
The following sample twsinst command installs a new instance of a
fault-tolerant agent workstation:
./twsinst -new -uname twsuser -cputype ft_agent -thiscpu fta -master mdm
-port 31124 -company IBM
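
As a further illustration (the values shown are examples only), the following command installs a master domain manager under a non-default directory; because -cputype is master, -master defaults to the value of -thiscpu:

./twsinst -new -uname twsuser -cputype master -thiscpu mdm -inst_dir /opt/tws82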

The following sample twsinst command promotes a fault-tolerant agent to a
master domain manager workstation:
./twsinst -promote -uname twsuser -cputype master



Chapter 5. Installing using Software Distribution
This chapter describes how to install using software package blocks of software
distribution.

Software packages and parameters


Tivoli Workload Scheduler can be installed by distributing a software package block
(SPB), using the Software Distribution component of IBM Tivoli Configuration
Manager, Version 4.2 or 4.2.1. You can distribute the SPB locally or remotely,
using either the command line interface or the Tivoli desktop.

Note: Do not modify the SPB supplied.

An SPB exists for each supported Tier 1 platform. The software package blocks are
located on IBM Tivoli Workload Scheduler Installation Disk 1 and 2, under the
directory of the platform on which you want to install. The Software Distribution
command line is located in a folder named CLI under each platform folder. An
SPB also exists to install just the language packs. The language pack software
package block is found under the root directory of IBM Tivoli Workload Scheduler
Installation Disk 1. Table 20 lists the SPBs used to install Tivoli Workload Scheduler
components and features.
Table 20. SPBs to install Tivoli Workload Scheduler
.SPB file Description
Tivoli_TWS_WINDOWS.SPB The software package for Windows
operating systems.
Tivoli_TWS_AIX.SPB The software package for AIX operating
systems.
Tivoli_TWS_HP.SPB The software package for HP-UX operating
environments.
Tivoli_TWS_SOLARIS.SPB The software package for Solaris operating
environments.
Tivoli_TWS_LINUX_I386.SPB The software package for Linux for Intel.
Tivoli_TWS_LINUX_S390.SPB The software package for Linux for OS/390.
Tivoli_TWS_LP.SPB The software package that installs a
language pack.

A number of Tivoli Workload Scheduler parameters are used by the software
package block to perform the installation. These parameters are defined as default
variables in the software package. The following is the list of installation
parameters:


Table 21. SPB installation parameters


Variable Description
install_dir This required parameter is the fully qualified path to
the location of the Tivoli Workload Scheduler
installation. This path cannot contain blanks. On
Windows workstations, this path is created if it does
not already exist. On UNIX workstations, this path is
the same as the user's home directory. The default
values are:
v Windows:
$(system_drive)\win32app\TWS\$(tws_user)
v UNIX: /opt/TWS/$(tws_user)
tws_user This required parameter is the user name for which
Tivoli Workload Scheduler instance is being installed.
On Windows systems, if this user account does not
already exist, it is automatically created by the
installation. If you specify a domain user, specify the
name as domain_name\user_name. If you specify a local
user with the same name as a domain user, the local
user must first be created manually by an
Administrator and then specified as
system_name\user_name. On UNIX systems, this user
account must be created manually before running the
installation. Create a user with a home directory. IBM
Tivoli Workload Scheduler will be installed under the
HOME directory of the selected user. The default is
$(user_name).
domain The domain name of the user. This parameter is
optional, unless the user is a domain user, in which
case it is required. If you specify a domain user,
specify the name as domain_name\user_name. The
default is $(computer_name).
backup_dir This optional parameter applies when you are
performing an upgrade. It indicates the location to
which the current installation is copied before it is
upgraded. The default is $(install_dir)_backup_$(tws_user).
create_user (for Windows only) If the user does not exist, this is a required parameter.
Specify true if the $(tws_user) does not already exist.
Ensure the local user does not exist on the domain.
The default is false.
check_user This is a required parameter for existing users. If
create_user = true, then the check_user value is
ignored. The default is true.
pwd (for Windows only) This is a required parameter for Windows platforms
when performing a first time install. The password
associated with the tws_user user name.
st_agent, ft_agent, master, bkm_agent
Specify true only for the type of agent you want to
install. The default is false.
company This optional parameter is the company name. This
name appears in program headers and reports. The
default is COMPANY.

this_cpu This required parameter is the name of the
workstation on which you are performing the
installation. This name cannot exceed 16 characters
and cannot contain spaces. The default is THIS_CPU.
master_cpu This required parameter is the name of the master
domain manager. This name cannot exceed 16
characters and cannot contain spaces. This value is the
same as this_cpu if you are installing a master domain
manager. The default is MASTER.
tcp_port This required parameter is the TCP port number used
by the instance being installed. When installing more
than one instance on the same workstation, use
different port numbers for each instance. It must be an
unassigned 16-bit value in the range 1-65535. The
default is 31111.
fresh_install This required parameter indicates whether this is a
first time install. Specify true to perform a fresh
install. Specify false to perform a promote or upgrade.
The default is true.
upgrade This required parameter indicates whether the install
is an upgrade. Specify false to perform a fresh install
or a promote. Specify true to perform an upgrade. The
default is false.
promote This required parameter indicates whether the install
is a promote. Specify true to perform a promote and
false for a fresh install or upgrade.
backup This optional parameter indicates whether a backup of
the current installation is made. Specify false for a
fresh install. The default is false.
group This optional parameter indicates the group name
assigned during the installation of Tivoli Workload
Scheduler, Version 8.1. Specify this name when
performing an upgrade. The default is TWS group.

Installation procedure
To perform the installation, complete the following steps:
1. Set the Tivoli environment, as described in "Stopping the connector" on page 31 (a sample is shown after this procedure).
2. Import the software package block using the wimpspo command.
3. Install the software package block using the winstsp command.
4. Perform one of the following configuration tasks depending on the type of
agent you installed:
v “Configuring a master domain manager” on page 73
v “Configuring a fault-tolerant or standard agent” on page 74
v For UNIX installations, see also “Configuration steps for UNIX Tier 1 and 2
installations” on page 76
For complete instructions on performing these tasks, refer to wimpspo and
winstsp in the IBM Tivoli Configuration Manager, Reference Manual for Software
Distribution, and the IBM Tivoli Configuration Manager, User’s Guide for Software
Distribution.
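
For step 1, the Tivoli environment is typically set by sourcing the standard Tivoli Management Framework setup script. The locations below are the Framework defaults; verify them on your system:

. /etc/Tivoli/setup_env.sh (on UNIX)
%SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd (on Windows)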


The following is an example of the settings required to perform a fresh install of a
master domain manager on a Windows workstation, where the user is not defined
on the workstation.
winstsp -D install_dir="testB\TWS\juno" -D tws_user="juno"
[-D create_user="true" -D pwd="Password"]
{-D st_agent="false"|-D ft_agent="false"|-D master="true"|-D bkm_agent="false"}
-D this_cpu="saturn" -D master_cpu="saturn" -D tcp_port="3111"
{-D fresh_install="true" | -D upgrade="false" | -D promote="false"}
-D backup="false"
Tivoli_TWS_WINDOWS.spb [subscribers...]

In this example, some variables could be omitted. For example, if master = true,
the installation will ignore the values of the other types of agents. Therefore, the
variables st_agent, ft_agent, bkm_agent could be omitted from the command, or, even
if specified, their values are ignored because their default values are set to false.

Installing language packs


You can also install language packs using Software Distribution. Locate the
Tivoli_TWS_LP.SPB software package block in the root directory of IBM Tivoli
Workload Scheduler Installation Disk 1, and then customize the following
parameters before you install.
Table 22. List of parameters to install language packs

zh_CN (Chinese, Simplified), it (Italian), ko (Korean), es (Spanish),
zh_TW (Chinese, Traditional), ja (Japanese), pt_BR (Brazilian Portuguese),
de (German), fr (French), ALL_LANG (all of the above languages)
    Specify true for the languages to install. All other languages default
    to false. The default value is false.
tws_user
    The user name for which the specified language pack is being installed.
    This parameter is required. The default value is $(user_name).
install_dir
    The fully qualified path to which the specified language packs are
    installed. This parameter is required. The default value is
    $(program_files).


The following is the syntax required to install all languages:

winstsp -D install_dir="Installation Path" -D tws_user="UserName"
-D ALL_LANG=true Tivoli_TWS_LP.SPB [subscribers...]

The following is the syntax required to install the Italian and German language packs:
winstsp -D install_dir="Installation Path" -D tws_user="UserName"
-D it=true -D de=true Tivoli_TWS_LP.SPB [subscribers...]



Chapter 6. Installing using customize
This section explains the command line method of installing using the customize
script on Tier 2 platforms. Refer to the Tivoli Workload Scheduler Release Notes for a
list of supported Tier 2 platforms. For Tier 1 installations using the command line,
see Chapter 4, “Installing and promoting using twsinst,” on page 47.

The customize script


Use the customize script to install, upgrade, and uninstall Tivoli Workload
Scheduler for supported Tier 2 platforms.

Synopsis
customize -new -thiscpu wkstationname -master wkstationname [-company "companyname"] [-nolinks|-execpath pathname] [-uname username] [-port netman_port]

customize -update [-company "companyname"] [-uname username]

Description
The customize script installs or updates Tivoli Workload Scheduler. Use it to
perform the following functions:
v New Tivoli Workload Scheduler installation: Install Tivoli Workload Scheduler.
Create a components file with new entries.
v Tivoli Workload Scheduler updates: Upgrade Tivoli Workload Scheduler, if
necessary. Update entries in components file. Use it also to reset permissions to
their default values provided that the original MAESTRO.TAR file is not in the
TWShome directory.
v Details of the installation process are logged in a file named customize.log. You
can find this file in the same directory from which you run the customize script.

Arguments
-new This is a new installation.
-update
This is an update of an existing installation. Note that updating the
software will not change the type of databases in use by Tivoli Workload
Scheduler.
-thiscpu
The name of this workstation. The name can be up to sixteen
alphanumeric, dash (-), or underscore (_) characters starting with a letter.
This name must be used later to formally define the workstation in Tivoli
Workload Scheduler.
-master
The name of the master domain manager. The name can be up to sixteen
characters in length. This name must be used later to formally define the
workstation in Tivoli Workload Scheduler.
-company
The name of the company, enclosed in double quotation marks (up to 40
characters). The name appears in program headers and reports.

-nolinks | -execpath pathname
    The link option determines the path used by customize to create links to
    Tivoli Workload Scheduler's utility commands. If you include -nolinks, no
    links are created. If you include -execpath, links are created from the
    specified path. If the link option is omitted altogether, links are
    created as follows:

    /usr/bin/mat        twshome/bin/at
    /usr/bin/mbatch     twshome/bin/batch
    /usr/bin/datecalc   twshome/bin/datecalc
    /usr/bin/jobstdl    twshome/bin/jobstdl
    /usr/bin/maestro    twshome/bin/maestro
    /usr/bin/mdemon     twshome/bin/mdemon
    /usr/bin/morestdl   twshome/bin/morestdl
    /usr/bin/muser      twshome/bin/muser
    /usr/bin/parms      twshome/bin/parms

-uname
The name of the user for whom Tivoli Workload Scheduler will be
installed or updated. The name must not contain dot (.) characters. The
software is installed or updated in this user’s home directory. If omitted,
the default user name is maestro.
-port The TCP port number that Netman responds to on the local computer. It
must be an unsigned 16-bit value in the range 1-65535 (the values between
0 and 1023 are reserved for well-known services, such as FTP, TELNET, and
HTTP). The default is 31111. You can modify this value at any time in the
local options file.

Installing the Tivoli Workload Scheduler engine


Follow these instructions on Tier 2 computers if you are installing Tivoli Workload
Scheduler for the first time, or if Tivoli Workload Scheduler has been completely
uninstalled. For Tier 1 platforms, refer to Chapter 3, “Installing using the
installation wizard,” on page 37.

Perform the following steps to install Tivoli Workload Scheduler on a Tier 2
platform:
1. Create the Tivoli Workload Scheduler user. The software is installed in the
user’s home directory, referred to as TWShome.
User: TWSuser
Home: TWShome (for example: /opt/maestro)
2. Mount IBM Tivoli Workload Scheduler Installation Disk 2.
a. Log in as root, and change your directory to TWShome.
b. Extract the software:
tar -xvf cd2/platform/MAESTRO.TAR

Note: For the IBM-Sequent Numa platform (DYNIX®), you must also add
the -o option.
where:
cd2    The pathname of your CD drive.
platform
       Your platform type. One of the following:
       DYNIX for IBM-Sequent Numa
       IRIX for SGI Irix
       LINUX_PPC for SuSE Linux Enterprise Server for iSeries and pSeries
       OSF for Compaq Tru64
3. Run the customize script. The script is run from the directory where you want
the product installed.
The following sample customize command installs a fault-tolerant workstation:
/bin/sh customize -new -thiscpu dm1 -master mdm -uname twsuser [options]
For more information on the customize arguments and more examples, refer to
"The customize script" on page 57.
4. The Tivoli Workload Scheduler installation process is now complete. To
configure your workstation in the network, see “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76 and “Configuration steps for Tier 2
installations” on page 77.

After you have completed the installation, run the StartUp command to start
Netman:
StartUp
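
To verify that Netman started, you can, for example, look for its process with a standard UNIX command (this check is illustrative, not a product command):

ps -ef | grep netman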



Chapter 7. Upgrading to Tivoli Workload Scheduler
You can upgrade a master domain manager, a domain manager, a fault-tolerant
agent, and a standard agent from the Tivoli Workload Scheduler Version 7.0 and
8.1 to the current 8.2 version. Tivoli Workload Scheduler, Version 8.2 supports
backward compatibility and you can, therefore, upgrade your network gradually,
at different times, and in no particular order. You can upgrade top-down (that is,
upgrade the master first, then the domain managers and then the fault-tolerant
agents), as well as upgrade bottom-up (that is, start with the fault-tolerant agents
and then upgrade in sequence leaving the master last. However, if you upgrade
the master first, some new version 8.2 functions, (firewall support, centralized
security) will not work until the whole network is upgraded.

During the upgrade procedure, the installation backs up all the master data and
configuration information, installs the new product code, and automatically
migrates old scheduling data and configuration information. However, it does not
migrate user files or directories placed in the TWShome directory. See “Backup
files” on page 32 for more details.

Upgrade scenarios
The following table describes upgrade scenarios that can exist in your network,
and the steps required to upgrade to Tivoli Workload Scheduler, Version 8.2 using
the installation program.
Table 23. Upgrading to Tivoli Workload Scheduler, Version 8.2

What is currently installed:
    Tivoli Workload Scheduler, Version 7.0 or 8.1 (no Tivoli Workload
    Scheduler connector or Tivoli Plus Module installed)
Follow these steps:
    Run the installation program and perform an upgrade.
Refer to:
    "Using the installation wizard" on page 62

What is currently installed:
    Tivoli Workload Scheduler, Version 7.0 or 8.1
    Tivoli Management Framework, Version 3.6.x
    Tivoli Workload Scheduler connector, Version 7.0 or 8.1
Follow these steps:
    1. Upgrade to Tivoli Management Framework, Version 4.1.
    2. Run the installation program and perform an upgrade.
Refer to:
    Tivoli Enterprise Installation Guide
    "Using the installation wizard" on page 62

What is currently installed:
    Tivoli Workload Scheduler, Version 7.0 or 8.1
    Tivoli Management Framework, Version 3.7.1 or 4.1
    Tivoli Workload Scheduler connector, Version 7.0 or 8.1
Follow these steps:
    1. Run the installation program and perform an upgrade. The wizard
       automatically upgrades the connector, provided that the connector is
       configured for the agent selected to be upgraded.
    2. Check Tivoli Management Framework prerequisites.
Refer to:
    "Using the installation wizard" on page 62
    "When a supported Tivoli Management Framework version is already
    installed" on page 33


Upgrading Tivoli Workload Scheduler


The following sections outline the procedures required to upgrade your Tivoli
Workload Scheduler Version 7.0 or 8.1 installation to Tivoli Workload Scheduler,
Version 8.2. They include:
v “Using the installation wizard”
v “Running twsinst” on page 65
v “Using the migrationInstall response file” on page 67
v “Using Software Distribution” on page 67
v “Using customize” on page 68

Using the installation wizard


You can upgrade a master domain manager, a domain manager, a fault-tolerant
agent, and a standard agent from the Tivoli Workload Scheduler Version 7.0 and
8.1 to the current 8.2 version.

During the upgrade procedure, the installation program backs up all the master
data and configuration information, installs the new product code, and
automatically migrates old scheduling data and configuration information.

Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. The installation program stops the Tivoli
Workload Scheduler processes if found to be running, but if you have jobs that are
currently running, the related processes must be stopped manually. For
information about stopping the processes and services, see “Unlinking and
stopping Tivoli Workload Scheduler” on page 30.

If you are upgrading an installation that includes the connector, ensure you stop
the connector before starting the upgrade process. See “Stopping the connector” on
page 31.

To perform an upgrade, complete the following steps:


1. Insert IBM Tivoli Workload Scheduler Installation Disk 1. For Linux platforms,
insert IBM Tivoli Workload Scheduler Installation Disk 2.
2. Run the setup program for the operating system on which you are upgrading.
v On Windows platforms, the SETUP.exe file is located in the directory of the
platform on which you want to perform the upgrade.
v On UNIX platforms, the SETUP.bin file is located in the root directory of
the installation CD. To launch the upgrade, run the following command:
./SETUP.bin [-is:tempdir <temporary_directory>]

where,
-is:tempdir <temporary_directory>
Specifies a temporary working directory to which installation files
and directories are copied. The default value for
<temporary_directory> is the temporary directory set on the local
machine. If you use the default value, you must manually delete the
following files and directories copied to this directory after the
installation process completes:
– SETUP.jar
– TWS_size.txt
– media.inf


– Tivoli_TWS_LP.SPB
– RESPONSE_FILE directory
– TWS_CONN directory
– TWSPLUS directory
– the directory named after the operating system
If you specify a different name for the <temporary_directory>, delete
the folder after the installation process completes.
3. The installation wizard is launched. Select the installation wizard language.
Click OK.
4. Read the welcome information and click Next.
5. Read and accept the license agreement. Click Next.
6. From the drop-down list, select the existing installation of a previous
release of the product that you want to upgrade. The instance can be
identified by its group name.
7. The Upgrade the selected instance option is selected by default. Click Next. On
Windows, check the user name and type the password associated with the user.
8. Review the Tivoli Workload Scheduler user for which the upgrade will be
performed. Click Next.
9. Review the location of the installation on the workstation and click Next.
10. Select the type of agent for which you want to perform the upgrade and click
Next. Be sure that the type selected corresponds to the instance you selected
to be upgraded.
11. Review the CPU data information and click Next.
12. Review the installation settings and click Next. The upgrade has started.
13. When the installation completes, a panel displays a successful installation. In
case of an unsuccessful installation, check the log file indicated.
14. Click Finish.

Using twsinst
You can upgrade using the twsinst script. If you intend to back up your
previous installation manually, read "Backing up before running the script"
before starting the upgrade.

Backing up before running the script


The two new backup options, -backup_dir and -nobackup, were distributed in a
GA fix included in the fixpack 3 package. The fix is located in the GA_fixes
directory of CD_1, is named APAR IY48550, and it is to be applied directly to the
General Availability (GA) version of IBM Tivoli Workload Scheduler 8.2. Because of
a documentation error, the fix is not listed within the CD contents in the readme of
fixpack 3. The following notes apply to the new options:
v The purpose of -nobackup is to let you choose between running the backup of
your previous IBM Tivoli Workload Scheduler installation manually or having
the twsinst process do it automatically for you. The purpose of -backup_dir is
to let you specify a different backup directory than the default. The two options
can also be used together. Depending on their use, there are four possible
actions:


Table 24. Using the twsinst backup options

When you specify...          The results are...
-nobackup                    You must run the backup manually in the default
                             backup directory (see "Install and promote" on
                             page 47).
-nobackup and -backup_dir    You must run the backup manually in the backup
                             directory you specify.
Nothing                      twsinst automatically runs the backup in the
                             default backup directory (see "Install and
                             promote" on page 47).
-backup_dir                  twsinst automatically runs the backup in the
                             backup directory you specify.

v The main reason for using the -nobackup option is that you prefer to back up
your previous installation yourself. This backup is important because twsinst
uses it to retrieve and transfer some of your customization settings from the
old installation to the new one. Therefore, when you use this option, to
guarantee that the migration process completes the customization step correctly,
do the following before running twsinst (a consolidated sketch of these steps
follows this list):
1. Stop all IBM Tivoli Workload Scheduler processes.
2. Create the backup directory needed by the customization step during the
migration process. Whether you use the -backup_dir option or the default
(see "Install and promote" on page 47), create this directory manually.
3. Run the following command:
chmod -R 755 $BACKUP_DIR
4. Do the following to save the minimum IBM Tivoli Workload Scheduler set of
directories/files needed by the customization step in the $BACKUP_SUBDIR
directory:
a. Move the following directories/files:
– $INST_DIR/bin
– $INST_DIR/Security
b. Copy the following directories/files (use the command cp -p ... to
preserve the correct rights):
– $INST_DIR/catalog
– $INST_DIR/methods
– $INST_DIR/mozart/globalopts
– $INST_DIR/Tbsm
– $INST_DIR/Symphony
– $INST_DIR/parameters
– $INST_DIR/parameters.KEY
– $INST_DIR/Jobtable
– $INST_DIR/Jobmanrc
– $INST_DIR/localopts
– $INST_DIR/BmEvents.conf, if TBSM is customized
– $INST_DIR/MAgent.conf, if TBSM is customized
– $INST_DIR/CLEvents.conf, if TBSM is customized
– /usr/unison/components


Note: These are the files that the automatic backup makes a copy of. You
can back up any other files that you need to maintain.
5. Move $INST_DIR/../unison to the $BACKUP_DIR directory to create its backup
copy.
6. Run twsinst -update ... -nobackup to start the migration process.
7. Check that the migration process completed successfully and delete
$BACKUP_DIR.
v If you do not specify -nobackup, the backup copy is created automatically (in
the backup directory you specify with -backup_dir, or otherwise in the default
backup directory). The process automatically:
1. Creates the TWS82bck_$TWSuser_PID.tar file under the backup directory,
reading from the filelist_bck82 list. If this step fails, a message is displayed
and the process stops; if it completes successfully, the .tar file is compressed.
2. Copies all the files/directories needed for the customization step, and the
current /usr/unison/components file, to the backup directory.
If the migration process fails, the rollback procedure restores the files from
the saved .tar file.
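
The following is a consolidated sketch of the manual backup steps above. It assumes that $INST_DIR, $BACKUP_DIR, and $BACKUP_SUBDIR are set as described for the -backup_dir option on page 66, and the file list is abbreviated; adapt it to the full list in step 4:

# Stop all processes first, then prepare the backup directories
mkdir -p $BACKUP_SUBDIR
chmod -R 755 $BACKUP_DIR
# Move, then copy (preserving permissions), the files the customization step needs
mv $INST_DIR/bin $INST_DIR/Security $BACKUP_SUBDIR
cp -p -R $INST_DIR/catalog $INST_DIR/methods $BACKUP_SUBDIR
cp -p $INST_DIR/localopts $INST_DIR/Symphony /usr/unison/components $BACKUP_SUBDIR
mv $INST_DIR/../unison $BACKUP_DIR
# Start the migration without the automatic backup
./twsinst -update -uname twsuser -cputype ft_agent -nobackup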

Running twsinst
Use this procedure to upgrade an existing IBM Tivoli Workload Scheduler version
7.0 or 8.1 installation to version 8.2 on Tier 1 platforms. It also installs all
supported language packs. This procedure uses the command line method of
upgrading using the twsinst script. Refer to Tivoli Workload Scheduler Release Notes
for a list of supported Tier 1 platforms. For Tier 2 installations using the command
line, see Chapter 6, “Installing using customize,” on page 57.

Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. The installation wizard stops the Tivoli
Workload Scheduler processes if found to be running, but if you have jobs that are
currently running, the related processes must be stopped manually. For
information about stopping the processes and services, see “Unlinking and
stopping Tivoli Workload Scheduler” on page 30.

Perform the following steps to upgrade a version 7.0 or 8.1 installation to Tivoli
Workload Scheduler, Version 8.2 using the twsinst script.
1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
2. Log in as root, and change your directory to TWShome.
3. Locate the directory of the platform on which you want to run the script and
run the twsinst script as follows:
twsinst -update -uname <username>
-cputype {master | bkm_agent | ft_agent | st_agent}
[-inst_dir <install_dir>]
[-backup_dir<backup_dir>]
[-nobackup]
[-lang <lang_id>]

Note: The following options are available only if you applied the fix for APAR
IY48550 included in the fixpack 3 for Version 8.2 package:
v -backup_dir<backup_dir>
v -nobackup
-update
Upgrades an existing installation, and also installs all supported language
packs. Only installations of versions 7.0 and 8.1 of the product are supported.


Updating the software does not change the type of databases in use by
Tivoli Workload Scheduler. See Chapter 7, “Upgrading to Tivoli Workload
Scheduler,” on page 61 for more information about upgrading and see
“Running twsinst” on page 65.
-cputype
Specifies the type of Tivoli Workload Scheduler agent to install. Valid
values are as follows:
v master
v bkm_agent (backup master)
v ft_agent (fault-tolerant agent, domain manager, backup domain manager)
v st_agent (standard agent)
If not specified, the default value is ft_agent. When -cputype=master,
-master is set by default to the same value as -thiscpu.
-master <master_cpuname>
The workstation name of the master domain manager. This name cannot
exceed 16 characters and cannot contain spaces. This name is registered in
the globalopts file. If not specified, the default value is MASTER. Refer to
the Internationalization Notes in the IBM Tivoli Workload Scheduler Release
Notes for restrictions.
-inst_dir <install_dir>
The directory of the Tivoli Workload Scheduler installation. This path
cannot contain blanks. If not specified, the path is set to the username home
directory.
-backup_dir<backup_dir>
Can be used to specify the name of an alternative directory (which must be
created manually) as the destination for the backup copy of a previous
version. This option can be used in combination with -nobackup.
If you do not specify this option when running an upgrade, the following
default value is used:
$BACKUP_DIR = $INST_DIR_backup_$TWS_USER

where:
v $INST_DIR is the IBM Tivoli Workload Scheduler installation path (the
user home directory on UNIX).
v $TWS_USER is the IBM Tivoli Workload Scheduler user name.
For example:
$INST_DIR=/opt/TWS/TWS81
$TWS_USER=maest81
$BACKUP_DIR=/opt/TWS/TWS81_backup_maest81
$BACKUP_SUBDIR=/opt/TWS/TWS81_backup_maest81/TWS81

In the backup directory you must also create a subdirectory named after
the last directory of the installation path.
-nobackup
Disables the automatic backup of the previous IBM Tivoli Workload
Scheduler version when running an upgrade. This option can be used in
combination with -backup_dir.


-lang <lang_id>
The language in which the twsinst messages are displayed. If not specified,
the system LANG is used. If the related catalog is missing, the default C
language catalog is used.

Note: The -lang option is not to be confused with the Tivoli Workload
Scheduler supported language packs. By default, all supported
language packs are installed when you install using the twsinst
script.
For example, the following sample twsinst command upgrades a Tivoli Workload Scheduler,
Version 7.0 fault-tolerant agent to a Version 8.2 fault-tolerant agent workstation:
./twsinst -update -uname twsuser -cputype ft_agent

Using the migrationInstall response file


The response file provided enables you to run the installation program in
silent mode, without using the wizard in graphical mode. You can upgrade
Tivoli Workload Scheduler using the migrationInstall.txt response file provided on
IBM Tivoli Workload Scheduler Installation Disk 1 in the following path:
\RESPONSE_FILE\migrationInstall.txt.

Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. For information about stopping the processes
and services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.

If you are upgrading an installation that includes the Connector, ensure you stop
the connector before starting the upgrade process. See “Stopping the connector” on
page 31.

Copy the response file to a local directory and edit it to meet the needs of your
particular upgrade environment. Instructions for customizing the files are included
directly in the files as commented text. To start the upgrade in silent mode, type
the following command:
v On UNIX,
./SETUP.bin -options <local_dir>/migrationInstall.txt
v On Windows,
SETUP.exe -options <local_dir>\migrationInstall.txt

Using Software Distribution


A number of Tivoli Workload Scheduler parameters are used by the software
package block to perform the upgrade. You must assign the correct values to
each variable to reflect the installation that is being upgraded; otherwise,
the default values are assigned. These parameters are defined as default
variables in the software package. For a list of the parameters, see Table 21
on page 52.

Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. For information about stopping the processes
and services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.

To perform the upgrade, complete the following steps:


1. Set the Tivoli environment. See “Stopping the connector” on page 31.
2. Import the software package block using the wimpspo command.
3. Install the software package block using the winstsp command.


For complete instructions on performing these tasks, refer to the IBM Tivoli
Configuration Manager, Reference Manual for Software Distribution, and the IBM Tivoli
Configuration Manager, User’s Guide for Software Distribution.

The following is an example of the settings required to upgrade a Tivoli
Workload Scheduler, Version 8.1 master domain manager to Tivoli Workload
Scheduler, Version 8.2 on a Windows workstation.
winstsp -D install_dir="d:\win32app\TWS\juno" -D tws_user="juno"
{-D st_agent="false"|-D ft_agent="false"|-D master="false"|-D bkm_agent="false"}
-D this_cpu="saturn" -D master_cpu="saturn" -D tcp_port="3111"
{-D fresh_install="false" | -D upgrade="true" | -D promote="false"}
-D backup="true" -D group="TWS81group"
Tivoli_TWS_WINDOWS.spb [subscribers...]

Using customize
Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. For information about stopping the processes
and services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.

Note: Be sure to read the Tivoli Workload Scheduler Release Notes for additional
information about updating existing software.

The customize script moves the methods directory to a directory named
twshome/config.old. These files are often customized to meet your specific needs,
and you can use the saved copies to incorporate your changes following the
update. The customize script does not overwrite any files in the stdlist directory
that were modified after Tivoli Workload Scheduler was installed.

If there are any other files you want to protect during the update, copy or rename
them now. As an added precaution, you should also back up the following:
v The TWShome directory.
v The components file (generally, /usr/unison/components).

To upgrade using the customize script, perform the following steps:


1. Mount IBM Tivoli Workload Scheduler Installation Disk 2.
a. Log in as root, and change your directory to twshome.
b. Extract the software:
tar -xvf cd2/platform/MAESTRO.TAR
where:
cd2    The pathname of your CD drive.
platform
       Your platform type. One of the following:
       DYNIX for IBM-Sequent Numa
       IRIX for SGI Irix
       LINUX_PPC for SuSE Linux Enterprise Server for iSeries and pSeries
       OSF for Compaq Tru64


2. Run the customize script. The following sample customize command updates a
fault-tolerant agent workstation:
/bin/sh customize -update -uname name [options]
For more information on the customize arguments and more examples, refer to
"The customize script" on page 57.



Part 4. Configuring
Chapter 8. After you install . . . . . . . . . 73
Netman . . . . . . . . . . . . . . . . 73
Configuring a master domain manager . . . . . 73
Configuring a fault-tolerant switch manager . . . 74
Configuring a fault-tolerant or standard agent . . . 74
Updating the security file . . . . . . . . . . 75
Configuration steps for UNIX Tier 1 and 2
installations . . . . . . . . . . . . . . 76
Configuration steps for Tier 2 installations . . . . 77
Configuring a fault-tolerant agent after
installation . . . . . . . . . . . . . . 77
Enabling the time zone feature . . . . . . . . 78
Enabling the time zone in an end-to-end network 79

Chapter 9. Optional customization . . . . . . 81
Global options . . . . . . . . . . . . . 81
Setting the global options . . . . . . . . . 81
Global options file example . . . . . . . . 84
Carry forward options . . . . . . . . . . 85
Local options . . . . . . . . . . . . . . 86
Setting local options . . . . . . . . . . 86
Local options file example . . . . . . . . 93
Setting up decentralized administration . . . . . 94
Sharing the master directories . . . . . . . 94
Sharing Tivoli Workload Scheduler parameters 94
Using a single share . . . . . . . . . . 95
Setting local options . . . . . . . . . . 95
Setting local options on the master . . . . . 96
Tivoli Workload Scheduler console messages and
prompts . . . . . . . . . . . . . . . 96
Setting sysloglocal on UNIX . . . . . . . . 96
console command . . . . . . . . . . . 97
Automating the production cycle . . . . . . . 97
Customizing the final job stream . . . . . . 98
Starting a production cycle . . . . . . . . 98
Managing the production environment . . . . . 98
Choosing the start of day . . . . . . . . . 98
Changing the start of day . . . . . . . . 99
Creating a plan for future or past dates . . . . 99
Using the configuration scripts . . . . . . . 100
Jobman environment variables . . . . . . . 100
Standard configuration script - jobmanrc . . . 101
Local configuration script - .jobmanrc . . . . 103
Tivoli Workload Scheduler and Tivoli Management
Framework . . . . . . . . . . . . . . 104
The Tivoli Management Framework for
non-Tivoli users . . . . . . . . . . . 104
Adding Tivoli administrators . . . . . . . 105
Backup master considerations . . . . . . . 107
Masters that do not support Tivoli Management
Framework . . . . . . . . . . . . . 108
Moving the backup master . . . . . . . 108
Creating a backup master . . . . . . . 108
Mounting master domain manager databases 109

Chapter 10. Integration with other IBM Tivoli
products . . . . . . . . . . . . . . 111
Integration with IBM Tivoli Enterprise Data
Warehouse . . . . . . . . . . . . . . 111
Integration with IBM Tivoli NetView . . . . . 111
General . . . . . . . . . . . . . . 111
How Tivoli Workload Scheduler/NetView
works . . . . . . . . . . . . . . 112
Types of information . . . . . . . . . 112
Definitions . . . . . . . . . . . . 112
General Requirements . . . . . . . . 113
Configuration . . . . . . . . . . . 113
Installing the integration software . . . . . 113
Running the customize script . . . . . . 114
Installing . . . . . . . . . . . . . 115
Setting up . . . . . . . . . . . . . 116
Objects, symbols, and submaps . . . . . . 117
Status of Tivoli Workload
Scheduler/NetView symbols . . . . . . 118
Extended agent mapping . . . . . . . 119
Menu actions . . . . . . . . . . . . 119
Changing the commands . . . . . . . 120
Tivoli Workload Scheduler/NetView events . . 121
Polling and SNMP traps . . . . . . . . 122
Tivoli Workload Scheduler/NetView
configuration files . . . . . . . . . . . 123
The BmEvents configuration file . . . . . 123
The MAgent configuration file . . . . . . 124
Monitoring writers and servers . . . . . 125
Tivoli Workload Scheduler/NetView
configuration options . . . . . . . . . . 126
Agent scan rate . . . . . . . . . . . 126
Manager polling rate . . . . . . . . . 126
Configuring agents in NetView . . . . . 126
Configuring workstation status in NetView 127
Unison software MIB . . . . . . . . . . 127
Re-configuring enterprise-specific traps . . . 127
Tivoli Workload Scheduler/NetView program
reference . . . . . . . . . . . . . . 130
mdemon synopsis . . . . . . . . . . 130
magent synopsis . . . . . . . . . . 131
Integration with IBM Tivoli Business Systems
Manager . . . . . . . . . . . . . . . 132
General . . . . . . . . . . . . . . 132
Using the key flag mechanism . . . . . . . 133
Setting the key flag . . . . . . . . . 133
Installing and configuring the common listener
agent . . . . . . . . . . . . . . . 134
Customizing the configuration files . . . . . 135
Customizing BmEvents.conf . . . . . . 135
Customizing ClEvents.conf . . . . . . . 136
Starting and stopping the common listener
agent . . . . . . . . . . . . . . . 136
Tivoli Workload Scheduler/IBM Tivoli Business
Systems Manager events . . . . . . . . 136

Chapter 11. Setting security . . . . . . . . 139
Setting strong authentication and encryption . . . 139
Key SSL concepts . . . . . . . . . . . 140
Planning for SSL support in Tivoli Workload
Scheduler . . . . . . . . . . . . . 141
Configuring SSL support in Tivoli Workload
Scheduler . . . . . . . . . . . . . 143
Setting up private keys and certificates . . . 143
Creating your own certification authority . . 144
Creating private keys and certificates . . . 145
Configuring SSL attributes . . . . . . . 146
Setting SSL local options . . . . . . . 147
Working across firewalls . . . . . . . . . 149

Chapter 12. Uninstalling Tivoli Workload
Scheduler . . . . . . . . . . . . . . 151
Using the uninstall wizard . . . . . . . . . 151
Using the twsinst script . . . . . . . . . . 151
Using the Software Distribution CLI . . . . . . 152
Using the customize script . . . . . . . . . 152


Chapter 8. After you install
This chapter describes configuration tasks that might be required at the end of the
installation procedure you followed. Each installation procedure points you to
these tasks where they are needed.

Netman
The Netman process is automatically started at the end of installation. This
allows you to verify that the installation process succeeded.
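For example, on UNIX you can confirm that Netman is running with a basic
process check (a minimal sketch; the reported process name can vary by platform):
ps -ef | grep netman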

Configuring a master domain manager


After you have installed a master domain manager, regardless of the method of
installation you used, you must add the final job stream to the database and run
Jnextday. This job stream is placed in production every day, and results in running
a job named Jnextday prior to the start of a new day. The installation creates an
Sfinal file in the TWShome directory on your workstation containing the final job
stream definition. You can use this Sfinal file or create and customize a new one.
See “Automating the production cycle” on page 97 for details about customizing
the final job stream.

The following is an example of how to configure a master domain manager after
the installation:
1. Before you can run commands such as conman or composer commands, you
must set the PATH and TWS_TISDIR variables. The TWS_TISDIR variable
enables Tivoli Workload Scheduler to display messages in the correct language
and codeset.
v On Windows systems, edit the PATH system variable to include TWShome
and TWShome\bin. For example, if Tivoli Workload Scheduler has been
installed in the c:\win32app\TWS\jdoe directory, the PATH variable should
include the following:
PATH=c:\win32app\TWS\jdoe;c:\win32app\TWS\jdoe\bin

Create the TWS_TISDIR environment variable and assign TWShome as the
value. In this way, the necessary environment variables and search paths are
set to allow you to run commands even if you are not located in the
TWShome path. Alternatively, you can run the tws_env.cmd shell script to set
up both the PATH and TWS_TISDIR variables.

Note: If you have more than one version of IBM Tivoli Workload Scheduler
installed on your computer, make sure TWS_TISDIR points to the
latest one. This ensures that the most recent character set conversion
tables are used.
v For UNIX systems see step 1 on page 76 in “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.
2. Log in as TWSuser.
3. Run the composer command.
4. Add the final job stream definition to the database by running the following
command:
composer add Sfinal


If you did not use the Sfinal file provided with the installation but created a
new one, use its name in place of Sfinal.
5. Exit the composer command line.
6. Run the Jnextday job:
Jnextday

You can automate this step following the first time after installation. See
“Automating the production cycle” on page 97 for details.
7. When the Jnextday job completes, check the status of Tivoli Workload
Scheduler:
conman status

If Tivoli Workload Scheduler started correctly, the status is Batchman=LIVES.


8. Raise the limit to allow jobs to run. The default job limit after installation is set
to zero. This means no jobs will run, so you may want to raise the job limit
now:
conman "limit;10"
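As a quick reference, the following is a minimal sketch of steps 2 through 8 on a
UNIX master domain manager, assuming Tivoli Workload Scheduler is installed in
/opt/maestro and you use the supplied Sfinal file:
. /opt/maestro/tws_env.sh
composer add Sfinal
Jnextday
conman status
conman "limit;10"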

Configuring a fault-tolerant switch manager


To enable the feature, follow these steps:
1. Open the <TWShome>/mozart/globalopts file on the master domain manager.
2. Add the line enable switch fault tolerance = yes at the bottom.
3. Save and close.
4. The feature becomes active the next time the Jnextday job runs.

In the end-to-end functionality, a new parameter has been introduced in the
TOPOLOGY statement:
ENABLESWITCHFT (Y/N)

The ftbox is a cyclical message queue where each full-status agent stores the
messages it would send if it acted as a domain manager. When the queue fills up,
the new messages overwrite the old messages, from the beginning.

You can change the ftbox size as for any other message queue by using the
evtsize command.
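For example, a hypothetical resize of the ftbox queue (the file name and size
shown are assumptions; refer to the evtsize command in IBM Tivoli Workload
Scheduler: Reference for the exact syntax):
evtsize ftbox/ftbox.msg 20000000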

Configuring a fault-tolerant or standard agent


After you have installed a fault-tolerant agent or standard agent, regardless of the
method of installation you used, you must define the workstation on the master
and link the workstation from the master. You can perform this task using the Job
Scheduling Console or the command line interface. Refer to the Tivoli Workload
Scheduler Job Scheduling Console User’s Guide for information. The following is an
example of configuring a fault-tolerant agent after installation from the command
line:
1. Before you can run commands such as conman or composer commands, you
must set the PATH and TWS_TISDIR variables. The TWS_TISDIR variable
enables Tivoli Workload Scheduler to display messages in the correct language
and codeset.


v On Windows systems, edit the PATH system variable to include TWShome
and TWShome\bin. For example, if Tivoli Workload Scheduler has been
installed in the c:\win32app\TWS\jdoe directory, the PATH variable should
include the following:
PATH=c:\win32app\TWS\jdoe;c:\win32app\TWS\jdoe\bin

Create the TWS_TISDIR environment variable and assign TWShome as the
value. In this way, the necessary environment variables and search paths are
set to allow you to run commands even if you are not located in the
TWShome path. Alternatively, you can run the tws_env.cmd shell script to set
up both the PATH and TWS_TISDIR variables.

Note: If you have more than one version of IBM Tivoli Workload Scheduler
installed on your computer, make sure TWS_TISDIR points to the
latest one. This ensures that the most recent character set conversion
tables are used.
v For UNIX systems, see step 1 on page 76 in “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.
2. Log in to the master domain manager as TWSuser.
3. Create the fault-tolerant agent workstation definition in the Tivoli Workload
Scheduler database by using the composer command line. Open a command
line window and enter the following commands:
composer
new
4. This opens a text editor where you can create the fault-tolerant agent
workstation definition in the Tivoli Workload Scheduler database. Below is an
example workstation definition for a fault-tolerant agent. For more information
on workstation definitions, refer to the Tivoli Workload Scheduler Reference Guide.
cpuname DM1
os UNIX
node domain1
description "Fault-tolerant Agent"
for Maestro
autolink off
end
5. The newly-defined fault-tolerant agent is not recognized until the Jnextday job
runs in the final job stream. If you want to incorporate the fault-tolerant agent
sooner, you can run conman "release final". For information about defining
your scheduling objects, refer to the Tivoli Workload Scheduler Reference Guide.
6. Issue the link command from the master domain manager to link the
fault-tolerant agent and to download the Symphony file to it:
conman "link ftaname"

Updating the security file


When you are performing a full install, the installation procedure automatically
creates a Tivoli Administrator called TWS_TWSuser, edits the Administrator’s
logins with the Tivoli Workload Scheduler user name, and updates the security file
with the Tivoli Administrator. However, if you performed an add feature
operation (installing the connector to an existing installation), you must update
the security file manually.

Complete the following steps to add the Tivoli Administrator (TWS_TWSuser)
created during the installation procedure with the scheduler login to the scheduler
security file:


1. Log in as the user (usually TWSuser or maestro).


2. Change directory to TWShome.
3. Run the dumpsec command to create a temporary editable copy of the security
file:
dumpsec >tempsec
4. Edit the tempsec file to insert the Administrator name under the USER
MAESTRO heading, as described in the following example:
USER MAESTRO
CPU=@+LOGON=tws82,root,TWS_TWSuser

where TWS_TWSuser is the Administrator name.

Note: To obtain the Administrator name, open the Tivoli desktop and
double-click Administrators. The Administrator name is the
Administrators group to which your login belongs.
5. Set the Tivoli environment:
From a UNIX command line:
v For ksh:
. /etc/Tivoli/setup_env.sh
v For csh:
source /etc/Tivoli/setup_env.csh
6. Enter the following command to stop the Connector:
v on UNIX
wmaeutil.sh ALL -stop
v on Microsoft Windows
wmaeutil.cmd ALL -stop
7. Run the makesec command to compile the temporary file into a new security
file:
makesec tempsec

For more information on the makesec and dumpsec commands, see IBM Tivoli
Workload Scheduler: Reference.

Configuration steps for UNIX Tier 1 and 2 installations


For installations on Tier 1 and Tier 2 UNIX platforms, perform the following
configuration tasks.
1. Create a .profile file for the TWSuser, if one does not already exist
(TWShome/.profile). Edit the file and modify the PATH variable to include
TWShome and TWShome/bin. For example, if Tivoli Workload Scheduler has
been installed in the /opt/maestro directory, in a Bourne/Korn shell
environment, the PATH variable should be defined as follows:
PATH=/opt/maestro:/opt/maestro/bin:$PATH
export PATH

In addition to the PATH, you must also set the TWS_TISDIR variable to
TWShome. The TWS_TISDIR variable enables Tivoli Workload Scheduler to
display messages in the correct language and codeset. For example,
TWS_TISDIR=/opt/maestro
export TWS_TISDIR


In this way, the necessary environment variables and search paths are set to
allow you to run commands, such as conman or composer commands, even if
you are not located in the TWShome path. Alternatively, you can use the
tws_env shell script to set up both the PATH and TWS_TISDIR variables. These
variables must be set before you can run commands. The tws_env script has
been provided in two versions:
v tws_env.sh for Bourne and Korn shell environments
v tws_env.csh for C Shell environments
See step 1 on page 73 in “Configuring a master domain manager” on page 73
for information about the tws_env script on Windows systems.
2. To start the Tivoli Workload Scheduler network management process, Netman,
automatically as a daemon each time you boot your system, add one of the
following to the /etc/rc file, or the proper file for your system.
To start Netman only:
if [ -x twshome/StartUp ]
then
echo "netman started..."
/bin/su - twsuser -c "twshome/StartUp"
fi

Or, to start the entire Tivoli Workload Scheduler process tree:


if [ -x twshome/bin/conman ]
then
echo "Workload Scheduler started..."
/bin/su - twsuser -c "twshome/bin/conman start"
fi
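As mentioned in step 1, you can source the tws_env script instead of setting the
variables by hand. For example, assuming Tivoli Workload Scheduler is installed
in /opt/maestro:
. /opt/maestro/tws_env.sh (Bourne and Korn shells)
source /opt/maestro/tws_env.csh (C shell)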

Configuration steps for Tier 2 installations


For Tier 2 installations, perform the following configuration tasks. These tasks
must be performed from the composer and conman command line interfaces.

Configuring a fault-tolerant agent after installation


After you have configured the fault-tolerant agent in this way, you can use the Job
Scheduling Console to configure the other workstations, and job scheduling objects
in your Tivoli Workload Scheduler network. Refer to the IBM Tivoli Workload
Scheduler Reference Guide for details on the commands used below. Refer to the
Tivoli Workload Scheduler Job Scheduling Console User’s Guide for information on
using the Job Scheduling Console to configure other workstations in the network.
1. Login to the master domain manager as TWSuser. This is the user that manages
the Tivoli Workload Scheduler master domain manager instance.
2. Create the fault-tolerant workstation definition in the Tivoli Workload
Scheduler database by using the composer command line. See step 1 on page 76
for information about setting the PATH and TWS_TISDIR environment variables
before running commands. Open a command line window and enter the
following commands:
composer
new
3. This opens a text editor where you can create the fault-tolerant workstation
definition in the Tivoli Workload Scheduler database. Below is an example
workstation definition for a fault-tolerant agent. For more information on
workstation definitions, refer to the Tivoli Workload Scheduler Reference Guide.
cpuname FTA1
os UNIX
node FTA1.rome.ibm.com


description "Fault-tolerant Agent"


for Maestro
autolink on
end
4. Create a new Symphony file that includes the fault-tolerant workstation
definition. To do this run the Jnextday job on the master domain manager
which automates the creation of a new Symphony file:
Jnextday
5. Issue the link command from the master domain manager to link the
fault-tolerant agent and to download the Symphony file to it:
conman "link ftaname"

Enabling the time zone feature


The time zone feature is enabled by an entry in the globalopts file and by
specifying a time zone in the master’s workstation definition, as follows:
timezone enable = yes|no

Time zones are disabled by default on installation or update of the product. If the
timezone enable entry is missing from the globalopts file, time zones are disabled.

The following steps outline the method of implementing the time zone feature:
1. Load Tivoli Workload Scheduler.
The default setting for the time zone is timezone enable = no in the globalopts
file. The database allows time zones to be specified for workstations, but not on
start and deadline times within job streams in the database. The plan creation
(Jnextday) ignores any time zones that are present in the database. You will not
be able to specify any time zones anywhere in the plan.
2. Define workstation time zones.
Set the time zone of the master workstation, of the backup master, and of any
fault-tolerant agents that are in a different time zone than the master. No time
zones will be allowed in the database for start and deadline times. No time
zones will be allowed anywhere in the plan at this point, because the timezone
enable entry in the globalopts file is still set to NO.
3. When workstation time zones have been set correctly, set timezone enable to
YES in the globalopts file. This setting, and the time zone definition in the
master workstation, will enable the Tivoli Workload Scheduler network to take
advantage of time zone support.
At this point, all users will be able to use time zones anywhere in the database,
although they should wait for the next run of Jnextday to use them on start and
deadline times. Until Jnextday runs, they will not be able to use time zones in
the plan. The next time Jnextday runs, time zones will be carried over to the
plan, and the Job Scheduling Console and the backend will allow the
specification of time zones anywhere in the plan.
4. Start using time zones on start and until times where needed.
You can now use all time zone references in the database and in the plan with
both the Job Scheduling Console and the CLI.
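For example, the following is a minimal sketch of the two settings involved (the
workstation name, node, and time zone shown are assumptions; refer to the
workstation definition syntax in the Tivoli Workload Scheduler Reference Guide):
In the globalopts file on the master:
timezone enable = yes
In the master workstation definition:
cpuname MASTER
os UNIX
node master.rome.ibm.com
timezone Europe/Rome
autolink on
end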


Enabling the time zone in an end-to-end network


In an end-to-end network, you define the workstation in the Tivoli Workload
Scheduler for z/OS master. You define the workstation using the CPUREC
statement as described in the IBM Tivoli Workload Scheduler for z/OS Customization
and Tuning manual. In the CPUREC statement, you set the local time zone of the
workstation using the CPUTZ keyword. If you do not specify the CPUTZ
keyword, the default value is UTC (universal coordinated time). Ensure that the
time zone specified in CPUTZ matches the time zone of the operating system of
the workstation on which Tivoli Workload Scheduler runs. If the time zones do not
match, a message coded as AWSBHT128I is displayed in the log file of the
workstation.

In an end-to-end network, the time zone feature is always enabled and does not
need to be set in the globalopts file. Also, the value specified for the CPUTZ
keyword is used for every workstation. If it is not specified, the default value UTC
is used.
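For example, a minimal CPUREC sketch (the workstation name, node, and time
zone are assumptions; see the IBM Tivoli Workload Scheduler for z/OS Customization
and Tuning manual for the complete statement syntax):
CPUREC CPUNAME(FTA1)
CPUNODE('fta1.rome.ibm.com')
CPUOS(UNIX)
CPUTZ('Europe/Rome')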

Chapter 9. Optional customization
After installing the product, you might want to customize it to fit your operational
requirements. This chapter describes optional customization steps for your Tivoli
Workload Scheduler installation. Topics include:
v “Global options”
v “Local options” on page 86
v “Setting up decentralized administration” on page 94
v “Tivoli Workload Scheduler console messages and prompts” on page 96
v “Automating the production cycle” on page 97
v “Managing the production environment” on page 98
v “Using the configuration scripts” on page 100
v “Tivoli Workload Scheduler and Tivoli Management Framework” on page 104

Global options
You define global options on the master domain manager and they apply to all the
workstations in the Tivoli Workload Scheduler network.

Setting the global options


You set global options in the globalopts file with a text editor. You can make
changes at any time, but they do not take effect until Tivoli Workload Scheduler is
stopped and restarted. Table 25 describes the syntax of the file. Entries are not
case-sensitive.
Table 25. Globalopts syntax
Syntax Default value
# comment
automatically grant logon as batch = yes|no no
batchman schedule = yes|no no
bmmsgbase = integer 1000
bmmsgdelta = integer 1000
carry job states = ([state[,...]]) null
carryforward = yes|no|all yes
centralized security=on|off off
company = companyname null
database audit level = 0|1 0
enable list security check = yes|no no
history = days 10
ignore calendars = yes|no no
master = wkstation Set initially when you install
Tivoli Workload Scheduler.
plan audit level = 0|1 0
retain rerun job name = yes|no no

start = starttime 0600
timezone enable = yes|no no

# comment
Treat everything from the pound sign to the end of the line as a comment.
automatically grant logon as batch
This is for Windows jobs only. If set to yes, the logon users for Windows
jobs are automatically granted the right to Logon as batch job. If set to no,
or omitted, the right must be granted manually to each user or group.
Note that the right cannot be granted automatically for users running jobs
on a Backup Domain Controller (BDC), so you must grant those rights
manually.
bmmsgbase
Specify the maximum number of prompts that can be displayed to the
operator after a job abends. The default value is 1000.
bmmsgdelta
Specify an additional number of prompts for the value defined in
bmmsgbase for the case when a job is rerun after abending and the limit
specified in bmmsgbase has been reached. The default value is 1000.
batchman schedule
This is a production option that affects the operation of Batchman, which is
the production control process of Tivoli Workload Scheduler. The setting
determines the priority assigned to the job streams created for unscheduled
jobs. Enter yes to have a priority of 10 assigned to these job streams. Enter
no to have a priority of 0 assigned to these job streams.
carry job states
This is a pre-production option that affects the operation of the stageman
command. Its setting determines the jobs, by state, to be included in job
streams that are carried forward. You must enclose the job states in
parentheses, double quotation marks, or single quotation marks. The
commas can be replaced by spaces. The valid internal job states are as
follows:

abend abenp add done exec fail
hold intro pend ready rjob sched
skel succ succp susp wait waitd

Some examples of the option are as follows:


carry job states=(abend,exec,hold,intro)
carry job states="abend exec hold intro"
carry job states=’abend exec hold intro’

An empty list is entered as follows:


carry job states=()

See “Carry forward options” on page 85 for more information.


carryforward
This is a pre-production option that affects the operation of the stageman
command. Its setting determines whether or not job streams that did not
complete are carried forward from the old to the new production plan


(Symphony). Enter yes to have uncompleted job streams carried forward
only if the Carry Forward option is enabled in the job stream definition.
Enter all to have all uncompleted job streams carried forward, regardless
of the Carry Forward option. Enter no to completely disable the carry
forward function. The stageman -carryforward command line option is
assigned the same values and serves the same function as the carryforward
Global Option. If it is used, it overrides the Global Option. See “Carry
forward options” on page 85 for more information.
centralized security
Defines how the security file is used within the IBM Tivoli Workload
Scheduler network.
If it is set to on, the security files of all the workstations of the IBM Tivoli
Workload Scheduler network can be created and modified only on the
master, and the IBM Tivoli Workload Scheduler administrator is
responsible for their production, maintenance, and distribution.
If it is set to off, the security file of each workstation can be managed by
its root user or administrator. The local user can run the makesec
command to create or update the file.
The default is off. See IBM Tivoli Workload Scheduler: Reference for details on
using this feature.
company
This is your company’s name, up to 40 characters. If the name contains
spaces, enclose the entire name in quotation marks ("). If you use the
Japanese-Katakana language set, always enclose the name within single or
double quotes to ensure that it appears in the pages of your reports.
database audit level
Select whether to enable or disable database auditing. Valid values are 0 to
disable database auditing, and 1 to activate database auditing. Auditing
information is logged to a flat file in the TWShome/audit/database directory.
Each Tivoli Workload Scheduler workstation maintains its own log. For the
database, only actions are logged in the auditing file, not the delta of the
action. For more information on this feature, see IBM Tivoli Workload
Scheduler: Reference.
enable list security check
This is a security option that controls which objects in the plan the user is
permitted to list when running a Job Scheduling Console query or a
conman show command. If set to yes, objects in the plan returned from a
query or show command are shown to the user only if the user has been
granted the list permission in the security file. The default value is no.
Change the value to yes if you want to check for the list permission in the
security file.
history
Enter the number of days for which you want to save job statistics.
Statistics are discarded on a first-in, first-out basis. This has no effect on job
standard list files, which must be removed with the rmstdlist command.
See the Tivoli Workload Scheduler Reference Manual for information about the
rmstdlist command.
ignore calendars
This is a pre-production option that affects the operation of the compiler
command. Its setting determines whether or not user calendars are copied
into the new Production Control file. Enter yes to prevent user calendars


from being copied into the new production plan (Symphony file). This
conserves space in the file, but prevents the use of calendar names in date
expressions. Enter no to have user calendars copied into the new
production plan. See the explanation of the compiler command in the
Tivoli Workload Scheduler Reference Guide for more information.
master
The name of the master domain manager. This is set when you install
Tivoli Workload Scheduler.
plan audit level
Select whether to enable or disable plan auditing. Valid values are 0 to
disable plan auditing, and 1 to activate plan auditing. Auditing
information is logged to a flat file in the TWShome/audit/plan directory.
Each Tivoli Workload Scheduler workstation maintains its own log. For the
plan, only actions are logged in the auditing file, not the success or failure
of any action. For more information on this feature, see IBM Tivoli
Workload Scheduler: Reference.
retain rerun job name
This is a production option that affects the operation of Batchman, which is
the production control process of Tivoli Workload Scheduler. Its setting
determines whether or not jobs that are rerun with the Conman rerun
command will retain their original job names. Enter yes to have rerun jobs
retain their original job names. Enter no to permit the rerun from name to
be assigned to rerun jobs.
start Enter the start time of the Tivoli Workload Scheduler processing day in 24
hour format: hhmm (0000-2359). The default start time is 6:00 A.M., and the
default launch time of the final job stream is 5:59 A.M. If you change this
option, you must also change the launch time of the final job stream,
which is usually set to one minute before the start time.
timezone enable
Select whether to enable or disable the time zone option. Valid values are
yes to activate time zones in your network, and no to disable time zones in
the network. The time zone is defined in the workstation definition and
the feature can be enabled by the value of the entry in the globalopts file.
Time zones are disabled by default when installing or upgrading Tivoli
Workload Scheduler. If the timezone enable entry is missing from the
globalopts file, time zones are disabled. For more information on this
feature, refer to “Enabling the time zone feature” on page 78.

Global options file example


A Global Options file template containing Tivoli Workload Scheduler’s default
settings is located in TWShome/config/globalopts.

During the installation process, a working copy of the global options file is
installed as TWShome/mozart/globalopts.

You can customize the working copy to your needs. The following is a sample of a
global options file:
# Globalopts file on the master domain manager defines
# attributes of the Tivoli Workload Scheduler network.
#--------------------------------------------------------
company="IBM"
master=main
start=0600
history=10
carryforward=yes


ignore calendars=no
batchman schedule=no
retain rerun job name=no
centralized security=no
#
#--------------------------------------------------------
# End of globalopts.

Carry forward options


Job streams are carried forward by the stageman command during end-of-day
processing. The carry forward process is affected by the following:
v The carryforward keyword in your job streams. Refer to the Tivoli Workload
Scheduler Reference Guide for more information.
v The carryforward global option. See the description on page 82.
v The stageman -carryforward command line option. See the explanation of the
stageman command in the Tivoli Workload Scheduler Reference Guide.
v The carry job states global option. See page 82.

The following table shows how the various carry forward options work together.

carryforward=no
No job streams are carried forward.

carryforward=yes, carry job states=(states)
Job streams are carried forward only if they have both jobs in the
specified states and the Carryforward option enabled. Only the jobs in
the specified states are carried forward with the job streams.

carryforward=yes, carry job states=()
Job streams are carried forward only if they are both uncompleted and
have the Carryforward option enabled. All jobs are carried forward
with the job streams.

carryforward=all, carry job states=(states)
Job streams are carried forward only if they have jobs in the specified
states. Only jobs in the specified states are carried forward with the
job streams.

carryforward=all, carry job states=()
Job streams are carried forward only if they are uncompleted. All jobs
are carried forward with the job streams.

Carry forward options have the following behavior:
v Any job stream not in SUCC status is considered uncompleted and is carried
forward. A chain of rerun jobs is considered uncompleted if at least one job is
uncompleted, so the whole chain is carried forward.
v The stageman -carryforward command line option, if used, always overrides the
carryforward global option. The default, if neither is specified, is
carryforward=yes.
v The default entry is null for the carry job states Global Option. That is, if the list
is empty or the option is absent, carry forward works as described for carry job
states=().
v Jobs and job streams that were cancelled are never carried forward.
v Jobs and job streams with expired until times are never carried forward.
v The decision to carry forward a repetitive job (defined by the Every option) is
based on the state of its most recent run.


v If a job is running when the Jnextday job begins execution, and it is not
specified to be carried forward, the job continues to run and is placed in the
userjobs job stream for the new production day. Note that dependencies on such
jobs are not carried forward, and any resources that are held by the job are
released.

Local options
Local options are defined on each workstation, and apply only to that workstation.

Setting local options


You enter local options in a file named localopts with a text editor. Changes can
be made at any time but do not take effect until Tivoli Workload Scheduler is
stopped and restarted. Table 26 describes the syntax. Entries are not case-sensitive.
Table 26. Localopts syntax
Syntax Default value
# comment
bm check deadline = seconds 0
bm check file = seconds 120
bm check status = seconds 300
bm check until = seconds 300
bm look = seconds 30
bm read = seconds 15
bm stats = on|off off
bm verbose = on|off off
composer prompt = key dash (-)
conman prompt = key percent (%)
date format = integer 1
db visible for gui = yes|no no
jm job table size = entries 160
jm look = seconds 300
jm nice = value 0
jm no root = yes|no no
jm read = seconds 10
merge stdlists = yes|no yes
mm cache mailbox = yes|no no
mm cache size = bytes 32
mm read = seconds 15
mm resolve master = yes|no yes
mm response = seconds 600
mm retry link = seconds 600
mm sound off = yes|no no
mm unlink = seconds 960
mozart directory = mozart_share None

nm ipvalidate = none|full none
nm mortal = yes|no no
nm port = port number 31111
nm read = seconds 60
nm retry = seconds 800
nm SSL port = value 31113
parameters directory = parms_share None
SSL auth mode = caonly|string|cpu caonly
SSL auth string = string tws
SSL CA certificate = *.crt TWShome/ssl/filename.crt
SSL certificate = *.crt TWShome/ssl/filename.crt
SSL certificate chain = *.crt TWShome/ssl/filename.crt
SSL encryption cipher None. See Table 38 on page 149
SSL key = *.key TWShome/ssl/filename.key
SSL key pwd = *.sth TWShome/ssl/filename.sth
SSL random seed = *.rnd TWShome/ssl/filename.rnd
stdlist width = columns 80
sync level = low|medium|high high
switch sym prompt = key percent (%)
syslog local = facility -1
tcp timeout = seconds 600
thiscpu = wkstation thiscpu
wr read = seconds 600
wr enable compression = yes|no no
wr unlink = seconds 600

# comment
Treats everything from the pound sign to the end of the line as a comment.
bm check deadline
Specify the maximum number of seconds Batchman will wait before
reporting the expiration of the deadline time for a job or job stream. The
default value is 0, which means that no check of the deadline is performed
and its expiration is not reported. To enable this check, specify a value in
seconds.
bm check file
Specify the minimum number of seconds Batchman will wait before
checking for the existence of a file that is used as a dependency.
bm check status
Specify the number of seconds Batchman will wait between checking the
status of an internetwork dependency.
bm check until
Specify the maximum number of seconds Batchman will wait before
reporting the expiration of an Until time for a job or job stream. Specifying a


value below the default setting (300) may overload the system. If it is set
below the value of Local Option bm read, the value of bm read is used in
its place.
bm look
Specify the minimum number of seconds Batchman will wait before
scanning and updating its production control file.
bm read
Specify the maximum number of seconds Batchman will wait for a
message in the INTERCOM.MSG message file. If no messages are in
queue, Batchman waits until the timeout expires or until a message is
written to the file.
bm stats
Specify on to have Batchman send its startup and shutdown statistics to its
standard list file. Specify off to prevent Batchman statistics from being sent
to its standard list file.
bm verbose
Specify on to have Batchman send all job status messages to its standard
list file. Specify off to prevent the extended set of job status messages from
being sent to the standard list file.
composer prompt
Specify a prompt for the composer command line. The prompt can be
up to 10 characters in length. The default is a dash (-).
conman prompt
Specify a prompt for the conman command line. The prompt can be up
to 8 characters in length. The default is a percent sign (%).
date format
Specify the value that corresponds to the date format you desire. The
values can be:
v 0 corresponds to yy/mm/dd
v 1 corresponds to mm/dd/yy
v 2 corresponds to dd/mm/yy
v 3 indicates usage of Native Language Support variables
The default value is 1.
db visible for gui
Specify yes to enable the Job Scheduling Console to access the
fault-tolerant agent database, while connecting to the fault-tolerant agent,
even if it is not a master domain manager. The Job Scheduling Console
user is able to see the database icons. The default value is no.
jm job table size
Specify the size, in number of entries, of the job table used by Jobman.
jm look
Specify the minimum number of seconds Jobman will wait before looking
for completed jobs and performing general job management tasks.
jm nice
For UNIX only, specify the nice value to be applied to jobs launched by
Jobman.


jm no root
For UNIX only, specify yes to prevent Jobman from launching root jobs.
Specify no to allow Jobman to launch root jobs.
jm read
Specify the maximum number of seconds Jobman will wait for a message
in the COURIER.MSG message file.
mm cache mailbox
Use this option to enable mailman to use a reading cache for incoming
messages. In that case, not all messages are cached, but only those not
considered essential for network consistency. The default is no.
mm cache size
Specify this option if you also use mm cache mailbox. The default is 32
events. Use the default for small and medium networks. Use larger values
for large networks. Avoid using a large value on small networks. The
maximum value is 512 (higher values are ignored).
merge stdlists
Specify yes to have all of the Tivoli Workload Scheduler control processes,
except Netman, send their console messages to a single standard list file.
The file is given the name TWSmerge. Specify no to have the processes
send messages to separate standard list files.
mm read
Specify the rate, in seconds, at which Mailman checks its mailbox for
messages. The default is 15 seconds. Specifying a lower value will cause
Tivoli Workload Scheduler to run faster but use more processor time.
mm resolve master
If this is set to "yes" (the default), the $MASTER variable is resolved at the
beginning of the production day. The host of any extended agent will be
switched after the next Jnextday (long-term switch). If it is set to "no", the
$MASTER variable is not resolved at Jnextday time, and this lets the host
of any extended agent be switched right after a conman switchmgr
command (short- and long-term switch).
mm response
Specify the maximum number of seconds Mailman will wait for a response
before reporting that a workstation is not responding. The response time
should not be less than 90 seconds.
mm retry link
Specify the maximum number of seconds Mailman will wait, after
unlinking from a non-responding workstation, before it attempts to link to
the workstation again.
mm sound off
Specifies how Mailman responds to a conman tellop ? command. Specify
yes to have Mailman display information about every task it is performing.
Specify no to have Mailman send only its own status.
mm unlink
Specify the maximum number of seconds Mailman will wait before
unlinking from a workstation that is not responding. The wait time should
not be less than the response time specified for the local option mm
response.


nm ipvalidate
Specify full to enable IP address validation. If IP validation fails, the
connection is not allowed. Specify none to allow connections when IP
validation fails.
nm mortal
Specify yes to have Netman quit when all of its child processes have
stopped. Specify no to have Netman keep running even after its child
processes have stopped.
nm port
Specify the TCP port number that Netman responds to on the local
computer. This must match the TCP port in the computer’s workstation
definition. It must be an unsigned 16-bit value in the range 1 to 65535
(remember that values between 0 and 1023 are reserved for well-known
services such as FTP, Telnet, and HTTP).
nm read
Specify the maximum number of seconds Netman will wait for a
connection request before checking its message queue for stop and start
commands.
nm retry
Specify the maximum number of seconds Netman will wait before retrying
a connection that failed.
nm SSL port
The port used to listen for incoming SSL connections. This value must
match the one defined in the secureaddr attribute in the workstation
definition in the IBM Tivoli Workload Scheduler database. It must be
different from the nm port local option that defines the port used for
normal communications. At installation time, the default value is 0. When
the CPU is created and SSL authentication is enabled, the port number
assumes the value 31113.
Notes:
1. On Windows, place this option also in the localopts file.
2. If you install multiple instances of Tivoli Workload Scheduler version
8.2 on the same computer, set all SSL ports to different values.
3. If you plan not to use SSL, set the value to 0.
SSL auth mode
The behavior of Tivoli Workload Scheduler during an SSL handshake is
based on the value of the SSL auth mode option as follows:
caonly Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. Information contained in the certificate is not examined. It is
the default. If you do not specify the SSL auth mode option, or you
define a value that is not valid, the caonly value is used.
string Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the string specified in the SSL auth string option. See
the description of SSL auth string on page 91.
cpu Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the name of the CPU that requested the service.


SSL auth string


Used in conjunction with the SSL auth mode option when the "string"
value is specified. The SSL auth string (from 1 to 64 characters) is
used to verify the certificate validity. If you do not specify an SSL auth
string value in conjunction with the SSL auth mode, then the default
string value used is "tws".
SSL CA certificate
In SSL authentication, the name of the file containing the trusted
certification authority (CA) certificates required for authentication. The CAs
in this file are also used to build the list of acceptable client CAs passed to
the client when the server side of the connection requests a client
certificate. This file is the concatenation, in order of preference, of the
various PEM-encoded CA certificate files. See “Setting strong
authentication and encryption” on page 139 for reference.
SSL certificate
In SSL authentication, the name of the local certificate file. See “Setting
strong authentication and encryption” on page 139 for reference.
SSL certificate chain
In SSL authentication, the name of the file that contains the concatenation
of the PEM-encoded certificates of certification authorities which form the
certificate chain of the workstation’s certificate. This parameter is optional.
If it is not specified, the file pointed by SSL CA certificate is used. See
“Setting strong authentication and encryption” on page 139 for reference.
SSL encryption cipher
The ciphers that the workstation supports during an SSL connection. Use
the following shortcuts:
Table 27. Shortcuts for encryption ciphers
Shortcut Encryption ciphers
SSLv3 SSL version 3.0
TLSv1 TLS version 1.0
EXP Export
EXPORT40 40-bit export
EXPORT56 56-bit export
LOW Low strength (no export, single DES)
MEDIUM Ciphers with 128 bit encryption
HIGH Ciphers using Triple-DES
NULL Ciphers using no encryption

SSL key
In SSL authentication, the name of the private key file. See “Setting strong
authentication and encryption” on page 139 for reference.
SSL key pwd
In SSL authentication, the name of the file containing the password for the
stashed key. See “Setting strong authentication and encryption” on page
139 for reference.
SSL random seed
The pseudo random number file used by OpenSSL on some platforms.
Without this file, SSL authentication may not work properly. See “Setting
strong authentication and encryption” on page 139 for reference.


stdlist width
Specify the maximum width of the Tivoli Workload Scheduler console
messages. You can specify a column number in the range 1 to 255 and lines
are wrapped at or before the specified column, depending on the presence
of embedded carriage control characters. Specify a negative number or zero
to ignore line width. On UNIX, you should ignore line width if you enable
system logging with the syslog local option.
syslog local
Enables or disables Tivoli Workload Scheduler system logging for UNIX
computers only. Specify -1 to turn off system logging for Tivoli Workload
Scheduler. Specify a number from 0 to 7 to turn on system logging and
have Tivoli Workload Scheduler use the corresponding local facility
(LOCAL0 through LOCAL7) for its messages. Specify any other number to
turn on system logging and have Tivoli Workload Scheduler use the USER
facility for its messages. For more information, see “Tivoli Workload
Scheduler console messages and prompts” on page 96.
sync level
Specify the rate at which Tivoli Workload Scheduler synchronizes
information written to disk. This option affects all mailbox agents and is
applicable to UNIX workstations only. Values can be:
low Allows the operating system to handle it.
medium
Flushes the updates to disk after a transaction has completed.
high Flushes the updates to disk every time data is entered.

The default is high.


switch sym prompt
Specify a prompt for the conman command line after you have selected a
different Symphony file with the setsym command. The prompt can be
up to 8 characters in length. The default is a percent sign (%).
tcp timeout
With this attribute for the Netman process, specify the maximum number
of seconds that Mailman and Conman will wait for the completion of a
request on a linked workstation that is not responding. The default is 600
seconds.
thiscpu
Specify the Tivoli Workload Scheduler name of this workstation.
wr enable compression
Use this option on fault-tolerant agents. Specify whether the fault-tolerant agent
can receive the Symphony file in compressed form from the master domain
manager. The default is no.
wr read
Specify the number of seconds the Writer process will wait for an incoming
message before checking for a termination request from Netman.
wr unlink
Specify the number of seconds the Writer process will wait before exiting if
no incoming messages are received. The lower limit is 120 and the default
is 600.


Local options file example


A template file containing Tivoli Workload Scheduler’s default settings is located in
TWShome/config/localopts.

During the installation process, a working copy of the local options file is installed
as TWShome/localopts, unless you have specified a non-default location for
netman. In that case there are two copies of the localopts file, one in TWShome
and one in Netmanhome. Any options pertaining to netman need to be updated
in the localopts file in Netmanhome.

You can customize the working copy to your needs. For example:
#
# Tivoli Workload Scheduler localopts file defines attributes of this Workstation.
#
#----------------------------------------------------------------------------
# Attributes of this Workstation:
#
thiscpu = <THIS_CPU>
merge stdlists = yes
stdlist width = 80
syslog local = -1
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler batchman process:
#
bm check file = 120
bm check status = 300
bm look = 15
bm read = 10
bm stats = off
bm verbose = off
bm check until = 300
bm check deadline = 600
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler jobman process:
#
jm job table size = 1024
jm look = 300
jm nice = 0
jm no root = no
jm read = 10
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler mailman process:
#
mm response = 600
mm retrylink = 600
mm sound off = no
mm unlink = 960
mm cache mailbox = no
mm cache size = 32
mm resolve master = yes
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler netman process:
#
nm mortal = no
nm port = <TCP_PORT>
nm read = 10
nm retry = 800
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler writer process:
#
wr read = 600
wr unlink = 120
wr enable compression = no
#
#----------------------------------------------------------------------------


# Optional attributes of this Workstation for remote database files
#
# mozart directory = <HOME>/mozart
# parameters directory = <HOME>
#
#----------------------------------------------------------------------------
# Custom format attributes
#
date format = 1    # The possible values are 0-ymd, 1-mdy, 2-dmy, 3-NLS.
composer prompt = -
conman prompt = %
switch sym prompt = <n>%
#
#----------------------------------------------------------------------------
# Attributes for customization of I/O on mailbox files
#
sync level = high
#
#----------------------------------------------------------------------------
# Network attributes
#
tcp timeout = 600
#
#----------------------------------------------------------------------------
# SSL Attributes
#
nm SSL port = 0
SSL key = $(TWShome)/ssl/TWS.key
SSL certificate = $(TWShome)/ssl/TWS.crt
SSL key pwd = $(TWShome)/ssl/TWS.sth
SSL CA certificate = $(TWShome)/ssl/TWSTrustedCA.crt
SSL certificate chain = $(TWShome)/ssl/TWSCertificateChain.crt
SSL random seed = $(TWShome)/ssl/TWS.rnd
SSL Encryption Cipher = SSLv3
SSL auth mode = caonly
SSL auth string = tws

Setting up decentralized administration


You can administer Tivoli Workload Scheduler scheduling objects from computers
other than the Tivoli Workload Scheduler master domain manager where the
databases exist.

If you intend to administer scheduling objects in this manner, you must create
shares for the directories in the master domain manager and define a set of local
options on the other computers.

Sharing the master directories


Ensure the following directories are shareable on the master domain manager, and
set the permissions to give the domain user, TWS or maestro, full control:
TWShome\mozart
TWShome\network
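For example, a minimal sketch using the Windows net share command (the share
names and TWShome path are assumptions; you must still grant the domain user
full control on the shares):
net share mozart=c:\win32app\TWS\maestro\mozart
net share unison=c:\win32app\TWS\maestro\network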

Sharing Tivoli Workload Scheduler parameters


The Tivoli Workload Scheduler substitution parameters are normally
computer-specific and administered separately on each computer. If you want the
parameters to be common to all computers, and administered from any computer,
you can either share the TWShome directory as described above for other
directories, or copy the parameters database, each time it changes, from the master
domain manager to each of the other computers. The database files are:
TWShome\parameters
TWShome\parameters.KEY
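For example, a minimal sketch of the copy approach, assuming the master’s
TWShome is shared as \\master\TWS and the local TWShome is
c:\win32app\TWS\maestro (both names are assumptions):
copy \\master\TWS\parameters c:\win32app\TWS\maestro
copy \\master\TWS\parameters.KEY c:\win32app\TWS\maestro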


Using a single share


As an alternative to sharing different directories, you can move all of the database
files to a common directory, share the directory, and then set the local options,
discussed below, to the share name. The database files are listed below.

In TWShome\network\ :
cpudata
cpudata.KEY
userdata
userdata.KEY

In TWShome\mozart\ :
calendars
calendars.KEY
job.sched
job.sched.KEY
jobs
jobs.KEY
mastsked
mastsked.KEY
prompts
prompts.KEY
resources
resources.KEY

In TWShome :
parameters
parameters.KEY

The files are created as needed by Tivoli Workload Scheduler. If they do not exist,
you can simply set the local options to the shared directory as described below.

Setting local options


To access the shared master databases, set the local options on each of the
computers from which you want to administer Tivoli Workload Scheduler
scheduling objects. The options are described here followed by the procedure for
modifying them.

Note that each option can be set to a conventional name (drive:\share) or a UNC
name (\\node\share). If set to a conventional name, the Tivoli Workload Scheduler
user must explicitly connect to the share. If set to a UNC name, an explicit
connection is not required. The local options are:
mozart directory
Defines the name of the master’s shared mozart directory.
unison network directory
Defines the name of the master’s shared directory.
parameters directory
Defines the name of the master’s shared TWShome directory.

If an option is not set or does not exist, the Tivoli Workload Scheduler programs
attempt to open the database files on the local computer. See “Setting local options”
for more information.

On each of the computers, set the options as follows:


1. Use an editor of your choice to open and modify the file TWShome\localopts.


2. The options exist as comments in the Tivoli-supplied file. To set an option,
remove the # sign in column 1 and change the value to point to the correct
directory. For example, to access all objects except parameters:
mozart directory = \\hub\mozart
unison network directory = \\hub\unison
# parameters directory = d:\maestro
3. Save the file and exit.
4. Stop and restart Tivoli Workload Scheduler (including Netman) to make the
changes operative.

If an option is not set or does not exist, the Tivoli Workload Scheduler Composer
program attempts to access the database files on the local computer.

Setting local options on the master


If the database files have been moved from the default directories, then you must
set the local options on the master domain manager to the new location. See
“Setting local options” on page 95.

Tivoli Workload Scheduler console messages and prompts


The Tivoli Workload Scheduler control processes (Netman, Mailman, Batchman,
Jobman, and Writer) write their status messages, referred to as console messages,
to standard list files. These messages include the prompts used as job and job
stream dependencies. On UNIX, the messages can also be directed to the syslog
daemon (syslogd) and to a terminal running the Tivoli Workload Scheduler
console manager. These features are described in the following sections.

Setting sysloglocal on UNIX


If you set sysloglocal in the local options file to a positive number, Tivoli
Workload Scheduler’s control processes send their console and prompt messages to
the syslog daemon. Setting it to -1 turns this feature off. If you set it to a positive
number to enable system logging, you must also set the local option stdlistwidth
to 0, or a negative number.
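For example, using the option names described in this section, the following
localopts settings (a sketch) enable system logging through the LOCAL4 facility:
sysloglocal = 4
stdlistwidth = 0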

Tivoli Workload Scheduler's console messages correspond to the following syslog
levels:
LOG_ERR
Error messages such as control process abends and file system errors.
LOG_WARNING
Warning messages such as link errors and stuck job streams.
LOG_NOTICE
Special messages such as prompts and tellops.
LOG_INFO
Informative messages such as job launches and job and job stream state
changes.

Setting sysloglocal to a positive number defines the syslog facility used by Tivoli
Workload Scheduler. For example, specifying 4 tells Tivoli Workload Scheduler to
use the local facility LOCAL4. After doing this, you must make the appropriate
entries in the /etc/syslog.conf file, and reconfigure the syslog daemon. To use
LOCAL4 and have the Tivoli Workload Scheduler messages sent to the system
console, enter the following line in /etc/syslog.conf:

local4.info /dev/console

To have the Tivoli Workload Scheduler error messages sent to the maestro and root
users, enter the following:
local4.err maestro,root

Note that the selector and action fields must be separated by at least one tab. After
modifying /etc/syslog.conf, you can reconfigure the syslog daemon by entering the
following command:
kill -HUP `cat /etc/syslog.pid`

console command
You can use the conman console command to set the Tivoli Workload Scheduler
message level and to direct the messages to your terminal. The message level
setting affects only Batchman and Mailman messages, which are the most
numerous. It also sets the level of messages written to the standard list file or files
and the syslog daemon. The following command, for example, sets the level of
Batchman and Mailman messages to 2 and sends the messages to your computer:
console sess;level=2

Messages are sent to your computer until you either run another console
command, or exit conman. To stop sending messages to your terminal, you can
enter the following conman command:
console sys

Automating the production cycle


Pre- and post-production processing can be fully automated by adding the
Tivoli-supplied final job stream, or a user-supplied equivalent, to the Tivoli
Workload Scheduler database along with other job streams. A copy of the
Tivoli-supplied job stream can be found in TWShome/Sfinal, and a copy of the job
script can be found in TWShome/Jnextday. You may find it helpful to have printed
copies to assist in understanding the turnover process.

The final job stream is placed in production every day, and results in running a job
named Jnextday prior to the start of a new day. The job performs the following
tasks:
1. Links to all workstations to ensure that the master domain manager has been
updated with the latest scheduling information.
2. Runs the schedulr command to select job streams for the new day’s production
plan.
3. Runs the compiler command to compile the production plan.
4. Runs the reptr command to print pre-production reports.
5. Stops Tivoli Workload Scheduler.
6. Runs the stageman command to carry forward uncompleted job streams, log
the old production plan, and install the new plan.
7. Starts Tivoli Workload Scheduler for the new day.
8. Runs the reptr and the rep8 commands to print post-production reports for the
previous day.
9. Runs the logman command to log job statistics for the previous day.
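The following sketch summarizes this sequence as it might appear in a
Jnextday-style script. It is illustrative only: command options are omitted or
simplified, and the shipped TWShome/Jnextday script contains additional error
handling.
# Illustrative outline of the Jnextday processing sequence
conman "link @!@;noask"   # 1. link all workstations
schedulr                  # 2. select job streams for the new day's plan
compiler                  # 3. compile the production plan
reptr                     # 4. print pre-production reports
conman "stop @!@;wait"    # 5. stop Tivoli Workload Scheduler
stageman                  # 6. carry forward, log the old plan, install the new plan
conman start              # 7. restart Tivoli Workload Scheduler for the new day
reptr                     # 8. print post-production reports for the previous day
rep8
logman                    # 9. log job statistics for the previous day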

In the Tivoli Workload Scheduler library, the terms final and Jnextday are used
when referring to both the Tivoli-supplied versions and any user-supplied
equivalents.

Customizing the final job stream


Before using the final job stream, you can modify it to meet your needs, or you
can create a different job stream to use in its place.

When creating your own job stream, model it after the one supplied by Tivoli. If
you choose to do so, consider the following:
v If you choose to change the way stageman generates log file names, remember
that reptr and logman must use the same names.
v If you would like to print the pre-production reports in advance of a new day,
you can split the Jnextday job into two jobs. The first job will run schedulr,
compiler and reptr. The second job will stop Tivoli Workload Scheduler, run
stageman, start Tivoli Workload Scheduler, and run reptr and logman. The first
job can then be scheduled to run at any time prior to the end of day, while the
second job is scheduled to run just prior to the end of day.
See “Configuring a master domain manager” on page 73 for information about
adding the final job stream to the database.

Starting a production cycle


If it has not been started before, or if it becomes necessary to start a new
production day at a time other than the defined start of day, follow these steps:
1. Log in as the maestro user on the master domain manager.
2. At a command prompt, run conman "release final".
This will perform pre-production processing and start Tivoli Workload
Scheduler’s production processes.

Managing the production environment


This section provides information about changing the start of day for Tivoli
Workload Scheduler and creating a plan that runs processing for a future or past day.

Choosing the start of day


There are three common choices for the start of the production day.
v Early morning
v Late afternoon
v Midnight

These are a few of the scheduling implications:


Start Time and Latest Start Time
Start times (at keyword) are always specified in relation to the start time
of the scheduler's production day. You may need to add "+ 1 day" to job
streams whose jobs run across production days. Also, be certain that the
latest start time (until keyword) is a time later than the start time.
on keyword
Production and calendar days may not be the same. If your production
day starts at 06:00 a.m. (the default setting), 05:59 a.m. will be the last
minute of the production day. A job stream defined to run ON MONDAY
at 05:30 will be selected on Monday and will run on the calendar day
Tuesday at 5:30 a.m.
carryforward keyword
Placing the start of day near midnight to correspond with the calendar day
will tend to produce a large number of carried-forward job streams. This
may increase the complexity of managing the data center.
deadline
Notifications are sent when jobs and job streams have reached their
deadline but have not yet started, or have not yet finished running. A
deadline specifies the time within which a job or job stream must
complete.

Changing the start of day


The start of day for Tivoli Workload Scheduler is when the final Job Stream is run
and the Tivoli Workload Scheduler processes are stopped and restarted. To specify
the start of day for Tivoli Workload Scheduler:
1. Modify the start option in the globalopts file. This is the start time of Tivoli
Workload Scheduler's processing day in 24-hour format: hhmm (0000-2359). The
default start time is 6:00 a.m.
2. Modify the start time (AT keyword) of the final job stream to run one minute
before the end of day.
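For example, with the default start of day, the globalopts entry reads (a sketch;
see the globalopts file for the exact format):
start = 0600
and the final job stream is scheduled at 0559, one minute before the end of day.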

If you want to set the start of the production day to midnight:


1. Set the start time of the final job stream to midnight.
2. Set the start option in the globalopts file to 0001.
Otherwise, if you set the start option to 0000 and schedule Jnextday at 2359, you
risk selecting schedules or job streams for the day that just ended: the schedulr
command uses the system date, and in small networks the Jnextday job can
sometimes reach the schedulr step before midnight.

Creating a plan for future or past dates


You can create a plan that runs the processing normally scheduled for a future or
past day. This procedure effectively recreates any specified day of processing.
You may need to use it if, for example, you lost a day of processing due to an
emergency.
1. Unlink and stop all workstations in your Tivoli Workload Scheduler network
with the following commands:
conman "unlink @!@;noask"
conman "stop @!@;wait"
This stops all processing in the network.
2. Run the schedulr command with the date option to create a prodsked file:
schedulr -date MM/DD/YY
With the date option you can specify to create a plan based on a future or past
day of processing.
3. Run the compiler command to create a symnew file:
compiler (-date MM/DD/YY)

You can use the date option with the compiler to specify today's date or the
date of the day you are trying to recreate. This option may be necessary if you
have job streams that contain date-sensitive input parameters. The scheddate
parameter is keyed off the date specified with the compiler command. If you
do not specify a date, it defaults to the date entered with the schedulr
command.
4. Run the console manager to stop Tivoli Workload Scheduler processes:
conman "stop @!@"
5. Run stageman to create the new symphony file:
stageman
6. Run console manager to start Tivoli Workload Scheduler processes:
conman start

Using the configuration scripts


Tivoli Workload Scheduler provides two configuration scripts, one at the global
level and one at the local level, to establish your production environment.

In the production environment, jobs are launched under the direction of the
Production Control process Batchman. Batchman resolves all job dependencies to
ensure the correct order of execution, and then issues a job launch message to the
Jobman process.

Jobman spawns a job monitor process that begins by setting a group of
environment variables, and then it runs the standard configuration script
(TWShome/jobmanrc). If the user is allowed to use a local configuration script,
and the script $HOME/.jobmanrc exists, the local configuration script is also run.
The job is then run either by the standard configuration script, or by the local one.

Each of the processes launched by Jobman, including the configuration scripts and
the jobs, retains the user name recorded in the Logon field of the job. Submitted
jobs retain the submitting user's name. To have the jobs run with
the user's environment, be sure to add the user's .profile environment to the local
configuration script.
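A minimal way to do this (a sketch) is to source the profile at the top of the
user's local configuration script, before launching the job's script file:
# In $HOME/.jobmanrc: pick up the user's normal login environment
. $HOME/.profile
# then launch the job's script file
eval "$UNISON_JCL"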

Jobman environment variables


The variables listed in the table below are set and exported by Jobman.
Table 28. Jobman environment variables
Variable Value
HOME The login user's home directory
LOGNAME The login user’s name
PATH For MS-Windows:
%SYSTEMROOT%\SYSTEM32. For UNIX:
/bin:/usr/bin
TZ The timezone
UNISON_SHELL The user’s login shell
UNISON_CPU The name of this CPU
UNISON_HOST The name of the master/host CPU
UNISON_JOB The fully qualified job name: cpu#sched.job
UNISON_JOBNUM The job number (ppid)
UNISON_MASTER The name of the master CPU

UNISON_RUN Tivoli Workload Scheduler’s current
production run number
UNISON_SCHED The job stream name
UNISON_SCHED_DATE Tivoli Workload Scheduler’s production date
(yymmdd)
UNISON_SCHED_EPOCH Tivoli Workload Scheduler’s production
date, expressed in epoch form

Standard configuration script - jobmanrc


A standard configuration script template named TWShome/config/jobmanrc is
supplied with Tivoli Workload Scheduler. It is installed automatically as
TWShome/jobmanrc. This script can be used by the system administrator to establish
a desired environment before each job is executed. If you wish to alter the script,
make your modifications in the working copy (TWShome/jobmanrc), leaving the
template file intact. The file contains configurable variables, and comments to help
you understand the methodology. Table 29 describes the jobmanrc variables.
Table 29. Variables of jobmanrc
Variable Value
UNISON_JCL The path name of the job’s script file.
UNISON_STDLIST The path name of the job’s standard list file.
UNISON_EXIT yes|no
v If set to yes, the job is terminated
immediately if any command returns a
nonzero exit code.
v If set to no, the job continues to run if a
command returns a nonzero exit code.
Any other setting is interpreted as no.
LOCAL_RC_OK yes|no
v If set to yes, the user’s local configuration
script is run (if it exists), passing
$UNISON_JCL as the first argument. The
user may be allowed or denied this option.
See “Local configuration script - .jobmanrc”
on page 103 for more information.
v If set to no, the presence of a local
configuration script is ignored, and
$UNISON_JCL is run.
Any other setting is interpreted as no.

MAIL_ON_ABEND yes|no
v If set to yes, a message is mailed to the login
user’s mailbox if the job terminates with a
nonzero exit code. This can also be set to
one or more user names, separated by spaces,
and a message is mailed to each user. For
example, "root mis sam mary".
v If set to no, no messages are mailed if the job
abends. Abend messages have the following
format:
cpu#sched.job
jcl-file failed with exit-code
Please review standard-list-filename

SHELL_TYPE standard|user|script
v If set to standard, the first line of the jcl file is
read to determine which shell to use to run
the job. If the first line does not start with #!,
then /bin/sh is used to run the local
configuration script or $UNISON_JCL.
Commands are echoed to the job’s standard
list file.
v If set to user, the local configuration script or
$UNISON_JCL is run by the user’s login
shell ($UNISON_SHELL). Commands are
echoed to the job’s standard list file.
v If set to script (default), the local
configuration script or $UNISON_JCL is run
directly, and commands are not echoed
unless the local configuration script or
$UNISON_JCL contains a set -x command.
Any other setting is interpreted as standard.
USE_EXEC yes|no
v If set to yes, the job or the user's local
configuration script is run using the exec
command, thus eliminating an extra process.
If a sub-shell is requested (see SHELL_TYPE),
the shell being used is executed. In other
words, once the command or script is run, the
jobmanrc process no longer exists, which is
why USE_EXEC is forced to no if the
MAIL_ON_ABEND feature is enabled: in that
case, the process must return to jobmanrc to
allow the post-processing. This option is
therefore overridden if MAIL_ON_ABEND is
also set to yes.
v Any other setting is interpreted as no, in
which case the job or local configuration
script is run by another shell process.


Local configuration script - .jobmanrc


The local configuration script permits users to establish a desired environment for
the execution of their own jobs. Unlike the jobmanrc script, the .jobmanrc script
can be customized to perform different actions for different users. Each user
can customize the .jobmanrc script in their home directory to perform pre-
and post-processing actions. When a job is run, Tivoli Workload Scheduler
launches the jobmanrc script which sets necessary environment variables, launches
the command, and reports the job status to Jobman to log it in the current plan.
The .jobmanrc script is an extra step that occurs before the job is actually
launched.

To run the script, follow these guidelines:


1. To use the .jobmanrc script, the standard configuration script, jobmanrc, must
be installed, and the environment variable LOCAL_RC_OK must be set to yes
(see Table 29).
2. To allow the use of .jobmanrc by only specific users, set LOCAL_RC_OK to
yes and specify a list of permitted users in the file
TWShome/localrc.allow. Only users defined in this file are allowed to use
.jobmanrc. If this file does not exist, the user's name must not appear in the
TWShome/localrc.deny file. If neither of these files exists, the user is permitted to
use the local configuration script. Alternatively, for users that should not use
the .jobmanrc script, you can define a list of users in the TWShome/localrc.deny
file.
3. The local configuration script must be installed in the user's home directory
($HOME/.jobmanrc), and it must have execute permission.
4. Jobs are not automatically run; the command or script must be launched from
inside the .jobmanrc. Depending on the type of process activity you want to
perform, the command or script is launched differently. Use the following
general rules:
v If the job to launch is a command, then use eval
v If the job to launch is a script and no post-processing is needed, use either
exec or eval
v If the job to launch is a script and post-processing is needed, use eval

If you intend to use a local configuration script, it must, at a minimum, run the
job’s script file ($UNISON_JCL). The Tivoli-supplied standard configuration script,
jobmanrc, runs your local configuration script as follows:
$EXECIT $USE_SHELL $TWSHOME/.jobmanrc "$UNISON_JCL" $IS_COMMAND

The value of USE_SHELL is set to the value of the jobmanrc SHELL_TYPE variable
(see Table 29 on page 101). IS_COMMAND is set to yes if the job was scheduled or
submitted using the docommand construct. EXECIT is set to exec if the variable
USE_EXEC is set to yes (see Table 29 on page 101), otherwise it is null. All the
variables exported into jobmanrc are available in the .jobmanrc shell, however,
variables that are defined, but not exported, are not available.

The following is an example of a .jobmanrc script that does processing based on


the exit code of the user’s job:
#!/bin/sh
##### Start of .jobmanrc script #####
echo "*********************************"
echo "* Entering .jobmanrc processing *"
echo "*********************************"
echo ""

echo "**************************************"
echo "* Doing some pre-processing activity *"
echo "**************************************"
echo ""

echo "Setting variable USER_DEFINED_VARIABLES"


echo ""

USER_DEFINED_VARIABLES=some_value
export USER_DEFINED_VARIABLES

echo "************************************"
echo "* Launching the TWS command/script *"
echo "************************************"
echo ""

eval "$UNISON_JCL"
RETURN_CODE=$?

echo "*******************************************"
echo "* Executing post processing into .jobmanrc*"
echo "*******************************************"
echo ""

if [ $RETURN_CODE -gt 0 ]
then
echo "Return code from command = " $RETURN_CODE
echo "Setting Return Code to 0"
RETURN_CODE=0
fi

exit $RETURN_CODE
##### End of .jobmanrc script #####

Tivoli Workload Scheduler and Tivoli Management Framework


This section includes the following topics about Tivoli Workload Scheduler and the
Tivoli Management Framework:
v “The Tivoli Management Framework for non-Tivoli users”
v “Adding Tivoli administrators” on page 105
v “Backup master considerations” on page 107
v “Masters that do not support Tivoli Management Framework” on page 108

The Tivoli Management Framework for non-Tivoli users


The Tivoli Management Framework is an open, object-oriented framework that
includes a set of managers, brokers, and agents that conform to the Object
Management Group/Common Object Request Broker Architecture (OMG/CORBA)
specification. OMG/CORBA technology allows major differences between
computer operating systems to be hidden from the user, and it allows key services
to be encapsulated in objects that can be used by multiple management
applications. The Tivoli Management Framework provides platform independence,
unifying architecture for all applications, and a rich set of application program
interfaces (APIs) which have been adopted by the Desktop Management Task Force
(DMTF) and the Open Group (formerly X/Open) as a basis for a systems
management framework. Tivoli APIs provide common network and systems
management services, including scheduling, transaction support, configuration
profiles, and a generic object database user facility.

The basic unit of Tivoli Management Framework functionality is the Tivoli
management region. A region consists of one Tivoli server and the clients that the
server manages. The Tivoli server holds the database for that region. Depending on
the size and requirements of an environment, more than one region may be
defined. If multiple regions are present, they can either stand alone or they can be
linked together to share information and resources. Administrators with the proper
role can manage all exchanged resources in a set of connected regions from a
single system as if the resources were all in the local system's region. In general, a
Tivoli server can support up to 200 fully managed nodes. From Tivoli Management
Framework 3.6 and later, new services are placed on the Tivoli server and on some
managed nodes to allow those nodes to act as gateways for hundreds of endpoints.
This significantly increases the scope of a single region.

The Tivoli Management Framework provides a server-based implementation of a
CORBA Object Request Broker (ORB) and basic object adapter (BOA). It also
provides related object, management, and desktop services and includes an
implementation of the APIs adopted by Open Group for a systems management
framework. The object dispatcher (oserv) is the main component of the framework
runtime. It is implemented as a single multi-threaded process and runs in the
background of each Tivoli client within a region and of the Tivoli server for that
region. The object dispatcher consists of an object request broker, the BOA, and
related services. The object dispatcher running on the Tivoli server provides
additional services, including security and implementation inheritance resolution.

A Tivoli managed node runs the same software that runs on a Tivoli server. From a
managed node you can run the Tivoli desktop and directly manage other Tivoli
managed resources. A managed node has its own oserv service that runs
continuously and communicates with the oserv service on the Tivoli server. A
managed node also maintains its own client database. The primary difference
between a Tivoli server and a managed node is the size of the database. Also, you
cannot have a managed node without a Tivoli server in a Tivoli management
region.

Adding Tivoli administrators


Assume that you want to install the connector on the master domain manager so
that you can have a number of Job Scheduling Console clients. The current
Security file on your master is the following:
###########################################################
# Security File
###########################################################
# (1) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
# MASTER DOMAIN MANAGER
user mastersm cpu=$master + logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job access=@
schedule access=@
resource access=@
prompt access=@
file access=@
calendar access=@
cpu access=@
parameter name=@ ~ name=r@ access=@
userobj cpu=@ + logon=@ access=@
end
###########################################################
# (2) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON ANY
# WORKSTATION OTHER THAN THE MASTER DOMAIN MANAGER.

user sm logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job cpu=$thiscpu access=@
schedule cpu=$thiscpu access=@
resource cpu=$thiscpu access=@
prompt access=@
file access=@
calendar access=@
cpu cpu=$thiscpu access=@
parameter cpu=$thiscpu
~ name=r@ access=@
end
###########################################################

Suppose that you want these two users to use the Job Scheduling Console. After
you install the Tivoli Management Framework software, from the Tivoli desktop
you create an additional Tivoli administrator (beside the default one created by the
installation process) for each user. You call one mastersm and the other sm. You
then add the respective definitions so that the Security file looks like this:
###########################################################
# Security File
###########################################################
# (1) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
# MASTER DOMAIN MANAGER
user mastersm cpu=$master + logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job access=@
schedule access=@
resource access=@
prompt access=@
file access=@
calendar access=@
cpu access=@
parameter name=@ ~ name=r@ access=@
userobj cpu=@ + logon=@ access=@
end
###########################################################
# (2) TIVOLI ADMINISTRATOR DEFINITION FOR MAESTRO OR ROOT USERS
# LOGGED IN ON THE MASTER DOMAIN MANAGER
user mastersm cpu=$framework + logon=mastersm
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job access=@
schedule access=@
resource access=@
prompt access=@
file access=@
calendar access=@
cpu access=@
parameter name=@ ~ name=r@ access=@
userobj cpu=@ + logon=@ access=@
end
###########################################################
###########################################################
# (3) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON ANY
# WORKSTATION OTHER THAN THE MASTER DOMAIN MANAGER.
user sm logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------

job cpu=$thiscpu access=@
schedule cpu=$thiscpu access=@
resource cpu=$thiscpu access=@
prompt access=@
file access=@
calendar access=@
cpu cpu=$thiscpu access=@
parameter cpu=$thiscpu
~ name=r@ access=@
end
###########################################################
# (4) TIVOLI ADMINISTRATOR DEFINITION FOR MAESTRO OR ROOT
# USERS LOGGED IN ON ANY WORKSTATION OTHER THAN THE MASTER
# DOMAIN MANAGER.
user sm cpu=$framework + logon=sm
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job cpu=$thiscpu access=@
schedule cpu=$thiscpu access=@
resource cpu=$thiscpu access=@
prompt access=@
file access=@
calendar access=@
cpu cpu=$thiscpu access=@
parameter cpu=$thiscpu
~ name=r@ access=@
end
##########################################################

The new Security file grants the original users the same privileges when they
use the Job Scheduling Console.

Furthermore, on the Tivoli server you could add two new logins for Tivoli
administrator mastersm:

maestro@rome.production.com
maestro@london.production.com

In this way, any authorized user who logs in to rome or london as maestro
acquires the privileges granted to mastersm.

Backup master considerations


If you want to install the connector on the backup master, have no existing
regions, and are not interested in implementing a full Tivoli management
environment, then you may want to install a new Tivoli server on the backup
master.

If you install a different Tivoli server on the backup master than on the master,
be sure to enable the same global options entries as on the master, that is:
v Start
v Plan and database audit levels
v time zone enable global option


Masters that do not support Tivoli Management Framework


If your master domain manager does not run on a platform that supports the
Tivoli Management Framework server or managed node and the connector, and
you want to be able to use the Job Scheduling Console, you can choose from the
following three options:
v Move the master domain manager to one of the supported platforms.
v Create a backup master on a supported platform.
v NFS mount the master databases to a fault-tolerant agent.

Moving the master domain manager


To move the master domain manager (from UNIX to UNIX):
1. Choose an existing fault-tolerant agent in the network or create one. If you
create a fault-tolerant agent, be sure you define it with Resolve Dependencies
and Full Status enabled.
2. On the master domain manager, tar TWShome/mozart/* and TWShome/network/*.
3. Untar these files in the fault-tolerant agent in directories with the same name.
4. Do not copy parameters or parameters.KEY from the home directory or you will
overwrite parameters that are unique to the fault-tolerant agent. Create a list
of parameters from both machines, adding the required ones to the
fault-tolerant agent.
5. On the fault-tolerant agent, edit the globalopts file to change the name of the
master to that of the fault-tolerant agent.
6. On the old master, use the switchmgr command to switch to the new master.
7. Cancel the existing final schedule in the current day production.
8. Add the final schedule onto the new master (use
composer "modify sched=oldmastername#final"

and change the workstation id on the schedule line).


9. After saving and adding, delete the oldmastername#final schedule with the
command:
composer "delete sched=oldmastername#final"
10. Submit the new final schedule into daily production.
11. On the old master, edit the globalopts file to reflect the name of the new master.

Note: It is not necessary to edit all the global options for each workstation in
the network. The nodes will acknowledge the new master at Jnextday
time when the master initializes them.

Creating a backup master


To create a backup master:
1. Install the Tivoli Management Framework. See the Tivoli Enterprise Installation
Guide for reference.
2. Install the Tivoli Workload Scheduler engine. See Chapter 3, “Installing using
the installation wizard,” on page 37 or Chapter 6, “Installing using customize,”
on page 57.
3. Install the connector. See the chapter that explains how to install the connector
in the Tivoli Workload Scheduler Job Scheduling Console User’s Guide.
4. Customize the workstation with Tivoli Workload Scheduler security and local
options. See Chapter 11, “Setting security,” on page 139 and Chapter 9,
“Optional customization,” on page 81.

5. Define the workstation in the Tivoli Workload Scheduler network. Either use
the Composer cpuname command or the Create Workstations window in the
Job Scheduling Console. Be sure you define it with Resolve Dependencies and
Full Status enabled. See the IBM Tivoli Workload Scheduler Reference Guide or the
IBM Tivoli Job Scheduling Console User’s Guide.

Mounting master domain manager databases


You can NFS mount the following master databases on a fault-tolerant agent that
runs on a supported platform:
v /usr/lib/maestro/mozart/globalopts (the operational copy)
v /usr/lib/unison/network/cpudata

The fault-tolerant agent must have Full Status and Resolve Dependencies enabled
in its workstation definition.

Before mounting the databases, make certain that the file system containing the
required directories has been included in the /etc/exports file on the master
workstation. If you choose to control the availability of the file system, make the
appropriate entries in the /etc/hosts or /etc/netgroup file in the master.
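For example, on an HP-UX master the /etc/exports entries might look like this (a
sketch; the fault-tolerant agent host name fta1 is illustrative, and the exports
syntax varies by platform):
/usr/lib/maestro/mozart -access=fta1
/usr/lib/unison/network -access=fta1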

The mount point on the fault-tolerant agent must be the same as on the master. For
example, on the fault-tolerant agent:

cd TWShome
/etc/mount mastername:mozart mozart
/etc/mount mastername:../unison/network ../unison/network

To have the databases mounted automatically, you can enter the mounts in the
/etc/checklist file.
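For example (a sketch in HP-UX /etc/checklist format; mastername as in the mount
commands above):
mastername:/usr/lib/maestro/mozart /usr/lib/maestro/mozart nfs rw 0 0
mastername:/usr/lib/unison/network /usr/lib/unison/network nfs rw 0 0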

If you use this solution, be aware that the parameters database in the fault-tolerant
agent is not the master’s but a local copy. This becomes an issue if you use parms
as part of the job definitions (in the task or login name), because at Jnextday time
all the parameters referenced with the ^ (caret) symbol in job definitions are
expanded from the parameters database in the master. You have two possible
workarounds for this issue:
v Create a script that uploads and changes the parameter values from the
fault-tolerant agent to the master. Run this script just before Jnextday. Making
Jnextday dependent on it will make sure that the parms are uploaded
successfully before Jnextday sets up production for the following day.
v On the master, move the parameters database to the mozart directory. Create
a link from the master to the home directory. Next, on the fault-tolerant agent
create a link from the parameters database in mozart to TWShome.

If you wish to enable the time zone feature in the Job Scheduling Console, you also
need to edit the local globalopts file on the fault-tolerant agent to set the timezone
enable entry.

Chapter 10. Integration with other IBM Tivoli products
IBM Tivoli Workload Scheduler provides out-of-the-box integration with the
following IBM products:
v IBM Tivoli NetView
v IBM Tivoli Business Systems Manager
v IBM Tivoli Enterprise Data Warehouse
v IBM Tivoli Distributed Monitoring (Classic Edition)
v IBM Tivoli Enterprise Console
v IBM Tivoli Management Framework
This chapter describes integration with Tivoli NetView, Tivoli Business Systems
Manager, and Tivoli Enterprise Data Warehouse.

Integration with Tivoli Distributed Monitoring (Classic Edition) and Tivoli
Enterprise Console is performed using the Tivoli Workload Scheduler Plus Module
version 8.2 and is described in the IBM Tivoli Workload Scheduler Plus Module version
8.2 User's Guide.

Integration with Tivoli Management Framework is a prerequisite for the Tivoli
Workload Scheduler connector, which is required to use the Job Scheduling
Console, and for the Tivoli Plus Module for Tivoli Workload Scheduler. Integration
with Tivoli Management Framework also allows you to manage all the physical
workstations where the Tivoli endpoint is installed, using Tivoli Framework based
product suite functionality.

Integration with IBM Tivoli Enterprise Data Warehouse


When your environment contains many products and services that manage and
monitor your IT enterprise, storing this data, generating reports, and analyzing the
data becomes a complex task. IBM Tivoli Enterprise Data Warehouse enables you
to collect this data in one place, a central data warehouse, and enables you to
construct an end-to-end view of your enterprise and view its components
independent of specific applications.

Tivoli Workload Scheduler provides a Tivoli Enterprise Data Warehouse
enablement pack to consolidate scheduling data in the Tivoli Enterprise Data
Warehouse database. The documentation for the warehouse enablement pack is on
Tivoli Workload Scheduler Installation Disk 2, in the path
tedw_apps_etl/aws/pkg/v820/doc/TivoliWorkloadScheduler8.2_for_TEDW.doc.

Integration with IBM Tivoli NetView


This section describes the integration of IBM Tivoli Workload Scheduler on UNIX
with NetView for AIX.

General
Tivoli Workload Scheduler/NetView is a NetView application that gives network
managers the ability to monitor and diagnose Tivoli Workload Scheduler networks
from a NetView management node.

It includes a set of submaps and symbols to view scheduler networks
topographically, and determine the status of job scheduling activity and critical
scheduler processes on each workstation. Menu actions are provided to start and
stop scheduler processing, and to run conman on any workstation in the network.

How Tivoli Workload Scheduler/NetView works


Tivoli Workload Scheduler/NetView consists of manager and agent software. The
manager runs on the NetView management nodes, and the agent runs on the
managed nodes. All nodes must have Tivoli Workload Scheduler for UNIX
installed. The manager (mdemon, which runs on AIX only) polls its agents (magent)
periodically to obtain information about scheduler processing. If the information
returned during a poll is different from that of the preceding poll, the color of a
corresponding symbol is changed to indicate a state change, for example, from
green (normal) to red (critical) or yellow (marginal). After you have taken action to
remedy a marginal or critical condition, the state of the corresponding symbol is
returned to normal by the next poll.

The agents also generate SNMP traps to inform the manager of asynchronous
events, such as job abends, stuck schedules, and restarted scheduler processes.
Although polling and traps are functionally independent, the information that
accompanies a trap can be correlated with symbol state changes. If, for example, a
scheduled job abends, the symbol for the workstation changes color, and a job
abend trap is logged in the NetView event log. By scanning the log, you can
quickly isolate the problem and take the appropriate action.

The muser process runs commands issued by a NetView user, and updates the
user’s map. An muser is started for each NetView user whose map has the Tivoli
Workload Scheduler/NetView application activated.

Types of information
The manager collects two types of information by polling its agents:
Job scheduling
Indicates the status of jobs and schedules in a Tivoli Workload Scheduler
network. The information is provided by a single agent, usually running on the
master of the network. Alternatively, the information can be provided by an
agent running on a fault-tolerant agent that has been configured as a backup
master.
Monitored process
Indicates the status of scheduler critical processes on a workstation (netman,
mailman, batchman, jobman, mailman servers, writers, and all extended agent
connections). This information is provided only by local agents running on each
workstation.

Definitions
Cpu and Node
The terms cpu and node are used interchangeably to mean a workstation.
Management Node
A NetView management node that runs a Tivoli Workload Scheduler/NetView
manager (mdemon). In NetView 6.x and later, the management node functions
can be distributed across a server and one or more clients.
Managed Nodes
The nodes that comprise a Tivoli Workload Scheduler network and that have
the Tivoli Workload Scheduler/NetView agent (magent) running.

Managed Tivoli Workload Scheduler Network
A group of nodes that are configured as a Tivoli Workload Scheduler network,
and whose job scheduling status is managed from a NetView management
node. More than one network can be managed from a single management node.

General Requirements
The basic configuration requirements are:
v Management nodes (server and clients) must have Tivoli Workload Scheduler
installed, but need not be members of managed scheduler networks.
v Tivoli Workload Scheduler/NetView managers (mdemon) run exclusively on
AIX. Tivoli Workload Scheduler/NetView agents (magent) run on AIX, HP-UX,
and Solaris.
v There must be at least one managed node in a managed scheduler network. To
obtain accurate job scheduling information, this should be either the master, or
the backup master, that is, a fault-tolerant agent with fullstatus on and
resolvedep on in its definition.

Configuration
The NetView management node can be a member of a managed Tivoli Workload
Scheduler network or not.

The Tivoli Workload Scheduler/NetView agent can run on master workstations,
providing accurate information about job scheduling in their respective Tivoli
Workload Scheduler networks. Alternatively, the agents can run on backup
masters. In either case, additional agents can be installed on other scheduler
workstations to monitor the status of their critical processes.

When you plan for your configuration, you should consider the following:
v If you choose to use a master workstation, or a backup master, as the NetView
management node, it can also have a Tivoli Workload Scheduler/NetView agent
that provides job scheduling status for its Tivoli Workload Scheduler network.
This minimizes Tivoli Workload Scheduler/NetView manager-agent traffic when
polling. However, you must also consider the additional workload imposed by
NetView management, particularly in large networks and those with several
NetView applications, which can noticeably slow down Tivoli Workload
Scheduler processing.
v Choosing an existing Tivoli Workload Scheduler standard agent as a NetView
management node, or making the current NetView management node a
standard agent, has the advantage of not overloading the master, and of letting
you use Tivoli Workload Scheduler on that node to schedule NetView
management tasks, such as clearing out log files.

Installing the integration software


The Tivoli Workload Scheduler/NetView software is delivered and installed as part
of Tivoli Workload Scheduler on UNIX. Before performing the following steps for
Tivoli Workload Scheduler/NetView, make sure that Tivoli Workload Scheduler for
UNIX is properly installed on the management node (server and clients) and on
each managed node.

Because the purpose of Tivoli Workload Scheduler/NetView is to monitor the
operation of Tivoli Workload Scheduler for UNIX, new users should:
1. Install and implement Tivoli Workload Scheduler for UNIX to the point that
you have a good understanding of its operation, and are successfully
scheduling and tracking your own jobs.

2. Follow the steps outlined below to install Tivoli Workload Scheduler/NetView.

Running the customize script


As part of the installation procedure, you run the Tivoli Workload
Scheduler/NetView customize script on the NetView management nodes and
managed nodes. The customize script performs the following steps:

Determining the Tivoli Workload Scheduler Home Directory: The home
directory of the Tivoli Workload Scheduler user, usually /usr/lib/maestro, is
referred to as TWShome throughout this section. This is the directory you defined at
the time you installed Tivoli Workload Scheduler. The Tivoli Workload
Scheduler/NetView customize script determines the user’s home directory, and
uses it to correctly install Tivoli Workload Scheduler/NetView.

Using customize on managed nodes: On managed nodes, customize does the
following:
1. Modifies TWShome/StartUp to add a command to run the Maestro/NV agent
(magent).
2. Creates the following configuration files:
TWShome/BmEvents.conf
TWShome/MAgent.conf
3. In addition, for each AIX node:
a. Modifies /etc/snmpd.conf to add a new smux agent, and define the
destination node for traps.
b. Modifies /etc/snmpd.peers to configure the new smux agent.
c. Modifies /etc/mib.defs to add Unison Software’s MIB.
4. For NetView version 6.x and above, if the managed node is a NetView client,
the Tivoli Workload Scheduler Application Registration File (ARF) is installed.

Using customize on the management node and version 6.x NetView Server: On
the management node and version 6.x NetView server, customize does the
following:
1. Performs the steps listed above for managed nodes.
2. Registers the Tivoli Workload Scheduler/NetView mdemon process so that it is
started by NetView.
3. Adds Unison Software’s enterprise traps.
4. Copies the Tivoli Workload Scheduler fields, application, MIB, and help files
into the appropriate directory structure.

Using customize on version 6.x NetView clients: On version 6.x NetView clients,
customize does the following:
1. Performs the steps listed above for managed nodes.
2. Copies the Tivoli Workload Scheduler/NetView application and help files into
the appropriate directory structure. The Application Registration File (ARF)
installed by Tivoli Workload Scheduler/NetView uses NetView’s nvserver_run
facility to launch the application on the server.

Reviewing changes: If you want to review the changes made by customize before
installing any files, run the script with the -noinst option, as follows:
/bin/sh TWShome/OV/customize -noinst

This creates the files in the /tmp directory. As customize runs, it tells you where to move
the files to complete the installation. Alternatively, you can remove the /tmp files
and rerun customize without the -noinst option.

Removing changes: To uninstall Tivoli Workload Scheduler/NetView, and to
remove the changes made by customize, run the decustomize script:
/bin/sh TWShome/OV/decustomize

customize synopsis: The syntax of customize is:

customize [-uname name] [-prev3] [-noinst] [-client] [-manager host ]

where:
[-uname name]
IBM Tivoli Workload Scheduler user name.
-prev3 Include this option if your version of NetView is prior to version 3.
-noinst
Do not overwrite existing NetView configuration files. See “Reviewing
changes” on page 114.
-client For NetView version 6.x and later, include this option for management
clients.
-manager
The host name of the management node. For NetView version 6.x and
above, this is the host name of the NetView server. This is required for
managed nodes and NetView clients. Do not use this option on the
management node or NetView server.

Installing
The installation procedure consists of the following two steps:
v Installing on managed nodes and on NetView clients
v Installing on the management node or NetView server

Installing on managed nodes and NetView clients: The management node can
also be a managed node. For the management node or NetView server, skip this
step and go to "Installing on the management node or NetView server."
1. Make certain that no Tivoli Workload Scheduler processes are running. If
necessary, issue a conman shutdown command.
2. Log in as root.
3. For managed nodes, including those that are also NetView clients that are not
used to manage Tivoli Workload Scheduler, run the customize script as follows:
/bin/sh <TWShome>/OV/customize -manager host

where host is the host name of the management node.


4. For NetView clients that are used to manage Tivoli Workload Scheduler, run
customize as follows:
/bin/sh <TWShome>/OV/customize -client [-manager host]

where host is the host name of the management node.


5. Run StartUp:
<TWShome>/StartUp

Installing on the management node or NetView server:


1. Make certain that no Tivoli Workload Scheduler processes are running. If
necessary, issue a conman shutdown command.
2. Log in as root.
3. Run the customize script as follows:
/bin/sh <TWShome>/OV/customize
4. If you do not want the Tivoli Workload Scheduler/NetView agent to run on
this node, edit <TWShome>/StartUp, and remove the run of magent.
5. If you want Tivoli Workload Scheduler to run on this node, run StartUp:
<TWShome>/StartUp
6. Start the Tivoli Workload Scheduler/NetView daemon (mdemon) as follows:
/usr/OV/bin/ovstart Unison_Maestro_Manager

or, for NetView versions below 3, stop and start as follows:


/usr/OV/bin/ovstop
/usr/OV/bin/ovstart

Setting up
Follow these steps:
1. Determine the user who will be managing Tivoli Workload Scheduler with
NetView.
a. On each managed node, enter the host name of the management node in
the user’s $HOME/.rhosts file.
b. To allow the user to run certain scheduler commands, you must add a user
definition to the scheduler security file. You can, for example, give this user
the same capabilities as the default maestro user. For more information about
Tivoli Workload Scheduler security, refer to the IBM Tivoli Workload
Scheduler Planning and Installation book.
2. On the management node, run NetView.
3. Bring up the map you intend to use.
a. From the File menu, select Describe Map....
b. When the Map Description dialog box appears, select Maestro-Unison
Software(c) from the Configurable Applications list, and click Configure
For This Map....
c. When the Configuration dialog box appears, click True under Enable
Maestro for this map.
d. Click Verify.
e. Click OK to close the Configuration dialog box.
f. Click OK to close the Map Description dialog box.
4. If you want to use the MIB browser, load the Tivoli Workload Scheduler MIB as
follows:
a. From the Options menu, select Load/Unload MIBs:SNMP... .
b. When the Load/Unload MIB dialog box appears, click Load.
c. When the Load MIB From File dialog box appears, enter
/usr/OV/snmp_mibs/Maestro.mib

in the MIB File to Load field. Click OK.


d. Click Close to close the Load/Unload MIBs dialog box.

5. If the management node is not also a managed Tivoli Workload Scheduler
node, or if you will be managing more than one Tivoli Workload Scheduler
network, use the NetView object description function to identify the managed
nodes where Tivoli Workload Scheduler/NetView agents are running.
a. Move down the IP Internet tree to the IP segment submap showing all the
nodes.
b. Select a node where a Tivoli Workload Scheduler/NetView agent is
running. Press Ctrl-O to open the Object Description dialog.
c. On the Object Description dialog, select General Attributes from the Object
Attributes list, and click View/Modify Object Attributes.
d. On the Attributes for Object dialog, click True under the isUTMaestroAgent
attribute.
e. Click OK to close the Attributes for Object dialog.
f. Click OK to close the Object Description dialog.
g. Repeat steps 5b through 5f for each node where a Tivoli Workload
Scheduler/NetView agent is running.
h. Return to the Root submap. From the Tools menu, select Tivoli Workload
Scheduler, then select Re-discover.
i. When the Unison Software(c) symbol appears, double-click it to open the
Unison Software(c) submap displaying a symbol for each Tivoli Workload
Scheduler network. Double-click a Tivoli Workload Scheduler network
symbol to open a Tivoli Workload Scheduler Network submap showing a
topographical representation of the network.

You are now ready to use Tivoli Workload Scheduler/NetView. On the Tivoli
Workload Scheduler master issue a conman start@ command to restart Tivoli
Workload Scheduler in the network. This can be done in NetView on the Tivoli
Workload Scheduler Network submap as follows:
1. Select all of the nodes in the network.
2. From the Tools menu, select Tivoli Workload Scheduler, and then select Start.

Objects, symbols, and submaps


The Tivoli Workload Scheduler/NetView objects and symbols are described in
Table 30.
Table 30. Tivoli Workload Scheduler/NetView objects and symbols
Symbol Description
The Unison Software application. This symbol appears
on the root submap. Its color indicates the aggregate
status of all Tivoli Workload Scheduler networks.

On the Unison Software (c) submap, a Tivoli
Workload Scheduler network. Its color indicates the
aggregate status of all workstations and links that
comprise a Tivoli Workload Scheduler network.

On the Tivoli Workload Scheduler Network submap, a
host. Its color indicates the aggregate status of the
host, all its agents, and their links.

On a Tivoli Workload Scheduler Network submap, a
topographical representation of the workstations and
links that comprise a Tivoli Workload Scheduler
network. The color of a workstation symbol indicates
the status of job scheduling on the workstation. The
color of a link symbol (a line) indicates the status of
the workstation link. Workstation symbols also appear
on IP node submaps.

If a workstation on the network is a host but is not the
master, the workstation is represented by a network
symbol (for example, SLAVE3). A Host Network
submap exists for the host and its attached agents.
The Tivoli Workload Scheduler software on a
workstation. This symbol appears on the IP node
submap. Its color indicates the aggregate status of all
monitored processes on a Tivoli Workload Scheduler
workstation.

Note: Extended agents have no monitored processes.


The monitored processes on a Tivoli Workload
Scheduler workstation. These symbols appear on the
monitored processes submap. The color of a process
symbol indicates the status of the process. Clicking the
NETMAN symbol performs the StartUp action (see
“Menu actions” on page 119). Clicking the MAGENT
symbol starts the magent process on the workstation.

Status of Tivoli Workload Scheduler/NetView symbols


The color of a Tivoli Workload Scheduler/NetView symbol indicates its current
status. The colors are described in Table 31. The status of the monitored process
symbols and Tivoli Workload Scheduler workstation symbols is propagated up the
submap tree in the manner defined in NetView. Propagation is defined by selecting
Describe Map from the File menu, and selecting the desired Compound Status.
Table 31. Tivoli Workload Scheduler/NetView status

Color                 Monitored processes       Job scheduling             Communication
                      (monitored process        (maestro cpu symbols)      (maestro link symbols)
                      symbols)
Black                 na                        na                         Up or unknown *
Blue (Unknown)        *                         *                          na
Green (Normal/Up)     Running                   No unacknowledged events   na
Yellow (Marginal)     Stopped                   Abended, failed, or        Down (unlinked)
                                                suspended job(s)
Red (Critical/Down)   Abended or gone           Abended, stuck, or         Down (error)
                                                suspended schedule(s)
Tan (Unmanaged)       No longer being polled    No longer being polled     na
Dk Green              Ignore until              Ignore until               na
(Acknowledged)        unacknowledged            unacknowledged

*The Unknown state is set by Tivoli Workload Scheduler/NetView whenever the
status of an object cannot be determined. This may be the result of one of the
following situations:
v If a fault-tolerant or standard agent is not linked to the master, job scheduling
status on that workstation is unknown.
v If communications with the mdemon process fails, the status of all monitored
processes and all job scheduling is unknown.
v If communications with an agent process fails, its status is Critical, and all other
monitored processes on that workstation are unknown.
v If an agent process cannot be found on the master, the backup master, or a
fullstatus on fault-tolerant agent, the job scheduling status on all the
workstations of the Tivoli Workload Scheduler network is unknown.
v If the Tivoli Workload Scheduler master is down, or unmanaged, the
communications (link) status is unknown.

For more information about Tivoli Workload Scheduler workstation status, see
“Configuring workstation status in NetView” on page 127.

Extended agent mapping


If the host of an extended agent is the master, the extended agent is displayed as
connected to the master in the Tivoli Workload Scheduler Network submap. If the
host of an extended agent is not the master:
v The extended agent is not displayed in the Tivoli Workload Scheduler Network
submap.
v The host is represented by a network symbol and extended agents are displayed
in the Host Network submap.

Menu actions
To use Tivoli Workload Scheduler/NetView menu actions, select Tivoli Workload
Scheduler from the Tools menu. These actions are also available from the object
context menu by right-clicking a symbol.

The menu actions are:


View Open a child submap for a Tivoli Workload Scheduler/NetView symbol.
Choosing View after selecting a workstation symbol on the Tivoli
Workload Scheduler network submap opens the monitored processes
submap. Choosing View after selecting a workstation symbol on the IP
node submap returns to the Tivoli Workload Scheduler network submap.
Master conman
Run the conman command line program on the Tivoli Workload Scheduler
master. Running the program on the master permits you to run all conman
commands (except shutdown) for any workstation in the Tivoli Workload
Scheduler network. For information about conman commands, see IBM
Tivoli Workload Scheduler Reference.
Acknowledge
       Acknowledge the status of selected Tivoli Workload Scheduler/NetView
       symbols. When acknowledged, the status of a symbol returns to normal. It
       is not necessary to acknowledge critical or marginal status for a monitored
       process symbol, as it returns to normal when the monitored process itself
       is running. Acknowledge the critical or marginal status of Tivoli Workload
       Scheduler workstation symbols either before or after you have taken some
       action to remedy the problem; otherwise, the symbol does not return to
       normal.

Conman
       Run the conman command line program on the selected Tivoli Workload
       Scheduler workstations. Running the program on a workstation other than
       the master permits you to run all conman commands on that workstation
       only. For information about conman commands, see IBM Tivoli Workload
       Scheduler Reference. For an extended agent, conman is run on its host.
Start Issue a conman start command for the selected workstations. By default,
the command for this action is:
remsh %H %P/bin/conman ’start %c’
Down (stop)
Issue a conman stop command for the selected workstations. By default,
the command for this action is:
remsh %H %P/bin/conman ’stop %c’
StartUp
Run the Tivoli Workload Scheduler StartUp script on the selected
workstations. By default, the command for this action is:
remsh %h %P/StartUp

For an extended agent, conman is run on its host.


Rediscover
Locate new agents and new Tivoli Workload Scheduler objects, and update
all Tivoli Workload Scheduler/NetView submaps.

Note: Run Rediscover each time you change the Tivoli Workload
Scheduler workstation configuration.

The substituted parameters in the command lines are:


%c The Tivoli Workload Scheduler workstation name of a selected workstation
symbol.
%D The current DISPLAY name.
%h The host name of a selected workstation symbol.
%H The host name of the Tivoli Workload Scheduler master.
%p The process name of a selected process symbol, or “MAESTRO” if it is not
a process.
%P     The maestro user's home directory (usually /usr/lib/maestro).
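
For example, with the default Start action command shown above, selecting a
workstation named SITE1 in a network whose master host is mars (both names are
purely illustrative) would cause the following command to run:

remsh mars /usr/lib/maestro/bin/conman 'start SITE1'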

Changing the commands


The commands run by selecting Tivoli Workload Scheduler/NetView actions can
be modified in NetView by choosing Describe Map from the File menu. When the
Map Description dialog box appears, select Maestro-Unison Software from the
Configurable Applications list, and click Configure For This Map. Make your
changes in the Configuration dialog box. For instructions, refer to your NetView
documentation or online help. Also see notes below.
Notes:
1. The user running NetView must be defined in the Tivoli Workload Scheduler
security file to run certain conman commands. For more information, see IBM
Tivoli Workload Scheduler Reference.
2. Remove the remsh commands if they are not required. For example, if the
   management node is the Tivoli Workload Scheduler master, the remsh for the
   Master conman, Start, and Down (stop) actions is not required.
3. As written, the remsh commands require that the NetView user be able to log
   in on other nodes without a password prompt.

Tivoli Workload Scheduler/NetView events


The Tivoli Workload Scheduler/NetView events are listed in IBM Tivoli Workload
Scheduler Reference. The first five (1-54) indicate the status of critical processes and
connections that are monitored by the Tivoli Workload Scheduler/NetView agents,
including the agents themselves (event 1). The remaining events (101-252) indicate
the status of the job scheduling activity.

All of the listed events can result in SNMP traps generated by the Tivoli Workload
Scheduler/NetView agents. Whether or not traps are generated is controlled by
options set in the configuration files of the agents. See “Tivoli Workload
Scheduler/NetView configuration files” on page 123 for more information.

The Additional Actions column in Table 32 lists the actions available to the
operator for each event. The actions can be initiated by selecting Additional
Actions from the Options menu, then selecting an action from the Additional
Actions panel.

Note: You must have the appropriate Tivoli Workload Scheduler security access to
perform the chosen action.
Table 32. Tivoli Workload Scheduler/NetView events

Trap #  Name                  Description                                 Additional Actions
1 *     uTtrapReset           The magent process was restarted.           na
51      uTtrapProcessReset    A monitored process was restarted. This     na
                              event is reported by default in the
                              BmEvents.conf file.
52 *    uTtrapProcessGone     A monitored process is no longer present.   na
53 *    uTrapProcessAbend     A monitored process abended.                na
54 *    uTrapXagentConnLost   The connection between a host and xagent    na
                              has been lost.
101 *   uTtrapJobAbend        A scheduled job abended.                    Show Job, Rerun Job, Cancel Job
102 *   uTtrapJobFailed       An external job is in the error state.      Show Job, Rerun Job, Cancel Job
103     uTtrapJobLaunch       A scheduled job was launched successfully.  Show Job, Rerun Job, Cancel Job
104     uTtrapJobDone         A scheduled job finished in a state other   Show Job, Rerun Job, Cancel Job
                              than ABEND.
105 *   uTtrapJobUntil        A scheduled job's UNTIL time has passed;    Show Job, Rerun Job, Cancel Job
                              it will not be launched.
111     uTrapJobCant          A scheduled job could not be launched.      Show Job, Rerun Job, Cancel Job
151 *   uTtrapSchedAbend      A schedule ABENDed.                         Show Schedule, Cancel Schedule
152 *   uTtrapSchedStuck      A schedule is in the STUCK state.           Show Schedule, Cancel Schedule
153     uTtrapSchedStart      A schedule has started execution.           Show Schedule, Cancel Schedule
154     uTtrapSchedDone       A schedule has finished in a state other    Show Schedule, Cancel Schedule
                              than ABEND.
155 *   uTtrapSchedUntil      A schedule's UNTIL time has passed; it      Show Schedule, Cancel Schedule
                              will not be launched.
201 *   uTtrapGlobalPrompt    A global prompt has been issued.            Reply
202 *   uTtrapSchedPrompt     A schedule prompt has been issued.          Reply
203 *   uTtrapJobPrompt       A job prompt has been issued.               Reply
204 *   uTtrapJobRerunPrompt  A job rerun prompt has been issued.         Reply
251     uTtrapLinkDropped     The link to a workstation has closed.       Link
252 *   uTtrapLinkBroken      The link to a workstation has closed due    Link
                              to an error.

* These traps are enabled by default.

Polling and SNMP traps


Because SNMP uses an unreliable transport protocol (UDP), Tivoli Workload
Scheduler/NetView does not rely on SNMP traps to indicate the status of its
symbols. Instead, the manager polls its agents periodically, requesting specific MIB
values. The returned values are compared with those returned by the previous
poll, and differences are indicated as status changes in Tivoli Workload
Scheduler/NetView symbols. The default polling interval is one minute. See
“Tivoli Workload Scheduler/NetView configuration options” on page 126 for
information about changing the polling interval.

To obtain critical process status, the manager polls all of its agents. For job
scheduling status, the manager determines which of its agents is most likely to
have the required information, and polls only that agent. The choice is made in the
following order of precedence:
1. The agent running on the Tivoli Workload Scheduler master.
2. The agent running on a Tivoli Workload Scheduler backup master.
3. The agent running on any Tivoli Workload Scheduler fault-tolerant agent that
has fullstatus on in its workstation definition.

Enabling Tivoli Workload Scheduler/NetView traps provides the following
advantages:
1. Event-specific variables are included with each trap.
2. Traps are logged in NetView's event log.
If job abend traps (101) are enabled, for example, sufficient information is collected
to identify an abended job, its schedule, and the workstation on which it runs. This
is useful when deciding what actions to take to remedy a problem.

You may choose to disable some or all of the Tivoli Workload Scheduler/NetView
traps for the following reasons:
1. To reduce network traffic.
2. To avoid confusion on the part of other NetView users by limiting the number
of logged events.

For more information about the Unison Software enterprise-specific traps and
their variables, see "Re-configuring enterprise-specific traps" on page 127.

Tivoli Workload Scheduler/NetView configuration files


On each managed node (each node running a Tivoli Workload Scheduler/NetView
agent), the selection of events and how they are reported is controlled by setting
variables in two configuration files:
v The BmEvents configuration file controls the reporting of job scheduling events
(101-252 in Table 32) by the mailman and batchman production processes. These
events are passed on to the agent, which may convert them to SNMP traps,
depending on the settings in its configuration file.
v The MAgent configuration file controls reporting by the Tivoli Workload
Scheduler/NetView agent, magent. Events selected in this file are turned into
SNMP traps, which are passed to NetView by the Tivoli Workload
Scheduler/NetView manager, mdemon, on the management node. The traps can
also be processed by other network management systems.

The BmEvents configuration file


The BmEvents configuration file is named <TWShome>/BmEvents.conf. Use it to
configure Tivoli Workload Scheduler production processes on each workstation
that has an agent installed. Its contents are described below.
# comment
A comment line.
OPTIONS=MASTER|OFF
       If the value is set to MASTER, then all job scheduling events gathered by
       that workstation are reported. If that workstation is the master domain
       manager or the backup master domain manager with full status on, then
       all scheduling events from the scheduling environment are reported. If the
       value is set to OFF, no job scheduling events are reported from that
       workstation. If the parameter is commented out, it defaults to MASTER on
       the master domain manager workstation; on any other workstation, only
       the job scheduling events regarding that workstation itself are reported.
EVENT= n [ n ...]
The list of events to be reported. Event numbers must be separated by at
least one space. If omitted, the events reported by default are:
51 101 102 105 151 152 155 201 202 203 204 251 252

Event 51 causes mailman and batchman to report the fact that they were
restarted. Events 1, 52, and 53 are not valid in this file (see “The MAgent
configuration file”).

If the EVENT parameter is included, it completely overrides the defaults. To
remove only event 102 from the list, for example, you must enter the following:
EVENT=51 101 105 151 152 155 201 202 203 204 251 252

See Table 32 on page 121 for a description of events.


PIPE=filename
If set, job scheduling events are written to a FIFO file. To have events sent
to the Tivoli Workload Scheduler/NetView agent, the setting must be:
PIPE=MAGENT.P

A BmEvents configuration file is included with the Tivoli Workload Scheduler
software. It contains several comment lines and a single parameter setting:
PIPE=MAGENT.P

This causes events to be reported as follows:
v If installed on the master, it will report all job scheduling events (101-252) for all
workstations in the network. If installed on any other workstation, no job
scheduling events will be reported. The process restart event (51) is reported
regardless of the workstation type.
v The following events are reported:
51 101 102 105 151 152 155 201 202 203 204 251 252
v Event information is written to a FIFO file named MAGENT.P, which is read by the
Tivoli Workload Scheduler/NetView agent.
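
As an illustration (not the file as shipped), a BmEvents.conf customized for a
master workstation that reports the default event list without event 102 might
contain the following settings:

# Report job scheduling events for the whole network
OPTIONS=MASTER
# Default event list with event 102 removed
EVENT=51 101 105 151 152 155 201 202 203 204 251 252
# Write events to the FIFO file read by the agent
PIPE=MAGENT.P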

The MAgent configuration file


The MAgent configuration file is named <TWShome>/MAgent.conf. Use it to configure
the agent on each workstation. Its contents are described below.
# comment
A comment line.
OPTIONS=MASTER|OFF
       If set to MASTER, the agent on this workstation will send the job
       scheduling events read from the MAGENT.P file as SNMP traps. If set to
       OFF, no job scheduling traps are generated by this workstation. If
       omitted, it defaults to MASTER on the master, and OFF on other
       workstations.
This variable is required only if the master will not be used to generate job
scheduling traps for the network. For example, if the master is not a
managed node (no agent is installed), you should set this variable to
MASTER on a backup master that has an agent installed.
EVENT= n [ n ...]
The list of events to be sent as SNMP traps. With the exception of events 1,
52, and 53, traps will not be generated unless the corresponding events are
turned on in the BmEvents configuration file. Event numbers must be
separated by at least one space. If omitted, the events sent as traps by
default are:
1 52 53 54 101 102 105 151 152 155 201 202 203 204 252

Event 1 (magent restarted) cannot be turned off.

If this parameter is included, it completely overrides the defaults. To remove
only event 102 from the list, for example, you must enter the following:
EVENT=1 52 53 54 101 105 151 152 155 201 202 203 204 252

See Table 32 on page 121 for a description of events.


+name [pidfilename]
By default, the list of processes monitored by the Tivoli Workload
Scheduler/NetView agent contains the following processes: magent,
netman, mailman, batchman, jobman, all mailman servers, all writers, and
all extended agent connections. Use this syntax to add processes to the list.
If it is not a Tivoli Workload Scheduler process, you must include its PID
file name. Some examples are:
+SENDMAIL /etc/sendmail.pid
+SYSLOG /etc/syslogd.pid
-name  Use this syntax to remove processes from the list of monitored processes.
       To remove writer processes, use this form:
       -cpuid:writer

       For example, to remove the writers for all workstations with ids starting
       with SYS, enter:
       -SYS@:WRITER

       To remove all writers, enter:
       -@:WRITER

       To remove mailman servers 5 and A, enter:
       -SERVER5
       -SERVERA

       To remove all mailman servers, enter:
       -SERVER@

An MAgent configuration file is included with the Tivoli Workload
Scheduler/NetView software. It contains only comment lines; no parameters are
set. This causes SNMP traps to be generated as follows:
v If installed on the master, traps are generated for job scheduling events (101-252)
on all workstations in the network. If installed on any other workstation, no job
scheduling traps are generated.
v The following events result in SNMP traps:
1 52 53 54 101 102 105 151 152 155 201 202 203 204 252
v The following processes are monitored: magent, netman, mailman, batchman,
jobman, all mailman servers, all writers, and all extended agent connections.
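
For example (an illustrative sketch, not the shipped file), an MAgent.conf that
adds the system log daemon to the monitored list and removes all writer
processes might contain:

# Generate job scheduling traps from this workstation
OPTIONS=MASTER
# Monitor a non-Tivoli Workload Scheduler process (PID file required)
+SYSLOG /etc/syslogd.pid
# Do not monitor writer processes
-@:WRITER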

Monitoring writers and servers


writer and mailman server processes are started and stopped when Tivoli
Workload Scheduler workstations are linked and unlinked. Their transitory nature
and the resulting number of status changes in NetView can cause confusion,
particularly in large Tivoli Workload Scheduler networks where linking and
unlinking are common. For this reason, you can remove writer and mailman
server processes from the list of monitored processes.

Tivoli Workload Scheduler/NetView configuration options


Tivoli Workload Scheduler/NetView submaps, symbols, and objects can be
modified like others in NetView. The following topics describe some specific
configuration options for Tivoli Workload Scheduler/NetView.

Agent scan rate


By default, the Tivoli Workload Scheduler/NetView agents scan and update the
status of their monitored processes every 60 seconds. To change the rate:
1. Log in on the managed node and edit the file <TWShome>/StartUp.
2. Add the -timeout option to the magent command line.
For example, to change the rate to 120 seconds, make the following change:
<TWShome>/bin/magent -peers hosts -timeout 120

Manager polling rate


The Tivoli Workload Scheduler/NetView manager (mdemon) polls its agents to
retrieve status information about the managed nodes. The rate is defined in the file
/usr/OV/lrf/Mae.mgmt.lrf on the management node. Unless otherwise specified,
the polling rate defaults to 60 seconds.

To change the rate:


1. Edit the file to add the -timeout option to the mdemon command line. For
example, to change the rate to 120 seconds, make the following change:
Unison_Software_Maestro_Manager: <TWShome>/bin/mdemon:
OVs_YES_START:pmd,ovwdb:-pmd,-timeout,120:OVs_WELL_BEHAVED
2. After making a change, delete the old registration by running the ovdelobj
command.
3. Register the manager by running the ovaddobj command and supplying the
name of the lrf file.
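
For example, assuming the default lrf file location shown above, the
re-registration sequence might look like the following:

ovdelobj /usr/OV/lrf/Mae.mgmt.lrf
ovaddobj /usr/OV/lrf/Mae.mgmt.lrf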

For more information, review the man pages for ovaddobj(8) and lrf(4). See also
"Configuring agents in NetView."

Configuring agents in NetView


To change the configuration of Tivoli Workload Scheduler/NetView agents in
NetView, follow these steps:
1. Move down the IP Internet tree to the IP Segment submap showing all the
nodes.
2. Select a node where a Tivoli Workload Scheduler/NetView agent is running.
Press Ctrl-O to open the Object Description panel.
3. On the Object Description panel, select Maestro - Unison Software(c) from the
Object Attributes list.
4. Click the View/Modify Object Attributes button.
5. On the Attributes for Object panel:
a. To ignore this agent altogether, click False under Does a Maestro agent
   exist on this cpu?
b. To change the rate at which mdemon polls this agent, enter the number of
seconds under Enter the number of seconds between polling. If this
number is other than zero, it overrides the rate defined for the mdemon
process (see “Manager polling rate”).
c. Click Verify, and then OK to close the Attributes for Object panel.
6. Click OK to close the Object Description panel.

Configuring workstation status in NetView


To modify the way status is indicated for a Tivoli Workload Scheduler workstation
symbol, follow these steps:
1. Select a workstation symbol on the Tivoli Workload Scheduler network
submap.
2. Press Ctrl-O to open the Object Description panel.
3. On the Object Description dialog, select Tivoli Workload Scheduler from the
Object Attributes list.
4. Click View/Modify Object Attributes.
5. On the Attributes for Object dialog, click True or False to either ignore or
recognize the various job scheduling events. For example, to ignore job abend
events, click True under Tivoli Workload Scheduler should ignore JobAbend
Events.
6. Click Verify, and then OK to close the Attributes for Object panel.
7. Click OK to close the Object Description panel.

Unison software MIB


For a complete listing of the Unison Software enterprise MIB, review the file
<TWShome>/OV/Maestro.mib.

Re-configuring enterprise-specific traps


The Tivoli Workload Scheduler/NetView enterprise-specific traps are configured
with default messages that will serve most users’ needs. To re-configure the traps,
choose Event Configuration from the Options menu. For instructions, refer to
your NetView documentation or online help. It may also be helpful to review the
man page for trapd.conf(4).

The enterprise-specific traps and their positional variables are listed in Table 33.
Trap descriptions are listed in Table 32.

Table 33. Enterprise-specific traps

Trap 1 * (uTtrapReset)
   1  Agent identifier number
   2  Software version
   3  Tivoli Workload Scheduler message string, if any

Traps 51, 52 *, 53 * (uTtrapProcessReset, uTtrapProcessGone, uTrapProcessAbend)
   1  Process pid
   2  Program name
   3  Tivoli Workload Scheduler message string, if any

Trap 54 * (uTrapXagentConnLost)
   1  Program name
   2  Tivoli Workload Scheduler message string, if any

Traps 101 *, 102 *, 103, 104, 105 *, 204 * (uTtrapJobAbend, uTtrapJobFailed,
uTtrapJobLaunch, uTtrapJobDone, uTtrapJobUntil, uTtrapJobRerunPrompt)
   1  Workstation name of the schedule
   2  Schedule name
   3  Job name. For jobs submitted with at or batch, if the name supplied by the
      user is not unique, this is the Tivoli Workload Scheduler-generated name,
      and the name supplied by the user appears as variable 7.
   4  Workstation name on which the job runs
   5  Job number (pid)
   6  Job state, indicated by an integer: 1 (ready), 2 (hold), 3 (exec), 5 (abend),
      6 (succ), 7 (cancl), 8 (done), 13 (fail), 16 (intro), 23 (abenp), 24 (succp),
      25 (pend)
   7  Job's submitted (real) name. For jobs submitted with at or batch, this is
      the name supplied by the user if not unique. The unique name generated by
      Maestro appears as variable 3.
   8  User name under which the job runs
   9  Name of the job's script file, or the command it executes. White space is
      replaced by the octal equivalent; for example, a space appears as 040.
   10 The rate at which an every job runs, expressed as hhmm. If every was not
      specified for the job, this is -32768.
   11 Job recovery step, indicated by an integer: 1 (stop), 2 (stop after recovery
      job), 3 (rerun), 4 (rerun after recovery job), 5 (continue), 6 (continue
      after recovery job), 10 (this is the rerun of the job), 20 (this is the run
      of the recovery job)
   12 An event timestamp, expressed as yyyymmddhhmmss00 (that is, year, month,
      day, hour, minute, second, with hundredths always zeroes)
   13 The prompt number, or zero if there is no prompt
   14 The prompt text, or Tivoli Workload Scheduler error message

Traps 151 *, 152 *, 153, 154, 155 * (uTtrapSchedAbend, uTtrapSchedStuck,
uTtrapSchedStart, uTtrapSchedDone, uTtrapSchedUntil)
   1  Workstation name of the schedule
   2  Schedule name
   3  Schedule state, indicated by an integer: 1 (ready), 2 (hold), 3 (exec),
      4 (stuck), 5 (abend), 6 (succ), 7 (cancl)
   4  Tivoli Workload Scheduler error message, if any

Trap 201 * (uTtrapGlobalPrompt)
   1  Prompt name
   2  Prompt number
   3  Prompt text

Trap 202 * (uTtrapSchedPrompt)
   1  Workstation name of the schedule
   2  Schedule name
   3  Prompt number
   4  Prompt text

Trap 203 * (uTtrapJobPrompt)
   1  Workstation name of the schedule
   2  Schedule name
   3  Job name
   4  Workstation name of the job
   5  Prompt number
   6  Prompt text

Traps 251, 252 * (uTtrapLinkDropped, uTtrapLinkBroken)
   1  The "to" workstation name
   2  Link state, indicated by an integer: 1 (unknown), 2 (down due to an
      unlink), 3 (down due to an error), 4 (up)
   3  Tivoli Workload Scheduler error message

* These traps are enabled by default.

Tivoli Workload Scheduler/NetView program reference


The following information is provided for those who want to run the Tivoli
Workload Scheduler/NetView programs manually. The manager program,
mdemon, is normally started with NetView as part of the ovstart sequence, and its
run options are included in the /usr/OV/lrf/Mae.mgmt.lrf file. The agent program,
magent, is normally started within the Tivoli Workload Scheduler StartUp script
(<TWShome>/bin/StartUp).

mdemon synopsis
mdemon [-timeout secs] [-pmd] [-port port] [-retry secs]

where:
-timeout
The rate at which agents are polled, expressed in seconds. The default is 60
seconds. See “Manager polling rate” on page 126 and “Configuring agents
in NetView” on page 126 for more information about changing the rate.
-pmd   This option causes mdemon to run under NetView pmd (Port Map
       Demon). Otherwise, it must be run manually. This option is included by
       default in the /usr/OV/lrf/Mae.mgmt.lrf file.
-port For HP-UX agents only. This identifies the port address on the managed
nodes on which the HP-UX agents will respond. The default is 31112.
-retry The period of time mdemon will wait before trying to reconnect to a
non-responding agent. The default is 600 seconds.
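
For example, to run mdemon manually with a two-minute polling rate and a
five-minute retry period (the values are illustrative):

<TWShome>/bin/mdemon -timeout 120 -retry 300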

magent synopsis
The syntax of magent is:

magent -peers host[,host[,...]] [-timeout secs] [-notraps] [-port port]

where:
-peers For HP-UX agents only. This defines the hosts (names or IP addresses) to
which the agent will send its traps. The default is 127.0.0.1 (loopback).
For AIX agents, the /etc/snmpd.conf file must be modified to define the
hosts to which the agent will send its traps. To add another host, for
example, duplicate the existing trap line and change the host name:
# This file contains Tivoli Workload Scheduler
# agent registration.
#
trap public host1 1.3.6.1.4.1.736 fe
trap public host2 1.3.6.1.4.1.736 fe
-timeout
The rate at which the agent checks its monitored processes, expressed in
seconds. The default is 60 seconds.
-notraps
If included, the agent will not generate traps.
-port For HP-UX agents only. This defines the port address on which this agent
responds. The default is 31112.
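
For example, on HP-UX you might start the agent with a two-minute scan rate,
sending traps to two hosts (host1 and host2 are illustrative names):

<TWShome>/bin/magent -peers host1,host2 -timeout 120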

Integration with IBM Tivoli Business Systems Manager


This section describes the integration of IBM Tivoli Workload Scheduler with IBM
Tivoli Business Systems Manager.

General
IBM Tivoli Business Systems Manager is an object-oriented systems management
application that provides monitoring and event management of resources,
applications and subsystems within the enterprise with the objective of providing
continuous availability. Monitoring Tivoli Workload Scheduler daily plans with
IBM Tivoli Business Systems Manager provides quick determination of problems
that can jeopardize the successful and timely completion of the schedules.
Integrating Tivoli Workload Scheduler with IBM Tivoli Business Systems Manager
provides the ability to manage schedules from a single business systems
perspective.

Integration is accomplished by the following:


v A special flag (key flag) in Tivoli Workload Scheduler that marks those jobs or
job streams that you want IBM Tivoli Business Systems Manager to monitor
more thoroughly. Information on these key objects is sent in the form of events.
v A bulk discovery mechanism that sends information on all the key jobs and job
streams of the daily plan to IBM Tivoli Business Systems Manager.
v A delta discovery mechanism that sends daily plan changes for key jobs and job
streams to IBM Tivoli Business Systems Manager.

Using these mechanisms, the Tivoli Workload Scheduler CommonListener agent
interacts with the IBM Tivoli Business Systems Manager adapter CommonListener.
The next figure shows how events and commands are passed from batchman to
the common listener agent, which in turn interprets them and issues the
appropriate requests to the TBSM adapter.

Figure 7. Common listener agent architecture

Note: The CommonListener agent process must be stopped and restarted every
day. A daily bulk discovery is needed in order for the integration to work
properly.

For bulk discovery, the TBSM database is populated with all the key objects that
are in the daily plan of Tivoli Workload Scheduler. For every key object in the
plan, information about its type, properties, and status is forwarded by the
scheduler to the common listener interface of TBSM. The common listener then
populates the TBSM database with this data.

When a new key job or job stream is added to the Tivoli Workload Scheduler plan,
a delta discovery add function forwards related information to IBM Tivoli Business
Systems Manager. Likewise, when a key object attribute is changed, a delta
discovery modify function notifies IBM Tivoli Business Systems Manager.

Using the key flag mechanism


The key flag identifies the more critical jobs or job streams that are to be monitored
with IBM Tivoli Business Systems Manager. Such jobs and job streams are
commonly referred to as key jobs and key job streams.

Marking a job or job stream as key causes IBM Tivoli Business Systems Manager to
be notified every time there is a status change or a property change.

In any case, notification of certain critical events is forwarded for all jobs and job
streams, regardless of whether they have the key flag or not. Table 34 lists these
events.
Table 34. Forwarded events for key and non-key scheduling objects

Scheduler event ID   Type         TBSM Event Type   Severity
TWS_Job_Abend        Batch        Exception         Critical
TWS_Sched_Abend      BatchCycle   Exception         Critical
TWS_Job_Cancel       Batch        Message           Warning
TWS_Sched_Cancel     BatchCycle   Message           Warning
TWS_Job_Failed       Batch        Exception         Critical

Setting the key flag


To enable the key flag mechanism on your workstation, you must have properly
configured the LOGGING parameter of the BmEvents.conf file (see “Customizing
BmEvents.conf” on page 135).

You can mark a job or job stream as key in both the database and the daily plan.
In the database, you mark a job as key within the definition of the job stream
that contains it.

To mark a job or a job stream as key, you can use one of the following:
v The keywords KEYSCHED (for job streams) and KEYJOB (for jobs), as the
following example shows.

SCHEDULE cpu1#sched1
ON mo,tu...
AT 0100
KEYSCHED
:cpu1#myjob1 KEYJOB
END
cpu1#myjob1
SCRIPTNAME "C:\my.bat"
STREAMLOGON "twsusr1"
RECOVERY STOP
v The job and job stream properties windows in the Job Scheduling Console.
  The job properties windows display, both at the database and plan levels, an Is
  Monitored Job check box that you mark to specify IBM Tivoli Business Systems
  Manager monitoring. In the job properties window at the plan level you can
  change this setting for the specific job instance.
The job stream properties windows display, at the database and plan levels, the
following two items:
– An Is Monitored Job Stream check box that you mark to specify that the job
stream is to be monitored by IBM Tivoli Business Systems Manager. You can
change this setting at the job stream instance level.
– A read-only field named Contains Monitored Job that indicates whether any of
  the jobs contained in the job stream have been marked as key.
You can choose the key flag as a filtering criterion when you run lists of jobs or
job streams in the database or in the plan.

Installing and configuring the common listener agent


Tivoli Workload Scheduler, Version 8.2 integrates with Tivoli Business Systems
Manager, Version 1.5 with APAR OW51467 or later.

You must have the Java Runtime Environment (JRE) Version 1.3 installed on every
workstation that will be running the common listener agent.

The common listener agent is automatically installed with Tivoli Workload
Scheduler. It can run on any IBM Tivoli Workload Scheduler workstation type
(master, FTA, and so on). You need to start at least one per IBM Tivoli Workload
Scheduler network, preferably on the master or on an FTA that has been
configured with the full-status option.

After installing IBM Tivoli Workload Scheduler, do the following to configure the
common listener agent:
1. Enter the following to configure the environment:
v On Windows:
PATH=%JRE-DIR%\jre\bin;%JRE-DIR%\jre\bin\classic;%PATH%
v On Solaris:
LD_LIBRARY_PATH=$JRE-DIR/jre/lib/sparc/:$JRE-DIR/jre/lib/sparc/client:$LD_LIBRARY_PATH
v On AIX:
LD_LIBRARY_PATH=$JRE-DIR/jre/bin/classic/:$JRE-DIR/jre/bin/:$LD_LIBRARY_PATH
LIBPATH=$JRE-DIR/jre/bin/classic/:$JRE-DIR/jre/bin/:TWShome/Tbsm/TbsmAdapter:$LIBPATH
export LIBPATH
AIXTHREAD_SCOPE=S
AIXTHREAD_MUTEX_DEBUG=OFF
AIXTHREAD_RWLOCK_DEBUG=OFF
AIXTHREAD_COND_DEBUG=OFF
v On Linux™:
LD_LIBRARY_PATH=$JRE-DIR/jre/bin/classic:$JRE-DIR/jre/bin:$LD_LIBRARY_PATH
where $JRE-DIR is the installation path of the Java Runtime Environment 1.3.1.

Note: After setting the path, restart IBM Tivoli Workload Scheduler before you
start the common listener agent.
2. Configure the <TWShome>/Tbsm/TbsmAdapter/adapter.config file to enable the
common listener agent to connect with the CommonListener of Tivoli Business
Systems Manager. Set the following parameters:
loggingmode.default = false

transport.local.ip.address = adapterhost
transport.request.address = adapterhost.INSTR.QM+INSTR.Q
transport.response.address = adapterhost.INSTR.QM+INSTR.Q
transport.server.ip.address = serverhost

where:
adapterhost
Is the full hostname of the computer running the common listener
agent.
serverhost
Is the full host name of the computer where the CommonListener is
installed.

During installation, the following files are copied into a directory named
<TWShome>/CommonListener:
v The common listener agent executable that will run the common listener process.
v The ClEvents.conf configuration file.
v The customize script that you need to run after installation of Tivoli Workload
Scheduler has completed.

When you run customize, it performs the following actions:
1. Installs ClEvents.conf in the <TWShome> directory.
2. Installs BmEvents.conf in the <TWShome> directory if it is not yet installed,
   and customizes it for use by the common listener agent.

Note: To run customize, you must be logged on as an administrator.

Customizing the configuration files


Two configuration files are important for running the common listener agent:
v BmEvents.conf
v ClEvents.conf

These files are configured with default values when you run customize. You can
change the defaults if you have different preferences with respect to which events
are reported to IBM Tivoli Business Systems Manager and how they must be
reported.

Mailman, batchman, and the common listener agent read from these configuration
files when they are initialized. If you make any changes to these files, you have
to restart these processes.

Customizing BmEvents.conf
You can change the following parameters:
OPTIONS=MASTER|OFF
       If the value is OFF, only local events are forwarded to IBM Tivoli Business
       Systems Manager. If the value is MASTER, the events occurring on the
       attached workstations are also forwarded. The value should be MASTER
       for the master workstation and OFF for the other workstations.
LOGGING=ALL|KEY
       If the value is KEY, the key flag filter mechanism is enabled. Events are
       sent only for key jobs and job streams (refer to Table 35 to find the events
       filtered by the key flag and to Table 34 for a list of events that are
       forwarded regardless of whether the job or job stream is key or not). If the
       value is ALL, events are sent also for non-key jobs and job streams.
EVENTS = <n>
       The list of events to report to IBM Tivoli Business Systems Manager. By
       default, all the events listed in Table 35 on page 137 are sent. You can
       exclude events that you are not interested in reporting; in this case, list
       here only the event numbers that you want to send.
MSG = <msg_path>
The name and the path of the message file where batchman and mailman
will write the events for the common listener agent to read. You can add
more than one message file.
SYMEVNTS=YES|NO
       If the value is YES, batchman reports job status events immediately after
       the generation of the plan. This is valid only for key-flagged jobs with
       LOGGING=KEY. If the value is set to NO, no report is given. NO is the
       default value.
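
For example, a minimal sketch of a BmEvents.conf for a master workstation that
reports only key jobs and job streams might be the following, where the message
file path is an illustrative assumption:

OPTIONS=MASTER
LOGGING=KEY
SYMEVNTS=YES
MSG=/usr/lib/maestro/event.msg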

Customizing ClEvents.conf
You can change the following parameters:
EVENTS = <n>
       The list of events to report to IBM Tivoli Business Systems Manager. By
       default, all the events listed in Table 35 on page 137 are sent. You can
       exclude events that you are not interested in reporting; in this case, list
       here only the event numbers that you want to send.
MSG = <msg_path>
       The name and the path of the message file where batchman and mailman
       write the events for the common listener agent to read. The name and
       path of this file must match one of the output files you specified in
       BmEvents.conf.
RETRYINTERVAL=<seconds>
The amount of time after which the cl_agent tries to reconnect to the Tivoli
Business Systems Manager adapter.
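
For example, an illustrative ClEvents.conf matching the BmEvents.conf sketch
above might be the following, where the path and retry value are assumptions:

MSG=/usr/lib/maestro/event.msg
RETRYINTERVAL=60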

Starting and stopping the common listener agent


The common listener agent process is independent of the other Tivoli Workload
Scheduler processes. You can run it with the following commands:
conman clagent_start
Starts the agent by sending a new service request to netman. Like the
conman start command, this request also starts netman, if it is down.
conman clagent_stop
Stops the agent by placing a message in the mailbox.
conman cl_bulkdiscovery
Gathers information about the key jobs and job streams.

Tivoli Workload Scheduler/IBM Tivoli Business Systems Manager events
Table 35 lists the events that are reported to IBM Tivoli Business Systems Manager
unless you specify otherwise in the configuration files (see “Customizing the
configuration files” on page 135).

Table 35. Tivoli Workload Scheduler events for Tivoli Business Systems Manager

Event  Type                 Description                                      Key flag filter enabled
101    mstJobAbend          Job abended                                      No
102    mstJobFailed         Job is in error status                           No
103    mstJobLaunch         Job launched                                     Yes
104    mstJobDone           Job finished                                     Yes
105    mstJobUntil          Job until time expired                           No
106    mstJobSubmit         Job submitted                                    No
107    mstJobCancel         Job has been canceled                            No
108    mstJobReady          Job is in ready status                           Yes
109    mstJobHold           Job is in hold status                            Yes
110    mstJobRestart        Job is in restart status                         Yes
111    mstJobCant           Batchman failed to stream the job                No
112    mstJobSuccp          Job in succ-pending status                       Yes
113    mstJobExtrn          Job is in extern status                          Yes
114    mstJobIntro          Job is in intro status                           Yes
115    mstJobStuck          Job is in stuck status                           Yes
116    mstJobWait           Job is in wait status                            Yes
117    mstJobWaitd          Job is in wait-deferred status                   Yes
118    mstJobSched          Job is in sched status                           Yes
119    mstJobModify         Job property modified                            Yes
120    mstJobLate           Job is late                                      Yes
121    mstJobUntilCont      Job until time expired with continue option      Yes
122    mstJobUntilCanc      Job until time expired with cancel option        Yes
151    mstSchedAbend        Schedule abended                                 No
152    mstSchedStuck        Schedule is in stuck state                       No
153    mstSchedStart        Schedule started                                 Yes
154    mstSchedDone         Schedule finished                                Yes
155    mstSchedUntil        Schedule until time expired                      Yes
156    mstSchedSubmit       Schedule submitted                               No
157    mstSchedCancel       Schedule has been canceled                       No
158    mstSchedReady        Schedule is in ready status                      Yes
159    mstSchedHold         Schedule is in hold status                       Yes
160    mstSchedExtrn        Schedule is in extern status                     Yes
161    mstSchedCnPend       Schedule is in cancel-pending status             Yes
163    mstSchedLate         Schedule is late                                 Yes
164    mstSchedUntilCont    Schedule until time expired with continue        Yes
                            option
165    mstSchedUntilCanc    Schedule until time expired with cancel option   Yes
201    mstGlobalPrompt      Global prompt displayed                          No
202    mstSchedPrompt       Local prompt for a schedule is displayed         Yes
203    mstJobPrompt         Local prompt for a job is displayed              Yes
204    mstJobRecovPrompt    Prompt for a recovery job is displayed           No
251    mstLinkDropped       Communication link between workstations closed   No
252    mstLinkBroken        Communication link between workstations failed   No
351    mstDomainMgrSwitch   Domain manager has been switched                 No


Chapter 11. Setting security
This chapter describes the following security features:
v “Setting strong authentication and encryption”
v “Working across firewalls” on page 149

Setting strong authentication and encryption


Tivoli Workload Scheduler provides a secure, authenticated, and encrypted
connection mechanism between components running in non-secure domains and
components running in secure domains. This mechanism is based on the Secure
Sockets Layer (SSL) protocol and uses the OpenSSL Toolkit, which is automatically
installed on your computer with IBM Tivoli Workload Scheduler.

You or your IBM Tivoli Workload Scheduler administrator can decide whether or
not to implement SSL support across your network.

Note: Before implementing SSL support in your Tivoli Workload Scheduler
network, check with your security administrator regarding company policy.

If you decide not to implement SSL support, then the security of your IBM Tivoli
Workload Scheduler installation will rely on a simple authentication mechanism
based on IP-checking and on an encryption mechanism that applies only to the
passwords of Windows users.

The SSL protocol is based on a private and public key methodology and is the
highest security standard currently in use for Internet communications. The
connection security it provides has three basic properties:
v The connection is private. Encryption is used after an initial handshake to define
a secret key. Symmetric cryptography is used for data encryption (for example,
DES and RC4).
v The peer’s identity can be authenticated using asymmetric, or public key,
cryptography (for example, RSA and DSS).
v The connection is reliable. Message transport includes a message integrity check
that uses a keyed MAC. Secure hash functions, such as SHA and MD5, are used
for MAC computations.

IBM Tivoli Workload Scheduler uses SSL authentication when opening a
client-server session between two workstations primarily to ensure the identity of
the client process (such as mailman or the connector) that is requesting the
connection.

The Tivoli Workload Scheduler administrator will have to define which of the
workstations in the network need to establish SSL sessions with the other
workstations. The information indicating if a connection should be SSL or not can
be configured in the workstation definition in the IBM Tivoli Workload Scheduler
database from either the command line or the IBM Tivoli Workload Scheduler Job
Scheduling Console.

Tivoli Workload Scheduler SSL support provides basic functions in the area of
certificate management, such as the insertion and management of keys and
certificates into the key-store and trust-chain configuration. It does not provide
advanced certificate management capabilities, such as revocation lists and network
directory-based revocation queries.

To provide SSL security for a domain manager attached to IBM Tivoli Workload
Scheduler for z/OS in an end-to-end connection, you have to configure the OS/390
Cryptographic Services System SSL in the IBM Tivoli Workload Scheduler code
that runs in the OS/390 USS UNIX shell in the IBM Tivoli Workload Scheduler for
z/OS server address space. See the IBM Tivoli Workload Scheduler for z/OS
documentation to learn how to accomplish this task.

Key SSL concepts


To authenticate a peer’s identity, the SSL protocol uses X.509 certificates called
digital certificates. Digital certificates are, in essence, electronic ID cards that are
issued by trusted parties and enable a user to verify both the sender and the
recipient of the certificate through the use of public-key cryptography.

Public-key cryptography uses two different cryptographic keys: a private key and a
public key. Public-key cryptography is also known as asymmetric cryptography,
because you can encrypt information with one key and decrypt it with the
complement key from a given public-private key pair. Public-private key pairs are
simply long strings of data that act as keys to a user’s encryption scheme. The user
keeps the private key in a secure place (for example, encrypted on a computer’s
hard drive) and provides the public key to anyone with whom the user wants to
communicate. The private key is used to digitally sign all secure communications
sent from the user while the public key is used by the recipient to verify the
sender’s signature.

Public-key cryptography is built on trust: the recipient of a public key needs to
have confidence that the key really belongs to the sender and not to an impostor.
Digital certificates provide that confidence. For this reason, the IBM Tivoli
Workload Scheduler workstations that share an SSL session must have locally
installed repositories for the X.509 certificates that will be exchanged during the
SSL session establishment phase to authenticate the session.

A digital certificate is issued by a trusted authority, also called a certificate
authority (CA). A signed digital certificate contains:
v The owner’s distinguished name
v The owner’s public key
v The certificate authority’s (issuer’s) distinguished name
v The signature of the certificate authority over these fields
A certificate request that is sent to a certificate authority for signing contains:
v The owner’s (requester’s) distinguished name
v The owner’s public key
v The owner’s own signature over these fields
A Certificate Authority (CA), such as, for example, VeriSign or Thawte, is trusted
by a client (or a server) application when its root certificate (that is, the
certificate that contains the Certification Authority signature) is listed in the
client's (server's) trusted CA list. The way an application creates its trusted CA
list depends on the SSL implementation. Using OpenSSL, for example, the trusted
CA list is simply a file containing the concatenated certificates of all the CAs that
should be trusted. Using the OS/390 Cryptographic Services System SSL, the
trusted CA list is a proprietary database containing the certificates of the trusted
CAs. The certificate authority verifies the signature on a certificate request with
the public key in the digital certificate to ensure that the certificate request was
not modified while transiting between the requester and the CA and that the
requester is in possession of the private key that matches the public key in the
certificate request.

The CA is also responsible for some level of identification verification. This can
range from very little proof to absolute assurance of the owner’s identity. A
particular kind of certificate is the self-signed digital certificate. It contains:
v The owner’s distinguished name
v The owner’s public key
v The owner’s own signature over these fields
A root CA’s digital certificate is an example of a self-signed digital certificate.
Users can also create their own self-signed digital certificates for testing purposes.

The following example describes in a simplified way how digital certificates are
used in establishing an SSL session. In this scenario, Appl1 is a client process that
opens an SSL connection with the server application Appl2:
1. Client Appl1 asks to open an SSL session with server Appl2.
2. Appl2 starts the SSL handshake protocol. It encrypts the information using its
private key and sends its certificate with the matching public key to Appl1.
3. Appl1 receives the certificate from Appl2 and verifies that it is signed by a
trusted certification authority. If the certificate is signed by a trusted CA, Appl1
can optionally extract some information (such as the distinguished name)
stored in the certificate and performs additional authentication checks on
Appl2.
4. At this point, the server process has been authenticated, and the client process
   starts its part of the authentication process; that is, Appl1 encrypts the
   information using its private key and sends the certificate with its public key to
   Appl2.
5. Appl2 receives the certificate from Appl1 and verifies that it is signed by a
trusted certification authority.
6. If the certificate is signed by a trusted CA, Appl2 can optionally extract some
information (such as the distinguished name) stored in the certificate and
performs additional authentication checks on Appl1.

Planning for SSL support in Tivoli Workload Scheduler


To implement SSL support for this network, the Tivoli Workload Scheduler
administrator must plan in advance how the workstations will authenticate each
other. The administrator can opt to configure the Tivoli Workload Scheduler
network, so that all the workstations that open SSL sessions authenticate in the
same way, or configure different authentication levels for each workstation. The
authentication level affects the way digital certificates are created and installed on
the workstations using SSL support. To enable SSL support, specify SSL local
options in the localopts file. See “Setting SSL local options” on page 147.

SSL provides the following authentication methods:


CA trusting only
       Two workstations trust each other if each receives from the other a
       certificate that is signed by a trusted certification authority (such as
       VeriSign); that is, if the CA certificate is in the list of trusted CAs on each
       workstation. With this authentication level, a workstation does not perform
       any additional checks on certificate content, such as the distinguished
       name. Any certificate (even a personal home banking certificate) signed by
       a trusted CA can be used to establish an SSL session. This authentication is
       quite weak, because it allows an intruder with a private key and a signed
       certificate to install them on any workstation and to establish a connection
       within the network. With this method, you employ a single private key
       and certificate for the entire Tivoli Workload Scheduler network. See page
       148 to set the "caonly" option in the localopts file.
Check if the distinguished name matches a defined string
Two workstations trust each other if, after receiving a certificate with the
signature of the trusted CA, each performs a further check by extracting
the distinguished name from the certificate and comparing it with a string
that was defined in its local options file by the Tivoli Workload Scheduler
administrator. This method adds a further level of authentication to the CA
trusting only method. An intruder may have a private key and a signed
certificate, but the distinguished name specified in that certificate must
match the one specified in the original certificate. If they do not match, the
       connection is not established. With this method, you can also opt to
       employ a different private key and certificate for each domain in the
       network. See page 148 to set the "string" option in the localopts file.
Check if the distinguished name matches the workstation name
Two workstations trust each other if, after receiving a certificate with the
signature of the trusted CA, each performs a further check by extracting
the distinguished name from the certificate and comparing it with the
name of the workstation that sent the certificate. This method adds a
further level of authentication. An intruder may have a private key and a
signed certificate, but the distinguished name specified in that certificate
must match the workstation name. Using the distinguished name in this
way also helps to ensure the identity of the partner workstation. With this
       method, you can also opt to employ a different private key and certificate
       for each workstation in the network. See page 148 to set the "cpu" option
       in the localopts file.

As a Tivoli Workload Scheduler administrator, you can choose to implement one
or a mix of the following:
Use the same certificate for the entire network
In this case, you must:
1. Create a private key.
2. Create a certificate signing request for that private key. Optionally, you
can set the distinguished name (DN) field in the certificate to a
particular string of your choice, for example the Tivoli Workload
Scheduler master name.
3. Ask a certificate authority (which can be a third-party certification
authority, such as VeriSign or Thawte, or a self-created one) to sign a
certificate corresponding to the private key. If you created your own
certification authority, the certificate is called a self-signed certificate.
4. Install the private key and the certificate on all the workstations that
will use SSL.
5. Add the certificate authority that signed the certificate to the Trusted
CA list on all the workstations that will use SSL.
       If the workstations are configured with CA trusting only, they will accept
       connections with any other workstation that sends a certificate signed by
       that trusted CA. To enforce the authentication, you can define in the local
       options file of the workstations a name or a list of names that must match
       the contents of the distinguished name (DN) field in the certificate before a
       connection request is accepted.
Use a Certificate for each domain
In this case, repeatedly follow the previous steps to create and install more
private keys and signed certificates, one for each domain in the IBM Tivoli
Workload Scheduler network. Then, configure each workstation to accept a
connection only with partners that have a particular string in the DN field
of their certificate.
Use a Certificate for each CPU
In this case, repeatedly follow the previous steps to create and install on
each workstation a different private key and a signed certificate and to add
a Trusted CA list containing the CA that signed the certificate. Then,
configure each workstation to accept a connection only with partners that
have their workstation name (as specified in the Symphony file) recorded
in the DN field of their certificate.

If you use SSL authentication for your enterprise’s Tivoli Workload Scheduler
network and not for outside Internet commerce, you can act as your own
certification authority to create and sign the certificates. To be your own CA, you
must create a CA key and a self-signed CA certificate. After that, you have the
power to sign any certificate request with your own CA signature and to create
valid certificates. To use a self-signed certificate, you must download your CA
certificate into the Trusted CA list of every workstation that will use SSL. This
capability allows customers to act as a real certification authority, without the
need to request certificates from a commercial CA.

Configuring SSL support in Tivoli Workload Scheduler


To configure SSL for your network, you must:
1. Create an SSL directory under the TWShome directory. By default, the path
TWShome\ssl is registered in the localopts file. If you create a directory with a
name different from ssl in the TWShome directory, then update the localopts file
accordingly.
2. Copy openssl.cnf and openssl.exe to the SSL directory.
3. Create as many private keys, certificates, and Trusted CA lists as you plan to
use in your network.
4. For each workstation that will use SSL authentication:
v Update its definition in the Tivoli Workload Scheduler database with the SSL
attributes.
v Add the SSL local options in the localopts file.
Although you are not required to follow a particular sequence, these tasks must all
be completed to activate SSL support.
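
As an illustration only (the SSL attributes and their exact names are described in
"Setting SSL local options" on page 147), the SSL entries in the localopts file of a
workstation configured for CA-only authentication might look like the following
sketch, where the port number and file names are assumptions for the example:

nm SSL port =31113
SSL key =TWShome/ssl/TWS.key
SSL certificate =TWShome/ssl/TWS.crt
SSL key pwd =TWShome/ssl/TWS.sth
SSL CA certificate =TWShome/ssl/TWSca.crt
SSL random seed =TWShome/ssl/TWS.rnd
SSL auth mode =caonly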

In Tivoli Workload Scheduler, SSL support is available for the fault-tolerant agents
only (including the master and the domain managers), but not for the extended
agents. If you want to use SSL authentication for a workstation that runs an
extended agent, you must specify this parameter in the definition of the host
workstation of the extended agent.

Setting up private keys and certificates


To use SSL authentication on a workstation, you need to create and install the
following:

v The private key and the corresponding certificate that identify the workstation in
an SSL session.
v The list of certificate authorities that can be trusted by the workstation.

You must use the openssl command line utility to:
v Create a file containing pseudo-random generated bytes (TWS.rnd). This file is
  needed on some platforms for SSL to function correctly.
v Create a private key.
v Save the password you used to create the key into a file.
v Create a Certificate Signing Request (CSR).
v Send this Certificate Signing Request to a Certifying Authority (CA) for
  signing, or:
  – Create your own Certificate Authority (CA)
  – Create a self-signed CA Certificate (X.509 structure) with the RSA key of your
    own CA
  – Use your own Certificate Authority (CA) to sign and create real certificates

These actions will produce the following files that you will install on the
workstation(s):
v A private key file (for example, TWS.key). This file should be protected so
that it cannot be stolen and used to impersonate the workstation. You should
save it in a directory
that allows read access to the TWS user of the workstation, such as
TWShome/ssl/TWS.key.
v The corresponding certificate file (for example, TWS.crt). You should save it in a
directory that allows read access to the TWS user of the workstation, such as
TWShome/ssl/TWS.crt.
v A file containing a pseudo-random generated sequence of bytes. You can save it
in any directory that allows read access to the TWS user of the workstation, such
as TWShome/ssl/TWS.rnd.
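For example, on UNIX you might restrict access to the private key file with
standard file permissions; the user name tws82 below is illustrative:
$ chown tws82 TWShome/ssl/TWS.key
$ chmod 600 TWShome/ssl/TWS.key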

In addition, you should create the following:


v A file containing the password used to encrypt the private key. You should save
it in a directory that allows read access to the TWS user of the workstation, such
as TWShome/ssl/TWS.sth.
v The certificate chain file. It contains the concatenation of the PEM-encoded
certificates of certification authorities which form the certificate chain of the
workstation’s certificate. This starts with the issuing CA certificate of the
workstation’s certificate and can range up to the root CA certificate. Such a file
is simply the concatenation of the various PEM-encoded CA certificate files,
usually in certificate chain order.
v The trusted CAs file. It contains the trusted CA certificates to use during
authentication. The CAs in this file are also used to build the list of acceptable
client CAs passed to the client when the server side of the connection requests a
client certificate. This file is simply the concatenation of the various
PEM-encoded CA certificate files, in order of preference.
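Because both the certificate chain file and the trusted CAs file are plain
concatenations of PEM-encoded certificates, you can build them with standard
operating system tools. For example, on UNIX (the second CA file name is
illustrative):
$ cat TWSca.crt OtherCA.crt > TWShome/ssl/TWSTrustedCA.crt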

Creating your own certification authority


If you are going to use SSL authentication within your company’s boundaries and
not for outside internet commerce, you might find it simpler to create your own
certification authority (CA) to trust all your IBM Tivoli Workload Scheduler
installations. To do so, follow the steps listed below.


Note: In the following steps, the file names TWS and TWSca used for the files
created during the process are sample names. You can use your own
names, but keep the same file extensions.
1. Choose a workstation as your CA root installation.
2. Type the following command from the SSL directory to initialize the
pseudo-random number generator; otherwise, subsequent commands might not work.
v On UNIX:
$ openssl rand -out TWS.rnd -rand ./openssl 8192
v On Windows:
$ openssl rand -out TWS.rnd -rand ./openssl.exe 8192
3. Type the following command to create the CA private key:
$ openssl genrsa -out TWSca.key 1024
4. Type the following command to create a self-signed CA Certificate (X.509
structure):
$ openssl req -new -x509 -days 365 -key TWSca.key -out TWSca.crt -config ./openssl.cnf

Now you have a certification authority that you can use to trust all of your
installations. If you wish, you can create more than one CA.
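To inspect the CA certificate you have just created, you can, for example, display
its subject and validity dates with the same openssl utility:
$ openssl x509 -in TWSca.crt -noout -subject -dates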

Creating private keys and certificates


The following steps explain how to create one key and one certificate. You can
decide whether to use one key and certificate pair for the entire network, one for
each domain, or one for each workstation. The steps below assume that you will
be creating a key and certificate pair for each workstation; thus, the names of the
output files created during the process have been generalized to workstationname.

On each workstation, perform the following steps to create a private key and a
certificate:
1. Type the following command from the SSL directory to initialize the
pseudo-random number generator; otherwise, subsequent commands might not work.
v On UNIX:
$ openssl rand -out workstationname.rnd -rand ./openssl 8192
v On Windows:
$ openssl rand -out workstationname.rnd -rand ./openssl.exe 8192
2. Type the following command to create the private key (this example shows
triple-DES encryption):
$ openssl genrsa -des3 -out workstationname.key 1024
Then, save the password that was requested to encrypt the key in a file named
workstationname.pwd.

Note: Verify that file workstationname.pwd contains just the characters in the
password. For instance, if you specified the word maestro as the
password, your workstationname.pwd file should not contain any CR or
LF characters at the end (it should be 7 bytes long).
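For example, on UNIX you can write the password without a trailing newline
and verify the file size as follows (assuming maestro is the password):
$ printf "maestro" > workstationname.pwd
$ wc -c workstationname.pwd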
3. Type the following command to save your password, encoding it in base64 into
the appropriate stash file:
$ openssl base64 -in workstationname.pwd -out workstationname.sth
You can then delete file workstationname.pwd.
4. Type the following command to create a certificate signing request (CSR):
$ openssl req -new -key workstationname.key -out workstationname.csr
-config ./openssl.cnf
Some values, such as company name and personal name, will be requested on
screen. For future compatibility, you may specify the workstation name as the
distinguished name.
5. Send the workstationname.csr file to your CA in order to get the matching
certificate for this private key.
Using its private key (TWSca.key) and certificate (TWSca.crt), the CA will sign
the CSR (workstationname.csr) and create a signed certificate (workstationname.crt)
with the following command:
$ openssl x509 -req -CA TWSca.crt -CAkey TWSca.key -days 365
-in workstationname.csr -out workstationname.crt -CAcreateserial
6. Distribute to the workstation the new certificate workstationname.crt and the
public CA certificate TWSca.crt.
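Optionally, before distributing the files, you can verify that the signed
certificate matches the private key by comparing the public key modulus of each;
the two digests printed by the following commands should be identical (the
second command prompts for the key password):
$ openssl x509 -noout -modulus -in workstationname.crt | openssl md5
$ openssl rsa -noout -modulus -in workstationname.key | openssl md5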

The table below summarizes which of the files created during the process have to
be set as values for the workstation’s local options.
Table 36. Files for Local Options
Local option          File
SSL key               workstationname.key
SSL certificate       workstationname.crt
SSL key pwd           workstationname.sth
SSL ca certificate    TWSca.crt
SSL random seed       workstationname.rnd

Configuring SSL attributes


Use the composer command line or the Job Scheduling Console to update the
workstation definition in the database. See the Tivoli Workload Scheduler Reference
Guide or the Tivoli Workload Scheduler Job Scheduling Console User’s Guide for further
reference.
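For example, from the composer command line you might open the workstation
definition in an editor, add the SSL attributes, and save it. The following is a
sketch only, using the workstation name from the example definition shown later
in this section; see the Tivoli Workload Scheduler Reference Guide for the exact
command syntax:
$ composer modify cpu=ENNETI3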

Configure the following attributes:


secureaddr
Defines the port used to listen for incoming SSL connections. This value
must match the one defined in the nm SSL port local option of the
workstation. It must be different from the nm port local option that defines
the port used for normal communications. If securitylevel is specified but
this attribute is missing, 31113 is used as the default value.
securitylevel
Specifies the type of SSL authentication for the workstation. It must have
one of the following values:
enabled
The workstation uses SSL authentication only if its domain
manager workstation or another fault-tolerant agent below it in the
domain hierarchy requires it.
on The workstation uses SSL authentication when it connects with its
domain manager. The domain manager uses SSL authentication
when it connects to its parent domain manager. The fault-tolerant
agent refuses any incoming connection from its domain manager if
it is not an SSL connection.
force The workstation uses SSL authentication for all of its connections
and accepts connections from both parent and subordinate domain
managers. It will refuse any incoming connection if it is not an SSL
connection.

If this attribute is omitted, the workstation is not configured for SSL
connections. In this case, any value for secureaddr will be ignored. You
should also set the nm ssl port local option to 0 to be sure that this port is
not opened by netman. The following table describes the type of
communication used for each type of securitylevel setting.
Table 37. Type of communication depending on the securitylevel value.
Fault-tolerant agent    Domain manager             Connection type
(domain manager)        (parent domain manager)
-                       -                          TCP/IP
Enabled                 -                          TCP/IP
On                      -                          No connection
Force                   -                          No connection
-                       On                         TCP/IP
Enabled                 On                         TCP/IP
On                      On                         SSL
Force                   On                         SSL
-                       Enabled                    TCP/IP
Enabled                 Enabled                    TCP/IP
On                      Enabled                    SSL
Force                   Enabled                    SSL
-                       Force                      No connection
Enabled                 Force                      SSL
On                      Force                      SSL
Force                   Force                      SSL

The following example shows a workstation definition that includes the security
attributes:
cpuname ENNETI3
os WNT
node apollo
tcpaddr 30112
secureaddr 32222
for maestro
autolink off
fullstatus on
securitylevel on
end

Setting SSL local options


Specify the following entries in the localopts file of the workstation. See “Local
options file example” on page 93 to view these entries in a sample localopts file.
nm SSL port
The port used to listen for incoming SSL connections. This value must
match the one defined in the secureaddr attribute in the workstation
definition in the IBM Tivoli Workload Scheduler database. It must be

different from the nm port local option that defines the port used for
normal communications. The default value is 31113.
Notes:
1. On Windows, place this option also in TWShome/localopts.
2. If you install multiple instances of Tivoli Workload Scheduler 8.2 on the
same computer, set all SSL ports to different values.
3. If you plan not to use SSL, set the value to 0.
SSL auth mode
The behavior of Tivoli Workload Scheduler during an SSL handshake is
based on the value of the SSL auth mode option as follows:
caonly Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. Information contained in the certificate is not examined. It is
the default. If you do not specify the SSL auth mode option, or you
define a value that is not valid, the caonly value is used.
string Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the string specified in SSL auth string (described below).
cpu Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the name of the CPU that requested the service.
SSL auth string
Used in conjunction with SSL auth mode when the string value is
specified. The SSL auth string (from 1 to 64 characters) is used to
verify the certificate validity. If you do not specify an SSL auth string
value in conjunction with the SSL auth mode, then the default string value
is tws.
SSL key
The name of the private key file. The default path in the localopts file is
TWShome/ssl/filename.key.
SSL certificate
The name of the local certificate file. The default path in the localopts file
is TWShome/ssl/filename.crt.
SSL key pwd
The name of the file containing the password for the stashed key. The
default path in the localopts file is TWShome/ssl/filename.sth.
SSL CA certificate
The name of the file containing the trusted CA certificates required for
authentication. The CAs in this file are also used to build the list of
acceptable client CAs passed to the client when the server side of the
connection requests a client certificate. This file is the concatenation, in
order of preference, of the various PEM-encoded CA certificate files. The
default path in the localopts file is TWShome/ssl/filename.crt.
SSL certificate chain
The name of the file that contains the concatenation of the PEM-encoded
certificates of certification authorities which form the certificate chain of the
workstation’s certificate. This parameter is optional. If it is not specified,
the file specified for the SSL CA certificate is used.
SSL random seed
The pseudo random number file used by OpenSSL on some platforms.
Without this file, SSL authentication may not work properly. The default
path in the localopts file is TWShome/ssl/filename.rnd.

SSL encryption cipher
The ciphers that the workstation supports during an SSL connection. Use
the following shortcuts:
Table 38. Shortcuts for encryption ciphers
Shortcut     Encryption ciphers
SSLv3        SSL version 3.0
TLSv1        TLS version 1.0
EXP          Export
EXPORT40     40-bit export
EXPORT56     56-bit export
LOW          Low strength (no export, single DES)
MEDIUM       Ciphers with 128-bit encryption
HIGH         Ciphers using Triple-DES
NULL         Ciphers using no encryption
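For illustration, the SSL entries in a localopts file might then look like the
following sketch, for a workstation named ENNETI3 whose key and certificate
files were created as described in "Creating private keys and certificates" on
page 145 and stored in the default TWShome/ssl directory (all paths and file
names are illustrative):
# SSL attributes (illustrative values)
nm SSL port =31113
SSL key =TWShome/ssl/ENNETI3.key
SSL certificate =TWShome/ssl/ENNETI3.crt
SSL key pwd =TWShome/ssl/ENNETI3.sth
SSL CA certificate =TWShome/ssl/TWSca.crt
SSL random seed =TWShome/ssl/ENNETI3.rnd
SSL auth mode =cpu
SSL encryption cipher =HIGH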

Working across firewalls


In the design phase of an IBM Tivoli Workload Scheduler network, the
administrator must know where the firewalls are positioned in the network, which
fault-tolerant agents and which domain managers belong to a particular firewall,
and which are the entry points into the firewalls. When this has been clearly
understood, the administrator should define the behindfirewall attribute for some
of the workstation definitions in the Tivoli Workload Scheduler database. In
particular, if a workstation definition has the behindfirewall attribute set to
ON, this means that there is a firewall between that workstation and the Tivoli
Workload Scheduler master. In this case, the workstation-domain manager link is
the only link allowed between the workstation and its domain manager.

All Tivoli Workload Scheduler workstations whose links with the corresponding
domain manager, or with any domain manager in the Tivoli Workload Scheduler
hierarchy right up to the master, cross a firewall should be defined with the
behindfirewall attribute.
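For illustration, a minimal sketch of such a workstation definition follows, using
a hypothetical fault-tolerant agent named FTA_DMZ; see the Tivoli Workload
Scheduler Reference Guide for the exact attribute syntax:
cpuname FTA_DMZ
os UNIX
node orion
tcpaddr 31111
for maestro
autolink on
behindfirewall on
end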

When mapping an IBM Tivoli Workload Scheduler network over an existing
firewall structure, it does not matter which fault-tolerant agents and which domain
managers are on the secure side of the firewall and which ones are on the
non-secure side. Firewall boundaries should be the only concern. For example,
whether the master is in a non-secure zone and some of the domain managers are
in secured zones, or vice versa, does not make any difference. The firewall
structure must always be considered starting from the master and following the
IBM Tivoli Workload Scheduler hierarchy, marking all the workstations that have a
firewall between them and their corresponding domain manager.

For all workstations with behindfirewall set to ON, the start wkstation, stop
wkstation, and showjobs commands are sent following the domain hierarchy,

instead of making the master or the domain manager open a direct connection to
the workstation. This makes a significant improvement in security.

This attribute works for multiple nested firewalls as well. For extended agents, you
can specify that an extended agent CPU is behind a firewall by setting the
behindfirewall attribute to ON on the host workstation. The attribute is read-only
in the plan; to change it in the plan, the administrator must update it in the
database and then recreate the plan.

See the Tivoli Workload Scheduler Reference Guide for details on how to set this
attribute.



Chapter 12. Uninstalling Tivoli Workload Scheduler
The uninstall program is created during the installation procedure; therefore, use
the same method that you chose to install the product when you uninstall it. For
example, if you installed the product using the installation wizard, use the
uninstall wizard to subsequently remove the product.

Uninstalling the product does not remove files created after Tivoli Workload
Scheduler was installed, nor files that are open at the time of the uninstall. If you
do not need those files, you must remove them manually. Refer to Tivoli Workload
Scheduler Administration and Troubleshooting for information about removing Tivoli
Workload Scheduler manually.

Using the uninstall wizard


The uninstall wizard removes product files, registry keys, and services. It removes
the binaries related to the installed Tivoli Workload Scheduler agent and the
language packs.

The uninstall program does not remove the Tivoli Workload Scheduler connector,
the Tivoli Plus Module, or the Tivoli Management Framework. Refer to the Tivoli
Workload Scheduler Job Scheduling Console User’s Guide for uninstalling the connector,
the Tivoli Workload Scheduler Plus Module User’s Guide for uninstalling the Tivoli
Plus Module, and the Tivoli Enterprise Installation Guide for uninstalling the Tivoli
Management Framework.

To uninstall Tivoli Workload Scheduler perform the following steps:


1. Ensure that all Tivoli Workload Scheduler processes and services are stopped,
and that there are no active or pending jobs. See “Unlinking and stopping
Tivoli Workload Scheduler” on page 30.
2. Locate the _uninst directory in the TWShome installation path.
3. Run the uninstall program.
v Windows: uninstaller.exe
v UNIX: uninstaller.bin
4. The uninstall wizard is launched. Select the wizard language.
Click OK.
5. Read the welcome information and click Next.
6. Review the uninstall summary. Click Next.
7. Click Finish to close the uninstall program.

Using the twsinst script


Refer to Tivoli Workload Scheduler Troubleshooting and Error Messages for information
about removing Tivoli Workload Scheduler manually.

Follow these steps to uninstall Tivoli Workload Scheduler using the twsinst script.
1. Before uninstalling, stop any existing Tivoli Workload Scheduler processes that
were created on this particular system. If you have jobs that are currently
running, the related processes must be stopped manually. For information
about stopping the processes and services, see “Unlinking and stopping Tivoli
Workload Scheduler” on page 30.
2. Navigate to the installation directory.
3. Run the twsinst script as follows:
twsinst -uninst -uname <username>
[-lang <lang_id>]
-uninst
Uninstalls Tivoli Workload Scheduler, Version 8.2. Before you perform an
uninstall, ensure that all Tivoli Workload Scheduler processes and services are
stopped. For information about stopping the processes and services, see
“Unlinking and stopping Tivoli Workload Scheduler” on page 30.
-uname <username>
The name of the user for which Tivoli Workload Scheduler is installed,
updated, promoted, or uninstalled. The software is installed or updated in this
user’s home directory. This user name is not to be confused with the user
performing the installation logged on as root. For a new installation, this user
account must be created manually before running the installation. Create a user
with a home directory. Tivoli Workload Scheduler will be installed under the
HOME directory of the specified user.
-lang <lang_id>
The language in which the twsinst messages are displayed. If not specified,
the system LANG is used. If the related catalog is missing, the default C
language catalog is used.

Note: The -lang option is not to be confused with the Tivoli Workload
Scheduler supported language packs. By default, all supported language
packs are installed when you install using the twsinst script.

For example, the following command uninstalls the Tivoli Workload Scheduler,
Version 8.2 engine, originally installed for a user named twsuser:
./twsinst -uninst -uname twsuser

Using the Software Distribution CLI


Use the same method you chose to install the product when uninstalling the
product. You can uninstall Tivoli Workload Scheduler using the Software
Distribution command wremovsp as follows:
wremovsp Tivoli_TWS_WINDOWS.spb [subscribers...]

The software package block that installs language packs can also be removed in
this way. Refer to Tivoli Workload Scheduler Troubleshooting and Error Messages for
information about removing Tivoli Workload Scheduler manually.

Using the customize script


Follow these steps to uninstall Tivoli Workload Scheduler from a Tier 2 platform:
1. Before uninstalling, stop any existing Tivoli Workload Scheduler processes that
were created on this particular system. See “Unlinking and stopping Tivoli
Workload Scheduler” on page 30.
2. On the system, log in as root.
3. Review the contents of the file named /usr/unison/components. If the file contains
multiple entries that correspond to different Maestro/Tivoli Workload
Scheduler accounts (product groups), edit the file by deleting the lines that
correspond to the instance you want to remove.
For example, suppose that /usr/unison/components contains the following entries:
maestro 7.0 /opt/maestro DEFAULT
maestro 8.2 /data/maestro8/maestro TWS_maestro8_8.2

If you plan to remove the Tivoli Workload Scheduler instance located under
/opt/maestro, then delete the first line. If /usr/unison/components contains only the
instance that you want to remove, then delete the entire file.
4. Remove the links, if applicable, to the /usr/bin directory. The installation process
gives you the option to link Tivoli Workload Scheduler executables to a
common directory. The default is /usr/bin. Remove the following files:
v /usr/bin/maestro
v /usr/bin/mat
v /usr/bin/mbatch
v /usr/bin/datecalc
v /usr/bin/morestdl
v /usr/bin/jobstdl
v /usr/bin/parms
5. Finally, remove the entire Maestro/Tivoli Workload Scheduler account with the
following command:
rm -rf <twshome>
If your system startup command was modified to include a conman ″start″ or a
<twshome>/StartUp command, you must also remove those entries.
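The name and location of the system startup file depend on the platform. On
many UNIX systems you can, for example, locate the entries to remove with a
command such as the following (a sketch that assumes System V-style rc files):
grep -l StartUp /etc/rc* /etc/rc*.d/* 2>/dev/null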

Appendix. Support information
This section describes the following options for obtaining support for IBM
products:
v “Searching knowledge bases”
v “Obtaining fixes” on page 156
v “Contacting IBM Software Support” on page 156

Searching knowledge bases


If you have a problem with your IBM software, you want it resolved quickly. Begin
by searching the available knowledge bases to determine whether the resolution to
your problem is already documented.

Search the information center on your local system or network
IBM provides extensive product documentation that can be installed on your local
computer or on an intranet server. The documentation is supplied on the
publications CD available with the product, can be downloaded from IBM as
described in “Accessing publications online” on page xv, or ordered in hardcopy
from IBM as described in “Ordering publications” on page xvi.

Open the PDF versions of documents and use the built-in search facilities of Adobe
Reader to find the information you require.

Search the information center at the IBM support Web site


The IBM software support Web site has many documents available online, one or
more of which may provide the information you require:
1. Go to the IBM Software Support Web site
(http://www.ibm.com/software/support).
2. Under Products A - Z, select your product name: select ″I″ for IBM and then
scroll down to the product entries that commence ″IBM Tivoli Workload
Scheduler″. These open product-specific support sites.
3. Under Self help and Learn, choose from the list of different types of product
support documentation:
v Manuals
v Redbooks
v White papers
v Readme files and other documentation

To access some documents you need to register (indicated by a key icon beside the
document title). To register, select the document you wish to look at, and when
asked to sign in follow the links to register yourself. There is also a FAQ available
on the advantages of registering.

Search the Internet


If you cannot find an answer to your question in the information center, search the
Internet for other information that might help you resolve your problem.


Obtaining fixes
A product fix might be available to resolve your problem. You can determine what
fixes are available for your IBM software product by checking the product support
Web site:
1. Go to the IBM Software Support Web site
(http://www.ibm.com/software/support).
2. Under Products A - Z, select your product name: select ″I″ for IBM and then
scroll down to the product entries that commence ″IBM Tivoli Workload
Scheduler″. These open product-specific support sites.
3. Under Self help, follow the link to Search all Downloads, where you will find
a list of fixes, fix packs, and other service updates for your product.
4. Click the name of a fix to read the description and optionally download the fix.
To receive weekly e-mail notifications about fixes and other news about IBM
products, follow these steps:
1. From the support page for any IBM product, click My support in the panel on
the left of the page.
2. If you have already registered, skip to the next step. If you have not registered,
click register in the upper-right corner of the support page to establish your
user ID and password.
3. Sign in to My support.
4. On the My support page, select the Edit profile tab and click Subscribe to
email. Select a product family and check the appropriate boxes for the type of
information you want.
5. Click Update.
6. For e-mail notification for other product groups, repeat Steps 4 and 5.
For more information about types of fixes, see the Software Support Handbook
(http://techsupport.services.ibm.com/guides/handbook.html).

Contacting IBM Software Support


IBM Software Support provides assistance with product defects.

Before contacting IBM Software Support, your company must have an active IBM
software maintenance contract, and you must be authorized to submit problems to
IBM. The type of software maintenance contract that you need depends on the
type of product you have:
v For IBM distributed software products (including, but not limited to, Tivoli,
Lotus, and Rational products, as well as DB2 and WebSphere products that run
on Windows or UNIX operating systems), enroll in Passport Advantage in one
of the following ways:
– Online: Go to the Passport Advantage Web page
(http://www.lotus.com/services/passport.nsf/WebDocs/
Passport_Advantage_Home) and click How to Enroll
– By phone: For the phone number to call in your country, go to the IBM
Software Support Web site
(http://techsupport.services.ibm.com/guides/contacts.html) and click the
name of your geographic region.
v For IBM eServer software products (including, but not limited to, DB2 and
WebSphere products that run in zSeries, pSeries, and iSeries environments), you
can purchase a software maintenance agreement by working directly with an
IBM sales representative or an IBM Business Partner. For more information
about support for eServer software products, go to the IBM Technical Support
Advantage Web page (http://www.ibm.com/servers/eserver/techsupport.html).

If you are not sure what type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States or, from other countries, go to
the contacts page of the IBM Software Support Handbook on the Web
(http://techsupport.services.ibm.com/guides/contacts.html) and click the name of
your geographic region for phone numbers of people who provide support for
your location.

Follow the steps in this topic to contact IBM Software Support:


1. Determine the business impact of your problem.
2. Describe your problem and gather background information.
3. Submit your problem to IBM Software Support.

Determine the business impact of your problem


When you report a problem to IBM, you are asked to supply a severity level.
Therefore, you need to understand and assess the business impact of the problem
you are reporting. Use the following criteria:

Severity 1 Critical business impact: You are unable to use the program,
resulting in a critical impact on operations. This condition
requires an immediate solution.
Severity 2 Significant business impact: The program is usable but is
severely limited.
Severity 3 Some business impact: The program is usable with less
significant features (not critical to operations) unavailable.
Severity 4 Minimal business impact: The problem causes little impact on
operations, or a reasonable circumvention to the problem has
been implemented.

Describe your problem and gather background information


When explaining a problem to IBM, be as specific as possible. Include all relevant
background information so that IBM Software Support specialists can help you
solve the problem efficiently. To save time, know the answers to these questions:
v What software versions were you running when the problem occurred?
v Do you have logs, traces, and messages that are related to the problem
symptoms? IBM Software Support is likely to ask for this information.
v Can the problem be re-created? If so, what steps led to the failure?
v Have any changes been made to the system? (For example, hardware, operating
system, networking software, and so on.)
v Are you currently using a workaround for this problem? If so, please be
prepared to explain it when you report the problem.

Submit your problem to IBM Software Support


You can submit your problem in one of two ways:
v Online: Go to the ″Submit and track problems″ page on the IBM Software
Support site (http://www.ibm.com/software/support/probsub.html). Enter
your information into the appropriate problem submission tool.


v By phone: For the phone number to call in your country, go to the contacts page
of the IBM Software Support Handbook on the Web
(http://techsupport.services.ibm.com/guides/contacts.html) and click the name
of your geographic region.

If the problem you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. Whenever possible,
IBM Software Support provides a workaround for you to implement until the
APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
IBM product support Web pages daily, so that other users who experience the
same problem can benefit from the same resolutions.

For more information about problem resolution, see Searching knowledge bases
and Obtaining fixes.



Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION ″AS IS″ WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.

Some states do not allow disclaimer of express or implied warranties in certain
transactions, therefore, this statement might not apply to you.

This information could include technical inaccuracies or typographical errors.


Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.



IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.

Such information may be available, subject to appropriate terms and conditions,
including in some cases payment of a fee.

The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.

Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

Open source: test


Copyright (c) 1992, 1993, 1994

The Regents of the University of California. All rights reserved.

This code is derived from software contributed to Berkeley by Kenneth Almquist.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
display the following acknowledgement:
This product includes software developed by the University of California,
Berkeley and its contributors.



4. Neither the name of the University nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ″AS
IS″ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

Open source: OpenSSL


This product includes software developed by the OpenSSL Project for use in the
OpenSSL Toolkit. (http://www.openssl.org/)

LICENSE ISSUES
The OpenSSL toolkit stays under a dual license, i.e. both the conditions of the
OpenSSL License and the original SSLeay license apply to the toolkit. See below
for the actual license texts. Actually both licenses are BSD-style Open Source
licenses. In case of any license issues related to OpenSSL please contact
openssl-core@openssl.org.

OpenSSL license
Copyright (c) 1998-2001 The OpenSSL Project. All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
display the following acknowledgment: ″This product includes software
developed by the OpenSSL Project for use in the OpenSSL Toolkit.
(http://www.openssl.org/)″.
4. The names ″OpenSSL Toolkit″ and ″OpenSSL Project″ must not be used to
endorse or promote products derived from this software without prior written
permission. For written permission, please contact openssl-core@openssl.org.
5. Products derived from this software may not be called ″OpenSSL″ nor may
″OpenSSL″ appear in their names without prior written permission of the
OpenSSL Project.
6. Redistributions of any form whatsoever must retain the following
acknowledgment: ″This product includes software developed by the OpenSSL
Project for use in the OpenSSL Toolkit (http://www.openssl.org/)″

THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ″AS IS″ AND ANY
EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
OpenSSL PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

This product includes cryptographic software written by Eric Young
(eay@cryptsoft.com). This product includes software written by Tim Hudson
(tjh@cryptsoft.com).

Original SSLeay license


Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com) All rights reserved.

This package is an SSL implementation written by Eric Young (eay@cryptsoft.com).


The implementation was written so as to conform with Netscape SSL.

This library is free for commercial and non-commercial use as long as the
following conditions are adhered to. The following conditions apply to all code
found in this distribution, be it the RC4, RSA, lhash, DES, etc., code; not just the
SSL code. The SSL documentation included with this distribution is covered by the
same copyright terms except that the holder is Tim Hudson (tjh@cryptsoft.com).

Copyright remains Eric Young’s, and as such any Copyright notices in the code are
not to be removed. If this package is used in a product, Eric Young should be
given attribution as the author of the parts of the library used. This can be in the
form of a textual message at program startup or in documentation (online or
textual) provided with the package.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the copyright notice, this list of
conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
display the following acknowledgement: ″This product includes cryptographic
software written by Eric Young (eay@cryptsoft.com)″ The word ’cryptographic’
can be left out if the routines from the library being used are not cryptographic
related :-).
4. If you include any Windows specific code (or a derivative thereof) from the
apps directory (application code) you must include an acknowledgement: ″This
product includes software written by Tim Hudson (tjh@cryptsoft.com)″

THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ″AS IS″ AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

The licence and distribution terms for any publically available version or
derivative of this code cannot be changed. i.e. this code cannot simply be copied
and put under another distribution licence [including the GNU Public Licence.]

Open source: SNMP library


Copyright 1988, 1989 by Carnegie Mellon University

All Rights Reserved

Permission to use, copy, modify, and distribute this software and its documentation
for any purpose and without fee is hereby granted, provided that the above
copyright notice appear in all copies and that both that copyright notice and this
permission notice appear in supporting documentation, and that the name of CMU
not be used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.

CMU DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL CMU BE LIABLE FOR ANY SPECIAL,
INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE
OR PERFORMANCE OF THIS SOFTWARE.

Open source: time zone library


Copyright (c) 1985, 1987, 1988 The Regents of the University of California.

All rights reserved.

Redistribution and use in source and binary forms are permitted provided that the
above copyright notice and this paragraph are duplicated in all such forms and
that any documentation, advertising materials, and other materials related to such
distribution and use acknowledge that the software was developed by the
University of California, Berkeley. The name of the University may not be used to
endorse or promote products derived from this software without specific prior
written permission. THIS SOFTWARE IS PROVIDED ″AS IS″ AND WITHOUT
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT
LIMITATION, THE IMPLIED WARRANTIES OF MERCHANT[A]BILITY AND
FITNESS FOR A PARTICULAR PURPOSE.

Toolkit of Tivoli Internationalization Services
This package contains open software from Alfalfa Software. Use of this open
software in IBM products has been approved by the IBM OSSC, but there is a
requirement to include the Alfalfa Software copyright and permission notices in
product supporting documentation. This could just be in a readme file shipped
with the product. Here is the text of the copyright and permission notices:

Copyright 1990, by Alfalfa Software Incorporated, Cambridge, Massachusetts.

All Rights Reserved

Permission to use, copy, modify, and distribute this software and its documentation
for any purpose and without fee is hereby granted, provided that the above
copyright notice appear in all copies and that both that copyright notice and this
permission notice appear in supporting documentation, and that Alfalfa’s name not
be used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.

ALPHALPHA DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS, IN NO EVENT SHALL ALPHALPHA BE LIABLE FOR ANY
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE
OR PERFORMANCE OF THIS SOFTWARE.

Trademarks
IBM, Tivoli, the Tivoli logo, Tivoli Enterprise Console, AIX, AS/400, BookManager,
Dynix, OS/390, NetView, and Sequent are trademarks or registered trademarks of
International Business Machines Corporation or Tivoli Systems Inc. in the United
States, other countries, or both.

Intel is a registered trademark of Intel Corporation.

Microsoft, Windows, and Windows NT are registered trademarks of Microsoft
Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other
countries.

Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and other countries.

Other company, product, and service names may be trademarks or service marks
of others.



Glossary
A Deadline. The last moment in time that a job or job
stream can begin execution. This corresponds to the
Until time in legacy Maestro.
Access method. An access method is an executable
used by extended agents to connect and control job Dependency. A dependency is a prerequisite that
execution on other operating systems (for example, must be satisfied before the execution of a job or job
MVS™) and applications (for example, Oracle stream can proceed. The maximum number of
Applications, Peoplesoft, and Baan). The access method dependencies permitted for a job or job stream is 40.
must be specified in the workstation definition for the The four types of dependencies used by Tivoli
extended agent. Workload Scheduler are follows dependencies, resource
dependencies, file dependencies, and prompt
B dependencies.

Batchman. Batchman is a process started at the Domain. A domain is a named group of Tivoli
beginning of each Tivoli Workload Scheduler Workload Scheduler workstations consisting of one or
processing day to launch jobs in accordance with the more agents and a domain manager acting as the
information in the Symphony file. management hub. All domains have a parent domain
except for the master domain.

C Domain Manager. The management hub in a Tivoli


Workload Scheduler domain. All communications to
Calendar. A calendar is a defined object in the Tivoli and from the agents in the domain are routed through
Workload Scheduler database that contains a list of the domain manager.
scheduling dates. Because it is a unique object defined
in database, it can be assigned to multiple job streams. Duration. The time you expect the job to take to
Assigning a calendar to a job stream causes that job complete. In the Timeline view of jobs in the database,
stream to be run on the days specified in the calendar. the duration is represented by a light blue bar at the
Note that a calendar can be used as an inclusionary or center of the activity bar or by a light blue diamond.
exclusionary run cycle.

Conman. Conman (console manager) is a legacy


E
command-line application for managing the production
Earliest start time. The time before which the job or
environment. Conman performs the following tasks:
job stream cannot start. The earliest start time is an
start and stop production processes, alter and display
estimate based on previous experiences running the job
schedules and jobs in the plan, and control workstation
or job stream. However, the job or job stream can start
linking in a network.
after the time you specify as long as all other
Composer. Composer is a legacy command-line dependencies are satisfied. In the timeline, the start
application for managing the definitions of your time is represented by the beginning (left edge) of the
scheduling objects in the database. navy blue activity bar. For job instances, the start time
that OPC calculates is represented by a light blue bar.
See also “Actual start time” and “Planned start time”.
D
Exclusionary run cycle. A run cycle that specifies the
Database. The database contains all the definitions days a job stream cannot be run. Exclusionary run
you have created for scheduling objects (for example, cycles take precedent over inclusionary run cycles.
jobs, job streams, resources, workstations, etc). In
addition, the database holds other important Expanded database. Expanded databases allow longer
information such as statistics of job and job stream names for database objects such as jobs, job streams,
execution, information on the user ID who created an workstations, domains, and users. Expanded databases
object, and an object’s last modified date. In contrast, are configured using the dbexpand command or as an
the plan contains only those jobs and job streams option during installation. Do not expand your
(including dependent objects) that are scheduled for database before understanding the implications and
execution in today’s production. impact of this command.

Extended agent. Extended agents are used to integrate


Tivoli Workload Scheduler’s job control features with

© Copyright IBM Corp. 1991, 2004 165


other operating systems (for example, MVS) and Internetwork (INET) dependencies. A dependency
applications (for example, Oracle Applications, between jobs or job streams in separate Tivoli Workload
Peoplesoft, and Baan). Extended agents use scripts Scheduler networks. See also “Network agent”.
called access methods to communicate with external
systems. Internetwork (INET) job / job stream. A job or job
stream from a remote Tivoli Workload Scheduler
External job. A job from one job stream that is a network that is a predecessor to a job or job stream in
predecessor for a job in another job stream. An external the local network. An Internetwork job is represented
job is represented by a place holder icon in the Graph by a place holder icon in the Graph view of the job
view of the job stream. stream. See also “Network agent”.

F J
Fault-tolerant agent. An agent workstation in the Jnextday job. Pre- and post-production processing can
Tivoli Workload Scheduler network capable of be fully automated by scheduling the Jnextday job to
resolving local dependencies and launching its jobs in run at the end of each day. A sample jnextday job is
the absence of a domain manager. provided as TWShome\Jnextday. The Jnextday job does
the following: sets up the next day’s processing
Fence. The job fence is a master control over job (contained in the Symphony file), prints reports, carries
execution on a workstation. The job fence is a priority forward unfinished job streams, and stops and restarts
level that a job or job stream’s priority must exceed Tivoli Workload Scheduler.
before it can run. For example, setting the fence to 40
prevents jobs with priorities of 40 or less from being Job. A job is a unit of work that is processed at a
launched. workstation. The job definition consists of a unique job
name in the Tivoli Workload Scheduler database along
Final Job Stream. The FINAL job stream should be with other information necessary to run the job. When
the last job stream that is run in a production day. It you add a job to a job stream, you can define its
contains a job that runs the script file Jnextday. dependencies and its time restrictions such as the
estimated start time and deadline.
Follows dependency. A dependency where a job or
job stream cannot begin execution until other jobs or Job Instance. A job scheduled for a specific run date
job streams have completed successfully. in the plan. See also “Job”.

Job status. See “Status”.


G
Job Stream. A Job Stream consists of a list of jobs that
Global options. The global options are defined on the run as a unit (such as a weekly backup application),
master domain manager in the globalopts file, and along with times, priorities and other dependencies
these options apply to all workstations in the Tivoli that determine the exact order of job execution.
Workload Scheduler network. See also “Local options”.
Job stream instance. A job stream that is scheduled
H for a specific run date in the plan. See also “Job
stream”.
Host. A Workload Scheduler workstation required by
extended agents. It can be any Tivoli Workload L
Scheduler workstation except another extended agent.
Limit. Job limits provide a means of allocating a
I specific number of job slots into which Tivoli Workload
Scheduler is allowed to launch jobs. A job limit can be
set for each job stream, and for each workstation. For
Inclusionary Run Cycle. A run cycle that specifies the
example, setting the workstation job limit to 25 permits
days a job stream is scheduled to run. Exclusionary run
Tivoli Workload Scheduler to have no more than 25
cycles take precedent over inclusionary run cycles.
jobs executing concurrently on the workstation.
Interactive jobs. A job that runs interactively on a
List. A list displays job scheduling objects. You must
Windows NT desktop.
create separate lists for each job scheduling object. For
Internal status. Internal status reflects the current each job scheduling object, there are two types of lists:
status of jobs and job streams in the Tivoli Workload one of definitions in the database and another of
Scheduler engine. Internal status is unique to Tivoli instances in the plan.
Workload Scheduler. See also Status.

166 IBM Tivoli Workload Scheduler Planning and Installation Guide


Local options. The local options are defined in the properties of a job or job stream and is unique to that
localopts file. Each workstation in the Tivoli Workload job or job stream. A predefined prompt is defined in
Scheduler network must have a localopts file. The the Tivoli Workload Scheduler database and can be
settings in this file apply only to that workstation. See used by any job or job stream.
also “Global options”.
M

Master Domain Manager. In a Tivoli Workload Scheduler network, the master domain manager maintains the files used to document the scheduling objects. It creates the plan at the start of each day, and performs all logging and reporting for the network.

N

Network agent. A type of extended agent used to create dependencies between jobs and job streams on separate Tivoli Workload Scheduler networks. See also “Internetwork (INET) dependency”.
P

Parameter. Parameters are used to substitute values into your jobs and job streams. When using a parameter in a job script, the value is substituted at run time. In this case, the parameter must be defined on the workstation where it will be used. Parameters cannot be used when scripting extended agent jobs.
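As a sketch, a parameter named MYDATADIR (an invented name) that has been defined on a UNIX workstation can be resolved in a job script at run time with the parms utility command:

   #!/bin/sh
   # parms returns the current value of a parameter
   # defined on this workstation
   INDIR=`parms MYDATADIR`
   cp "$INDIR"/today.dat /var/tmp/work/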

Plan. The plan contains all job scheduling activity planned for a period of one day. In Tivoli Workload Scheduler, the plan is created every 24 hours and consists of all the jobs, job streams, and dependency objects that are scheduled to run for that day. All job streams for which you have created run cycles are automatically scheduled and included in the plan. As the production cycle progresses, the jobs and job streams in the plan are run according to their time restrictions and other dependencies. Any jobs or job streams that do not run successfully are rolled over into the next day’s plan.

Planned Start Time. The time that Tivoli Workload Scheduler estimates a job instance will start. This estimate is based on start times of previous executions.

Predecessor. A job that must complete successfully before successor jobs can begin execution.

Priority. Tivoli Workload Scheduler has a queuing system for jobs and job streams in the plan. You can assign a priority level for each job and job stream from 0 to 101. A priority of 0 will not run.

Prompt. Prompts can be used as dependencies for jobs and job streams. A prompt must be answered affirmatively for the dependent job or job stream to launch. There are two types of prompts: predefined and ad hoc. An ad hoc prompt is defined within the properties of a job or job stream and is unique to that job or job stream. A predefined prompt is defined in the Tivoli Workload Scheduler database and can be used by any job or job stream.
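For illustration, a predefined prompt might be defined in the database with a sketch like the following (the name and message text are invented); a job or job stream then references it with the PROMPT keyword:

   $PROMPT
   TAPECHECK "Are the backup tapes loaded?"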
R

Resource. Resources can represent either physical or logical resources on your system. Once defined in the Tivoli Workload Scheduler database, they can be used as dependencies for jobs and job streams. For example, you can define a resource named “tapes” with a unit value of two. Then, define jobs that require two available tape drives as a dependency. Jobs with this dependency cannot run concurrently because each time a job is run the “tapes” resource is in use.

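Continuing the “tapes” example, a sketch of the resource definition and its use might look like this (names invented; the NEEDS keyword declares the dependency):

   $RESOURCE
   SITE1#TAPES 2 "Tape drives on site1"

A job or job stream on SITE1 would then include, for example, NEEDS 2 TAPES, and instances compete for the two available units.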
Run cycle. A run cycle specifies the days that a job stream is scheduled to run. In Tivoli Workload Scheduler there are three types of run cycles you can specify for a job stream: a Simple run cycle, a Weekly run cycle, or a Calendar run cycle (commonly called a calendar). Note that each type of run cycle can be inclusionary or exclusionary. That is, each run cycle can define the days a job stream is included in the production cycle, or the days a job stream is excluded from the production cycle. When you define multiple run cycles to a job stream, and inclusionary and exclusionary run cycles specify the same days, the exclusionary run cycles take precedence.

S

Simple Run Cycle. A simple run cycle is a specific set of user-defined days a job stream is run. A simple run cycle is defined for a specific job stream and cannot be used by multiple job streams. For more information see “Run cycle”.

Status. Status reflects the current job or job stream status within the Job Scheduling Console. The Job Scheduling Console status is common to Tivoli Workload Scheduler and OPC. See also “Internal status”.

stdlist file. A standard list file is created for each job launched by Tivoli Workload Scheduler. Standard list files contain header and trailer banners, echoed commands, errors, and warnings. These files can be used to troubleshoot problems in job execution.

Successor. A job that cannot start until all of the predecessor jobs on which it is dependent are completed successfully.

Symphony file. This file contains the scheduling information needed by the Production Control process (batchman) to run the plan. The file is built and loaded during the pre-production phase. During the production phase, it is continually updated to indicate the current status of production processing: work completed, work in progress, work to be done. To manage production processing, the contents of the Symphony file (plan) can be displayed and altered with the Job Scheduling Console.

T

Time restrictions. Time restrictions can be specified for both jobs and job streams. A time can be specified for execution to begin, or a time can be specified after which execution will not be attempted. By specifying both, you can define a window within which a job or job stream will run. For jobs, you can also specify a repetition rate. For example, you can have Tivoli Workload Scheduler launch the same job every 30 minutes between the hours of 8:30 a.m. and 1:30 p.m.
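The 8:30 a.m. to 1:30 p.m. example above can be sketched in the scheduling language as follows (names invented; AT sets the earliest start, UNTIL the time after which the job is no longer launched, and EVERY the repetition rate):

   SCHEDULE SITE1#POLLING
   ON MO, TU, WE, TH, FR
   :
   CHECKFEED AT 0830 UNTIL 1330 EVERY 0030
   END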
Tivoli Management Framework (TMF). The base software that is required to run the applications in the Tivoli product suite. This software infrastructure enables the integration of systems management applications from Tivoli Systems Inc. and the Tivoli Partners. The Tivoli Management Framework includes the following:
• Object request broker (oserv)
• Distributed object database
• Basic administration functions
• Basic application services
• Basic desktop services such as the graphical user interface

In a Tivoli environment, the Tivoli Management Framework is installed on every client and server. However, the TMR server is the only server that holds the full object database.

Tivoli Management Region (TMR). In a Tivoli environment, a Tivoli server and the set of clients that it serves. An organization can have more than one TMR. A TMR addresses the physical connectivity of resources whereas a policy region addresses the logical organization of resources.

Tree view. The view on the left side of the Job Scheduling Console that displays the Tivoli Workload Scheduler server, groups of default lists, and groups of user created lists.

U

User. For Windows NT only, the user name specified in a job definition’s “Logon” field must have a matching user definition. The definitions furnish the user passwords required by Tivoli Workload Scheduler to launch jobs.

Utility commands. A set of command-line executables for managing Tivoli Workload Scheduler.

W

Weekly Run Cycle. A run cycle that specifies the days of the week that a job stream is run. For example, a job stream can be specified to run every Monday, Wednesday, and Friday using a weekly run cycle. A weekly run cycle is defined for a specific job stream and cannot be used by multiple job streams. For more information see “Run cycle”.

Wildcards. The wildcards for Tivoli Workload Scheduler are:
?   Replaces one alphanumeric character.
%   Replaces one numeric character.
*   Replaces zero or more alphanumeric characters in the Tivoli Job Scheduling console.
@   Replaces zero or more alphanumeric characters in the Tivoli Workload Scheduler command line.

Wildcards are generally used to refine a search for one or more objects in the database. For example, if you want to display all workstations, you can enter the asterisk (*) wildcard. To get a listing of workstations site1 through site8, you can enter site%.

Workstation. A workstation is usually an individual computer on which jobs and job streams are run. Workstations are defined in the Tivoli Workload Scheduler database as unique objects. A workstation definition is required for every computer that runs jobs or job streams in the Workload Scheduler network.

Workstation class. A workstation class is a group of workstations. Any number of workstations can be placed in a class. Job streams and jobs can be assigned to run on a workstation class. This makes replication of a job or job stream across many workstations easy.
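For illustration, a workstation class might be defined with a sketch like the following (the names are invented, and the exact member-list layout is an assumption; see the reference manual for the cpuclass syntax). Job streams defined on BACKUPCPUS then run on every member workstation:

   CPUCLASS BACKUPCPUS
   MEMBERS
    SITE1
    SITE2
    SITE3
   END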


X

X-agent. See “Extended agent”.



Index
Special characters
/usr/unison/components 153

Numerics
5601-453, program number xvi
5601-454, program number xvi
5765-086, program number xvi

A
access method
   local UNIX 7
   remote UNIX 7
accessibility xvi
adapter.config
   upgrading 32
Administrator
   adding 105
APARs
   IY45982 85
   IY46485 100
   IY47753 32
   IY48407 30
   IY49332 73, 75
   IY50279 33
   IY50282 32
   IY53209 112, 113
   IY57227 121
AT keyword 98, 99
auditing
   database level 83
   plan level 84

B
backup domain manager
   switching 17
backup files 32
backup master 107
   creating 108
   moving 108
behind firewall
   extended agent 150
behindfirewall option 149
bm check deadline option 87
bm check file option 87
bm check status option 87
bm check until option 87
bm look option 88
bm read option 88
bm status option 88
bm verbose option 88
BmEvents.conf
   upgrading 32
BookManager xvi
books xii
   see publications xii, xv
books, online xvi

C
caonly SSL auth mode option 90
carryforward keyword 99
carryforward option 82, 85
CDs, installation 27
centralized security option 83
certificates 140, 144
certification authority 144
CLEvents.conf
   upgrading 32
CLI
   conman 17
   switchmgr 17
   wimpspo 53
   winstsp 53
   wmaeutil 31
commands
   console 97
   dumpsec 75
   evtsize 74
   makesec 75
   wmaeutil 75
   wremovsp 152
compiler 97, 99
components file 9
components file, viewing 9
configuration files 32
configuration scripts 7, 100
connector 33
   install location 19
Connector
   where to install 19
console command 97
console messages and prompts 96
conventions
   typeface xvii
cpu SSL auth mode option 90
cpuname 109
creating
   the backup master 108
   user on Windows 26
customer support
   see Software Support 156
customize script
   running 68
   syntax 57

D
database
   mounting 109
database audit level 83
deadline 78, 99
dependency
   file 6
directories
   sharing 94
directory names, notation xvii
dumpsec command 75

E
enable list security check 83
enabling the time zone feature 78
environment variables, notation xvii
evtsize command 74
extended agent
   behind firewall 150
extended agents 6

F
fault tolerance
   backup domain manager 17
   switching 17
file dependency 6
final 97, 108
final job stream 98
   adding 73
firewall support 149
fixes, obtaining 156
full status 108

G
global options
   file example 84
   file template 84
   setting 81
   syntax 81
globalopts 108
   upgrading 32
globalopts file
   time zone feature 78

H
history option 83

I
information centers, searching to find software problem resolution 155
installation
   adding new features 43
   CDs 27
   fresh install 37
   log files 29
   prerequisites 33
   promoting 45
   silent 45
   software package blocks 51
   Tier 1 platforms 37
   Tier 2 platforms 58
   upgrading 33, 61, 62
   wizard program 37
Internet, searching to find software problem resolution 155, 156
internetwork dependencies 15



J
jm job table size option 88
jm look option 88
jm nice option 88
jm no root option 88
jm read option 89
Jnextday 97
   running 73
Job Scheduling Console 33
   multiple users 105
job stream
   customizing final 98

K
knowledge bases, searching to find software problem resolution 155

L
language packs
   installing 23, 44, 49, 51, 54, 67, 152
   removing 151
library xii
list permission
   enable option 83
local options
   file example 93
   file template 93
   setting 86, 95
   setting sysloglocal 96
   SSL 147
   syntax 86
localopts
   upgrading 32
log files 29
logman 97

M
managed node 105
manuals xii
   see publications xii, xv
master
   backup 107
master domain manager 3
   install location 19
   shared directories 94
merge stdlists option 89
message level 97
migrating 33
mm read option 89
mm response option 89
mm retry link option 89
mm sound off option 89
mm unlink option 89
mounting databases 109
mozart 109

N
naming conventions
   workstations 19
NFS mount 108
nm ipvalidate option 89
nm mortal 90
nm port option 90
nm read option 90
nm retry option 90
nm SSL port option 90
notation
   environment variables xvii
   path names xvii
   typeface xvii

O
object dispatcher 105
on keyword 98
online books xvi
online publications
   accessing xv
ordering publications xvi
oserv 105

P
parameters 108
parameters.KEY 108
path names, notation xvii
plan audit level 84
port number 53, 58, 147
pre-production reports 98
private keys 144
problem determination
   describing problem for IBM Software Support 157
   determining business impact for IBM Software Support 157
   submitting problem to IBM Software Support 157
process messages 96
process prompts 96
product groups 8
product library xii
program numbers
   5601-453 xvi
   5601-454 xvi
   5765-086 xvi
publications xii
   accessing online xv
   ordering xvi

R
registry file
   attributes 8
   example 8
removing the product 151
rep8 97
reptr 97
resolve dependencies 108

S
scheddate 99
schedulr 97, 99
scripts
   configuration 7
   startup 4
scripts, configuration 100
security
   overview 139
security file
   defining Tivoli administrators 105
   updating 75
security level
   enabled 146
   force 146
   on 146
services
   stopping 30
setting the global options 81
setting the local options 86
setup programs 27
setup_env 31, 76
Sfinal 97
   adding 73
shared directories
   master domain manager 94
silent installation 45
softcopy books xvi
Software Support
   contacting 156
   describing problem for IBM Software Support 157
   determining business impact for IBM Software Support 157
   submitting problem to IBM Software Support 157
SSL attributes
   configuring 146
SSL auth mode option 90
SSL auth string 91
SSL CA certificate option 91
SSL certificate chain option 91
SSL certificate option 91
SSL communication
   enabled 146
   force 146
   on 146
SSL encryption cipher option 91
SSL key option 91
SSL key pwd option 91
SSL port number 147
SSL random seed option 91
SSL support
   authentication methods 141
   concepts 140
   configuring 143
stageman 97
start 78, 98, 99
start of day 84, 99
start time 84
startup script 4
stdlist width option 91
stopping
   services 30
string SSL auth mode option 90
submit 19
support Web site, searching to find software problem resolution 155
switching
   backup domain manager 17
   fault-tolerant 17
switchmgr 17, 108
Symphony 19



sync level 92
syslog 96
syslog local option 92

T
tcp port 53, 58
tcp timeout option 92
thiscpu 92
time zone
   enable option 84
   on backup master 107
   overview 20
timezone enable 78, 109
Tivoli desktop 105
Tivoli Management Framework
   as prerequisite 33
   supported versions 33
Tivoli Management Region 105
Tivoli Management Server 105
Tivoli software information center xv
TWS connector 3
   instances 20
TWS_TISDIR variable 77
twsinst
   authorization roles 24
   usage 47
typeface conventions xvii

U
uninstalling
   Tier 1 platforms 151
   Tier 2 platforms 152
unlink workstations 30
updating 33
   Tier 2 platforms 68
upgrading 33
   Tier 1 61, 62
   Tier 2 platforms 68
user account
   creating on UNIX 26
   creating on Windows 26
   rights 26
user name
   creating 38, 41, 44, 52

V
variables, notation for xvii

W
window
   Create Workstations 109
wmaeutil command 75
wmaeutil 31
workstations
   naming conventions 19
   unlinking 30
wr enable compression option 92
wr read option 92
wr unlink option 92
wremovsp command 152



Program Number: 5698-WSH

Printed in USA

SC32-1273-02
