IBM Tivoli Workload Scheduler
Planning and Installation Guide
Version 8.2 (Revised December 2004)
SC32-1273-02
Note
Before using this information and the product it supports, read the information in “Notices” on page 159.
List of figures
1. Process flows . . . 5
2. Processes in the network operation . . . 6
3. Single domain topology . . . 15
4. Internetwork dependencies . . . 16
5. Multiple domain topology . . . 16
6. Multiple inbound connections architecture . . . 18
7. Common listener agent architecture . . . 132
In addition to the restructuring of the book, new information relating to the fix
packs issued since the previous release has also been added. This information
relates to new functionality added in fix pack 3 for the backup facility, and new
fault-tolerant switch management functionality added in fix pack 5.
Publications
This section lists publications in the Tivoli Workload Scheduler library and any other
related documents. It also describes how to access Tivoli publications online and
how to order Tivoli publications.
Note: This guide is only available on the product CD. It is not possible to access
it online, as you can the other books (see “Accessing publications online”
on page xv).
Explains how to plan and schedule the workload and how to control and
monitor the current plan.
v IBM Tivoli Workload Scheduler for z/OS: Quick Reference, SC32-1268
Provides a quick and easy consultation reference to operate Tivoli Workload
Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Diagnosis Guide and Reference, SC32-1261
Provides information to help diagnose and correct possible problems when using
Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Messages and Codes, SC32-1267
Explains messages and codes in Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Programming Interfaces, SC32-1266
Provides information to write application programs for Tivoli Workload
Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Licensed Program Specifications, GI11-4208
Provides planning information about Tivoli Workload Scheduler for z/OS.
v IBM Tivoli Workload Scheduler for z/OS: Memo for program 5697-WSZ, GI11-4209
Provides a summary of changes for the current release of the product.
v IBM Tivoli Workload Scheduler for z/OS: Program Directory for program 5697-WSZ,
GI11-4203
Provided with the installation tape for Tivoli Workload Scheduler for z/OS
(program 5697-WSZ), describes all of the installation materials and gives
installation instructions specific to the product release level or feature number.
v IBM Tivoli Workload Scheduler for z/OS: Program Directory for program 5698-WSC,
GI11-4207
Provided with the installation tape for Tivoli Workload Scheduler for z/OS
(program 5698-WSC), describes all of the installation materials and gives
installation instructions specific to the product release level or feature number.
v IBM Tivoli Workload Scheduler for Virtualized Data Centers: User’s Guide, SC32-1454
Describes how to extend the scheduling capabilities of Tivoli Workload
Scheduler to workload optimization and grid computing by enabling the control
of IBM LoadLeveler® and IBM Grid Toolbox jobs.
See http://www.ibm.com/software/info/ecatalog/en_US/products/Y614224T20392S50.html
for an introduction to the product.
Related publications
The following documents provide additional information:
v IBM Redbooks™: High Availability Scenarios with IBM Tivoli Workload Scheduler and
IBM Tivoli Framework
This IBM Redbook shows you how to design and create highly available IBM
Tivoli Workload Scheduler and IBM Tivoli Management Framework (TMR
server, Managed Nodes and Endpoints) environments. It presents High
Availability Cluster Multiprocessing (HACMP™) for AIX® and Microsoft®
Windows® Cluster Service (MSCS) case studies.
This Redbook can be found on the Redbooks Web site at
http://www.redbooks.ibm.com/abstracts/sg246632.html
v IBM Redbooks: Customizing IBM Tivoli Workload Scheduler for z/OS V8.2 to Improve
Performance
This IBM Redbook covers the techniques that can be used to improve the
performance of Tivoli Workload Scheduler for z/OS (including end-to-end
scheduling).
This Redbook can be found on the Redbooks Web site at
http://www.redbooks.ibm.com/abstracts/sg246352.html
v IBM Redbooks: End-to-End Scheduling with IBM Tivoli Workload Scheduler Version 8.2
This IBM Redbook considers how best to provide end-to-end scheduling using
Tivoli Workload Scheduler Version 8.2, both distributed (previously known as
Maestro™) and mainframe (previously known as OPC) components.
This Redbook can be found on the Redbooks Web site at
http://www.redbooks.ibm.com/abstracts/sg246624.html
The Tivoli Software Glossary includes definitions for many of the technical terms
related to Tivoli software. The Tivoli Software Glossary is available at the following
Tivoli software library Web site:
http://publib.boulder.ibm.com/tividd/glossary/tivoliglossarymst.htm
IBM posts publications for this and all other Tivoli products, as they become
available and whenever they are updated, to the Tivoli software information center
Web site. Access the Tivoli software information center by first going to the Tivoli
software library at the following Web address:
http://www.ibm.com/software/tivoli/library/
Scroll down and click the Product manuals link. In the Tivoli Technical Product
Documents Alphabetical Listing window, click the appropriate Tivoli Workload
Scheduler product link to access the product’s libraries at the Tivoli software
information center. All publications in the Tivoli Workload Scheduler suite library,
distributed library and z/OS library can be found under the entry Tivoli Workload
Scheduler.
Note: If you print PDF documents on other than letter-sized paper, set the option
in the File → Print window that allows Adobe Reader to print letter-sized
pages on your local paper.
Ordering publications
You can order many Tivoli publications online at the following Web site:
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi
For a list of telephone numbers by country, see the following Web site:
http://www.ibm.com/software/tivoli/order-lit/
Accessibility
Accessibility features help users with a physical disability, such as restricted
mobility or limited vision, to use software products successfully. With this product,
you can use assistive technologies to hear and navigate the interface. You can also
use the keyboard instead of the mouse to operate all features of the graphical user
interface.
For additional information, see the Accessibility Appendix in the Tivoli Workload
Scheduler Job Scheduling Console User’s Guide.
Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM
provides the following ways for you to obtain the support you need:
v Searching knowledge bases: You can search across a large collection of known
problems and workarounds, Technotes, and other information.
v Obtaining fixes: You can locate the latest fixes that are already available for your
product.
v Contacting IBM Software Support: If you still cannot solve your problem, and
you need to work with someone from IBM, you can use a variety of ways to
contact IBM Software Support.
For more information about these three ways of resolving problems, see “Support
information,” on page 155.
Typeface conventions
This guide uses the following typeface conventions:
Bold
v Lowercase commands and mixed case commands that are otherwise
difficult to distinguish from surrounding text
v Interface controls (check boxes, push buttons, radio buttons, spin
buttons, fields, folders, icons, list boxes, items inside list boxes,
multicolumn lists, containers, menu choices, menu names, tabs, property
sheets), labels (such as Tip:, and Operating system considerations:)
v Keywords and parameters in text
Italic
v Words defined in text
v Emphasis of words (words as words)
v New terms in text (except in a definition list)
v Variables and values you must provide
Monospace
v Examples and code examples
v File names, programming keywords, and other elements that are difficult
to distinguish from surrounding text
v Message text and prompts addressed to the user
v Text that the user must type
v Values for arguments or command options
When using the Windows command line, replace $variable with %variable% for
environment variables and replace each forward slash (/) with a backslash (\) in
directory paths. The names of environment variables are not always the same in
Windows and UNIX. For example, %TEMP% in Windows is equivalent to $tmp in
UNIX.
Note: If you are using the bash shell on a Windows system, you can use the UNIX
conventions.
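For example (the variable name TWSHOME and the path are illustrative, not taken from this guide):
v On UNIX, run:
ls $TWSHOME/bin
v On Windows, run:
dir %TWSHOME%\bin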
Command syntax
This guide uses the following syntax wherever it describes commands:
Table 1. Command syntax conventions

Brackets ([ ])
    The information enclosed in brackets is optional. Anything not enclosed in
    brackets must be specified.
Braces ({ })
    Braces identify a set of mutually exclusive options, of which one must be
    specified.
Underscore (_)
    An underscore connects multiple words in a variable.
Vertical bar (|)
    Mutually exclusive options are separated by a vertical bar. You can enter one
    of the options separated by the vertical bar, but you cannot enter multiple
    options in a single use of the command. A vertical bar can separate either
    optional or required options.
Bold
    Bold text designates literal information that must be entered on the command
    line exactly as shown. This applies to command names and non-variable
    options.
Italic
    Italic text is variable and must be replaced by whatever it represents.
When you are installing a Tivoli Workload Scheduler network, you can choose
from the following types of workstation:
Master Domain Manager (MDM)
The master domain manager is the workstation that manages the topmost
domain of a Tivoli Workload Scheduler network. It contains the centralized
database files used to document scheduling objects. It creates the production
plan, distributes it to all the agents in the network at the start of each day, and
performs all logging and reporting for the network.
Backup Master
A fault-tolerant agent capable of assuming the responsibilities of the master
domain manager.
In addition to the roles you can select during installation, fault-tolerant agents
can assume one of the following roles:
Domain Manager
The management hub in a domain. All communications to and from the agents
in a domain are routed through the domain manager.
Host
The scheduling function required by extended agents. It can be performed by
any Tivoli Workload Scheduler workstation, except another extended agent.
Extended Agent
A logical workstation definition that enables you to launch and control jobs on
other systems and applications.
Network Agent
A logical workstation definition for creating dependencies between jobs and
job streams in separate Tivoli Workload Scheduler networks.
Processes
Netman is started by the StartUp script. The order of process creation is Netman,
Mailman, Batchman, and Jobman. On standard agent workstations, Batchman does
not run. All processes, except Jobman, run as the TWS user. Jobman runs as root.
Domain managers, including the master domain manager, can communicate with a
large number of agents and subordinate domain managers. For improved
efficiency, you can define Mailman servers on a domain manager to distribute the
communications load (see the section that explains how to manage workstations in
the database in the IBM Tivoli Workload Scheduler Job Scheduling Console User’s
Guide).
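As an illustration, you can verify from the command line that these processes are running (a sketch; the user name twsuser is illustrative):
ps -u twsuser | egrep "netman|writer|mailman|batchman|jobman"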
[Figure 1. Process flows: on a master or domain manager and on a fault-tolerant agent, StartUp starts Netman, which spawns Writer, Mailman, Batchman, and Jobman; an extended agent is reached from its host through an access method.]
Network communications
In a Tivoli Workload Scheduler network, agents communicate with their domain
managers, and domain managers communicate with their parent domain
managers. Two basic types of communication take place:
v Start-of-day initialization
v Scheduling change-of-state event messages during the processing day
Before the start of each new day, the master domain manager creates a production
control file called Symphony. Then, Tivoli Workload Scheduler is restarted in the
network, and the master domain manager sends a copy of the new Symphony file to
each of its automatically-linked agents and subordinate domain managers. The
domain managers, in turn, send copies to their automatically-linked agents and
subordinate domain managers. Agents and domain managers that are not set up to
link automatically are initialized with a copy of Symphony as soon as a link
operation is run in Tivoli Workload Scheduler. The autolink flag is set by default
when a workstation is created in Job Scheduling Console.
After the network is started, scheduling messages, like job starts and completions,
are passed from the agents to their domain managers, through parent domain
managers to the master domain manager. The master domain manager then
broadcasts the messages throughout the hierarchical tree to update the domain
managers and all fault-tolerant agents running in full status mode.
Network operation
The Batchman process on each domain manager and fault-tolerant agent
workstation operates autonomously, scanning its Symphony file to resolve
dependencies and launch jobs. Batchman launches jobs via the Jobman process. On
a standard agent, the Jobman process responds to launch requests from the domain
manager’s Batchman.
The degree of synchronization among the Symphony files depends on the setting of
Full Status and Resolve Dependencies modes in a workstation’s definition.
Assuming that these modes are turned on, a fault-tolerant agent’s Symphony file
contains the same information as the master domain manager’s (see the IBM Tivoli
Workload Scheduler Job Scheduling Console User’s Guide).
Extended agents
An extended agent serves as an interface to an external, non-Tivoli Workload
Scheduler system or application. It is defined as a Tivoli Workload Scheduler
workstation with an access method and a host. The access method communicates
with the external system or application to launch and monitor jobs and to check
file dependencies (a job or job stream that must verify the existence of one or
more files before it can begin running is said to have a file dependency). The host
is another Tivoli Workload Scheduler workstation (except another extended agent)
that resolves dependencies and issues job launch requests via the method.
Jobs are defined for an x-agent in the same manner as for other Tivoli Workload
Scheduler workstations, except that job attributes are dictated by the external
system or application.
Extended agent software is available for several systems and applications. The
UNIX extended agents, included with Tivoli Workload Scheduler are described in
the following section.
To run jobs via the x-agent, the job logon users must be given appropriate access
on the non-Tivoli Workload Scheduler UNIX computer. To do this, a .rhosts,
/etc/hosts.equiv, or equivalent file should be set up on the computer. If Opens file
dependencies are to be checked, root access must also be permitted. Contact your
system administrator for help. For more information about the access method,
examine the script file TWShome/methods/unixrsh on an x-agent’s host.
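For example, a minimal .rhosts entry on the non-Tivoli Workload Scheduler computer might look as follows (the host and user names are illustrative):
# /home/joblogon/.rhosts: allow the x-agent host to run jobs as this logon
xagent-host.example.com twsuser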
For more information on managing jobs, see the section that describes Tivoli Workload
Scheduler plan tasks in the IBM Tivoli Workload Scheduler Job Scheduling Console
User’s Guide.
Product instances
Multiple copies of the product can be installed on a single computer provided that
a unique name and installation path are used for each instance. Instances are
recorded in the registry file for Tier 1 platforms and in the components file for Tier
2 platforms. Former versions of Tivoli Workload Scheduler were also registered in
the components file.
Registry file
On Tier 1 platforms, when you install Tivoli Workload Scheduler using the ISMP
installation program or the twsinst script, a check is performed to determine
whether there are other Tivoli Workload Scheduler instances already installed. The
TWSRegistry.dat file stores the history of all instances installed, and this is the sole
purpose of this file. On Windows platforms, this file is stored under the system
drive directory, for example, c:\winnt\system32. On UNIX platforms, this file is
stored in the /etc/TWS path. The file contains the values of the following
attributes that define a Tivoli Workload Scheduler installation:
Table 2. Registry file attributes

ProductID
    TWS_ENGINE
PackageName
    The name of the software package used to perform the installation.
InstallationPath
    The absolute path of the Tivoli Workload Scheduler instance.
UserOwner
    The owner of the installation.
MajorVersion
    Tivoli Workload Scheduler release number.
MinorVersion
    Tivoli Workload Scheduler version number.
MaintenanceVersion
    Tivoli Workload Scheduler maintenance version number.
PatchVersion
    The latest product patch number installed.
Agent
    One of the following: standard agent, fault-tolerant agent, master domain
    manager.
FeatureList
    The list of optional features installed.
LPName
    The name of the software package block that installs the language pack.
LPList
    A list of all languages installed for the instance.

For example, the registry file might contain entries such as the following:
/Tivoli/Workload_Scheduler/tws_nord_DN_InstallationPath=c:\TWS\tws_nord
/Tivoli/Workload_Scheduler/tws_nord_DN_UserOwner=tws_nord
/Tivoli/Workload_Scheduler/tws_nord_DN_MaintenanceVersion=
/Tivoli/Workload_Scheduler/tws_nord_DN_Agent=MDM
Components file
For product installations on Tier 2 platforms and for Tivoli Workload Scheduler
version 7.0 and 8.1 installations, product groups are defined in the components file.
This file permits multiple copies of a product to be installed on a single computer
by designating a different user for each copy. If the file does not exist prior to
installation, it is created by the customize script.
Entries in this file are automatically made and updated by the customize script.
On UNIX, the file name of the components file is defined in the variable:
UNISON_COMPONENT_FILE
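For example, to list the product groups registered on a computer (the default path /usr/unison/components is an assumption; UNISON_COMPONENT_FILE takes precedence when it is set):
# show registered Tivoli Workload Scheduler instances on this computer
cat ${UNISON_COMPONENT_FILE:-/usr/unison/components}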
Network planning
Before you begin installing Tivoli Workload Scheduler, determine the answers to
the following questions.
1. Will you use multiple domains or a single domain network structure?
2. If you use multiple domains, how will you divide your domains:
v By geographical locations, for example, London and Paris domains?
v By time zone, for example Pacific Standard Time (PST) and Eastern Standard
Time (EST)?
v By business unit, for example marketing and accounting?
3. Will you activate the time zone feature?
4. Will your environment contain firewalls?
Using multiple domains reduces the amount of network traffic by reducing the
communications between the master domain manager and other computers.
Domain functionality
When you define a new domain, you must identify the parent domain and the
domain manager. The parent domain is the domain directly above the new domain
in the domain hierarchy. All communications to and from a domain are routed
through the parent domain manager.
Network considerations
The following questions will help in making decisions about how to set up your
Tivoli Workload Scheduler network. Some questions involve aspects of your
network, and others involve the applications controlled by Tivoli Workload
Scheduler.
v How large is your Tivoli Workload Scheduler network? How many computers
does it hold? How many applications and jobs does it run?
The size of your network will help you decide whether to use a single domain
or the multiple domain architecture. If you have a small number of computers,
or a small number of applications to control with Tivoli Workload Scheduler,
there may not be a need for multiple domains.
v How many geographic locations will be covered in your Tivoli Workload
Scheduler network? How reliable and efficient is the communication between
locations?
This is one of the primary reasons for choosing a multiple domain architecture.
One domain for each geographical location is a common configuration. If you
choose single domain architecture, you will be more reliant on the network to
maintain continuous processing.
v Do you need centralized or decentralized management of Tivoli Workload
Scheduler?
A Tivoli Workload Scheduler network, with either a single domain or multiple
domains, gives you the ability to manage Tivoli Workload Scheduler from a
single node, the master domain manager. If you want to manage multiple
locations separately, you can consider the installation of a separate Tivoli
Workload Scheduler network at each location. Note that some degree of
decentralized management is possible in a standalone Tivoli Workload Scheduler
network by mounting or sharing file systems.
v Do you have multiple physical or logical entities at a single site? Are there
different buildings, and several floors in each building? Are there different
departments or business functions? Are there different applications?
These may be reasons for choosing a multi-domain configuration. For example, a
domain for each building, department, business function, or each application
(manufacturing, financial, engineering, and so on).
v Do you run applications that will operate with Tivoli Workload Scheduler?
If they are discrete and separate from other applications, you may choose to put
them in a separate Tivoli Workload Scheduler domain.
v Would you like your Tivoli Workload Scheduler domains to mirror your
Windows domains?
This is not required, but may be useful.
v Do you want to isolate or differentiate a set of systems based on performance or
other criteria?
This may provide another reason to define multiple Tivoli Workload Scheduler
domains to localize systems based on performance or platform type.
v How much network traffic do you have now?
If your network traffic is manageable, the need for multiple domains is less
important.
v Do your job dependencies cross-system boundaries, geographical boundaries, or
application boundaries? For example, does the start of Job1 on workstation1
depend on the completion of Job2 running on workstation2?
The degree of interdependence between jobs is an important consideration when
laying out your Tivoli Workload Scheduler network. If you use multiple
domains, you should try to keep interdependent objects in the same domain.
This will decrease network traffic and take better advantage of the domain
architecture.
v What level of fault-tolerance do you require?
An obvious disadvantage of the single domain configuration is the reliance on a
single domain manager. In a multi-domain network, the loss of a single domain
manager affects only the agents in its domain.
Single domain networks can be combined with other networks, single or multiple
domain, to meet multiple site requirements. Tivoli Workload Scheduler supports
internetwork dependencies between jobs running on different Tivoli Workload
Scheduler networks.
[Figure: single domain topology. One MDM in Atlanta with agents (A) in both Atlanta and Denver; alternatively, two separate single domain networks, each with its own MDM and agents, one in Atlanta and one in Denver.]
The first example shows a single domain network. The master domain manager is
located in Atlanta, along with several agents. There are also agents located in
Denver. The agents in Denver depend on the master domain manager in Atlanta to
resolve all interagent dependencies, even though the dependencies may be on jobs
that run in Denver. An alternative would be to create separate single domain Tivoli
Workload Scheduler networks in Atlanta and Denver, as shown in the second
example.
[Figure: multiple domain topology. Tier 1: the MDM in Atlanta with its agents. Tier 2: domain managers (DM) in Denver and Los Angeles, each with agents. Tier 3: domain managers in Boulder and Aurora (under Denver) and in Burbank (under Los Angeles).]
All communications to and from the Boulder domain manager are routed through
its parent domain manager in Denver. If there are schedules or jobs in the Boulder
domain that are dependent on schedules or jobs in the Aurora domain, those
dependencies are resolved by the Denver domain manager. Most interagent
dependencies are handled locally by the lower tier domain managers, greatly
reducing traffic on the WAN (Wide Area Network).
If a domain manager fails during the production day, you can use either the Job
Scheduling Console, or the switchmgr command in the conman command line, to
switch to a backup domain manager. A Switch Manager action can be run by
anyone with start and stop access to the domain manager and backup domain
manager workstations.
A switch manager operation stops the backup manager, then restarts it as the new
domain manager, and converts the old domain manager to a fault-tolerant agent.
The identities of the current domain managers are carried forward in the
Symphony file from one processing day to the next, so any switch remains in effect
until you switch back to the original domain manager.
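For example, a minimal conman invocation (the domain name MASTERDM and the backup workstation name BDM1 are illustrative):
# switch domain management duties in domain MASTERDM to workstation BDM1
conman "switchmgr MASTERDM;BDM1"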
[Figure: multiple inbound connections architecture. Under the master domain manager, the domain manager DMA of Domain A is linked to a full-status fault-tolerant agent (FTA-FS) and to agents FTA2 (AIX) and FTA3 (Linux); a subordinate domain manager DMB manages FTA4 (Linux) and FTA5 (AIX).]
The plain arrows represent the connections that are created in a Tivoli Workload
Scheduler network without the multiple inbound connections architecture. The
dashed arrows represent the additional inbound connections that are created to the
full-status fault-tolerant agent in a domain with the multiple inbound connections
architecture.
When the fault-tolerant switch is active, the link and unlink commands issued
from the primary domain manager act both on the primary, and on the secondary
connections.
The multiple inbound connections architecture ensures that all events received and
processed by the primary domain manager are also received and processed by the
full-status fault-tolerant agent (or will be received or processed later if the events
are still in some fault-tolerant agent pobox). If the primary domain manager fails, a
user can use the switch-manager command to switch the domain manager
functionality from the primary-domain manager to a selected full-status
fault-tolerant agent.
When the switch-manager command is received, all the fault-tolerant agents in that
domain disconnect from the primary domain manager and connect to the
full-status fault-tolerant agent. During the link establishment phase, the new
manager re-synchronizes with each connecting workstation by resending and
regenerating the delta of the events that were buffered on the ftboxes, ensuring
that none of the events still in the primary domain manager message boxes are lost
or duplicated.
The full-status fault-tolerant agents are always updated with the latest status
information and all the unprocessed or partially processed events are stored in at
least two machines (the original fault-tolerant agent if it was not able to deliver it,
and the domain manager or the full-status fault-tolerant agent). The events are
then ready to be resent and reprocessed, thus eliminating the single point of
failure in the communication between the primary domain manager and the
backup domain manager.
Note: This approach applies both to top-down and bottom-up traffic. Inbound
does not depend on the direction of the traffic, but is domain-centric, and is
repeated the same way for each domain where at least one full-status
fault-tolerant agent resides.
Expanded databases
With Tivoli Workload Scheduler, Version 8.2, databases are created expanded on
the master domain manager the first time you run the Tivoli Workload Scheduler
engine.
If you are upgrading from version 7.0 or 8.1 to version 8.2, ensure that you expand
your databases before using them with Tivoli Workload Scheduler version 8.2. See
“Expanding your database” on page 33 for more information.
Workstation names
Job scheduling in a Tivoli Workload Scheduler network is distributed across multiple
computers. To accurately track jobs, schedules, and other objects, each computer is
given a unique workstation name. The names can be the same as network node
names, as long as they comply with the naming rules of Tivoli Workload
Scheduler. The maximum permitted length of a workstation name is sixteen
alphanumeric, dash (-), and underscore (_) characters starting with a letter.
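For illustration, a workstation definition might look as follows (a sketch based on the composer definition syntax; all names and attribute values are illustrative):
cpuname FTA_DENVER_01
  description "fault-tolerant agent in Denver"
  os UNIX
  node fta1.example.com
  tcpaddr 31111
  domain MASTERDM
  for maestro
    type FTA
    autolink ON
    fullstatus ON
    resolvedep ON
end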
Connector installation
The connector is a Tivoli Management Framework service that enables Job
Scheduling Console clients to communicate with the Tivoli Workload Scheduler
engine. To install the connector you must have Tivoli Management Framework
version 3.7.1 or later. The connector must be installed on a system that is a
Tivoli server or managed node.
If you want to install the connector in your Tivoli Workload Scheduler domain, but
you have no existing regions and you are not interested in implementing a full
Tivoli management environment, then you should install the Tivoli Management
Framework as a unique region (and therefore install as a Tivoli server) on each
node that will run the connector.
You can install connectors on workstations other than the master domain manager.
This allows you to view the version of the Symphony file of this particular
workstation. This may be important if you use the Job Scheduling Console to
manage the local parameters database or to submit commands directly to the
workstation rather than submitting through the master. The workstation on which
you install the Connector must be either a managed node or a Tivoli server in the
Tivoli management region. However, to manage scheduling objects in the Tivoli
Workload Scheduler database, you must install the Connector on the master
domain manager configured as a Tivoli server or managed node.
All users:
v Be aware that during the connector installation process you will be prompted for
a Tivoli Workload Scheduler instance name. This name will be displayed in the
Job Scheduling tree of the Job Scheduling Console. To avoid confusion, you
should use a name that includes the name of the fault-tolerant agent.
v If you are installing the connector on several fault-tolerant agents within a
network, keep in mind that the instance names must be unique both within the
Tivoli Workload Scheduler network and the Tivoli management region.
Enabling time zones removes the dead time in your global network. The dead
time is the period between the Tivoli Workload Scheduler start of day on the
master domain and the corresponding local time on a fault-tolerant agent in
another time zone. For example, if a master in an eastern time zone has a start of
day at 6 a.m. and initializes a fault-tolerant agent in a western time zone with a
3-hour time difference, the dead time for the fault-tolerant agent is between
3 a.m. and 6 a.m.
For a description of how the time zone works, refer to IBM Tivoli Workload
Scheduler: Reference Guide.
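Assuming the option name used in the global options file (an assumption to verify against the Reference Guide), the feature is activated as follows:
# TWShome/mozart/globalopts (excerpt)
timezone enable = yes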
Planning installations
This section describes the things you need to take into consideration before you
start to install Tivoli Workload Scheduler.
The following sections describe why you would choose one way over another.
Table 4 on page 22 lists the available installation methods and the components and
features each method installs, depending on whether you are installing on a Tier 1
or a Tier 2 platform.
Table 4. Installation methods

Silent install
    Installs: master domain manager, backup domain manager, fault-tolerant
    agent, standard agent; connector (with Tivoli Management Framework);
    Tivoli Plus Module (with Tivoli Management Framework); language packs.
    See "Performing a silent installation" on page 45.

twsinst script (UNIX platforms only)
    Installs: master domain manager, backup domain manager, fault-tolerant
    agent, standard agent. Language packs are installed automatically; they are
    not optional.

Software Distribution
    Installs: master domain manager, backup domain manager, fault-tolerant
    agent, standard agent; no optional features. See "Software packages and
    parameters" on page 51. For language packs, see "Installing language packs"
    on page 54.

Tivoli Plus Module
    See the Tivoli Workload Scheduler Plus Module User's Guide.

customize (Tier 2 platforms)
    Installs: agent. See Chapter 6, "Installing using customize," on page 57.
Note: Refer to Tivoli Workload Scheduler Release Notes for a list of supported Tier 1
and Tier 2 platforms.
Before running the installation program, decide on the type of installation you
want to perform:
v "Typical installation sequence" on page 38
v "Full installation sequence" on page 39
v "Custom installation sequence" on page 40
Note: You can also use an existing user account. Ensure, however, that this
user is a member of the Windows Administrators group.
2. Grant the TWSuser the following advanced user rights:
v Act as part of the operating system
v Increase quotas
v Log on as batch job
v Log on as a service
v Log on locally
v Replace a process level token
Installation information
The installation installs Tivoli Workload Scheduler files for the TWSuser in
TWShome, where:
TWSuser
Is the user for which Tivoli Workload Scheduler is installed. On Windows
systems, if you specify a user name that is already defined on the
workstation, the installation automatically assigns the user the necessary
rights to perform the installation. On UNIX workstations only, you must
create the user login account for which you are installing the product prior
to running the installation, if it does not already exist.
TWShome
The installation location. On Windows systems, the default installation
location is defined as system_drive\win32app\TWS\TWSuser, but you can
specify a different location. On UNIX systems, the product is installed in
the user’s home directory.
Tivoli Workload Scheduler disk 1: Disk 1 includes images for AIX, SOLARIS,
HP-UX and Windows. The images are structured as follows:
Disk 2 includes images for Linux and Tier 2 platforms. The images are structured
as follows:
For Windows platforms, the SETUP.exe file is located in the Windows folder on
IBM Tivoli Workload Scheduler Installation Disk 1.
When you copy the image of a specific platform onto the workstation for
installation using the wizard, in addition to the specific image you must also copy
the following files:
v media.inf
v SETUP.jar
v Tivoli_TWS_LP.SPB
v TWS_size.txt
TWS_$(operating_system)_$(TWS_user)^8.2.log
The file to which Software Distribution writes.
twsinst_<ID>.log
The file to which twsinst writes. <ID> indicates the type of installation
process used.
For more information about log files, refer to the IBM Tivoli Workload Scheduler
Administration and Troubleshooting manual.
Windows services
An installation on Windows operating systems registers the following services with
the Windows Service Control Manager:
v Tivoli Workload Scheduler (for TWSuser)
v Tivoli Netman (for TWSuser)
v Tivoli Token Service (for TWSuser)
v Autotrace Runtime
The Service Control Manager maintains its own user password database. Therefore,
if the TWSuser password is changed following installation, you must use the
Services applet in the Control Panel to assign the new password for the Tivoli
Token Service and Tivoli Workload Scheduler (for TWSuser).
2. From the Job Scheduling Console, unlink the target workstation from the other
workstations in the network. Otherwise, from the command line of the master
domain manager, use the following command:
conman "unlink workstationname;noask"
3. From the command line (UNIX) or command prompt (Windows), stop the
netman process as follows:
v On UNIX, run:
conman "shut;wait"
v On Windows, run the shutdown.cmd command from the Tivoli Workload
Scheduler home directory.
4. If you are updating an agent, remove (unmount) any NFS mounted directories
from the master domain manager.
5. If you are upgrading an installation that includes the Connector, ensure that
you stop the Connector as well. See the next section for reference.
To verify whether there are services and processes still running, complete the
following steps:
v On UNIX, type the command:
ps -u TWSuser
Verify that the following processes are not running: netman, mailman, batchman,
writer, jobman, JOBMAN, stageman.
v On Windows, run the command:
<drive>\unsupported\listproc.exe
Verify that the following processes are not running: netman, mailman, batchman,
writer, jobman, stageman, JOBMON, tokensrv, batchup.
Also, ensure that no system programs are accessing the directory or anything
below it, including the Command prompt and Windows Explorer.
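Putting the steps together on UNIX, a hedged sketch (the workstation and user names are illustrative):
# unlink the target workstation (run on the master domain manager)
conman "unlink workstationname;noask"
# stop netman and its child processes (run on the target workstation)
conman "shut;wait"
# confirm that no scheduler processes remain
ps -u twsuser | egrep "netman|mailman|batchman|writer|jobman|stageman"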
Backup files
The upgrade procedure on Tier 1 platforms backs up the entire Tivoli Workload
Scheduler, Version 7.0 and 8.1 installation to a directory named:
TWShome_backup_TWSuser
Note: The backup files are moved to the same file system where you originally
installed the product. A check is performed to ensure that there is enough
space on the file system; otherwise, the upgrade procedure cannot start. If
you do not have the required disk space to perform the upgrade, back up the
mozart database and all your customized configuration files, and install a
new instance of Tivoli Workload Scheduler, Version 8.2. Then, transfer the
saved files and the mozart database to the new installation.
These configuration files are often customized to meet your specific needs, and you
can use the saved copies to incorporate your changes following the upgrade. The
installation program will not overwrite any files in the mozart directory, stdlist
directory, or unison directory that were modified after Tivoli Workload Scheduler
was installed, namely, the localopts, globalopts, and tbsmadapter.config files.
If there are any other files you want to protect during an upgrade, copy or rename
them now. As an added precaution, you should also back up the entire TWShome
directory.
Note also that if you have placed any personal files or directories in the TWShome
directory, these are going to be lost during the upgrade process, since all the files
in TWShome that do not belong to the IBM Tivoli Workload Scheduler installation
are not migrated. You should back up these files or directories before starting to
upgrade, and restore them when the upgrade has completed.
Note: The .TWS82 files are no longer installed if you upgrade your installation with
the twsinst.sh script distributed with the APAR IY48550 fix (contained in
the fix pack 3 package).
Following the upgrade process, you can continue to use the configuration files you
were using in the previous installation. For example, after upgrading you can find
three copies of the global options file distributed as follows:
v TWShome/config/globalopts
This is the working copy of the new Version 8.2 global options file.
v TWShome/mozart/globalopts
This is the global options file of your previous installation, which remains in use.
v TWShome/mozart/globalopts.TWS82
This is the new Version 8.2 global options file.
You can choose one of the following options:
v Continue to use your old global options file. In this case, you need to do
nothing.
v Use the new global options file. In this case, you must:
1. Rename globalopts as globalopts.old
2. Rename globalopts.TWS82 as globalopts
v Use the new global options file with the addition of some of the values you had
in your older file. You can then do as in the preceding option and then manually
edit the file.
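For example, to adopt the new file (a sketch assuming both files reside in TWShome/mozart, as the rename steps above imply):
cd /opt/TWS/twsuser/mozart    # substitute your actual TWShome
mv globalopts globalopts.old
mv globalopts.TWS82 globalopts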
Remember that if you want to activate the new optional features available with
Version 8.2 (such as, for example, SSL), either use the new .TWS82 options file, or
manually add the corresponding options to the older version.
When you install or upgrade to Tivoli Workload Scheduler, Version 8.2, optional
features such as the Tivoli Workload Scheduler connector and the Tivoli Plus
Module both require the presence of Tivoli Management Framework, Version 3.7.1
or 4.1. These features must be installed on the Tivoli server. Upgrades on managed
nodes are not supported using the installation program, but can be performed
using the Tivoli desktop. The installation program automatically installs the Tivoli
Management Framework server if it is not detected during the installation. If an
installation is detected, the installation program verifies the version, and if a
supported version is not detected, the upgrade is not performed. You must
manually upgrade the Tivoli Management Framework to either version 3.7.1 or 4.1
and then begin the upgrade process.
Administrator roles
Verify that the Tivoli Management Framework Administrator has the
install_product authorization role assigned.
Tivoli Management Framework up and running
Verify that the Tivoli Management Framework server is up and running.
Tivoli Management Server, not managed node
The Tivoli Workload Scheduler connector and the Tivoli Plus Module must
be installed on a Tivoli Management Framework server. Upgrades on
managed nodes are not supported using the installation program, but can
be performed using the Tivoli desktop.
No prior versions of the Tivoli Workload Scheduler connector or the Tivoli Plus
Module on managed nodes in the region
To upgrade the Tivoli Workload Scheduler connector and the Tivoli Plus
Module on the Tivoli server using the installation program, no prior
versions of these features must exist on managed nodes in the Tivoli
region. In an environment where you have connectors or the Tivoli Plus
Module on the Tivoli server and managed nodes, you must perform the
upgrade in two phases: 1) use the installation program to upgrade the
Tivoli Workload Scheduler engine only on the Tivoli server and then, 2)
use the Tivoli desktop to upgrade the Connectors on the managed nodes
and Tivoli server.
To install a new instance of Tivoli Workload Scheduler perform the following steps:
1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
If you are installing on a Linux workstation, insert IBM Tivoli Workload
Scheduler Installation Disk 2.
2. Run the setup program for the operating system on which you are installing.
v On Windows platforms, the SETUP.exe file is located in the Windows
folder.
v On UNIX platforms, the SETUP.bin file is located in the root directory of
the installation CD. Only use the SETUP.bin file on the CD for a full install.
For custom or typical installs, use the system SETUP.bin located in the /bin
directory.
3. The installation wizard is launched. Select the language of the installation
wizard. Click OK.
4. Read the welcome information and click Next.
5. Read and accept the license agreement. Click Next.
Note: The password must comply with the password policy in your Local
Security Settings otherwise, the installation fails.
v On UNIX systems, this user account must be created manually before
running the installation program. Create a user with a home directory. IBM
Tivoli Workload Scheduler will be installed under the HOME directory of
the selected user.
Click Next.
8. On Windows systems, if you specified a user name that does not already
exist, an information panel is displayed. Review the information and click
Next.
9. On Windows systems only, specify the installation directory under which the
product will be installed. The directory cannot contain spaces. The directory
must be located on an NTFS file system. Click Browse to select a different
destination directory, and click Next.
10. Select the type of installation:
v Typical. See “Typical installation sequence.”
v Full. See “Full installation sequence” on page 39.
v Custom. See “Custom installation sequence” on page 40.
Click Next.
2. Review the installation settings and click Next. A progress bar indicates that the
installation has started.
3. When the installation completes, a panel displays a successful installation or
indicates the location of the log file if the installation was unsuccessful. Click
Finish.
To configure a fault-tolerant agent, see “Configuring a fault-tolerant or standard
agent” on page 74. For UNIX installations, see also “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.
Click Next.
3. The connector requires the Tivoli Management Framework. If no version of
Tivoli Management Framework is detected, you can install it now by providing the
information in Table 15. If an unsupported version of Tivoli Management
Framework is detected, exit the installation and upgrade the Tivoli Management
Framework version as described in the Tivoli Enterprise Installation
Guide.
Table 15. Tivoli Management Framework installation panel

Remote Access Account
    Type the Tivoli remote access account name that allows Tivoli programs to
    access remote file systems.
Password
    Type the password for the remote access account.
Installation Password
    Specify an installation password if you want a password to be used for
    subsequent managed node installations.
The remaining fields are optional and apply if you intend to deploy Tivoli
programs or managed nodes in your Tivoli Management Framework
environment. Click Next.
Note: On Windows, the Tivoli Desktop must be installed separately. For more
information, see the Tivoli Management Framework Planning and Installation
Guide.
4. Review the installation settings and click Next. A progress bar indicates that the
installation has started. To determine the next step to be performed, check the
following list for the situation that best describes your environment:
v An installation of Tivoli Management Framework was not required because a
supported version was detected on your workstation. Proceed to step 5.
v The Tivoli Management Framework server version 4.1 will be installed
because it was not detected. Complete the following steps:
a. You are prompted with a Locate the Installation Image window for the
location of the Tivoli Management Framework images. If you did not
copy the images to the local machine or do not have them accessible on
an NFS mounted drive, unmount the installation CD and mount the
Tivoli Management Framework CD. Navigate to the directory that
contains the images. Click OK to continue the installation. A progress bar
indicates the Tivoli server is being installed.
b. Next, you are prompted for the Tivoli job scheduling services images
required to install the connector. These images are located on the
installation CD in the TWS_CONN directory. Navigate to the directory
and click OK. The installation program installs the connector.
If you installed the connector, you must configure the security file as described in
“Updating the security file” on page 75.
Click Next.
Note: On Windows, the Tivoli Desktop must be installed separately. For more
information, see the Tivoli Management Framework Planning and Installation
Guide.
7. Review the installation settings and click Next. A progress bar indicates that the
installation has started. To determine the next step to be performed, check the
following list for the situation that best describes your environment:
v An installation of Tivoli Management Framework was not required because a
supported version was detected on your workstation. If you selected
additional languages, you may be prompted for the Tivoli Management
Framework language support images. Locate the images and click OK.
Proceed to step 8 on page 43.
v The Tivoli Management Framework server version 4.1 will be installed
because it was not detected. Depending on the optional features you selected,
you may or may not have to complete all of the following steps:
a. You are prompted with a Locate the Installation Image window for the
location of the Tivoli Management Framework images. If you did not
copy the images to the local machine or do not have them accessible on
an NFS mounted drive, unmount the installation CD and mount the
Tivoli Management Framework CD. Navigate to the directory that
contains the images. Click OK to continue the installation. A progress bar
indicates the Tivoli server is being installed.
b. Next, if you selected to install additional language packs, you are
prompted for the Tivoli Management Framework language pack images.
Navigate to the directory indicated and click OK.
c. Next, you are prompted for the Tivoli Job Scheduling Services images
required to install the connector. These images are located on the
installation CD in the TWS_CONN directory. Navigate to the directory
and click OK. The installation program installs the connector.
If you installed the connector, you must configure the security file as described in
“Updating the security file” on page 75.
Installation of the Tivoli Plus Module and the connector is described in "Custom
installation sequence" on page 40.
Before you perform a promote, ensure that all Tivoli Workload Scheduler processes
and services are stopped. For information about stopping the processes and
services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.
The following table lists the response files available and the type of installation
each performs:
Installation procedure
For a silent installation, perform the following steps:
1. Copy the relevant response file to a local directory and edit it to meet the needs
of your environment.
2. Save the file with your changes.
3. Enter the following command:
v On UNIX,
./SETUP.bin -options <local_dir>/response_file.txt
You can also use the twsinst script to upgrade from versions 7.0 and 8.1, uninstall
a version 8.2 instance, and promote an existing version 8.2 agent to different type
of agent. For information about upgrading, see “Running twsinst” on page 65.
Refer to Tivoli Workload Scheduler Release Notes for a list of supported Tier 1
platforms.
Synopsis
Show command usage and version
twsinst -u | -v
Install a new instance
twsinst -new -uname <username>
[-cputype {master | bkm_agent | ft_agent | st_agent} ]
[-thiscpu <cpuname>]
[-master <master_cpuname>]
[-port <port_number>]
[-company <company_name>]
[-inst_dir <install_dir>]
[-lang <lang_id>]
Promote an instance
twsinst -promote -uname <username>
[-cputype {master | bkm_agent | ft_agent} ]
[-inst_dir <install_dir>]
[-lang <lang_id>]
Parameters
-u Displays command usage information and exits.
-v Displays the command version and exits.
-new | -promote
Specifies the type of installation to perform:
-new A fresh installation of Tivoli Workload Scheduler, Version 8.2.
Installs an agent or master and all supported language packs.
-promote
For existing installations of Tivoli Workload Scheduler, version 8.2,
you can perform the following operations:
v Promote a standard agent to a fault-tolerant agent, master
domain manager or backup master
v Promote a fault-tolerant agent to a master domain manager or
backup master
Before you perform a promote, ensure that all Tivoli Workload
Scheduler processes and services are stopped. For information
about stopping the processes and services, see “Unlinking and
stopping Tivoli Workload Scheduler” on page 30.
-uname <username>
The name of the user for which Tivoli Workload Scheduler is installed,
updated, promoted, or uninstalled. The software is installed or updated in
this user’s home directory. Do not confuse this user name with the user
performing the installation, who is logged on as root. For a new installation,
this user account must be created manually before running the installation.
Create a user with a home directory. Tivoli Workload Scheduler will be
installed under the HOME directory of the specified user.
-cputype
Specifies the type of Tivoli Workload Scheduler agent to install. Valid
values are as follows:
v master
v bkm_agent (backup master)
v ft_agent (fault-tolerant agent, domain manager, backup domain
manager)
v st_agent (standard agent)
If not specified, the default value is ft_agent. When -cputype=master,
-master is set by default to the same value as -thiscpu.
-thiscpu <cpuname>
The name of the Tivoli Workload Scheduler workstation of this installation.
The name cannot exceed 16 characters. This name is registered in the
localopts file. If not specified, the default value is the hostname of the
workstation. Refer to the Internationalization Notes in the IBM Tivoli
Workload Scheduler Release Notes for restrictions.
-master <master_cpuname>
The workstation name of the master domain manager. This name cannot
exceed 16 characters and cannot contain spaces. This name is registered in
the globalopts file. If not specified, the default value is MASTER. Refer to
the Internationalization Notes in the IBM Tivoli Workload Scheduler Release
Notes for restrictions.
-port <port_number>
The TCP port number. This number is registered in the localopts file. If not
specified, it is set by default to 31111.
-company <company_name>
The name of the company. The company name cannot contain blank
characters. The name appears in program headers and reports. If not
specified, the default name is COMPANY.
Before you perform an upgrade or a promote of an existing CPU, ensure
that the company name does not contain blank characters. You can verify
the existence of blank characters and remove them from the company
name by modifying the related entry in the TWShome/mozart/globalopts
file.
-inst_dir <install_dir>
The directory of the Tivoli Workload Scheduler installation. This path
cannot contain blanks. If not specified, the path is set to the username home
directory.
-lang <lang_id>
The language in which the twsinst messages are displayed. If not
specified, the system LANG is used. If the related catalog is missing, the
default C language catalog is used.
Note: The -lang option is not to be confused with the Tivoli Workload
Scheduler supported language packs. By default, all supported
language packs are installed when you install using the twsinst
script.
Examples
For example, a sample twsinst script for installing a new instance of a
fault-tolerant agent workstation:
./twsinst -new -uname twsuser -cputype ft_agent -thiscpu fta -master mdm \
-port 31124 -company IBM
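Similarly, a hedged example of later promoting that fault-tolerant agent to a backup master:
./twsinst -promote -uname twsuser -cputype bkm_agent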
An SPB exists for each supported Tier 1 platform. The software package blocks are
located on IBM Tivoli Workload Scheduler Installation Disk 1 and 2, under the
directory of the platform on which you want to install. The Software Distribution
command line is located in a folder named CLI under each platform folder. An
SPB also exists to install just the language packs. The language pack software
package block is found under the root directory of IBM Tivoli Workload Scheduler
Installation Disk 1. Table 20 lists the SPBs used to install Tivoli Workload Scheduler
components and features.
Table 20. SPBs to install Tivoli Workload Scheduler

Tivoli_TWS_WINDOWS.SPB
    The software package for Windows operating systems.
Tivoli_TWS_AIX.SPB
    The software package for AIX operating systems.
Tivoli_TWS_HP.SPB
    The software package for HP-UX operating environments.
Tivoli_TWS_SOLARIS.SPB
    The software package for Solaris operating environments.
Tivoli_TWS_LINUX_I386.SPB
    The software package for Linux for Intel.
Tivoli_TWS_LINUX_S390.SPB
    The software package for Linux for OS/390.
Tivoli_TWS_LP.SPB
    The software package that installs a language pack.
Installation procedure
To perform the installation, complete the following steps:
1. Set the Tivoli environment. See “Stopping the connector” on page 31.
2. Import the software package block using the wimpspo command.
3. Install the software package block using the winstsp command.
4. Perform one of the following configuration tasks depending on the type of
agent you installed:
v “Configuring a master domain manager” on page 73
v “Configuring a fault-tolerant or standard agent” on page 74
v For UNIX installations, see also “Configuration steps for UNIX Tier 1 and 2
installations” on page 76
For complete instructions on performing these tasks, refer to wimpspo and
winstsp in the IBM Tivoli Configuration Manager, Reference Manual for Software
Distribution, and the IBM Tivoli Configuration Manager, User’s Guide for Software
Distribution.
In this example, some variables can be omitted. For example, if master = true,
the installation ignores the values of the other agent types. Therefore, the
variables st_agent, ft_agent, and bkm_agent can be omitted from the command;
even if they are specified, their values are ignored because their default values
are set to false.
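For example, a sketch modeled on the winstsp syntax shown below for the language packs (the package file, installation path, user, and subscriber name are illustrative):
# install a fault-tolerant agent on subscriber node1
winstsp -D install_dir="/opt/TWS/twsuser" -D tws_user="twsuser" -D ft_agent=true Tivoli_TWS_AIX.SPB node1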
it Italian
es Spanish
ja Japanese
de German
fr French
The following is the syntax required to install the Italian and German language packs:
winstsp -D install_dir="Installation Path" -D tws_user="UserName"
-D it=true -D de=true Tivoli_TWS_LP.SPB [subscribers...]
Synopsis
customize -new -thiscpu wkstationname -master wkstationname [-company
"companyname"] [-nolinks | -execpath pathname] [-uname username] [-port netman_port]
Description
The customize script installs or updates Tivoli Workload Scheduler. Use it to
perform the following functions:
v New Tivoli Workload Scheduler installation: Install Tivoli Workload Scheduler.
Create a components file with new entries.
v Tivoli Workload Scheduler updates: Upgrade Tivoli Workload Scheduler, if
necessary. Update entries in components file. Use it also to reset permissions to
their default values provided that the original MAESTRO.TAR file is not in the
TWShome directory.
v Details of the installation process are logged in a file named customize.log. You
can find this file in the same directory from where you run the customize script.
Arguments
-new This is a new installation.
-update
This is an update of an existing installation. Note that updating the
software will not change the type of databases in use by Tivoli Workload
Scheduler.
-thiscpu
The name of this workstation. The name can be up to sixteen
alphanumeric, dash (-), or underscore (_) characters starting with a letter.
This name must be used later to formally define the workstation in Tivoli
Workload Scheduler.
-master
The name of the master domain manager. The name can be up to sixteen
characters in length. This name must be used later to formally define the
workstation in Tivoli Workload Scheduler.
-company
The name of the company, enclosed in double quotation marks (up to 40
characters). The name appears in program headers and reports.
[-nolinks|-execpath pathname]
The link option determines the path used by customize to create links to
the Tivoli Workload Scheduler utility commands. If you include -nolinks, no
links are created. If you include -execpath, links are created from the
specified path. If the link option is omitted altogether, links are created as
follows:
/usr/bin/mat      -> twshome/bin/at
/usr/bin/mbatch   -> twshome/bin/batch
/usr/bin/datecalc -> twshome/bin/datecalc
/usr/bin/jobstdl  -> twshome/bin/jobstdl
/usr/bin/maestro  -> twshome/bin/maestro
/usr/bin/mdemon   -> twshome/bin/mdemon
/usr/bin/morestdl -> twshome/bin/morestdl
/usr/bin/muser    -> twshome/bin/muser
/usr/bin/parms    -> twshome/bin/parms
-uname
The name of the user for whom Tivoli Workload Scheduler will be
installed or updated. The name must not contain dot (.) characters. The
software is installed or updated in this user’s home directory. If omitted,
the default user name is maestro.
-port The TCP port number that Netman responds to on the local computer. It
must be an unsigned 16-bit value in the range 1-65535 (remember that the
values between 0 and 1023 are reserved for well-known services, such as
FTP, TELNET, and HTTP). The default is 31111. You can modify this value
at any time in the local options file.
Note: For the IBM-Sequent Numa platform (DYNIX®), you must also add
the -o option.
where:
cd The pathname of your CD drive.
platform
Your platform type. One of the following:
DYNIX for IBM-Sequent Numa
IRIX for SGI Irix
LINUX_PPC for SuSE Linux Enterprise Server for iSeries and
pSeries
OSF for Compaq Tru64
3. Run the customize script. The script is run from the directory where you want
the product installed.
For example, a sample customize script for a fault-tolerant workstation:
/bin/sh customize -new -thiscpu dm1 -master mdm -uname twsuser [options]
For more information on the customize arguments and more examples, refer to
The customize script on page 57.
4. The Tivoli Workload Scheduler installation process is now complete. To
configure your workstation in the network, see “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76 and “Configuration steps for Tier 2
installations” on page 77.
After you have completed the installation, run the StartUp command to start
Netman:
StartUp
During the upgrade procedure, the installation backs up all the master data and
configuration information, installs the new product code, and automatically
migrates old scheduling data and configuration information. However, it does not
migrate user files or directories placed in the TWShome directory. See “Backup
files” on page 32 for more details.
Upgrade scenarios
The following table describes upgrade scenarios that can exist in your network,
and the steps required to upgrade to Tivoli Workload Scheduler, Version 8.2 using
the installation program.
Table 23. Upgrading to Tivoli Workload Scheduler, Version 8.2

What is currently installed:
  Tivoli Workload Scheduler, Versions 7.0 or 8.1 (no Tivoli Workload
  Scheduler connector or Tivoli Plus Module installed)
Follow these steps:
  Run the installation program and perform an upgrade.
Refer to:
  "Using the installation wizard" on page 62

What is currently installed:
  Tivoli Workload Scheduler, Versions 7.0 or 8.1
  Tivoli Management Framework, Version 3.6.x
  Tivoli Workload Scheduler Connector, Versions 7.0 or 8.1
Follow these steps:
  1. Upgrade to Tivoli Management Framework, Version 4.1.
  2. Run the installation program and perform an upgrade.
Refer to:
  Tivoli Enterprise Installation Guide
  "Using the installation wizard" on page 62

What is currently installed:
  Tivoli Workload Scheduler, Versions 7.0 or 8.1
  Tivoli Management Framework, Versions 3.7.1 or 4.1
  Tivoli Workload Scheduler Connector, Versions 7.0 or 8.1
Follow these steps:
  1. Run the installation program and perform an upgrade. The wizard
     automatically upgrades the connector, provided that the connector is
     configured for the agent selected to be upgraded.
  2. Check Tivoli Management Framework prerequisites.
Refer to:
  "Using the installation wizard" on page 62
  "When a supported Tivoli Management Framework version is already
  installed" on page 33
During the upgrade procedure, the installation program backs up all the master
data and configuration information, installs the new product code, and
automatically migrates old scheduling data and configuration information.
Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. The installation program stops the Tivoli
Workload Scheduler processes if found to be running, but if you have jobs that are
currently running, the related processes must be stopped manually. For
information about stopping the processes and services, see “Unlinking and
stopping Tivoli Workload Scheduler” on page 30.
If you are upgrading an installation that includes the connector, ensure you stop
the connector before starting the upgrade process. See “Stopping the connector” on
page 31.
where:
-is:tempdir <temporary_directory>
Specifies a temporary working directory to which installation files
and directories are copied. The default value for
<temporary_directory> is the temporary directory set on the local
machine. If you use the default value, you must manually delete the
following files and directories copied to this directory after the
installation process completes:
– SETUP.jar
– TWS_size.txt
– media.inf
– Tivoli_TWS_LP.SPB
– RESPONSE_FILE directory
– TWS_CONN directory
– TWSPLUS directory
– the directory named after the operating system
If you specify a different name for the <temporary_directory>, delete the
folder after the installation process completes.
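For example, on UNIX you might launch the installation with an explicit
working directory; the path shown is illustrative:
./SETUP.bin -is:tempdir /tmp/tws82install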
3. The installation wizard is launched. Select the installation wizard language.
Click OK.
4. Read the welcome information and click Next.
5. Read and accept the license agreement. Click Next.
6. Select an existing installation of a previous release of the product from the
drop-down list for which you want to perform the upgrade. The instance can
be identified by its group name.
7. The Upgrade the selected instance option is selected by default. Click Next.
On Windows, check the user name and type the password associated with the
user.
8. Review the Tivoli Workload Scheduler user for which the upgrade will be
performed. Click Next.
9. Review the location of the installation on the workstation and click Next.
10. Select the type of agent for which you want to perform the upgrade and click
Next. Be sure that the type selected corresponds to the instance you selected
to be upgraded.
11. Review the CPU data information and click Next.
12. Review the installation settings and click Next. The upgrade has started.
13. When the installation completes, a panel reports whether it was successful.
If the installation was unsuccessful, check the log file indicated.
14. Click Finish.
Using twsinst
You can upgrade using the twsinst script. If you intend to manually back up
your previous installation, read "Backing up before running the script"
before starting the upgrade.
v The main reason for using the -nobackup option is that you prefer to back up
your previous installation yourself. This backup is important because
twsinst uses it to retrieve and transfer some of your customization settings from
the old to the new installation. Therefore, when you use this option, to
guarantee that the migration process completes the customization step correctly,
do the following before running twsinst (a consolidated sketch follows this list):
1. Stop all IBM Tivoli Workload Scheduler processes.
2. Create the backup directory needed by the customization step during the
migration process. Whether you use the -backup_dir option or the default
(see Install and promote on page 47), you must create this directory
manually.
3. Run the following command:
chmod -R 755 $BACKUP_DIR
4. Do the following to save the minimum IBM Tivoli Workload Scheduler set of
directories/files needed by the customization step in the $BACKUP_SUBDIR
directory:
a. Move the following directories/files:
– $INST_DIR/bin
– $INST_DIR/Security
b. Copy the following directories/files (use the command cp -p ... to
preserve the correct rights):
– $INST_DIR/catalog
– $INST_DIR/methods
– $INST_DIR/mozart/globalopts
– $INST_DIR/Tbsm
– $INST_DIR/Symphony
– $INST_DIR/parameters
– $INST_DIR/parameters.KEY
– $INST_DIR/Jobtable
– $INST_DIR/Jobmanrc
– $INST_DIR/localopts
– $INST_DIR/BmEvents.conf, if TBSM is customized
– $INST_DIR/MAgent.conf, if TBSM is customized
– $INST_DIR/CLEvents.conf, if TBSM is customized
– /usr/unison/components
Note: These are the files that the automatic backup copies. You
can back up any other files that you need to maintain.
5. Move $INST_DIR/../unison to the $BACKUP_DIR directory to create its backup
copy.
6. Run twsinst -update ... -nobackup to start the migration process.
7. Check that the migration process completed successfully and delete
$BACKUP_DIR.
v If you do not specify -nobackup, the backup copy is created automatically, in
the backup directory you specify with -backup_dir or, otherwise, in the default
backup directory. The process automatically:
1. Creates the TWS82bck_$TWSuser_PID.tar file under the backup directory
reading from the filelist_bck82 list. If this step fails, a message is displayed
and the process stops; if it completes successfully, the .tar file is compressed.
2. Copies all the files/directories needed for the customization step, and the
current /usr/unison/components file, to the backup directory.
If the migration process fails, the rollback procedure will restore the files from
the saved .tar file.
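The following is a minimal sketch of the manual backup sequence above, assuming
the example paths from the -backup_dir description; the installation path, user
name, and agent type are placeholders to adapt, and the TBSM-related files are
covered only by a comment. Stop all Tivoli Workload Scheduler processes first, and
run twsinst from the platform directory on the installation disk:
#!/bin/sh
# Manual backup before "twsinst -update ... -nobackup" (illustrative)
INST_DIR=/opt/TWS/TWS81                     # placeholder installation path
TWS_USER=maest81                            # placeholder TWS user
BACKUP_DIR=${INST_DIR}_backup_${TWS_USER}
BACKUP_SUBDIR=$BACKUP_DIR/TWS81

mkdir -p "$BACKUP_SUBDIR/mozart"
chmod -R 755 "$BACKUP_DIR"

# Move the directories that must exist only in the backup
mv "$INST_DIR/bin" "$INST_DIR/Security" "$BACKUP_SUBDIR"

# Copy, preserving rights, the files needed by the customization step;
# add Tbsm and the BmEvents/MAgent/CLEvents .conf files if TBSM is customized
cp -pr "$INST_DIR/catalog" "$INST_DIR/methods" "$BACKUP_SUBDIR"
cp -p "$INST_DIR/mozart/globalopts" "$BACKUP_SUBDIR/mozart"
for f in Symphony parameters parameters.KEY Jobtable Jobmanrc localopts; do
    [ -e "$INST_DIR/$f" ] && cp -p "$INST_DIR/$f" "$BACKUP_SUBDIR"
done
cp -p /usr/unison/components "$BACKUP_SUBDIR"

# Back up the unison directory, then start the migration
mv "$INST_DIR/../unison" "$BACKUP_DIR"
./twsinst -update -uname "$TWS_USER" -cputype ft_agent -nobackup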
Running twsinst
Use this procedure to upgrade an existing IBM Tivoli Workload Scheduler version
7.0 or 8.1 installation to version 8.2 on Tier 1 platforms. It also installs all
supported language packs. This procedure uses the command line method of
upgrading using the twsinst script. Refer to Tivoli Workload Scheduler Release Notes
for a list of supported Tier 1 platforms. For Tier 2 installations using the command
line, see Chapter 6, “Installing using customize,” on page 57.
Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. The installation wizard stops the Tivoli
Workload Scheduler processes if found to be running, but if you have jobs that are
currently running, the related processes must be stopped manually. For
information about stopping the processes and services, see “Unlinking and
stopping Tivoli Workload Scheduler” on page 30.
Perform the following steps to upgrade a version 7.0 or 8.1 installation to Tivoli
Workload Scheduler, Version 8.2 using the twsinst script.
1. Insert IBM Tivoli Workload Scheduler Installation Disk 1.
2. Log in as root, and change your directory to TWShome.
3. Locate the directory of the platform on which you want to run the script and
run the twsinst script as follows:
twsinst -update -uname <username>
-cputype {master | bkm_agent | ft_agent | st_agent}
[-inst_dir <install_dir>]
[-backup_dir <backup_dir>]
[-nobackup]
[-lang <lang-id>]
Note: The following options are available only if you applied the fix for APAR
IY48550 included in the fix pack 3 for Version 8.2 package:
v -backup_dir <backup_dir>
v -nobackup
-update
Upgrades an existing installation. It also installs all supported language
packs. Only installations of versions 7.0 and 8.1 of the product are supported.
Updating the software does not change the type of databases in use by
Tivoli Workload Scheduler. See Chapter 7, “Upgrading to Tivoli Workload
Scheduler,” on page 61 for more information about upgrading and see
“Running twsinst” on page 65.
-cputype
Specifies the type of Tivoli Workload Scheduler agent to install. Valid
values are as follows:
v master
v bkm_agent (backup master)
v ft_agent (fault-tolerant agent, domain manager, backup domain manager)
v st_agent (standard agent)
If not specified, the default value is ft_agent. When -cputype=master,
-master is set by default to the same value as -thiscpu.
-master <master_cpuname>
The workstation name of the master domain manager. This name cannot
exceed 16 characters and cannot contain spaces. This name is registered in
the globalopts file. If not specified, the default value is MASTER. Refer to
the Internationalization Notes in the IBM Tivoli Workload Scheduler Release
Notes for restrictions.
-inst_dir <install_dir>
The directory of the Tivoli Workload Scheduler installation. This path
cannot contain blanks. If not specified, the path is set to the user's home
directory.
-backup_dir <backup_dir>
Can be used to specify the name of an alternative directory (which must be
created manually) as the destination for the backup copy of a previous
version. This option can be used in combination with -nobackup.
If you do not specify this option when running an upgrade, the following
default value is used:
$BACKUP_DIR = $INST_DIR_backup_$TWS_USER
where:
v $INST_DIR is the IBM Tivoli Workload Scheduler installation path (the
user home directory on UNIX).
v $TWS_USER is the IBM Tivoli Workload Scheduler user name.
For example:
$INST_DIR=/opt/TWS/TWS81
$TWS_USER=maest81
$BACKUP_DIR=/opt/TWS/TWS81_backup_maest81
$BACKUP_SUBDIR=/opt/TWS/TWS81_backup_maest81/TWS81
-lang <lang_id>
The language in which the twsinst messages are displayed. If not specified,
the system LANG is used. If the related catalog is missing, the default C
language catalog is used.
Note: The -lang option is not to be confused with the Tivoli Workload
Scheduler supported language packs. By default, all supported
language packs are installed when you install using the twsinst
script.
For example, a sample twsinst script to upgrade a Tivoli Workload Scheduler,
version 7.0 fault-tolerant agent to a Version 8.2 fault-tolerant agent workstation:
./twsinst -update -uname twsuser -cputype ft_agent
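A fuller variant, using the illustrative paths from the -backup_dir description
and a manually created backup directory:
mkdir -p /opt/TWS/TWS81_backup_maest81
./twsinst -update -uname maest81 -cputype ft_agent \
    -inst_dir /opt/TWS/TWS81 -backup_dir /opt/TWS/TWS81_backup_maest81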
Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. For information about stopping the processes
and services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.
If you are upgrading an installation that includes the Connector, ensure you stop
the connector before starting the upgrade process. See “Stopping the connector” on
page 31.
Copy the response file to a local directory and edit it to meet the needs of your
particular upgrade environment. Instructions for customizing the files are included
directly in the files as commented text. To start the upgrade in silent mode, type
the following command:
v On UNIX,
./SETUP.bin -options <local_dir>/migrationInstall.txt
v On Windows,
SETUP.exe -options <local_dir>\migrationInstall.txt
Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. For information about stopping the processes
and services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.
For complete instructions on performing these tasks, refer to the IBM Tivoli
Configuration Manager, Reference Manual for Software Distribution, and the IBM Tivoli
Configuration Manager, User’s Guide for Software Distribution.
Using customize
Before you perform the upgrade, ensure that all Tivoli Workload Scheduler
processes and services are stopped. For information about stopping the processes
and services, see “Unlinking and stopping Tivoli Workload Scheduler” on page 30.
Note: Be sure to read the Tivoli Workload Scheduler Release Notes for additional
information about updating existing software.
If there are any other files you want to protect during the update, copy or rename
them now. As an added precaution, you should also back up the following:
v The TWShome directory.
v The components file (generally, /usr/unison/components)
Netman
The Netman process is automatically started at the end of installation. This
verifies that the installation process succeeded.
Note: If you have more than one version of IBM Tivoli Workload Scheduler
installed on your computer, make sure TWS_TISDIR points to the
latest one. This ensures that the most recent character set conversion
tables are used.
v For UNIX systems see step 1 on page 76 in “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.
2. Log in as TWSuser.
3. Run the composer command.
4. Add the final job stream definition to the database by running the following
command:
composer add Sfinal
If you did not use the Sfinal file provided with the installation but created a
new one, use its name in place of Sfinal.
5. Exit the composer command line.
6. Run the Jnextday job:
Jnextday
You can automate this step following the first time after installation. See
“Automating the production cycle” on page 97 for details.
7. When the Jnextday job completes, check the status of Tivoli Workload
Scheduler:
conman status
The ftbox is a cyclical message queue where each full-status agent stores the
messages it would send if it acted as a domain manager. When the queue fills up,
the new messages overwrite the old messages, from the beginning.
You can change the ftbox size as for any other message queue, using the
evtsize command.
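For example, a hypothetical resize of the ftbox queue to about 20 MB; the queue
file name and location are assumptions, and Tivoli Workload Scheduler should be
stopped before a message file is resized:
evtsize ftbox/ftbox.msg 20000000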
Note: If you have more than one version of IBM Tivoli Workload Scheduler
installed on your computer, make sure TWS_TISDIR points to the
latest one. This ensures that the most recent character set conversion
tables are used.
v For UNIX systems, see step 1 on page 76 in “Configuration steps for UNIX
Tier 1 and 2 installations” on page 76.
2. Log in to the master domain manager as TWSuser.
3. Create the fault-tolerant agent workstation definition in the Tivoli Workload
Scheduler database by using the composer command line. Open a command
line window and enter the following commands:
composer
new
4. This opens a text editor where you can create the fault-tolerant agent
workstation definition in the Tivoli Workload Scheduler database. Below is an
example workstation definition for a fault-tolerant agent. For more information
on workstation definitions, refer to the Tivoli Workload Scheduler Reference Guide.
cpuname DM1
os UNIX
node domain1
description "Fault-tolerant Agent"
for Maestro
autolink off
end
5. The newly-defined fault-tolerant agent is not recognized until the Jnextday job
runs in the final job stream. If you want to incorporate the fault-tolerant agent
sooner, you can run conman "release final". For information about defining
your scheduling objects, refer to the Tivoli Workload Scheduler Reference Guide.
6. Issue the link command from the master domain manager to link the
fault-tolerant agent and to download the Symphony file to it:
conman "link ftaname"
Note: To obtain the Administrator name, open the Tivoli desktop and
double-click Administrators. The Administrator name is the
Administrators group to which your login belongs.
5. Set the Tivoli environment:
From a UNIX command line:
v For ksh:
. /etc/Tivoli/setup_env.sh
v For csh:
source /etc/Tivoli/setup_env.csh
6. Enter the following command to stop the Connector:
v on UNIX
wmaeutil.sh ALL -stop
v on Microsoft Windows
wmaeutil.cmd ALL -stop
7. Run the makesec command to compile the temporary file into a new security
file:
makesec tempsec
For more information on the makesec and dumpsec commands, see IBM Tivoli
Workload Scheduler: Reference.
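Taken together, the typical cycle looks like this, where tempsec is the working
file name used in the step above:
dumpsec > tempsec    # export the current security definitions to a text file
vi tempsec           # edit the definitions as required
makesec tempsec      # compile them into the new security file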
In addition to the PATH, you must also set the TWS_TISDIR variable to
TWShome. The TWS_TISDIR variable enables Tivoli Workload Scheduler to
display messages in the correct language and codeset. For example,
TWS_TISDIR=/opt/maestro
export TWS_TISDIR
In this way, the necessary environment variables and search paths are set to
allow you to run commands, such as conman or composer commands, even if
you are not located in the TWShome path. Alternatively, you can use the
tws_env shell script to set up both the PATH and TWS_TISDIR variables. These
variables must be set before you can run commands. The tws_env script has
been provided in two versions:
v tws_env.sh for Bourne and Korn shell environments
v tws_env.csh for C Shell environments
See step 1 on page 73 in “Configuring a master domain manager” on page 73
for information about the tws_env script on Windows systems.
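For example, assuming TWShome is /opt/maestro, as in the TWS_TISDIR example
above:
. /opt/maestro/tws_env.sh          # Bourne and Korn shells
source /opt/maestro/tws_env.csh    # C shell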
2. To start the Tivoli Workload Scheduler network management process, Netman,
automatically as a daemon each time you boot your system, add one of the
following to the /etc/rc file, or the proper file for your system. To start Netman
only:
if [ -x twshome/StartUp ]
then
    echo "netman started..."
    /bin/su - twsuser -c "twshome/StartUp"
fi
Time zones are disabled by default on installation or update of the product. If the
timezone enable entry is missing from the globalopts file, time zones are disabled.
The following steps outline the method of implementing the time zone feature:
1. Load Tivoli Workload Scheduler.
The default setting for time zones is timezone enable = no in the globalopts
file. The database allows time zones to be specified for workstations, but not on
start and deadline times within job streams in the database. The plan creation
(Jnextday) ignores any time zones that are present in the database. You will not
be able to specify any time zones anywhere in the plan.
2. Define workstation time zones.
Set the time zone of the master workstation, of the backup master, and of any
fault-tolerant agents that are in a different time zone than the master. No time
zones will be allowed in the database for start and deadline times. No time
zones will be allowed anywhere in the plan at this point, because the timezone
enable entry in the globalopts file is still set to NO.
3. When workstation time zones have been set correctly, set timezone enable to
YES in the globalopts file. This setting, and the time zone definition in the
master workstation, will enable the Tivoli Workload Scheduler network to take
advantage of time zone support.
At this point, all users will be able to use time zones anywhere in the database,
although they should wait for the next run of Jnextday to use them on start and
deadline times. Until Jnextday runs, they will not be able to use time zones in
the plan. The next time Jnextday runs, time zones will be carried over to the
plan, and the Job Scheduling Console and the backend will allow the
specification of time zones anywhere in the plan.
4. Start using time zones on start and until times where needed.
You can now use all time zone references in the database and in the plan with
both the Job Scheduling Console and the CLI.
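As an illustration of steps 2 and 3, a workstation definition fragment carrying a
time zone, followed by the switch in the globalopts file. The workstation name,
node, and zone are placeholders; see the Tivoli Workload Scheduler Reference Guide
for the exact workstation definition syntax:
cpuname FTA1
  os UNIX
  node geneva.example.com
  timezone Europe/Rome
  for maestro
    autolink off
end
# in TWShome/mozart/globalopts on the master, once all
# workstation time zones are set:
timezone enable = yes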
In an end-to-end network, the time zone feature is always enabled and does not
need to be set in the globalopts file. Also, the value specified for the CPUTZ
keyword is used for every workstation. If it is not specified, the default value UTC
is used.
Global options
You define global options on the master domain manager and they apply to all the
workstations in the Tivoli Workload Scheduler network.
# comment
Treat everything from the pound sign to the end of the line as a comment.
automatically grant logon as batch job
This is for Windows jobs only. If set to yes, the logon users for Windows
jobs are automatically granted the right to Logon as batch job. If set to no,
or omitted, the right must be granted manually to each user or group.
Note that the right cannot be granted automatically for users running jobs
on a Backup Domain Controller (BDC), so you must grant those rights
manually.
bmmsgbase
Specify the maximum number of prompts that can be displayed to the
operator after a job abends. The default value is 1000.
bmmsgdelta
Specify an additional number of prompts for the value defined in
bmmsgbase for the case when a job is rerun after abending and the limit
specified in bmmsgbase has been reached. The default value is 1000.
batchman schedule
This is a production option that affects the operation of Batchman, which is
the production control process of Tivoli Workload Scheduler. The setting
determines the priority assigned to the job streams created for unscheduled
jobs. Enter yes to have a priority of 10 assigned to these job streams. Enter
no to have a priority of 0 assigned to these job streams.
carry job states
This is a pre-production option that affects the operation of the stageman
command. Its setting determines the jobs, by state, to be included in job
streams that are carried forward. You must enclose the job states in
parentheses, double quotation marks, or single quotation marks. The
commas can be replaced by spaces. The valid internal job states are as
follows:
ignore calendars
Enter yes to prevent user calendars from being copied into the new
production plan (Symphony file). This conserves space in the file, but
prevents the use of calendar names in date expressions. Enter no to have
user calendars copied into the new production plan. See the explanation of
the compiler command in the Tivoli Workload Scheduler Reference Guide for
more information.
master
The name of the master domain manager. This is set when you install
Tivoli Workload Scheduler.
plan audit level
Select whether to enable or disable plan auditing. Valid values are 0 to
disable plan auditing, and 1 to activate plan auditing. Auditing
information is logged to a flat file in the TWShome/audit/plan directory.
Each Tivoli Workload Scheduler workstation maintains its own log. For the
plan, only actions are logged in the auditing file, not the success or failure
of any action. For more information on this feature, see "Enabling the audit
feature."
retain rerun job name
This is a production option that affects the operation of Batchman, which is
the production control process of Tivoli Workload Scheduler. Its setting
determines whether or not jobs that are rerun with the Conman rerun
command will retain their original job names. Enter yes to have rerun jobs
retain their original job names. Enter no to permit the rerun from name to
be assigned to rerun jobs.
start Enter the start time of the Tivoli Workload Scheduler processing day in 24
hour format: hhmm (0000-2359). The default start time is 6:00 A.M., and the
default launch time of the final job stream is 5:59 A.M. If you change this
option, you must also change the launch time of the final job stream,
which is usually set to one minute before the start time.
timezone enable
Select whether to enable or disable the time zone option. Valid values are
yes to activate time zones in your network, and no to disable time zones in
the network. The time zone itself is defined in the workstation definition;
the feature is enabled by this entry in the globalopts file.
Time zones are disabled by default when installing or upgrading Tivoli
Workload Scheduler. If the timezone enable entry is missing from the
globalopts file, time zones are disabled. For more information on this
feature, refer to Enabling the time zone feature.
During the installation process, a working copy of the global options file is
installed as TWShome/mozart/globalopts.
You can customize the working copy to your needs. The following is a sample of a
global options file:
# Globalopts file on the master domain manager defines
# attributes of the Tivoli Workload Scheduler network.
#--------------------------------------------------------
company="IBM"
master=main
start=0600
history=10
carryforward=yes
ignore calendars=no
batchman schedule=no
retain rerun job name=no
centralized security=no
#
#--------------------------------------------------------
# End of globalopts.
The following table shows how the various carry forward options work together.
v If a job is running when the Jnextday job begins execution, and it is not
specified to be carried forward, the job continues to run and is placed in the
userjobs job stream for the new production day. Note that dependencies on such
jobs are not carried forward, and any resources that are held by the job are
released.
Local options
Local options are defined on each workstation, and apply only to that workstation.
# comment
Treats everything from the pound sign to the end of the line as a comment.
bm check deadline
Specify the maximum number of seconds Batchman will wait before
reporting the expiration of the deadline time for a job or job stream. The
default value is 0, which means that the deadline is not checked
and its expiration is not reported. To enable this check, specify a value in
seconds.
bm check file
Specify the minimum number of seconds Batchman will wait before
checking for the existence of a file that is used as a dependency.
bm check status
Specify the number of seconds Batchman will wait between checking the
status of an internetwork dependency.
bm check until
Specify the maximum number of seconds Batchman will wait before
reporting the expiration of an Until time for a job or job stream. Specifying a
value below the default setting (300) may overload the system. If it is set
below the value of Local Option bm read, the value of bm read is used in
its place.
bm look
Specify the minimum number of seconds Batchman will wait before
scanning and updating its production control file.
bm read
Specify the maximum number of seconds Batchman will wait for a
message in the INTERCOM.MSG message file. If no messages are in
queue, Batchman waits until the timeout expires or until a message is
written to the file.
bm stats
Specify on to have Batchman send its startup and shutdown statistics to its
standard list file. Specify off to prevent Batchman statistics from being sent
to its standard list file.
bm verbose
Specify on to have Batchman send all job status messages to its standard
list file. Specify off to prevent the extended set of job status messages from
being sent to the standard list file.
composer prompt
Specify a prompt for the composer command line. The prompt can be of
up to 10 characters in length. The default is a dash (-).
conman prompt
Specify a prompt for the conman command line. The prompt can be of up
to 8 characters in length. The default is a percent sign (%).
date format
Specify the value that corresponds to the date format you desire. The
values can be:
v 0 corresponds to yy/mm/dd
v 1 corresponds to mm/dd/yy
v 2 corresponds to dd/mm/yy
v 3 indicates usage of Native Language Support variables
The default value is 1.
db visible for gui
Specify yes to enable the Job Scheduling Console to access the
fault-tolerant agent database, while connecting to the fault-tolerant agent,
even if it is not a master domain manager. The Job Scheduling Console
user is able to see the database icons. The default value is no.
jm job table size
Specify the size, in number of entries, of the job table used by Jobman.
jm look
Specify the minimum number of seconds Jobman will wait before looking
for completed jobs and performing general job management tasks.
jm nice
For UNIX only, specify the nice value to be applied to jobs launched by
Jobman.
jm no root
For UNIX only, specify yes to prevent Jobman from launching root jobs.
Specify no to allow Jobman to launch root jobs.
jm read
Specify the maximum number of seconds Jobman will wait for a message
in the COURIER.MSG message file.
mm cache mailbox
Use this option to enable mailman to use a reading cache for incoming
messages. In that case not all messages are cached, only those not
considered essential for network consistency. The default is no.
mm cache size
Specify this option if you also use mm cache mailbox. The default is 32
events. Use the default for small and medium networks. Use larger values
for large networks. Avoid using a large value on small networks. The
maximum value is 512 (higher values are ignored).
merge stdlists
Specify yes to have all of the Tivoli Workload Scheduler control processes,
except Netman, send their console messages to a single standard list file.
The file is given the name TWSmerge. Specify no to have the processes
send messages to separate standard list files.
mm read
Specify the rate, in seconds, at which Mailman checks its mailbox for
messages. The default is 15 seconds. Specifying a lower value will cause
Tivoli Workload Scheduler to run faster but use more processor time.
mm resolve master
If this is set to "yes" (the default), the $MASTER variable is resolved at the
beginning of the production day. The host of any extended agent is
switched after the next Jnextday (long-term switch). If it is set to "no", the
$MASTER variable is not resolved at Jnextday time, which lets the host
of any extended agent be switched right after a conman switchmgr
command (short- and long-term switch).
mm response
Specify the maximum number of seconds Mailman will wait for a response
before reporting that a workstation is not responding. The response time
should not be less than 90 seconds.
mm retry link
Specify the maximum number of seconds Mailman will wait, after
unlinking from a non-responding workstation, before it attempts to link to
the workstation again.
mm sound off
Specifies how Mailman responds to a conman tellop ? command. Specify
yes to have Mailman display information about every task it is performing.
Specify no to have Mailman send only its own status.
mm unlink
Specify the maximum number of seconds Mailman will wait before
unlinking from a workstation that is not responding. The wait time should
not be less than the response time specified for the local option mm
response.
nm ipvalidate
Specify full to enable IP address validation. If IP validation fails, the
connection is not allowed. Specify none to allow connections when IP
validation fails.
nm mortal
Specify yes to have Netman quit when all of its child processes have
stopped. Specify no to have Netman keep running even after its child
processes have stopped.
nm port
Specify the TCP port number that Netman responds to on the local
computer. This must match the TCP port in the computer’s workstation
definition. It must be an unsigned 16-bit value in the range 1-65535
(remember that the values between 0 and 1023 are reserved for well-known
services such as FTP, TELNET, and HTTP).
nm read
Specify the maximum number of seconds Netman will wait for a
connection request before checking its message queue for stop and start
commands.
nm retry
Specify the maximum number of seconds Netman will wait before retrying
a connection that failed.
nm SSL port
The port used to listen for incoming SSL connections. This value must
match the one defined in the secureaddr attribute in the workstation
definition in the IBM Tivoli Workload Scheduler database. It must be
different from the nm port local option that defines the port used for
normal communications. At installation time, the default value is 0. When
the CPU is created and SSL authentication is enabled, the port number
assumes the value 31113.
Notes:
1. On Windows, place this option also in the localopts file.
2. If you install multiple instances of Tivoli Workload Scheduler version
8.2 on the same computer, set all SSL ports to different values.
3. If you plan not to use SSL, set the value to 0.
SSL auth mode
The behavior of Tivoli Workload Scheduler during an SSL handshake is
based on the value of the SSL auth mode option as follows:
caonly Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. Information contained in the certificate is not examined. It is
the default. If you do not specify the SSL auth mode option, or you
define a value that is not valid, the caonly value is used.
string Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the string specified in the SSL auth string option.
See page 91.
cpu Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the name of the CPU that requested the service.
SSL key
In SSL authentication, the name of the private key file. See “Setting strong
authentication and encryption” on page 139 for reference.
SSL key pwd
In SSL authentication, the name of the file containing the password for the
stashed key. See “Setting strong authentication and encryption” on page
139 for reference.
SSL random seed
The pseudo random number file used by OpenSSL on some platforms.
Without this file, SSL authentication may not work properly. See “Setting
strong authentication and encryption” on page 139 for reference.
stdlist width
Specify the maximum width of the Tivoli Workload Scheduler console
messages. You can specify a column number in the range 1 to 255 and lines
are wrapped at or before the specified column, depending on the presence
of imbedded carriage control characters. Specify a negative number or zero
to ignore line width. On UNIX, you should ignore line width if you enable
system logging with the syslog local option.
syslog local
Enables or disables Tivoli Workload Scheduler system logging for UNIX
computers only. Specify -1 to turn off system logging for Tivoli Workload
Scheduler. Specify a number from 0 to 7 to turn on system logging and
have Tivoli Workload Scheduler use the corresponding local facility
(LOCAL0 through LOCAL7) for its messages. Specify any other number to
turn on system logging and have Tivoli Workload Scheduler use the USER
facility for its messages. For more information, see “Tivoli Workload
Scheduler console messages and prompts” on page 96.
sync level
Specify the rate at which Tivoli Workload Scheduler synchronizes
information written to disk. This option affects all mailbox agents and is
applicable to UNIX workstations only. Values can be:
low Allows the operating system to handle it.
medium
Flushes the updates to disk after a transaction has completed.
high Flushes the updates to disk every time data is entered.
During the installation process, a working copy of the local options file is installed
as TWShome/localopts, unless you specified a non-default location for Netman. In
that case there are two copies of the localopts file, one in TWShome and one in
Netmanhome. Any options pertaining to Netman must be updated in the localopts
file in Netmanhome.
You can customize the working copy to your needs. For example:
#
# Tivoli Workload Scheduler localopts file defines attributes of this Workstation.
#
#----------------------------------------------------------------------------
# Attributes of this Workstation:
#
thiscpu = <THIS_CPU>
merge stdlists = yes
stdlist width = 80
syslog local = -1
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler batchman process:
#
bm check file = 120
bm check status = 300
bm look = 15
bm read = 10
bm stats = off
bm verbose = off
bm check until = 300
bm check deadline = 600
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler jobman process:
#
jm job table size = 1024
jm look = 300
jm nice = 0
jm no root = no
jm read = 10
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler mailman process:
#
mm response = 600
mm retrylink = 600
mm sound off = no
mm unlink = 960
mm cache mailbox = no
mm cache size = 32
mm resolve master = yes
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler netman process:
#
nm mortal = no
nm port = <TCP_PORT>
nm read = 10
nm retry = 800
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for Tivoli Workload Scheduler writer process:
#
wr read = 600
wr unlink = 120
wr enable compression = no
#
#----------------------------------------------------------------------------
If you intend to administer scheduling objects in this manner, you must create
shares for the directories in the master domain manager and define a set of local
options on the other computers.
In TWShome\network\ :
cpudata
cpudata.KEY
userdata
userdata.KEY
In TWShome\mozart\ :
calendars
calendars.KEY
job.sched
job.sched.KEY
jobs
jobs.KEY
mastsked
mastsked.KEY
prompts
prompts.KEY
resources
resources.KEY
In TWShome :
parameters
parameters.KEY
The files are created as needed by Tivoli Workload Scheduler. If they do not exist,
you can simply set the local options to the shared directory as described below.
Note that each option can be set to a conventional name (drive:\share) or a UNC
name (\\node\share). If set to a conventional name, the Tivoli Workload Scheduler
user must explicitly connect to the share. If set to a UNC name, an explicit
connection is not required. The local options are:
mozart directory
Defines the name of the master’s shared mozart directory.
unison network directory
Defines the name of the master’s shared directory.
parameters directory
Defines the name of the master’s shared TWShome directory.
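For example, using UNC names so that no explicit connection is required; the
node and share names are illustrative:
mozart directory = \\master\mozart
unison network directory = \\master\unison
parameters directory = \\master\twshome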
If an option is not set or does not exist, the Tivoli Workload Scheduler programs
attempt to open the database files on the local computer. See "Setting local
options" for more information.
If an option is not set or does not exist, the Tivoli Workload Scheduler Composer
program attempts to access the database files on the local computer.
Setting syslog local to a positive number defines the syslog facility used by Tivoli
Workload Scheduler. For example, specifying 4 tells Tivoli Workload Scheduler to
use the local facility LOCAL4. After doing this, you must make the appropriate
entries in the /etc/syslog.conf file, and reconfigure the syslog daemon. To use
LOCAL4 and have the Tivoli Workload Scheduler messages sent to the system
console, enter the following line in /etc/syslog.conf:
local4 /dev/console
To have the Tivoli Workload Scheduler error messages sent to the maestro and root
users, enter the following:
local4.err maestro,root
Note that the selector and action fields must be separated by at least one tab. After
modifying /etc/syslog.conf, you can reconfigure the syslog daemon by entering the
following command:
kill -HUP `cat /etc/syslog.pid`
console command
You can use the conman console command to set the Tivoli Workload Scheduler
message level and to direct the messages to your terminal. The message level
setting affects only Batchman and Mailman messages, which are the most
numerous. It also sets the level of messages written to the standard list file or files
and the syslog daemon. The following command, for example, sets the level of
Batchman and Mailman messages to 2 and sends the messages to your computer:
console sess;level=2
Messages are sent to your computer until you either run another console
command, or exit conman. To stop sending messages to your terminal, you can
enter the following conman command:
console sys
The final job stream is placed in production every day, and results in running a job
named Jnextday prior to the start of a new day. The job performs the following
tasks:
1. Links to all workstations to ensure that the master domain manager has been
updated with the latest scheduling information.
2. Runs the schedulr command to select job streams for the new day’s production
plan.
3. Runs the compiler command to compile the production plan.
4. Runs the reptr command to print pre-production reports.
5. Stops Tivoli Workload Scheduler.
6. Runs the stageman command to carry forward uncompleted job streams, log
the old production plan, and install the new plan.
7. Starts Tivoli Workload Scheduler for the new day.
8. Runs the reptr and the rep8 commands to print post-production reports for the
previous day.
9. Runs the logman command to log job statistics for the previous day.
In the Tivoli Workload Scheduler library, the terms final and Jnextday are used
when referring to both the Tivoli-supplied versions, and any user-supplied
equivalents.
When creating your own job stream, model it after the one supplied by Tivoli. If
you choose to do so, consider the following:
v If you choose to change the way stageman generates log file names, remember
that reptr and logman must use the same names.
v If you would like to print the pre-production reports in advance of a new day,
you can split the Jnextday job into two jobs (see the sketch after this list). The
first job runs schedulr, compiler, and reptr. The second job stops Tivoli Workload
Scheduler, runs stageman, starts Tivoli Workload Scheduler, and runs reptr and
logman. The first job can then be scheduled to run at any time prior to the end
of day, while the second job is scheduled to run just prior to the end of day.
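A sketch of the split, using only the commands from the task list above; command
arguments and the job stream definitions themselves are omitted:
# Job 1: pre-production, any time before the end of day
schedulr
compiler
reptr
# Job 2: turnover, just before the end of day
conman stop
stageman
conman start
reptr
logman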
See “Configuring a master domain manager” on page 73 for information about
adding the final job stream to the database.
You can use the date option with the compiler to specify today’s date or the
date of the day you are trying to recreate. This option may be necessary if you
have job streams that contain date-sensitive input parameters. The scheddate
parameter is keyed off the date specified with the compiler command. If you
do not specify a date, it defaults to the date entered with the schedulr
command.
4. Run console manager to stop Tivoli Workload Scheduler processes:
conman stop @!@
5. Run stageman to create the new symphony file:
stageman
6. Run console manager to start Tivoli Workload Scheduler processes:
conman start
In the production environment, jobs are launched under the direction of the
Production Control process Batchman. Batchman resolves all job dependencies to
ensure the correct order of execution, and then issues a job launch message to the
Jobman process.
Each of the processes launched by Jobman, including the configuration scripts and
the jobs, retains the user name recorded in the Logon field of the job. Submitted
jobs retain the submitting user's name. To have the jobs run with
the user's environment, be sure to add the user's .profile environment to the local
configuration script.
SHELL_TYPE standard|user|script
v If set to standard the first line of the jcl file is
read to determine which shell to use to run
the job. If the first line does not start with #!,
then /bin/sh is used to run the local
configuration script or $UNISON_JCL.
Commands are echoed to the job’s standard
list file.
v If set to user, the local configuration script or
$UNISON_JCL is run by the user’s login
shell ($UNISON_SHELL). Commands are
echoed to the job’s standard list file.
v If set to script (default), the local
configuration script or $UNISON_JCL is run
directly, and commands are not echoed
unless the local configuration script or
$UNISON_JCL contains a set -x command.
Any other setting is interpreted as standard.
USE_EXEC yes|no
v If set to yes, the job or the user's local
configuration script is run using the exec
command, thus eliminating an extra process.
If a sub-shell is requested (see SHELL_TYPE),
the shell being used is executed. In
other words, once the command or script is
run, the jobmanrc process no longer exists.
For this reason, USE_EXEC is forced to no
when the MAIL_ON_ABEND feature is
enabled, because in that case the process
must return to jobmanrc to allow the
post-processing. This option is therefore
overridden if MAIL_ON_ABEND is also set to yes.
v Any other setting is interpreted as no, in
which case the job or local configuration
script is run by another shell process.
If you intend to use a local configuration script, it must, at a minimum, run the
job’s script file ($UNISON_JCL). The Tivoli-supplied standard configuration script,
jobmanrc, runs your local configuration script as follows:
$EXECIT $USE_SHELL $TWSHOME/.jobmanrc "$UNISON_JCL" $IS_COMMAND
The value of USE_SHELL is set to the value of the jobmanrc SHELL_TYPE variable
(see Table 29 on page 101). IS_COMMAND is set to yes if the job was scheduled or
submitted using the docommand construct. EXECIT is set to exec if the variable
USE_EXEC is set to yes (see Table 29 on page 101), otherwise it is null. All the
variables exported into jobmanrc are available in the .jobmanrc shell, however,
variables that are defined, but not exported, are not available.
echo "**************************************"
echo "* Doing some pre-processing activity *"
echo "**************************************"
echo ""
USER_DEFINED_VARIABLES=some_value
export USER_DEFINED_VARIABLES
echo "************************************"
echo "* Launching the TWS command/script *"
echo "************************************"
echo ""
eval "$UNISON_JCL"
RETURN_CODE=$?
echo "*******************************************"
echo "* Executing post processing into .jobmanrc*"
echo "*******************************************"
echo ""
if [ $RETURN_CODE -gt 0 ]
then
echo "Return code from commnad = " $RETURN_CODE
echo "Setting Return Code to 0"
RETURN_CODE=0
fi
exit $RETURN_CODE
##### End of .jobmanrc script #####
A Tivoli managed node runs the same software that runs on a Tivoli server. From a
managed node you can run the Tivoli desktop and directly manage other Tivoli
managed resources. A managed node has its own oserv service that runs
continuously and communicates with the oserv service on the Tivoli server. A
managed node also maintains its own client database. The primary difference
between a Tivoli server and a managed node is the size of the database. Also, you
cannot have a managed node without a Tivoli server in a Tivoli management
region.
user sm logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job cpu=$thiscpu access=@
schedule cpu=$thiscpu access=@
resource cpu=$thiscpu access=@
prompt access=@
file access=@
calendar access=@
cpu cpu=$thiscpu access=@
parameter cpu=$thiscpu
~ name=r@ access=@
end
###########################################################
Suppose that you want these two users to use the Job Scheduling Console. After
you install the Tivoli Management Framework software, from the Tivoli desktop
you create an additional Tivoli administrator (beside the default one created by the
installation process) for each user. You call one mastersm and the other sm. You
then add the respective definitions so that the Security file looks like this:
###########################################################
# Security File
###########################################################
# (1) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
# MASTER DOMAIN MANAGER
user mastersm cpu=$master + logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job access=@
schedule access=@
resource access=@
prompt access=@
file access=@
calendar access=@
cpu access=@
parameter name=@ ~ name=r@ access=@
userobj cpu=@ + logon=@ access=@
end
###########################################################
# (2) TIVOLI ADMINISTRATOR DEFINITION FOR MAESTRO OR ROOT USERS
# LOGGED IN ON THE MASTER DOMAIN MANAGER
user mastersm cpu=$framework + logon=mastersm
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
job access=@
schedule access=@
resource access=@
prompt access=@
file access=@
calendar access=@
cpu access=@
parameter name=@ ~ name=r@ access=@
userobj cpu=@ + logon=@ access=@
end
###########################################################
###########################################################
# (3) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON ANY
# WORKSTATION OTHER THAN THE MASTER DOMAIN MANAGER.
user sm logon=maestro,root
begin
# OBJECT ATTRIBUTES ACCESS CAPABILITIES
# ---------- ------------ ----------------------
The new security file grants the same privileges to the original users also on the
Job Scheduling Console.
Furthermore, on the Tivoli server you could add two new logins for Tivoli
administrator mastersm:
maestro@rome.production.com
maestro@london.production.com
So that any authorized user who logs into rome or london as maestro will acquire
the privileges granted to mastersm.
If you install on the backup master a Tivoli server different from the one on the
master, be sure to enable the same entries as on the master, that is:
v Start
v Plan and database audit levels
v time zone enable global option
Note: It is not necessary to edit all the global options for each workstation in
the network. The nodes will acknowledge the new master at Jnextday
time when the master initializes them.
5. Define the workstation in the Tivoli Workload Scheduler network. Either use
the Composer cpuname command or the Create Workstations window in the
Job Scheduling Console. Be sure you define it with Resolve Dependencies and
Full Status enabled. See the IBM Tivoli Workload Scheduler Reference Guide or the
IBM Tivoli Job Scheduling Console User’s Guide.
The fault-tolerant agent must have Full Status and Resolve Dependencies enabled
in its workstation definition.
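For example, a definition fragment with both attributes enabled; the workstation
name and node are illustrative, and fullstatus and resolvedep are the keywords
that correspond to these options:
cpuname BKM1
  os UNIX
  node backup.example.com
  for maestro
    fullstatus on
    resolvedep on
end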
Before mounting the databases, make certain that the file system containing the
required directories has been included in the /etc/exports file on the master
workstation. If you choose to control the availability of the file system, make the
appropriate entries in the /etc/hosts or /etc/netgroup file in the master.
The mount point on the fault-tolerant agent must be the same as the master. For
example, on the fault-tolerant agent:
cd twshome
/etc/mount mastername:mozart mozart
/etc/mount mastername:../unison/network ../unison/network
To have the databases mounted automatically, you can enter the mounts in the
/etc/checklist file.
If you use this solution, be aware that the parameters database in the fault-tolerant
agent is not the master’s but a local copy. This becomes an issue if you use parms
as part of the job definitions (in the task or login name), because at Jnextday time
all the parameters referenced with the ^ (caret) symbol in job definitions are
expanded from the parameters database in the master. You have two possible
workarounds for this issue:
v Create a script that uploads and changes the parameter values from the
fault-tolerant agent to the master. Run this script just before Jnextday. Making
Jnextday dependent on it will make sure that the parms are uploaded
successfully before Jnextday sets up production for the following day.
v On the master cpu, move the parameters database to the mozart directory. Create
a link from the master to the home directory. Next, on the fault-tolerant agent
create a link from the parameters database in mozart to twshome.
If you wish to enable the time zone feature in the Job Scheduling Console, you also
need to edit the local globalopts file on the fault-tolerant agent to set the timezone
enable entry.
General
Tivoli Workload Scheduler/NetView is a NetView application that gives network
managers the ability to monitor and diagnose Tivoli Workload Scheduler networks
from a NetView management node.
The agents also generate SNMP traps to inform the manager of asynchronous
events, such as job abends, stuck schedules, and restarted scheduler processes.
Although polling and traps are functionally independent, the information that
accompanies a trap can be correlated with symbol state changes. If, for example, a
scheduled job abends, the symbol for the workstation changes color, and a job
abend trap is logged in the NetView event log. By scanning the log, you can
quickly isolate the problem and take the appropriate action.
The muser process runs commands issued by a NetView user, and updates the
user’s map. An muser is started for each NetView user whose map has the Tivoli
Workload Scheduler/NetView application activated.
Types of information
The manager collects two types of information by polling its agents:
Job scheduling
Indicates the status of jobs and schedules in a Tivoli Workload Scheduler
network. The information is provided by a single agent, usually running on the
master of the network. Alternatively, the information can be provided by an
agent running on a fault-tolerant agent that has been configured as a backup
master.
Monitored process
Indicates the status of scheduler critical processes on a workstation (netman,
mailman, batchman, jobman, mailman servers, writers, and all extended agent
connections). This information is provided only by local agents running on each
workstation.
Definitions
Cpu and Node
The terms cpu and node are used interchangeably to mean a workstation.
Management node
A NetView node that runs a Tivoli Workload Scheduler/NetView manager
(mdemon). In NetView 6.x and later, the management node functions can be
distributed across a server and one or more clients.
Managed nodes
The nodes that comprise a Tivoli Workload Scheduler network and that have
the Tivoli Workload Scheduler/NetView agent (magent) running.
General Requirements
The basic configuration requirements are:
v Management nodes (server and clients) must have Tivoli Workload Scheduler
installed, but need not be members of managed scheduler networks.
v Tivoli Workload Scheduler/NetView managers (mdemon) run exclusively on
AIX. Tivoli Workload Scheduler/NetView agents (magent) run on AIX, HP-UX,
and Solaris.
v There must be at least one managed node in a managed scheduler network. To
obtain accurate job scheduling information, this should be either the master, or
the backup master, that is, a fault-tolerant agent with fullstatus on and
resolvedep on in its definition.
Configuration
The NetView management node can be a member of a managed Tivoli Workload
Scheduler network or not.
When you plan for your configuration, you should consider the following:
v If you choose to use a master workstation, or a backup master, as the NetView
management node, it can also have a Tivoli Workload Scheduler/NetView agent
that provides job scheduling status for its Tivoli Workload Scheduler network.
This minimizes Tivoli Workload Scheduler/NetView manager-agent traffic when
polling. However, you must also consider the additional workload imposed by
NetView management, particularly in large networks and those with several
NetView applications, which can noticeably slow down Tivoli Workload
Scheduler processing.
v Choosing an existing Tivoli Workload Scheduler standard agent as a NetView
management node, or making the current NetView management node a
standard agent, has the advantage of not overloading the master, and of letting
you use Tivoli Workload Scheduler on that node to schedule NetView
management tasks, such as clearing out log files.
Using customize on the management node and version 6.x NetView Server: On
the management node and version 6.x NetView server, customize does the
following:
1. Performs the steps listed above for managed nodes.
2. Registers the Tivoli Workload Scheduler/NetView mdemon process so that it is
started by NetView.
3. Adds Unison Software’s enterprise traps.
4. Copies the Tivoli Workload Scheduler fields, application, MIB, and help files
into the appropriate directory structure.
Using customize on version 6.x NetView clients: On version 6.x NetView clients,
customize does the following:
1. Performs the steps listed above for managed nodes.
2. Copies the Tivoli Workload Scheduler/NetView application and help files into
the appropriate directory structure. The Application Registration File (ARF)
installed by Tivoli Workload Scheduler/NetView uses NetView’s nvserver_run
facility to launch the application on the server.
Reviewing changes: If you want to review the changes made by customize before
installing any files, run the script with the -noinst option, as follows:
/bin/sh TWShome/OV/customize -noinst
The files are created in the /tmp directory. As customize runs, it tells you where to
move the files to complete the installation. Alternatively, you can remove the /tmp
files and rerun customize without the -noinst option.
The syntax of the customize script is:
/bin/sh TWShome/OV/customize [-uname name] [-prev3] [-noinst] [-client] [-manager host]
where:
[-uname name]
IBM Tivoli Workload Scheduler user name.
-prev3 Include this option if your version of NetView is prior to version 3.
-noinst
Do not overwrite existing NetView configuration files. See “Reviewing
changes” on page 114.
-client For NetView version 6.x and later, include this option for management
clients.
-manager
The host name of the management node. For NetView version 6.x and
above, this is the host name of the NetView server. This is required for
managed nodes and NetView clients. Do not use this option on the
management node or NetView server.
Installing
The installation procedure consists of the following two steps:
v Installing on managed nodes and on NetView clients
v Installing on the management node or NetView server
Installing on managed nodes and NetView clients: The management node can
also be a managed node. For the management node or NetView server, skip this
step and proceed to "Installing on the management node or NetView server".
1. Make certain that no Tivoli Workload Scheduler processes are running. If
necessary, issue a conman shutdown command.
2. Log in as root.
3. For managed nodes, including those that are also NetView clients that are not
used to manage Tivoli Workload Scheduler, run the customize script as follows:
/bin/sh <TWShome>/OV/customize -manager host
Setting up
Follow these steps:
1. Determine the user who will be managing Tivoli Workload Scheduler with
NetView.
a. On each managed node, enter the host name of the management node in
the user’s $HOME/.rhosts file.
b. To allow the user to run certain scheduler commands, you must add a user
definition to the scheduler security file. You can, for example, give this user
the same capabilities as the default maestro user. For more information about
Tivoli Workload Scheduler security, refer to the IBM Tivoli Workload
Scheduler Planning and Installation book.
2. On the management node, run NetView.
3. Bring up the map you intend to use.
a. From the File menu, select Describe Map....
b. When the Map Description dialog box appears, select Maestro-Unison
Software(c) from the Configurable Applications list, and click Configure
For This Map....
c. When the Configuration dialog box appears, click True under Enable
Maestro for this map.
d. Click Verify.
e. Click OK to close the Configuration dialog box.
f. Click OK to close the Map Description dialog box.
4. If you want to use the MIB browser, load the Tivoli Workload Scheduler MIB as
follows:
a. From the Options menu, select Load/Unload MIBs:SNMP... .
b. When the Load/Unload MIB dialog box appears, click Load.
c. When the Load MIB From File dialog box appears, enter
/usr/OV/snmp_mibs/Maestro.mib
You are now ready to use Tivoli Workload Scheduler/NetView. On the Tivoli
Workload Scheduler master, issue a conman start@ command to restart Tivoli
Workload Scheduler in the network. This can be done in NetView on the Tivoli
Workload Scheduler Network submap as follows:
1. Select all of the nodes in the network.
2. From the Tools menu, select Tivoli Workload Scheduler, and then select Start.
For more information about Tivoli Workload Scheduler workstation status, see
“Configuring workstation status in NetView” on page 127.
Menu actions
To use Tivoli Workload Scheduler/NetView menu actions, select Tivoli Workload
Scheduler from the Tools menu. These actions are also available from the object
context menu by right clicking a symbol.
Conman
Run the conman command line program on the selected Tivoli Workload
Scheduler workstations. Running the program on a workstation other than
the master permits you to run conman commands for that workstation
only. For information about conman commands, see IBM Tivoli Workload
Scheduler Reference. For an extended agent, conman is run on its host.
Start Issue a conman start command for the selected workstations. By default,
the command for this action is:
remsh %H %P/bin/conman ’start %c’
Down (stop)
Issue a conman stop command for the selected workstations. By default,
the command for this action is:
remsh %H %P/bin/conman ’stop %c’
StartUp
Run the Tivoli Workload Scheduler StartUp script on the selected
workstations. By default, the command for this action is:
remsh %h %P/StartUp
Notes:
1. Run Rediscover each time you change the Tivoli Workload Scheduler
workstation configuration.
2. As written, the remsh commands require that the NetView user be able to log
in on other nodes without a password prompt.
All of the listed events can result in SNMP traps generated by the Tivoli Workload
Scheduler/NetView agents. Whether or not traps are generated is controlled by
options set in the configuration files of the agents. See “Tivoli Workload
Scheduler/NetView configuration files” on page 123 for more information.
The Additional Actions column in IBM Tivoli Workload Scheduler Reference lists the
actions available to the operator for each event. The actions can be initiated by
selecting Additional Actions from the Options menu, then selecting an action
from the Additional Actions panel.
Note: You must have the appropriate Tivoli Workload Scheduler security access to
perform the chosen action.
Table 32. Tivoli Workload Scheduler/NetView events

Trap #  Name                 Description                          Additional Actions
1 *     uTtrapReset          The magent process was restarted.    na
51      uTtrapProcessReset   A monitored process was restarted.   na
                             This event is reported by default
                             in the BmEvents.conf file.
52 *    uTtrapProcessGone    A monitored process is no longer     na
                             present.
53 *    uTrapProcessAbend    A monitored process abended.         na
54 *    uTrapXagentConnLost  The connection between a host and    na
                             xagent has been lost.
101 *   uTtrapJobAbend       A scheduled job abended.             Show Job, Rerun Job, Cancel Job
102 *   uTtrapJobFailed      An external job is in the error      Show Job, Rerun Job, Cancel Job
                             state.
103     uTtrapJobLaunch      A scheduled job was launched         Show Job, Rerun Job, Cancel Job
                             successfully.
104     uTtrapJobDone        A scheduled job finished in a state  Show Job, Rerun Job, Cancel Job
                             other than ABEND.
105 *   uTtrapJobUntil       A scheduled job's UNTIL time has     Show Job, Rerun Job, Cancel Job
                             passed; it will not be launched.
To obtain critical process status, the manager polls all of its agents. For job
scheduling status, the manager determines which of its agents is most likely to
have the required information, and polls only that agent. The choice is made in the
following order of precedence:
1. The agent running on the Tivoli Workload Scheduler master.
2. The agent running on a Tivoli Workload Scheduler backup master.
3. The agent running on any Tivoli Workload Scheduler fault-tolerant agent that
has fullstatus on in its workstation definition.
You may choose to disable some or all of the Tivoli Workload Scheduler/NetView
traps for the following reasons:
1. To reduce network traffic.
2. To avoid confusion on the part of other NetView users by limiting the number
of logged events.
For more information about the Unison Software’s enterprise-specific traps and
their variables, see “Re-configuring enterprise-specific traps” on page 127.
Event 51 causes mailman and batchman to report the fact that they were
restarted. Events 1, 52, and 53 are not valid in this file (see “The MAgent
configuration file”).
For example, to remove the writers for all workstations with IDs starting
with SYS, enter:
-SYS@:WRITER
For more information, review the man pages for ovaddobj(8) and lrf(4). See also
Configuring agents in NetView.
The enterprise-specific traps and their positional variables are listed in Table 33.
Trap descriptions are listed in Table 32.

Table 33. Enterprise-specific traps and positional variables (excerpt)

Among the variables of the job traps (101 to 105) are:
2    Software version
4    Workstation name on which the job runs
12   An event timestamp, expressed as yyyymmddhhmmss00 (that is, year, month,
     day, hour, minute, second; the hundredths are always zeroes)

201 * uTtrapGlobalPrompt
2    Prompt number
3    Prompt text

202 * uTtrapSchedPrompt
1    Workstation name of the schedule
2    Schedule name
3    Prompt number
4    Prompt text

203 * uTtrapJobPrompt
2    Schedule name
3    Job name
5    Prompt number
6    Prompt text

251, 252 * uTtrapLinkDropped, uTtrapLinkBroken
1    The "to" workstation name
2    Link state, indicated by an integer: 1 (unknown), 2 (down due to an
     unlink), 3 (down due to an error), 4 (up)
mdemon synopsis
mdemon [-timeout secs] [-pmd] [-port port] [-retry secs]
where:
-timeout
The rate at which agents are polled, expressed in seconds. The default is 60
seconds. See “Manager polling rate” on page 126 and “Configuring agents
in NetView” on page 126 for more information about changing the rate.
-pmd This option causes mdemon to run under the NetView pmd (Port Map
Demon). Otherwise, it must be run manually. This option is included by
default in the /usr/OV/lrf/Mae.mgmt.lrf file.
-port For HP-UX agents only. This identifies the port address on the managed
nodes on which the HP-UX agents will respond. The default is 31112.
-retry The period of time mdemon will wait before trying to reconnect to a
non-responding agent. The default is 600 seconds.
magent synopsis
The syntax of magent is:
magent -peers host [, host [,...]] [-timeout secs ] [-notraps] [-port port]
where:
-peers For HP-UX agents only. This defines the hosts (names or IP addresses) to
which the agent will send its traps. The default is 127.0.0.1 (loopback).
For AIX agents, the /etc/snmpd.conf file must be modified to define the
hosts to which the agent will send its traps. To add another host, for
example, duplicate the existing trap line and change the host name:
# This file contains Tivoli Workload Scheduler
# agent registration.
#
trap public host1 1.3.6.1.4.1.736 fe
trap public host2 1.3.6.1.4.1.736 fe
-timeout
The rate at which the agent checks its monitored processes, expressed in
seconds. The default is 60 seconds.
-notraps
If included, the agent will not generate traps.
-port For HP-UX agents only. This defines the port address on which this agent
responds. The default is 31112.
General
IBM Tivoli Business Systems Manager is an object-oriented systems management
application that provides monitoring and event management of resources,
applications and subsystems within the enterprise with the objective of providing
continuous availability. Monitoring Tivoli Workload Scheduler daily plans with
IBM Tivoli Business Systems Manager provides quick determination of problems
that can jeopardize the successful and timely completion of the schedules.
Integrating Tivoli Workload Scheduler with IBM Tivoli Business Systems Manager
provides the ability to manage schedules from a unique business systems
perspective.
[Figure 7. Common listener agent architecture: key jobs and key job streams in the
TWS daily plan are forwarded through the common listener agent (clagent) to the
TBSM Common Listener and console.]
Note: The CommonListener agent process must be stopped and restarted every
day. A daily bulk discovery is needed in order for the integration to work
properly.
For bulk discovery, the TBSM database is populated with all the key objects that
are in the daily plan of Tivoli Workload Scheduler. For every key object in the
plan, information about its type, properties, and status is forwarded by the
scheduler to the common listener interface of TBSM. The common listener then
populates the TBSM database with this data.
When a new key job or job stream is added to the Tivoli Workload Scheduler plan,
a delta discovery add function forwards related information to IBM Tivoli Business
Systems Manager. Likewise, when a key object attribute is changed, a delta
discovery modify function notifies IBM Tivoli Business Systems Manager.
Marking a job or job stream as key causes IBM Tivoli Business Systems Manager to
be notified every time there is a status change or a property change.
In any case, notification of certain critical events is forwarded for all jobs and job
streams, regardless of whether they have the key flag or not. Table 34 lists these
events.
Table 34. Forwarded events for key and non-key scheduling objects

Scheduler event ID   Type        TBSM Event Type   Severity
TWS_Job_Abend        Batch       Exception         Critical
TWS_Sched_Abend      BatchCycle  Exception         Critical
TWS_Job_Cancel       Batch       Message           Warning
TWS_Sched_Cancel     BatchCycle  Message           Warning
TWS_Job_Failed       Batch       Exception         Critical
You can mark a job or job stream as key in both the database and the daily plan.
In the database, jobs can be marked as key when they are inserted into a job stream.
To mark a job or a job stream as key, you can use one of the following:
v The keywords KEYSCHED (for job streams) and KEYJOB (for jobs), as the
following example shows.
SCHEDULE cpu1#sched1
ON mo,tu...
AT 0100
KEYSCHED
:cpu1#myjob1 KEYJOB
END
cpu1#myjob1
SCRIPTNAME"C:\my.bat"
STREAMLOGON"twsusr1"
RECOVERY STOP
v The job and job stream properties windows in the Job Scheduling Console.
The job properties windows display, both at the database and plan levels, an Is
Monitored Job check box that you mark to specify IBM Tivoli Business Systems
Manager monitoring. In the job properties window at the plan level you can
change this setting for the specific job instance.
The job stream properties windows display, at the database and plan levels, the
following two items:
– An Is Monitored Job Stream check box that you mark to specify that the job
stream is to be monitored by IBM Tivoli Business Systems Manager. You can
change this setting at the job stream instance level.
– A read-only field named Contains Monitored Job that indicates if any of the
jobs comprised in the job stream have been marked as key.
You can choose the key flag as a filtering criterion when you run lists of jobs or
job streams in the database or in the plan.
You must have the Java Runtime Environment (JRE) Version 1.3 installed on every
workstation that will be running the common listener agent.
After installing IBM Tivoli Workload Scheduler, do the following to configure the
common listener agent:
1. Enter the following to configure the environment:
v On Windows:
PATH=%JRE-DIR%\jre\bin;%JRE-DIR%\jre\bin\classic;%PATH%
v On Solaris:
LD_LIBRARY_PATH=$JRE-DIR/jre/lib/sparc/:$JRE-DIR/jre/lib/sparc/client:$LD_LIBRARY_PATH
v On AIX:
LD_LIBRARY_PATH=$JRE-DIR/jre/bin/classic/:$JRE-DIR/jre/bin/:$LD_LIBRARY_PATH
LIBPATH=$JRE-DIR/jre/bin/classic/:$JRE-DIR/jre/bin/:TWShome/Tbsm/
TbsmAdapter:$LIBPATH
export LIBPATH
AIXTHREAD_SCOPE=S
AIXTHREAD_MUTEX_DEBUG=OFF
AIXTHREAD_RWLOCK_DEBUG=OFF
AIXTHREAD_COND_DEBUG=OFF
v On Linux™:
LD_LIBRARY_PATH=$JRE-DIR/jre/bin/classic:$JRE-DIR/jre/bin:$LD_LIBRARY_PATH
where $JRE-DIR is the installation path of the Java Runtime Environment 1.3.1.
Note: After setting the path, restart IBM Tivoli Workload Scheduler before you
start the common listener agent.
2. Configure the <TWShome>/Tbsm/TbsmAdapter/adapter.config file to enable the
common listener agent to connect with the CommonListener of Tivoli Business
Systems Manager. Set the following parameters:
loggingmode.default = false
transport.local.ip.address = adapterhost
transport.request.address = adapterhost.INSTR.QM+INSTR.Q
transport.response.address = adapterhost.INSTR.QM+INSTR.Q
transport.server.ip.address = serverhost
where:
adapterhost
Is the full hostname of the computer running the common listener
agent.
serverhost
Is the full host name of the computer where the CommonListener is
installed.
These files are configured with default values when you run customize. You can
change the defaults if you have different preferences about which events are
reported to IBM Tivoli Business Systems Manager and how they must be reported.
Mailman, batchman, and the common listener agent read from these configuration
files when they are initialized. If you make any changes in these files, you must
restart those processes for the changes to take effect.
Customizing BmEvents.conf
You can change the following parameters:
OPTIONS=MASTER|OFF
If the value is OFF, only local events are forwarded to IBM Tivoli Business
Systems Manager. If the value is MASTER, the events occurring in the
attached workstations are also forwarded. The value should be MASTER for
the master workstation and OFF for the other workstations.
LOGGING=ALL|KEY
If the value is KEY, the key flag filter mechanism is enabled. Events are
sent only for key jobs and job streams (refer to Table 35 to find the events
filtered by the key flag and to Table 34 for a list of events that are
forwarded regardless of whether the job or job stream is key). If the
value is ALL, events are also sent for non-key jobs and job streams.
EVENTS = <n>
The list of events to report to IBM Tivoli Business Systems Manager. By
default, all the events listed in Table 35 on page 137 are sent. If you do
not want to report some of them, list only the event numbers that you
want to send.
MSG = <msg_path>
The name and the path of the message file where batchman and mailman
will write the events for the common listener agent to read. You can add
more than one message file.
SYMEVNTS=YES|NO
If the value is YES, batchman reports job status events immediately after
the generation of the plan. This is valid only for key-flagged jobs when
LOGGING=KEY. If the value is NO, no report is given. NO is the default
value.
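For example, a BmEvents.conf for a master workstation might look like the
following sketch. The event list and the message file path are illustrative
values chosen for this example, not product defaults:
OPTIONS=MASTER
LOGGING=KEY
SYMEVNTS=YES
EVENTS=101 102 103 104 151 152 251 252
MSG=/opt/maestro/Tbsm/event.msg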
Customizing ClEvents.conf
You can change the following parameters:
EVENTS = <n>
The list of events to report to IBM Tivoli Business Systems Manager. By
default, all the events listed in Table 35 on page 137 are sent. If you do
not want to report some of them, list only the event numbers that you
want to send.
MSG = <msg_path>
The name and the path of the message file where batchman and mailman
will write the events for the common listener agent to read. The name and
path of this file must match one of the output files you specified in
BmEvents.conf.
RETRYINTERVAL=<seconds>
The amount of time after which the cl_agent tries to reconnect to the Tivoli
Business Systems Manager adapter.
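A matching ClEvents.conf sketch could then reference the same message file; the
values again are illustrative, and MSG must match one of the files named in
BmEvents.conf:
EVENTS=101 102 103 104 151 152 251 252
MSG=/opt/maestro/Tbsm/event.msg
RETRYINTERVAL=120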
Table 35. Tivoli Workload Scheduler events for Tivoli Business Systems Manager

Event  Type                Description                                      Key flag filter enabled
101    mstJobAbend         Job abended                                      No
102    mstJobFailed        Job is in error status                           No
103    mstJobLaunch        Job launched                                     Yes
104    mstJobDone          Job finished                                     Yes
105    mstJobUntil         Job until time expired                           No
106    mstJobSubmit        Job submitted                                    No
107    mstJobCancel        Job has been canceled                            No
108    mstJobReady         Job is in ready status                           Yes
109    mstJobHold          Job is in hold status                            Yes
110    mstJobRestart       Job is in restart status                         Yes
111    mstJobCant          Batchman failed to stream the job                No
112    mstJobSuccp         Job in succ-pending status                       Yes
113    mstJobExtrn         Job is in extern status                          Yes
114    mstJobIntro         Job is in intro status                           Yes
115    mstJobStuck         Job is in stuck status                           Yes
116    mstJobWait          Job is in wait status                            Yes
117    mstJobWaitd         Job is in wait-deferred status                   Yes
118    mstJobSched         Job is in sched status                           Yes
119    mstJobModify        Job property modified                            Yes
120    mstJobLate          Job is late                                      Yes
121    mstJobUntilCont     Job until time expired with continue option      Yes
122    mstJobUntilCanc     Job until time expired with cancel option        Yes
151    mstSchedAbend       Schedule abended                                 No
152    mstSchedStuck       Schedule is in stuck state                       No
153    mstSchedStart       Schedule started                                 Yes
154    mstSchedDone        Schedule finished                                Yes
155    mstSchedUntil       Schedule until time expired                      Yes
156    mstSchedSubmit      Schedule submitted                               No
157    mstSchedCancel      Schedule has been canceled                       No
158    mstSchedReady       Schedule is in ready status                      Yes
159    mstSchedHold        Schedule is in hold status                       Yes
160    mstSchedExtrn       Schedule is in extern status                     Yes
161    mstSchedCnPend      Schedule is in cancel-pending status             Yes
165    mstSchedUntilCanc   Schedule until time expired with cancel option   Yes
201    mstGlobalPrompt     Global prompt displayed                          No
202    mstSchedPrompt      Local prompt for a schedule is displayed         Yes
203    mstJobPrompt        Local prompt for a job is displayed              Yes
204    mstJobRecovPrompt   Prompt for a recovery job is displayed           No
251    mstLinkDropped      Communication link between workstations closed   No
252    mstLinkBroken       Communication link between workstations failed   No
351    mstDomainMgrSwitch  Domain manager has been switched                 No
You or your IBM Tivoli Workload Scheduler administrator can decide whether or
not to implement SSL support across your network.
The SSL protocol is based on a private and public key methodology and is the
highest security standard currently in use for Internet communications. The
connection security it provides has three basic properties:
v The connection is private. Encryption is used after an initial handshake to define
a secret key. Symmetric cryptography is used for data encryption (for example,
DES and RC4).
v The peer’s identity can be authenticated using asymmetric, or public key,
cryptography (for example, RSA and DSS).
v The connection is reliable. Message transport includes a message integrity check
that uses a keyed MAC. Secure hash functions, such as SHA and MD5, are used
for MAC computations.
The Tivoli Workload Scheduler administrator will have to define which of the
workstations in the network need to establish SSL sessions with the other
workstations. The information indicating if a connection should be SSL or not can
be configured in the workstation definition in the IBM Tivoli Workload Scheduler
database from either the command line or the IBM Tivoli Workload Scheduler Job
Scheduling Console.
Tivoli Workload Scheduler SSL support provides basic functions in the area of
certificate management, such as the insertion and management of keys and
certificates into the key-store and trust-chain.
To provide SSL security for a domain manager attached to IBM Tivoli Workload
Scheduler for z/OS in an end-to-end connection, you have to configure the OS/390
Cryptographic Services System SSL in the IBM Tivoli Workload Scheduler code
that runs in the OS/390 USS UNIX shell in the IBM Tivoli Workload Scheduler for
z/OS server address space. See the IBM Tivoli Workload Scheduler for z/OS
documentation to learn how to accomplish this task.
Public-key cryptography uses two different cryptographic keys: a private key and a
public key. Public-key cryptography is also known as asymmetric cryptography,
because you can encrypt information with one key and decrypt it with the
complement key from a given public-private key pair. Public-private key pairs are
simply long strings of data that act as keys to a user’s encryption scheme. The user
keeps the private key in a secure place (for example, encrypted on a computer’s
hard drive) and provides the public key to anyone with whom the user wants to
communicate. The private key is used to digitally sign all secure communications
sent from the user while the public key is used by the recipient to verify the
sender’s signature.
The CA verifies the signature on the certificate request to ensure that the
certificate request was not modified while in transit between the requester and the
CA and that the requester is in possession of the private key that matches the
public key in the certificate request.
The CA is also responsible for some level of identification verification. This can
range from very little proof to absolute assurance of the owner’s identity. A
particular kind of certificate is the self-signed digital certificate. It contains:
v The owner’s distinguished name
v The owner’s public key
v The owner’s own signature over these fields
A root CA’s digital certificate is an example of a self-signed digital certificate.
Users can also create their own self-signed digital certificates for testing purposes.
The following example describes in a simplified way how digital certificates are
used in establishing an SSL session. In this scenario, Appl1 is a client process that
opens an SSL connection with the server application Appl2:
1. Client Appl1 asks to open an SSL session with server Appl2.
2. Appl2 starts the SSL handshake protocol. It encrypts the information using its
private key and sends its certificate with the matching public key to Appl1.
3. Appl1 receives the certificate from Appl2 and verifies that it is signed by a
trusted certification authority. If the certificate is signed by a trusted CA, Appl1
can optionally extract some information (such as the distinguished name)
stored in the certificate and performs additional authentication checks on
Appl2.
4. At this point, the server process has been authenticated, and the client process
starts its part of the authentication process; that is, Appl1 encrypts the
information using its private key and sends the certificate with its public key to
Appl2.
5. Appl2 receives the certificate from Appl1 and verifies that it is signed by a
trusted certification authority.
6. If the certificate is signed by a trusted CA, Appl2 can optionally extract some
information (such as the distinguished name) stored in the certificate and
performs additional authentication checks on Appl1.
options file of the workstations a name or a list of names that must match
the contents of the distinguished name (DN) field in the certificate before a
connection request is accepted.
Use a Certificate for each domain
In this case, repeatedly follow the previous steps to create and install more
private keys and signed certificates, one for each domain in the IBM Tivoli
Workload Scheduler network. Then, configure each workstation to accept a
connection only with partners that have a particular string in the DN field
of their certificate.
Use a Certificate for each CPU
In this case, repeatedly follow the previous steps to create and install on
each workstation a different private key and a signed certificate and to add
a Trusted CA list containing the CA that signed the certificate. Then,
configure each workstation to accept a connection only with partners that
have their workstation name (as specified in the Symphony file) recorded
in the DN field of their certificate.
If you use SSL authentication for your enterprise’s Tivoli Workload Scheduler
network and not for outside Internet commerce, you can act as your own
certification authority to create and sign the certificates. To be your own CA, you
must create a CA key and a self-signed CA certificate. After that, you have the
power to sign any certificate request with your own CA signature and to create
valid certificates. To use a self-signed certificate, you must add your CA
certificate to the Trusted CA list of every workstation that will use SSL. This
capability lets customers act as a real certification authority, without needing to
request certificates from a commercial CA.
In Tivoli Workload Scheduler, SSL support is available for the fault-tolerant agents
only (including the master and the domain managers), but not for the extended
agents. If you want to use SSL authentication for a workstation that runs an
extended agent, you must specify this parameter in the definition of the host
workstation of the extended agent.
v The private key and the corresponding certificate that identify the workstation in
an SSL session.
v The list of certificate authorities that can be trusted by the workstation.
These actions will produce the following files that you will install on the
workstation(s):
v A private key file (for example, TWS.key). Protect this file so that it cannot be
stolen and used to impersonate the workstation. You should save it in a
directory that allows read access to the TWS user of the workstation, such as
TWShome/ssl/TWS.key.
v The corresponding certificate file (for example, TWS.crt). You should save it in a
directory that allows read access to the TWS user of the workstation, such as
TWShome/ssl/TWS.crt.
v A file containing a pseudo-random generated sequence of bytes. You can save it
in any directory that allows read access to the TWS user of the workstation, such
as TWShome/ssl/TWS.rnd.
Note: In the following steps, the file names TWS and TWSca are sample names.
You can use your own names, but keep the same file extensions.
1. Choose a workstation as your CA root installation.
2. Type the following command from the SSL directory to initialize the pseudo
random number generator; otherwise, subsequent commands may not work.
v On UNIX:
$ openssl rand -out TWS.rnd -rand ./openssl 8192
v On Windows:
$ openssl rand -out TWS.rnd -rand ./openssl.exe 8192
3. Type the following command to create the CA private key:
$ openssl genrsa -out TWSca.key 1024
4. Type the following command to create a self-signed CA Certificate (X.509
structure):
$ openssl req -new -x509 -days 365 -key TWSca.key -out TWSca.crt -config ./openssl.cnf
Now you have a certification authority that you can use to trust all of your
installations. If you wish, you can create more than one CA.
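To check the result, you can display the subject and validity dates of the new CA
certificate with a standard OpenSSL command:
$ openssl x509 -in TWSca.crt -noout -subject -dates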
On each workstation, perform the following steps to create a private key and a
certificate:
1. Type the following command from the SSL directory to initialize the pseudo
random number generator; otherwise, subsequent commands may not work.
v On UNIX:
$ openssl rand -out workstationname.rnd -rand ./openssl 8192
v On Windows:
$ openssl rand -out workstationname.rnd -rand ./openssl.exe 8192
2. Type the following command to create the private key (this example shows
triple-DES encryption):
$ openssl genrsa -des3 -out workstationname.key 1024
Then, save the password that was requested to encrypt the key in a file named
workstationname.pwd.
Note: Verify that file workstationname.pwd contains just the characters in the
password. For instance, if you specified the word maestro as the
password, your workstationname.pwd file should not contain any CR or
LF characters at the end (it should be 7 bytes long).
3. Type the following command to save your password, encoding it in base64 into
the appropriate stash file:
$ openssl base64 -in workstationname.pwd -out workstationname.sth
You can then delete file workstationname.pwd.
4. Type the following command to create a certificate signing request (CSR):
$ openssl req -new -key workstationname.key -out workstationname.csr
-config ./openssl.cnf
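The signing of the request by the CA is not shown in the steps above, but Table 36
expects a workstationname.crt file. Assuming you are acting as your own CA as
described earlier, a sketch of the signing step using the standard OpenSSL x509
utility is:
$ openssl x509 -req -days 365 -in workstationname.csr -CA TWSca.crt
-CAkey TWSca.key -CAcreateserial -out workstationname.crt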
The table below summarizes which of the files created during the process have to
be set as values for the workstation’s local options.
Table 36. Files for Local Options

Local option         File
SSL key              workstationname.key
SSL certificate      workstationname.crt
SSL key pwd          workstationname.sth
SSL ca certificate   TWSca.crt
SSL random seed      workstationname.rnd
The following example shows a workstation definition that includes the security
attributes:
cpuname ENNETI3
os WNT
node apollo
tcpaddr 30112
secureaddr 32222
for maestro
autolink off
fullstatus on
securitylevel on
end
nm SSL port
The port used to listen for incoming SSL connections. It must match the
value set in the secureaddr attribute of the workstation definition, and
must be different from the nm port local option that defines the port used
for normal communications. The default value is 31113.
Notes:
1. On Windows, place this option also on TWShome/localopts.
2. If you install multiple instances of Tivoli Workload Scheduler 8.2 on the
same computer, set all SSL ports to different values.
3. If you plan not to use SSL, set the value to 0.
SSL auth mode
The behavior of Tivoli Workload Scheduler during an SSL handshake is
based on the value of the SSL auth mode option as follows:
caonly Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. Information contained in the certificate is not examined. It is
the default. If you do not specify the SSL auth mode option, or you
define a value that is not valid, the caonly value is used.
string Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the string specified in the SSL auth string option.
cpu Tivoli Workload Scheduler checks the validity of the certificate and
verifies that the peer certificate has been issued by a recognized
CA. It also verifies that the Common Name (CN) of the Certificate
Subject matches the name of the CPU that requested the service.
SSL auth string
Used in conjunction with SSL auth mode when the string value is
specified. The SSL auth string (1 to 64 characters) is used to verify the
certificate validity. If you do not specify an SSL auth string value in
conjunction with the SSL auth mode, the default string value is tws.
SSL key
The name of the private key file. The default path in the localopts file is
TWShome/ssl/filename.key.
SSL certificate
The name of the local certificate file. The default path in the localopts file
is TWShome/ssl/filename.crt.
SSL key pwd
The name of the file containing the password for the stashed key. The
default path in the localopts file is TWShome/ssl/filename.sth.
SSL CA certificate
The name of the file containing the trusted CA certificates required for
authentication. The CAs in this file are also used to build the list of
acceptable client CAs passed to the client when the server side of the
connection requests a client certificate. This file is the concatenation, in
order of preference, of the various PEM-encoded CA certificate files. The
default path in the localopts file is TWShome/ssl/filename.crt.
SSL certificate chain
The name of the file that contains the concatenation of the PEM-encoded
certificates of certification authorities which form the certificate chain of the
workstation’s certificate. This parameter is optional. If it is not specified,
the file specified for the SSL CA certificate is used.
SSL random seed
The pseudo random number file used by OpenSSL on some platforms.
Without this file, SSL authentication may not work properly. The default
path in the localopts file is TWShome/ssl/filename.rnd.
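Putting these options together, a hypothetical SSL section of the localopts file for
the ENNETI3 workstation defined earlier might look like the following sketch; the
file names follow Table 36 and the paths and values are illustrative only:
nm SSL port =32222
SSL key =TWShome/ssl/ENNETI3.key
SSL certificate =TWShome/ssl/ENNETI3.crt
SSL key pwd =TWShome/ssl/ENNETI3.sth
SSL ca certificate =TWShome/ssl/TWSca.crt
SSL random seed =TWShome/ssl/ENNETI3.rnd
SSL auth mode =cpu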
All Tivoli Workload Scheduler workstations whose links with the corresponding
domain manager, or with any domain manager in the Tivoli Workload Scheduler
hierarchy right up to the master, cross a firewall should be defined with the
behindfirewall attribute.
For all workstations with behindfirewall set to ON, the start wkstation, stop
wkstation, and showjobs commands are sent following the domain hierarchy,
instead of making the master or the domain manager open a direct connection to
the workstation. This significantly improves security.
This attribute works for multiple nested firewalls as well. For extended agents, you
can specify that an extended agent CPU is behind a firewall by setting the
behindfirewall attribute to ON, on the host workstation. The attribute is read-only
in the plan; to change it in the plan, the administrator must update it in the
database and then recreate the plan.
See the Tivoli Workload Scheduler Reference Guide for details on how to set this
attribute.
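As an illustration only (see the reference guide for the authoritative syntax), a
workstation definition with the attribute set might look like the following sketch;
the workstation and node names are hypothetical:
cpuname FTA001
os UNIX
node fta001.example.com
tcpaddr 31111
for maestro
autolink off
fullstatus on
behindfirewall on
end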
Uninstalling the product will not remove files created after Tivoli Workload
Scheduler was installed, nor files that are open at the time of uninstall. If you do
not need those files, you have to remove them manually. Refer to Tivoli Workload
Scheduler Administration and Troubleshooting for information about removing Tivoli
Workload Scheduler manually.
The uninstall program does not remove the Tivoli Workload Scheduler connector,
the Tivoli Plus Module, or the Tivoli Management Framework. Refer to the Tivoli
Workload Scheduler Job Scheduling Console User’s Guide for uninstalling the connector,
the Tivoli Workload Scheduler Plus Module User’s Guide for uninstalling the Tivoli
Plus Module, and the Tivoli Enterprise Installation Guide for uninstalling the Tivoli
Management Framework.
Follow these steps to uninstall Tivoli Workload Scheduler using the twsinst script.
1. Before uninstalling, stop any existing Tivoli Workload Scheduler processes that
were created on this particular system. If you have jobs that are currently
running, wait for them to complete before proceeding.
Note: The -lang option is not to be confused with the Tivoli Workload
Scheduler supported language packs. By default, all supported language
packs are installed when you install using the twsinst script.
For example, a sample twsinst script that uninstalls the Tivoli Workload Scheduler,
Version 8.2 engine, originally installed for user named twsuser:
./twsinst -uninst -uname twsuser
The software package block that installs language packs can also be removed in
this way. Refer to Tivoli Workload Scheduler Troubleshooting and Error Messages for
information about removing Tivoli Workload Scheduler manually.
3. Edit the /usr/unison/components file. If it contains entries for multiple Tivoli
Workload Scheduler accounts (product groups), edit the file by deleting the lines
that correspond to the instance you want to remove.
For example, suppose that /usr/unison/components contains the following entries:
maestro 7.0 /opt/maestro DEFAULT
maestro 8.2 /data/maestro8/maestro TWS_maestro8_8.2
If you plan to remove the Tivoli Workload Scheduler instance located under
/opt/maestro, then delete the first line. If /usr/unison/components contains only the
instance that you want to remove, then delete the entire file.
4. Remove the links, if applicable, to the /usr/bin directory. The installation process
gives you the option to link Tivoli Workload Scheduler executables to a
common directory. The default is /usr/bin. Remove the following files:
v /usr/bin/maestro
v /usr/bin/mat
v /usr/bin/mbatch
v /usr/bin/datecalc
v /usr/bin/morestdl
v /usr/bin/jobstdl
v /usr/bin/parms
5. Finally, remove the entire Maestro/Tivoli Workload Scheduler account with the
following command:
rm -rf <twshome>
If your system startup command was modified to include a conman "start" or a
<twshome>/StartUp command, you must also remove those entries.
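For example, on many UNIX systems you can locate such entries with a command
of roughly this form; startup file locations vary by platform, so the paths shown
are only examples:
egrep -n 'conman|StartUp' /etc/rc* 2>/dev/null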
Open the pdf versions of documents and use the built-in search facilities of Adobe
Reader to find the information you require.
To access some documents you need to register (indicated by a key icon beside the
document title). To register, select the document you wish to look at, and when
asked to sign in follow the links to register yourself. There is also a FAQ available
on the advantages of registering.
Obtaining fixes
A product fix might be available to resolve your problem. You can determine what
fixes are available for your IBM software product by checking the product support
Web site:
1. Go to the IBM Software Support Web site
(http://www.ibm.com/software/support).
2. Under Products A - Z, select your product name: select "I" for IBM and then
scroll down to the product entries that commence "IBM Tivoli Workload
Scheduler". These open product-specific support sites.
3. Under Self help, follow the link to Search all Downloads, where you will find
a list of fixes, fix packs, and other service updates for your product.
4. Click the name of a fix to read the description and optionally download the fix.
To receive weekly e-mail notifications about fixes and other news about IBM
products, follow these steps:
1. From the support page for any IBM product, click My support in the panel on
the left of the page.
2. If you have already registered, skip to the next step. If you have not registered,
click register in the upper-right corner of the support page to establish your
user ID and password.
3. Sign in to My support.
4. On the My support page, select the Edit profile tab and click Subscribe to
email. Select a product family and check the appropriate boxes for the type of
information you want.
5. Click Update.
6. For e-mail notification for other product groups, repeat Steps 4 and 5.
For more information about types of fixes, see the Software Support Handbook
(http://techsupport.services.ibm.com/guides/handbook.html).
Before contacting IBM Software Support, your company must have an active IBM
software maintenance contract, and you must be authorized to submit problems to
IBM. The type of software maintenance contract that you need depends on the
type of product you have:
v For IBM distributed software products (including, but not limited to, Tivoli,
Lotus, and Rational products, as well as DB2 and WebSphere products that run
on Windows or UNIX operating systems), enroll in Passport Advantage in one
of the following ways:
– Online: Go to the Passport Advantage Web page
(http://www.lotus.com/services/passport.nsf/WebDocs/
Passport_Advantage_Home) and click How to Enroll
– By phone: For the phone number to call in your country, go to the IBM
Software Support Web site
(http://techsupport.services.ibm.com/guides/contacts.html) and click the
name of your geographic region.
v For IBM eServer software products (including, but not limited to, DB2 and
WebSphere products that run in zSeries, pSeries, and iSeries environments), you
can purchase a software maintenance agreement by working directly with an
IBM sales representative or an IBM Business Partner.
If you are not sure what type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States or, from other countries, go to
the contacts page of the IBM Software Support Handbook on the Web
(http://techsupport.services.ibm.com/guides/contacts.html) and click the name of
your geographic region for phone numbers of people who provide support for
your location.
Severity 1 Critical business impact: You are unable to use the program,
resulting in a critical impact on operations. This condition
requires an immediate solution.
Severity 2 Significant business impact: The program is usable but is
severely limited.
Severity 3 Some business impact: The program is usable with less
significant features (not critical to operations) unavailable.
Severity 4 Minimal business impact: The problem causes little impact on
operations, or a reasonable circumvention to the problem has
been implemented.
v By phone: For the phone number to call in your country, go to the contacts page
of the IBM Software Support Handbook on the Web
(http://techsupport.services.ibm.com/guides/contacts.html) and click the name
of your geographic region.
If the problem you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. Whenever possible,
IBM Software Support provides a workaround for you to implement until the
APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
IBM product support Web pages daily, so that other users who experience the
same problem can benefit from the same resolutions.
For more information about problem resolution, see Searching knowledge bases
and Obtaining fixes.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
display the following acknowledgement:
This product includes software developed by the University of California,
Berkeley and its contributors.
LICENSE ISSUES
The OpenSSL toolkit stays under a dual license, i.e. both the conditions of the
OpenSSL License and the original SSLeay license apply to the toolkit. See below
for the actual license texts. Actually both licenses are BSD-style Open Source
licenses. In case of any license issues related to OpenSSL please contact
openssl-core@openssl.org.
OpenSSL license
Copyright (c) 1998-2001 The OpenSSL Project. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
display the following acknowledgment: ″This product includes software
developed by the OpenSSL Project for use in the OpenSSL Toolkit.
(http://www.openssl.org/)″.
4. The names ″OpenSSL Toolkit″ and ″OpenSSL Project″ must not be used to
endorse or promote products derived from this software without prior written
permission. For written permission, please contact openssl-core@openssl.org.
5. Products derived from this software may not be called ″OpenSSL″ nor may
″OpenSSL″ appear in their names without prior written permission of the
OpenSSL Project.
6. Redistributions of any form whatsoever must retain the following
acknowledgment: ″This product includes software developed by the OpenSSL
Project for use in the OpenSSL Toolkit (http://www.openssl.org/)″
THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ″AS IS″ AND ANY
EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
OpenSSL PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
This library is free for commercial and non-commercial use as long as the
following conditions are adhered to. The following conditions apply to all code
found in this distribution, be it the RC4, RSA, lhash, DES, etc., code; not just the
SSL code. The SSL documentation included with this distribution is covered by the
same copyright terms except that the holder is Tim Hudson (tjh@cryptsoft.com).
Copyright remains Eric Young’s, and as such any Copyright notices in the code are
not to be removed. If this package is used in a product, Eric Young should be
given attribution as the author of the parts of the library used. This can be in the
form of a textual message at program startup or in documentation (online or
textual) provided with the package.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the copyright notice, this list of
conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
display the following acknowledgement: ″This product includes cryptographic
software written by Eric Young (eay@cryptsoft.com)″ The word ’cryptographic’
can be left out if the routines from the library being used are not cryptographic
related :-).
4. If you include any Windows specific code (or a derivative thereof) from the
apps directory (application code) you must include an acknowledgement: ″This
product includes software written by Tim Hudson (tjh@cryptsoft.com)″
THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ″AS IS″ AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
The licence and distribution terms for any publically available version or
derivative of this code cannot be changed. i.e. this code cannot simply be copied
and put under another distribution licence [including the GNU Public Licence.]
Permission to use, copy, modify, and distribute this software and its documentation
for any purpose and without fee is hereby granted, provided that the above
copyright notice appear in all copies and that both that copyright notice and this
permission notice appear in supporting documentation, and that the name of CMU
not be used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.
Redistribution and use in source and binary forms are permitted provided that the
above copyright notice and this paragraph are duplicated in all such forms and
that any documentation, advertising materials, and other materials related to such
distribution and use acknowledge that the software was developed by the
University of California, Berkeley. The name of the University may not be used to
endorse or promote products derived from this software without specific prior
written permission. THIS SOFTWARE IS PROVIDED ″AS IS″ AND WITHOUT
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT
LIMITATION, THE IMPLIED WARRANTIES OF MERCHANT[A]BILITY AND
FITNESS FOR A PARTICULAR PURPOSE.
Toolkit of Tivoli Internationalization Services
This package contains open software from Alfalfa Software. Use of this open
software in IBM products has been approved by the IBM OSSC, but there is a
requirement to include the Alfalfa Software copyright and permission notices in
product supporting documentation. This could just be in a readme file shipped
with the product. Here is the text of the copyright and permission notices:
Permission to use, copy, modify, and distribute this software and its documentation
for any purpose and without fee is hereby granted, provided that the above
copyright notice appear in all copies and that both that copyright notice and this
permission notice appear in supporting documentation, and that Alfalfa’s name not
be used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.
Trademarks
IBM, Tivoli, the Tivoli logo, Tivoli Enterprise Console, AIX, AS/400, BookManager,
Dynix, OS/390, NetView, and Sequent are trademarks or registered trademarks of
International Business Machines Corporation or Tivoli Systems Inc. in the United
States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and other countries.
Other company, product, and service names may be trademarks or service marks
of others.
Batchman. Batchman is a process started at the Domain. A domain is a named group of Tivoli
beginning of each Tivoli Workload Scheduler Workload Scheduler workstations consisting of one or
processing day to launch jobs in accordance with the more agents and a domain manager acting as the
information in the Symphony file. management hub. All domains have a parent domain
except for the master domain.
F J
Fault-tolerant agent. An agent workstation in the Jnextday job. Pre- and post-production processing can
Tivoli Workload Scheduler network capable of be fully automated by scheduling the Jnextday job to
resolving local dependencies and launching its jobs in run at the end of each day. A sample jnextday job is
the absence of a domain manager. provided as TWShome\Jnextday. The Jnextday job does
the following: sets up the next day’s processing
Fence. The job fence is a master control over job (contained in the Symphony file), prints reports, carries
execution on a workstation. The job fence is a priority forward unfinished job streams, and stops and restarts
level that a job or job stream’s priority must exceed Tivoli Workload Scheduler.
before it can run. For example, setting the fence to 40
prevents jobs with priorities of 40 or less from being Job. A job is a unit of work that is processed at a
launched. workstation. The job definition consists of a unique job
Final Job Stream. The FINAL job stream should be the last job stream that is run in a production day. It contains a job that runs the script file Jnextday.

Follows dependency. A dependency where a job or job stream cannot begin execution until other jobs or job streams have completed successfully.
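In the scheduling language, a follows dependency is written with the follows keyword. A minimal sketch, in which the workstation site1 and the job names extract and report are illustrative:

   schedule site1#reports on weekdays
   :
   site1#extract
   site1#report follows extract
   end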
J

Jnextday job. Pre- and post-production processing can be fully automated by scheduling the Jnextday job to run at the end of each day. A sample jnextday job is provided as TWShome\Jnextday. The Jnextday job does the following: sets up the next day’s processing (contained in the Symphony file), prints reports, carries forward unfinished job streams, and stops and restarts Tivoli Workload Scheduler.

Job. A job is a unit of work that is processed at a workstation. The job definition consists of a unique job name in the Tivoli Workload Scheduler database along with other information necessary to run the job. When you add a job to a job stream, you can define its dependencies and its time restrictions such as the estimated start time and deadline.

Job Instance. A job scheduled for a specific run date in the plan. See also “Job”.
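Taken together, the Final Job Stream and Jnextday entries correspond to a definition of the following kind (a simplified sketch of the sample Sfinal definition; the workstation name master and the 0559 start time are illustrative assumptions):

   schedule master#final on everyday at 0559
   :
   master#jnextday
   end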
…manage production processing, the contents of the Symphony file (plan) can be displayed and altered with the Job Scheduling console.
T

Time restrictions. Time restrictions can be specified for both jobs and job streams. A time can be specified for execution to begin, or a time can be specified after which execution will not be attempted. By specifying both, you can define a window within which a job or job stream will run. For jobs, you can also specify a repetition rate. For example, you can have Tivoli Workload Scheduler launch the same job every 30 minutes between the hours of 8:30 a.m. and 1:30 p.m.
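In the scheduling language, such a window with a repetition rate is expressed with the at, until, and every keywords (a sketch; the workstation and job names are illustrative):

   schedule site1#polling on weekdays
   :
   site1#checkfile at 0830 until 1330 every 0030
   end

This launches checkfile every 30 minutes between 8:30 a.m. and 1:30 p.m.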
Tivoli Management Framework (TMF). The base software that is required to run the applications in the Tivoli product suite. This software infrastructure enables the integration of systems management applications from Tivoli Systems Inc. and the Tivoli Partners. The Tivoli Management Framework includes the following:
v Object request broker (oserv)
v Distributed object database
v Basic administration functions
v Basic application services
v Basic desktop services such as the graphical user interface

In a Tivoli environment, the Tivoli Management Framework is installed on every client and server. However, the TMR server is the only server that holds the full object database.

Tivoli Management Region (TMR). In a Tivoli environment, a Tivoli server and the set of clients that it serves. An organization can have more than one TMR. A TMR addresses the physical connectivity of resources whereas a policy region addresses the logical organization of resources.

Tree view. The view on the left side of the Job Scheduling Console that displays the Tivoli Workload Scheduler server, groups of default lists, and groups of user created lists.
U

User. For Windows NT only, the user name specified in a job definition’s “Logon” field must have a matching user definition. The definitions furnish the user passwords required by Tivoli Workload Scheduler to launch jobs.

W

Weekly Run Cycle. A run cycle that specifies the days of the week that a job stream is run. For example, a job stream can be specified to run every Monday, Wednesday, and Friday using a weekly run cycle. A weekly run cycle is defined for a specific job stream and cannot be used by multiple job streams. For more information see Run Cycle.
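In the scheduling language, a weekly run cycle corresponds to naming the days on the on keyword (a sketch with illustrative names):

   schedule site1#payroll on mo, we, fr
   :
   site1#paycalc
   end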
Wildcards. The wildcards for Tivoli Workload Scheduler are:
? Replaces one alphanumeric character.
% Replaces one numeric character.
* Replaces zero or more alphanumeric characters in the Tivoli Job Scheduling console.
@ Replaces zero or more alphanumeric characters in the Tivoli Workload Scheduler command line.

Wildcards are generally used to refine a search for one or more objects in the database. For example, if you want to display all workstations, you can enter the asterisk (*) wildcard. To get a listing of workstations site1 through site8, you can enter site%.
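For example, from the composer command line (a hedged sketch; confirm the exact syntax in the reference documentation):

   display cpu=@

lists all workstation definitions, while

   display cpu=site%

lists the workstations named site followed by a single digit.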
Workstation. A workstation is usually an individual computer on which jobs and job streams are run. Workstations are defined in the Tivoli Workload Scheduler database as unique objects. A workstation definition is required for every computer that runs jobs or job streams in the Workload Scheduler network.

Workstation class. A workstation class is a group of workstations. Any number of workstations can be placed in a class. Job streams and jobs can be assigned to run on a workstation class. This makes replication of a job or job stream across many workstations easy.
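A workstation class is defined in the database with a definition of the following kind (a sketch; the class name allsites and the member names are illustrative):

   cpuclass allsites
   members site1 site2 site3
   end

A job or job stream defined on allsites then runs on every workstation in the class.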
X

X-agent. See “Extended agent”.
Index

A
access method
  local UNIX 7
  remote UNIX 7
accessibility xvi
adapter.config
  upgrading 32
adding 73
Administrator
  adding 105
APARs
  IY45982 85
  IY46485 100
  IY47753 32
  IY48407 30
  IY49332 73, 75
  IY50279 33
  IY50282 32
  IY53209 112, 113
  IY57227 121
AT keyword 98, 99
auditing
  database level 83
  plan level 84

B
backup domain manager
  switching 17
backup files 32
backup master 107
  creating 108
  moving 108
behind firewall
  extended agent 150
behindfirewall option 149
bm check deadline option 87
bm check file option 87
bm check status option 87
bm check until option 87
bm look option 88
bm read option 88
bm status option 88
bm verbose option 88
BmEvents.conf file 6
  upgrading 32
BookManager xvi
books xii
  see publications xii, xv
books, online xvi

C
CLI
  conman 17
commands
  console 97
  dumpsec 75
  evtsize 74
  makesec 75
  wmaeutil 75
  wremovsp 152
compiler 97, 99
components file 9
components file, viewing 9
configuration files 32
configuration scripts 7, 100
connector 33
  install location 19
Connector
  where to install 19
console command 97
console messages and prompts 96
conventions
  typeface xvii
cpu SSL auth mode option 90
cpuname 109
creating
  the backup master 108
  user on Windows 26
customer support
  see Software Support 156
customize script
  running 68
  syntax 57

D
database
  mounting 109
database audit level 83
deadline 78, 99
dependency
directories
  sharing 94
directory names, notation xvii
dumpsec command 75

F
fault tolerance
  backup domain manager 17
  switching 17
  switchmgr 17
file dependency 6
final 97, 108
final job stream 98
firewall support 149
fixes, obtaining 156
full status 108

G
global options
  file example 84
  file template 84
  setting 81
  syntax 81
globalopts 108
  upgrading 32
globalopts file
  time zone feature 78

H
history option 83

I
information centers, searching to find software problem resolution 155
installation
  adding new features 43
  CDs 27
  fresh install 37
  log files 29
  prerequisites 33
  promoting 45
  silent 45
  software package blocks 51
  Tier 1 platforms 37
  Tier 2 platforms 58
  upgrading 33, 61, 62
  wizard program 37
Internet, searching to find software problem resolution 155, 156
internetwork dependencies 15
T
tcp port 53, 58
tcp timeout option 92
thiscpu 92
time zone
  enable option 84
  on backup master 107
  overview 20
timezone enable 78, 109
Tivoli desktop 105
Tivoli Management Framework
  as prerequisite 33
  supported versions 33
Tivoli Management Region 105
Tivoli Management Server 105
Tivoli software information center xv
TWS connector 3
  instances 20
TWS_TISDIR variable 77
twsinst
  authorization roles 24
  usage 47
typeface conventions xvii

U
uninstalling
  Tier 1 platforms 151
  Tier 2 platforms 152
unlink workstations 30
updating 33
  Tier 2 platforms 68
upgrading 33
  Tier 1 61, 62
  Tier 2 platforms 68
user account
  creating on UNIX 26
  creating on Windows 26
  rights 26
user name
  creating 38, 41, 44, 52

V
variables, notation for xvii

W
wimpspo 53
window
  Create Workstations 109
winstsp 53
wmaeutil 31
wmaeutil command 75
workstations
  naming conventions 19
  unlinking 30
wr enable compression option 92
wr read option 92
wr unlink option 92
wremovsp command 152
Printed in USA
SC32-1273-02