
Synchronizing the workload across heterogeneous scheduling environments
using Tivoli Workload Scheduler V8.6 cross-dependencies

Alberto Ginestroni
Valeria Perticarà
Tivoli Workload Scheduler development

Version: 20120709

© Copyright IBM Corp. 2012

Table of Contents
1 Introduction
2 Configuring Tivoli Workload Scheduler Distributed for cross-dependencies
2.1 Tivoli Workload Scheduler Distributed functioning as local engine
2.1.1 Configuring the Tivoli Workload Scheduler scheduling environment
2.1.2 Configuring the Tivoli Workload Scheduler workload
2.2 Tivoli Workload Scheduler Distributed functioning as remote engine
3 Configuring Tivoli Workload Scheduler for z/OS for cross-dependencies
3.1 Enable HTTP communications between engines
3.1.1 Define HTTPOPTS statement in the controller parameter library
3.1.2 Define ROUTOPTS statement in the controller parameter library
3.2 Define Remote Engine workstations in the local controller
3.3 Define Shadow jobs in the local controller
4 Two sample scenarios
4.1 Scenario 1
4.2 Scenario 2

1 Introduction
Under certain conditions, it is beneficial to run multiple scheduling environments: for geographical or organizational reasons, to increase scalability, or to keep different skill sets separate.
Yet, even when the workload is mostly separated, parts of it might still need to be synchronized. Tivoli Workload Scheduler version 8.6 provides the capability to model dependencies between jobs running in different Tivoli Workload Scheduler scheduling environments, whether distributed or z/OS.
This paper explains how this solution works and includes two sample scenarios that show how to configure Tivoli Workload Scheduler environments for cross-dependencies.

The following new object types are used to implement cross-dependencies:

• The Remote Engine workstation: a new type of workstation that locally represents a remote Tivoli Workload Scheduler engine, either distributed or z/OS.
• The Shadow job: a job defined on a Remote Engine workstation that maps a remote job defined on another engine.
To implement a cross-dependency, the user needs to:
• Define a workstation of Remote Engine type that points to the engine where the remote job (B) is defined.
• Define a Shadow job (B-shadow) that points to the remote job (B) on which the local job (C) must depend.
• Make the local job (C) a successor of the Shadow job.

When a Shadow job instance is ready to start, the scheduler selects it for submission and
sends a request to the remote engine. The remote engine binds an instance in its plan;
that is, it associates the Shadow job with a specific instance of the remote job in
the remote engine plan. The remote engine sends back the bind result and, if the bind is
successful, starts notifying the local engine of the remote job's status changes.

2 Configuring Tivoli Workload Scheduler Distributed for cross-dependencies
A newly installed Tivoli Workload Scheduler Distributed can easily be made ready for cross-
dependencies.

2.1 Tivoli Workload Scheduler Distributed functioning as local engine


Three modeling steps are needed to make Tivoli Workload Scheduler Distributed function as
a local engine in a cross-dependency environment:
1. Creation of a workstation of Remote Engine type
2. Creation of a job definition of Shadow type
3. Creation of a follows dependency that makes the Shadow job a predecessor of the local job you want to synchronize

2.1.1 Configuring the Tivoli Workload Scheduler scheduling environment


First, the user needs to define a workstation of Remote Engine type, which points to another
Tivoli Workload Scheduler Distributed Master Domain Manager or to a Tivoli Workload
Scheduler for z/OS controller to which remote jobs belong.
A Remote Engine workstation contains data that lets the local engine communicate with the
remote engine. This type of workstation must be hosted by a Workload Broker workstation,
because on the local side, the communication with the remote engine is entirely managed by
the Broker application.

Note: The hosting Workload Broker workstation may also be a Dynamic Domain
Manager.

For instance, if jobs running on the local Tivoli Workload Scheduler Distributed engine must
depend on jobs running on a remote Tivoli Workload Scheduler for z/OS engine, the following
workstation must be defined on the local engine:

The protocol, address, and port that the local engine must use to connect to the remote
engine are included in the definition of the Remote Engine workstation. Those used by the
remote engine to send back notifications to the local engine are specified in the bind requests.
More specifically, these values are stored in the form of a URI in the database of the local
engine and copied into each bind request. The URI can be managed using the
importserverdata.sh and exportserverdata.sh scripts in the ${TWAHOME}/TDWB/bin directory.
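
The same Remote Engine workstation can also be created from the composer command line. The sketch below is illustrative only: the workstation name, the hosting Broker workstation name, the host, and the port are assumptions, and the exact attribute set (OS, TYPE, PROTOCOL) should be verified against the composer reference for workstation definitions in version 8.6.

# Remote Engine workstation pointing to the z/OS controller (all names illustrative)
# the HOST attribute must reference the hosting Workload Broker workstation
CPUNAME TWSZ_DUBLIN
  DESCRIPTION "Remote z/OS engine in Dublin"
  OS OTHER
  NODE zoscontroller.dublin.example.com TCPADDR 511
  FOR MAESTRO HOST NC123162_DWB
    TYPE REM-ENG
    PROTOCOL HTTP
END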

2.1.2 Configuring the Tivoli Workload Scheduler workload


The second modeling step on the local engine is to define a job of Shadow type.
This kind of job, which must be defined on a Remote Engine workstation, contains data that
uniquely identifies the remote job that you want to make a predecessor of one or more local
jobs.
Referring to the same example as before, a job of Shadow z/OS type must be defined on the
Remote Engine workstation created beforehand:

The Shadow z/OS job definition contains the remote “Job stream name” and the remote “Job
number” that uniquely identify the remote z/OS job:

If the remote engine is available to the Tivoli Dynamic Workload Console as an engine
connection, instead of entering the “Job stream name” and “Job number” manually, you can
pick the remote job data directly from the database of the remote engine.
In our example, the remote engine is available to the Tivoli Dynamic Workload Console as the
engine connection named “TWSz_Dublin”:

Therefore, the user can run the look-up function of the “Job number” field and choose the
engine of interest:

Finally, the user can pick the job from the remote database:

The third and final modeling step is to make the Shadow z/OS job a predecessor of the local
job or jobs that you want to synchronize:

Once the Shadow job enters the Production Plan along with its successors and possible
predecessors, it behaves like any other job. The only difference is that, once it has started,
its status does not depend on a task running in the local Tivoli Workload Scheduler
environment but reflects the status of a task running in a remote Tivoli Workload Scheduler
environment.
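
Tying the three steps together, the following is a hedged composer-style sketch of the resulting workload; every object name in it (the job stream SYNCDEMO, the shadow job BSHADOW, the local job LOCALJOBC, and the workstations) is hypothetical.

SCHEDULE MYMASTER#SYNCDEMO
ON EVERYDAY
:
# shadow job defined on the Remote Engine workstation; it maps remote job B
TWSZ_DUBLIN#BSHADOW
# local job C is released only after the shadow job, and therefore the remote job, completes
MYMASTER#LOCALJOBC
 FOLLOWS TWSZ_DUBLIN#BSHADOW
END

When the plan runs, BSHADOW is submitted, bound to the matching remote job instance, and completes when the remote job completes, as described above.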

2.2 Tivoli Workload Scheduler Distributed functioning as remote engine


A newly installed Tivoli Workload Scheduler Distributed engine is immediately ready to be
used as a remote engine in a cross-dependency environment with other Tivoli Workload
Scheduler for z/OS or distributed engines.
It is possible to customize the relevant security settings to narrow the scope of the feature. For
example, in the Security file you can restrict the authorizations to access scheduling objects
that are granted to the “bindUser”.

Note: The “bindUser” is a global option that contains the name of the Tivoli Workload
Scheduler user who performs the bind in the Production Plan of the remote
engine. By default, it is the TWS_user (USER MAESTRO).
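
If you prefer to run the bind under a dedicated account rather than the default TWS_user, the global option can be changed with optman. This is a minimal sketch, assuming a hypothetical user named twsbind; verify the exact option key with optman ls.

# list the global options and locate the current bind user
optman ls

# change the bind user to the dedicated account (hypothetical name)
optman chg bindUser=twsbind

Remember to grant that user the corresponding authorizations in the Security file, as mentioned above.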

3 Configuring Tivoli Workload Scheduler for z/OS for cross-dependencies
If a Tivoli Workload Scheduler for z/OS scheduling environment is involved in a cross-
dependency scenario, you must run some configuration steps. Some steps apply to the local
controller where you define the cross-dependency. Others apply to the remote controller
where the remote job resides.

First, you must enable HTTP and/or HTTPS communications both in the local and in the
remote controller. Additionally, in the local controller, you must define Remote Engine
workstations and Shadow jobs.

The following paragraphs detail each configuration step.

3.1 Enable HTTP communications between engines
To enable HTTP and/or HTTPS communications on all of the involved controllers, both local
and remote, you must:
• Specify in the HTTPOPTS statement of the controller parameter library:
◦ The address and port where the controller waits for incoming requests
◦ For HTTPS requests, the SSL key ring to use
• Specify in the ROUTOPTS statement of the controller parameter library the HTTP and/or
HTTPS destinations:
◦ In the local controller, you must define a destination for each remote engine.
◦ In the remote controller, you must define a destination for each engine that may
request a bind.
If at least one HTTP or HTTPS destination is defined, the controller “HTTP server” and “HTTP
client” tasks start. These tasks manage all inbound and outbound requests.

3.1.1 Define HTTPOPTS statement in the controller parameter library


If you want to use HTTP connections, the setting of the HTTPOPTS statement is optional
because each keyword has a default value. The default port where the HTTP server listens
for requests is set to 511; if you want to use a different port, specify the HTTPPORTNUMBER
keyword:

HTTPOPTS HOSTNAME('1.111.111.111')
         HTTPPORTNUMBER(4567)

HTTPOPTS HOSTNAME('2.222.222.222')
         HTTPPORTNUMBER(511)

If you want to use the SSL protocol, you must:


a) Get the proper certificate from the remote engine. If you are in a test environment, you
can bypass this step and use the default security certificates stored in the SEQQDATA
library:
◦ EQQCERCL: the security certificate for the client
◦ EQQCERSR: the security certificate for the server
b) Create a key ring containing these certificates in RACF. You can use the sample job
EQQRCERT to import the certificates (see also the RACDCERT sketch after this list). To run
this job, ensure that you use the same user ID that RACF associates with the controller started task.
c) In the HTTPOPTS statement, define the SSL specific keywords:
HTTPOPTS HOSTNAME('1.111.111.111')
         HTTPPORTNUMBER(4567)
         SSLPORT(4568)
         SSLAUTHMODE(CAONLY)
         SSLAUTHSTRING(tws_test)
         SSLKEYRINGTYPE(SAF)
         SSLKEYRING(KEYRINGNAME)

HTTPOPTS HOSTNAME('2.222.222.222')
         HTTPPORTNUMBER(511)
         SSLPORT(512)
         SSLAUTHMODE(CAONLY)
         SSLAUTHSTRING(tws_test)
         SSLKEYRINGTYPE(SAF)
         SSLKEYRING(KEYRINGNAME)
Note that the “HTTP server” task can listen concurrently on a non-SSL and SSL port.
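
For reference, the key ring setup that step b) and the EQQRCERT sample job take care of corresponds to RACDCERT commands along the following lines. This is a hedged outline only: the started-task user ID (CTLRUSR), the ring name, and the certificate labels are assumptions and depend on how the certificates were actually imported.

RACDCERT ID(CTLRUSR) ADDRING(KEYRINGNAME)
RACDCERT ID(CTLRUSR) CONNECT(ID(CTLRUSR) LABEL('EQQCERSR') RING(KEYRINGNAME) DEFAULT)
RACDCERT ID(CTLRUSR) CONNECT(ID(CTLRUSR) LABEL('EQQCERCL') RING(KEYRINGNAME))
SETROPTS RACLIST(DIGTCERT) REFRESH

The SSLKEYRING value in the HTTPOPTS statement must then match this ring name.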

3.1.2 Define ROUTOPTS statement in the controller parameter library
To enable the communication between two Tivoli Workload Scheduler for z/OS controllers, in
this example TWS1 and TWS2, configure the controllers' parameter libraries in the following
way:
• In the TWS1 ROUTOPTS statement, define an HTTP(S) destination that represents the TWS2
controller:
◦ Use the same address and port that are defined in the TWS2 HTTPOPTS statement.
◦ After the port, specify “Z”: it indicates that the destination represents a z/OS
remote engine, that is, a remote Tivoli Workload Scheduler for z/OS controller.

TWS1 controller:
  ROUTOPTS HTTP(TWS2DEST:'2.222.222.222'/511/Z)
  HTTPOPTS HOSTNAME('1.111.111.111')
           HTTPPORTNUMBER(4567)

TWS2 controller:
  HTTPOPTS HOSTNAME('2.222.222.222')
           HTTPPORTNUMBER(511)

• In the same way, define an HTTP(S) destination that represents the TWS1 controller in the
TWS2 ROUTOPTS statement:
◦ Use the same address and port that are defined in the TWS1 HTTPOPTS statement.

TWS1 controller:
  ROUTOPTS HTTP(TWS2DEST:'2.222.222.222'/511/Z)
  HTTPOPTS HOSTNAME('1.111.111.111')
           HTTPPORTNUMBER(4567)

TWS2 controller:
  ROUTOPTS HTTP(TWS1DEST:'1.111.111.111'/4567/Z)
  HTTPOPTS HOSTNAME('2.222.222.222')
           HTTPPORTNUMBER(511)

To enable the communication between a Tivoli Workload Scheduler for z/OS controller and a
Tivoli Workload Scheduler Distributed Master Domain Manager, in this example TWS1 and
MDM, you must define an HTTP or HTTPS destination that represents the Master Domain
Manager in the ROUTOPTS of TWS1, by specifying:

◦ The hostname or IP address of the machine where the Master Domain Manager is
installed
◦ The port (31115 is the default non-SSL port; 31116 is the default SSL port)
◦ After the port, “D”: it indicates that the destination represents a distributed
remote engine

Note: If the controller is the remote engine, the hostname or IP address in the
ROUTOPTS must be exactly the same as what is specified in the
DWB.SRV_SERVERS table of the MDM database. To check the table
content, you can use the “exportserverdata.sh” script in the
${TWAHOME}/TDWB/bin directory.

ROUTOPTS HTTP(TWS2DEST:'2.222.222.222'/511/Z)
         HTTP(MDMDEST:'host1234.domain.com'/31115/D)

The corresponding URI stored in the DWB.SRV_SERVERS table of the MDM database has this form:

http://host1234.domain.com:31115/JobManagerRESTWeb/JobScheduler

3.2 Define Remote Engine workstations in the local controller


Once the controller parameter library is configured, you must create the Remote Engine
workstations in the database of the controller where you want to define the Shadow jobs.

Following the previous sample definitions, if a job that is defined on the TWS1 controller
must start after a job on the TWS2 controller has completed successfully, you must define a z/OS
Remote Engine workstation on TWS1 with the following characteristics:
◦ Work station name ===> TWS2
◦ Work station type ===> R - Remote Engine
◦ Reporting attribute ===> A - Automatic
◦ Server usage ===> N – Neither
◦ Destination ===> TWS2DEST
◦ Remote engine type ===> Z - z/OS

In the same way if, for example, a job defined on TWS1 controller must start after a job that is
defined on MDM has completed successfully, you must define a Distributed Remote Engine
workstation on TWS1, which has the following characteristics:
◦ Work station name ===> MDM
◦ Work station type ===> R - Remote Engine
◦ Reporting attribute ===> A - Automatic
◦ Server usage ===> N – Neither
◦ Destination ===> MDMDEST
◦ Remote engine type ===> D – Distributed

To create the workstations using the ISPF panels, from the Tivoli Workload Scheduler for
z/OS main panel, select 1.1.2 to list workstations in the database and enter the CREATE
command to create a new workstation:

To create the workstations using the Tivoli Dynamic Workload Console:


1. Select Tivoli Workload Scheduler → Scheduling environment → Design → Create
Workstations.
2. Select the Tivoli Workload Scheduler for z/OS engine connection and click "Create
Workstation."

3.3 Define Shadow jobs in the local controller
Once you have defined the Remote Engine workstation, you are ready to define the Shadow
job. A Shadow job has almost the same attributes as other jobs.

To define a Shadow job using the ISPF panels, from the Tivoli Workload Scheduler for z/OS
main panel, select 1.4.2 to create a job stream (application) in the database. Then specify
'OPER' to select operations.
Add a new operation, specifying the Remote Engine workstation in the operation list. For the
Shadow job, the job name is optional because there is no JCL in the JOBLIB, but you can specify
it to use as a filter in monitoring queries.
Select operation details and choose option 13, REMOTE JOB INFO, to specify the remote job
information. This information identifies a job in the remote database.

For a Shadow z/OS job, you must define the application ID and the operation number:

For a Shadow Distributed job, you must specify the job stream workstation, the job stream
name, and the job name:

In both cases, you can also specify the “Complete if bind fails” option, which indicates whether
the Shadow job status is automatically set to complete when the remote job does not exist:
Y – The Shadow job status is set to complete.
N – The Shadow job status is set to error.

Shadow jobs can also be created from the Tivoli Dynamic Workload Console: this
interface offers the additional option of looking up the remote job information in the remote
engine database, which reduces the risk of mistakes.

To create a Shadow job using the Tivoli Dynamic Workload Console:


1. Select Tivoli Workload Scheduler → Workload→ Design → Create Workload Definitions.
2. Select the Tivoli Workload Scheduler for z/OS engine connection and click "Go." The
Workload Designer opens in a new window.
3. Create a new job stream and add the Shadow job to it:

4. Enter the remote job data in the “Remote job” panel. You can choose this information
directly from the remote engine database using the look up function.
◦ For a Shadow z/OS job, you must define the application name (“Job stream name”)
and the operation number (“Job number”):

◦ For a Shadow Distributed job, you must specify the “Job stream workstation,” the “Job
stream name,” and the “Job name”:

As with normal dependencies, the only supported matching criterion is “Closest preceding.”

4 Two sample scenarios
The two sample scenarios below show how to implement cross-dependencies.
4.1 Scenario 1
A company needs to synchronize the workload running on two different Tivoli Workload
Scheduler Distributed environments. The job STOPDB on NC123162__MASTER in Rome
must run only after job UPDATEDB on NC117152__MASTER in Paris has completed
successfully.

In this scenario, NC123162__MASTER functions as the local engine and
NC117152__MASTER functions as the remote engine.
As described in section 2.1, a few modeling steps are needed on the local side.
First, define a workstation of Remote Engine type to allow the local engine to communicate
with the remote engine.

Next, a Shadow job must be defined to uniquely identify the remote job that you want to make
a predecessor of the local job.
In this case, using the Workload Designer of the Tivoli Dynamic Workload Console, you
create Shadow job SHUPD1, which points to the remote job
NC117152__MASTER#MANAGEDATA.UPDATEDB.

Note: If you want SHUPD1 to complete successfully and release its successors when
no matching job is found on the remote engine, you can check the option
“Complete if bind fails.”

If there are several instances of the UPDATEDB job on the remote engine, you can bind the
correct one by working with the “Matching criteria.”
Let's assume that SHUPD1 has a Scheduled Time of 13:00, and the remote instance of
UPDATEDB that you want to bind has a Scheduled Time of 11:00.
If you know there is no other instance of UPDATEDB between 13:00 and 11:00, you can
choose the “Closest preceding” option in the “Matching criteria” tab:

If there are other instances of UPDATEDB before the one you want to bind, or if the instance
of UPDATEDB has a Scheduled Time after SHUPD1, you can use the other available
matching criteria to bind it.
Finally, you need to make SHUPD1 a predecessor of STOPDB.

Once the job stream BACKUPDATA gets into the Production Plan, job STOPDB is released
for execution only after SHUPD1 has completed successfully.
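
A quick way to follow what happens at run time is conman on the local master. This is a minimal sketch, assuming that the BACKUPDATA job stream is scheduled on the NC123162__MASTER workstation (the exact instance identifiers in your plan may differ):

# show the status of the BACKUPDATA job stream instance(s)
conman "ss NC123162__MASTER#BACKUPDATA"

# show all jobs in the instance, including the shadow job SHUPD1 and its successor STOPDB
conman "sj NC123162__MASTER#BACKUPDATA.@"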
The synchronization between the Shadow job SHUPD1 and the remote job UPDATEDB starts
when the Shadow job is free from possible dependencies and starts executing.

However, as with any other Tivoli Workload Scheduler job, the Shadow job might initially have
unresolved dependencies and stay in HOLD:

As dependencies are satisfied, SHUPD1 follows the typical status transition of dynamic jobs,
which are managed by the Broker application:

• Possible predecessors of SHUPD1 completed successfully...

• ...the Remote Engine workstation is up, and SHUPD1 starts executing...

• ...the Broker application allocates resources for execution, contacts the remote engine, and
sends the bind request. The bind request is accepted...

• ...a matching job on the remote engine was found.

Note: If the bind request fails because no matching job is found on the remote engine
or because of lack of authorization on the remote side, SHUPD1 switches from
WAIT to ERROR.

It switches from WAIT to SUCC only if the “Complete if bind fails” option was
checked.

Once the Shadow job is in BOUND status, it is subscribed to receive notifications from the
remote engine concerning status changes of the remote job.
If you want to verify which remote job was bound, you can check it in the “Extra Information”
section of the Shadow job properties:

The job log of a Shadow job contains messages sent by the remote engine. While SHUPD1 is in
BOUND status, the log contains message “AWSRJM001I”, sent by the remote engine upon
successful binding:

Note: The bind is initially performed at the job stream instance level on the
Preproduction Plan. Afterward, it is validated at the job instance level on the
Production Plan.

The Shadow job stays in BOUND status as long as the remote job does not start. Once the
remote job starts executing, the status of the Shadow job starts taking on the status of the
remote job. If, at binding time, the remote job has already started, the Shadow job
immediately takes on the status of the remote job.
Let's assume that at binding time, the remote job is in HOLD status:

Once the dependencies of the remote job get resolved and it starts executing, the status of
the Shadow job changes with respect to the status of the remote job as follows:

At this point, Shadow job SHUPD1 releases the succeeding job STOPDB.

4.2 Scenario 2
A company spreads its workload across three different sites. Each site has its own scheduling
environment: two Tivoli Workload Scheduler for z/OS controllers and a Tivoli Workload
Scheduler Distributed Master Domain Manager. At the end of the workday, each site runs
end-of-day activities and finally updates the database. When all sites have accomplished
these tasks successfully, one of them analyzes the collected data to provide reports and
statistics.

In this example you must:
• Enable HTTP communications between all of the involved engines (see paragraph "3.1 Enable HTTP communications between engines").
• Define two Remote Engine workstations in the TWS1 controller (see paragraph "3.2 Define Remote Engine workstations in the local controller"):
◦ MDM for the Master Domain Manager.
◦ TWS2 for the controller.
• Define two Shadow jobs in the TWS1 controller (see paragraph "3.3 Define Shadow jobs in the local controller"):
◦ The Shadow Distributed job SHUPDD1, which maps the UPDATEDB_D1 job in the ENDDAYTWSD1 job stream defined in the MDM database.
◦ The Shadow z/OS job SHUPDZ2, which maps the UPDDBZ2 job in the ENDDAYTWSZ2 job stream defined in the TWS2 controller (see the sketch after this list).
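
To show how these two definitions map onto the REMOTE JOB INFO panel described in section 3.3, here is an indicative sketch; the operation number of UPDDBZ2 and the job stream workstation of ENDDAYTWSD1 are hypothetical, because they are not given above:

Shadow z/OS job SHUPDZ2:
  Application ID         ===> ENDDAYTWSZ2
  Operation number       ===> 020

Shadow Distributed job SHUPDD1:
  Job stream workstation ===> MDM_MASTER
  Job stream name        ===> ENDDAYTWSD1
  Job name               ===> UPDATEDB_D1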

Once in the plan, the synchronization between the Shadow jobs and the remote jobs starts
when the Shadow jobs are free from possible dependencies and start to execute. In this
example they do not have dependencies and can start immediately.

The Shadow job status remains READY until the remote job starts; the extended status
(status details) provides additional information about what is happening:
• The controller is sending the bind request to the remote engine...

• ...the remote engine has received the bind request and is processing it...

• ...the remote engine has found a match in the remote engine plan: either the LTP
or the CP.

Note: If the bind request fails because no matching job was found on the remote engine
or because of lack of authorization on the distributed remote engine, the
Shadow job status is set to ERROR with error code FBND.

The job completes successfully only if the “Complete if bind fails” option was
checked.

Once the bind is established, the Shadow job is subscribed to receive notifications from the
remote engine concerning status changes of the remote job. If you want to verify the remote
job that was bound, you can check it in the “Shadow job information” section of the job
properties:

Once the remote job starts executing, the status of the Shadow job starts taking on the status
and error code of the remote job. If at binding time the remote job has already started, the
Shadow job immediately takes on the status of the remote job.
