1.T24 NonStop
No part of this document may be reproduced or transmitted in any form or by any means,
electronic or mechanical, for any purpose, without the express written permission of TEMENOS Holdings NV.
T24 Non Stop
Table of Contents
Introduction
Technical Architecture
GLOBUS And GLOBUS Desktop
T24 And T24 Browser
Components Of T24 Browser Installation
Internet Explorer
Network Dispatcher
Web Server
EJB Layer
MQ Series
T24 Server
bnk.data directory – NFS Mounted
Database Server
Message Flow
Message In
Message Out
Message Processing In Detail - Token Processing
High Availability Failure
In Flight Failure Scenarios
Locking
Close Of Business(COB) Routine For Releasing Timed Out Locks
Additional Information On EB.RECORD.LOCK
Working Of Multi Threaded COB Routines In T24 – An Overview
Close Of Business (COB)
tSM And tSA – An Overview
COB configuration
TSA.SERVICE
TSA.WORKLOAD.PROFILE
TSA.PARAMETER
TSA.STATUS
Initiating COB Using tSM And tSA
Execution of the COB routines
Stopping tSA and tSM
Resilience
Introduction
The term T24 refers to ‘Temenos 24’, meaning 24-hour NON-STOP processing,
whereby transactions can be input while the system is running ‘Close Of Business’ (End Of Day). A
number of technical and architectural changes have been made to implement T24 non-stop
processing. This document aims to give an insight into those technical and architectural changes, and
also covers the T24 Browser, an important part of T24.
The ‘Non Stop’ processing feature of T24 is available from T24 release G.14.0. As
part of this release the following applications support non-stop processing. These applications can be
used while the system is running COB only if the product ‘NS’ (NON STOP) is installed.
CUSTOMER
ACCOUNT
FUNDS.TRANSFER
STANDING.ORDER
TELLER
TELLER.ID
SEC.OPEN.ORDER
Technical Architecture
GLOBUS And GLOBUS Desktop
[Architecture diagram: multiple client machines, each running GLOBUS Desktop, connect to a single
GLOBUS Server running GLOBUS on jBASE/uniVerse under Unix/NT.]
[Architecture diagram: in the T24 Browser architecture, HTTP requests flow from the clients to a
Network Dispatcher, and HTTP responses flow back.]
Internet Explorer
This is the front end that the client will be using to access data in T24. All that the user needs to do is
to supply a URL that will enable him to connect to T24.
Network Dispatcher
This is third-party software. The job of the network dispatcher is to receive the messages from Internet
Explorer and route each one to any of the available web servers. This is used for load-balancing purposes.
Web Server
This server can have either Tomcat or Websphere installed. This server is used to publish the web
pages on to the Internet Explorer. An example would be, when a user wishes to open a version, then
the fields to be displayed and their properties are sent to the web server. The web server then creates
the web page and sends it to the user via the network dispatcher. The web server is a cluster
installation – meaning one installation of web server across multiple servers. The sessions are
persisted to a database at the web server level. The web server holds the TC (TEMENOS
CONNECTOR) Client and the T24 Browser. The T24 Browser is the component that parses each HTTP
request into an XML request and sends it on for further processing. The Web Server is capable of storing
tokens that are related to each message (tokens are discussed in detail later in this section).
EJB Layer
This is the server that contains the web services that can be published for third party systems to
access. Some amount of business logic can be built into these web services (please refer to the
documentation on Web Services for more information on this). The message received from the Web
Server is routed through a switch/MQ Series to any one of the available T24 servers for processing.
There can be multiple EJB servers.
It is vital to understand that this EJB layer is optional and is required only when web services need to
be deployed. If web services are not deployed, then this layer is used purely for connectivity
purposes.
If the EJB Server is present, the TC (TEMENOS CONNECTOR) Client running on the Web Server will
hold the IPs of the EJB Servers and route the messages to an EJB Server; otherwise it will hold the IP
of the SWITCH/MQ SERIES through which the messages will be routed to the T24 servers.
MQ Series
This is IBM’s message-queuing middleware. It is capable of handling application-specific data loads,
queuing messages between the layers and delivering them reliably.
T24 Server
This is the T24 server and there can be more than one of them. Each of these servers will contain a
separate T24 installation (minus the bnk.data directory) and jBASE installation. The OFS module
and the TC (TEMENOS CONNECTOR) server must be installed on each of these servers. The TC
Server is the entry point into T24. It is in the T24 server that all the business logic is held
and the actual validation of data happens.
bnk.data directory – NFS Mounted
This is the T24 ‘data’ directory that is mounted on to a network file system. All T24 servers will access
this ‘data’ directory when they need any information. The data files in this directory will not contain any
data. They will only contain references to the related Oracle tables where the data is actually stored.
Database Server
T24 is database independent, and supports several different databases, including Oracle and DB2. It is
on this database server that Oracle/DB2/J4 is installed. This is where the T24 data will reside, in XML
format. Oracle and DB2 support clustering, so a single Oracle/DB2 installation can span multiple
servers. J4 does not support clustering, so only one database server can be used if J4 is the
database. To make full use of T24’s capabilities, the database must provide an online backup
mechanism (taking backups while users are logged in).
Message Flow
Message In
The user initiates a request from Internet Explorer. This will be an HTTP
request. The request is received by the Network Dispatcher and is routed to any one of the Web
Servers. The T24 Browser, which resides on the Web Server, parses the HTTP request into XML and
passes it on to the TC (TEMENOS CONNECTOR) Client running on the same server. The TC
(TEMENOS CONNECTOR) Client then routes the request, via any one of the EJB servers if that layer
is present, to any one of the T24 servers. As mentioned above, the entry point into T24 is the TC
(TEMENOS CONNECTOR) Server. The TC (TEMENOS CONNECTOR) Server running on the T24
server then parses the XML request into OFS format and gives it to T24 (OFS) for processing. The
request is processed and the database is updated. At this point the jBASE drivers are used to convert
the data into XML format before it is written to the database.
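As a rough illustration of these hand-offs, the sketch below mimics the two conversions (HTTP parameters to XML, then to an OFS-style string). All function and element names here are assumptions, and the OFS syntax shown is a simplified shape, not the full specification:

```python
# Illustrative sketch only: not the actual T24 Browser / TC Server API.

def http_to_xml(params: dict) -> str:
    """T24 Browser step: wrap HTTP request parameters in an XML request."""
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return f"<ofsRequest>{fields}</ofsRequest>"

def xml_to_ofs(application: str, function: str, user: str,
               record_id: str, fields: dict) -> str:
    """TC Server step: render the request as an OFS-style message.
    Simplified shape: APPLICATION,/FUNCTION,USER,ID,FIELD=VALUE,..."""
    body = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{application},/{function},{user},{record_id},{body}"

print(xml_to_ofs("FUNDS.TRANSFER", "I", "INPUTTER", "FT0001",
                 {"DEBIT.ACCT.NO": "12345", "CREDIT.ACCT.NO": "67890"}))
```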
Message Out
Once the database update is complete, the response is sent back from the
T24 server. The TC (TEMENOS CONNECTOR) Server in the T24 server parses the OFS
response into XML and passes it on to any one of the EJB servers. The TC (TEMENOS
CONNECTOR) Client receives the response and passes it on to the T24 Browser running on one of
the Web Servers. The Web Server then sends the response as an HTML page to the Network
Dispatcher, which sends it to the appropriate Internet Explorer screen. The Web Server has the
built-in intelligence to determine which IE session initiated the request, and will instruct the next layer
to send the response to the IE session that actually initiated it.
Message Processing In Detail - Token Processing
Whenever a message is sent to the T24 server for processing, it is sent along with a
unique message reference, which comprises a random number called a token and a sequence
number (sequence numbers are discussed in detail later in this section). Assume a scenario where a
user commits an FT transaction: the message is sent to the T24 server, the server processes it, and
then, due to some unforeseen situation, the result does not get displayed to the user. As far as the
user is concerned, the transaction has not been processed, as he hasn’t seen the message ‘Validated’ or
‘Processed’. Therefore, he would commit the transaction again. At this point the T24 server needs to
tell the user that the transaction has already been processed, give him the necessary response and
NOT process the transaction again. This is precisely what is achieved by using the UNIQUE
MESSAGE REFERENCE.
When a user signs on to the T24 server using the browser, a request to create a
session is sent to the T24 server. This request has no unique message reference attached to it,
as it is the first request from the user. When this request hits the T24 server, the
OFS.SESSION.MANAGER, running as part of the TC (TEMENOS CONNECTOR) server on the T24
server, recognises that it is a ‘sign on’ request, performs the necessary SMS validations and
creates a new token for that user. This is the token that the user will use when sending
subsequent requests. Please note that a token is not created for the first request from the
user (the sign-on request). The token, once created, is stored in the following files
The token that has been generated and stored in the above-mentioned files is
sent back to the user so that it can be used for his next request. When the token is sent back,
it is held by the browser and only the response is sent to the user. The token (along with the user
information – details stored in the F.OS.TOKEN file) is held by the T24 Browser as it needs to append
it to the subsequent request from the user.
After the user signs on using the browser, assume he inputs and commits a FT
transaction. This transaction needs to be sent to the T24 server for processing. This request (FT
transaction) will have to pass through the T24 Browser. The T24 Browser will append the unique message
reference, which comprises the token and a sequence number (this sequence number is generated
by the T24 Browser), to the request and send it for processing. Thus a request (apart from the SIGN
ON request) is always sent to the T24 server along with a unique message reference.
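The reference-forming step can be sketched minimally as follows (the class and method names are assumptions, not the actual T24 Browser code):

```python
import itertools

# Hypothetical sketch: the token issued at sign-on plus a browser-generated
# sequence number together form the unique message reference.
class BrowserSession:
    def __init__(self, token: str):
        self.token = token              # issued by OFS.SESSION.MANAGER at sign-on
        self._seq = itertools.count(1)  # sequence numbers start at 1

    def unique_message_reference(self) -> str:
        return f"{self.token}-{next(self._seq)}"

session = BrowserSession(token="0000111111")
print(session.unique_message_reference())  # 0000111111-1
print(session.unique_message_reference())  # 0000111111-2
```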
When the message hits the T24 server, the first file to be checked is a file called
F.OS.TOKEN.HISTORY. This file will hold the unique message reference along with the response of
any processed request. This file is checked to see if the incoming request is an already processes one.
If it is an already processed request (the unique message reference would be in this file) the response
for that request will be retrieved and sent to the user. The transaction will not be processed again.
The FT transaction that has been input by the user is a new transaction, and therefore
an entry in the F.OS.TOKEN.HISTORY file will not be found. Now the T24 server looks into the
F.OS.TOKEN file to see if it is a live token. In this case, as this is a new request, the token number will
be present in the F.OS.TOKEN file. Since this is a new request, the T24 server will now process it,
update the F.OS.TOKEN.HISTORY file with the unique message reference and the response, and
send the response back.
Once the response hits the T24 Browser, the token number is retained, the
sequence number gets reset to 1 and the response alone is sent to the client.
You may have noted from the above-mentioned steps that the F.OS.TOKEN.USE file
contains more than just a record with the id ‘Username’. The F.OS.TOKEN file contains 3 types
of records
The following flow chart depicts the flow of a request from the client with the unique
message reference.
When the user types ‘ACCOUNT L’ from the Internet Explorer screen, there would be
2 messages that would get generated at that point in time.
Now both these messages will get to the web server. The web server will have only
one token number, and therefore will assign the same token number to both, but will assign separate
sequence numbers to the two messages.
Example:
0000111111 + Message1 + 1
0000111111 + Message2 + 2
Now both these messages will get to the T24 server, and one of them will take precedence.
The read on F.OS.TOKEN.HISTORY will fail as this is a new token, the read on F.OS.TOKEN.USE will
also fail as this is a new token, and therefore normal processing will take place for this message
and the response will be sent back to the client with a new token number.
Then the second message will come into the system for processing. The read on the
F.OS.TOKEN.HISTORY file will fail as the request has a new sequence number attached. The read on the
F.OS.TOKEN file will fail as this is an already-processed token. The read on the F.OS.TOKEN.USE file
with id ‘Username-STACK’ will succeed; the time-out will be checked, and if it is within the time-out
period, the second message will be processed with the same token number and the
F.OS.TOKEN.HISTORY file will be updated. If the token has timed out then a ‘SECURITY
VIOLATION’ is sent back to the client.
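Condensing these checks into a small sketch (the file names come from the text; the in-memory data structures, timeout value and function names are assumptions):

```python
import time

TIMEOUT_SECS = 600

token_history = {}   # F.OS.TOKEN.HISTORY: unique message reference -> stored response
live_tokens   = {}   # F.OS.TOKEN (simplified): token -> time it was issued

def handle(token: str, seq: int, request: str) -> str:
    ref = f"{token}-{seq}"
    if ref in token_history:                  # already processed: replay, don't re-run
        return token_history[ref]
    issued_at = live_tokens.get(token)
    if issued_at is None or time.time() - issued_at > TIMEOUT_SECS:
        return "SECURITY VIOLATION"           # unknown or timed-out token
    response = f"PROCESSED:{request}"         # normal OFS processing would happen here
    token_history[ref] = response             # store the response under the reference
    return response

live_tokens["0000111111"] = time.time()
assert handle("0000111111", 1, "FT commit") == "PROCESSED:FT commit"
assert handle("0000111111", 1, "FT commit") == "PROCESSED:FT commit"  # replayed
```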
The concept of the sequence number is used to handle multiple requests that
are sent from the client at the same time. Once both messages are processed and the token
number is sent back to the T24 Browser, the sequence number is reset to one.
[Flow chart: the Web Server forms the unique message references Token(100)+1+Request1 and
Token(100)+2+Request2 for Message 1 and Message 2. Message 2 waits and is processed after
Message 1. The T24 Server checks F.OS.TOKEN.HISTORY; if the token is found and has timed out,
‘SECURITY VIOLATION’ is passed back.]
High Availability Failure
High Availability failures are hardware failures; they are covered by routing to
an available machine and are entirely transparent to the end user. Each layer is horizontally scalable,
and hence an alternate route can always be found as long as there is at least one machine still
available at each level:
Network dispatcher
http Server
Web Tier
EJB Tier
T24 Server
Database Server
T24 is database independent, and supports several different databases, including Oracle and DB2.
These databases have their own resilience features that use multiple servers that support full fail over
and resilience, the details of which are outside the scope of this document.
It is important to note that all T24 transactions are atomic in nature and use the native transaction
management facilities of the database being used – either all the updates for a transaction are made,
or none.
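The all-or-none point can be illustrated with SQLite standing in for Oracle/DB2 (the table names and transfer logic are invented for the example): the business updates and the processed-message marker commit in a single database transaction, so a retried message can never be applied twice.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE account  (id TEXT PRIMARY KEY, balance INTEGER);
    CREATE TABLE processed(msg_ref TEXT PRIMARY KEY, response TEXT);
    INSERT INTO account VALUES ('ACC1', 100), ('ACC2', 0);
""")

def transfer(msg_ref: str, amount: int) -> str:
    try:
        with db:  # one database transaction: either all updates are made, or none
            db.execute("UPDATE account SET balance = balance - ? WHERE id='ACC1'",
                       (amount,))
            db.execute("UPDATE account SET balance = balance + ? WHERE id='ACC2'",
                       (amount,))
            db.execute("INSERT INTO processed VALUES (?, 'OK')", (msg_ref,))
        return "OK"
    except sqlite3.IntegrityError:
        # duplicate msg_ref: the whole transaction rolled back, so nothing
        # re-ran; simply retrieve the stored response
        return db.execute("SELECT response FROM processed WHERE msg_ref=?",
                          (msg_ref,)).fetchone()[0]

assert transfer("TOKEN-1", 40) == "OK"
assert transfer("TOKEN-1", 40) == "OK"   # retried message: no double debit
assert db.execute("SELECT balance FROM account WHERE id='ACC1'").fetchone()[0] == 60
```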
In Flight Failure Scenarios
In-flight failures refer to machine failure while the message is in flight from the
end user’s browser to the tServer and back again. They refer to failures at any point – whether the
message has been processed by T24 or not. The key feature that supports T24’s ability to remain
resilient in such a situation is that each message that is sent contains a unique message reference.
This message reference is used to identify whether a message has been processed previously – and
this update is done as part of the same database transaction as the transaction itself.
There is a simple, automatic retry mechanism at the web tier container level. Where a failure is at the
EJB or tServer level, the message is simply resent with the same message reference. T24 checks if
this transaction reference has been previously processed or not, and if so simply retrieves the
response (which is stored with the message reference). Where the message has not previously been
processed (i.e. in normal operation), the message is processed and the response returned. The time-out,
the number of channels to try and the number of attempts per channel are parameter driven. The effect
on the end user is minimal – a slight delay in the response time.
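A hypothetical sketch of such a parameter-driven retry (the parameter names and the send callback are assumptions, not the actual container configuration):

```python
import socket

CHANNELS_TO_TRY = 3        # parameter-driven limits, illustrative values
ATTEMPTS_PER_CHANNEL = 2

def send_with_retry(message: str, msg_ref: str, channels, send):
    """Resend the same message, with the SAME unique message reference, until
    a channel succeeds. T24 replays the stored response if msg_ref was
    already processed, so retries are safe."""
    last_error = None
    for channel in channels[:CHANNELS_TO_TRY]:
        for _ in range(ATTEMPTS_PER_CHANNEL):
            try:
                return send(channel, msg_ref, message)
            except (socket.timeout, ConnectionError) as exc:
                last_error = exc               # try again, then the next channel
    raise RuntimeError("all channels exhausted") from last_error

# Demo: the first EJB/tServer channel fails once, then the retry succeeds.
calls = []
def flaky_send(channel, ref, msg):
    calls.append(channel)
    if len(calls) < 2:
        raise ConnectionError("in-flight failure")
    return f"RESPONSE:{ref}"

print(send_with_retry("FT commit", "TOKEN-2", ["ejb1", "ejb2"], flaky_send))
```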
Where the failure occurs “above” our retry level, i.e. in the web tier container running the servlet, the
HTTP server or the network dispatcher, the effect is not as clean. If possible, the error is caught in an
error.jsp (a feature of the web tier container) and warns the end user that there may have been a
problem. Implementation staff can tailor this page to automatically run an enquiry on protocol, or some
such, that details the last transactions that the user has made – this should confirm whether the deal
has been processed or not. If the error is not caught, the user will have to sign on again.
Though the end effect on the user is harsher, the transaction will never be processed twice – the worst
case scenario is that the user loses his input.
Here either the client machine (the end user’s PC) or the client software (the end user’s browser, such
as Internet Explorer or Netscape Navigator) fails. The user must sign on to T24 again, and at this point
the implementation of T24 can automatically launch this same enquiry that details the last transactions
that the user has made.
Locking
Locking in T24 is done in a manner slightly different from what we are used to, as
there can now be multiple application (T24) servers and there is a stateless connection between
the client and the T24 server. There are three different scenarios that we need to understand with
locking in T24.
All this while, when we used J4 as the database, whenever there was a request to
lock a record (READU, F.READU, F.MATREADU etc.), jBASE would lock the record regardless of
whether the record existed in the database. The jRLA daemon running in the background is
in charge of locking. This daemon locks the record and stores the locking information in
the shared memory of the server where jRLA is running, provided the record being locked
belongs to a J4 file. If the record belongs to a J3 type of file, it is Unix that locks the record. At the
jBASE level we need to know the various locks that have been taken by Unix as well, and hence
this locking information is written to a file named JEDI_SHM_WORKFILE, which can be found under
the tmp directory of jBASE. Data written to the JEDI_SHM_WORKFILE is used only for
administrative purposes. When we execute a SHOW-ITEM-LOCKS command, locking information is
obtained from the shared memory and from the JEDI_SHM_WORKFILE. jBASE at no point in time
uses the JEDI_SHM_WORKFILE to decide whether or not a record is locked.
With multiple application servers (T24 Servers) and with Oracle/DB2 as the database
a. The data files don’t store data anymore but are merely stub files that point to the actual table
in Oracle or DB2 that stores the data. (We need to remember that these stub files are stored
in a common location (NFS mount) and are shared by all the T24 servers.)
b. Each T24 server now has its own copy of jBASE running and hence will have its own copy of
JEDI_SHM_WORKFILE
So, if there is no data at the jBASE level, how do we lock it? What information is held
in the JEDI_SHM_WORKFILE? What happens to database (Oracle/DB2) locking? The following
section should answer these questions.
Whenever we execute any statement from our routines to lock a record, jBASE
performs a calculation on the ID of the record to be locked and arrives at a number that could
be anything between 1 and 2^32. This number is the byte of the stub file in the NFS
mount that jBASE will ask Unix to lock. This locking information is written to the
JEDI_SHM_WORKFILE. We need to understand that
a. There is no lock maintained at the database level. Oracle or DB2 will not lock records until
the jBASE driver tells them to. In simple terms, the jBASE driver will not do a ‘SELECT FOR
UPDATE’ and hence no lock will be maintained at the database level.
b. We write into JEDI_SHM_WORKFILE only for information/administrative purposes.
Let us understand this scenario with an example.
Assume that there are 2 T24 servers, each having its own copy of jBASE and T24
running, and a data directory that has been NFS mounted. Assume a routine that is to update a
CUSTOMER record with ID 100069 is being executed on T24 Server 1. This routine would do an
F.READU on the record 100069 in the FBNK.CUST000 file (the actual data file name for the
FBNK.CUSTOMER file). Once the F.READU is executed, a calculation is performed on the ID that
needs to be locked (100069) and a number is obtained. This number is the byte in the FBNK.CUST000
file that will get locked.
This locking information is then written on to the JEDI_SHM_WORKFILE on T24 server 1. At this point,
we need to note that the driver will just perform a read on the database (Oracle/DB2) and obtain the
record from the database and no lock will be maintained at the database level. Assume the same
routine to update the CUSTOMER record with ID 100069 is executed from T24 server 2. This routine
will try to get a lock on the same CUSTOMER record 100069. This server will also perform a
calculation on the ID of the record, and the number it obtains will be the byte of the FBNK.CUST000
file that T24 Server 1 has already locked. This server will then try to lock the same byte in the
FBNK.CUST000 stub file. This operation will fail as T24 Server 1 has already locked that byte and
hence, the message ‘RECORD LOCKED’ will be sent to the user.
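The byte-calculation idea can be sketched as follows (Unix only; the real jBASE hash is proprietary, so zlib.crc32 stands in for it here, and the file handling is simplified). Because all T24 servers mount the same stub file over NFS, locking one byte of that file serialises access to the record across servers:

```python
import fcntl
import tempfile
import zlib

def record_byte(record_id: str) -> int:
    """Map a record ID to a byte offset (crc32 as a stand-in for the jBASE hash)."""
    return zlib.crc32(record_id.encode()) % (2 ** 32)

def lock_record(stub_file, record_id: str) -> int:
    offset = record_byte(record_id)
    # Lock exactly 1 byte at the computed offset; a second server doing the
    # same on the shared stub file would fail (RECORD LOCKED).
    fcntl.lockf(stub_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB, 1, offset)
    return offset

with tempfile.NamedTemporaryFile() as stub:   # stands in for FBNK.CUST000
    offset = lock_record(stub, "100069")
    print(offset == record_byte("100069"))
```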
Scenario 2: Locking mechanism used when records are opened by the user
Before we discuss the locking mechanism used when records are opened from the
browser, we need to be clear that when we communicate via the Browser, there is a
stateless connection between the client and the server. Let us understand this with an example.
Assume that there are 2 T24 servers and a user opens a Customer record with ID
100069 from the Browser and this request goes to T24 server1. As discussed before, jBASE would
lock a specific byte of the appropriate stub file, would write the locking information on to the
JEDI_SHM_WORKFILE on T24 Server 1, obtain the record from the Database and display the record
to the user. Therefore at this point, T24 Server 1 is holding a lock on this record. Assume, the user
makes some changes to the record and commits it. This request to commit the Customer record might
go to T24 Server 2. If it does, the T24 Server 2 would not be able to proceed with the processing as
the byte of the stub file that it wishes to lock in order to commence processing has already been
locked by T24 Server 1. T24 Server 2 cannot even check who has locked the record as the locking
information is at the moment available in the JEDI_SHM_WORKFILE on T24 Server 1. Therefore, in a
browser environment, there is no point holding a lock on records that a user opens. So, can we do
away with locking? The answer is surely a ‘NO’. So, how do we achieve locking in a stateless
connection environment? The following example should answer this question.
When a user opens a record in any application in T24 using the Browser, the
message reaches any one of the T24 servers. jBASE running on this server will perform a calculation
on the ID of the record, obtain a number and tell Unix to lock that specific byte in the appropriate stub
file, and would also write the locking information to the JEDI_SHM_WORKFILE. In addition to this, in
T24 we have a routine named EB.RECORD.LOCK that writes the locking information to a file
named F.RECORD.LOCK. Once the locking information is written to F.RECORD.LOCK, jBASE
removes the lock from the stub file.
The F.RECORD.LOCK file is common to all the T24 servers and is
stored with all the other data files on the Database server.
We release the lock on the stub file as there is a stateless connection between the
client and the server and there is no point holding a lock on one of the T24 Servers as the subsequent
request for that record could go to another T24 server.
a. This file along with the EB.RECORD.LOCK routine is available to provide an extra layer of
record locking in T24
b. Updates into this file will happen only if the field BROWSER.REC.LOCKS in the SPF is set to
‘YES’. The default value is ‘’ (NULL).
c. If the above-mentioned field in the SPF is set to ‘YES’ and records are opened through the
Desktop/Classic version of T24, a record will still be created in F.RECORD.LOCK for the
record being opened, but with no contents (no user name, no time stamp etc.) as locks
cannot expire in a stateful connection.
d. Writing into F.RECORD.LOCK allows locks to expire after a certain time duration due to dead
connections and for the Browser to re-establish locks when requested again.
Now, if the user makes changes to the record and commits, and the commit does go to T24
Server 2, this server will try to lock the stub file after performing the calculation and will succeed in
doing so. Once done, it will try to update F.RECORD.LOCK, but will realise that the user who has
committed the record is the one who already possesses a lock on it, and hence will proceed with
the processing. (Please note that we store the username and timeout in the F.RECORD.LOCK file.)
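A minimal model of this behaviour, with assumed names and an in-memory dictionary standing in for the F.RECORD.LOCK file:

```python
import time

LOCK_EXPIRY_SECS = 300
record_lock = {}   # F.RECORD.LOCK (simplified): "FILE.ID" -> (username, locked_at)

def acquire(lock_id: str, user: str) -> bool:
    """Grant the lock if it is free, already held by this user (e.g. a commit
    routed to another T24 server), or has expired; refuse otherwise."""
    holder = record_lock.get(lock_id)
    if holder is not None:
        held_by, locked_at = holder
        if held_by != user and time.time() - locked_at < LOCK_EXPIRY_SECS:
            return False                          # live lock held by another user
    record_lock[lock_id] = (user, time.time())    # establish / re-establish the lock
    return True

assert acquire("FBNK.CUSTOMER.100069", "USER1") is True   # record opened on server 1
assert acquire("FBNK.CUSTOMER.100069", "USER1") is True   # commit routed to server 2
assert acquire("FBNK.CUSTOMER.100069", "USER2") is False  # other user is blocked
```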
Scenario 3: Locking mechanism used when automatic ids (F3) are obtained from the Browser
When the user creates a new FT record from the Browser, the request goes
to the T24 server, where it is checked whether the user has adequate privileges; if he does, a new
record is opened. Note that a new id is not allocated at this point. Just a blank record is
displayed to the user, and the id field will have the value ‘New’. There is no locking at this point as
there is no ID to lock. It is only when the user presses the RETURN key on any of the fields, or when
he commits the transaction (when a subsequent request for that record is sent to the server), that the
F.LOCKING file is read, a new id is allocated, and the locking information is updated in the
F.RECORD.LOCK file.
[Flow chart: if the lock record exists in F.RECORD.LOCK, the timeout is checked; if it has timed out,
the lock on the stub file is released.]
Close Of Business(COB) Routine For Releasing Timed Out Locks
There could be a scenario wherein a user opens a customer record with id 100069,
does nothing to it, and no other user tries to access that record. In this case both the lock and
the session will expire. Yet a record with id FBNK.CUSTOMER$NAU.100069 will still exist in the
F.RECORD.LOCK file. In order to clear such locks that have timed out, a COB routine has been
introduced.
Additional Information On EB.RECORD.LOCK
The EB.RECORD.LOCK routine takes the following parameters: Type (LOCK/UNLOCK), File Name,
File Path, ID Of The Record.
Working Of Multi Threaded COB Routines In T24 – An Overview
All COB routines in T24 are multi-threaded in order to achieve transaction processing
and to speed up processing during the COB.
The BATCH.JOB.CONTROL is the controlling program for multi-threaded COB jobs. It
first executes the ‘LOAD’ component of the multi-threaded job, where the necessary file opens and
variable initialisations are done, and then executes the ‘SELECT’ component. It is in this component
that all the transaction ids that need to be processed are built and written to a
LIST file. The name of the list file would be ‘JOB.LIST.Number’, where Number is the number
assigned to the agent (comparable to a BATCH.SESSION, discussed in detail later in this section)
that executed the SELECT component. The transaction ids are written to records in the LIST file
which have sequential ids starting from ‘1’. The number of transaction ids written into each
record in the LIST file is controlled by the field BATCH.LIST.MAXIMUM in the PGM.FILE record of the
corresponding COB job. If this field is left blank then, by default, 1 transaction id is written to
each record in the LIST file. Once the LIST file is built, BATCH.JOB.CONTROL picks up records from
the LIST file and hands them over to the various agents for processing. In each of the agents
the ‘ACTUAL’ processing routine (henceforth referred to as the RECORD ROUTINE) is executed.
Please note that all the agents execute the LOAD routine, but only one of the agents executes the
SELECT routine and builds the LIST file. After the LIST file is built, multiple agents read from the
LIST file, extract their share of ids to process and start processing.
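A simplified model of the LOAD / SELECT / RECORD ROUTINE pattern described above (all names are illustrative, with threads standing in for agents and a dictionary standing in for the JOB.LIST file):

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_LIST_MAXIMUM = 3           # ids per record in the LIST file

def load():
    """Every agent runs LOAD: file opens, variable initialisation."""
    pass

def select(all_ids):
    """One agent runs SELECT: build LIST-file records with sequential
    ids starting from '1', each holding up to BATCH_LIST_MAXIMUM txn ids."""
    return {str(n + 1): all_ids[i:i + BATCH_LIST_MAXIMUM]
            for n, i in enumerate(range(0, len(all_ids), BATCH_LIST_MAXIMUM))}

def record_routine(txn_id):
    """The 'ACTUAL' processing, executed by many agents in parallel."""
    return f"processed {txn_id}"

ids = [f"FT{n:04d}" for n in range(10)]
job_list = select(ids)           # the JOB.LIST.Number file, as a dict
with ThreadPoolExecutor(max_workers=4) as agents:
    results = [r for chunk in job_list.values()
                 for r in agents.map(record_routine, chunk)]
print(len(results))  # 10
```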
Multi-threaded COB routines should have the field BATCH.JOB in the PGM.FILE
set to @BATCH.JOB.CONTROL, or left blank. Non-multi-threaded COB routines should have
the field BATCH.JOB in the PGM.FILE set to the actual routine name.
The following flow chart depicts the working of a multi-threaded COB routine.
[Flow chart: execute T24.TRG.COB.LOAD, then execute T24.TRG.COB.SELECT.]
In the traditional End Of Day mechanism, in order to execute the End Of Day
process, we would first take a backup of the system, then execute the various End Of Day jobs defined
in the BATCH application and, once all the jobs completed successfully, take a post End Of Day
backup; only after this would the system return to its 'ONLINE' stage. Only from this point
would users be able to input transactions. If any one of the End Of Day jobs failed, the entire
process would stop.
To reflect the fact that the system is now always available there is now no concept of
an end of day, i.e. a period where the system is not available to users. Instead there is a close of
business process that runs when the bank wishes to close the bank’s books for the day. The tasks
performed should be viewed as automated transactions running in a particular order whilst other work
on the system takes place. The end of day or batch process is now referred to as Close of Business or
COB. The following section gives an insight into the technical and functional architecture
changes made to T24 in order to support non-stop processing.
The tSM, as the name implies, is the main manager of the COB. It is a service that
runs as a background process, and its main job is to initiate and monitor tSAs (also background
processes), which actually execute the COB. With the installation of T24 we have two
services defined for COB purposes, namely TSM and COB. While the TSM service is used for initiating
and monitoring the tSAs as described above, the job of the COB service is to actually execute the COB
jobs using tSAs. Both the tSM and the tSA can be run as foreground jobs if desired. This is discussed
in detail later in this section.
In order to initiate the COB process, the first thing that we need to do is to start the
tSM on each of the T24 servers. Once initiated, the tSM will run in the background and will initiate
agents (tSA) in order to execute the COB service. We can control the number of agents that need to
run in each T24 server. Further, we can increase and decrease the number of agents in a server while
the COB is running and also parameterize the system in such a way that the tSM automatically
increases or decreases the number of agents for specific periods of time. You can compare the tSAs
to the BATCH.SESSIONs that we specify in the SPF. The tSAs are the ones that will actually execute
the COB jobs defined in the BATCH application.
COB configuration
Step 1:
As discussed earlier, both TSM and COB are services, and they need to be defined.
Ensure that you have the definitions of these two services in the file F.TSA.SERVICE.
TSA.SERVICE
This is the file where services are defined. The key (ID) to this file is the
service name itself. As part of the T24 installation, this file will contain two records with the
ids TSM and COB.
File Structure
Sample Records
1. The SERVICE.CONTROL field in the TSM and COB records must be set to 'START' in
order to start the TSM and COB services, and to 'STOP' in order to stop them.
2. The USER field in the COB record should contain a valid user id, as user information is
required for COB record updates and for applying the default account officer / department
codes during processing.
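A sketch of the two delivered records may help; the layout is illustrative and only the fields discussed above are shown (the user id INPUTTER is taken from the later example in this document).

```
ID................. TSM              ID................. COB
SERVICE.CONTROL.... START            SERVICE.CONTROL.... START
                                     USER............... INPUTTER
```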
Step 2
TSA.WORKLOAD.PROFILE
Field Description
ID Anything
XX<TIME Time of day (HH:MM)
XX-AGENTS.REQUIRED Number of tSAs to dedicate (0-N)
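For example, a profile that dedicates two agents during the day and five overnight might look like this. The record id TWO is taken from the later example in this document; the times and agent counts are illustrative.

```
ID.................. TWO
TIME................ 08:00      AGENTS.REQUIRED.... 2
TIME................ 20:00      AGENTS.REQUIRED.... 5
```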
Step 3
We have now defined the services and specified the number of agents required to run
each service.
We know that the number of agents for a service can be changed dynamically
while the COB is running. How does this happen? For this to happen, the tSM must
constantly re-read the TSA.WORKLOAD.PROFILE and TSA.SERVICE files to check whether the user
has made any changes to them.
We know that the tSM should control all the agents. What happens if an agent dies?
How does the tSM know that an agent has died?
TSA.PARAMETER
Field Description
ID Only SYSTEM
REVIEW.TIME The number of seconds the tSM will sleep before reviewing the TSA.SERVICEs,
TSA.WORKLOAD.PROFILEs and running tSAs (TSA.STATUS) to determine whether
more (or fewer) agents are required. If the REVIEW.TIME field in the
TSA.SERVICE record for a particular service is left blank, this field's value
is used instead. If this field itself is left blank, it defaults to 15 seconds.
DEATH.WATCH The maximum number of seconds allowed for an agent to report to the tSM. If this is
set to 300, the tSM will assume that an agent has failed if its last contact was
more than five minutes ago, and will then restart the agent.
HIGHEST.AGENT NOINPUT field. Automatically populated by the tSM depending on the number
of tSAs running at a point in time.
Sample Record
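A sample record might therefore look like this (values illustrative; HIGHEST.AGENT is maintained by the tSM itself):

```
ID.................. SYSTEM
REVIEW.TIME......... 15        (seconds between tSM reviews)
DEATH.WATCH......... 300       (restart an agent silent for 5 minutes)
HIGHEST.AGENT....... 3         (NOINPUT - populated by the tSM)
```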
Step 4
Now that we have all the tables set up, we also need a mechanism to
monitor the status of the tSAs and the tSM. The F.TSA.STATUS file provides this:
it is the file that the tSM keeps updating to show the status of the tSAs.
TSA.STATUS
Field Description
ID 1–N
SERVER.NAME Server running this tSA
STATUS STOPPED, RUNNING
LAST.CONTACT Date and time of last contact with the tSA.
PID Server O/S process id
SERVICE Current SERVICE being run (ID to TSA.SERVICE)
NEXT.SERVICE The next service as instructed by the tSM (ID to TSA.SERVICE)
Sample Data
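Sample data for a run with the tSM (agent 1) and one worker agent might look like this. The server name and timestamps mirror the COMO output shown below; the PIDs are illustrative.

```
ID  SERVER.NAME  STATUS   LAST.CONTACT        PID   SERVICE  NEXT.SERVICE
1   local        RUNNING  07 JUL 03 19:55:50  1234  TSM
2   local        RUNNING  07 JUL 03 19:58:55  1240  COB
```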
Step 1
jsh…>START.TSM
(OR)
>START.TSM -DEBUG
START.TSM -DEBUG
tSA 1 -DEBUG
COMO tSA_1_1297271750.89 established
19:55:50 07 JUL 2003
Agent 1 started 07 JUL 03 19:55:50
Running on server local
tSM DEBUG
Service Profile TSMÿCOB¤1ÿ2
It is vital to understand that the tSM is itself a background process like the tSA;
the only difference is that it does not execute COB jobs but monitors tSAs. Therefore, when we
initiate the tSM, it is effectively the first tSA. This tSA acts as a master agent and controls
all the other agents. At any point in time there can be only one tSM running on a T24 server,
but we can have as many tSAs as required. Therefore, the WORKLOAD.PROFILE
record used by the TSM service should never specify more than one agent.
Once the tSM is initiated, the first thing it does is start the required
number of agents for the COB service. The COB service will always have a specific number of agents
associated with it. Each of the tSAs internally calls the S.JOB.RUN program to actually start the
COB process.
Step 2
If you choose to run the tSM in interactive mode as described earlier, the tSAs will
not be started automatically; the tSM will prompt you to start them manually.
Execute the following command from the jBASE prompt to start a tSA manually
jsh…>tSA <<agent number>>
(OR)
jsh…>tSA <<agent number>> -DEBUG (Interactive Mode - Can view the COMO)
Please note that one should never start tSA 1, as it is the tSM itself.
>tSA 2 -DEBUG
tSA 2 -DEBUG
COMO tSA_2_1297271935.265 established
19:58:55 07 JUL 2003
Agent 2 started 07 JUL 03 19:58:55
Running on server local
Time 19:58:55 SELECT F.COMPANY
Now that all the tSAs are up and running, let us understand how they perform the
COB. Each of the tSAs calls the program S.JOB.RUN in order to initiate the COB process. This
program in turn calls the EB.SORT.BATCH subroutine, which sorts all the batch jobs in order of
BATCH STAGE and gives each tSA a dynamic array with the following details (FM denotes a field
marker separating the jobs):
PROCESSNAME_JOBNAME_ROUTINENAME_JOBDATA_COMPANYID_NEXTRUNDATE FM
PROCESSNAME_JOBNAME_ROUTINENAME_JOBDATA_COMPANYID_NEXTRUNDATE….
Sample Data
COB.INITIALISE_EB.CYCLE.DATES_BATCH.JOB.CONTROL_ _GB0010001_20040101
The important point to note here is that, when the dynamic array is built for each tSA,
EB.SORT.BATCH randomizes batch records with the same batch stage so that the tSAs can
process those records simultaneously while avoiding locking conflicts.
Assume that there are 4 batch records, namely BATCH1, BATCH2, BATCH3 and
BATCH4, all with the same batch stage A001. When EB.SORT.BATCH builds the dynamic array for
each tSA, it randomizes these records. See the sample data below (FM denotes a field marker):
tSA1 : BATCH2FMBATCH3FMBATCH4FMBATCH1
tSA2 : BATCH1FMBATCH2FMBATCH3FMBATCH4
tSA3 : BATCH4FMBATCH1FMBATCH2FMBATCH3
Once the tSAs get the dynamic array of all the jobs that they need to execute, they
start executing them in that order, one after the other. For every job that a tSA processes, the
following sequence of steps is carried out:
1. Lock the BATCH record of the job that it is to process and update the PROCESS.STATUS to
1 and JOB.STATUS to 1
2. Release the lock on the F.BATCH file.
3. Execute the LOAD routine
4. Create and lock a record in the F.BATCH.STATUS file.
ID : CompanyMnemonic/JobName
5. Execute the SELECT routine
6. Write the selected transaction ids on to the LIST file.
7. Create a record in F.LOCKING with the id ‘CompanyMnemonic/JobName’ and place the name
of the LIST file in the record so that the other tSAs know which LIST file to read.
8. Update the F.BATCH.STATUS record with 'processing'.
9. Release the lock on F.BATCH.STATUS
10. Select records from the LIST file and execute the ‘ROUTINE’ for each of the transaction ids.
11. Once a transaction id is processed, delete the transaction id from its list and also from the
master LIST file.
12. Delete the record from its list and the LIST file once all the transaction ids in that record are
processed.
13. Once all the records in its list are complete, obtain the next set of records from the LIST
file. Continue this until there are no more records to be selected from the LIST file.
14. Lock the F.BATCH.STATUS record and update it to 'processed'.
15. Release the lock on F.BATCH.STATUS
16. Lock the BATCH record and change the JOB.STATUS to 2 and if it is the last job in a BATCH,
then change the PROCESS.STATUS in the BATCH record to 2.
17. Release the lock on F.BATCH.
18. Continue processing the next job.
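The heart of steps 1 to 9 is a lock-or-wait pattern on F.BATCH.STATUS. A much-simplified jBASE BASIC sketch of that pattern follows; the helper names and structure are illustrative, not the actual S.JOB.RUN source.

```
* Illustrative lock-or-wait on F.BATCH.STATUS. STATUS.ID is
* 'CompanyMnemonic/JobName'. Exactly one tSA wins the lock and runs the
* SELECT; the rest wait, then find 'processing' and share the LIST file.
LOOP
    READU R.STATUS FROM F.BATCH.STATUS, STATUS.ID LOCKED
        SLEEP 1 ;* another tSA holds the lock - wait and retry
        CONTINUE
    END THEN
        RELEASE F.BATCH.STATUS, STATUS.ID
        IF R.STATUS<1> EQ 'processing' THEN
            GOSUB PROCESS.LIST ;* LIST file ready - take a share of ids
            EXIT
        END
    END ELSE
        GOSUB RUN.SELECT ;* we created and locked the record - build LIST
        R.STATUS = 'processing'
        WRITE R.STATUS ON F.BATCH.STATUS, STATUS.ID ;* write releases lock
        EXIT
    END
REPEAT
```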
When we have multiple tSAs, there can be a scenario where all the tSAs want to
execute the same job. What happens then?
When multiple tSAs try to execute the same job, first, all of them will try to lock the
BATCH record and update it, but only one of them will succeed, and hence all the others will wait for
TEMENOS Confidential Page 30 of 53 July 2003
T3TNS – R05 – 1.0
T24 Non Stop
the lock to be released on F.BATCH. By then, the tSA that was able to successfully lock the BATCH
record would have updated the JOB.STATUS and PROCESS.STATUS on the BATCH.RECORD.
Once done, it would release the lock on the BATCH file. Once the lock on the BATCH file is released,
all the agents will try and lock it and change the JOB.STATUS but since the JOB.STATUS has already
been changed to 1, they will understand that the job is ready for execution. Now all the tSAs will
execute the LOAD routine. Once done, they will try to lock the F.BATCH.STATUS record with the ID
'CompanyMnemonic/JobName'. Only one of them will succeed and the others will wait for the lock to be
released. While the other tSAs wait, the tSA that holds the lock on the F.BATCH.STATUS record will
execute the SELECT routine and write the data to the LIST file. Once data is written to
the LIST file, it will create a record in F.LOCKING with the ID 'CompanyMnemonic/JobName' and
place the name of the LIST file in it so that the other tSAs know which LIST file to process, and then
will update the BATCH.STATUS record with 'processing'. Once done, it will release the lock on the
BATCH.STATUS file. It is imperative to understand that if the BATCH.STATUS file contains a value
‘processing’, it denotes that the LIST file is ready for processing. Once the lock on the
BATCH.STATUS file is released, all the tSAs will try to lock it but they would find that the record
contains ‘processing’ and therefore will understand that the LIST file is ready for processing. Now all
the tSAs will try to access the LIST file (the name is obtained from the LOCKING file) and get
their share of transaction ids to process. As discussed earlier in the ‘Multithreading’ section, the
BATCH.JOB.CONTROL will distribute the records to all the tSAs. The important point to be noted here
is that, when a tSA has been allocated a record for processing, the tSA will lock that record when it is
processing the transaction ids in that record. This is done so that multiple tSAs do not process the
same record. Once a tSA processes a transaction id, it will remove that transaction id from its list as
well as from the LIST file. The same applies to the records that it processes - once all the transaction ids in
a record are processed, the tSA will delete the record from its list as well as the LIST file. Once a tSA
processes all the records allocated to it by BATCH.JOB.CONTROL, it will obtain another set of
records from the LIST file. A failure to retrieve records from the LIST file denotes that all the records in
the LIST file have been processed and the job is complete. Once the select on the LIST file fails, the
tSAs will try to lock the F.BATCH.STATUS record and update it from 'processing' to 'processed' to
denote that all the ids have been processed. As usual, only one of them will succeed in obtaining the
lock, and the other tSAs will wait for the lock to be released. The tSA that successfully
obtains the lock will update the F.BATCH.STATUS record, update the BATCH record (JOB.STATUS
and PROCESS.STATUS to 2) and then release the lock. Once the lock is released, the other
tSAs will try and lock the BATCH.STATUS file, but will find the value ‘processed’ and hence will
proceed to lock and update the BATCH file. Here again the JOB.STATUS would be ‘2’ and hence they
will understand that the job has been completed and hence will proceed to the next job.
If the routine to be executed is single-threaded, then there is no LOAD or SELECT
component to it. It is BATCH.JOB.CONTROL that resolves whether the routine to be executed is
single-threaded or multi-threaded. If it is single-threaded, then the
tSA that successfully locked the F.BATCH.STATUS record will write a record in the LIST file with the id
'Singlethread', create a record in F.LOCKING and release the lock. Once the lock is released, all the
tSAs will try to obtain the one record 'Singlethread'. As always, only one of them will succeed, and
the tSA that obtains the id executes the required routine. The rest of the processing is
common to both multi-threaded and single-threaded jobs.
Once all the COB jobs are complete, the tSAs stop automatically. If at any
point during the execution of the COB a tSA needs to be stopped, the appropriate
WORKLOAD.PROFILE specified for the COB service needs to be amended. In order to stop the
TSM, the field SERVICE.CONTROL in the TSA.SERVICE record with ID 'TSM' should be set to 'STOP'.
Please ensure that the TSM is stopped only after all the tSAs are stopped.
Resilience
In the traditional End Of Day, when a job resulted in a fatal error, we would have to
restore the pre End Of Day backup, make the necessary corrections and then
execute the EOD process all over again. With the move to T24 with COB, there is no single point
of failure. Let us understand this using the following scenarios.
We need to understand and remember that all COB jobs are multi-threaded and
therefore transactional. When a fatal error does occur, it occurs for a particular transaction id and
not for the entire LIST, as we process data transaction by transaction. In T24, the transaction id for
which the fatal error occurs, will be
If the tSA terminates while executing the SELECT routine, the lock on the
F.BATCH.STATUS file will be released and one of the other tSAs will resume the process. This tSA
will re-run the SELECT process and overwrite the LIST file. No single point of failure.
If the tSA terminates after the SELECT routine, the other tSAs waiting for the lock on
the F.BATCH.STATUS file to be released will read the F.BATCH.STATUS file, find the status
'PROCESSING', understand that the LIST file is ready, and start processing the
records in the LIST file. No single point of failure.
If the tSA terminates while processing a transaction id, then, as all COB jobs comply
with the multi-threaded model and all the updates made by the RECORD ROUTINE are under
transaction processing (all statements are placed within a TRANSTART and TRANSEND), there will
be no partial updates. The entire transaction is rolled back, and another tSA will obtain the
record when it performs a SELECT on the LIST file and will process it. No single point of
failure.
If the tSA terminates after updating the F.BATCH.STATUS file to 'PROCESSED', the
other tSAs that were waiting for the lock to be released will read the F.BATCH.STATUS file,
find that the status is 'PROCESSED', and update the BATCH record (JOB.STATUS to 2 and
PROCESS.STATUS to 2 if it is the last job in that process). No single point of failure.
If all the tSAs stop, we can still restart them and they will continue
processing.
If a T24 server processing the COB crashes, the other T24 servers will take over the
work, as there will be one tSM and multiple tSAs running on each of them.
One of the main challenges that had to be solved was to allow the processing of
transactions on a 24*7 basis in the same system. The major issue here was the way that the system
managed recovery of the batch process. Previously the only recovery mechanism was to take a
physical backup of the database before the start of the batch and again at the end. If a problem
occurred that required recovery the only option was to restore the backup, fix the problem and re-run.
In a scenario where transactions are being entered whilst the batch process runs this method of
recovery is not possible. In a 24*7 environment the system cannot be restored except in the case of
serious hardware failure and also there is no window to take a traditional physical backup.
The solution was to make the close of business processes transactional and to use the database
transaction management to manage this (in the same way as online transactions have always been
managed by transaction management).
This has been achieved by multi-threading all COB programs so that processing takes place
in a transactional manner: all updates relating to a transaction are committed at the end of the
transaction, and in the event of failure the transaction is rolled back, so no updates relating to the
transaction will have taken place. As all processing goes through a central process, online
(JOURNAL.UPDATE) and end of day (BATCH.JOB.CONTROL), the system now has full transaction
management.
Since end of day transactions do not use JOURNAL.UPDATE, we can no longer use the
T24 JOURNAL file for system recovery; instead the jBASE, ORACLE or DB2 journaling facilities
will be used, and in the event of a recovery being necessary the relevant recovery procedures must
be followed. ORACLE and DB2 provide the ability to perform online backups, so these systems can
be truly non-stop. For jBASE j4 operation a traditional backup will still need to be taken at a convenient
point in the day, although now that the close of business is also under transaction management this
can take place at a time of the user's choosing and is not necessary at the start or end of COB.
The JOURNAL file is no longer updated by online transactions. The JOURNAL file is an
important source of information for investigating issues, so the possibility of maintaining the JOURNAL
records in certain circumstances is being looked into. Relevant information should also be available in
the relevant ORACLE and jBASE journals.
If a traditional backup is required the program SYSTEM.BACKUP should be run directly from
a telnet or reflection session. This will execute the backup script defined in the SPF in the same
manner as the traditional end of day process. Note this process is no longer part of the Close of
Business processing and is not automatic.
Restore of a backup will be run using the same SYSTEM.RESTORE program, again this will
need to be run from Telnet / Reflection.
Only the JOURNAL.UPDATE and batch control routines are permitted to use the transaction
management statements TRANSTART, TRANSEND or TRANSABORT, or to call EB.TRANS. Use of
these elsewhere will be blocked by EB.COMPILE.
Application -A
System Wide -S
Reporting -R
Start of Day -D
Online -O
The Online stage is now a standard stage; previously, online jobs could only be
defined in a single batch record called ONLINE (or xxx/ONLINE in a multi-company environment). This
allows full flexibility of stage numbers and frequencies. The online stage runs immediately after the
start of day stage. The system status is online at this point and all transaction types can be processed
whilst this stage is in progress. When writing new reporting processes or start of day updates where
possible these should be placed in the Online section. This will help to reduce the length of time that
the system is in offline mode.
CUSTOMER
ACCOUNT
FUNDS.TRANSFER
STANDING.ORDER
TELLER
TELLER.ID
SEC.OPEN.ORDER
The ability to run Non Stop is an additional product that must be purchased. The
ability to use the above applications while the system is running the COB is only allowed if the NS
product is installed.
The system previously provided some options to allow applications to run whilst
the system is offline in the ADDITIONAL.INFO field of the PGM.FILE. The options provided were:
To prevent unauthorized access to Non Stop processing for free, and also to
ensure that products that have yet to be changed are not made Non Stop, the ability to run an
application offline is hard coded in the first release of T24.
A small number of applications can operate whilst the system is offline if .NOD is
in the PGM.FILE namely:
PGM.FILE
DE.PARM
SIGN.OFF
BATCH
SIGN.ON
PGM.BREAK
PASSWORD
STANDARD.SELECTION
TSA.SERVICE
TSA.WORKLOAD.PROFILE
TSA.PARAMETER
VERSION
VERSION.CONTROL
Date Changes
The first job to get executed as a part of the COB process is
This job will cycle the dates for the online user. The start of the COB is the business cut-off for
processing, any work apart from that in the COB itself after this point is treated as tomorrow’s work.
The system will now maintain 2 DATES records for each company. The record keyed on company
code will be used by any non-COB processing session. A second record keyed on ‘company code –
COB’ will be used by the COB process.
It is no longer necessary to define the next run date for Daily frequency jobs; the
system will automatically run Daily jobs irrespective of the next run date. If a job needs to be turned
off, it should be made an Ad Hoc job.
The first process in the start of day stage (D) section of the COB will cycle the
date for the COB process only, since the dates have already been cycled for the non-COB sessions.
Note that this process is a financial level process now (i.e. will run in each company) not the INT level
it was in previous releases.
The final job in the online (O) section of the COB will now cycle the next run
dates in the BATCH records. This was previously done as part of the BATCH.CONTROL mechanism
when switching the system back into online mode.
STMT.ENTRY Processing
Any program run in the COB attempting to build a list of the day's movements for
an account after SYSTEM.END.OF.DAY3 can no longer use ACCT.ENT.TODAY; instead it must use
ACCT.ENT.LWORK.DAY. In this release all core programs that perform this processing have been
modified; however, there may be local routines that need amendment. When coding a routine that
relies on the closing balance of an account you must only use the OPEN.ACTUAL or
OPEN.CLEARED balance, since the other balances will be updated by non-COB transactions.
OPEN.ACTUAL.BAL – Will be cycled excluding tomorrow’s entries. This is the closing balance used in
the CRB processing.
ACCT.ACTIVITY – Will not include tomorrow’s entries and movements made while the COB is running
will not impact the interest accruals / capitalization as it is now built from ACCT.ENT.LWORK.DAY
ACCT.STMT.ENTRY - Will not be updated with tomorrow’s entries. Account Statements due in
tonight’s run will not show entries raised after the COB started. The reason is this picks up data from
ACCT.ACTIVITY which in turn picks up the data from ACCT.ENT.LWORK.DAY.
ACCT.ENT.TODAY – Will contain both today's and tomorrow's entries. Once the
EOD.ACCT.ACTIVITY job is complete, it will only hold tomorrow's entries.
ACCT.ENT.LWORK.DAY – Will contain the COB's TODAY entries. Tomorrow's entries will remain in
ACCT.ENT.TODAY.
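A small worked example may make the split clearer. The dates are illustrative; suppose the COB's TODAY is 20040101 (compare the EB.CYCLE.DATES sample earlier in this section).

```
Entry authorised before the business cut-off (BOOKING.DATE 20040101):
    - starts in ACCT.ENT.TODAY
    - moved to ACCT.ENT.LWORK.DAY by the EOD.ACCT.ACTIVITY job
    - included in tonight's accruals, statements and CRB

Entry input while the COB is running (BOOKING.DATE 20040102):
    - treated as tomorrow's work and remains in ACCT.ENT.TODAY
    - excluded from tonight's accruals and statements
```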
CATEG.ENTRY Processing
Very similar changes have been required for profit and loss processing as for
accounts to cater for non-COB updates after the COB has begun. Again the BOOKING.DATE is used
to identify today’s P&L from tomorrow. The process EOD.RE.PROFIT.LOSS in
SYSTEM.END.OF.DAY.5 will now pick up all entries with BOOKING.DATE <= COB’s TODAY and
transfer them from CATEG.ENT.TODAY to CATEG.ENT.LWORK.DAY. Any processing done after
EOD.RE.PROFIT.LOSS will read from CATEG.ENT.LWORK.DAY. Only the entries in
CATEG.ENT.LWORK.DAY will be used to update the CRB and CATEG.MONTH. Entries with a
forward BOOKING.DATE will remain on CATEG.ENT.TODAY and are ignored by further processing.
Reports
Reports produced after the system wide stage of the COB using
ACCT.ENT.TODAY have been changed to use ACCT.ENT.LWORK.DAY, since ACCT.ENT.TODAY
will also contain entries entered after the start of the COB. The TRANS.JOURNAL reports can no
longer be used; instead the TRANS.JOURNAL.YEST versions should be produced.
Currency Positions
Again, non-COB transactions can update the currency positions, so any updates
after the start of the COB must be excluded from the revaluation processing. Additionally, there may
now be unauthorized transactions remaining in the system which could have
updated the currency positions; these too must be excluded from the revaluation processing.
The POSITION and POS.TRANSACTION files now contain additional fields to identify the position
movements by the system date on which the transaction was authorized, and the unauthorized updates.
Note that the original amount fields will contain the total of all updates.
All revaluation processes (FX, AL and CRF) have been amended to exclude the
unauthorized updates and those authorized after the start of the COB.
Example
A number of COB routines have been created as a part of the training and attached to
the BATCH application with FREQUENCY set to 'A' (Ad Hoc). In order to test them, the entire set of
BATCH records whose IDs contain the string 'TRG.' needs to be chosen and their frequencies set
to 'D'. A multi-threaded service is required for this purpose.
Solution
Step 1
Create the necessary routines. The routines need to be written in a manner similar to multi-threaded
COB routines.
The Insert File
I_USER.DEFINED.SERVICE.COMMON
COM /USER.DEFINED.SERVICE/ FN.BATCH, F.BATCH
The Load Routine
SUBROUTINE USER.DEFINED.SERVICE.LOAD
$INSERT I_COMMON
$INSERT I_EQUATE
$INSERT I_USER.DEFINED.SERVICE.COMMON
FN.BATCH = 'F.BATCH'
F.BATCH = ''
CALL OPF(FN.BATCH,F.BATCH) ; *Open the BATCH file
RETURN
END
The Select Routine
SUBROUTINE USER.DEFINED.SERVICE.SELECT
$INSERT I_COMMON
$INSERT I_EQUATE
$INSERT I_USER.DEFINED.SERVICE.COMMON
SEL.LIST = ''
NO.OF.REC = ''
RET.CODE = ''
* We are only concerned with BATCH records containing the string TRG
SEL.CMD = "SELECT ":FN.BATCH:" WITH @ID LIKE ...TRG..."
CALL EB.READLIST(SEL.CMD,SEL.LIST,'',NO.OF.REC,RET.CODE)
CALL BATCH.BUILD.LIST('',SEL.LIST)
RETURN
END
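The printed example stops at the SELECT routine. The record routine that completes the pattern would read each BATCH record handed to it and set its frequency to Daily. A sketch follows; the equate name EB.BAT.FREQUENCY and the use of F.READ/F.WRITE are assumptions, so check the I_F.BATCH insert of your release.

```
SUBROUTINE USER.DEFINED.SERVICE(BATCH.ID)
$INSERT I_COMMON
$INSERT I_EQUATE
$INSERT I_F.BATCH
$INSERT I_USER.DEFINED.SERVICE.COMMON
* Called once for each id selected by USER.DEFINED.SERVICE.SELECT
    R.BATCH = '' ; READ.ERR = ''
    CALL F.READ(FN.BATCH, BATCH.ID, R.BATCH, F.BATCH, READ.ERR)
    IF READ.ERR EQ '' THEN
        R.BATCH<EB.BAT.FREQUENCY> = 'D' ;* assumed equate name
        CALL F.WRITE(FN.BATCH, BATCH.ID, R.BATCH)
    END
RETURN
END
```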
Step 2
The entry in the PGM.FILE needs to be made in the same manner as for a normal
multi-threaded routine.
Note: If the phantom needs to be executed as a single-threaded phantom, then the routine associated
with it needs to be written in a single-threaded fashion and the field BATCH.JOB in the PGM.FILE
should contain the routine name prefixed with '@'.
Step 3
Make an entry in the BATCH application. Ensure that the field BATCH.STAGE is left blank. This is the
flag that tells the system that we are NOT executing a batch job.
Field Description
ID USER.DEFINED.SERVICE
BATCH.STAGE
BATCH.ENVIRONMENT F
JOB.NAME USER.DEFINED.SERVICE
FREQUENCY D
The field BATCH.ENVIRONMENT can contain either 'B' or 'F'. The value in this field will in no way
affect the execution of the phantom; the phantom will always be executed in background
mode.
The field FREQUENCY can contain any valid value that the field supports. The value in this field will in
no way affect the working of the phantom.
Step 4
Create a record in the TSA.SERVICE application for this service/phantom.
Field Name Field Value Description
ID USER.DEFINED.SERVICE Name of the service. Has to be the same as the BATCH record id.
XX<SERVER.NAME Server1 Name of the server where this phantom needs to run. Multiple server names can be specified.
XX>WORKLOAD.PROFILE TWO ID of the associated TSA.WORKLOAD.PROFILE record.
USER INPUTTER User id.
SERVICE.CONTROL AUTO This field should contain the value 'AUTO'. This is the flag that tells the system that this is a phantom and needs to be run continuously.
REVIEW.TIME 20 The sleep time for this phantom. If left blank, the REVIEW.TIME value in the TSA.PARAMETER file is used as the sleep time.
To stop the phantom, set the field SERVICE.CONTROL in the TSA.SERVICE
application (ID: USER.DEFINED.SERVICE) to 'STOP'.
Quick Reference
To create user defined services and run them as phantoms
1. Create the necessary routines
2. Make an entry in the PGM.FILE with the type as ‘B’ like any other COB routine.
3. Create an entry in the BATCH record. The field BATCH.STAGE should be set to NULL.
4. Create a record in TSA.SERVICE. The ID of this record (ServiceName) should be exactly the
same as that of the BATCH record. The field SERVICE.CONTROL should contain a value
‘AUTO’.
5. Start the TSM in phantom mode. This will start the phantom internally.
6. To stop the phantom, specify ‘STOP’ in the field SERVICE.CONTROL in the TSA.SERVICE
application. (Record ID : ServiceName)
<ADAPTERS>
<ADAPTER id="G14003">
<MAX_SESSION> 5 </MAX_SESSION>
<MIN_SESSION> 1 </MIN_SESSION>
<OFSTIMEOUT>300</OFSTIMEOUT>
<GLOBUSPATH> C:\Localhost\G14003\bnk.run </GLOBUSPATH>
<JBASEPATH>C:\JBASE40</JBASEPATH>
<OFSENTRY>OFS.CONNECTION.MANAGER</OFSENTRY>
<OFSSOURCE>GCS</OFSSOURCE>
</ADAPTER>
<ADAPTER id="G14005">
<MAX_SESSION> 5 </MAX_SESSION>
<MIN_SESSION> 1 </MIN_SESSION>
<OFSTIMEOUT>300</OFSTIMEOUT>
<GLOBUSPATH> C:\Localhost\G14005\bnk.run </GLOBUSPATH>
<JBASEPATH>C:\JBASE40</JBASEPATH>
<OFSENTRY>OFS.CONNECTION.MANAGER</OFSENTRY>
<OFSSOURCE>GCS</OFSSOURCE>
</ADAPTER>
</ADAPTERS>
<LISTENERS>
<LISTENER Name="Browser.1" type="tcp" active="true">
<ADAPTERID>G14003</ADAPTERID>
<PORT> 7001 </PORT>
</LISTENER>
<CHANNEL>
<NAME>browser.2</NAME>
<TIMEOUT>300</TIMEOUT>
<ADAPTER>
<TYPE>tcp</TYPE>
<PORT>7002</PORT>
<SUPPLIER>
<INITIATOR>
<HOSTNAME>127.0.0.1</HOSTNAME>
</INITIATOR>
</SUPPLIER>
<CONSUMER>
<MAX_SESSION>5</MAX_SESSION>
<ACCEPTOR>
<BACKLOG>30</BACKLOG>
</ACCEPTOR>
</CONSUMER>
</ADAPTER>
</CHANNEL>
<channels>
<channel>browser.1</channel>
<channel>browser.2</channel>
<channel>browser.3</channel>
</channels>
</instance>
</instances>
Before you can log on to a T24 environment of your choice, ensure that all your listeners have been
properly configured. To do this, start your TC Server and, from the prompt, execute the following
command
gcs :> listenerinfo
Listener Info ....
Browser.2 TCP 7002 g14005 true
Browser.1 TCP 7001 g14003 true
T24 Installation
The following section describes the installation of T24 on a WINDOWS machine.
Components
The following are the components required for the installation of T24.
1. jBASE
2. T24 (G14)
3. Web server (Tomcat / WebSphere)
4. Temenos Connector
5. Java developers kit
6. Temenos Browser
Prerequisites
• Minimum of 2 GB of free space
• IE 5 or above
Installation Procedure
Step 1
Create a directory named ‘T24InstallationFiles’ and copy the contents of the CD into that directory.
Step 2
The next step is to install jBASE. Under the T24InstallationFiles directory you will find a file named
jbase4_4.0.4.4_win32.zip. This is the file that contains jBASE. Unzip this file into a directory named
‘jBASEInstallationFiles’. Once done, go to
<drive>:\jBASEInstallationFiles\CDROM\i386
and execute the SETUP.EXE file, which will take you through the jBASE installation. Install jBASE in
the default path that is prompted (C:\jbase40 - you may choose to change the drive depending on the
space available).
Step 3
The next step is to install T24 (G14). Under the T24InstallationFiles directory you will find a file
named jbbase14003.nt.zip. This is the file that contains G14. Unzip it into a directory named
G14003LocalHost. Unzipping the file completes the G14 installation.
Step 4
The next step is to install the Java Development Kit. It is required by the Temenos Connector and the
Web Server and hence needs to be installed before the other two components. Under the
T24InstallationFiles directory you will find a file named j2sdk-1_4_0-win. Execute this file to install
the required Java component. Install it under the default path prompted by the installation
procedure.
Step 5
The next step is to install the Temenos Connector Server component. Under the T24InstallationFiles
directory, you will find a file named GCServer.1.2.1.jar. Create a directory named ‘tcs’ under
G14003LocalHost and copy this file to that directory. The Temenos Connector Server is installed on
the Application server and hence we choose this path to install it. Once done, invoke the command
line
Start->Run->Cmd
From the command line, change to the tcs directory
cd c:\G14003LocalHost\tcs
Now execute the following command, which will install the Temenos Connector Server component.
java -jar GCServer.1.2.1.jar
You will be prompted with the following questions
Configuration
Step 1
Configure the Temenos Connector Server.
Open the jgcserver.xml file with Notepad and make the following changes. You will find this file under
C:\G14003LocalHost\tcs\conf.
<?xml version="1.0" ?>
<!-- DOCTYPE LISTENER SYSTEM "../dtd/LISTENER.dtd" -->
<!-- GLOBUS Connector communications server LISTENERs definitions -->
<!-- T&R Department 2002 -->
<!-- Please check the installation documentation for a detailed
description of this file -->
<GCSERVER>
<MONITOR_PORT> 9500 </MONITOR_PORT>
<TELNETD_PORT> 9501 </TELNETD_PORT>
<DEBUGGER_PORT> 9502 </DEBUGGER_PORT>
<STACKEXPIRATION>120</STACKEXPIRATION>
<ADAPTERS>
<ADAPTER>
<MAX_SESSION> 5 </MAX_SESSION>
<MIN_SESSION> 5 </MIN_SESSION>
<OFSTIMEOUT>300</OFSTIMEOUT>
<GLOBUSPATH>C:\G14003LocalHost\bnk.run</GLOBUSPATH>
<JBASEPATH>c:\jbase40</JBASEPATH>
<OFSENTRY>OFS.CONNECTION.MANAGER</OFSENTRY>
<OFSSOURCE>GCS</OFSSOURCE>
</ADAPTER>
</ADAPTERS>
<LISTENERS>
<LISTENER Name="browser.1" type="tcp" active="true">
<PORT> 7001 </PORT>
</LISTENER>
</LISTENERS>
</GCSERVER>
The remaining contents of the jgcserver.xml file can be deleted.
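A malformed jgcserver.xml is a common reason for listeners failing to start. Before restarting the TC Server, the edited file can be sanity-checked with the JDK's built-in DOM parser; the sketch below is a hypothetical helper (not part of the Temenos tooling) that summarises each LISTENER as "name type port":

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class GcServerConfigCheck {

    // Parses a jgcserver.xml-style document and summarises every LISTENER.
    // Parsing fails loudly (an exception) if the XML is not well formed.
    static List<String> listenerSummaries(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        List<String> out = new ArrayList<>();
        NodeList listeners = doc.getElementsByTagName("LISTENER");
        for (int i = 0; i < listeners.getLength(); i++) {
            Element l = (Element) listeners.item(i);
            String port = l.getElementsByTagName("PORT").item(0).getTextContent().trim();
            out.add(l.getAttribute("Name") + " " + l.getAttribute("type") + " " + port);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Inline fragment mirroring the configuration shown above.
        String xml = "<GCSERVER><LISTENERS>"
                + "<LISTENER Name=\"browser.1\" type=\"tcp\" active=\"true\">"
                + "<PORT> 7001 </PORT></LISTENER>"
                + "</LISTENERS></GCSERVER>";
        System.out.println(listenerSummaries(xml));  // [browser.1 tcp 7001]
    }
}
```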
Step 2
Configure the Temenos Connector Client. Edit the file channels.xml with Notepad. You will find this
file under C:\Tomcat4.1\webapps\BrowserWeb\WEB-INF\conf
<?xml version="1.0" ?>
<!-- DOCTYPE CHANNEL SYSTEM "../dtd/channel.dtd" -->
<!-- GLOBUS Connector communications channels definitions -->
<!-- I&A Department 2002 -->
<CHANNELS>
<CHANNEL>
<NAME>browser.1</NAME>
<TIMEOUT>300</TIMEOUT>
<ADAPTER>
<TYPE>tcp</TYPE>
<PORT>7001</PORT>
<SUPPLIER>
<INITIATOR>
<HOSTNAME>127.0.0.1</HOSTNAME>
</INITIATOR>
</SUPPLIER>
<CONSUMER>
<MAX_SESSION>5</MAX_SESSION>
<ACCEPTOR>
<BACKLOG>30</BACKLOG>
</ACCEPTOR>
</CONSUMER>
</ADAPTER>
</CHANNEL>
</CHANNELS>
The remaining contents of the channels.xml file can be deleted.
Step 3
Configure the Temenos Browser. Open the file BrowserParameters.xml with Notepad. You will find
this file under C:\Tomcat4.1\webapps\BrowserWeb
<?xml version="1.0"?>
<browserParameters xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<messageData>
<parameter>
<parameterName>Server Connection Method</parameterName>
<parameterValue>GLOBUSCONNECTOR</parameterValue>
<!-- Options: GLOBUSCONNECTOR / INSTANCE / SOCKET / EJB -->
</parameter>
<parameter>
<parameterName>Instance</parameterName>
<parameterValue>production</parameterValue>
</parameter>
<parameter>
<parameterName>GC_CHANNEL</parameterName>
<parameterValue>browser.1</parameterValue>
</parameter>
<parameter>
<parameterName>GC_TIME_OUT</parameterName>
<parameterValue>300</parameterValue>
</parameter>
<parameter>
<parameterName>Use Transformer</parameterName>
<parameterValue>YES</parameterValue>
<!-- Options: YES / NO -->
</parameter>
<parameter>
<parameterName>Log Events</parameterName>
<parameterValue>NO</parameterValue>
<!-- Options: YES / NO -->
</parameter>
<parameter>
<parameterName>Log Level</parameterName>
<parameterValue>NONE</parameterValue>
<!-- Options: NONE / INFO / DEBUG -->
</parameter>
<parameter>
<parameterName>Web Server Skins</parameterName>
<parameterValue>bluesquare:default:xp</parameterValue>
<!-- Skin Names separated by : character -->
</parameter>
<parameter>
<parameterName>CALL_CENTRE_CLASS</parameterName>
<parameterValue>TestAPI</parameterValue>
</parameter>
<parameter>
<parameterName>GC_EJB_JNDI_NAME</parameterName>
<parameterValue>ejb/com/temenos/browser/connector/ejb/ConnectorEJBHome</parameterValue>
</parameter>
<parameter>
<parameterName>JNDI_PROVIDER_URL</parameterName>
<parameterValue>corbaloc:iiop:localhost:2809</parameterValue>
<!-- For Weblogic t3://localhost:7001 -->
<!-- For Websphere corbaloc:iiop:localhost:2809/ -->
<!-- For Cluster corbaloc::myhost1:9810,:myhost2:9810 -->
</parameter>
<parameter>
<parameterName>JNDI_INITIAL_CONTEXT_FACTORY</parameterName>
<parameterValue>com.ibm.websphere.naming.WsnInitialContextFactory</parameterValue>
<!-- For Weblogic weblogic.jndi.WLInitialContextFactory -->
<!-- For Websphere com.ibm.websphere.naming.WsnInitialContextFactory -->
</parameter>
<parameter>
<parameterName>Server IP Address</parameterName>
<parameterValue>127.0.0.1</parameterValue>
</parameter>
<parameter>
<parameterName>Server Port Number</parameterName>
<parameterValue>5434</parameterValue>
</parameter>
</messageData>
</browserParameters>
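Because a typo in BrowserParameters.xml typically surfaces only as a failed logon, it can be worth reading the name/value pairs back programmatically after editing. This is a hypothetical helper (not part of the Temenos tooling) that collects every parameter element into a map:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class BrowserParams {

    // Reads each <parameter> element into a parameterName -> parameterValue map.
    static Map<String, String> readParameters(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> params = new LinkedHashMap<>();
        NodeList nodes = doc.getElementsByTagName("parameter");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element p = (Element) nodes.item(i);
            String name = p.getElementsByTagName("parameterName").item(0).getTextContent().trim();
            String value = p.getElementsByTagName("parameterValue").item(0).getTextContent().trim();
            params.put(name, value);
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        // Inline fragment mirroring two of the parameters shown above.
        String xml = "<browserParameters><messageData>"
                + "<parameter><parameterName>GC_CHANNEL</parameterName>"
                + "<parameterValue>browser.1</parameterValue></parameter>"
                + "<parameter><parameterName>GC_TIME_OUT</parameterName>"
                + "<parameterValue>300</parameterValue></parameter>"
                + "</messageData></browserParameters>";
        System.out.println(readParameters(xml));  // {GC_CHANNEL=browser.1, GC_TIME_OUT=300}
    }
}
```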
Step 5
Create a user to log in to the new GLOBUS environment.
Choose
Control Panel->Administrative Tools->Computer Management->Local Users and Groups->Users
and from the ‘Action’ menu choose ‘Create New User’.
Step 6
Start TC Server.
Go to the installation path of TC Server : C:\G14003LocalHost\tcs
Change to bin directory and execute gcserver.bat.
Step 7
The next step is to start Tomcat.
Start->Programs->Apache Tomcat4.1->Start Tomcat
Step 8
Now you are ready to log on to Browser. Type the following URL in the address bar of IE.
http://localhost:8080/BrowserWeb/servlet/BrowserServlet
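If the logon page does not appear, a plain HTTP status check helps separate a Tomcat problem from a BrowserWeb deployment problem. The sketch below is a hypothetical helper (not part of the Temenos tooling), assuming the default URL above:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class BrowserSmokeTest {

    // Returns the HTTP status code for the given URL, or -1 if the
    // server could not be reached within the timeout (in milliseconds).
    static int statusOf(String url, int timeoutMs) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            c.setConnectTimeout(timeoutMs);
            c.setReadTimeout(timeoutMs);
            int code = c.getResponseCode();
            c.disconnect();
            return code;
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        // Default Browser URL from the installation steps above.
        System.out.println(statusOf(
                "http://localhost:8080/BrowserWeb/servlet/BrowserServlet", 3000));
    }
}
```

A status of 200 means Tomcat is serving the servlet, -1 means Tomcat itself is not reachable, and 404 points at a BrowserWeb deployment problem.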
Important Settings
Additional Information
BATCH.BUILD.LIST Routine
This is a core GLOBUS subroutine that actually writes the list of ids selected by the SELECT routine
on to the LIST file.
BATCH.BUILD.LIST(LIST.PARAMETERS,ID.LIST)
Parameters :
LIST.PARAMETERS<1> = List file name, or null if we're using a list from the pool
LIST.PARAMETERS<2> = Name of the file whose ids need to be selected
LIST.PARAMETERS<3> = Selection criteria (if any)
LIST.PARAMETERS<4> = Last key to the list file - used for multiple calls to this routine.
LIST.PARAMETERS<5> = Total number of keys to process
The first field of LIST.PARAMETERS is usually set to ‘’ (null). The LIST file will then be automatically
created by the system with the name ‘JOB.LIST.AgentNumber’, where the agent number is the
number of the agent that executed the SELECT routine.
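The five fields above form a single jBASE dynamic array, with the fields separated by field marks (ASCII 254). As an illustration only, here is a sketch of that layout; the file name FBNK.ACCOUNT and the selection criterion are hypothetical examples, not values taken from this document:

```java
public class ListParamsDemo {

    static final char FM = (char) 254;  // jBASE field mark delimiter

    // Builds the five-field LIST.PARAMETERS dynamic array described above:
    // <1> list file name (null to let the system allocate one),
    // <2> source file, <3> selection criteria, <4> last key, <5> total keys.
    static String buildListParameters(String listFile, String sourceFile,
                                      String selection, String lastKey, String totalKeys) {
        return String.join(String.valueOf(FM),
                listFile, sourceFile, selection, lastKey, totalKeys);
    }

    public static void main(String[] args) {
        // Field 1 left null so the system creates JOB.LIST.AgentNumber itself.
        String params = buildListParameters(
                "", "FBNK.ACCOUNT", "WITH CATEGORY EQ 1001", "", "");
        String[] fields = params.split(String.valueOf(FM), -1);
        System.out.println("field count: " + fields.length);  // field count: 5
    }
}
```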