ABPP3 Programmers Reference Part3 6.2.8 RevA
Agile Business Process Platform (ABPP) 3
Programmer’s Reference - Part 3
Revision A
Version 6.2.8
Copyright Information
One i2 Place
11701 Luna Rd.
Dallas, TX 75234 USA
This notice is intended as a precaution against inadvertent publication and does not imply any waiver of
confidentiality. Information in this document is subject to change without notice. No part of this document may be
reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying,
recording, or information storage or retrieval systems, for any purpose without the express written permission of i2
Technologies US, Inc.
The software and/or database described in this document are furnished under a license agreement or nondisclosure
agreement. It is against the law to copy the software onto any medium except as specifically allowed in the license or
nondisclosure agreement. If software or documentation is to be used by the federal government, the following
statement is applicable: In accordance with FAR 52.227-19 Commercial Computer Software -- Restricted
Rights, the following applies: This software is Unpublished--rights reserved under the copyright laws of the
United States.
The text and drawings set forth in this document are the exclusive property of i2 Technologies US, Inc. Unless
otherwise noted, all names of companies, products, street addresses, and persons contained in the scenarios are
designed solely to document the use of i2 Technologies US, Inc. products.
The brand names and product names used in this document are the trademarks, registered trademarks, service marks,
or trade names of their respective owners. i2 Technologies US, Inc. is not associated with any product or vendor
mentioned in this publication unless otherwise noted.
The following registered trademarks are the property of i2 Technologies US, Inc. and its authorized affiliates: i2; i2
& Design; i2 User Group & Design; Planet; and Freightmatrix.
01/16/08
Taiwan Patent No. 241800 U. S. Patent No. 6,374,249 U. S. Patent No. 6,988,104
Taiwan Patent No. 242952 U. S. Patent No. 6,374,252 U. S. Patent No. 6,988,111
Taiwan Patent No. 251760 U. S. Patent No. 6,397,191 U. S. Patent No. 7,003,729
Taiwan Patent No. 251996 U. S. Patent No. 6,397,192 U. S. Patent No. 7,013,485
Taiwan Patent No. 258090 U. S. Patent No. 6,442,528 U. S. Patent No. 7,024,265
Taiwan Patent No. 266251 U. S. Patent No. 6,442,554 U. S. Patent No. 7,024,371
Taiwan Patent No. 271617 U. S. Patent No. 6,456,996 U. S. Patent No. 7,028,000
Taiwan Patent No. 284847 U. S. Patent No. 6,462,736 U. S. Patent No. 7,031,955
Taiwan Patent No. 285339 U. S. Patent No. 6,480,894 U. S. Patent No. 7,039,562
Taiwan Patent No. 285342 U. S. Patent No. 6,486,899 U. S. Patent No. 7,039,597
U. S. Patent No. 5,630,123 U. S. Patent No. 6,490,566 U. S. Patent No. 7,039,602
U. S. Patent No. 5,742,813 U. S. Patent No. 6,560,501 U. S. Patent No. 7,039,833
U. S. Patent No. 5,764,543 U. S. Patent No. 6,560,502 U. S. Patent No. 7,043,444
U. S. Patent No. 5,778,356 U. S. Patent No. 6,567,783 U. S. Patent No. 7,050,874
U. S. Patent No. 5,832,532 U. S. Patent No. 6,574,619 U. S. Patent No. 7,054,841
U. S. Patent No. 5,835,910 U. S. Patent No. 6,577,304 U. S. Patent No. 7,055,137
U. S. Patent No. 5,838,965 U. S. Patent No. 6,631,363 U. S. Patent No. 7,062,540
U. S. Patent No. 5,845,258 U. S. Patent No. 6,658,413 U. S. Patent No. 7,062,542
U. S. Patent No. 5,930,156 U. S. Patent No. 6,708,161 U. S. Patent No. 7,065,499
U. S. Patent No. 5,931,900 U. S. Patent No. 6,708,174 U. S. Patent No. 7,073,164
U. S. Patent No. 5,937,155 U. S. Patent No. 6,731,998 U. S. Patent No. 7,085,729
U. S. Patent No. 5,943,244 U. S. Patent No. 6,778,991 U. S. Patent No. 7,086,062
U. S. Patent No. 5,974,395 U. S. Patent No. 6,785,689 U. S. Patent No. 7,089,196
U. S. Patent No. 5,983,194 U. S. Patent No. 6,789,252 U. S. Patent No. 7,089,330
U. S. Patent No. 5,995,945 U. S. Patent No. 6,826,538 U. S. Patent No. 7,093,233
U. S. Patent No. 6,031,984 U. S. Patent No. 6,828,968 U. S. Patent No. 7,117,163
U. S. Patent No. 6,047,290 U. S. Patent No. 6,836,689 U. S. Patent No. 7,117,164
U. S. Patent No. 6,055,519 U. S. Patent No. 6,839,711 U. S. Patent No. 7,127,416
U. S. Patent No. 6,055,533 U. S. Patent No. 6,845,499 U. S. Patent No. 7,127,458
U. S. Patent No. 6,076,108 U. S. Patent No. 6,857,017 U. S. Patent No. 7,130,809
U. S. Patent No. 6,085,220 U. S. Patent No. 6,868,299 U. S. Patent No. 7,139,719
U. S. Patent No. 6,119,149 U. S. Patent No. 6,873,994 U. S. Patent No. 7,149,744
U. S. Patent No. 6,167,380 U. S. Patent No. 6,874,008 U. S. Patent No. 7,162,453
U. S. Patent No. 6,169,992 U. S. Patent No. 6,895,384 U. S. Patent No. 7,177,827
U. S. Patent No. 6,188,989 U. S. Patent No. 6,895,550 U. S. Patent No. 7,197,473
U. S. Patent No. 6,222,533 U. S. Patent No. 6,898,593 U. S. Patent No. 7,210,624
U. S. Patent No. 6,233,493 U. S. Patent No. 6,920,476 U. S. Patent No. 7,213,037
U. S. Patent No. 6,233,572 U. S. Patent No. 6,922,675 U. S. Patent No. 7,213,232
U. S. Patent No. 6,266,655 U. S. Patent No. 6,934,686 U. S. Patent No. 7,216,142
U. S. Patent No. 6,289,384 U. S. Patent No. 6,944,598 U. S. Patent No. 7,225,146
U. S. Patent No. 6,289,385 U. S. Patent No. 6,947,905 U. S. Patent No. 7,248,937
U. S. Patent No. 6,321,207 U. S. Patent No. 6,947,982 U. S. Patent No. 7,249,044
U. S. Patent No. 6,321,230 U. S. Patent No. 6,957,234 U. S. Patent No. 7,251,614
U. S. Patent No. 6,332,130 U. S. Patent No. 6,963,847 U. S. Patent No. 7,257,541
U. S. Patent No. 6,332,155 U. S. Patent No. 6,963,849 U. S. Patent No. 7,260,550
U. S. Patent No. 6,334,146 U. S. Patent No. 6,973,626 U. S. Patent No. 7,263,515
U. S. Patent No. 6,360,249 U. S. Patent No. 6,980,885 U. S. Patent No. 7,266,549
U. S. Patent No. 6,366,922 U. S. Patent No. 6,980,966 U. S. Patent No. 7,277,862
U. S. Patent No. 6,370,509 U. S. Patent No. 6,983,276 U. S. Patent No. 7,277,863
U. S. Patent No. 6,374,227 U. S. Patent No. 6,983,421
Contents
Preface
3 Data Upload
  Introduction
- An architecture that allows a business user to define and adjust process flows rapidly, rather than having to depend on technically savvy people
- An architecture that can string together multiple disparate applications to provide a business process flow, rather than having to rewrite all the individual applications on a single technology or with a single vendor
- An architecture that provides for data synchronization and data harmonization across the myriad systems that are in place
i2’s Agile Business Process Platform (ABPP) has been built with the above objectives in mind. It enables a business user to interact with the system at a business level of abstraction through a graphical integrated development environment and an intuitive scripting language for expressing business rules. It provides several pre-built constructs (such as Approval nodes, Data Upload, and Data Profiling) that allow a user to quickly prototype a business process without having to worry about the nitty-gritty associated with generic application-building software. It also allows a user to define all aspects of an application in a single environment, from data model definition, process workflows, business rules, and validations all the way to user interface design and integration design.
For information on other i2 solutions, contact your i2 sales representative.
Target Audience
This book is intended for SCOS i2 Application users.
Conventions
Table 1 lists examples of the typographic conventions used to display different types
of information in this document.
Table 1  Typographic conventions used in this document

Convention           Example                                               Description
Class names          Make the Class Configurations pointer in the          Class names appear in bold.
                     Module Configuration class a primary key.
Interface elements   Click Organization Management in the toolbar.         Button names, field names, and window
                                                                           names are shown in a sans-serif font.
Note: This kind of note contains information that is useful or interesting but not
essential to an understanding of the main text.
CAUTION: This kind of note contains instructions that are especially important to
follow for proper functioning of the product.
WARNING! This kind of note contains instructions that must be followed to avoid
potential crashes or loss of data.
Related Documentation
For more information about i2 ABPP, refer to the following in the documentation set:
- Agile Business Process Platform (ABPP) 3 Release Notes
  [ABPP3_RelNotes_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Studio User Guide
  [ABPP3_Studio_UserGuide_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Manufacturing User Guide
  [ABPP3_Manufacturing_UserGuide_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Programmers Reference
  [ABPP3_Programmer_Reference_Part1_6.2.8.pdf]
  [ABPP3_Programmer_Reference_Part2_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Best Practices
  [ABPP3_BestPractices_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Install Guide
  [ABPP3_Install_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Deployment Guide
  [ABPP3_DeploymentGuide_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 How To Guide
  [ABPP3_HowTo_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Manufacturing Admin Guide
  [ABPP3_Manufacturing_AdminGuide_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Performance Tuning Guide
  [ABPP3_PerformanceTuning_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Authentication and Authorization Guide
  [ABPP3_Authentication_Authorization_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Frequently Asked Questions
  [ABPP3_FAQs_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 Monitoring Guide
  [ABPP3_Monitoring_Guide_6.2.8.pdf]
- Agile Business Process Platform (ABPP) 3 PGL Internationalization Guide
  [ABPP3_PGL_Internationalization_6.2.8.pdf]
To Read Documentation
To read the .pdf files, you must have Adobe Acrobat Reader, version 4.0 or higher.
If you do not have Acrobat Reader on your machine, you can download it from
Adobe’s Web site at http://www.adobe.com.
To read the Help files, you must have one of the following browsers:
- Internet Explorer, version 5.0 or higher. You can download this software from the Microsoft Web site at http://www.microsoft.com/.
- Netscape, version 4.0 or higher. You can download this software from the Netscape Web site at http://home.netscape.com/.
To Obtain Licenses
To obtain licenses for i2 and third-party products, go to http://support.i2.com, and log
on. On the Contents list, expand Cases Menu, and then click Request LicenseKey.
Alternatively, you can request licenses by email, but the Web site provides priority
service.
Email: support@i2.com
Give Us Feedback
We value your comments and suggestions about our documentation. If you have
comments about this book or the online Help, please enter them in the Comments and
Feedback section of the i2 Customer Support Web page. We will use your feedback in
our plans to improve i2 documentation.
Timer Service
This chapter gives you information on Timer Service in ABPP. It includes the
following topics.
Topics:
- “Introduction”
- “Timer setup and callback”
- “Deployment”
- “Configuration of timer poll interval”
- “Timer commands”
- “Guaranteed Messaging”
Introduction
The timer service allows you to perform specified actions at a specified time. It also allows you to specify a duration at which the timer should repeat these actions. It provides a collection of functions and tags for timer-related activities.
Using the timer feature is a two-step process:
1. Set up a timer in the system with a callback time, and specify the callback actions that need to be executed when the callback time is reached. ABPP provides the START_TIMER command for this; it results in the ABPP server adding a timer record to the database.
2. The timer sink runs as a service (TIMER_SINK) on the ABPP server. It monitors the timer entries for expired timers and executes the specified callback actions. It also takes care of repeating the callback after the specified duration, if one was specified in START_TIMER.
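Putting the two steps together, a minimal setup might look like the following sketch, assembled from the constructs described later in this chapter; the EXAMPLE_TIMER name, the one-hour delay, and the printed message are illustrative only:

```xml
<START_TIMER>
  <!-- Step 1: register the timer record with its identifier and callback time -->
  <IDENTIFIED_BY>
    <NAME Value="EXAMPLE_TIMER"/>
  </IDENTIFIED_BY>
  <CALLBACK_DATE Value="{incrDate(date(), duration(0,1,0,0))}"/>
  <!-- Step 2: the TIMER_SINK service executes these actions when the timer expires -->
  <CALLBACK_ACTIONS>
    <PRINTLN Value="Timer fired at {$callBackDateVar}"/>
  </CALLBACK_ACTIONS>
</START_TIMER>
```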
Deployment
In a typical deployment there can be multiple ABPP Servers. Each of these servers
should be homogeneously deployed with multiple Xservices. Each of the ABPP
Server can have a timer source component and a timer sink component. In such a
scenario there will be multiple timer services running across different ABPP Servers.
When a service in one ABPP Server starts a timer job, the ABPP server will persist the
job as a timer record in the database. Whenever this job is due to be run, any of the
timer sink services across ABPP servers can pick up this job and act on it. So, timer
callback actions can be executed in an ABPP Server instance that is not the one that
initiated it. This design helps achieve high availablity for the timer function.
Timer commands
START_TIMER
Description:
Adds a timer record to the database. Each timer record contains the following key elements:
Identifier: This is specified by a set of key-value pairs enclosed within the IDENTIFIED_BY tag. The identifier needs to be passed in to the STOP_TIMER command to remove the timer entry created by this command. The identifier is also made available (as the identifiedByDoc variable) whenever the callback actions are invoked.
Example:
<IDENTIFIED_BY>
<OrderId Value="C-1" />
<Type Value="creditCheckTimer"/>
</IDENTIFIED_BY>
In the above example, the timer entry is identified by the values for the keys OrderId and Type.
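The same key-value pairs are what you would later pass to STOP_TIMER (described later in this chapter) to remove the entry; a sketch based on the identifier above:

```xml
<STOP_TIMER>
  <IDENTIFIED_BY>
    <!-- Must match the key-value pairs used in START_TIMER -->
    <OrderId Value="C-1"/>
    <Type Value="creditCheckTimer"/>
  </IDENTIFIED_BY>
</STOP_TIMER>
```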
For Document: This encapsulates the context on which the timer is acting. It is a placeholder to associate any XML document with the timer entry.
Example: You might want to create a timer to perform a credit check on an order at a future time. The credit-check logic (triggered on the timer callback) might require some information about the order (the customer associated with the order, the order value, and so on) to perform its work. You can store this information along with the timer entry. It is made available (as the thisDoc variable) whenever the callback actions are invoked.
Example:
<FOR_DOCUMENT>
<Order>
<ID Value="C-1" />
<Customer Value="Velocity" />
<OrderValue Value="100000" />
</Order>
</FOR_DOCUMENT>
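Within the callback actions, individual fields of this document can then be read back through X-Path expressions on the thisDoc variable. A sketch, assuming thisDoc is rooted at the content of FOR_DOCUMENT so that the Order element is addressable as shown:

```xml
<CALLBACK_ACTIONS>
  <!-- Read fields of the stored Order document via the thisDoc implicit variable -->
  <PRINTLN Value="Customer = {$thisDoc/Order/Customer/@Value}"/>
  <PRINTLN Value="Order value = {$thisDoc/Order/OrderValue/@Value}"/>
</CALLBACK_ACTIONS>
```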
Call Back Date: This is the date on which the callback actions should be executed. It can be specified in one of two ways:
- A fixed date-time, specified by the CALLBACK_DATE tag. The callback actions are executed once the current time exceeds the specified time.
- A repeating schedule, specified by the CALLBACK_DURATION (or CALLBACK_DST_DURATION) tag together with the optional CALLBACK_START_DATE and CALLBACK_END_DATE tags, as shown in the syntax below.
Call Back Actions: These are the X-Rule actions to be executed when the timer wakes up. The X-Rule actions are enclosed within the CALLBACK_ACTIONS tag.
Only the following implicit variables can be referenced within the callback actions:
- thisDoc: A reference to the XML document contained within the <FOR_DOCUMENT> tag of the START_TIMER.
- identifiedByDoc: The identifier; a reference to the <IDENTIFIED_BY> XML of the START_TIMER.
- callBackDateVar: A special variable that contains the current callback date-time.
Example:
<CALL_BACK_ACTIONS>
<PRINTLN Value="Call back date time is {$callBackDateVar}"/>
</CALL_BACK_ACTIONS>
Note: Only the implicit variables specified above are visible within the scope of the
CALLBACK_ACTIONS tag.
Syntax:
<START_TIMER>
  <!-- The timer record is identified by this name. This is mandatory
       but does not have to be unique. -->
  <IDENTIFIED_BY>
    <key-name Value="value" Repeatable="true"/>
  </IDENTIFIED_BY>
  <!-- The document for the callback actions -->
  <FOR_DOCUMENT>
    <!-- The content can be any XML document. A CustomerOrder
         document is shown as an example. -->
    <CustomerOrder>
      <ID Value="CO-1"/>
    </CustomerOrder>
  </FOR_DOCUMENT>
  <CHOICE_OF>
    <CHOOSE>
      <CHOICE_OF>
        <CHOOSE>
          <!-- Callback duration in seconds. The duration X-Path function can
               be used to get the number of seconds. This is the interval
               between successive callbacks. -->
          <CALLBACK_DURATION Value="{duration(0,1,30,0)}"/>
        </CHOOSE>
        <CHOOSE>
          <!-- Callback repeat DST duration. This accounts for any adjustments
               because of DST. The dstDuration X-Path function can be used to
               specify the DST duration. This is the interval between
               successive callbacks. -->
          <CALLBACK_DST_DURATION Value="{dstDuration(1,1,30,0)}"/>
        </CHOOSE>
      </CHOICE_OF>
      <!-- Callback start date: the date-time when the callbacks should
           start. If not provided, the default is now. -->
      <CALLBACK_START_DATE Value="01/01/2000[Type=datetime,Optional=true]"/>
      <!-- Callback end date: the date-time when the callbacks should
           end. If not provided, the end date is infinity. -->
      <CALLBACK_END_DATE Value="01/01/2006[Type=datetime,Optional=true]"/>
    </CHOOSE>
    <CHOOSE>
      <!-- The date-time when the actions should first be performed. The date
           function can be used to get the date-time. -->
      <CALLBACK_DATE Value="01/01/2005[Type=datetime]"/>
    </CHOOSE>
  </CHOICE_OF>
  <!-- The actions to be performed when the timer goes off -->
  <CALLBACK_ACTIONS>
    <!-- Any X-Rule code -->
    <PRINTLN Value="thisDoc Variable = {$thisDoc}"/>
    <PRINTLN Value="Identifier Variable = {$identifiedByDoc}"/>
    <PRINTLN Value="Current Expiration Date = {$callBackDateVar}"/>
  </CALLBACK_ACTIONS>
</START_TIMER>
Example 1:
Description:
The following example defines a timer named SIMPLE_TIMER. It asks to be called back after six hours. When it calls back, it prints the date-time and invokes the doSomething request.
<DEFINE_METHOD Name="startSimpleTimer">
  <RULE>
    <ACTION>
      <START_TIMER>
        <IDENTIFIED_BY>
          <NAME Value="SIMPLE_TIMER"/>
        </IDENTIFIED_BY>
        <CALLBACK_DATE Value="{incrDate(date(), duration(0,6,0,0))}"/>
        <CALLBACK_ACTIONS>
          <PRINTLN Value="Call back date time is: {$callBackDateVar}"/>
          <REQUEST Name="doSomething"/>
        </CALLBACK_ACTIONS>
      </START_TIMER>
    </ACTION>
  </RULE>
</DEFINE_METHOD>
Example 2:
Description:
The following example starts a timer. The timer callback starts the following day and repeats every hour until the end of the day. On each callback, the parameters for the callback are printed and the myRequest X-Rule is executed.
<DEFINE_METHOD Name="startCreditTimer1">
  <RULE>
    <ACTION>
      <START_TIMER>
        <IDENTIFIED_BY>
          <NAME Value="CreditCheckTimer"/>
        </IDENTIFIED_BY>
        <FOR_DOCUMENT>
          <Customer>
            <ID Value="{$thisParam/ID/@Value}"/>
          </Customer>
        </FOR_DOCUMENT>
        <CALLBACK_START_DATE
            Value="{incrDate(stripTime(date()), duration(1,0,0,0))}"/>
        <CALLBACK_DURATION Value="{duration(0,1,0,0)}"/>
        <CALLBACK_END_DATE
            Value="{incrDate(stripTime(date()), duration(2,0,0,0))}"/>
        <CALLBACK_ACTIONS>
          <PRINTLN Value="{$identifiedByDoc}"/>
          <PRINTLN Value="{$thisDoc}"/>
          <REQUEST Name="myRequest">
            <TO_XML DocVar="thisDoc"/>
          </REQUEST>
        </CALLBACK_ACTIONS>
      </START_TIMER>
    </ACTION>
  </RULE>
</DEFINE_METHOD>
The Value attribute of ID is the same as the ID contained in the input to the
startCreditTimer1 X-Rule.
Note: In this example, the duration function is used to compute the callback start and end dates. The duration function assumes the days argument to mean 24 hours, and hence does not account for DST adjustments. So the callback may not start at exactly 00 hrs:00 min:00 sec of the following day; similarly, it may not end at exactly 23 hrs:59 min:59 sec of that day.
Example 3:
Description:
This example is similar to the previous one. It shows the use of dstDuration to account for DST adjustments. The timer callback is invoked at 00:00:00 hrs for the next seven days, starting from tomorrow.
<DEFINE_METHOD Name="startCreditTimer1">
  <RULE>
    <ACTION>
      <START_TIMER>
        <IDENTIFIED_BY>
          <NAME Value="CreditCheckTimer"/>
        </IDENTIFIED_BY>
        <FOR_DOCUMENT>
          <Customer>
            <ID Value="{$thisParam/ID/@Value}"/>
          </Customer>
        </FOR_DOCUMENT>
        <CALLBACK_START_DATE
            Value="{incrDate(stripTime(date()), dstDuration(1,0,0,0))}"/>
        <CALLBACK_DST_DURATION Value="{dstDuration(1,0,0,0)}"/>
        <CALLBACK_END_DATE
            Value="{incrDate(stripTime(date()), dstDuration(7,0,0,0))}"/>
        <CALLBACK_ACTIONS>
          <PRINTLN Value="{$identifiedByDoc}"/>
          <PRINTLN Value="{$thisDoc}"/>
          <REQUEST Name="myRequest">
            <TO_XML DocVar="thisDoc"/>
          </REQUEST>
        </CALLBACK_ACTIONS>
      </START_TIMER>
    </ACTION>
  </RULE>
</DEFINE_METHOD>
STOP_TIMER
Description:
Deletes the timer record identified by the key-value pairs contained within the IDENTIFIED_BY tag.
Syntax:
<STOP_TIMER>
  <IDENTIFIED_BY>
    <key-name Value="value" Repeatable="true"/>
    <!-- These should match the key-value pairs provided in the START_TIMER command -->
  </IDENTIFIED_BY>
</STOP_TIMER>
Example 1:
Description:
The following example deletes the timer entries created in Examples 2 and 3 of the START_TIMER command. The timer entry to be deleted is identified by the key-value pair key=NAME, value=CreditCheckTimer.
<DEFINE_METHOD Name="stopCreditCheckTimer">
  <RULE>
    <ACTION>
      <STOP_TIMER>
        <IDENTIFIED_BY>
          <NAME Value="CreditCheckTimer"/>
        </IDENTIFIED_BY>
      </STOP_TIMER>
    </ACTION>
  </RULE>
</DEFINE_METHOD>
Guaranteed Messaging
Guaranteed Messaging is an extension of the timer service and provides a way to make a guaranteed call to a rule defined in ABPP. The guaranteed call can be configured so that, if the call fails for some reason, the system can retry it at a later time.
The guaranteed call is asynchronous (if the guarantee level is High); that is, it is invoked in a different transaction, and only if the current transaction commits. This is useful when the ABPP server needs to call external systems only if the originating transaction succeeds. In such situations, the call to the external system can be encapsulated in an ABPP rule, and the rule invoked using guaranteed messaging.
A guaranteed message request is defined in the rule as follows. It has the following components:
- Guaranteed Request invocation (<GUARANTEED_REQUEST>)
  - Specifies the request to be of type guaranteed.
  - Specifies the API that needs to be invoked.
  - Specifies the service that this API is registered with.
- Header (<HEADER>)
  - Specifies the parameters that the Timer Service will use.
  - Specifies the mandatory message key, which can be any user-defined string (MSG_KEY).
  - Specifies the number of retry attempts, the default value being one (NUM_RETRIES).
  - Specifies the retry interval, the default being zero (RETRY_INTERVAL).
  - Specifies the start date when the API is to be called first (START_DATE).
  - Specifies the result-determining rule (RESULT_DETERMINER).
Note: Use semi-guaranteed messaging only if you are certain that you are willing to
take the chance of losing a message.
Example 1:
Description:
The following example sets up a guaranteed message. Once the message has been set up, the timer sink service attempts delivery of the message. If the attempt is successful, it executes the actions listed under ON_SUCCESS. If a delivery attempt fails, the message is retried up to the specified number of times. If all attempts fail, the ON_FAILURE actions are executed.
<DEFINE_METHOD Name="executeGuaranteedMsg">
  <RULE>
    <ACTION>
      <GUARANTEED_REQUEST Name="callDummyExternalApi" ServiceName="DUMMY">
        <HEADER>
          <!-- Mandatory -->
          <MSG_KEY Value="{$thisParam/ID/@Value}"/>
          <!-- Default value = 1 -->
          <NUM_RETRIES Value="{$thisParam/NUM_RETRIES/@Value}"/>
          <!-- Default = 0 -->
          <RETRY_INTERVAL
              Value="{duration(0, 0, 0, $thisParam/RETRY_SECONDS/@Value)}"/>
          <!-- Optional -->
          <START_DATE Value="{incrDate(date(), duration(0,0,0,20))}"/>
          <!-- Optional -->
          <RESULT_DETERMINER Rule="resultDeterminer"/>
          <!-- Optional -->
          <GUARANTEE_LEVEL Value="High"/>
        </HEADER>
        <BODY>
          <ORDER Value="1">
            <LINE Value="{sum(1, 0)}"/>
            <LINE Value="{sum(1, 1)}"/>
          </ORDER>
          <ORDER Value="2">
            <LINE Value="{sum(2, 0)}"/>
            <LINE Value="{sum(2, 1)}"/>
          </ORDER>
        </BODY>
        <ON_SUCCESS>
          <PRINTLN Value="This is the thisDoc for msg {$thisDoc/@Name}"/>
          <PRINTLN Select="$thisDoc"/>
          <REQUEST Name="CallbackGMsgSuccess">
            <TO_XML DocVar="thisDoc"/>
            <ON Value="Success"/>
          </REQUEST>
        </ON_SUCCESS>
        <ON_FAILURE>
          <ADD_DEAD_LETTER>
            <TO_XML DocVar="thisDoc"/>
          </ADD_DEAD_LETTER>
          <REQUEST Name="CallbackGMsgFailure" ServiceName="DUMMY">
            <TO_XML DocVar="thisDoc"/>
            <ON Value="Failure"/>
          </REQUEST>
        </ON_FAILURE>
      </GUARANTEED_REQUEST>
    </ACTION>
  </RULE>
</DEFINE_METHOD>
Success-failure Determination
When message delivery is attempted, success is determined by the response from the invocation. An exception, or an explicit indication of failure, is considered a failure.
Users can optionally specify a result-determining rule. If this rule is specified, it is invoked right after an attempted delivery, with the guaranteed message request data as the parameter to the rule. The format of this input is shown in the next section. The rule must then indicate success or failure in the format specified below.
<!-- Attempt at delivery of the guaranteed message -->
<REQUEST Name="callDummyExternalApi" ServiceName="DUMMY">
  <ORDER Value="1">
    <LINE Value="1.0"/>
    <LINE Value="2.0"/>
  </ORDER>
  <ORDER Value="2">
    <LINE Value="2.0"/>
    <LINE Value="3.0"/>
  </ORDER>
</REQUEST>
<!-- Success case -->
<RESPONSE Status="Success">
  <_RESULT Value="SUCCESS"/>
</RESPONSE>
<!-- Failure case -->
<RESPONSE Status="Success">
  <_RESULT Value="FAILURE"/>
</RESPONSE>
OR
<RESPONSE Status="Error">
  <SOME_EXCEPTION_TRACE/>
</RESPONSE>
The on-success and on-failure actions have access to the thisDoc variable, which contains the following. The REPLY section contains the response from the invoked API.
<GUARANTEED_REQUEST Name="callDummyExternalApi" ServiceName="DUMMY">
  <HEADER>
    <IDENTIFIED_BY>
      <MSG_KEY Value="101010"/>
      <REQUEST_NAME Value="callDummyExternalApi"/>
    </IDENTIFIED_BY>
    <NUM_RETRIES Value="2"/>
    <RETRY_INTERVAL Value="65"/>
    <START_DATE Value="10/31/2003 16:41:42:143"/>
    <MSG_KEY Value="101010"/>
    <END_DATE Value="10/31/2003 16:43:52:143"/>
    <LAST_ATTEMPTED_DELIVERY_DATE Value="10/31/2003 16:41:48:921"/>
    <CURR_RETRY_COUNT Value="1"/>
  </HEADER>
  <BODY>
    <ORDER Value="1">
      <LINE Value="1.0"/>
      <LINE Value="2.0"/>
    </ORDER>
    <ORDER Value="2">
      <LINE Value="2.0"/>
      <LINE Value="3.0"/>
    </ORDER>
  </BODY>
  <REPLY>
    <RESPONSE Status="Success">
      <_RESULT Value="SUCCESS"/>
      <EXT_API_RESPONSE Value="SOME_DATA"/>
    </RESPONSE>
  </REPLY>
</GUARANTEED_REQUEST>
Approval Service
This chapter gives you information about the Approval Service. It includes the
following topics.
Topics:
- Introduction
- Approval Nodes
- Serial Approval Node
- Parallel Approval Node
- Multi-line Approval Node
- Approval Node Execution
- Input/Output to Approval UI workflows
- Configuring & Managing Approval Alerts
- Steps for creating an Approval Workflow
Introduction
The Approval Workflow framework is used to define and manage approval business processes. Here is an example of approvals required in a business process:
New Part Introduction:
Typically in a factory, finished products are manufactured from different raw materials, referred to as parts. Once in a while, an existing part is replaced with a new part to meet objectives such as improved manufacturing efficiency, lower cost, or increased durability of the finished product.
Introducing a new part into the manufacturing process typically requires various departments within the factory to consider the implications of the change. For example:
- The Engineering department has to decide if the new part will require additional tooling and changes to the existing manufacturing process.
- The Quality Control department has to evaluate if any changes have to be made to the existing inspection procedures to accommodate this change.
- The Purchasing department has to evaluate which suppliers to contract for procuring the new part.
- The Accounting department has to consider the overall cost benefit from this change.
After considering the implications, each department can either approve or reject the change. The new part can be introduced if all the relevant departments approve the change. The approval process may require orchestration in a particular sequence. For example:
- The Engineering and Purchasing departments can approve in parallel.
- The Quality Control department has to get input from Engineering before it can provide its recommendation, so it can approve only after the Engineering department.
- The Accounting department requires all other departments to approve before providing its recommendation.
The Approval Framework allows you to model approval business processes similar to the one described above. The framework provides special-purpose approval workflow nodes for:
- Notifying approvers through email and on-line alerts when there is an approval request waiting on their input
- Enabling approvers to provide their input. Typically the input is to either approve or reject the approval request. The framework allows specifying a user interface workflow for each of the approval nodes. This user interface workflow is launched when the approver acts on the on-line alert associated with the approval node. The approver can provide input by clicking the relevant buttons (Approve, Reject, etc.) on the user interface workflow.
The approval workflow nodes can be used within any standard workflow in ABPP. The workflow can be used to define the order in which each of the individual approvals should take place in the overall approval process.
The Approval Framework is a tool kit and hence can be used to model various types of approval processes, including:
- New Part Introduction: the process for introducing a new part into the Approved Product Catalog
- New Supplier Introduction: the process for introducing a new supplier into the Approved Supplier Catalog
- Suggested Orders: the process for requesting a purchase
Approval Nodes
The Approval Framework provides three special nodes for handling approvals: serial, parallel, and multi-line approval nodes.
In the case of serial approval, the approvers work on the approval in the same order as they appear in the list. Example: after the first approver in the list approves, the second approver in the list works on the approval, and so on.
The format of the XML is different for the multi-line approval node. Please see the multi-line approval node properties for details.
z Document Id: The Document Identifier for the document being run through the
Approval Process. Example: The Id of the part in a New Part Request (NPR)
approval process. The Approval Service does not check for the uniqueness of this
Id. This is just for the developer to use. Typically this property will be an X-Path
expression that will evaluate to a string.
z Document Type: This indicates the type of the Document being used in the
Approval Workflow. Example: The document type would be Part in a NPR
approval process. The Approval Service is not aware of and does not define the
types of documents possible. This is just for the developer to use. This property
can be an X-Path expression that will evaluate to a string.
z Alert Object (Optional): If provided, this X-Path expression should evaluate to
an xml. This xml will be passed to the alert events invoked by the Approval
Service. The alert events are described in a later section. Alert Object can be used
as a placeholder for any variables etc. which are defined in the Approval
Workflow but are required when creating the alerts for the approval requests.
z Consult Allowed: This check box indicates if the Approver is allowed to consult
another Approver for the approval assigned to him. For the consulted approver,
only respond action is permitted. The original approver can act on the approval
after the consulted approver responds with his comments.
z Forward Allowed: This checkbox indicates if the Approver is allowed to forward
the approval to another approver. Upon forwarding, the original approver gives
away all rights to the forwarded approver.
z PreActions: X-Rule actions. These actions are executed when the workflow
initially reaches the node. All the property X-Path expressions (Alert Object,
Approver List, etc.) are evaluated after the pre-actions are executed. A typical
use of pre-actions is to compute the list of approvers for this node and
assign it to the variable referenced in the X-Path expression for the Approver List
property.
z PostActions (Optional): X-Rule actions. These actions are executed after
approvals have been completed for the node. The approval result (Approved or
Rejected) can be accessed in the post actions through the implicit variable
approvalResult. The node output for the Approved/Rejected next nodes is
evaluated after the post actions are executed. A typical use of post actions is to
perform any post-approval tasks. After the post actions, the workflow
continues executing the next nodes depending on the approval result.
z On Approver Response Actions (Optional): X-Rule actions. These actions are
executed whenever an approver performs an approval action (for example,
Approve, Reject, or Hold). An implicit variable, approvalContext, is also available
for use within these actions. For the Expedite action, the payload of this variable
is the same as that of the expediteApprovalAlert event; for all other actions, the
payload is the same as that of the modifyApprovalAlert event. Please see the
section on Approval events for details on these events. These actions act as a user
exit to execute X-Rule actions whenever an approver performs an approval action.
This section focuses only on the properties specific to the parallel approval node. The
common properties are described in the Common Approval Node Properties
section. The property editor for a parallel approval node is the following:
alert or by selecting the alert and clicking a button. At this point the application
should invoke the “activateApprovalNode” command on the APPROVAL service,
passing:
| Token associated with the alert
| Approver’s user id.
| Return URL. The approver will be taken to this URL after he has processed
the approval. In most cases the return URL will be the URL of the alert in-box
page from which the user launched the activation.
Please see the Approval Node Activation section for the payload of the
activateApprovalNode command.
z The approval framework retrieves the approval context (approval workflow
instance and approval node information) from the token. It launches the UI
workflow associated with the approval node. The approval framework provides
the UI workflow with the approval node properties provided in the settings panel
and the list of approval actions that are allowed for that approver. The input/output
to the UI workflow is described in Input/Output to Approval UI workflows
section.
z The UI workflow renders the approval document and displays the buttons for
performing the approval actions - Approve, Reject, etc. It is the responsibility of
the approval workflow author to create the UI workflow and associate it with the
approval node. The same UI workflow can be reused across multiple approval
nodes. The UI workflow should also provide the screens for choosing the consult
and forward users if these features are enabled on the approval node.
z The user performs an approval action by clicking on one of the buttons on the UI
workflow page. At this point, the UI workflow should pass the following
information in its thisReturn variable:
| Token associated with the alert
| Approval action performed
| Approver Comments
| Consult or Forward Approver’s user id in case of consult or forward actions
z The approval framework gets the result of the approver action through the return
variable (thisReturn) of the UI workflow and processes the approver action.
z The approver will be redirected to the return URL provided as part of the
activateApprovalNode command.
z Raise the ‘modifyApprovalAlert’ event for all approver actions other than expedite.
The event payload provides information on the current approver, the approval
status, the approver action, and the next approver (if any). The listener to this
event should perform the necessary actions to manage the alert entries. Typically
this will involve deleting the alert entry for the current user and creating an active
alert for the next approver.
z Raise the ‘expediteApprovalAlert’ event for the expedite action. This event contains
information on the expedited users in addition to the information contained in the
payload of the ‘modifyApprovalAlert’ event. The listener to this event should clear
the alert entries for the expedited users in addition to the processing done by a
‘modifyApprovalAlert’ event listener. Note that the ‘modifyApprovalAlert’ event is
NOT raised for an expedite action.
z Invoke the “On Approver Response Actions” provided on the approval node. The
approval framework will provide an implicit variable, approvalContext, providing
the approval context. The payload of the approvalContext variable is the same as
the payload of the modifyApprovalAlert event.
z If there are no next approvers in the node, then the approval framework
determines the result of the approval.
z Raise the ‘removeApprovalAlert’ event. The listener to this event should remove all
the alert entries associated with the approval node.
z Invoke the “PostActions” provided on the approval node. The implicit variable,
approvalResult, contains the approval result and is visible in the post actions. The
valid values for approval result are APPROVED and REJECTED.
z If the approval result is APPROVED, the execution passes on to the next nodes of
the approval node associated with this outcome.
z If the approval result is REJECTED, the execution passes on to the next nodes of
the approval node associated with this outcome.
<AlertObject>
<ANY />
</AlertObject>
<ApproverIdList>
<root>
<Id Value="gnorman" />
<Id Value="jwright" />
</root>
</ApproverIdList>
<DocumentLineIdStatusList>
<ANY />
</DocumentLineIdStatusList>
<ApprovalResult Value="PENDING" />
<AllowedActions>
<Approve Value="true|false" />
<Reject Value="true|false" />
<Consult Value="true|false" />
<Respond Value="true|false" />
<Hold Value="true|false" />
<Forward Value="true|false" />
<Expedite Value="true|false" />
</AllowedActions>
</APPROVAL_NODE_CONTEXT>
</ADDITIONAL_PARAMETERS>
The thisReturn variable of the UI Workflow should have the following contents:
For consult and forward actions,
<RESPONSE>
<ApproverResult Value="CONSULT|FORWARD" />
<ApproverComment Value="abc" />
<TokenId Value="xyz" />
<ToApproverId Value="hwilson" />
</RESPONSE>
For other actions,
<RESPONSE>
<ApproverResult Value="APPROVED|REJECTED|HOLD|RESPOND" />
<ApproverComment Value="abc" />
<TokenId Value="xyz" />
</RESPONSE>
infrastructure). So the approval framework has been designed to work with any alert
implementation for managing approval alerts.
The approval framework will raise events providing information necessary for
creating, modifying and deleting the approval alert entries. It is the responsibility of
the application to define the listeners to these approval alert events for managing
(create/modify/delete) the alert entries and notifying approvers through email etc. The
payload of all the approval events can be found in approval_events.xml in the
APPROVAL service.
The alert entry persisted by the application should contain the approval token,
approval node name, approval document id and approval document type. These
properties are required for interacting with the approval infrastructure. Implementing
listeners to the approval events can be quite tricky. Please see the ABPP Approval
sample for an example of implementing listeners to the approval events. Note that this
is a one-time effort for an application to use the approval framework.
<API_DOC>
<INPUT>
<REQUEST Name="activateApprovalNode">
<TokenId Value="xxxx" />
<REDIRECT_URL Value="yyyy" />
<ActingUserId Value="xxxxx" />
</REQUEST>
</INPUT>
<OUTPUT>
<ON_SUCCESS>
<RESPONSE />
</ON_SUCCESS>
</OUTPUT>
</API_DOC>
TokenId is the token persisted along with the alert entry. REDIRECT_URL is the
URL to which the approver should be redirected after performing the approval action.
Typically this will be the URL of the approver’s alert in-box.
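As a sketch only (the token, URL, and user id values here are hypothetical), an application might invoke the command following the payload documented above:

```xml
<REQUESTS ServiceName="APPROVAL">
  <REQUEST Name="activateApprovalNode">
    <!-- the token persisted with the alert entry (hypothetical value) -->
    <TokenId Value="TKN-1001" />
    <!-- where to redirect the approver afterwards, typically the alert in-box page -->
    <REDIRECT_URL Value="/alerts/inbox.jsp" />
    <!-- the approver launching the activation -->
    <ActingUserId Value="gnorman" />
  </REQUEST>
</REQUESTS>
```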
The approval framework also raises the recordApprovalHistory event. This event is
useful if the calling application wants to record the approval history.
Data Upload
Introduction
The Dataupload service is a native service packaged with ABPP that allows the
user to upload data into various services. This document lays out a detailed
description of the features the service provides.
ApiName (Required): Name of the REQUEST that has to be invoked for each
document that is encountered in the source document.
DataType (Required): Whether the source is going to be CSV, XML, or XLS.
DisplayName (Required): This is a required attribute for the User Interface to show a
meaningful name for the Template.
BatchSize: The number of batches that are to be run while running the given template.
Each batch will run on a different thread, trying to execute requests
simultaneously.
RequestStreamClass: If the user wants to override the standard stream for processing
the records, he can implement a class and register it in this field. The standard streams
support CSV, XML and XLS.
TemplateType: This field is used by the UI for categorizing templates. If two
templates have the same type, then they are folded under the same node in the UI. In
the reference implementation there are two broad categories, Transaction and Static,
under which the user can create further categorization by following the convention
Transaction.Category or Static.Category.
UserData: This field can hold data associated with user information; in the reference
implementation this field holds the activity associated with the template. This is
further used by the UI to display only the templates that a user is eligible to view.
MiscData: This field holds any miscellaneous data that is needed by an application.
When a getTemplate call is made, this field is returned to the user along with the
other fields. It is not used in the reference implementation.
For a CSV-type template the upload also needs the order in which the properties will
occur. This is defined in the addTemplate request under the TEMPLATE node with
PROPERTY tags.
Here is an example of a typical request.
<REQUEST Name="addTemplate">
<TEMPLATE Name=""
ServiceName=""
ApiName=""
DisplayName=""
DataType=""
BatchSize=""
RequestStreamClass=""
TemplateType=""
UserData=""
MiscData="">
<PROPERTY Name="" />
<PROPERTY Name="" />
<PROPERTY Name="" />
:
:
</TEMPLATE>
</REQUEST>
Example Templates:
1. XML Template
<REQUEST Name="addTemplate">
<TEMPLATE Name="Upload Order (XML)"
ServiceName="Order_Processing"
ApiName="createCustomerOrder"
DisplayName="Upload Order (XML)"
DataType="xml"
BatchSize="1"
TemplateType="Transaction.Order"
UserData="upload_customer_order" />
</REQUEST>
2. CSV Template
<REQUEST Name="addTemplate">
<TEMPLATE Name="Upload Order (CSV)"
ServiceName="Order_Processing"
ApiName="createCustomerOrderCsv"
DisplayName="Upload Order (CSV)"
DataType="csv" BatchSize="1"
TemplateType="Transaction.Order"
UserData="upload_customer_order">
<PROPERTY Name="UserID" />
<PROPERTY Name="ShipDate" />
<PROPERTY Name="Expedite" />
<PROPERTY Name="BookID" />
<PROPERTY Name="Price" />
<PROPERTY Name="Quantity" />
</TEMPLATE>
</REQUEST>
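Given the property order declared in the CSV template above, each record in the uploaded CSV file would supply its fields in that same order. A hypothetical two-record data file (all values invented for illustration) might look like:

```text
gnorman,5/15/2006,true,1,3,10
jwright,5/15/2006,false,2,5,4
```

Whether a header row is expected depends on the standard CSV request stream, which is not specified here.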
The user can delete a template in the system by using the deleteTemplate API. The
user can access a template by calling the getTemplate API; the getAllTemplates API
can be used to get all the templates available in an application.
Upload Methods
Data can be uploaded through the Dataupload service in two ways: a server-based
method and a UI-based method. Following is a description of both these mechanisms.
The UPLOAD_FILE_DIR parameter defines the root folder for Dataupload. This is
the folder where the service maintains a set of other folders for internal management:
source, archive, and inprocess. Dataupload uses the source directory to look for
relevant files when a request is posted. Once the service picks up a file for processing,
the file is moved to the inprocess folder with a time stamp in the folder name; once
the processing of the file is over, it is moved to the archive folder.
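Putting these conventions together, the layout under the upload root looks roughly like this (the job-folder name and time-stamp placement are illustrative; the exact time-stamp format is determined by the service):

```text
<UPLOAD_FILE_DIR>/
    source/
        <job-name>/                  <- drop data files (and source.cfg, if needed) here
    inprocess/
        <job-name>+<time-stamp>/     <- files are moved here while being processed
    archive/                         <- files are moved here after processing completes
```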
After the above configuration is completed and a template is added for the data to be
uploaded, the user can copy the file to be uploaded into a folder under the source
directory (if the source directory does not exist, the user might have to create it).
If the data type is XML then the upload data file must follow a certain convention and
must look like the following:
<DATAUPLOAD Template="">
<BATCH_REQUEST>
...
</BATCH_REQUEST>
<BATCH_REQUEST>
...
</BATCH_REQUEST>
<BATCH_REQUEST>
...
</BATCH_REQUEST>
:
:
</DATAUPLOAD>
Example:
<DATAUPLOAD Template="Upload Order (XML)">
<BATCH_REQUEST>
<CustomerOrder>
<ShipDate Value="5/15/2006" />
<LineItems>
<LineItem>
<BookID Value="1" />
<Price Value="3" />
<Quantity Value="10" />
</LineItem>
</LineItems>
</CustomerOrder>
</BATCH_REQUEST>
<BATCH_REQUEST>
<CustomerOrder>
<ShipDate Value="5/15/2006" />
<LineItems>
<LineItem>
<BookID Value="1" />
<Price Value="3" />
<Quantity Value="10" />
</LineItem>
</LineItems>
</CustomerOrder>
</BATCH_REQUEST>
</DATAUPLOAD>
For all non-XML files the user also has to create another file called source.cfg and
copy it into the same folder as the data file. The source.cfg file should have the
following content:
<UPLD_CFG>
<JOB_PATH ResourceName="" Sequence="true">
<TEMPLATE Value="" /> <!-- template name; required for non-XML files -->
</JOB_PATH>
<JOB_PATH ResourceName="" />
</UPLD_CFG>
In the source.cfg file the ResourceName attribute should contain the actual file
name, and the Sequence attribute specifies that the files will be processed in the order
in which they appear in the file.
The user now has to post a request to the server, which should be as follows:
<REQUEST Name="uploadBatchRequestsDir" >
<JOB_NAME Value=""/>
</REQUEST>
Where the job name is the name of the folder under source where the data file has
been copied.
Example:
Assume that the UPLOAD_FILE_DIR parameter is set to the c:/i2/data/upload folder.
The user can then create a folder with a date stamp, like templateName-01-01-2006,
copy the data files under c:/i2/data/upload/source/templateName-01-01-2006, and
execute the following request on the Dataupload service:
<REQUESTS ServiceName="DATAUPLOAD">
<REQUEST Name="uploadBatchRequestsDir">
<JOB_NAME Value="templateName-01-01-2006"/>
</REQUEST>
</REQUESTS>
Let’s further assume that we would like to upload two types of files. The first one is a
non-XML based template (CSV, etc.), so the user will have to create another file called
source.cfg and put it in the same folder as the data file. The relevant source.cfg
file for the non-XML file:
<UPLD_CFG>
<JOB_PATH ResourceName="myFile-01-01-2006.csv"
Sequence="true">
<TEMPLATE Value="MY_NON_XML_TEMPLATE" />
</JOB_PATH>
<JOB_PATH ResourceName="myFile-01-01-2006.xml" />
</UPLD_CFG>
In the above example we had two resources. The first one is a CSV file, and therefore
the template name is specified in the source config file; the second one is an XML file,
which contains the name of the template in the Template attribute of its root
DATAUPLOAD tag, as specified earlier. However, if a folder contains only XML file
types with no implied sequence, then source.cfg is NOT needed and all files will start
loading almost simultaneously. In all other cases a source.cfg file is needed. If
source.cfg is provided, it should list all the files in the folder regardless of their file
types.
A note about aborted uploads: if the user copies an invalid XML file for upload, then
the upload will abort, but the file is copied under the inprocess folder. The user can go
to that directory, correct the XML file, and then post the following request:
<REQUEST Name="processAbortedDirs">
<DATE1 Value=".." />
<DATE2 Value=".." />
</REQUEST>
Where DATE1 and DATE2 are the start and end times, respectively; this will ensure
that all the uploads processed in the DATE1 to DATE2 time window are reprocessed.
Please note that, after correction, the user will have to leave the aborted files in the
directories where they were found under the inprocess folder.
UI Based Method
Uploading Data Files
The following are the steps to upload a data file through the UI. Please note that the
screens themselves are a part of the ABPP product; however, the navigation links are
implementation-specific. The navigation links described here are as implemented by
the reference implementation.
1. In the Navigation pad (left-pane), click on the Uploads > Upload File.
The Transaction Templates screen is displayed.
2. Note that the categorization is driven by the TemplateType field set when the user
creates the upload template, for example: Transaction.Order.
3. The user can click the appropriate template name and the upload screen is
displayed.
4. The user can then browse to the appropriate data file and click the Upload button.
5. The file is then submitted to the server, and a success or failure message is
displayed to the user.
6. For checking the details of a template the user can click Template Details button
and a screen is displayed with the details of the Template.
Status Report
The status report presents the status of an upload. It shows the details of the various
uploads that have happened; the user can filter these by various parameters, including
Report ID, Template Name, Upload Status, and the date range of when the upload
occurred. The result screen shows all the data related to uploads. The following is a
description of the fields that are displayed in the results:
1. Report ID: This is the id provided by the system to an upload that has been
executed.
2. Template Name: Name of the template that was used for the upload.
3. Report Time: Time when the upload actually started.
4. Progress: Status of the upload. The valid statuses are as follows:
| Aborted: This status indicates that the upload service was not able to resolve
certain issues, for example when an XML upload contains invalid XML.
| Completed: All records in the upload file were processed without any record
returning an error message from the API.
| Completed with gaps: All records were processed by the API, but some
records returned an error message from the API. The error messages have to
be in the format that the ABPP validation service reports them in; more details
on how to generate error messages are in the Appendix.
| Initialized: This indicates that the upload has just been initialized; this is a
transitory state which should quickly go away.
| Processing: This status indicates that the dataupload service is still processing
records associated with this upload.
| Processing with gaps: dataupload is still processing the records for this
upload but it has already encountered some valid errors.
| Waiting for start signal: This is a state where the upload is in a pre-initialized
state; again, this is a transitory state.
| Reserve Report: As soon as the upload starts, the service reserves a report id
number for the upload; this state indicates that the upload is in the process of
acquiring that id.
5. Total Records: The number of records that were processed in this upload.
6. Errors: The number of records that had errors returned by the API.
7. Purged: The number of records that were purged from this report without being
corrected.
The user is also allowed to reprocess or purge the upload information. The user can
select a record in the status report screen and click the appropriate button: Purge for
deleting all data related to the upload, or Reprocess to reprocess the upload. For
example, in a case where the records error out due to a referential constraint and the
user later realizes that the constraint has now been fixed, he/she can select the
appropriate upload record in this screen and click the Reprocess button; all the records
will be posted again to the server, to the same API/service combination.
The following is a screen shot of the Status Report screen, which is accessible from
the View Status menu in the reference implementation.
Error Correction
The user can look for reports with status Completed with gaps and click the
report id link on the Status Reports screen. Here he will see more details regarding the
upload, including a list of records which have errors in them, as follows:
The user can then click on the error record number link to inspect the error as well as
perform a correction on it; the following screen displays the error correction UI:
The correction can be made by the user and submitted to the server by clicking
the Submit button. The system returns a success or failure message; in case of failure,
the user can correct the error once again and resubmit.
A note on how to create links for the above-mentioned screen flows:
The above screens have been written using the x2 technology. For both of the screens,
Upload File and Status Report, there are page definitions already available in the
system. The user has to refer to these page definitions to create URL links in the
navigation panel.
Here are the steps to include these screens in your Application:
z Open your application’s Navigation workflow, right-click the Navigation UI node,
and select Properties... as shown below.
z On the following screen, set Display Text and name to Upload and upload
respectively, and click the OK button.
z The previous step will create a pad called Upload. Right-click on the pad and
select Add...->pad-item as shown below.
z On the resulting dialog box enter Upload File for displayText and click OK.
z Select the pad-item created in the previous step, and fill the url property with
{$pages:upload_transaction_templates}. This url internally points to a
pre-packaged jsp in the system.
z Right-click on the Upload pad again and select Add...->pad-item; in the resulting
dialog type View Status and click the OK button, as shown below.
z Right-click on the newly created pad-item View Status and set the url property to
{$pages:upload_reports} as shown below.
z Click the Apply button, and on the right-hand side of the UI node’s property page
you should see the newly created pad items, as follows:
Advanced Customizations
Dataupload Workflow
Dataupload goes through the following workflow when a new upload is started. Please
note that UNDERLINED text represents parameters that can be configured in the
dataupload service file. Certain parameters can be further overridden at the template
level by setting attributes in the addTemplate API. These parameters are specified
in Bold:
dataupload will process in parallel. Note that this parameter ensures that only
configured number of directories are processed in parallel at any given point in
time in dataupload.
Consider the following situation:
| User invokes 3 requests for uploadBatchRequestsDir simultaneously
with each one having two job names.
| The MAX_JOB_THRESHOLD is set to 2.
There are six jobs (i.e., directories) in total for data upload from the 3 invocations.
Dataupload will start by processing only 2 jobs from the first invocation. The other 4
jobs are queued up. The jobs from the queue will be processed as soon as any of
the existing jobs finish. At any point there can only be a maximum of 2 jobs being
processed simultaneously.
3. The first step in processing a job (i.e., directory) is to move the files in the
$UPLOAD_FILE_DIR/source/job-name directory into the $UPLOAD_FILE_DIR/
inprocess/job-name+time-stamp directory. If the service parameter
CAN_DELETE_SOURCE_DIR is true, then dataupload also deletes the
$UPLOAD_FILE_DIR/source/job-name directory.
4. After moving the files to the inprocess directory, dataupload will process as many
files as allowed by the service parameter MAX_REQ_STREAM_THRESHOLD. This
parameter defines how many files are processed in parallel by dataupload
across all the jobs that are currently in execution.
For example, consider the following scenario:
z Two directories are created containing three files each.
z uploadBatchRequestsDir is called twice on dataupload almost
simultaneously with the above two job names.
z MAX_REQ_STREAM_THRESHOLD is set to four
z MAX_JOB_THRESHOLD is set to two.
In the above scenario, dataupload will start processing all the files under the
first directory (a total of 3 files) and one file from the second directory.
Dataupload will wait until at least one of these files finishes before picking up the
next file from the second directory.
5. Dataupload then creates a Request Queue for each file. The request queue in effect
restricts the number of requests that are maintained in memory by dataupload.
Size of this Queue is determined by reading a service parameter
REQ_QUEUE_SIZE (ReqQueueSize).
6. For performance reasons, the Request Queue is not kept filled up all the time. The
Queue is refilled only when it drops below a certain percentage. This
percentage is defined by REQ_QUEUE_REFILL_PERCENT
(ReqQueueRefillPercent).
For example, if the Request Queue size is 200 and the refill percentage is set to 75,
then the request queue will be refilled only when 150 or fewer requests remain in the
queue.
7. Eventually dataupload starts filling up the Request Queue with requests read from
a file. A set of threads start pulling data from the Queue and start executing
requests on the appropriate xservice (ServiceName/API are configured when the
Note: In most cases, executing more than 1 request within a transaction can result in
deadlocks. This typically happens if the invoked API employs any locking of
the resources it uses. A common example of such a resource is the database
rows that are implicitly locked when issuing any write operation. So a safe
guideline is to set this parameter to 1.
Guidelines
1. For the dataupload service to capture errors raised by the API, the error response
should be in the standard ABPP validation format, as follows:
| The RESPONSE will have a child element called _RESULT.
| _RESULT should have a Value attribute with a value of ERROR or
SEVERE_ERROR.
| The RESPONSE should have all the fields as they appear on the REQUEST,
tagged along with the standard error response (_ERROR).
| The Details attribute in the _RESULT should be set to true. If the Details
attribute is not set to true, then the service will assume that the contents of the
RESPONSE are not the same as the REQUEST and will display the REQUEST
form as-is to the user (it will not contain the errors marked) for correction.
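As a sketch only (the placement of _ERROR and the field names are illustrative assumptions, not the exact shape produced by the validation service), a response satisfying the rules above might look like:

```xml
<RESPONSE>
  <!-- Details="true" tells dataupload the RESPONSE mirrors the REQUEST fields -->
  <_RESULT Value="ERROR" Details="true" />
  <!-- hypothetical echo of the request fields, with the standard error tagged along -->
  <CustomerOrder>
    <ShipDate Value="5/15/2006" />
    <BookID Value="999" />
    <_ERROR Value="Invalid BookID" />
  </CustomerOrder>
</RESPONSE>
```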
2. Ensure that the code is completely free of contentions before using BatchSize of
more than 1. When in doubt use BatchSize of 1.
3. If the Server Method is chosen for implementation, please note that all the files
used for upload will be saved to the Archive folder, so it is recommended that the
user clean up the files at regular intervals.
4. The upload service also maintains data in several tables; these tables can increase
in size as more and more uploads are executed. ABPP has a built-in purge utility
that allows the user to configure purging of data. The following is an example of a
purge spec implemented for cleaning up data upload tables:
<purge_defs>
<purge_def Name="UploadDataPurgeDef" Action="PURGE"
AnchorDocumentName="UPLOAD_RESOURCE_SPEC" BatchSize="150">
<documents>
<document DocumentName="UPLOAD_RESOURCE_SPEC">
<filter>
<AGE_IN_DAYS
Value="{$PURGE_AGE_IN_DAYS}" FieldToBeUsed="CREATION_DATE"/>
</filter>
</document>
</documents>
<!-- traverse_links: Used to delete related documents -->
<traverse_links>
<traverse_link LinkName="UPLOAD_PROGRESS_RESC_LINK">
<traverse_link LinkName="UPLOAD_RECORDS_LINK"/>
<traverse_link LinkName="UPLOAD_ERROR_LINK"/>
</traverse_link>
</traverse_links>
</purge_def>
</purge_defs>
For more information about the purge utility, please refer to Part B of the ABPP
Programmer’s Reference. PURGE_AGE_IN_DAYS (refer to the spec above)
is the number of days of dataupload data that you would want to keep in the
tables. It is recommended to purge the data in the tables by running the purge
utility on a daily basis.
5. There is a lot of overhead incurred per file by the data upload framework to track
the upload status, move the files through the directory structure, etc. This argues
for creating fewer files with larger sizes to avoid the overhead. On the other hand,
reading a huge file may take a long time, since this is a serial activity. So there is a
need to strike a balance between the two factors.
The right file size is often determined by running your uploads with different file
sizes and choosing the one that yields the best results. One rule of thumb is to use
file sizes greater than 5 MB to strike the right balance.
6. On a server with more memory ReqQueueSize can be set to a higher number to
yield better performance. You can find your optimal size by trial and error.
7. On a server with many CPUs you can increase the UploadThreadCount. Some
trial and error might be required to find the optimal setting for the given server.
8. On a machine with many CPUs, running multiple instances of the data upload
server may yield better throughput. You can divide the work amongst the servers
by distributing the jobs (directories). This is worth checking out.
<RESPONSE Status="Success">
<DBConnectionStats TotalConnectionsMade="3"
ConnectionsInPool="3" />
<MemoryStats TotalMemory="887033856" FreeMemory="31541272"
UsedMemory="855492584" />
<ThreadGroups NumThreads="44">
<ThreadGroup Name="system" NumThreads="6">
<!-- system threads omitted for readability -->
</ThreadGroup>
<ThreadGroup Name="main" ParentGroup="system"
NumThreads="5">
<!-- main threads omitted for readability -->
<Thread Name="XServer" />
</ThreadGroup>
<!-- remaining thread groups omitted for readability -->
</ThreadGroups>
</RESPONSE>
Here is how you can make sense of the resource stats command output listed above:
Note: Refer to the Dataupload Workflow section for description of any of the upload
parameters mentioned in the discussion below.
z You will see a master thread, MasterUploader-xxxxx, for each of the files that is
currently being uploaded. A master thread controls the upload for a single upload
file. The number of master threads will be less than or equal to (<=) the
MAX_REQ_STREAM_THRESHOLD parameter.
| The MaxQueueSize attribute on the MasterUploader thread corresponds to the
ReqQueueSize parameter. This parameter determines the number of upload
batches that can be held in memory for a given file upload. If the system is not
memory-constrained, then increasing the ReqQueueSize parameter will
improve the throughput.
| The QueueSize attribute corresponds to the actual number of upload requests
in the queue waiting to be processed. If this number is very high then
This chapter gives you information about the Task Scheduler concepts and service.
Topics:
z Introduction
z Task Nodes
z Wait Process Node
z Launch Process Node
z Load SQL File Node
z Execute SQL Procedure Node
z FTP Files Node
z Notification Node
z Task Node Execution
z Assumptions and Dependencies
z Configuring and Managing Notification
z Scheduling
z UI Log and Monitoring
z Suggestions and Best Practices
Introduction
The Task Scheduler framework provides a toolkit which enables the user to create,
monitor, and schedule task-based workflows. It uses various components already
available in ABPP, such as the workflow engine, PGL, and the timer service.
The framework also provides a rich set of workflow nodes which can be easily
configured. It also provides a monitoring UI which the user can use to check
the status of a Task workflow; these screens can be customized further based on
customer requirements.
The task nodes have an inbuilt notification mechanism which can be configured to
send emails, or can be hooked up with an alerting system.
Task Nodes
The Task Scheduler framework has five special nodes for the user to create various tasks.
These are Wait Process Node, Launch Process Node, Load SQL File Node, Execute
Procedure Node and FTP Files Node.
Wait Process Node, Load SQL File Node, Execute Procedure and FTP Files Node
have similar characteristics, as mentioned below:
z These nodes have Pre Action and Post Action components where the user can
execute rules before and after the task is executed.
z They also have a Notification component, where the user can specify the
condition in which a notification is to be raised, along with the email subject and
body. It also allows the user to associate an activity with the notification.
z These nodes also have a Next Nodes component. This enables the user to decide
which node to execute next after the post action has been executed.
z All of the above mentioned nodes are restartable: when a notification is
raised, the user can look at the state of the workflow and decide whether to
restart the workflow from any of the four nodes available in the workflow. If so
chosen, the workflow will restart from the specified task node onwards.
z An implicit variable named outputDocVar (also available in the Launch
Process Node) is available to the user in the Post Action component and returns
the result of the task. The signature of this variable is defined in the explanation
of each node.
z All properties of these nodes can be expressed as XPath expressions except for
the Procedure name in the Execute Procedure node.
z All the task nodes can be used with regular ABPP nodes in a workflow;
however, the other nodes will not be restartable.
z Capture Errors: This optional parameter configures the node to capture the
Standard Error of the application configured in the Command parameter; the
result is available only at the end of the process in the outputDocVar. Warning:
Because this setting captures the standard error buffer of the application for as
long as the application is running, it will cause the memory footprint to go up; it
is best to avoid this property and leave it as false.
z Node Input: Input to this node. Warning: It is best not to use Node Input
because this is a restartable node; the values of the Node Input parameter will
not be remembered when a restart happens.
Note: The above only returns the status of the launch; it does not include the exit
code.
The rest of the database properties are used to connect to the database, and the SQL
file is loaded into the specified database.
DB Type: Required if Use System DB is set to false. The Database type can be set to
‘ORACLE’ or ‘DB2’ as per the i2 Stack.
Host: Required when Use System DB is set to false. This parameter is to be set to the
Host where the SQL file is to be loaded.
Port: Required when Use System DB is set to false. This parameter is used to specify
the Port to connect to the database.
User Id: Required when Use System DB is set to false. This parameter is used to
specify the User Id to connect to the database.
Password: Required when Use System DB is set to false. This parameter is used to
specify the Password to connect to the database.
DB Instance Name: Required when Use System DB is set to false. This property is
used to specify the Instance name to connect to the database.
Note: Description in the root node is inserted only when the status is Error.
The above table defines the parameters to be accepted by the SQL procedure. The
table has as many rows as the number of parameters accepted by the SQL procedure.
The following explains the columns in the above table:
z Name: This is a required parameter which will be used to return the value for all
OUT and IN_OUT parameters in the response contained in the outputDocVar. In
the above example outputDocVar will contain the values <PARAM2 Value=""/>
and <PARAM3 Value=""/>. PARAM1 will not be available as it is an input
parameter.
z Value: This has to be populated with a literal or XPath expression value, as
required for the IN parameters.
z Type: This is a required parameter, as explained earlier this will indicate if the
parameter is of Type IN, OUT or IN_OUT.
z Data Type: This indicates the type of parameter that the SQL procedure is
expecting. This could be one of the following types: boolean, string, int, float,
double, date, datetime, time, and timestamp.
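As a rough sketch of how the parameter table drives the response, the following hypothetical Python fragment builds the outputDocVar content; only OUT and IN_OUT parameters appear in it (build_output_doc and its inputs are invented names, not ABPP APIs):

```python
# Hypothetical sketch, not ABPP code: build the outputDocVar response
# fragment from the parameter table. Only OUT and IN_OUT parameters
# are returned; IN parameters are omitted.
def build_output_doc(params, results):
    """params: list of (name, param_type) with param_type IN, OUT or IN_OUT;
    results: values returned by the SQL procedure, keyed by name."""
    parts = []
    for name, param_type in params:
        if param_type in ("OUT", "IN_OUT"):
            parts.append('<%s Value="%s"/>' % (name, results.get(name, "")))
    return "".join(parts)
```

With PARAM1 as IN and PARAM2/PARAM3 as OUT/IN_OUT, only PARAM2 and PARAM3 show up in the result, matching the example above.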
Note: Description in the root node is inserted only when the status is Error.
Note: If all files in a directory are copied and the FTP fails in between due to a
network error, the user might have to clean up the existing file(s). The user
will have access to this information in the Status field of outputDocVar; the
user will get 'Error' in the Status attribute. As mentioned earlier,
outputDocVar is available in the post action.
Notification Node
The Notification node is a task scheduler node which can be used to execute xrules.
It has all the characteristics of Task Scheduler nodes: it allows the user to
execute actions, configure notifications, and branch into next nodes based on
multiple conditions. This node is also restartable from the monitoring UI.
Each of the above tabs of the Notification Node properties window represent the
following:
Actions: The user can execute any xrules; the actions are executed as normal rule
code.
Notification Conditions: The user can specify notification conditions under this tab.
If any of these conditions evaluates to true, a notification is raised to the
users associated with the appropriate activity.
Condition settings: Next node conditions can be specified in this tab. This allows
the user to branch into multiple nodes based on various conditions.
Notification nodes are executed as follows:
Actions are executed first, then the notifications are evaluated and finally Next Node
conditions are evaluated.
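The three-phase order just described can be sketched as a small Python model (illustrative only, not the ABPP engine; every name here is invented):

```python
# Illustrative model of a Notification node's execution order:
# 1. actions run first, 2. notification conditions are evaluated,
# 3. next-node conditions are evaluated to pick the branch.
def run_notification_node(actions, notification_conditions,
                          next_node_conditions, context):
    trace = []
    for action in actions:                          # phase 1: actions
        action(context)
        trace.append("action")
    for condition, notify in notification_conditions:  # phase 2: notifications
        if condition(context):
            notify(context)
            trace.append("notify")
    for condition, next_node in next_node_conditions:  # phase 3: branching
        if condition(context):
            trace.append("next:" + next_node)
            return next_node, trace
    return None, trace
```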
<UserID/>
<Email/>
</SELECT>
<QUERY_DOCUMENT_LINK Name="UsersToRoleLink">
<QUERY_DOCUMENT_LINK
Name="RoleToRoleActivity">
<ActivityId Value="{$thisParam/ACTIVITY/
@Value}"/>
</QUERY_DOCUMENT_LINK>
</QUERY_DOCUMENT_LINK>
</GET_DOCUMENT>
<FOR_EACH SelectList="$response/UserProfile"
CurrentElement="userProfile">
<APPEND_TO_XML DocVar="thisReturn">
<USER_PROFILE>
<ID Value="{$userProfile/UserID/@Value}"/>
<EMAIL_ADDRESS
Value="{$userProfile/Email/@Value}"/>
</USER_PROFILE>
</APPEND_TO_XML>
</FOR_EACH>
</DEFINE_IMPLEMENTATION>
The following picture shows a typical configuration for a Notification on a task node:
The user can click the link and go to the Details page and decide to either restart a
particular node or continue from the notification node. The Details UI is explained
in more detail in the "UI Log and Monitoring" chapter.
This table has an insert trigger built in, which invokes a Task Event with the
following signature.
The above raise event derives all its properties from the data inserted into the
TASK_SCHEDULE_STATUS table. This mechanism inherently depends on the timer
service. Therefore it is imperative that the TIMER_SINK service be co-located with
the TASK_SCHEDULER service.
Here is an example workflow which demonstrates how the TaskEvent could be used:
The above workflow uses Launch Process to start an external process. After
launching the process, control moves to an Event node which waits for //
TASK_SCHEDULER/TaskEvent, assuming that the external process will insert a
record into the TS_STATUS table after finishing its operations. When the external
application inserts a record into the TS_STATUS table, a trigger creates a raise
event in the TIMER_SINK service for the TaskEvent. This raise event uses the same
values as the TS_STATUS insert, so the STATUS field is available to the user in the
post action of the Event node. On timer expiration the TIMER_SINK service
executes a raise event for the TaskEvent, and the workflow starts executing from
the post actions of the Event node. The Event node can then decide what to do next
based on the STATUS received from the external application. Note that the user has
to specify the module name and the instance name as keys; these are evaluated
right after the pre actions of the Event node.
Scheduling
The TIMER_SINK service is to be used for scheduling task workflows. Please read
the detailed instructions on how to use the timer service in the ABPP Services Guide.
The user can register Timer operations in the DEFINE_INIT of the service in which
the user has the task scheduler workflows. Within START_TIMER, a callback
duration can be set up such that the workflow is repeated based on the duration
specified. The user can also register a callback with rules to check more complex
conditions.
The following is an example of a START_TIMER which invokes a Task Workflow
only on weekdays and not on weekends:
<DEFINE_INIT>
<RULE>
<ACTION>
<STOP_TIMER>
<IDENTIFIED_BY>
<NAME Value="SchedulerSample"/>
</IDENTIFIED_BY>
</STOP_TIMER>
<START_TIMER>
<IDENTIFIED_BY>
<NAME Value="SchedulerSample"/>
</IDENTIFIED_BY>
<CALLBACK_DURATION Value="{duration(0,1,0,0)}"/>
<CALLBACK_DATE Value="{date()}"/>
<CALLBACK_ACTIONS>
In the above DEFINE_INIT the user first stops the timer; this is to ensure that the
same timer entry is not called multiple times. Once the timer is stopped, the start
timer is executed, which registers the timer. In the given example, the
<CALLBACK_DURATION Value="{duration(0,1,0,0)}"/> tag sets the duration of
the timer entry.
When the above timer expires, the code under CALLBACK_ACTIONS is executed.
In the above scenario the system evaluates <IF_TEST Test="((getDayOfWeek() != 7)
and (getDayOfWeek() != 1))">, which checks whether the current day is a weekday.
The workflow is executed only if the current day is a weekday.
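The weekday check can be sketched in Python (an illustration only; it assumes, as the IF_TEST above implies, that getDayOfWeek() returns 1 for Sunday through 7 for Saturday):

```python
import datetime

# Illustrative sketch of the weekday guard. Assumption: getDayOfWeek()
# returns 1 for Sunday through 7 for Saturday, as the IF_TEST implies.
def get_day_of_week(d):
    # Map Python's isoweekday (Mon=1..Sun=7) onto Sun=1..Sat=7.
    return d.isoweekday() % 7 + 1

def should_run(d):
    dow = get_day_of_week(d)
    return dow != 7 and dow != 1    # skip Saturday (7) and Sunday (1)
```

Note that the check must combine the two comparisons with a logical "and"; with "or" the test would be true every day, since no day is both Saturday and Sunday.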
The user could come up with different strategies around specifying the rules for
starting a workflow based on the requirements.
Note: Please note that just like any workflow in ABPP this workflow can be
extended and customized to meet different requirements.
tsTaskHistory Workflow:
For the user to be able to see their workflow in this monitoring UI, the following
steps must be followed while creating a TASK_SCHEDULER workflow.
On the workflow node, set the following properties:
| Set the Type attribute to “TASK_SCHEDULER”.
| Set the IsStartAllowed to “Yes”
| Set the NodeEventListener to 'true', or set it to 'default'. If the property is
set to 'default', the node will look for 'NodeEventListener' in the list of
service parameters (these are specified in the service xml file), and that
parameter should be set to 'true'. If the node does not find this property set to
true, the logs may not show up in the tsTaskHistory results.
Alternatively in the studio right click on a workflow and look up properties of the
workflow and set them as follows:
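The NodeEventListener resolution described above can be sketched as follows (a hypothetical illustration; node_events_enabled is an invented name):

```python
# Hypothetical sketch of the NodeEventListener resolution described above:
# "true" on the node enables event logging outright; "default" defers to
# the NodeEventListener entry in the service parameters.
def node_events_enabled(node_value, service_params):
    if node_value == "true":
        return True
    if node_value == "default":
        return service_params.get("NodeEventListener") == "true"
    return False
```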
The following picture shows the initial search screen of the tsTaskHistory table:
z Started By: The user can also filter by the Started By parameter.
Once the user sets up the above parameters and hits the search button the following
results are displayed:
| Nodes Executed: These are nodes that were executed in the workflow
z Name: Name of the Node (this is the name that is entered at design time)
z Type: Type of Node
z Start Time: The time at which the node started execution
z End Time: The time at which the node finished execution. If the end time is
not found, that means the node has not finished execution.
z Status: Completed if the node finished execution, empty otherwise.
| Nodes Not Executed: These are nodes that are not executed so far.
z Field Explanation is the same as Nodes Executed, except that the Start Time,
End Time and Status are always blank, as no execution has happened on
these nodes.
z Description: If the Workflow Description is populated that will show up here.
z Start Date: Start time of the workflow
z End Date: End time of the workflow
z Status: Current Status of a workflow.
tsDetails Workflow:
The tsDetails workflow allows the user to directly go to the Details page and look
at the execution path of the workflow. This page is ideal for notifications as it
takes the user directly to the pertinent information page.
The first column is selectable only if there is a notification pending on the relevant
workflow.
Here is a detailed description of the above fields:
z Nodes Executed: These are nodes that were executed in the workflow
| Name: Name of the Node (this is the name that is entered at design time)
| Type: Type of Node
| Start Time: The time at which the node started execution
| End Time: The time at which the node finished execution. If the end time is not
found, that means the node has not finished execution.
| Status: Completed if the node finished execution, empty otherwise.
z Nodes Not Executed: These are nodes that are not executed so far.
Field Explanation is the same as Nodes Executed, except that the Start Time, End
Time and Status are always blank, as no execution has happened on these
nodes.
z Continue Button: The Continue button will be displayed when the workflow is in
a notification state. Pressing this button allows the workflow to go forward
with the next nodes and evaluate the conditions on the next node.
z Restart Button: The Restart button will be displayed when the workflow is in a
notification state. The user can select any node of his choice and click the Restart
button to restart the workflow from that point onwards. Note that non Task
Scheduler nodes will not be restartable.
z Important note: if a workflow is either ACTIVE, COMPLETED or ABORTED,
the Restart and Continue buttons will not appear.
4. Start the studio and execute a workflow which has any of the Launch/Wait
Process nodes. Now notice that the ABPP process console gets the output from the
launched process.
Note: The external processes launched from the Launch/Wait Process Nodes are still
external to the studio or ABPP server; it is just that we register the output
stream to the console of ABPP.
Subworkflows:
Ideally the task workflows will be monitored by the Admin user group; to keep the
monitoring view fairly straightforward, it is suggested not to use subworkflows.
However, in certain scenarios it might be necessary to use subworkflows. In such
situations it might be best to create a subworkflow which has
Type="TASK_SCHEDULER" and IsExternalStartAllowed set to "true". This way
the user can directly search for the subworkflow and monitor it.
Email Service
Introduction
Email support in ABPP is provided by the OMS_MESSAGE_SERVICE service. This
is an archived service (available in bpe-services.jar) that provides an X-Rule method
that can be called from other services to send out emails.
The name OMS_MESSAGE_SERVICE is misleading and will be renamed to
something more meaningful, such as EMAIL, in the next major release of ABPP.
For most of the ABPP applications that need dashboard-style functionality, the
emails will be generated from the MESSAGING service. In such cases this service
needs to be set up as explained in the next section, but there is no need to call this
service directly. Please refer to the MESSAGING service chapter for details on the
features provided by MESSAGING.
Setup
The service configuration file for this service has the following service parameters
that need to be set properly:
z om.smtpHost: This service parameter defines the SMTP host name. The value
can be a host name or host IP address.
z om.systemFromAddress: This service parameter defines the default from
address that will be used when from address is not provided as part of the request.
z isEmailEnabled: This flag can be set to ‘true’ or ‘false’. Default is ‘false’. If set
to ‘false’ no email will be sent from the system.
As this is an archived service, to change these parameters create a custom service
extension and then change these parameters in the custom service configuration file.
When using Studio, the service can be customized by right clicking on this service
node in left navigation and selecting ‘Customize Service…’ menu option.
To change these parameters, open the custom service configuration file and add the
following parameters with proper values.
Sending Emails
To send out emails this service provides an X-Rule method "sendEmail". The
API_DOC for this method is:
<API_DOC>
<INPUT>
<REQUEST Name="sendEmail">
<SMTP_HOST Value="i2.com" Optional="true" Comment="Defaults to
om.smtpHost service parameter"/>
<FROM_ADDRESS Value="dummy@i2.com" Optional="true" Comment="Defaults to om.systemFromAddress service parameter"/>
<TO_ADDRS>
<TO_ADDR Value="abc@i2.com" Repeatable="true"/>
</TO_ADDRS>
<CC_ADDRS Optional="true">
<CC_ADDR Value="fdhd@i2.com" Repeatable="true"/>
</CC_ADDRS>
<BCC_ADDRS Optional="true">
<BCC_ADDR Value="fdhd@i2.com" Repeatable="true"/>
</BCC_ADDRS>
<SUBJECT Value="some subject"/>
<FORMAT Value="TEXT|HTML"/>
<MESSAGE>
<DATA Value="some data" Optional="true" Comment="One of DATA or
XML_DATA is required"/>
<XML_DATA XsltFileName="inv.xsl" Optional="true" Comment="One of
DATA or XML_DATA is required">
<ANY/>
</XML_DATA>
</MESSAGE>
<ATTACHMENTS Optional="true">
<ATTACH_FILE_NAME Value="demo.doc" Repeatable="true"/>
</ATTACHMENTS>
</REQUEST>
</INPUT>
<OUTPUT>
<ON_SUCCESS>
<RESPONSE Status="Success"/>
</ON_SUCCESS>
</OUTPUT>
</API_DOC>
The various tags expected by this x-rule method are explained below:
SMTP_HOST
This specifies the SMTP host to use to send the email. This is optional and if
not specified the ‘om.smtpHost’ parameter specified in the service
configuration file is used.
FROM_ADDRESS
This specifies the ‘from’ address for the email. Most SMTP servers will force
this to be a valid email address in the server’s domain. This is optional and if
not specified the ‘om.systemFromAddress’ parameter specified in the service
configuration file is used.
TO_ADDR
This specifies the email address of a recipient in the 'To' category. Multiple
TO_ADDR tags can be used to send the email to multiple recipients.
CC_ADDR
This specifies the email address of a recipient in the 'Cc' category. Multiple
CC_ADDR tags can be used to send the email to multiple recipients.
BCC_ADDR
This specifies the email address of a recipient in the 'Bcc' category. Multiple
BCC_ADDR tags can be used to send the email to multiple recipients.
SUBJECT
This specifies the subject of the email.
FORMAT
Valid values for this are TEXT or HTML. Using TEXT will result in a plain
text email. The HTML type should be used for emails where the body is an
HTML document so that the client displays it properly.
MESSAGE
This tag is used to define the email body. There are two ways to specify the
email body contents. The DATA tag can be used to specify the content as a
string. The XML_DATA tag can be used to convert XML data to email body
content (either plain text or HTML) using an XSL file specified by the
XsltFileName attribute. The XSL file should be specified using absolute path
or path relative to classpath.
ATTACH_FILE_NAME
This tag is used to send a file as an attachment in the email. The Value
attribute specifies the absolute path, or the path relative to the working
directory, of the attachment file.
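As a rough illustration of how a sendEmail-style request maps onto a standard MIME message, here is a sketch using Python's standard library (this is not the ABPP implementation; build_email is an invented helper):

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Illustrative sketch only, not the ABPP implementation: how a
# sendEmail-style request maps onto a standard MIME message.
def build_email(from_addr, to_addrs, subject, body, fmt="TEXT",
                cc_addrs=(), bcc_addrs=(), attachments=()):
    msg = MIMEMultipart()
    msg["From"] = from_addr
    msg["To"] = ", ".join(to_addrs)
    if cc_addrs:
        msg["Cc"] = ", ".join(cc_addrs)
    msg["Subject"] = subject
    subtype = "html" if fmt == "HTML" else "plain"    # FORMAT tag
    msg.attach(MIMEText(body, subtype))
    for path in attachments:                          # ATTACH_FILE_NAME tags
        with open(path, "rb") as f:
            part = MIMEApplication(f.read())
        part.add_header("Content-Disposition", "attachment", filename=path)
        msg.attach(part)
    # Bcc recipients go on the SMTP envelope, not into a visible header:
    envelope = list(to_addrs) + list(cc_addrs) + list(bcc_addrs)
    return msg, envelope

# Sending would then use the SMTP_HOST (or om.smtpHost) value, e.g.:
# smtplib.SMTP(host).sendmail(from_addr, envelope, msg.as_string())
```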
<QUANTITY Value="1"/>
<SHIP_DATE Value="01/10/2007"/>
</ORDER>
</ORDERS>
</TO_DOCVAR>
<REQUEST Name="sendEmail" ServiceName="OMS_MESSAGE_SERVICE">
<TO_ADDRS>
<TO_ADDR Value="testuser@testcompany.com"/>
</TO_ADDRS>
<SUBJECT Value="Test"/>
<FORMAT Value="HTML"/>
<MESSAGE>
<XML_DATA XsltFileName="EmailTest.xsl">
<TO_XML SelectList="$orderData/ORDER"/>
</XML_DATA>
</MESSAGE>
</REQUEST>
</ACTION>
</RULE>
</DEFINE_METHOD>
Contents of EmailTest.xsl:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="XML_DATA">
<html>
<body>
<b>Summary of Orders</b>
<table border="1">
<tr bgcolor="#9acd32">
<th align="left">Id</th>
<th align="left">Item</th>
<th align="left">Quantity</th>
<th align="left">Ship Date</th>
</tr>
<xsl:for-each select="ORDER">
<tr>
<td>
<xsl:value-of select="ID/@Value"/>
</td>
<td>
<xsl:value-of select="ITEM/@Value"/>
</td>
<td>
<xsl:value-of select="QUANTITY/@Value"/>
</td>
<td>
<xsl:value-of select="SHIP_DATE/@Value"/>
</td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
<TO_DOCVAR AssignToVar="emailBody">
<html>
<body>
<b>Summary of Orders</b>
<table border="1">
<tr bgcolor="#9acd32">
<th align="left">Id</th>
<th align="left">Item</th>
<th align="left">Quantity</th>
<th align="left">Ship Date</th>
</tr>
<FOR_EACH SelectList="$orderData/ORDER" CurrentElement="order">
<APPEND_TO_XML>
<tr>
<td>{$order/ID/@Value}</td>
<td>{$order/ITEM/@Value}</td>
<td>{$order/QUANTITY/@Value}</td>
<td>{$order/SHIP_DATE/@Value}</td>
</tr>
</APPEND_TO_XML>
</FOR_EACH>
</table>
</body>
</html>
</TO_DOCVAR>
</DEFINE_METHOD>
Messaging Service
Introduction
The MESSAGING service is a utility service that can be used to notify a group of
users, by email or by alerts on their Web UI, about an event that has occurred in the
system. Examples of events are order creation, an order needing approval, and so on.
Notifying the users is referred to as sending a message to the user, and the business
event that triggers this notification is referred to as a messaging event.
This service can be used by all other services (let us call them client services) to send
messages to different recipients. The client services have to define xml specification
files for each type of messaging event they intend to send. The specification files
include messaging details such as: the messaging event that will be raised by the
client service, the recipients for the event, the message format (email or Web UI
alert), and the message content.
Then, to raise messaging events at appropriate places in the business
workflows, the client service invokes a predefined X-Rule action component called
'RAISE_MSG_EVENT'. The parameters to 'RAISE_MSG_EVENT' identify the
messaging event configuration to use and also provide event-related details such as
Order ID, Items, and so on. On such an invocation the MESSAGING service uses
the messaging event configuration that has already been loaded and the data
provided by 'RAISE_MSG_EVENT' to notify the appropriate recipients. Please note
that the RAISE_MSG_EVENT action component has no relation to the
RAISE_EVENT action component.
For a client service to use the MESSAGING service the steps to be followed are as
follows:
1. Define Message Event: The first specification that the client service has to define
is the list of message events that it will generate. Each message event has a name
that needs to be unique within the client service.
2. Define Message Template: Based on messaging events the client service needs to
define the message templates that provide the contents of the message in different
protocols (EMAIL and/or ALERT). Each message template has a name that
should be unique within the client service.
3. Define Recipient Groups: The next step is to define the Recipient Groups needed.
A Recipient Group is a logical grouping of recipients and maps to one or more
recipients. A recipient is a valid user in the system as identified by the User
Security service. The recipient group specification allows for three types of
recipient groups: ENTITY, RULE and CUSTOM. These are explained later. Each
Recipient Group is uniquely identified by its name across all client services in the
system.
4. Link Message Event to Message Template and Recipient Groups: The next
step in setting up messaging is to link each of the message events with appropriate
message template to use and provide a list of recipient groups to be notified as a
result of the message event.
5. Loading messaging specification files: All of the above configurations are done
in xml specification files and these files are specified in the service configuration
file of the client service.
6. Raise message event: Finally to send messages i.e. notify users, the client service
uses 'RAISE_MSG_EVENT' x-rule action component from its workflows/x-rules.
On receiving this invocation the MESSAGING service will use the specifications
defined in previous steps to notify appropriate recipients.
Each of the steps above is explained in more detail below.
Example:
<messaging>
<events>
<event Name="CoCreditHoldEvent" ServiceName="ORDER_ADMIN"
Document="CUSTOMER_ORDER" Category="Order"/>
<event Name="AoRouteEvent" ServiceName="ORDER_ADMIN" Document="ACTIVITY_ORDER"
OnEventTypeClick="//ORDER_ADMIN/orderReview" Category="ActivityOrder"/>
...
</events>
</messaging>
EMAIL_NOTIFICATION_SUB=Email Notification
EMAIL_NOTIFICATION_MSG=This is an email notification for {0}
…
The base resource bundle file defines the default translations. For various
languages supported the appropriate resource bundle file needs to be defined. The
naming convention for the resource bundle file is
'<baseName>_<language>_<country>.properties' and use of <country> is
optional. The list of languages and countries can be found at http://ftp.ics.uci.edu/
pub/ietf/http/related/iso639.txt and http://userpage.chemie.fu-berlin.de/diverse/
doc/ISO_3166.html (use 2 letter codes).
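The fallback implied by this naming convention can be sketched as follows (illustrative only; bundle_candidates is an invented name):

```python
# Illustrative sketch (bundle_candidates is an invented name): the lookup
# order implied by the '<baseName>_<language>_<country>.properties'
# convention, most specific file first, falling back to the base bundle.
def bundle_candidates(base_name, language=None, country=None):
    names = []
    if language and country:
        names.append("%s_%s_%s.properties" % (base_name, language, country))
    if language:
        names.append("%s_%s.properties" % (base_name, language))
    names.append("%s.properties" % base_name)   # base file: default translations
    return names
```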
ALERT Template
The syntax for <comm_protocol> of type "ALERT" is:
Syntax:
Example:
<comm_protocol Type="ALERT">
<priority Value="5"/>
<url>
<document_id Select="$thisDoc/ID/@Value"/>
<document_name Value="CUSTOMER_ORDER"/>
</url>
<message Type="TEXT">
<text>Credit hold placed on customer order {0} with BillTo {1}</text>
<args>
<arg Select="$thisDoc/ID/@Value"/>
<arg Select="$thisDoc/BILL_TO_ID/@Value"/>
</args>
</message>
</comm_protocol>
EMAIL Template
The syntax for specifying <comm_protocol> of type "EMAIL" is shown below.
Syntax:
<comm_protocol Type="EMAIL" Optional="true">
<subject Value="Email subject using {n} for arguments">
<args Optional="true">
<arg Select="X-Path expression" Repeatable="true"/>
</args>
</subject>
<from_address Select="X-Path expression" Optional="true" Comment="Defaults to om.systemFromAddress
service parameter in OMS_MESSAGE_SERVICE"/>
<format Value="text|html"/>
<message Type="TEXT|XML">
<text Optional="true" Comment="Required for message type TEXT">Email text using {n} for
arguments</text>
<args Optional="true" Comment="Required for message type TEXT">
<arg Select="X-Path expression" Repeatable="true"/>
</args>
<data Select="X-Path expression for xml to be passed to XSL file" Optional="true" Comment="Required for
message type XML"/>
<xsl_file Value="Absolute or relative path of XSL file" Optional="true" Comment="Required for message
type XML"/>
</message>
</comm_protocol>
<subject>: This tag defines the subject of the email. The Value attribute specifies the
email subject. The subject can be specified using dynamic arguments by using curly
brackets like {0}, {1},…, {n} and corresponding <arg> tags inside <subject>. By
specifying the subject value as some string that is specified in the language resource
bundles explained before, the email subject can be internationalized based on locale
defined for the email recipient.
<from_address>: This tag specifies the 'From' address of the email. If no
from_address is specified, then the default 'From' address used will be the
om.systemFromAddress service parameter defined in the
OMS_MESSAGE_SERVICE service.
<format>: This tag specifies the format of the message. Valid values for this tag are
"text" or "html". Default value is "html". If format specified is "text" then the contents
of the email sent will be treated as plain text by the mail client. The 'text' format is
useful for simple emails. The "html" format is used to send emails where the email
content is an html document.
<message>: This tag is used to specify the email body. To send out simple text emails
the <text> and <args> child tags are used. To send out emails with html content the
<data> and <xsl_file> child tags are used.
<text>: The text of this tag specifies the email body. Dynamic arguments can be
specified by using curly brackets like {0}, {1}, … {n} etc. By specifying the text of
this tag as a string that is specified in the language resource bundles explained before,
the email body can be internationalized based on locale defined for the email
recipient.
<arg>: This is used to define dynamic arguments for email body specified by <text>
tag. The first <arg> corresponds to {0} in email body text, second to {1} and so on.
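The {0}, {1}, ... substitution can be sketched as follows (an illustration only; format_message is an invented name — the point is that each <arg> fills the placeholder with the matching index):

```python
import re

# Illustrative sketch (format_message is an invented name): each <arg>
# value replaces the {n} placeholder with the matching index, first <arg>
# for {0}, second for {1}, and so on.
def format_message(text, args):
    return re.sub(r"\{(\d+)\}", lambda m: str(args[int(m.group(1))]), text)
```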
<data>: The Select attribute of this tag specifies the xml document to use for
generating HTML contents of the email.
<xsl_file>: The Value attribute of this tag specifies the XSL file to apply to the xml
document identified by <data> tag to generate the HTML body of the email. The XSL
file should be specified using absolute path or path relative to classpath.
Internationalization is not supported when an XSL file is used.
Example 1:
<comm_protocol Type="EMAIL">
<subject Value="Credit hold on customer order"/>
<from_address Select="$thisDoc/EMAIL/@Value"/>
<format Value="text"/>
<message Type="TEXT">
<text>Credit check resulted in credit hold for Customer Order Id: {0}</text>
<args>
<arg Select="$thisDoc/ID/@Value"/>
</args>
</message>
</comm_protocol>
Example 2:
<comm_protocol Type="EMAIL">
<subject Value="Credit hold on customer order"/>
<from_address Select="$thisDoc/EMAIL/@Value"/>
<format Value="html"/>
<message Type="XML">
<data Select="$thisDoc"/>
<xsl_file Value="CreateHoldEmail.xsl"/>
</message>
</comm_protocol>
Contents of CreateHoldEmail.xsl:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="CUSTOMER_ORDER">
<html>
<body>
<b>Credit check resulted in credit hold for Customer Order Id: <xsl:value-of
select="ID/@Value"/></b>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
at run-time to find the recipients that need to be notified. Recipients from each
recipient definition are combined to form the final unique list of recipients.
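The combination described above can be sketched as follows (illustrative only; combine_recipients is an invented name):

```python
# Illustrative sketch (combine_recipients is an invented name): recipients
# evaluated from each recipient definition are merged into one
# de-duplicated list, preserving first-seen order.
def combine_recipients(*recipient_lists):
    seen, combined = set(), []
    for recipients in recipient_lists:
        for recipient in recipients:
            if recipient not in seen:
                seen.add(recipient)
                combined.append(recipient)
    return combined
```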
Defining the recipient groups using a separate specification file allows for reuse of a
recipient group across various message events. The syntax for defining recipient
groups is:
Syntax:
<messaging>
<recipient_groups>
<recipient_group Name="RecipientGroup1" ServiceName="ClientService[Optional=true]"
Document="XmlDocument[Optional=true]" >
<recipient_def Name="RecipientDef1" Type="ENTITY" Optional="true" Repeatable="true">
… contents explained below …
</recipient_def>
<recipient_def Name="RecipientDef2" Type="RULE" Optional="true" Repeatable="true">
… contents explained below …
</recipient_def>
<recipient_def Name="RecipientDef3" Type="CUSTOM" Optional="true" Repeatable="true">
… contents explained below …
</recipient_def>
</recipient_group>
</recipient_groups>
</messaging>
Example:
<messaging>
<recipient_groups>
<recipient_group Name="CoCreatorRG" ServiceName="ORDER_ADMIN"
Document="CUSTOMER_ORDER">
<recipient_def Name="RecipientDef1" Type="ENTITY">
... recipient def of type ENTITY. See details below ...
</recipient_def>
<recipient_def Name="RecipientDef2" Type="RULE">
... recipient def of type RULE. See details below ...
</recipient_def>
<recipient_def Name="RecipientDef3" Type="CUSTOM">
... recipient def of type CUSTOM. See details below ...
</recipient_def>
</recipient_group>
</recipient_groups>
</messaging>
In the above example the recipient group 'CoCreatorRG' is defined for service
'ORDER_ADMIN' and document 'CUSTOMER_ORDER'. That means that the
recipient group needs a CUSTOMER_ORDER document for evaluation and so is
compatible only with those message events that are also defined for the same
document.
The different recipient definition types are explained below.
When the recipient definition is of type 'ENTITY', at runtime the recipients are
evaluated by invoking a request descriptor 'getEntityMessagingInfo'. The 'entity_type'
and runtime value of 'entity_id' XPath expression specified above are passed as
parameters to this request descriptor. The request descriptor can be implemented by
any service in the solution and is generally implemented by the User Security service.
The rule implementing the request descriptor is expected to return appropriate
recipients and their contact information. The API_DOC for the request descriptor can
be found in MESSAGING/Events/EMRequestDescriptors.xml in Studio left
navigation. The valid values for <entity_type> will depend on the rule implementing
the request descriptor. The USER_SECURITY service packaged as part of Studio
examples in the userSecurity folder provides basic implementation of the
‘getEntityMessagingInfo’ request descriptor. It supports only 'USER' as a valid value
for <entity_type>. Also it assumes that all users are interested in all categories of
events.
Example of recipient_def of type ENTITY:
<recipient_def Name="Creator" Type="ENTITY">
<entity_type Value="USER"/>
<entity_id Select="$thisDoc/CREATED_BY_ID/@Value"/>
</recipient_def>
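At runtime the MESSAGING service passes the entity type and the evaluated entity id
to the request descriptor. Based on the parameters described above, the request might
look roughly like the following sketch; the exact shape is defined by the API_DOC in
EMRequestDescriptors.xml, and the service name and value shown here are
illustrative assumptions:
<REQUEST Name="getEntityMessagingInfo" ServiceName="USER_SECURITY">
<entity_type Value="USER"/>
<entity_id Value="user1"/>
</REQUEST>
The implementing rule is then expected to respond with the recipients and their
contact information, in the format documented in that API_DOC.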
<request>: This defines the custom rule to be called and its service.
Attributes:
• Name (Required): The name of the custom rule to be invoked to find the
recipients.
• ServiceName (Required): The name of the service that implements this custom
rule. The custom rule can be implemented in any service.
This type of recipient definition is used when the logic to determine the recipients is
complex. Here the MESSAGING service calls the user-defined rule at runtime to
get the recipient information. The input to the rule is the input xml document passed as
a parameter to 'RAISE_MSG_EVENT' (if a document is specified for the recipient
group).
The request sent to this rule will look like:
<REQUEST Name="customRule" ServiceName="CustomRuleService">
<!-- Input xml document if it is available -->
…
</REQUEST>
<event_category>: This tag is used to list the event categories that the recipient is
interested in. If no <event_category> tags are listed, the recipient is considered
not interested in any events. A special event category 'All' is allowed to indicate that
the recipient is interested in all categories. At runtime, if the category of the message
event being raised is not in this list, the recipient will not receive a
notification for that message event.
<comm_protocol>: This tag provides the contact information for the recipient.
The 'Type' attribute can be 'ALERT' or 'EMAIL'. Either one or both protocol types can
be specified. The expected children of this tag are shown above.
Example of recipient_def of type CUSTOM:
<recipient_def Name="recipient1" Type="CUSTOM">
<event_categories>
<event_category Value="All"/>
</event_categories>
<comm_protocols>
<comm_protocol Type="ALERT">
<USER_ID Value="admin"/>
</comm_protocol>
<comm_protocol Type="EMAIL">
<ADDR Value="admin@abc.com"/>
</comm_protocol>
</comm_protocols>
</recipient_def>
<event_def>: This tag identifies the message event that is being linked. The 'Name'
and 'ServiceName' specified should match an already defined message event.
<action>: This tag links the message event to a message template and recipient
groups. It can have one <message_template> and one or more
<recipient_group> children. When the message event is raised, the linked message
template and recipient groups are used to generate and deliver the notifications.
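One possible shape of such a linkage is sketched below; the nesting of <action>
under <event_def> and the attribute sets on the child tags are assumptions based on
the descriptions above, not taken from a shipped specification:
<event_def Name="createCoEvent" ServiceName="ORDER_ADMIN">
<action>
<message_template Name="CoCreatedMT"/>
<recipient_group Name="CoCreatorRG"/>
</action>
</event_def>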
When using Studio, the messaging configuration files can be added to the client service
without having to manually type the xml shown above into the client service
configuration file.
To do that, first right-click the client service node in the Studio left navigation and
select the “Insert Messaging Definitions” option under “Additional Configuration”.
This adds a “Messaging Definitions” node under the client service node as seen in the
image below. Then right-click the “Messaging Definitions” node and select the “Insert
Messaging Definition File...” option.
This pops up a dialog where you can specify the name of the messaging configuration
file and its path.
Clicking OK in this popup adds the specified messaging configuration file to the
client service (under the “Messaging Definitions” node) as shown below. Double-click
this newly added file in the Studio left navigation to open the editor, which provides
intelli-sense support for the messaging specification.
Attributes:
• Name (Required): The name of a message event that has already been
registered with the MESSAGING service by uploading the messaging
specification files.
• Select (Optional): If the message event has been defined with a document, the
document can be sent to the MESSAGING service using the 'Select' attribute. The
value of this attribute should be an XPath expression that evaluates to an xml
document.
• Synchronous (Optional): By default, RAISE_MSG_EVENT sends the message
asynchronously. This attribute can be set to 'yes' to send the message
synchronously. In asynchronous mode the message is sent using a different
thread and a different database transaction, so a failure does not affect the
parent transaction.
Example:
<RAISE_MSG_EVENT Name="createCoEvent" Select="$coDoc" Synchronous="yes"/>
delSelectedAlerts
The 'delSelectedAlerts' API can be used to delete alerts by specifying filters on
properties of the alert document MSG_ALERT. Alerts deleted using this API are
deleted for all users who received them. The API_DOC for this is:
<API_DOC>
<INPUT>
<REQUEST Name="delSelectedAlerts" Any="true" OrderIndicator="true">
<!-- Any filter based on properties of document MSG_ALERT -->
<ANY/>
</REQUEST>
</INPUT>
<OUTPUT>
<ON_SUCCESS>
<RESPONSE Status="Success"/>
</ON_SUCCESS>
</OUTPUT>
</API_DOC>
Example 1:
<REQUEST Name="delSelectedAlerts">
<ID Value="ALERT-1011169763975492"/>
</REQUEST>
Example 2:
<REQUEST Name="delSelectedAlerts">
<OR>
<ID Value="ALERT-1011169763975492"/>
<ID Value="ALERT-1011169763975456"/>
</OR>
</REQUEST>
Example 3:
<REQUEST Name="delSelectedAlerts">
<EVENT_NAME Value="CoCreditHoldEvent"/>
<DOCUMENT_ID Value="CO-123"/>
<DOCUMENT_NAME Value="CUSTOMER_ORDER"/>
</REQUEST>
Web UI
Configure UI
Based on the specifications described in the previous sections, the messaging service
generates alerts when messaging events are raised. These alerts can be displayed on the
WebUI. The messaging service provides a UI workflow for searching active alerts. To
add a link to this search workflow from the left navigation of the WebUI, append the
following line to Navigation.xml at the appropriate level, like any other link.
<ui:pad-item displayText="Search Alerts" workflow="//MESSAGING/SearchAlerts"
activity="serch_alerts"/>
Search Alerts
The alert search screen enables the user to search, with various criteria, for the alerts
for which the user is a recipient. The following search filters are provided by the alert
mechanism.
1. Service Name: Every alert belongs to a service. The services of the alerts are
specified while configuring the alert.
2. Event Name: This dropdown lists all the events. The user can select "All" or an
event type of interest.
3. Priority: Every event can be assigned a priority according to its importance. Once
configured, the priority for a particular event does not change. The alert mechanism
provides the following priorities:
a. Fatal
b. Severe
c. Warning
d. Information
4. Category: A logical grouping of alerts. The category is defined at the time of
configuration.
5. Document Name: This search filter can be used when the user knows for
which particular entity (for example, a Purchase Order or an ASN) the alert was created.
6. Document Id: If the user knows the document ID, the user can find the particular
alert for that ID.
7. Creation Date: The user can search alerts based on their creation date.
The following screenshot shows the various search filters.
The same page provides detailed information for each alert. The user can use the
pagination functionality to browse alerts.
Remove Alerts
Apart from searching, users can remove (clear) alerts on the same search alert
screen. To remove an alert:
1. Select the check box for the alert(s).
2. Click the Clear button.
After this action the selected alerts are deleted from the system (for all the
recipients of the alert) and will not be available later for any action (refer to the
figure above).
Alert Details
To view the details of an alert, the user can click the link given under the description
column. This link takes the user to another page that can provide further details relevant
to that alert. For different types of alerts this link can take the user to different pages, as
per the configuration in the messaging event definition. The UI workflow that is
invoked by clicking this link has access to all the details of the alert. The
'thisParam' variable in this UI workflow will refer to the following xml:
<ADDITIONAL_PARAMETERS>
<ID Value="ALERT-1011169662846851[Type=string]"/>
<CREATION_DATE Value="01/24/2007 12:20:46[Type=datetime]"/>
<DOCUMENT_ID Value="1[Type=string]"/>
<DOCUMENT_NAME Value="CUSTOMER_ORDER[Type=string]"/>
<EVENT_NAME Value="PoCreated[Type=string]"/>
<MESSAGE Value="New Order 1 placed by buyer buyer1. [Type=string]"/>
<MSG_TEMPLATE_NAME Value="PoCreatedMT[Type=string]"/>
<MSG_TEMPLATE_TEXT Value="New Order {0} placed by buyer {1}.[Type=string]"/>
<PRIORITY Value="3[Type=int]"/>
<SERVICE_NAME Value="ORDER_ADMIN[Type=string]"/>
<CATEGORY Value="Order[Type=string]"/>
<ON_CLICK_SERVICE_NAME Value="ORDER_ADMIN_UI[Type=string]"/>
<ON_CLICK_WORKFLOW_NAME Value="orderReviewFromAlert[Type=string]"/>
</ADDITIONAL_PARAMETERS>
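For example, the on-click workflow could read individual alert details from
'thisParam' using XPath expressions of the same form as the 'entity_id' example
shown earlier; the expressions below target elements of the xml above and are
illustrative, not taken from a shipped workflow:
$thisParam/DOCUMENT_ID/@Value
$thisParam/EVENT_NAME/@Value
A workflow such as 'orderReviewFromAlert' could use the first expression to locate
the CUSTOMER_ORDER document to display, and the second to branch on the type
of event that raised the alert.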