OPERATIONS GUIDE
B2Bi
Version 2.1.0
25 August, 2017
Copyright © 2015 Axway
All rights reserved.
This documentation describes the following Axway software:
B2Bi 2.1.0
No part of this publication may be reproduced, transmitted, stored in a retrieval system, or translated into any human or
computer language, in any form or by any means, electronic, mechanical, magnetic, optical, chemical, manual, or otherwise,
without the prior written permission of the copyright owner, Axway.
This document, provided for informational purposes only, may be subject to significant modification. The descriptions and
information in this document may not necessarily accurately represent or reflect the current or planned functions of this
product. Axway may change this publication, the product described herein, or both. These changes will be incorporated in
new versions of this document. Axway does not warrant that this document is error free.
Axway recognizes the rights of the holders of all trademarks used in its publications.
The documentation may provide hyperlinks to third-party web sites or access to third-party content. Links and access to these
sites are provided for your convenience only. Axway does not control, endorse or guarantee content found in such sites.
Axway is not responsible for any content, associated links, resources or services associated with a third-party site.
Axway shall not be liable for any loss or damage of any sort associated with your use of third-party content.
Contents
Accessibility 17
Accessibility features of B2Bi 17
Keyboard shortcuts 17
Screen reader support 18
Accessibility features of the documentation 18
Screen reader support 18
Graphic readability 18
2 B2Bi architecture 20
User interface layer 20
Application layer 21
B2Bi trading engine 21
B2Bi integration engine 21
Application layer Java requirement 21
Application layer clustering technology 21
Application layer third party software 22
Data Layer 22
3 B2Bi engines 23
Trading engine technology 23
Integration engine technology 23
Integration engine extended processing 23
Message Builder Components (MBC) 24
Integration engine tasks 25
B2Bi engine files, logs and tools 26
B2Bi engine runtime data persistence 27
Managing engines in the user interface 27
The following sections of this guide have been added or revised:
Document republished October 12, 2016
Tune and scale clusters on page 325 – Corrected information on configuring CPU usage in relation to Integration Engine processes.
Use System Profile Manager on page 214 – Corrected information on configuring CPU usage in relation to Integration Engine processes.
Document republished June 15, 2016
Installer configure mode on page 272 – Added information about selecting the "Stop duplicates from being processed" option.
Document republished May 24, 2016
B2Bi I/O management on page 276 – Corrected analysis of I/O usage test results.
Document republished January 11, 2015
Tools in tools directory on page 243 – Restored trading engine tools directory tool descriptions.
Document republished June 5, 2015
Use System Profile Manager on page 214 – Added information on using the System Profile Manager to set parameters that control how the B2Bi integration engine sends events to Sentinel.
Increase message splitting efficiency on page 287 – Added tip for improving performance when splitting inbound messages into a large number of smaller messages.
Document republished December 18, 2014
Import and update component resources on page 44 – Updated and corrected flow deployment procedures.
Command line interface for the deployment server on page 79 – Updated and corrected flow deployment procedures.
This section describes the accessibility features of the B2Bi product and its documentation.
l Keyboard shortcuts on page 17
l Screen reader support on page 18
Keyboard shortcuts
B2Bi provides a set of shortcuts for navigating the interface screens and for executing various
actions.
Note To use shortcuts with JAWS, turn off the virtual PC cursor. For more information, see
Screen reader support on page 18.
The following table contains a list of keyboard shortcuts that you can use:
To do this Press
Move forward through selectable objects Tab
Move backwards through selectable objects Shift + tab
Multi-select in list boxes (where multi-select is enabled) Ctrl + click
Multi-select (where multi-select is enabled) Shift + click
Select/clear check boxes and radio buttons Space
Display drop-down box content Alt + down arrow
Move cursor within drop-down box Up arrow / down arrow
Screen reader support
As with other screen readers, you interact with JAWS using keyboard shortcuts. Most of the time,
you must press the JAWS key in combination with other keys. By default, the JAWS key is the Insert
key.
To use the arrow keys and keyboard shortcuts with B2Bi, turn off the virtual PC cursor by pressing
the JAWS key+Z.
Graphic readability
l The documentation is very readable on high-contrast displays.
l There is sufficient contrast between the text and the background color.
l The colors used in graphics are designed to be easily distinguishable by people who have color
blindness.
This guide provides information to help you start, run, tune and maintain B2Bi. It presents B2Bi
architecture, logs, tools, best practices, and discusses strategies and procedures for resolving issues
and operating problems.
l User interface layer
l Application layer
l Data layer
Users can also work with the B2Bi server (for development and deployment purposes) through Mapping
Services. Mapping Services is an Eclipse plug-in application that runs on Windows. Mapping
Services provides an environment for the development and deployment of maps and other B2Bi
component resources.
Application layer
In the application layer in the server environment, we distinguish two main server engines:
l B2Bi trading engine
l B2Bi integration engine
The Axway Clustering Framework (ACF) is an extension to the OSGi framework
(http://www.osgi.org/). It enables multiple OSGi instances to work together as a cluster. ACF is
composed of two main constructs:
l Infrastructure to manage multiple instances of an OSGi framework, each running in its own JVM.
This allows nodes to be dynamically defined, started, stopped, and restarted, thereby enabling
scalability and fault tolerance.
l A suite of cluster related APIs that are provided as an OSGi bundle. These APIs provide cluster
management, RPC style messaging, cluster singletons, resource locking, and more.
Benefits of ACF:
l Scalability
l High availability and fault tolerance
l Simple management
l Inter-node communication
l Resource locking
l OSGi features
For details of cluster management, see B2Bi Active / Active clustered installations on page 297.
Data Layer
B2Bi requires a database. Supported databases are DB2, Oracle, SQL Server and MySQL. For
clustered installations, B2Bi additionally requires a shared file system. Various shared file systems are
supported. For detailed information, see the B2Bi Installation Guide.
l Trading engine – mainly responsible for protocol and security handling
l Integration engine – mainly responsible for message content handling
Both of these engines are started when you start B2Bi, and both must be running for B2Bi to
operate.
The trading engine web interface is based upon Cocoon and uses an embedded Jetty application
server, version 6.
The trading engine has an extensible architecture that enables system integrators to apply
custom logic to messages that transit the processing pipeline. This custom processing logic,
implemented as a user-defined Java class, can be selectively applied at runtime to inbound or
outbound messages.
There are several categories of message processing components:
l Message Builder Components (MBC) – written in the Axway-proprietary Message Builder
Language.
l Mapping programs – developed in Mapping Services.
l Java Message Components – written in Java.
Message Builder is byte compiled, using the embedded MB compiler. After compilation, the code
can be executed using the MB interpreter (Virtual Machine).
A Message Builder program consists of one or more text files containing the program text. These files
are compiled using a compiler that produces an executable Message Builder program. This
executable program consists of instructions for a virtual machine. An interpreter implements this
virtual machine and is used to execute the program.
C-Extender
The loadable-object extension module feature of Message Builder enables you to load a C-written
module into a running Message Builder interpreter. This may be convenient when you interface the
integration engine with a new product, or when you require C-written routines for performance
reasons.
An object extension module adds a number of functions and statements to the Message Builder
language. These functions and statements are called in the same way as built-in and user functions
and statements.
To create a loadable-object extension module, you must do the following two things:
1. Write the C code that implements the module.
2. Write the Message Builder code that declares the new functions and statements and loads the
extension module.
Java-Extender
The Java extension enables you to call static Java methods from Message Builder code, and also to
make callbacks to Message Builder code from Java.
Communicating with Java code means (among other things) that you must be able to handle Java
objects. Since this is not possible in Message Builder, a Java utility class named ObjectMap is also
included. This class contains static methods used to insert objects into a global map, retrieve objects
from the map, and delete them. Integers (object map IDs) are used to identify objects in the map.
There is also a Message Builder interface to some of the methods in the class. The JAVA_MAP module
implements this interface.
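The actual ObjectMap class ships with B2Bi. Purely as an illustration of the pattern described above (a hypothetical sketch under the stated description, not the Axway implementation), a global map that stores objects under integer IDs, so that non-Java code only ever handles plain integers, might look like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the object-map pattern described above:
// objects live in a global map and are addressed by integer IDs.
public final class ObjectMapSketch {
    private static final Map<Integer, Object> MAP = new ConcurrentHashMap<>();
    private static final AtomicInteger NEXT_ID = new AtomicInteger(1);

    // Insert an object and return the integer ID that identifies it.
    public static int insert(Object o) {
        int id = NEXT_ID.getAndIncrement();
        MAP.put(id, o);
        return id;
    }

    // Retrieve the object stored under an ID (null if absent or deleted).
    public static Object retrieve(int id) {
        return MAP.get(id);
    }

    // Delete the object stored under an ID.
    public static void delete(int id) {
        MAP.remove(id);
    }
}
```

The class and method names here are invented for the sketch; the shipped ObjectMap class and its JAVA_MAP module define the real interface.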
The JAVA_UTIL Message Builder module contains various utility routines.
The JAVA_REFLECT Message Builder module uses the Java reflection mechanism to make it possible
to create objects, call Java methods, etc. without having to write Java wrapper code (typically, both
a Message Builder and a Java wrapper is needed to be able to call a Java method from Message
Builder code).
For detailed descriptions of how to work with MBCs, see:
l B2Bi Integration Engine MBC Developer Guide
l B2Bi Integration Engine Message Builder Guide
l B2Bi Integration Engine MBC Message Builder Library
This documentation is available on the B2Bi Server installation image, and at
https://support.axway.com.
To view the tasks that run on your integration engine:
1. Open a session in the B2Bi user interface.
2. Go to System management > System management.
This page displays a list of integration tasks and their current status. You can stop and start any of
the displayed tasks.
The following table describes some example tasks:
Core Services – Responsible for the transfer of messages among the activities.
Filer – Responsible for all disk I/O (the actual reading from and writing to disk).
Logger – Responsible for logging all message processing related activity.
Table – Responsible for storing message-related information (which, for example, can be used for correlation purposes).
Timer – Responsible for timer events (time-outs).
Trace – Responsible for interactions with the trace (log) files.
Each of the tasks described in the preceding table writes information to the shared file system.
The Queue task and Timer task information is updated / emptied automatically as part of message
processing.
Table task events either expire automatically (if event purges are configured) or can be removed
programmatically.
Every day, two new Trace files are created (one for system and one for processing events). Trace
files older than seven days are automatically deleted.
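B2Bi performs this cleanup itself; purely as a hedged illustration of the seven-day retention rule described above (not the actual B2Bi implementation, and the class name is invented), the rule could be sketched as:

```java
import java.io.File;
import java.time.Duration;

// Illustrative sketch of the retention rule described above: delete
// files whose last modification is older than a given number of days.
public final class TracePurgeSketch {
    public static int purgeOlderThan(File dir, int days) {
        long cutoff = System.currentTimeMillis() - Duration.ofDays(days).toMillis();
        int deleted = 0;
        File[] files = dir.listFiles();
        if (files == null) return 0;   // not a directory, or I/O error
        for (File f : files) {
            if (f.isFile() && f.lastModified() < cutoff && f.delete()) {
                deleted++;
            }
        }
        return deleted;
    }
}
```

Running purgeOlderThan(traceDir, 7) against a hypothetical trace directory would mimic the automatic seven-day deletion behavior.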
The Logger and Filer tasks are the only tasks that store persistent data. To view this data, use EDI
Tracker (see EDI Tracker on page 108) and Message Log (see Message Log on page 179).
The information stored by the Logger and Filer tasks is cleaned by the Archiver task. This task cleans
files on a schedule that you can define. It cleans all log-entries and accompanying payloads. By
default the Archiver task runs every day at midnight and cleans everything older than 30 days. To
manage this behavior, see Set the log file archiving schedule on page 470.
Configuration files, logs and tools specifically related to the integration engine are located in the
directory:
The B2Bi installation log is located in the root of the installation directory:
Files, logs and tools that are shared by the two engines are principally located in the directories:
B2Bi/
Common/
Configuration/
Tools/
For clustered operations, files shared between nodes are stored in the directory that you specify
during B2Bi installation.
l Payloads
l Log / trace files
l Message-processing details
This information is stored for a certain period, after which most of it is removed automatically.
The integration engine produces various log files. Apart from the trace information, there is no
inbuilt mechanism for purging the other log files. Normally these files do not grow as a result of
regular message processing, unless debugging has been turned on at processing step level for DML
maps (mapping.log), or in case of errors (starter.log / system.err).
Log on to the user interface and click System Management on the tool bar, to open the System
management page.
This page displays a full list of commands related to node and system management. For help with
any of the tasks listed on the System management page, select the task and then click Help > Help
for this page on the toolbar.
Trading engine
l Installation folder – <B2BI_installation_directory>/Interchange
l Logs folder – <B2BI_installation_directory>/Interchange/logs
l Configuration folder – <B2BI_installation_directory>/Interchange/conf
l Custom inline processes jars folder – <B2BI_installation_directory>/Interchange/site/jars
Integration engine
l Installation folder – <B2BI_installation_directory>/Integrator
l Logs folder – <B2BI_installation_directory>/Integrator/data/log
l Configuration folder – <B2BI_installation_directory>/Integrator/local/config
l Trace logs folder – <B2BI_installation_directory>/Integrator/data/trace/{logNumber}
The trading engine manages sending and receiving files between partners and back-end
applications and is the heart of B2Bi. The trading engine writes logs to the TE log.
The B2Bi engine manages the connection between the integration engine and the trading engine.
Log files can be divided in two main categories:
l Event logs
l System logs
The log files from the trading engine are written to <B2BI_install_directory>\Interchange\logs. The log files typically have the extension .log. They grow to a maximum of 10 MB. As soon as the 10 MB threshold is reached, a new log file is created and the old log file gets a number added to the extension.
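Purely as an illustration of the rollover naming scheme described above (a sketch under the assumption that numeric suffixes are assigned sequentially; this is not Axway code, and the names are invented):

```java
import java.io.File;

// Illustrative sketch of the rollover scheme described above: when a log
// reaches its size limit, the current file is renamed with a numeric
// suffix appended to the extension, and logging continues in a new file.
public final class LogRolloverSketch {
    static final long MAX_BYTES = 10L * 1024 * 1024;   // 10 MB threshold

    // Pick the next unused numbered name, e.g. te.log -> te.log.1, te.log.2, ...
    public static File nextRotatedName(File log) {
        int n = 1;
        File candidate;
        do {
            candidate = new File(log.getPath() + "." + n++);
        } while (candidate.exists());
        return candidate;
    }

    // Rotate if the file has reached the threshold; returns true if rotated.
    public static boolean rotateIfNeeded(File log) {
        if (log.length() < MAX_BYTES) return false;
        return log.renameTo(nextRotatedName(log));
    }
}
```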
The following table lists important trading engine logs:
Event logs
System logs
l B2Bi Administration Interface
l B2Bi Tools Client
l Password: Secret1
Administration menu
The menu bar at the top of the B2Bi user interface provides you with the following set of menus and
page access:
For a detailed description of the B2Bi Tools Client tools, see B2Bi integration engine management
tools on page 87.
For upgrade procedures, see the B2Bi Installation Guide.
When you run the installer in configure mode you have access to the original installation pages as
well as to additional pages that you can use to fine-tune the B2Bi performance and behavior.
1. In Windows Explorer, go to the root of the B2Bi Client or Server installation directory and right-click configure64.exe.
2. Select Run as administrator.
Alternatively, you can:
1. Go to Start > All Programs > Axway Software > [B2Bi Client or Server installation
name] > Configure.
2. Right-click Configure and select Run as administrator.
UNIX/Linux:
Go to the root of the B2Bi Client or Server installation directory and launch configure.sh.
Configuring
After you start an installer in configure mode, you use it much as you do in installation mode.
You can work either in a console display or a graphic interface display (Windows only) to view
your current installation settings and modify fields to meet your operating requirements.
l Use PassPort AM
o PassPort host
o PassPort port
o Shared secret
Integrator
l License Number
l License Key
l SAP connector
o Library path
l Queue size
l Message size limit – Default = 16384
l Use B2Bi Visibility
l Enable online archive
l Enable Integration Manager
l WebEDI
l ALE
l FTP
l File system
l HTTP
l Email
l Secure Transport
l Enable migration
B2Bi SAP exchanges require SAP version 3.0.9 libraries.
About resources
Resources are small installable programs that extend the standard set of message handling processes
provided by B2Bi.
A resource might enable B2Bi to converse with a specific type of remote application, or it might
modify the structure of a transiting file, or retrieve a specific data element from a database. The
functional possibilities of component resources are virtually limitless.
Resources can be classified in the following categories:
l Message Builder Component (MBC) resources, built using the Axway-proprietary Message Builder
language
l XSLT component resources for wrapping, transforming and validating the structure of XML files
l Map flows built using Mapping Services
l Maps built using Datamapper
l Maps built using the Axway Business Object Modeler (JTransform and TF-XSLT)
B2Bi includes a collection of resources that are delivered by the B2Bi Server installer. Some of these
are installed as part of the standard installation process and are immediately available for the
configuration of message exchanges. Other components are available as add-ons and must be
manually copied to an appropriate directory on the integration engine. Map components must also
be registered on the integration engine before they can be deployed in a message flow.
In addition to the installed resources and resource add-ons, you can build your own components
either using Java or the Message Builder language. You can also modify existing components to
perform specific tasks.
l The resource file must reside in <B2Bi_shared_directory>\local\4edi\component,
where <B2Bi_shared_directory> is the B2Bi shared directory specified during installation.
l If the resource is a map, it must be registered. MBC resources are automatically registered.
See Import and update component resources on page 44.
l A resource (the processing code) for a component can produce either zero or one output
messages. If the resource that is assigned to a component has no outputs (as in the case of a
Modifier MBC), the B2Bi UI page for configuring the component displays the option "Copy input
message if no output is created".
o When this option is disabled, B2Bi completes processing for the component and performs no
additional processing on the message.
o When this option is selected, B2Bi copies the input message to the output of the
component.
Selecting this option enables B2Bi to continue processing the message (mapping, etc.).
This also enables you to select the desired output format when configuring the component.
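Hypothetically, the effect of the "Copy input message if no output is created" option could be sketched as follows (illustrative names only; the real processing pipeline is internal to B2Bi):

```java
// Hypothetical sketch of the option's effect described above: if a
// component produces no output message, the input is either copied
// through (option selected) or processing stops (option disabled).
public final class CopyInputSketch {
    // Returns the message to continue processing with, or null to stop.
    public static byte[] resolveOutput(byte[] input, byte[] output,
                                       boolean copyInputIfNoOutput) {
        if (output != null) return output;            // component produced an output
        return copyInputIfNoOutput ? input : null;    // pass through, or stop here
    }
}
```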
Learning more
This section explains where to locate additional resources for learning about the different types of
B2Bi message-processing resources.
For detailed information about building Message Builder Components, refer to the following
documents, delivered on the B2Bi Server installer image:
l B2Bi Integration Engine MBC Developer Guide
l B2Bi Integration Engine Message Builder Guide
l B2Bi Integration Engine MBC Message Builder Library
XSLT Components
For information on deploying and managing XSLT components, refer to the B2Bi Administrator
Guide, "XSLT with B2Bi" chapter.
Resource management
General procedures for managing resources:
l Import and update component resources on page 44
l Manage the resource deployment server on page 77
Introduction
You may want to:
l Add new B2Bi component resources (flows) that you create in Mapping Services.
l Create new versions of resources that you have already deployed to the production server.
l Add resources that you edit or download from the Axway support site.
l Deploy additional example resources delivered with the B2Bi product:
o Mapping Services maps and flows
o Message Builder Component resources
l Deploy the flow directly from Mapping Services to a runtime server using
UI commands
This is the standard direct deployment method available to Mapping Services users with access
rights to B2Bi servers.
See Deploy using Mapping Services UI commands on page 44.
l Export a flow to a container, and then deploy the container using a command line
This is a deployment method that separates the roles of the Mapping Services flow creator and
the B2Bi Server flow administrator.
See Command line interface for the deployment server on page 79.
Prerequisites
l Create or update a flow in Mapping Services.
l Create a connection to a B2Bi runtime server.
For complete details on how to manage maps, flows and server connections, see the Mapping
Services documentation.
Deployment procedure
1. In the Mapping Services user interface Project tab, select a project, and in the directory
structure of the project, right-click the map or the flow that you want to deploy.
2. From the context menu, select Deploy... to open the Deploy page.
3. In the Deploy page:
l Make sure that the correct flow is selected for deployment.
l Select the target Runtime system and from the drop-down menu select the B2Bi
server to which you want to deploy.
If no server has yet been defined, select New connection from the drop-down list, and
configure a server connection in the configuration page.
l Optionally select to update any existing flow with the same name that is already
deployed to the selected runtime server.
4. Click Finish to launch the deployment.
Mapping Services deploys all of the necessary elements to compile and run the map flow on the
server.
A "Deployment Result" message confirms successful deployment.
<B2Bi_shared_directory>\local\4edi\component.
Where: <B2Bi_shared_directory> is the B2Bi shared directory specified during installation.
l JTransform – The JTF mapping configuration file in the formdb folder is *.tcf.
l TFXSLT – The TFXSLT mapping configuration file is *.xcf.
The tcf and xcf files are editable and serve as environment files.
1. Manually copy the JTF/TFXSLT project structure to:
l Windows: <B2Bi_shared_directory>\local\bom\lib\convert\project\<project_name>
l UNIX: <B2Bi_shared_directory>/local/bom/lib/convert/project/<project_name>
Where <B2Bi_shared_directory> is the B2Bi shared directory specified during
installation.
2. From the map project structure, copy the mapping configuration file (<mapname>.tcf or <mapname>.xcf) to:
l Windows: <B2Bi_shared_directory>\local\bom\lib\convert
l UNIX: <B2Bi_shared_directory>/local/bom/lib/convert
Where <B2Bi_shared_directory> is the B2Bi shared directory specified during
installation.
3. Change the extension of this configuration file from .tcf to .cfg.
l Windows: <B2BI_shared_directory>\local\bom\<project_name>
l UNIX: <B2Bi_shared_directory>/local/bom/<project_name>
Where: <B2BI_shared_directory> is the B2Bi shared directory specified during installation.
<integration_engine_install_directory>/solutions/example
The installer also deposits generic integration engine example files in the directory:
<integration_engine_install_directory>/example
You can use examples from either of these two locations in a B2Bi environment.
The files in these directories include:
l .java files – Java Message Component resources
l .x4 files – compiled Message Builder Component resources
l .s4 files – uncompiled Message Builder Component resources
You can then link the resources to component and to service objects in the B2Bi user interface.
1. Open a command console on the machine where the integration engine is installed.
2. Change directory to <integration_engine_install_directory>\
3. Run the command profile.bat.
4. Change directory to <integration_engine_install_directory>\local\4edi\component.
5. Enter the compiler command c4edi <component_file_name>.s4
The program compiles the MBC and changes the name to <component_file_name>.x4 to
indicate that it is an executable.
6. Copy the compiled MBC .x4 file to:
<B2Bi_shared_directory>\local\4edi\component
Where: <B2BI_shared_directory> is the B2Bi shared directory specified during installation.
You can then link the MBC to a component and to service objects in the B2Bi user interface.
After you have deployed resources to a B2Bi server, you can use them in message processing
sequences.
Currently B2Bi provides three versions of the in-house file detector. V1 and V2 are provided for
backward compatibility with earlier implementations of B2Bi. For new flow processing
configurations, use in-house detector V3.
To identify transiting in-house files, the in-house detector:
1. Reads the transiting message.
2. Retrieves information from the message parameters.
3. Checks a reference table for matches with the values found in the message.
When you install B2Bi, the three B2Bi in-house detector components are registered by default in
your system:
Common configuration
For all versions of this component, you must define the input and the output document formats.
These must be set to “In-house”:
l Type — From the drop-down list, select Detector.
l Resource filter – You can optionally enter a text string to filter the list of resources
that are displayed in the Resource field drop-down list. You can use wildcards ( *, ? ).
l Resource – From the list of available resources, select B2BX Application/B2Bi
Inhouse File Detection V3.
4. Select the Input tab and select the In-house input format.
5. Select the Output tab and select the In-house output format.
6. Select the Configuration tab and complete the fields:
Start line – Line index to begin parsing. Default = 1.
Number of characters to read in file – Enter the number of characters to be read in the document for detection.
Field separator – If you do not specify a field separator for this parameter, B2Bi looks for the partner and message identifiers based on fixed-position specifications that you enter in the fields below. In that case, you must either:
l Specify the beginning and end position for each field. Example: 10-16 indicates the 10th through 16th characters of the message, counted from the first character of the start line.
l Specify a fixed value by placing the value between asterisks.
Document version – The document version to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Document Type – The document type value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Document reference – The document reference value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Sender name – The sender ID value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Sender messaging profile ID – The messaging profile ID value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Recipient name – The receiver name value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Recipient messaging profile ID – The receiver messaging profile ID value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Test flag – The test flag value to be used as an in-house message detection value.
l If a field separator value is used in the message, enter the index value for the position of this identifier.
l If no field separator is defined, enter either:
o the begin-end character range
o a fixed value in quotation marks
Code 1 – Optionally, enter a code to trigger the selection of this component.
l If the message uses field separator characters, enter the code in the format: [field index]:[value]
l If the message does not have separators, enter the code in the format [start index-end index]:[value]
Example (no separators):
For a parsing with no field separators, you enter the code 2-11:MyPartner
During message parsing, if the value in the character range 2-11 in the message equals MyPartner (and if any other additional codes return correct values), then the in-house detector collects parameter values, and allows B2Bi to proceed with message processing.
Code 2 – Same as Code 1 above.
Code 3 – Same as Code 1 above.
Code 4 – Same as Code 1 above.
Code 5 – Same as Code 1 above.
Debug – Same as Code 1 above.
7. Click Add.
1*ORDERS*SENDER123*RECIPIENT4*1
The values in the message are separated by the "*" character and represent for this example:
To detect messages with this structure using field index values, we configure the following in-house
V3 detector:
Alternatively, we can detect the same message using fixed values. To do this we replace the index
field values with fixed values placed in parentheses:
1XXXORDERSSENDER123 RECIPIENT4BLAB1
12345678901234567890123456789012345678901234567890
The values in the message are:
1-4 Document version
5-10 Document Type
11-20 Sender ID
21-30 Recipient ID
35-35 Test Flag
To detect messages with this structure using position indicators, we configure the following in-
house V3 detector:
Alternatively, we can replace one or more position indicators with fixed values placed in
parentheses, as in the following example:
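The detector configuration screens referenced above are not reproduced in this text. Purely as a hedged illustration of the two matching modes described in this section (not the detector's actual code; class and method names are invented), a code such as 2-11:MyPartner compares a 1-based character range of the message against a fixed value, while separator-based configurations compare an indexed field instead:

```java
// Illustrative sketch of the two matching modes described above
// (not the actual in-house detector implementation).
public final class InhouseMatchSketch {
    // Fixed-position mode: compare a 1-based, inclusive character range,
    // e.g. a code "2-11:X" checks characters 2 through 11 of the line.
    public static boolean matchesRange(String line, int begin, int end, String value) {
        if (begin < 1 || end > line.length()) return false;
        return line.substring(begin - 1, end).trim().equals(value);
    }

    // Separator mode: compare the field at a 1-based index, e.g. with
    // separator "*", field 2 of "1*ORDERS*SENDER123" is "ORDERS".
    public static boolean matchesField(String line, String sep, int index, String value) {
        String[] fields = line.split(java.util.regex.Pattern.quote(sep), -1);
        return index >= 1 && index <= fields.length && fields[index - 1].equals(value);
    }
}
```

For the sample message 1*ORDERS*SENDER123*RECIPIENT4*1, a separator-mode check of field 2 against ORDERS would succeed, which is the kind of test the detector performs before collecting parameter values.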
Use the B2Bi in-house detector V2 component when you are creating a service where the syntax
expression is based on a separator and a column index.
Parameter Description
Start line Line number to begin parsing. Default=1.
Sender Id The sender ID value to be used as an in-house message detection
[Required field] value.
The value can either be:
l A fixed value (example: BMW_Manufacturer)
l A dynamic value: In this case, enter the character range in
which the value is located in the message
Receiver Id The receiver ID value to be used as an in-house message detection
[Required field] value.
The value can either be:
l A fixed value
l A dynamic value: In this case, enter the character
range in which the value is located in the
message
Document The document standard value to be used as an in-house message
Standard detection value.
[Required field] The value can either be:
l A fixed value
l A dynamic value: In this case, enter the character range in
which the value is located in the message
Document Type The document type value to be used as an in-house message
[Required field] detection value.
The value can either be:
l A fixed value
l A dynamic value: In this case, enter the character range in
which the value is located in the message
Test Flag The test status of the message to be used as an in-house message
detection value.
The value can either be:
l A fixed value
l A dynamic value: In this case, enter the character range in
which the value is located in the message
Document The document reference value to be used as an in-house message
reference detection value.
The value can either be:
l A fixed value
l A dynamic value: In this case, enter the character range in
which the value is located in the message
Number of Enter the number of characters to be read in the document.
Characters to read
in file
[Required field]
Code 1 Optionally, enter a code to trigger the selection of this component.
Use the syntax [character range]:[value]
Example:
You enter the code 2-8:MyPart
During message parsing, if the value in the character range
2-8 in the message equals MyPart (and any other additional
codes return correct values), then the in-house detector uses
the syntax defined in this component configuration to
collect parameter values, and permits B2Bi to proceed with
message processing.
Code 2 Optionally, enter a code to act as a trigger for the selection of this
component. Use the syntax [column index]:[column value]
Code 3 Optionally, enter a code to act as a trigger for the selection of this
component. Use the syntax [column index]:[column value]
Code 4 Optionally, enter a code to act as a trigger for the selection of this
component. Use the syntax [column index]:[column value]
Code 5 Optionally, enter a code to act as a trigger for the selection of this
component. Use the syntax [column index]:[column value]
Debug Indicate whether or not to activate debug and log information on the
Trace Viewer.
l y (activate)
l n (don't activate)
7. Click Add.
In this example, B2Bi extracts the first 250 characters of the input file. Then it checks whether the value
of “code 1”, “Renault”, matches the first column value and the value of “code 2”, “PSA”, matches
the second column value. If they match, then B2Bi retrieves attribute values using the
definitions: “Sender Id” is located between the 2nd and 8th characters, “Receiver Id” is located
between the 10th and 16th characters, “Document standard” is located between the 20th and 25th
characters, “Document type” is located between the 26th and 32nd characters, and the “Test flag”
value is located between the 40th and 41st characters.
Use the B2Bi in-house detector V1 component when you are creating a service where the syntax
expression is based on a separator and a column index.
Parameter Description
Sender Id The sender ID value to be used as an in-house message detection value.
[Required The value can either be:
field]
l A fixed value (example: BMW_Manufacturer)
l A dynamic value: In this case, enter the column number in which
the value is located in the message (the field separator character
is defined below)
Receiver Id The receiver ID value to be used as an in-house message detection value.
[Required The value can either be:
field]
l A fixed value
l A dynamic value: In this case you must indicate the column in
which the value can be found in the message (the field separator
character is defined below)
Document The document standard value to be used as an in-house message detection
Standard value.
[Required The value can either be:
field]
l A fixed value
l A dynamic value: In this case you must indicate the column in
which the value can be found in the message (the field separator
character is defined below)
Document The document type value to be used as an in-house message detection value.
Type The value can either be:
[Required
l A fixed value
field]
l A dynamic value: In this case you must indicate the column in
which the value can be found in the message (the field separator
character is defined below)
Test Flag The test status of the message to be used as an in-house message detection
value.
The value can either be:
l A boolean value: y/n
l A dynamic value: In this case you must indicate the column in
which the value can be found in the message (the field separator
character is defined below)
Document The document reference value to be used as an in-house message detection
Reference value.
The value can either be:
l A fixed value
l A dynamic value: In this case you must indicate the column in
which the value can be found in the message (the field separator
character is defined below)
Field Enter the character used as the field separator in the message.
separator You can enter a simple printable character (example: “;”), or a simple string
value (example: “_;_”).
Alternatively, you can enter a more complex definition using a regular
expression (for example, a pattern match or a Unicode character such as
“\u010A”).
If you are using a regular expression and your separator character
has a special meaning in a regular expression, you must precede the
character with the escape character. For example, if you have a file where the
separator is the “*” character, you must set the separator value to “\*”.
Number of Enter the number of characters to be read in the document.
characters
to read in
file
[Required
field]
Code 1 Optionally, enter a code to act as a trigger for the selection of this
component. Use the syntax [column index]:[column value]
Example:
You enter the code 1:MyPartner
During message parsing, if the value of column 1 in the message
equals MyPartner (and any other additional codes return correct
values), then the in-house detector uses the syntax defined in this
component configuration to collect parameter values, and permits
B2Bi to proceed with message processing.
Code 2 Optionally, enter a code to act as a trigger for the selection of this
component.
Use the syntax [column index]:[column value]
Code 3 Optionally, enter a code to act as a trigger for the selection of this
component.
Use the syntax [column index]:[column value]
Code 4 Optionally, enter a code to act as a trigger for the selection of this
component.
Use the syntax [column index]:[column value]
Code 5 Optionally, enter a code to act as a trigger for the selection of this
component.
Use the syntax [column index]:[column value]
Debug Indicate whether or not to activate debug and log information on the Trace
Viewer.
l y (activate)
l n (don't activate)
7. Click Add.
In this example, B2Bi analyzes only the first 300 characters of the input file and splits this segment
into column units, using the separator “*”. It then checks whether the value of “code 1”, “BMW”,
matches the first column value and the value of “code 2”, “PO”, matches the second column value. If they
match, then B2Bi retrieves the B2Bi attributes using the definitions in the component parameters:
“Sender Id” is a fixed value (BMW_Manufacturer), “Receiver Id” is located in column 6, “Document
standard” is located in column 8, “Document type” is located in column 7, and the “Test flag” value is
located in column 10.
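The matching performed in this example can be sketched as follows; `v1_detect` is a hypothetical helper and the sample message is invented for illustration. Note how the “*” separator must be escaped as “\*” when it is given as a regular expression:

```python
import re

# Illustrative sketch of the V1 column-based detection logic; the
# component itself is configured in the UI, not coded.
def v1_detect(message, separator_pattern, codes, max_chars=300):
    # Analyze only the first "Number of characters to read in file"
    segment = message[:max_chars]
    # "*" has special meaning in a regular expression, so the
    # separator value is escaped as "\*"
    columns = re.split(separator_pattern, segment)
    # Every [column index]:[column value] code must match (1-based)
    return all(columns[index - 1] == value for index, value in codes)

sample = "BMW*PO*0001*X*Y*DEALER42*ORDERS*EDIFACT*Z*y"
v1_detect(sample, r"\*", [(1, "BMW"), (2, "PO")])   # True
v1_detect(sample, r"\*", [(1, "Renault")])          # False
```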
The XPath detector component enables you to dynamically identify categories of XML-based
documents that pass through B2Bi servers.
Implementation procedure
1. Open the B2Bi user interface.
2. Click the toolbar Trading Configuration menu and select Manage trading configuration.
3. Select the community that represents your local trading identity.
4. Click Message processing in the community navigation graphic at the top of the page.
5. Select Add a processing step.
6. Name – Enter a name for the processing step. For example: B2Bi XPath detector.
7. Type – From the drop-down list, select Detector.
8. Component – From the drop-down list, select B2Bi Application/XPathDetector.
When you select this component, the user interface displays a set of configuration fields.
Use the following table to complete the fields.
Parameter Description
XPath Indicate where in the XML message to find the information related to the
sender sender identifier. This identifier is later used for the partner management
identifier call.
For example: /ORDERS/UNH/cmp02/e01_0004
XPath Indicate where in the XML message to find the information related to the
recipient recipient identifier.
identifier For example: /ORDERS/UNH/cmp03/e01_0010
XPath Enter the document format to be used by the messages you are detecting.
document For example: "XML_D99B"
standard
XPath Enter where in the XML message to retrieve the document type information,
document using XPath access.
type For example: /ORDERS/UNH/cmp01/e01_0065
XPath test Enter where in the XML message to retrieve the information related to the test
flag flag, using XPath access.
XPath Enter where in the XML message to retrieve the information related to the
document document reference, using XPath access.
reference
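For illustration, XPath expressions like those in the examples above can be applied to a sample message. The element structure below is an assumption modeled on the example paths, not a real document type; Python's ElementTree supports only a limited XPath subset, expressed relative to the root element:

```python
# Illustration only: element names are assumptions modeled on the
# example paths in the table above.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<ORDERS><UNH>"
    "<cmp01><e01_0065>ORDERS</e01_0065></cmp01>"
    "<cmp02><e01_0004>SENDER123</e01_0004></cmp02>"
    "<cmp03><e01_0010>RECIPIENT4</e01_0010></cmp03>"
    "</UNH></ORDERS>"
)

sender = doc.findtext("UNH/cmp02/e01_0004")     # /ORDERS/UNH/cmp02/e01_0004
recipient = doc.findtext("UNH/cmp03/e01_0010")  # /ORDERS/UNH/cmp03/e01_0010
doc_type = doc.findtext("UNH/cmp01/e01_0065")   # /ORDERS/UNH/cmp01/e01_0065
```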
Implementation overview
The zip processing is called via an exit in the processing flow. Exit processing is provided by a B2Bi
resource. Once the resource is installed on the integration engine, you can implement the exit in a
message flow by:
1. Adding the resource to a component.
2. Adding the component to one or more services.
3. Adding the service to one or more agreements.
Implementation procedure
1. In the B2Bi user interface, from the Processing configuration menu, select Manage
components to open the Manage components page.
2. From the related tasks list, click Add a component to open the Add a component wizard.
3. In the Type field, from the drop-down list, select one of the following component types:
l Post enveloping
l Post transfer failed
l Post transfer success
4. In the Resources field, from the drop-down list, select B2BX Application/B2Bi Zip.
When you select this resource, the user interface displays a set of tabs.
5. In the Name field, accept the default name or enter a preferred name for the component.
6. In the Input tab, select the input format you expect from the partner, or select Unspecified if
you want to receive multiple formats.
7. In the Output tab, select the output format you expect to send to the partner, or select
Unspecified if you want to send multiple formats.
8. In the Configuration tab, complete the fields:
Parameter Description
Archive type to be Type of archive that will be created by the component. The
created possible values are:
l Zip
l Tar
l GZip
Archive compression Compression ratio for the compressed archive.
parameter Warning: Compression has an impact on global
performance. Select according to your system preferences.
l BEST_COMPRESSION
l BEST_SPEED
l DEFAULT_COMPRESSION
l NO_COMPRESSION
Name of the document Name of the document that is contained in the compressed
inside the archive package.
Activate debug Whether or not to activate debug and log information on the
Trace Viewer.
l y (activate)
l n (don't activate)
9. Click Add.
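The archive types and compression settings correspond to standard formats. The following sketch shows roughly equivalent output for each option using Python's standard library; the payload and inner document name are invented example values, not B2Bi defaults:

```python
import gzip
import io
import tarfile
import zipfile
import zlib

# Example values; "inner_name" corresponds to the "Name of the
# document inside the archive" parameter.
payload = b"<ORDERS>example</ORDERS>"
inner_name = "order.xml"

# Archive type "Zip" with BEST_COMPRESSION (zlib level 9)
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", zipfile.ZIP_DEFLATED,
                     compresslevel=zlib.Z_BEST_COMPRESSION) as zf:
    zf.writestr(inner_name, payload)

# Archive type "Tar" (tar itself adds no compression)
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tf:
    info = tarfile.TarInfo(inner_name)
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))

# Archive type "GZip" with BEST_SPEED (zlib level 1)
gz_bytes = gzip.compress(payload, compresslevel=zlib.Z_BEST_SPEED)
```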
Implementation overview
The DPS caller resource takes the file from the B2Bi environment and writes it to the disk as a
temporary file. It then calls the old AMTrix map (to do the conversion) and writes the output files.
These output files are read by the Caller MBC and sent to the integration engine. If the Caller MBC is
called by the integration engine, it first prepares the environment for the conversion program (the
old map program). It creates a record (migration interface record) that contains the environment
information, such as the current log ID, the receive attributes, and the temporary written input file
name. This record serves as a communication memory area provided to the old map. The Caller MBC
reads the input file from the B2Bi environment and writes it to a temporary file.
After this, the Caller MBC loads the old map (x4) so that the map runs in the same environment as
the Caller MBC. The Caller MBC then calls the routine of the old map named DPS_MAIN. DPS_MAIN
receives the interface record as a parameter, acquiring full access to the record. Next, the conversion
part of the old map is called. The map converts the input file and writes the output files.
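The sequence above can be sketched in outline. Everything in this sketch is an assumption for illustration: the record layout, file naming, and calling convention of DPS_MAIN are internal to B2Bi and are not documented in this guide.

```python
import ctypes
import os
import tempfile

class MigrationInterfaceRecord(ctypes.Structure):
    # Hypothetical layout of the "communication memory area" handed
    # to the old map; the real field set is internal to B2Bi.
    _fields_ = [
        ("log_id", ctypes.c_char * 64),
        ("input_file", ctypes.c_char * 256),
    ]

def call_dps(dps_name, payload, log_id):
    # 1. Write the B2Bi input message to a temporary file
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(payload)
    # 2. Build the interface record with the environment information
    record = MigrationInterfaceRecord(log_id.encode(), path.encode())
    # 3. Load the old map (<name>.x4) into the same process ...
    old_map = ctypes.CDLL("./%s.x4" % dps_name)
    # 4. ... and hand the record to DPS_MAIN, which runs the
    #    conversion and writes the output files
    old_map.DPS_MAIN(ctypes.byref(record))
```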
Implementation procedure
1. In the B2Bi user interface, from the Processing configuration menu, select Manage
components.
2. From the related tasks list, click Add a component to open the Add a component wizard.
3. In the Type field, select the appropriate type.
4. If desired, enter a text string in the Resource filter field to filter the list of resources that are
displayed in the Resource field drop-down list. You can use wildcards ( *, ? ).
5. In the Resource field, from the drop-down list, select B2BX Application/B2Bi DPS Caller.
When you select this resource, the user interface displays a set of tabs.
6. In the Name field, accept the default name or enter a preferred name for the component.
7. On the Input tab, select the input format you expect from the partner, or select Unspecified
if you want to receive multiple formats.
8. On the Output tab, select the output format you expect to send to the partner, or select
Unspecified if you want to send multiple formats.
9. On the Configuration tab, complete the fields:
Parameter Description
DPS Name Enter the name of the DPS only. For example, if your DPS has the file name
“filter.x4”, enter “filter”.
Arguments Specify the list of arguments to be used. In AMTrix it is possible to give
arguments to a DPS program if you insert it into the AMTrix Process Manager.
To call old programs that are designed as DPS and expect arguments, enter
them here. Note that the system environment variables are not available here.
AMTrix If a program needs valid values for the sender, sender sub address, or the
Sender:Sub:Name sender name, enter them here, separated by colons.
AMTrix If a program needs valid values for the recipient, recipient sub address, or the
Recipient:Sub:Name recipient name, enter them here, separated by colons.
AMTrix Document Type Specify the document type. Its value is set to the document type selected for
the document agreement.
Debug (y/[n]) If set to yes [y], the migrated map will write debug information to the Trace
Log.
File Represents the first 4 identification fields used in AMTrix agreements; values
Syntax:Type:Version:Tag must be separated by colons.
Id1 Sender:Recipient Represents fields 5 and 6 (the Sender / Recipient Id1 field used in AMTrix
agreements); values must be separated by colons.
Id2 Sender:Recipient Represents fields 7 and 8 (the Sender / Recipient Id2 field used in AMTrix
agreements); values must be separated by colons.
Id3 Sender:Recipient Represents fields 9 and 10 (the Sender / Recipient Id3 field used in AMTrix
agreements); values must be separated by colons.
Status Sender:Recipient Represents fields 11 and 12 (the Status field for the Sender / Recipient used in
AMTrix agreements); values must be separated by colons.
Label 13:14 Represents fields 13 and 14 used in AMTrix agreements; values must be
separated by colons.
Label 15:16 Fields 15 and 16 used in AMTrix agreements; values must be separated by
colons.
Label 17:18 Fields 17 and 18 used in AMTrix agreements; values must be separated by
colons.
Label 19:20 Fields 19 and 20 used in AMTrix agreements; values must be separated by
colons.
Allow no outputs (y/[n]) If this option is set to yes (y), no error results if the component does not
produce output.
Custom Optional Data In general, if your DPS tries to parse the optional data, it will fail. However,
(y/[n]) there is one case where DPS parsing of optional data will work. This is the
case if the DPS manipulates ONLY optional data that is not defined by AMTrix,
but is custom defined and set by a previous DPS. To enable this scenario, set
this parameter to yes (y).
Control Specify “STANDARD” or “CUSTOM” to override the behavior of SEND_FILE.
Use STANDARD for Datamapper components. Use CUSTOM for non-
Datamapper components.
Produces EDI Document If this option is set to yes (y), the output is considered to be EDI.
(y/[n])
10. Click Add.
Implementation procedure
1. In the B2Bi user interface, from the Processing configuration menu, select Manage
components.
2. From the related tasks list, click Add a component to open the Add a component wizard.
3. In the Type field, select the appropriate type.
4. If desired, enter a text string in the Resource filter field to filter the list of resources that are
displayed in the Resource field drop-down list. You can use wildcards ( *, ? ).
5. In the Resource field, from the drop-down list, select B2BX Application/B2Bi Loadable
Caller.
When you select this resource, the user interface displays a set of tabs.
6. In the Name field, accept the default name or enter a preferred name for the component.
7. On the Input tab, select the input format you expect from the partner, or select Unspecified
if you want to receive multiple formats.
8. On the Output tab, select the output format you expect to send to the partner, or select
Unspecified if you want to send multiple formats.
9. On the Configuration tab, complete the fields:
Parameter Description
AMTrix Loadable Name Enter the name of the loadable program only. For example, if
your loadable program has the file name “filter.x4”, enter
“filter”.
Arguments Specify the list of arguments to be used. In AMTrix it is possible
to give arguments to a loadable program if you insert it into the
AMTrix Process Manager. To call old programs that are designed
as loadable and expect arguments, enter them here. Note that
the system environment variables are not available here.
AMTrix If a program needs valid values for the sender, sender sub
Sender:Sub:Name address, or the sender name, enter them here, separated by
colons.
AMTrix If a program needs valid values for the recipient, recipient sub
Recipient:Sub:Name address, or the recipient name, enter them here, separated by
colons.
AMTrix Document Type Specify the document type. Its value is set to the document type
selected for the document agreement.
Debug (y/[n]) If set to yes [y], the migrated program writes debug information
to the Trace Log.
File Represents the first 4 identification fields used in AMTrix
Syntax:Type:Version:Tag agreements; values must be separated by colons.
Id1 Sender:Recipient Represents fields 5 and 6 (the Sender / Recipient Id1 field used
in AMTrix agreements); values must be separated by colons.
Id2 Sender:Recipient Represents fields 7 and 8 (the Sender / Recipient Id2 field used
in AMTrix agreements); values must be separated by colons.
Id3 Sender:Recipient Represents fields 9 and 10 (the Sender / Recipient Id3 field used
in AMTrix agreements); values must be separated by colons.
Status Sender:Recipient Represents fields 11 and 12 (the Status field for the Sender /
Recipient used in AMTrix agreements); values must be separated
by colons.
Label 13:14 Represents fields 13 and 14 used in AMTrix agreements; values
must be separated by colons.
Label 15:16 Represents fields 15 and 16 used in AMTrix agreements; values
must be separated by colons.
Label 17:18 Represents fields 17 and 18 used in AMTrix agreements; values
must be separated by colons.
Label 19:20 Represents fields 19 and 20 used in AMTrix agreements; values
must be separated by colons.
Allow no outputs (y/[n]) If this option is set to yes (y), no error results if the component
does not produce output.
Custom Optional Data In general, if your loadable program tries to parse the optional
(y/[n]) data, it will fail. However, there is one case where the loadable
program parsing of optional data will work. This is the case if
the loadable program manipulates ONLY optional data that is
not defined by AMTrix, but is custom defined and set by a
previous program. To enable this scenario, set this parameter to
yes (y).
Control Specify “STANDARD” or “CUSTOM” to override the behavior of
SEND_FILE. Use STANDARD for Datamapper components. Use
CUSTOM for non-Datamapper components.
Produces EDI Document If this option is set to yes (y), the output is considered to be
(y/[n]) EDI.
10. Click Add.
Implementation procedure
1. In the B2Bi user interface, from the Processing configuration menu, select Manage
components to open the Manage components page.
2. From the related tasks list, click Add a component to open the Add a component wizard.
3. In the Type field, select Map.
4. If desired, enter a text string in the Resource filter field to filter the list of resources that are
displayed in the Resource field drop-down list. You can use wildcards ( *, ? ).
5. In the Resource field, from the drop-down list, select B2BX Application/B2Bi Set
Inbound EDI Separators.
When you select this resource, the user interface displays a set of tabs.
6. In the Name field, accept the default name or enter a preferred name for the component.
7. On the Input tab, select the input format you expect from the partner, or select Unspecified
if you want to receive multiple formats.
8. On the Output tab, select the output format you expect to send to the partner, or select
Unspecified if you want to send multiple formats.
9. On the Configuration tab, complete the fields:
Parameter Description
Segment separator character (or The segment separator value matching the incoming
hex value with \H prefix) EDI document.
Element separator character (or The element separator value matching the incoming
hex value with \H prefix) EDI document.
Composite separator character (or The composite separator value matching the incoming
hex value with \H prefix) EDI document.
Release character (or hex value The release character value matching the incoming
with \H prefix) EDI document.
Debug (y/[n]) If set to "y", the component will produce additional
debug information in the Trace Viewer.
10. Click Add.
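As an illustration of how these separator values are used, the sketch below splits an EDIFACT-style document into segments while honoring the release character; the segment content is a simplified example, not output from the component:

```python
import re

# Simplified illustration: split an EDIFACT-style document into
# segments using a segment separator ("'") and a release character
# ("?"); a separator preceded by the release character is data.
def split_segments(edi, segment_sep="'", release="?"):
    pattern = "(?<!%s)%s" % (re.escape(release), re.escape(segment_sep))
    return [s for s in re.split(pattern, edi) if s]

sample = "UNB+UNOA:1+SENDER+RECIPIENT'UNH+1+ORDERS'"
split_segments(sample)  # -> ["UNB+UNOA:1+SENDER+RECIPIENT", "UNH+1+ORDERS"]
```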
Map updates
Working in Mapping Services, you can edit and deploy new versions of existing Maps to a B2Bi
Server for use in message handling. See the Mapping Services DML User Guide for details about Map
editing and deployment.
After you edit a map in Mapping Services and redeploy it (in a container) to a B2Bi Server, the Server
detects the updated version and replaces the older version.
l Name of an output
l Number of outputs
l Settings of an existing output
When you redeploy a container from Mapping Services, the B2Bi Server analyzes the Map
modification. If B2Bi detects a difference between certain parameters of the updated Map and the
data that is contained in the database for that Map, it generates a warning message on the
component page.
To handle these types of warnings, use the following procedure
<component name> has been updated. The following components will be updated after
clicking save:
Services
<linked service name>
<linked service name>
Document agreements
<document agreement name>
<document agreement name>
Check the logs for a description of the update.
Custom-Functions
When you develop Custom-Functions in Mapping Services, use them in a Flow, and then deploy the
Flow to a B2Bi Server, the Custom-Functions are not automatically deployed to the server upon
deployment of the Flow. You must manually deploy the JAR files containing the code of these
Custom-Functions. This is true for Custom-Functions used in any part of the Flow: Business-
Document, Validation Rules, Maps, or any other object. For details about Custom-Function manual
deployment, see the B2Bi Installation Guide.
Multiple-session support
Multiple users of Mapping Services can simultaneously connect to the B2Bi Server Map repository to
query the deployed Maps (read access), but multiple users cannot simultaneously
deploy/update/delete Maps (write access). Only one write-access session is allowed at a time.
Container dependencies
In Mapping Services, Maps and Flows are packaged in “containers” for deployment. By default,
Mapping Services deploys containers with the “independent containers” option selected. When this
option is selected, each Flow is packed in its “own” container and the Map/Flow operates as a self-
contained item on the server side. This enables you to avoid unwanted relationships between maps
on the server side that share the same objects (such as Business-Documents), and the side effects
those relationships can cause.
The result of not selecting the “independent containers” option is a potential dependency between
containers. This dependency can make container removal problematic, as containers must be
removed in the opposite order from which they were deployed.
Container synchronization
The integration engine and the trading engine must have the same set of containers. To manage this
synchronization, the B2Bi deployment server compares:
l The number of unique containers contained in the B2Bi database table b2bimapcontainers.
l The number of the unique containers returned by the command ./mapProxyDeployer LIST.
If B2Bi detects more containers on the integration engine side, the non-synchronized containers are
removed.
If B2Bi detects fewer containers on the integration engine side, the missing containers are
redeployed.
De-synchronization may occur, for instance, if some containers were removed by the deployment
server when the integration engine was stopped. In that case, all the missing containers are
redeployed at integration engine start-up.
If a large number of containers need to be redeployed, the redeployment operation may take some
time. This can temporarily prevent users from deploying new flows from Mapping Services.
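The comparison rule above can be expressed as a small sketch; `plan_sync` is a hypothetical stand-in for the deployment server's internal logic, with inputs representing the b2bimapcontainers table and the `./mapProxyDeployer LIST` output:

```python
# Hypothetical sketch: extra containers on the integration engine are
# removed; missing containers are redeployed.
def plan_sync(db_containers, engine_containers):
    db, engine = set(db_containers), set(engine_containers)
    return {
        # more containers on the integration engine side -> removed
        "remove": sorted(engine - db),
        # fewer containers on the integration engine side -> redeployed
        "redeploy": sorted(db - engine),
    }

plan = plan_sync({"flowA", "flowB"}, {"flowA", "orphan"})
# plan == {"remove": ["orphan"], "redeploy": ["flowB"]}
```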
Deployment performance
The greater the number of Maps on the B2Bi Server, the longer the time required to deploy
containers.
The number of deployed Maps does not affect the runtime performance. But it does affect:
l Deployment time for new containers
l Time needed to complete the addition of a new runtime node in a cluster
l Node (map) synchronization logic.
The number of deployed Maps includes the history of each Map being deployed (example: a Map
deployed and then updated three times counts as four Maps). As a result, each Map deployment
causes the B2Bi Server repository to grow. Because all of these versions are potentially dependent
on each other, B2Bi must deploy all of them in the exact same order on each node, which slows
deployment and response time.
l Mapping Services attaches a “version” to a container at deployment time. If you redeploy a
container with the same name, the result is multiple instances of the same container on the
runtime system, although only the last deployed container of the multiple instances is active for
message processing. This has several negative effects. There is a negative impact on deployment
performance (see the performance section above). Additionally, due to the relationships and
hierarchical structure between the various versions, multiple versions of a container make
container removal more difficult. Several removal operations may have to be performed
manually.
l The “version” information that is attached to a container is not used by B2Bi at runtime to
identify a container. For this reason, this information does not enable a roll-back to an earlier
version.
l The displayed list of containers on the Mapping Services side does not clearly reflect the actual
runtime available content in all cases. The container list shows both the old (inactive) versions as
well as the current (active) versions.
In addition to the user interface features described in this chapter, you can use a command line tool
to manage the deployment server. See Command line interface for the deployment server on page
79.
There are two principal sources of processing resources:
l An initial set of maps and other processing resources is installed with each integration engine
at installation time.
l You typically create additional resources in a development environment.
After you create maps and other resources in the development environment, you deploy them to the
runtime system through the deployment server. The deployment server stores copies of the
deployed resources in a dedicated cache. This cache also stores any resources that were originally
deployed to integration engines during installation.
The deployment server is responsible for the synchronized deployment of these resources from the
cache to the integration engine on each node of the B2Bi cluster. Whenever a new node joins the
cluster, the deployment server synchronizes the resource content of the integration engine of the
new node with the content of the server cache. In this way, the deployment server ensures that all
nodes are populated with the maps and other processing resources required for runtime processing.
In some cases you may detect a de-synchronization between the population of cached resources
and the resources that reside on one or more integration engines. The following paragraphs
describe tools for managing this type of de-synchronization.
For your environment to be fully operational and protected as an Active / Active cluster, the number
of resources listed in each of the summary sections should be identical.
l Remove all – Select this command to remove all resources from the deployment server cache.
Warning: This command also removes all deployed resources from the cache and from all
connected integration engines. You must redeploy all maps and other processing resources in
order to restore message-processing capacity. Use this command only if you have a specific
operations issue that requires it.
l Refresh cluster – Use this command to deploy all resources from the deployment server cache
to all connected integration engines.
Depending on the volume of the cache, this process may take several minutes.
When you use either of the above commands, B2Bi executes the action on the cache and displays a
summary of the files that were removed or synchronized in your runtime environment.
l Import Container action
l Remove Container action
l Display Repository Container List action
You can optionally add more permissions as required.
Procedure
1. When you are licensed to use PassPort with B2Bi, and you have not yet set the deployment
authentication credentials to PassPort, the following message appears on the Manage
deployment server page:
"Set the default deployment server credentials"
Click this message to open the Deployment server credentials page.
2. On the credentials page, indicate the PassPort access credentials that the deployment server
should use:
l Domain – Enter the PassPort domain under which the deployment server user name
was created.
l User name – Enter the user name for the deployment server to use to access PassPort.
l Password – Enter the password of the Deployment server user account.
3. Click Save changes.
Commands are not case-sensitive (except for the containerId, dirName, and hostName parameters).
If no command is specified or an unknown command is used, a usage info message is displayed.
To display a comprehensive list of commands and their syntax, from the \tools directory enter:
mapProxyDeployer
Available commands:
l *LIST [location] – Lists all containers from Map Proxy or only those found at the specified
location. Location can be CACHE, DEPLOY, or REJECT. For every container, this command
displays name, version, ID, a short description, generation date, and location.
l *REMOVE containerId – Removes the container with the specified ID from Map Proxy. This
command permanently deletes the database entry and the related folder stored on the file
system. To broadcast the update to all configured integration engine instances, run FORCE_SYNC.
With each command that you send to the B2Bi server, you must include the user name and
password of an authorized server account. For example, to list all currently deployed containers:
Example
This example describes a series of procedures in which a flow deployment container is first created
in Mapping Services, then manually edited and zipped, and finally deployed to a B2Bi server. In many
cases, these tasks can be distributed to persons with different roles in an enterprise. For example,
map design personnel may create and maintain maps and container content, while a server
administrator may handle the container deployment to runtime environments.
To deploy a flow that has been created in Mapping Services to all configured integration engine
instances:
Mapping Services builds an export container in the specified folder.
A "Deployment Result" message confirms the successful container build and location.
Attributes.xml
BusinessDocument_1.xml
BusinessDocument_2.xml
Component.xml
Container.xml
Info.xml
MapBroker_1.xml
MapStage_1.xml
ObjectDependencies.xml
Variables.xml
Where:
l The DEPLOY command sends the container to the B2Bi server.
l The FORCE_SYNC command broadcasts all changes to all configured integration engines (or,
optionally, only to a configured integration engine that you specify).
You can configure Monitoring Framework to send the collected data to a third-party application for
viewing and analysis.
You configure monitoring dynamically so that you can enable statistics reporting at any time
without restarting B2Bi.
Histograms
Monitoring Framework operation is based on a statistics tool called Histogram. Histogram measures
the distribution of values in a stream of data (for example, the number of results returned by a
search). Histogram metrics enable you to measure common metrics such as the min, mean, max,
and standard deviation of values, as well as metrics of points taken at regular intervals (quantiles)
such as the median or 95th percentile. To accomplish this, the tool maintains a small, manageable
reservoir that is statistically representative of the data stream as a whole. This technique is called
reservoir sampling.
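The reservoir sampling technique described above can be illustrated with a minimal sketch (Algorithm R). This is not B2Bi's internal code; the function name and the reservoir size used here are assumptions for illustration only.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, value in enumerate(stream):
        if i < k:
            reservoir.append(value)   # fill the reservoir with the first k items
        else:
            j = random.randint(0, i)  # each later item replaces a slot with probability k/(i+1)
            if j < k:
                reservoir[j] = value
    return reservoir

# Summary statistics (min, mean, max, quantiles) are then computed over the
# small reservoir instead of the full stream:
sample = sorted(reservoir_sample(range(1_000_000), k=1028))
median = sample[len(sample) // 2]
p95 = sample[int(len(sample) * 0.95)]
```

Because the reservoir stays a fixed size, memory use is bounded no matter how long the data stream runs.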
You can configure Monitoring Framework reporters to send the statistics it collects to any of the
following third-party and open source metrics reporter applications:
l Apache Log4j – Generates statistic log files
l CSV – Gathers detailed CSV-formatted data on specific metrics.
l JMX – Enables the availability of metrics to all JMX agents (including Java VisualVM).
l Graphite – Open-source monitoring framework
Timers
A timer is a set of data that measures the duration of a type of event and the rate of its occurrence.
Timer data is embedded in the data of the related Filter/Name for a given Reporter. When you
configure the reporter and select the filters for sending data to that reporter you can include or
exclude timer data.
Monitoring Framework provides timers that enable B2Bi to collect measurements of a variety of values.
The following example shows a Log4j output with timer data on how long it takes to update a
database heartbeat:
Monitoring Framework provides timers to gather statistics on the following actions:
l Event purge
l Message purge
l B2Bi sending client
l Consumption time (by pickup)
l Production sending (by delivery)
l File system health monitor
l Cluster Database Heartbeat
l All PM Server requests (The PM Server is a B2Bi internal server that provides the integration
engine with appropriate processing and partner configuration at runtime).
l Sending events to Sentinel
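As described above, a timer pairs a duration measurement with an occurrence rate. The following Python sketch illustrates that concept only; the Timer class and its method names are hypothetical and are not the Monitoring Framework implementation.

```python
import time

class Timer:
    """Toy timer: records event durations and the overall rate of occurrence."""
    def __init__(self):
        self.durations = []               # seconds per measured event
        self.started = time.monotonic()

    def time(self, func, *args):
        start = time.monotonic()
        result = func(*args)
        self.durations.append(time.monotonic() - start)
        return result

    def stats(self):
        elapsed = time.monotonic() - self.started
        n = len(self.durations)
        return {
            "count": n,
            "mean_duration": sum(self.durations) / n if n else 0.0,
            "max_duration": max(self.durations, default=0.0),
            "rate_per_sec": n / elapsed if elapsed > 0 else 0.0,
        }

t = Timer()
t.time(sum, range(1000))      # measure one "event"
print(t.stats()["count"])     # → 1
```

A real metrics timer would additionally keep its durations in a reservoir histogram so that quantiles can be reported.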
1. Go to <Interchange_install_directory>/conf and open the file
monitoringconfig.xml in a text editor.
2. Use the tables that follow this procedure to specify the target reporter applications and the
statistics to be sent.
3. Save the file.
Note: You do not have to restart B2Bi; statistics monitoring changes take effect immediately.
Reporter attributes
To control the amount and type of statistics that you send to the reporter application, set the
following attributes:
Parameter Description
Enabled Set to true or false to enable or disable the reporter.
rateUnit TimeUnit (Seconds, MS, Minutes) to convert rate values to. Example: events/second
or events/minute.
durationUnit TimeUnit (Seconds, MS, Minutes) to convert measured time periods to.
writeInterval Interval, in seconds, at which metrics are sent to the reporter. Example: 5 – statistics
are sent every 5 seconds.
Filter Use filters to control which statistics to send to the reporter. See the following
table for a list and description of available filters.
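As a sketch only, the attributes in the table above might be combined in a reporter entry such as the following. The element names and nesting shown here are assumptions for illustration; consult the monitoringconfig.xml shipped with your installation for the authoritative syntax.

```xml
<!-- Hypothetical sketch: attribute names come from the table above, but the
     element nesting is an assumption and may differ in your monitoringconfig.xml. -->
<reporter type="log4j" enabled="true"
          rateUnit="Seconds" durationUnit="MS" writeInterval="5">
  <filter>
    <name>Cluster-HeartbeatUpdater</name>
  </filter>
</reporter>
```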
Filter names
To set the filter attributes, use the following names in the Filter/Name field:
AlertSystem Alert system activity
B2BiSendClient Sending messages to the integration engine
ClusterConnections Cluster framework individual connections between cluster nodes
Cluster-HeartbeatUpdater Cluster framework heartbeat to database
Cluster-ThreadPool Cluster framework thread pool
ConnectionCache Sync connections being held by the trading engine
Consumption Consumption on a per pickup basis (timers, per minute/per
hour/per day stats available)
DatabaseCommit Database commit notifications (local and remote)
DatabaseConnectionPool BoneCP data base connection pool
Events Event system activity
FileSystemHealth Timer activity on monitoring each configured file system health
check
JPADataCache OpenJPA data cache activity
jvm.gc, jvm.memory, jvm.thread-states Internal JVM heap usage, GC collections, and thread states
MessageDispatched Messages dispatched to the trading engine for processing
MessageProduction Production system coordination, and timers on producing
messages on a per partner delivery
MessagePurge Message purge activity (non-Oracle stored procedures)
PmServer All PM Server activity statistics. The PM Server is a B2Bi internal
server that provides the integration engine with appropriate
processing and partner configuration at runtime.
ResourceManager Trading engine resource manager (used for X25 type protocols)
Sentinel Sentinel connection and queue
SequenceCoordinator Sequence coordination activity
ThreadPool Core trading engine thread pool activity
TokenManager Cluster framework TokenManager for cluster wide locks
XmlDomDocumentFactory XML Dom Cache activity
After you log in you see the following page:
From the B2Bi Tools Client Help menu you can view integration engine system details:
l View integration engine system information on page 90
The following sections describe the tools that are provided by the B2Bi Tools Client:
l Alert Manager on page 91
l Character Sets Manager on page 99
l Datamapper on page 103
l Datamapper Builder on page 104
l EDI Tracker on page 108
l File Viewer on page 138
l Integration Manager on page 139
l Message Log on page 179
l Metadata Browser on page 191
l Performance Monitor on page 194
l Queue Monitor on page 202
l Remote Compiler on page 207
l SeqNum Utility on page 210
l System Profile Manager on page 213
l Task Monitor on page 229
l Trace Viewer on page 234
l User Manager on page 239
l Operating System – System type (Linux / Windows) and version
l Machine – Machine type and name
l Disk Space – Reserved space and occupied space for the directories CORE_ROOT, CORE_LOCAL
and CORE_DATA
l Environment Variables – List of defined environment variables and their values
Alert Manager
Alerts are events that occur within the integration engine that enable external and automatic system
monitoring. Alerts are typically very general. Their primary purpose is to make a system (and
ultimately a person) aware of a problem or to inform about an important event. Although most alerts
report errors, some report events of a benign nature. Unlike message logging, alerting is not
message-centered. However, unlike trace logging, alerts are structured. In addition, alerts are
documented by type and code.
Use Alert Manager to view any alerts that have occurred in the integration engine. You should
delete an alert as soon as it is no longer current. In other words, do not use alerts as a log for
monitoring events. Normally alerts are not repeated in the Alert Manager. If an alert is generated due
to a system error, a new alert for the same system error will not appear until you delete the original
alert.
Tip: You can create a system filter for alerts and select the Suppress alerts check box.
Suppressing alerts is useful when you have a generic filter that matches a generic error condition,
but want to exclude certain errors from generating an alert.
To create a system filter:
Tip: You can create a message filter for alerts and select the Suppress alerts check box.
Suppressing alerts is useful when you have a generic filter that matches a generic error condition,
but want to exclude certain errors from generating an alert.
To create a message filter:
13. Click OK to save the filter.
14. In the Message Filters dialog box, use the Up and Down buttons as needed to change the
processing order.
15. Click OK.
1. In the Alert Manager, select the alert you want to view.
2. Double-click the alert to see the details.
The following information is displayed on the General tab.
Item Description
Date Date and time when the alert was created
Alert ID Identifier of the alert; uniquely identifies the alert within the alerter server task
Type, Code The combination of type and code defines the alert. The type is typically the
source of the alert, while the code is an alert identifier for the specific type.
External monitoring software can use the type and code to identify the alert.
Severity Level of severity for the alert
Description A description of the cause of the alert
The Data tab shows the cause of the alert (if available).
110 EDIFACT enveloping failed
120 EDIFACT envelope: Ack. analyzing storing failed
130 EDIFACT message processing failed
140 EDIFACT parsing failed
150 EDIFACT message rejected by recipient
120 Datamapper conversion program failed
4201 Failed to look up POP3 port
4202 Failed to connect to POP3 server
4203 Failed to log in to POP3 server
4204 STAT command failed
4205 RETR command failed
4206 Failed to parse email headers
4207 Unknown error when connecting to POP3 server
4602 Invalid header
4603 Multipart without boundary
4604 Syntax error in header
4605 Decode Base64
4606 Decode quoted printable
4607 Invalid MDN report
4608 MDN in no MDN mode
4609 MimeIn only MDN mode
4801 Encode Base64
4802 Encode Quoted Printable
4803 Failed to set MDN attribute
4804 Failed to write MDN correlation
4805 MDN Timed out
4806 Encode unknown
200 Receive failed
201 Receive classification failed
300 Send failed
400 Retrieve failed
101 Trace fatal
101 Procengine fatal
101 Starter fatal
101 Logger fatal
101 Queue fatal
101 Timer fatal
101 Cfgserver fatal
101 Porter fatal
101 Unspecified fatal
The goal of UCS is to eventually include all characters used in all the written languages in the world
as well as all mathematical and other symbols. The current first edition of UCS covers all major
languages and all commercially important languages.
To be able to give every character a unique coded representation, the designers of UCS chose a
uniform encoding, using bit sequences consisting of 16 or 31 bits (in the two coding forms, UCS-2
and UCS-4). This is the reason for the term multi-octet in the name of the standard.
l Create character sets comprising one or more intervals of a universal character set
l Modify one or more of the code intervals that are contained in a character set
l Delete a character set
l Delete one or more of the intervals that are contained in a character set
Name – The name of the character set. You use this name to refer to the character set when
you associate it to a Business-Document in the Composer interface.
Type – A description of the character set type.
Description – Text that provides additional useful information about the character set.
3. From the File menu of the Character Sets Manager main window, select New to open the
Character Set definition window.
4. In the Character Set definition window, complete the following fields:
Field Content
Name (text field) Enter a name for the character set. When you are done creating the
character set, this name appears in the list of the Character Sets Manager
main window.
Description (text field) Optionally, enter a description of the character set you are creating. You
can enter information that helps other users understand the content.
When you are done creating the character set, this information appears in
the list of the Character Sets Manager main window.
Valid ISO 10646 code interval(s) (list pane) Use this pane to add ISO 10646 code intervals that
define your character set. You can define one or more code intervals from
the ISO 10646 universal set. To add a code interval:
1. Click Add.... The Code Interval definition dialog box is displayed.
2. In the Code Interval definition dialog box, complete the following fields:
l From: Enter the code number of the beginning code of the interval
l To: Enter the code number of the ending code of the interval
3. Click OK. The Code Interval definition dialog box closes and a new line
entry is added in the Valid ISO 10646 code interval(s) frame.
l The new character set appears as a displayed entry in the Character Sets Manager main window.
You can view, modify or delete the character set at any time.
l You can enter the name of the character set (or any other character set that appears in the
Character Sets Manager main window) when you are defining the properties of nodes in
Business-Documents.
l The B2Bi integration engine then uses the specified character set in the Business Document node
to restrict incoming message data to data formatted only in the characters in that set. The
integration engine rejects data formatted in characters outside the set, for the node. An
explanatory message appears in the Message Log.
Remember: The character set you choose applies only to the data contained in the selected node.
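The node-level restriction described above is, in essence, a code-point range check. The following Python sketch illustrates the principle; the function names and the example intervals are hypothetical, and this is not the integration engine's implementation.

```python
# Illustrative only: check that every character of a string falls inside
# one of the ISO 10646 code intervals defined for a character set.
def make_validator(intervals):
    """intervals: list of (from_code, to_code) pairs, inclusive."""
    def valid(text):
        return all(any(lo <= ord(ch) <= hi for lo, hi in intervals)
                   for ch in text)
    return valid

# Example: a hypothetical character set allowing only digits and Latin letters.
is_basic = make_validator([(0x30, 0x39), (0x41, 0x5A), (0x61, 0x7A)])
print(is_basic("Invoice42"))    # → True
print(is_basic("Facture N°7"))  # → False: space and "°" are outside the intervals
```

Data that fails such a check for a node would be rejected, mirroring the behavior described above.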
Datamapper
If you are migrating to the current version of B2Bi, you can use Datamapper on a client machine.
However, the client machine must have an environment variable (DATAMAPPER) set to the
installation path of Datamapper.
Start Datamapper
1. From the Windows Start menu select All Programs > Axway Software > Axway B2Bi
[Installation Name]> B2Bi Tools > B2Bi Tools Client.
2. Enter your Login and Password.
The default login and password are both admin. However, the admin user can change the
password and create additional logins in User Manager.
3. From the tool panel, double-click the Datamapper icon.
For more information about using Datamapper, refer to the AMTrix Datamapper User Guide.
Datamapper Builder
Use Datamapper Builder to take the map files produced by the Datamapper client and build
Message Builder Conversion programs (MBCs). Datamapper Builder generates and compiles code
into MBCs. It also tests and administers MBCs.
To use Datamapper Builder, Datamapper must be installed on the same client.
Archive projects
Archives replace the “Fold” files feature in previous versions of Datamapper Builder. You can use the
archive feature to store different versions of a project, or you can copy the archive file to the client
PC and distribute it to another Datamapper user who can then re-create the complete project. An
archive file contains all of the files that are required to create a project, including map and view files
on the client, the main code file, and the executable file.
Tip: To avoid destroying your work, make it a habit to create archives of your projects so you can
recreate a project if needed.
Use the Archive menu to create new archives, open existing archives, delete archives, and copy
archives between the client and server.
Note: Opening an archive may overwrite existing files. However, template main code files
are never overwritten.
The ability to copy to and from the client file system is useful when you want to send an archive
to somebody. You can copy the archive file directly to media, or save it to a location from which
you can attach it to an email.
Build programs
Use the options on the Program menu to complete the following tasks:
The provided main code files include the following:
l DMnormal.mc – Used for ordinary “file-to-file” mapping. The result is a loadable conversion
program.
l DMobject.mc – Used for object mapping. The result is a loadable conversion program.
l DMcompatible.mc – Used for ordinary “file-to-file” mapping. The structure of the code
resembles that of the main programs in earlier Datamapper versions. This main code should be
used when upgrading old maps that have modifications in the main code that make calls to
“Direct Data Access” functions. When using this main code, the Map Directive //COMPATIBILITY
must be entered in the map description field. Note that more code will be generated as a result.
Compilation options
Option Description
Include Line Number Information Click this box to compile detailed line number information
into the map. From the Datamapper Simulator, the line number information may be
viewed in the Event Log.
Strip Simulation Information Once a conversion program has been created and tested using the
Datamapper Simulator, you can recompile the map with this option selected. The conversion
program is then generated without simulation information. Note that when this option is selected,
it is not possible to test data with Datamapper Simulator.
Store Mapped Data Only When this option is selected, only the fields and elements that are
referenced in the map are read when the conversion program is run. As a result, generating the
conversion program may take somewhat longer, but execution is faster. In maps where user code
accesses data through Data tree functions, it is recommended that you do not enable this option.
Auto Register Conversion Program Conversion programs that are to be used within Trading
Partner Management must be registered. A conversion program can be registered either by
selecting Program > Register Conversion Program or by selecting this option, which
automatically registers the program each time it is built.
Displaying results
If this is the first build for this map, a Datamapper project is automatically created and a main code
file is selected. The project defines the combination of map, ADF files, EDI standard, test cases, main
code, and options. The project is named after the map. If the project already exists, it is used.
EDI Tracker
Use EDI Tracker to view the transfer status and other information about EDIFACT and X12
documents handled by B2Bi servers.
EDI Tracker is installed during B2Bi installation.
To work in EDI Tracker you launch searches. These searches are filtered requests to the database
that return lists of EDIFACT and X12 documents that have been handled by B2Bi and display their
transfer status.
EDI Tracker provides default searches, and you can also create, launch and save custom searches.
From any item in the list of search results you can open windows that display details about various
characteristics of the document transfers, such as the parent Interchange, references, payloads, etc.
When B2Bi is installed and started and B2Bi Client is also installed, you can open the EDI Tracker.
l Public Searches – Displays searches that are visible to all users of the current B2Bi installation.
When you first install B2Bi, the EDI Tracker is populated with the following default public
searches:
o All entries within last 24 hours
o All entries within last 5 minutes
o All entries within last hour
o Unidentified within last 24 hours
o Waiting for acknowledgement within last 24 hours
l Private Searches – Displays your own searches, which you have specified as being visible only
to you.
l Results – Displays the results of any searches you launch.
From the EDI Tracker main window you can perform the following actions:
l Use an existing search on page 110
l Create a new search on page 112
l Modify a search on page 115
l View details of search results on page 116
l View details of message payloads on page 124
l View the transfer hierarchy on page 125
l Create and manage custom views on page 126
l Additional option settings on page 129
l Refresh search results on page 133
l Count the number of returned items for a search on page 133
l Delete a search on page 133
l Reprocess and resend messages on page 134
l Manually set a message status to correct on page 134
l Import and view log archives on page 135
l Sender ID – Routing ID of the document sender
l Recipient ID – Routing ID of the document recipient
l Standard – Document format standard (X12 or EDIFACT)
l Version – Document version
l Document – Document type (example: 997)
l Direction – Inbound or outbound
l Status – Current status of the document transfer. The transfer can have any of the
following statuses:
o OK
o Error
o OK, but errors during acknowledgement delivery
o No acknowledgement received
o No acknowledgement expected
o OK, interchange accepted by partner
o OK, acknowledgement sent to partner
o Sent to partner
o Interchange rejected by partner
o Waiting for transfer
o Waiting for enveloping
o Waiting for acknowledgement
o Waiting for outgoing acknowledgement
o Processing message
To modify your view of search results, see Create and manage custom views on page
126.
3. To view additional details for any entry in the results list, see View details of search results on
page 116.
l Unidentified input – Display inputs not related to an inbound classification
5. In the Acknowledgements section, select how to manage the display of acknowledgements:
l Exclude (default) – Do not display acknowledgments
l Include – Display all documents, including acknowledgements
l Display only – Only display acknowledgements
6. In the Default view section, in the View field, from the drop-down list select the initial view
you want to use for the results of searches. For information about creating custom views, see
Create and manage custom views on page 126.
7. In the Search type section, select an option:
l Public – Public searches are visible to all users.
l Private – Private searches are visible only to the user that creates them.
8. Select the Filter tab. In this tab you can create filtering criteria for your search results. For
example you can filter to view only messages from a specific sender or receiver. The following
attributes are available for filtering:
l Log ID – Identification of the log in the integration engine
l Sender ID – Identification of the sender of the transfer
l Recipient ID – Identification of the recipient of the transfer
l Standard – Document standard. You may select one or more of the displayed
document standards.
l Document – Document type (example: 812)
l Direction – Direction of the transfer (Inbound or Outbound)
l Status – Status of the transfer:
o Processing message
o OK
o OK, interchange accepted by partner
o OK, acknowledgement sent to partner
o Waiting for transfer
o Waiting for enveloping
o Waiting for acknowledgement
o Waiting for outgoing acknowledgement
o Error
o No acknowledgement received
o No acknowledgement expected
o Interchange rejected by partner
o OK, but errors during acknowledgement delivery
l Interchange ID
o For EDIFACT: UNB:0020 (Interchange control reference)
o For ANSI X12: ISA:13 (Interchange control number)
l Functional Group ID
o For EDIFACT: UNG:0048 (Functional group reference number)
o For ANSI X12: GS:06 (Group control number)
l Document ID
o For EDIFACT: UNH:0062 (Message reference number)
o For ANSI X12: ST:02 (Transaction set control number)
l Sender name – Name of the sender of the transfer
l Receiver Name – Name of the receiver of the transfer
l Agreement – Name of the agreement for the transfer
l Document Service – Name of the document service used for the transfer
l Document Agreement – Name of the document agreement used for the transfer
l Pickup Core ID – Identification of the trading engine pickup used for the transfer
l Warning – Messages tagged as warnings
l Resent – Messages tagged as "resent"
l Reprocessed – The Reprocessed flag is set for the original (parent) message that has
been reprocessed.
l Is reprocessed – The Is Reprocessed flag is set for the (child) message that is the
result of reprocessing, as it will lead to a duplicate set of documents in the Message Log
from its parent.
l Flag – All flagged searches
9. Select the User fields tab. In this tab you can create filtering criteria for your search results
based on user-defined tags and values. Select how to manage the effect of the filter:
l Exclusive – Optionally define a tag and value to use as filtering criteria. Records are
excluded from the search results if the tag exists for the record AND the value is correct.
l Inclusive – Optionally define a tag and value to use as filtering criteria. Records are
included in the search results only if the tag exists for the record AND the value is
correct.
l New – Add a new tag/value pair.
l Open – Open an existing tag/value pair for editing.
l Delete – Delete an existing tag/value pair.
10. Click Save.
EDI Tracker adds the search to the list of searches in the Favorite searches pane. You can
now use the search to view filtered results of document exchanges. See Use an existing search
on page 110.
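The Exclusive/Inclusive semantics in step 9 amount to a simple predicate over each record's user-defined tag/value pairs. This Python sketch illustrates the logic as described above; the function and data shapes are hypothetical, not EDI Tracker internals.

```python
def passes_filter(record_tags, tag, value, mode):
    """record_tags: dict of user-defined tag -> value for one record.
    mode 'exclusive': drop records where the tag exists AND the value matches.
    mode 'inclusive': keep only records where the tag exists AND the value matches."""
    matches = record_tags.get(tag) == value
    if mode == "exclusive":
        return not matches
    return matches

records = [{"dept": "sales"}, {"dept": "ops"}, {}]
kept = [r for r in records if passes_filter(r, "dept", "sales", "inclusive")]
print(len(kept))  # → 1: only the record tagged dept=sales survives
```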
Modify a search
1. From the EDI Tracker main window click a search to select it, then click the Edit Search...
icon.
Alternatively, you can right-click a search and then select Edit... from the context menu.
EDI Tracker opens the configuration page for the selected search. For a description of the
available fields and options, see Create a new search on page 112.
2. Modify the fields and options as required.
3. Click Save.
EDI Tracker modifies the search with your new settings.
Alternatively you can right-click an item in the list and select Open... from the context menu.
EDI Tracker opens a detail page for that item. The details displayed in the page vary depending on
the protocol and transfer type (send / receive / acknowledgement).
Field descriptions: Outbound X12 document transfer example:
Field Description
General
Date and time Date and time of the transfer.
Log ID Internal log ID of the transfer.
Type Transfer type (direction and protocol).
Status Current status of the transfer:
l OK
l Error
l OK, but errors during acknowledgement delivery
l No acknowledgement received
l No acknowledgement expected
l OK, interchange accepted by partner
l OK, acknowledgement sent to partner
l Interchange rejected by partner
l Waiting for transfer
l Waiting for enveloping
l Waiting for acknowledgement
l Waiting for outgoing acknowledgement
l Processing message
Agreement Name of the agreement that handles this transfer.
Document service Name of the document service that specifies the processing of this transfer.
Document Name of the document agreement that specifies the processing of this transfer.
Agreement
Pickup core ID CoreID value for the message, assigned when the message was consumed by the
pickup.
Consumption The original date and time when the message was consumed by the B2Bi core.
timestamp
Sender
ID ISA I06 and I05 5, colon-separated
Application code GS 142 Sender application code
Name Friendly name of the sending partner or community, as defined in the
B2Bi/Interchange UI.
Recipient
ID ISA I07 and I05 7, colon-separated
Application code GS 124 Recipient application code
Name Friendly name of the receiving partner or community, as defined in the
B2Bi/Interchange UI.
Payloads
Interchange Characteristics of the X12 interchange. Select a display format and then click View
to see a detail window of the interchange structure.
Functional group Characteristics of the X12 functional group. Select a display format and then click
View to see a detail window of the functional group structure.
Document Characteristics of the X12 document. Select a display format and then click View to
see a detail window of the document structure.
Acknowledgement Characteristics of the X12 acknowledgement. Select a display format and then click
View to see a detail window of the acknowledgement structure.
Received The file that was received and that either contained the document (Inbound) or
was mapped into a document. Select a display format and then click View to
see a detail window of the document structure.
Sent The file that was sent. Select a display format and then click View to see a detail
window of the document structure.
References
Document type ST 143
Version GS 480
Functional GS 479
identifier code
Interchange ISA I12
control number
Group control
count
Functional group GS 28
control number
Transaction set SE 02
control number
User data 1 Optional user-defined information associated with the transfer.
User data 2 Optional user-defined information associated with the transfer.
Transfers
Received Date and time the document was received.
Description Information about the reception.
Sent Date and time the document was sent.
Description Information about the send.
Acknowledgement
Date and time Date and time the acknowledgement was sent.
Log ID Internal log ID of the acknowledgement.
Type Message standard and direction of the transfer.
Status Status of the acknowledgement.
Errors
Errors Displays errors logged during document exchange processing.
Field descriptions: Inbound EDIFACT document transfer example:
Field Description
General
Date and time Date and time of the transfer.
Log ID Internal log ID of the transfer.
Type Transfer type (direction and protocol).
Status Current status of the transfer.
l OK
l Error
l OK, but errors during acknowledgement delivery
l No acknowledgement received
l No acknowledgement expected
l OK, interchange accepted by partner
l OK, acknowledgement sent to partner
l Interchange rejected by partner
l Waiting for transfer
l Waiting for enveloping
l Waiting for acknowledgement
l Waiting for outgoing acknowledgement
l Processing message
Sender
ID UNB 002 0004 and 0007, colon-separated
Application UNG 006 0040
identifier
Name Friendly name of the sending partner or community, as defined in the
B2Bi/Interchange UI.
Recipient
ID UNB 003 0010 and 0007, colon-separated
Application code UNG 007 0044
Name Friendly name of the receiving partner or community, as defined in the
B2Bi/Interchange UI.
Payloads
Interchange Characteristics of the EDIFACT interchange. Select a display format and then click
View to see a detail window of the interchange structure.
Functional group Characteristics of the EDIFACT functional group. Select a display format and then
click View to see a detail window of the functional group structure.
Document Characteristics of the EDIFACT document. Select a display format and then click
View to see a detail window of the document structure.
Acknowledgement Characteristics of the EDIFACT acknowledgement. Select a display format and then
click View to see a detail window of the acknowledgement structure.
Received Characteristics of the file that was received and that either contained the
document (Inbound) or was mapped into a document. Select a display
format and then click View to see a detail window of the document
structure.
Sent Characteristics of the file that was sent. Select a display format and then click View
to see a detail window of the document structure.
References
Document type UNH 009 0065
Version UNH 009 0052 and 0054 concatenated
Functional group UNG 0038
identifier
Interchange UNB 0020
control reference
Functional group UNG 0048
reference
Message reference UNH 0062
number
User data 1 Optional user-defined information associated with the transfer.
User data 2 Optional user-defined information associated with the transfer.
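The service-segment elements listed above can be located mechanically in an EDIFACT interchange. The following sketch is illustrative only (it is not Axway code, and the sample interchange content is invented); it shows where the UNB and UNH elements that EDI Tracker reports sit in the raw data:

```python
# Illustrative: locating the EDIFACT service-segment elements described above
# in a minimal, invented sample interchange. No release-character handling.
SAMPLE = (
    "UNB+UNOA:2+SENDERID:ZZ+RECIPID:ZZ+170825:1030+ICR0001'"
    "UNH+MSG0001+ORDERS:D:96A:UN'"
    "UNT+2+MSG0001'"
    "UNZ+1+ICR0001'"
)

def elements(segment):
    """Split a segment into data elements on '+'."""
    return segment.split("+")

segments = [s for s in SAMPLE.split("'") if s]
unb = elements(next(s for s in segments if s.startswith("UNB")))
unh = elements(next(s for s in segments if s.startswith("UNH")))

sender_id = unb[2]           # UNB S002: sender ID and qualifier, colon-separated
recipient_id = unb[3]        # UNB S003: recipient ID and qualifier
control_reference = unb[5]   # UNB 0020: interchange control reference
message_ref = unh[1]         # UNH 0062: message reference number
doc_type, *rest = unh[2].split(":")  # UNH S009: 0065 document type...
version = "".join(rest[:2])          # ...then 0052 and 0054 concatenated

print(sender_id, recipient_id, control_reference, message_ref, doc_type, version)
```

A real parser would also honor the UNA service string advice and release characters; this sketch assumes default delimiters.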
Transfers
Field Description
Received Date and time the document was received.
Description Information about the reception.
Sent Date and time the document was sent.
Description Information about the send.
Acknowledgement
Date and time Date and time the acknowledgement was sent.
Log ID Internal log ID of the acknowledgement.
Type Message standard and direction of the transfer.
Status Status of the acknowledgement.
Errors
Errors Displays errors logged during document exchange processing.
EDI Tracker can also call applications on client machines for viewing files.
To open a view of a payload:
1. From the search results page, double-click a search result item to open the details page for that
item.
2. In the Payloads section of the detail page, locate the payload you want to view, and from the
drop-down list, select the viewer to use on the payload you have selected. You can select from
the following viewers:
l ISO8859_1 – Latin Alphabet No. 1 character encoding
l EBCDIC – Extended Binary Coded Decimal Interchange Code character encoding
l Hexadecimal – Hexadecimal character encoding
l EDI – EDI character set encoding
l XML client – Opens the program associated with the .xml extension on your
client machine.
l Txt client – Opens the program associated with the .txt extension on your client
machine.
l Custom client – Opens the program on your client machine that is associated with the
extension defined by the environment variable B2BI_USER_VIEWER_EXTENSION on
the server. For example, defining that environment variable as “.1st” will open
WordPad if it is installed on the client.
Note: The preferred way to assign a custom client extension is to use System Profile
Manager. In the Environment/Miscellaneous tab, use the field “User view extension”.
See Miscellaneous settings on page 225.
3. After you select a viewer type, click View to open the payload in the selected viewer.
EDI Tracker displays the payload content and a set of tools.
4. For the first four viewer types in the list above, you can use the following icon tools to view and
save the payload view:
Find Opens the search tool for the payload view. Use this tool to locate
specific character strings in the viewer.
You can optionally search using case matching. You can choose to
search upwards or downwards through the display.
Go To Opens a tool that enables you to navigate to a specific line of the
displayed payload.
Save As Opens the Save As dialog for saving to the machine where EDI
Tracker is installed.
Save As To Client Opens the Save As dialog for saving to your local machine.
First Section Navigate to the first section display.
Previous Section Navigate to the preceding section display.
Next Section Navigate to the next section display.
Last Section Navigate to the last section display.
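The character-encoding viewers above render the same raw payload bytes in different ways. The following sketch is an assumption about what such rendering amounts to, expressed with standard Python codecs; `cp500` is just one common EBCDIC code page, not necessarily the one the product uses:

```python
# Hypothetical illustration of the ISO8859_1, EBCDIC, and Hexadecimal viewers:
# the same payload bytes, decoded three ways. 'cp500' is an assumed EBCDIC
# code page chosen for illustration only.
data = "UNB+UNOA:2".encode("latin-1")

iso8859_1_view = data.decode("latin-1")               # ISO8859_1 viewer
ebcdic_view = data.decode("cp500", errors="replace")  # EBCDIC viewer
hex_view = data.hex(" ")                              # Hexadecimal viewer

print(iso8859_1_view)
print(hex_view)
```

Because the bytes here are ASCII, the EBCDIC rendering is garbage; the point is only that each viewer is a different decoding of identical bytes.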
1. From a search results list, select the message for which you want to view the hierarchy.
2. From the menu bar, select File > View hierarchy .
Alternatively, you can right-click the message entry in the list and then select View hierarchy...
from the context menu.
After you launch a search, you can use the default view to display results, or you can create custom
views that display the columns of your choice.
For searches that return large numbers of results, the View menu also has commands that enable
you to display results by paging through sections. To set the size of a section, see Set the batch size
option on page 129.
This chapter describes the following tasks that you can perform to manage views:
l Create a custom view on page 126
l Use a custom view on page 127
l Edit a custom view on page 128
l Delete a custom view on page 128
l Associate a view with a search on page 128
The following items are the complete list of columns you can select to define your custom view:
l Date and Time*
l Sender ID*
l Recipient ID*
l Status*
l Interchange ID*
l Document ID*
l Standard*
l Version*
l Document*
l Direction*
l Log ID
l User Data 1
l User Data 2
l Sender Name
l Recipient Name
l Group ID (functional group ID)
l Agreement
l Document Service
l Document Agreement
l Pickup Core ID
l Resent
l Reprocessed
l Is Reprocessed
* = Items displayed in the view labeled Default, and used if you do not create and select
a custom view.
5. Use the Move Up and Move Down control buttons to arrange the order of columns to be
displayed in your custom view. The top items in the list will be the columns that are displayed
farthest to the left in your custom view.
6. Click OK to save the custom view.
The view is now visible for selection from the View menu.
The page displays a list of all available custom views.
2. Click the name of the view you want to delete to select it.
3. Click Delete.
The view is deleted and removed from the list of available views in the View menu.
1. Do one of the following:
l Click the New Search icon to open the new search editing page.
l From the Public Searches or Private Searches lists, select an existing search, then
click the Edit Search icon to open the editing page for the selected search.
2. In the Default view section, in the View field, use the drop-down list to select the initial view
you want to use for the search.
3. Click Apply, Save and Close.
l On the icon bar of the main page, click the Options icon.
l From the menu bar, select Settings > Options.
l Memory – Use this tab to control the number and display of batch request entries.
l Log Date – Use this tab to set the date display format for search result items.
l Statusbar Date – Use this tab to set the date display format on the EDI Tracker status bar.
We recommend a batch size of 100 as a starting point when you have a large number of messages.
Note: This does not restrict the number of records that are returned and displayed in the EDI
Tracker list. It is a way of tuning the request processing.
To set the batch size option:
1. Click the Options icon on the tool bar.
EDI Tracker displays the Options window.
2. Select the Memory tab.
3. In the Batch request entries count field, enter a batch size value.
4. Click OK.
1. Click the Options icon on the tool bar.
EDI Tracker displays the Options window.
2. Select the Memory tab.
3. In the Entries per section field, enter the number of entries you want to view per section
(page) in the search results. You can then use the section tools to page through the sections
of entries.
4. Click OK.
There are two ways to use these navigation tools:
l Use the icon bar
First Section Go to first section of display items
Next Section Go to the next section of display items
Previous Section Go to the preceding section of display items
Last Section Go to the last section of display items
l Use the menu bar commands
On the menu bar click View and then from the drop-down menu select a command:
o First Section – Go to first section of display items.
o Next Section – Go to the next section of display items.
o Previous Section – Go to the preceding section of display items.
o Last Section – Go to the last section of display items.
4. Click OK.
l User defined – If you select this option, enter the date format that you want to use in
the formatting field. For example, %a %B %d %H:%M yields the format Mon May 13
19:05.
4. Click OK.
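The user-defined pattern in the example above uses standard %-style date directives, which can be checked with Python's strftime. Note that %a and %B output is locale-dependent; the date below is an arbitrary Monday chosen for illustration:

```python
# Checking the user-defined pattern from the example above with Python's
# strftime (same %-style directives). %a and %B depend on the locale;
# the chosen date (Monday, 13 May 2019) is arbitrary.
import datetime

fmt = "%a %B %d %H:%M"
dt = datetime.datetime(2019, 5, 13, 19, 5)
print(dt.strftime(fmt))  # in the default C locale: Mon May 13 19:05
```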
This may be useful when your search criteria encompass messages that may currently be in
processing.
To refresh the list of search results click the Refresh Search icon on the tool bar.
Alternatively, you can right-click the name of the search and then select Refresh from the context
menu.
EDI Tracker refreshes the display of items that correspond to the search criteria.
Alternatively, you can right-click a search and then select Count from the context menu.
In a popup window, EDI Tracker displays the number of items that correspond to the search criteria.
Delete a search
From the EDI Tracker main window, click a search to select it, then click the Delete Search...
icon.
Alternatively, you can right-click a search and then select Delete... from the context
EDI Tracker deletes the search and removes it from the list of available searches.
Reprocess
When you reprocess a message, the integration engine restarts processing at the point where the
original message was received. All message handling in the integration engine is repeated.
To reprocess a message:
1. From a search results list, select the message you want to reprocess.
2. From the menu bar, select File > Reprocess message.
Alternatively, you can right-click the message entry in the list and then select Reprocess message
... from the context menu.
Resend
When you resend a message, the integration engine restarts processing at the point where the
output message was sent. Only the transfer is repeated.
To resend a message:
1. From a search results list, select the message you want to resend.
2. From the menu bar, select File > Resend message.
1. From the list of search results, select the item to correct.
2. From the menu bar, select File > Mark as manually corrected.
Alternatively, you can right-click the item in the list, and then select Mark as manually
corrected… from the context menu.
Use the procedures in this chapter to enable EDI Tracker to display the log entry details contained in
a log archive.
Terminology
Log archive
A zipped file comprising log archive directories. Log archives are generated by loggers
according to a specified schedule.
Archive directory
A folder element of a log archive. When you unzip a log archive you unpack one or more
archive directories. An archive directory contains individual log entries. You cannot view
these entries without a viewer (EDI Tracker).
Archive instance
An archive directory that has been made available as an instance in EDI Tracker. You add
one or more archive instances to archive collections for organized viewing.
Archive collection
EDI Tracker archive object that comprises one or more available archive directory
instances. Archive collections are available for viewing from searches and from the
View>Run-time menu command.
Task summary
To view log archive content, complete the following tasks:
1. Run the installer in configure mode and select the "Enable Online Archive" feature.
2. In System Profile Manager, activate log archiving.
3. Run B2Bi to trade messages and generate archives.
4. Import zipped archives to EDI Tracker.
5. In EDI Tracker, create an Archive collection.
6. In EDI Tracker, view the details of the archive instance content.
1. Stop B2Bi.
2. Run the B2Bi installer in configure mode.
3. In the B2Bi Server Configuration screen, select the option: Enable Online Archive.
4. Complete the configuration session.
5. Restart B2Bi.
6. The browser displays any archive files that you can import. Select the log archive zip files that
you want to import to EDI Tracker. Then click Select to import them.
7. The wizard returns you to the Manage available archive instances page where you can see
that the new unzipped directories are added as available instances for EDI Tracker Archive
collections.
8. Click OK to complete the import procedure.
1. From the EDI Tracker View menu, select the name of an Archive collection. For example,
View>Recently_archived_logs. This step makes the selected archive collection the focus of EDI
Tracker searches.
2. Click a Favorite search to search in the selected archive collection. Any logs that satisfy search
criteria are displayed.
File Viewer
Use File Viewer to view low-level log files that are not written to the Trace Viewer. You cannot edit
the files in the File Viewer, you can only view them. Access is limited to files within the B2Bi
Integration Engine directory structure.
Integration Manager enables you to perform the following tasks:
l Add a project
l Add an integration
l Edit an integration
l Delete an integration
l Deploy an integration to a runtime server
l Remove an integration from a runtime server
l Export a project
l Import an XIB dataset
l Modify an XIB dataset
l Deploy elements of an imported XIB dataset to a B2Bi Server.
Integration
An Integration is a set of processes that enable you to transfer information, in the form of
messages, from one application to another.
Integrations handle the transfer of information between a source and a target, both of
which may be external to the integration engine.
Activity
Each Integration contains one or more Activities. An Activity always belongs to a single
Integration. Together, the Activities contained in an Integration define how the integration
engine transmits messages and how it processes the exchanged information so that it is
readable by the recipient.
If the Integration that owns an Activity is deleted, the Activity is also deleted.
Specific activities can refer to corresponding application properties. For example, the
inbound EDIFACT Activity can use the EDIFACT application property from both the source
and the target application.
In the integration engine you use the following types of Activity:
l Classification Activity – Use Classification Activities to direct incoming messages to
the appropriate Activities for processing. Classification Activities enable the conditional
processing of incoming messages. A Classification Activity directs incoming messages
according to defined criteria. A message that meets the criteria is passed on for
processing to a subsequent Activity, even to an Activity in another integration.
A classification activity can only be attached to a Classification Anchor Activity.
l Classification Anchor Activity – A Classification Anchor is the parent activity for a
number of Classification Activities. A Classification Anchor Activity must be followed by at
least one Classification Activity.
Incoming messages are evaluated to determine if they meet the classification criteria
defined in the Activities. This evaluation is typically performed sequentially in the order
that the Activities are attached to the anchor, until the message meets the criteria of a
Classification Activity, in which case it is passed on for further processing.
If the broadcast function is used, the evaluation is performed in the same way, but the
message is passed on for further processing by all Classification Activities where the
criteria are valid.
l B2Bi Entrypoint Activity – Serves as a Service entry point in B2Bi message-processing
sequences.
l B2Bi Exitpoint Activity – Serves as a Service exit point in B2Bi message-processing
sequences.
l Sequential Activity – Acts on information that is flowing between applications. For
example, a message is received through an FTP communication connector, is then
converted to another format, such as an in-house file, and is then sent on using an email
communication connector. The Sequential Activity, in this case, is the conversion to
another format.
l Inbound EDIFACT Activity – Defines how the incoming EDIFACT message is broken
down from interchanges into functional groups and documents. The documents can
then be converted into other formats. Acknowledgment of receipt of a message can also
be configured.
l Outbound EDIFACT Activity – Defines how an outgoing EDIFACT message is
converted, assembled into documents, functional groups, and interchanges, and sent to
the recipient. Handling of an acknowledgment of receipt from the recipient is also
configured.
l Inbound X12 Activity – Defines how the incoming X12 message is broken down from
interchanges into functional groups and documents. The documents can then be
converted into other formats. Acknowledgment of receipt of a message can also be
configured.
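The inbound Activities above all describe the same breakdown: an interchange is split into functional groups, which in turn contain documents. A minimal sketch (not the integration engine, with invented sample segments) of that structure for EDIFACT:

```python
# Minimal sketch of the inbound breakdown described above: UNH documents are
# grouped under the UNG functional group that precedes them. Sample segments
# are invented for illustration; real input would need full parsing.
def split_interchange(segments):
    """Return the functional groups of an interchange, each with its documents."""
    groups, current = [], None
    for seg in segments:
        tag = seg.split("+", 1)[0]
        if tag == "UNG":                       # functional group header
            current = {"group": seg, "documents": []}
            groups.append(current)
        elif tag == "UNH" and current is not None:  # document header
            current["documents"].append(seg)
    return groups

segs = [
    "UNB+UNOA:2+S+R+170825:1030+1",
    "UNG+INVOIC+S+R+170825:1030+1+UN+D:96A",
    "UNH+1+INVOIC:D:96A:UN",
    "UNH+2+INVOIC:D:96A:UN",
    "UNE+2+1",
    "UNZ+1+1",
]
groups = split_interchange(segs)
print(len(groups), len(groups[0]["documents"]))
```

An interchange that contains documents directly (no UNG segments) would skip the grouping level, which is what limits further per-group processing.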
Stage
A Stage is a sub-object of an Activity. It is a compiled component that determines how to
process a set of data. Each Activity can contain several Stages. The processing in a Stage is
provided by a unit of processing code that can be a Message Builder Component (MBC), or
a Datamapper map.
Branch
A Branch is a processing route that a functional group, or a document, takes if it meets the
requirements that you specify, in an Activity.
MBC
A Message Builder Component (MBC) is a utility program written in the Axway-proprietary
Message Builder language. MBCs reside on the integration engine. You can select MBCs to
use in Stages of Activities. Working in the B2Bi Mapping Services environment, you can
create your own MBCs that extend the standard functions of the integration engine.
l The B2Bi Entrypoint is available in the B2Bi UI in the Component configuration page drop-down
list. You can add the Integration by selecting its B2Bi Entrypoint. When you configure the
Component that uses the Integration, you can define the Integration as one of the following
types:
o Document
o Map
o Post-enveloping.
l The Integration's B2Bi Exitpoints (if any are specified) are represented as Outputs for
Components that are used by the Integration.
l After you configure an Integration's B2Bi Entrypoint as a Component in the B2Bi UI, you can use
the Integration in Services and Metadata Services (as any other Component).
l You can manage deployed HMEs in the System Profile Manager tool. See Use System Profile
Manager on page 214.
Prerequisites
l B2Bi Server, B2Bi Client and B2Bi Integration Engine tools are installed.
l The "Enable Integration Manager" option is selected during B2Bi Server installation (advanced
mode). If you did not select the option during installation, you can run the installer in configure
mode and activate the option.
Start procedure
1. From the Windows Start menu select All Programs > Axway Software > Axway B2Bi
[Installation Name] > B2Bi Tools > B2Bi Tools Client.
2. Enter your Login and Password.
The default login and password are both admin. However, the admin user can change the
password and create additional logins in User Manager.
3. From the tool panel, double-click the Integration Manager icon.
Add a Project
About Projects
In Integration Manager, a Project is a collection of integration-related objects that can be edited,
exported and deployed together. A Project is represented in Integration Manager as a directory tree
structure. You can create and manage multiple Projects in Integration Manager.
This topic describes the creation of a standard project. For the procedure to create a migration
project to manage a dataset that is imported from the XIB product, see Add an XIB migration project
on page 172.
Procedure
1. In the Integration Manager interface, in the left panel, make sure the Configuration Browser
tab is selected.
2. From the menu bar, select File > New > Project....
3. In the New Project wizard screen complete the fields:
l Name: Enter a name for the project. This is the name that is displayed on the parent
Project folder in the Integration Manager directory structure.
8. On the Activities page click New.
Select the new Activity type to add from the list. When you select an Activity type, the
Integration Manager opens additional pages that enable you to configure the Activity.
Configuration pages vary depending on the Activity type. Follow the links below for
descriptions of how to configure each Activity:
l Configure an Entrypoint Activity on page 148
l Configure an Exitpoint Activity on page 149
l Configure a Classification Anchor Activity on page 150
l Configure a Classification Activity on page 151
l Configure an EDIFACT Inbound Activity on page 152
l Use the EDIFACT Inbound Activity Wizard on page 153
l Configure an EDIFACT Outbound Activity on page 155
l Configure an HL7 Inbound Activity on page 156
l Configure an HL7 Outbound Activity on page 157
l Configure an X12 Inbound Activity on page 158
l Configure an X12 Outbound Activity on page 159
l Configure a Sequential Activity on page 160
9. After you add Activities, the list of Activities in the Integration is displayed in the New
Integration wizard. If everything is correct, click Finish to save the Integration. The Integration
is saved only in the current user transaction database. Before you can use the Integration you
must deploy it to a run-time server.
l Configure an X12 Inbound Activity on page 158
l Configure an X12 Outbound Activity on page 159
l Configure a Sequential Activity on page 160
8. After you add Activities, the list of Activities in the Integration is displayed in the New
Integration wizard. If everything is correct, click Finish to save the Integration. The Integration
is saved only in the current user transaction database. Before you can use the Integration you
must deploy it to a run-time server.
1. Select the General tab.
l Enter a Name and, if required, a Description and select Enable.
l If the Hierarchical Messaging Task field is blank, select a hierarchical messaging
task using the Browse tool.
2. Select the Configuration tab.
l Broadcast – Select this option if you want all inbound messages to be evaluated and
then processed by all the matching Classification Activities.
l Classification Order – The classification order is the order in which the Classification
Activities associated with the anchor are evaluated. Change this order by using the
Move Up and Move Down buttons.
When you first configure the Classification Anchor Activity it is empty. You need to
create Classification Activities that are associated with this anchor and then edit the
anchor to change the processing order of the Classification Activities. You can only
change the execution order if you have different types of Activities (for example XML
classification, EDI classification, and so on). You can change the order in which the
different classification types are evaluated. For example, all your file content
classification can be done before EDI classification. If you have the same type of
classification, the evaluation order is:
a. The classification containing the most criteria.
b. If the classifications have the same number of criteria, the classification with
the first criteria field filled out.
3. Select the Sentinel tab, and select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click OK to save and close, or Cancel to close without saving.
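The same-type evaluation order described in step 2 can be expressed as a sort key. The following sketch is an assumption about that rule, not product code: more filled criteria evaluate first; on a tie, the Activity whose first criteria field is filled out wins:

```python
# Sketch (an assumption, not Axway code) of the same-type evaluation order
# described above: most criteria first; on a tie, the classification with
# the first criteria field filled out is evaluated first.
def evaluation_key(activity):
    criteria = activity["criteria"]          # ordered fields; "" means not filled
    filled = sum(1 for c in criteria if c)
    first_filled = bool(criteria and criteria[0])
    # sorted() is ascending, so negate "more filled evaluates earlier"
    return (-filled, 0 if first_filled else 1)

activities = [
    {"name": "A", "criteria": ["", "ORDERS"]},      # 1 criterion, first field empty
    {"name": "B", "criteria": ["UNOA", "ORDERS"]},  # 2 criteria
    {"name": "C", "criteria": ["UNOA", ""]},        # 1 criterion, first field filled
]
order = [a["name"] for a in sorted(activities, key=evaluation_key)]
print(order)
```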
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Next Activity – To select a next Activity for the Integration sequence, click Browse
and select an Activity from the list.
2. Select the Configuration tab.
l Classification Anchor – Each Classification Activity has to be attached to a
Classification Anchor Activity. Click Browse to select a Classification Anchor Activity.
l Classifier MBC – A Classifier Activity uses an MBC (Message Builder Component) to
perform classification filtering. Click Browse to select a classifier MBC.
Depending on the classifier MBC you select, you may have additional classification
criteria options for the consumed file.
3. Click OK to save and close, or Cancel to close without saving.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Branches tab.
1. To add a criteria-based processing Branch to the Activity, select a node of the Interchange
processing tree, and click New.
In the Branch General tab:
l Name - Enter a name for the Branch.
l Description – Optionally enter a description of the Branch.
2. In the Branch Criteria tab, use the fields to set the EDIFACT message attributes and values
that serve as filtering criteria to initialize the branch processing.
3. In the Branch Stages tab:
a. Click New to open the Select MBC page.
b. Navigate in the folder directories and select the MBC that provides processing for the
Branch.
c. Complete the fields in the dialog panel. The content of the dialog panel depends on
the MBC you selected in the previous step.
d. Click OK.
3. On the Sentinel page, select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click Finish to save and close, or Cancel to close without saving.
1. Complete the fields on the first page:
l Name – Enter a name for the Inbound Activity. The name you enter here is used to
reference the Inbound Activity throughout Integration Manager.
l Analyze acknowledgements – Select this option if you want the integration engine
to analyze acknowledgments.
This only applies when you are sending EDIFACT messages, and defines what will
happen to acknowledgments received from the target application.
If you select this option, the integration engine sends the acknowledgment to the
application, and displays the acknowledgement in Message Log.
l Manage Functional Groups – Select this option if the Interchange contains
Functional Groups.
An Interchange can contain functional groups, each of which contains documents. The
integration engine can then process each functional group in a different way before
distributing them.
Alternatively, the interchange can contain documents directly, which limits further
processing.
l Generate acknowledgement – Select this option if you want to generate an
acknowledgment.
If you select this option, the integration engine sends an acknowledgment to the sender
of the Interchange.
2. Click Next.
3. On the Configure Sentinel page, select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click Next.
5. In the Functional Group Branch configuration page, complete the fields:
l Name - Enter a name for the Functional Group Branch.
l Description – Optionally enter a description of the Branch.
l In the Branch Criteria section, use the fields to set the EDIFACT message attributes and
values that serve as filtering criteria to initialize the Branch processing.
6. Click Next.
7. In the Document Branch configuration page:
l Name - Enter a name for the Document Branch.
l Description – Optionally enter a description of the Branch.
l In the Branch Criteria section, use the fields to set the EDIFACT message attributes and
values that serve as filtering criteria to initialize the Branch processing.
l Converter – Click Browse to select a converter component.
8. Click Next.
The wizard asks you if you:
l Want to add a new Functional Group Branch
l Want to add a new Document Branch
l Are done
Make a selection and click Next.
When you are done adding Branches, the wizard displays an EDIFACT Activity summary page.
9. If you are satisfied with the Activity structure, click Finish to save and quit the wizard.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Stages tab.
a. Click New to open the Select MBC page.
b. Navigate in the folder directories and select the MBC that provides processing for the
Stage.
c. Complete the fields in the MBC dialog panel. The content of the dialog panel depends on
the MBC you selected in the previous step.
d. Repeat this procedure until you have the necessary MBCs configured.
3. On the Sentinel tab, select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click OK to save and close, or Cancel to close without saving.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Branches tab.
1. To add a criteria-based processing Branch to the Activity, select a node of the Interchange
processing tree, and click New to add a new processing Branch to the tree.
2. In the Branch General tab:
l Name - Enter a name for the Branch.
l Description – Optionally enter a description of the Branch.
3. In the Branch Criteria tab, use the fields to set the EDIFACT message attributes and values
that serve as filtering criteria to initialize the branch processing.
4. In the Branch Stages tab:
a. Click New to open the Select MBC page.
b. Navigate in the folder directories and select the MBC that provides processing for the
Branch.
c. Complete the fields in the dialog panel. The content of the dialog panel depends on
the MBC you selected in the previous step.
d. Click OK.
3. Click Finish to save and close, or Cancel to close without saving.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Stages tab.
a. Click New to open the Select MBC page.
b. Navigate in the folder directories and select the MBC that provides processing for the
Stage.
c. Complete the fields in the MBC dialog panel. The content of the dialog panel depends on
the MBC you selected in the previous step.
d. Repeat this procedure until you have the necessary MBCs configured.
3. On the Sentinel tab, select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click OK to save and close, or Cancel to close without saving.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Branches tab.
1. To add a criteria-based processing Branch to the Activity, select a node of the Interchange
processing tree, and click New.
2. In the Branch General tab:
l Name - Enter a name for the Branch.
l Description – Optionally enter a description of the Branch.
3. In the Branch Criteria tab, use the fields to set the X12 message attributes and values that
serve as filtering criteria to initialize the branch processing.
4. In the Branch Stages tab:
a. Click New to open the Select MBC page.
b. Navigate in the folder directories and select the MBC that provides processing for the
Branch.
c. Complete the fields in the dialog panel. The content of the dialog panel depends on
the MBC you selected in the previous step.
d. Click OK.
3. On the Sentinel page, select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click Finish to save and close, or Cancel to close without saving.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Stages tab.
a. Click New to open the Select MBC page.
b. Navigate in the folder directories and select the MBC that provides processing for the
Stage.
c. Complete the fields in the MBC dialog panel. The content of the dialog panel depends on
the MBC you selected in the previous step.
d. Repeat this procedure until you have the necessary MBCs configured.
3. On the Sentinel tab, select a Sentinel logging setting:
l Log all events
l Log user-defined and erroneous events
l Log user-defined events only
4. Click OK to save and close, or Cancel to close without saving.
1. Select the General tab.
l Name – Enter a name for the Activity or accept the default.
l Description – Optionally enter a description of the Activity.
l Enable – Select this option to enable the Activity. If you want to proceed with the
definition of the Activity but cannot, for some reason, complete all the information
required, do not select Enable (otherwise, the wizard will ask you to fill in missing
information). You can complete the required information later.
l Hierarchical Messaging Task – Select a Hierarchical Messaging Task, using the
Browse tool.
2. Select the Stages tab.
3. Click New to open the Select MBC page.
4. Navigate in the folder directories and select the MBC or DML Map that provides processing for
the Activity Stage.
5. Complete the fields in the dialog panel. The content of the dialog panel depends on the MBC or
DML Map you selected in the previous step.
6. Click Finish to save and close, or Cancel to close without saving.
Deploy an Integration
After you create an Integration, you must deploy it to a B2Bi server in order to use it.
Before you deploy the Integration, the MBCs that the Integration uses must be registered on the
target run-time server. If any required MBC is missing, it is not possible to deploy the Integration.
When deploying to a cluster, deploying to one node deploys the Integration to the cluster.
The HMEs used by Integrations are deployed with the Integrations, with the following default
configuration: 1 Processing Engine, 5 sessions.
l Integration Manager admin rights – As an Integration Manager user you must have Integration
Manager admin rights. Integration Manager user rights are set in the User Manager tool. See User
Manager on page 239.
l Run-time server user access rights – You must have a valid user name and password for access to
the server where you are deploying the Integration. You must also have Integration deployment
rights associated with your account. Depending on the server configuration, your user access
rights may be managed in PassPort or in the B2Bi user interface.
l The B2Bi Entrypoint is available in the B2Bi UI in the Component configuration page drop-down
list. You can add the Integration by selecting its B2Bi Entrypoint. When you configure the
Component that uses the Integration, you can define the Integration as one of the following
types:
o Document
o Map
o Post-enveloping
l The Integration's B2Bi Exitpoints (if any are specified) are represented as Outputs for
Components that are used by the Integration.
l After you configure an Integration's B2Bi Entrypoint as a Component in the B2Bi UI, you can use
the Integration in Services and Metadata Services (as any other Component).
l You can manage deployed HMEs in the System Profile Manager tool. See Use System Profile
Manager on page 214.
Prerequisite
Before you can remove an Integration from a server, you must make sure that it is not used in any
B2Bi component on that server.
Procedure
1. In the Integration Manager main page, select the Runtime Server Browser tab.
2. In the left pane node tree, click the plus sign of a node to display the list of Integrations that
have been deployed to the node.
3. Right-click the name of the Integration you want to remove, and from the context menu, select
Remove from deployment.
4. Integration Manager prompts you to confirm the removal operation. Click OK to continue and
remove.
Edit an Integration
To edit an Integration:
1. Select the name of the Integration in the Configuration Browser tab of Integration Manager.
2. Select File > Open.
3. Make changes in any field of any integration tab as required.
4. Click OK.
Edit an Activity
To edit an Activity:
1. Double-click the name of an Integration in the Configuration Browser tab of Integration
Manager.
2. Double-click the name of the Activity you want to modify in the Activity frame.
3. Make changes in any field of any Activity tab as required.
4. Click OK.
Edit a Stage
To edit a Stage:
1. Double-click the name of an Integration in the Configuration Browser tab of Integration
Manager.
2. In the Activity frame, double-click the name of the Activity that contains the Stage you want to
modify.
3. Select the Stages tab.
4. Double-click the name of the Stage you want to modify.
5. Make changes in any field of any Stage tab as required.
6. Click OK.
Cut-and-paste behavior
In general, you can use the standard cut-and-paste tools for moving objects between Integration
Manager folders. However, cutting and pasting Integrations is a special case. You can
copy/paste an Integration between two projects, but you cannot cut/paste an Integration between
projects. This behavior protects against the accidental loss of important data.
To safely move an Integration from one project to another:
1. Select an Integration in the first project.
2. Copy the Integration.
3. Select an Integration folder in the second project.
4. Paste the copied Integration.
5. Check that the Integration and dependencies are correctly copied to the desired destination.
6. Select the original Integration in the first project and delete it.
Delete an Integration
1. In the Integration Manager main page, select the Configuration Browser tab.
2. In the left pane, right-click the name of the Integration you want to delete.
3. From the context menu, select Delete.
XIB migration
Overview
Integration Manager provides tools for migrating an XIB configuration to B2Bi.
The arrangement and handling of information differ between XIB and B2Bi. XIB combines
integration and transport functionality in a single architecture. B2Bi comprises a pair of engines
which provide integration and transport services respectively, and is built to enable the easy
deployment of maps and map-processing resources.
Migration tasks
To migrate an XIB dataset:
1. Enable XIB migration on page 171
2. Add an XIB migration project on page 172
3. Import an XIB dataset to Integration Manager on page 174
When you import an XIB dataset to Integration Manager, two processes occur:
l The import tool creates the objects that define the integration processing.
l The import tool collects the data that enables you to generate the application transports
that were formerly used in XIB.
4. Copy resources to the runtime server on page 175
5. Deploy an XIB integration configuration to B2Bi on page 176
6. Deploy an XIB transport configuration to B2Bi on page 177
The following figure illustrates the migration of an XIB dataset to a B2Bi environment using
Integration Manager.
The objects that are generated in the Application folder during an XIB migration contain sets of
data (properties) that are linked to one or more integrations. The properties that are contained in
these Application objects are necessary for the deployment of the XIB dataset to a B2Bi Server.
After importing an XIB dataset to Integration Manager, you can edit and delete Application objects,
or add new objects of your own.
When an Activity in one Integration refers to an Activity in a different Integration, the entire folder
structure up to the common folder and all dependent integrations are copied.
l You have the option to activate applications and properties as part of the project creation
wizard.
l The layout of the integration manager screen is modified to accommodate the display of
applications.
l The Integration Manager menu enables you to import XIB V2 datasets (normally this option is
disabled).
Prerequisite
Enable XIB migration on page 171
Procedure
1. In Integration Manager, select File > New > Project to open the New Project wizard.
2. In the Project Details page, complete the fields:
l Name – Enter the name of the project as you want it to display in Integration Manager.
l Enable Migration Features – Select this option to enable XIB migration features for
this project directory in Integration Manager.
l Description – Optionally add a project description. The text you enter here appears in
the Integration Manager UI when you hover the mouse over the project name.
3. Click OK.
A message asks you to confirm that you want to activate the migration features on this project
directory. Click OK to continue.
Integration Manager generates a new project with the name you entered in the configuration
screen. The project directory includes an Applications sub-directory.
l Integrations
l MBC Registrations (Not the actual MBC executables)
l Task definitions
When you export a dataset from XIB, the software creates a new directory. This directory contains a
set of files that contain all the configuration data from the dataset. You can copy an exported
dataset by copying the entire directory.
Place the exported dataset in the $B2Bi_SHARED_LOCAL/cfgarchives/migration folder on the
B2Bi server.
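The placement step above can be sketched in Python. This is a minimal sketch, not part of any B2Bi tooling: only the cfgarchives/migration subpath comes from the guide, while the shared-local default and the export directory are illustrative stand-ins.

```python
import os
import shutil

# Hedged sketch: stage an exported XIB dataset in the migration folder.
# Only the cfgarchives/migration subpath is documented; the other paths
# here are illustrative placeholders, not real install locations.
shared_local = os.environ.get("B2Bi_SHARED_LOCAL", "/tmp/b2bi_shared")
exported_dataset = "/tmp/exported_xib_dataset"   # hypothetical export directory

# Stand-in for a real exported dataset (a directory of configuration files).
os.makedirs(exported_dataset, exist_ok=True)
open(os.path.join(exported_dataset, "dataset.cfg"), "w").close()

migration_dir = os.path.join(shared_local, "cfgarchives", "migration")
os.makedirs(migration_dir, exist_ok=True)

# Copy the entire export directory, since the guide notes that an exported
# dataset is copied by copying its whole directory.
dest = os.path.join(migration_dir, os.path.basename(exported_dataset))
shutil.copytree(exported_dataset, dest, dirs_exist_ok=True)
```

Copying the directory as a whole preserves the full set of configuration files that make up the dataset.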
Prerequisites
l XIB migration is enabled for the Integration Manager. See Enable XIB migration on page 171.
l You have created a project in Integration Manager, with migration features enabled.
l A dataset is available to be imported. You may have exported a dataset from an XIB environment
for use in B2Bi or you may have exported a dataset from Integration Manager. You can export
and import a dataset as either a folder or a zip file.
See your XIB documentation for information on exporting datasets from XIB.
Procedure
1. In the left pane of the Integration Manager interface, in the directory tree structure, expand the
directory tree of a project that has migration features enabled.
2. Select the Integrations folder of the project. You must select an Integrations folder in order
to continue with this procedure.
3. Select File > Export/Import > Migration > Import XIB Dataset Data and select whether
you are importing from a folder or from a zip file.
4. Use the navigation screen to search your network and locate the XIB dataset folder or zip file that
you want to import. Select the dataset and then click Select.
5. An Import Details page displays a list of the objects of the dataset that you are about to import
to Integration Manager.
6. Click Import to continue. Depending on the size of the dataset, the import operation may take
several minutes.
These objects may include:
l Integrations with entry points, exit points, and multiple activities.
l Applications
l Tasks
l Migration Objects
You can perform the following tasks with the imported dataset:
l Edit or add elements to the imported integrations.
See the special note: Copy/paste XIB imported Activities in Integration Manager on page 170.
l Deploy an entire project containing the imported dataset configuration to a B2Bi runtime server.
l Divide the elements of a single project into multiple projects for partial deployment to a runtime
server.
l Deploy the integrations that comprise the project individually to a B2Bi runtime server.
l Deploy the migrated transport configuration to a B2Bi runtime server.
Paste resources to the following directory in the target runtime environment:
$B2BI_SHARED_LOCAL/4edi/component
For information on how to deploy XIB-migrated transports, see Deploy an XIB transport
configuration to B2Bi on page 177.
Prerequisites
You must have already imported an XIB dataset to Integration Manager. See Import an XIB dataset to
Integration Manager on page 174.
Procedure
1. In Integration Manager, in the left pane (directory pane), either right-click a project that you
imported as a dataset from XIB, or right-click an Integration within the imported XIB project.
2. From the context menu, select Deploy/Update.
3. In the "Deploy to server" wizard page, enter the name of the target B2Bi Server for deployment.
4. Click OK.
Integration Manager deploys the integrations to the selected B2Bi environment in the form of
Resources.
In the Resource drop-down field you can now view available Resources for the selected component
type, and confirm that your XIB configuration elements are available in B2Bi.
Prerequisites
You must have already imported an XIB dataset to Integration Manager, and this dataset must include
migrated application transport data.
Procedures
Message Log
Use Message Log to view integration engine message-processing information, such as
transfer status, SyncPoints, and so on. Although Message Log includes reprocessing functions, we
recommend the use of EDI Tracker for message reprocessing.
Message Log is for message-processing information. To view logs of system-related errors on the
integration engine, see Trace Viewer on page 234.
Each message is associated with a unique LogId. A message going through the integration engine
may change its shape (mapping, splitting, enveloping). For each new form of the message a new
LogId is obtained. When you look at the Message Log, Hierarchical view, it is possible to see the
complete chain of LogIds and log events and how each LogId is related to its child-LogIds and/or
parent-LogIds.
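The LogId chaining described above can be modeled with a small sketch. The data structures and function names here are illustrative only, not part of any B2Bi API; they show how each new form of a message gets its own LogId linked to parents and children.

```python
# Hedged sketch of LogId chaining: each new form of a message (after
# mapping, splitting, enveloping) receives a new LogId linked to the
# LogId of the form it came from.
from collections import defaultdict

parent_of = {}                 # child LogId -> parent LogId
children_of = defaultdict(list)

def link(parent, child):
    """Record that `child` was produced from `parent`."""
    parent_of[child] = parent
    children_of[parent].append(child)

# Example: a message is received (LogId 1), mapped (LogId 2), then split
# into two envelopes (LogIds 3 and 4).
link(1, 2)
link(2, 3)
link(2, 4)

def chain_to_root(log_id):
    """Walk backwards towards the originally received message."""
    chain = [log_id]
    while chain[-1] in parent_of:
        chain.append(parent_of[chain[-1]])
    return chain

print(chain_to_root(4))        # [4, 2, 1]
print(children_of[2])          # [3, 4]
```

This mirrors what the Hierarchical view displays: the complete chain of LogIds and how each relates to its parent and child LogIds.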
Normal message-related errors are logged in the Message Log. For example:
l Parsing failures
l Classification failures (no matching criteria)
l Send failures
l Failed retrieve polling (FTP server down, directory missing, POP server down, and so on)
Some of the above log events also yield an Alert, normally only the first one of many.
The Message Log displays the logged information (including the relationships between messages).
Each log entry represents a message for which events were logged. The exception is an entry that
represents a failure to receive a message.
Favorite searches (left pane) – Searches that were saved.
Log entries (top right pane) – Entries returned by the last search.
Log entry events (bottom right pane) – Events for the selected entry.
Favorite Searches
Favorite Searches are searches that have been created and stored in the system by users. A search
can be viewed as a subset of all existing messages. For example, a search can be specified as all log
entries within the last 24 hours. This search would then display in the Log entries pane only the
messages that match those specific criteria.
Log Details Description
Creation Date – The date and time at which the first event was created.
Modification Date – The date and time at which the last event was created.
ID – Log ID. See detailed description below.
Integration – The name of the integration in which the event occurs.
Activity – The name of the Activity within the integration in which the event occurs.
Description – The description shows the first event associated with this log entry.
The icon shows the highest grade of classification severity of the associated events. For
example, if an entry has two events associated with it (one of classification severity "Error"
and one of classification severity "Fatal"), the classification severity of the entry is shown
as Fatal.
Event classifications
There are six event classification severity levels.
OK
Debug
Information
Warning
Error
Fatal
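The severity-aggregation rule above (the entry icon shows the highest severity among the entry's events) can be sketched as follows. The six level names come from the guide; the exact ranking, in particular of OK relative to Debug, is an assumption for illustration.

```python
# Hedged sketch: an entry's displayed severity is the highest severity
# of its associated events. The ordering below (lowest to highest) is an
# assumed ranking of the six documented levels.
SEVERITY_ORDER = ["OK", "Debug", "Information", "Warning", "Error", "Fatal"]
RANK = {name: i for i, name in enumerate(SEVERITY_ORDER)}

def entry_severity(event_severities):
    """Return the highest-ranked severity among an entry's events."""
    return max(event_severities, key=RANK.__getitem__)

print(entry_severity(["Error", "Fatal"]))    # Fatal
print(entry_severity(["OK", "Warning"]))     # Warning
```

The first call reproduces the example from the guide: an entry with one Error event and one Fatal event is shown as Fatal.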
Active means that the processing is still under way. Inactive means that the processing has finished.
Command Description
Show Hierarchy... Select this command to display the Hierarchical view display window. You can use
this window to explore the parent/child relationships between log entries.
Resend This command is active only when you right-click on a SynchPoint-type log entry.
Use this command to resend a failed message transfer.
Clear Select this command to delete the selected entry from the entry list.
Columns Select this command to show/hide any of the Log entry
pane columns.
Date – The date and time when the event was created. The icon shows the event classification.
Type – The event type.
Code – The event code. The interpretation of this code depends on the event type.
Integration – The name of the (dynamic) integration in which the event occurs.
Activity – The name of the Activity within the (dynamic) integration in which the event occurs.
Description – Description of the event type.
You can also choose to display log entries in a hierarchical view, refer to Viewing details of a log
entry.
If you have selected to generate SyncPoint in your configuration you can resend your message from
the View menu. (Refer to Integration Manager for more details on SyncPoint). Refer to Re-sending
messages for more information.
Options
The Options that can be set affect the format of the date display.
Miscellaneous
Option Description
Allow successful log entries to be marked as manually corrected – Administrative setting: allow
finished, successful messages to be marked as manually corrected.
Expand Log Entry By default – Changes the default behavior for displaying Log Entries in the
Message Log.
Show deleted activity information – Show deleted activity information in the Message Log.
Memory
Option Description
Entries per section – Each search returns max 500 entries (default). These are sliced up in
sections. With this setting, you can change the number of entries per section.
Batch request entries count – Each search returns max 500 entries (default). With this setting,
you can change the maximum number of returned results.
Warn if a log entry exceeds X bytes – Set the threshold value for the Integration Engine to
provide a warning for large entries.
Reduce memory consumption
Log Date
Pick the date format to be displayed in log entries.
Statusbar Date
Pick the date format for status bar.
Item Action
Only active entries – Active entries are those that are still being processed. If you wish only to
view such entries, select the Only active entries check box.
Add Search – Add an additional search.
Save As – Click Save As to save the search settings under a new favorite search. In the window
displayed next, enter a name for the search you want to save. This search then appears in the
Favorite Searches pane.
Save – Click Save to save the search setting when editing a favorite search.
Absolute Range – If you want to search a specific calendar period, select Absolute Range, then
From and To.
Relative Range – If you want to search backwards in the past, select Relative Range. Then select
the time period using the drop-down list boxes for Days and Time. The period can range from 1
second to 100 days.
Remove Search – Removes a previously added search.
To – Select the end year, month, day, and time for the search.
Severity – Select the Severity check boxes for the event severities search. If you do not select
any of the check boxes, the Integration Engine searches for entries that have any severity. If you
select one or more check boxes, the Integration Engine searches for entries that have the given
severities.
From – Select the start year, month, day, and time for the search.
Close – To close the dialog box, click Close.
Count – To display the number of entries that match the search, click Count. The Integration
Engine displays the number of entries that match the search criteria.
Apply – To perform the search, click Apply. The search results are displayed in the log entry list
in the Message Log window.
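The Relative Range option above can be illustrated with a sketch that converts a relative period into a concrete From/To pair for the search. The validation bounds come from the documented 1-second-to-100-days limit; the function itself is illustrative, not a B2Bi API.

```python
# Hedged sketch: a Relative Range search looks backwards from now by a
# period between 1 second and 100 days, yielding a (From, To) pair.
from datetime import datetime, timedelta

MIN_RANGE = timedelta(seconds=1)
MAX_RANGE = timedelta(days=100)

def relative_range(days=0, seconds=0, now=None):
    """Convert a relative period into absolute (From, To) search bounds."""
    delta = timedelta(days=days, seconds=seconds)
    if not (MIN_RANGE <= delta <= MAX_RANGE):
        raise ValueError("period must be between 1 second and 100 days")
    now = now or datetime.now()
    return (now - delta, now)

start, end = relative_range(days=1)
print(end - start)             # 1 day, 0:00:00
```

An Absolute Range search, by contrast, would supply both bounds directly as calendar dates.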
Favorite Searches
1. Double-click the required search name in the Favorite Searches pane.
The search results are displayed in the log entry list.
2. Click an entry in the log entry list to display the events in the lower right pane.
In the Message Log window, the search results are listed in the log entry list.
If you perform a search and more than 500 entries match, the integration engine will:
1. Sort the entries according to the default sort order.
2. Divide the entries into batches. Each batch, except possibly the last batch, will contain the
maximum number (500) of entries.
3. Display the first batch in the log entry list.
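The three steps above can be sketched as follows. The batch size of 500 comes from the guide; the sort key is an assumption, since the guide only states that the default sort order is used.

```python
# Hedged sketch of result batching: sort the matching entries, slice them
# into batches of at most 500, and display the first batch.
BATCH_SIZE = 500

def batch_entries(entries, batch_size=BATCH_SIZE):
    """Sort entries and slice them into fixed-size batches."""
    ordered = sorted(entries)  # default sort order; key is an assumption
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

batches = batch_entries(range(1234))
print(len(batches))            # 3
print(len(batches[0]))         # 500  (the batch displayed first)
print(len(batches[-1]))        # 234  (only the last batch may be smaller)
```

With 1234 matching entries, the engine would show the first batch of 500 and let you page to the remaining batches with the toolbar buttons.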
To display batches other than the current batch, use the following toolbar buttons:
Button Description
Display first batch
Display previous batch
Display next batch
Display last batch
Log entry hierarchy (left pane) – Shows the parent and children (if any) of the selected entry.
You can expand and compress the diagram to show or hide the children of the entry by clicking the
plus and minus signs.
Log entry events (bottom right pane) – Lists the events associated with the entry that is selected
in the diagram.
The display of the log entry hierarchy can be altered to facilitate tracing of errors. In very complex
message handling, for example an EDIFACT message with multiple functional groups and
interchanges, the hierarchy is not displayed in full and no connection paths between parents and
children are shown.
You can set a particular entry as the root entry, in other words the “top” of the tree. You can also
view from the selected entry in different directions. For example, from a particular entry, you can
look backwards in the processing, towards the parent, or forwards, towards the children.
Properties tab
Item Description
Description Description of the activity. This can either be generated by the Integration Engine, or be
connected to the name of the integration. In this case, Message passthrough is the name of
the integration, and Send is the name of the activity.
Code The code number, if one has been defined.
Severity The severity of the event, used for identifying the severity of an error. Under normal
circumstances, the severity is classed as Info.
Activity The activity identification number assigned by the Integration Engine.
Integration Name of the integration to which the message event refers.
Date The date and time when the activity was carried out.
Type The type of message. This refers to the tabs in the dialog box. In this case, there is only one
tab, therefore, there is only one listing under the Transfer message. If there were two tabs,
there would be two listings.
Item Description
Transfer ID The transfer ID for the message, unique for the message, preceded by the host name.
Attributes The listing of attributes in the message that can be viewed.
Size Size of message.
View content Click View to view the details of the selected attribute in the format selected from the
as drop-down list box.
SyncPoint
Another type of general log entry is called a SyncPoint.
From the SyncPoint tab, you can re-send the message, or edit the contents of the message before
re-sending. You can also select another activity to resend the message from. To be able to use this
functionality you must have SyncPoints.
To edit the message:
1. Click the Edit button.
2. Edit the contents of the message in the dialog box, and click OK.
Only user categories User and Administrator are authorized to perform a re-send or to edit a
SyncPoint.
To perform a recursive search:
Re-sending messages
To be able to resend a message, a synchronization point must be created for the message. A
SyncPoint event is automatically created for those messages encountering an error during
communication transfer or other errors occurring in the integration engine framework. Errors
occurring in user MBCs do not automatically generate synchronization points. For errors occurring in
user MBCs, the MBC_HIERCHMSGENV.CreateSyncpoint statement is used to create a synchronization
point. In the Message Log, you can search for these messages using the SyncPoints search dialog
box.
1. Open the SyncPoint event from where you want to re-send the message.
This is done in the event window for the message. See Viewing details of a log entry.
2. Browse for any activity in which to re-send the message.
This is useful for an instance when a message could not be classified to an activity, allowing
you to manually select which activity to use for further processing.
3. Double-click the SyncPoint log event.
4. Click Browse and select the activity to re-send.
1. In the log entry window of the Message Log, select those messages you want to re-send.
2. Select the View > Resend menu option to re-send the selected messages.
The messages are re-sent from the last SyncPoint event logged for each respective message.
When re-sending a log event that is still active, log events from the original message and the re-sent
message are mixed in the log event window. You should therefore avoid re-sending active log
entries. Wait until they are inactive before re-sending the message.
Reprocess a message
When you reprocess a message, the integration engine restarts processing at the point where the
original message was received. All message handling in the integration engine is repeated.
To reprocess a message:
1. From a search results list, select the message you want to reprocess.
2. From the menu bar, select File > Reprocess message.
Alternatively, you can right-click the message entry in the list and then select Reprocess message
... from the context menu.
1. From the list of search results, select the item to correct.
2. From the menu bar, select File > Mark as manually corrected.
Alternatively, you can right-click the item in the list, and then select Mark as manually
corrected… from the context menu.
Metadata Browser
Use Metadata Browser to display metadata retrieved from one or more systems.
Limitation
The Metadata Browser plugins for API Connector and Datamapper ADF should not be used. They will
be removed from the product in a future release.
To build up metadata in the source metadata pane, do one or both of the following:
l Double-click an expandable item (preceded by a plus sign) in the tree. This will display the items
it contains.
l Select a non-expandable item (not preceded by a plus sign) in the tree. On the Source menu,
click Access Site. This will add a plus sign to the item, indicating that its items have been
loaded and that it can be expanded.
For some items, a dialog box opens, prompting you to select a subset of data for the next level. This
enables you to select one or more items rather than download details for all items.
For other items, the Metadata Browser must retrieve a large amount of metadata. In this case, the
metadata is retrieved asynchronously and a progress bar appears.
You can copy one or more items to the imported metadata pane. If a subset of one of the items is
required, you can select each unnecessary item and click Import > Delete Item.
Metadata types
Metadata Browser displays two metadata types:
l Default metadata
l Application metadata
Default metadata
The table below lists the types of metadata that are automatically provided.
Type Description
Data Definition Tool Used for converting models created by the Data Definition Tool to ADF files.
Datamapper ADF Used for copying records from existing ADF files.
Datamapper EDI Used for converting Datamapper EDI tables to ADF files.
Application metadata
Some application connectors, such as the XML Application Connector, support metadata retrieval.
After you install a connector, new metadata types are available automatically in the Metadata
Browser.
Metadata properties
To see the properties for an item, select it and click View > Properties. A dialog box displays the
list of properties for that item.
Menu options
The following table describes the tasks you can perform using the menu options in Metadata
Browser.
Option Description
File > New – Clears the right (imported metadata) pane.
File > Open – Enables you to select an existing client Datamapper ADF file and import the data
from that file into the imported metadata pane.
File > Save – Saves the data from the imported metadata pane to a Datamapper ADF file. If the
data was not imported from an existing ADF file or saved previously, you are prompted to provide
a file name.
File > Save As – Saves the data in the imported metadata pane to a Datamapper ADF file and
enables you to specify a new file name.
File > Exit – Closes Metadata Browser.
View > Properties – Displays a dialog box that contains a list of properties for the selected item.
Source > Access Site – Retrieves the next level of metadata for the selected metadata item in the
source metadata pane.
Source > Clear Item – Removes all child metadata items from the selected item in the left (source
metadata) pane.
Import > Add Item – Copies the selected source metadata item to the imported metadata pane. If a
data tree structure already exists in the imported metadata pane, then the new tree is appended to
that structure.
Import > Add Children – Copies the selected source metadata children to the imported metadata
pane.
Import > Delete Item – Removes the selected imported metadata item and all of its children from
the display.
Performance Monitor
Use Performance Monitor to view the performance of the integration engine.
l Task Monitor tab on page 194
l Queue Monitor tab on page 199
Both of these tabs provide real-time insight into the inner workings of the integration engine. In
addition, statistics are collected from the moment the integration engine was started.
In addition, you can export reports from Performance Monitor.
Alerter tasks
The Alerter sub-tab displays information about Alerter tasks. The Alerter sub-tab contains two fields:
Field Description
Access time How much time it takes to access this server
Number of alerts The number of Alerts registered for this server
The Task References sub-tab shows which other tasks depend on this server task.
Field Description
Parent tasks The Parent tasks column typically displays the task or machine on which the
selected task is running.
Task The task column displays the name of the task selected in the left column.
Referenced The Referenced tasks column displays a list of tasks that are either used by or
tasks use the selected task.
Filer tasks
The Filer sub-tab displays information about Filer tasks. The Filer sub-tab contains two fields:
Field Description
Access time – Indicates how fast the filer server can process requests. The access time is
calculated as an average of 100 authentication requests. A high access time might indicate system
disk saturation or too many tasks using this task.
Configuration Access time – Indicates how fast the filer server can get access to the requested
information.
The Task References sub-tab shows which other tasks depend on this server task.
Field Description
Parent tasks The Parent tasks column typically displays the task or machine on which the
selected task is running.
Task The task column displays the name of the task selected in the left column.
Referenced The Referenced tasks column displays a list of tasks that are either used by or
tasks use the selected task.
Logger tasks
The Logger sub-tab displays information about Logger tasks in two fields:
Field Description
Access time – How much time it takes to access this server
Active Entries – The number of entries that are in the system and are "waiting" for something else to happen before they become inactive
The Task References sub-tab shows which other tasks depend on this server task.
Field Description
Parent tasks – Typically displays the task or machine on which the selected task is running.
Task – Displays the name of the task selected in the left column.
Referenced tasks – Displays a list of tasks that are either used by or use the selected task.
l Disk Access Time (local disk)
o Synchronized – each time a file is written the file content is synchronized with the physical
disk
o Unsynchronized – indication of how fast content can be written to the disk buffers of the
operating system
Disk access time is calculated as the average time for writing and deleting 100 files of 10 KB.
l Free Disk Space (CORE_ROOT / CORE_DATA / CORE_LOCAL)
Performance Monitor displays the following information for this task:
l Disk Access Time (local disk)
o Synchronized – each time a file is written the file content is synchronized with the physical
disk
o Unsynchronized – indication of how fast content can be written to the disk buffers of the
operating system
Disk access time is calculated as the average time for writing and deleting 100 files of 10 KB.
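The measurement described above can be sketched in a small shell script. This is an illustration only — the probe directory, the dd invocation, and the use of sync for "Synchronized" mode are assumptions mirroring the description, not the integration engine's actual probe.

```shell
#!/bin/sh
# Write and delete 100 files of 10 KB each, then report the average
# elapsed time per file, as in the disk access time description above.
dir=$(mktemp -d)
start=$(date +%s)
i=1
while [ "$i" -le 100 ]; do
  dd if=/dev/zero of="$dir/probe_$i" bs=1024 count=10 2>/dev/null
  sync                 # "Synchronized": flush file content to the physical disk
  rm "$dir/probe_$i"
  i=$((i + 1))
done
end=$(date +%s)
rmdir "$dir"
echo "average per file: $(( (end - start) * 1000 / 100 )) ms"
```

Dropping the sync call approximates the "Unsynchronized" variant, which only measures how fast content reaches the operating system's disk buffers.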
Field Description
MFC cache size – This field displays the number of Message Flow Components (MFCs) that the Processing Engine tries to keep loaded at the same time. A high cache number improves performance at the cost of higher memory and resource usage.
Enable time profiling on MBCs – Time profiling indicates:
l How often the MBCs are called
l The execution time for each MBC
Note: Time profiling adds overhead for measuring processes, so it should be avoided in production environments.
MBC statistics table – This table displays information about Message Flow Components (MFCs) running in the selected Processing Engine Task. The information is displayed in the following columns:
l Name: Name of the MFC as defined in the Component Registry
l Status: Whether or not the MFC is currently loaded in memory
l Total used: How many times the MFC has been loaded
l Exec. count: How many times the MFC has been invoked
l Exec. time: Total time the MFC has been executing.
This information is only available if you have enabled time profiling.
The Processing Engine entries provide a detailed view of the components that have been executed within a certain processing engine and of the components that have been cached (loaded in memory). For each component, the view shows how often the component has been loaded (Total used) and how often it has been invoked (Exec. count). The execution time shows the total amount of time the component has been executing. To get this information, time profiling must be enabled first:
After running files through the system, more details appear on the Task Monitor tab.
Table tasks
The Table sub-tab displays the following fields containing information about the Table task:
Item Description
Server uptime Table server uptime
Number of clients Number of Table server clients
Number of open files Number of open files on the Table server
Item Description
Added table entries Number of table entries added on the server
Deleted table entries Number of table entries removed from the server
Read table entries Number of table entries that have been read
Active table entries Number of table entries active
Oldest table entry The age in minutes of the oldest table entry
Access time Time it takes to access this server
The Task References tab shows which other tasks depend on this server task.
Field Description
Parent tasks – Typically displays the task or machine on which the selected task is running.
Task – Displays the name of the task selected in the left column.
Referenced tasks – Displays a list of tasks that are either used by or use the selected task.
Timer tasks
The Timer sub-tab displays the following fields containing information about the Timer task:
Field Description
Server uptime Timer server uptime
Server idle time Timer server idle time
Number of clients Number of Timer server clients
Number of open files Number of open files on the Timer server
Added timers Number of timers added on the server
Removed timers Number of timers removed from the server
Expired timers Number of expired timers
Active timers Number of active timers
Field Description
Last expire time Time when last timer expired
Access time Time it takes to access this server
The Task References tab shows which other tasks depend on this server task.
Field Description
Parent tasks – Typically displays the task or machine on which the selected task is running.
Task – Displays the name of the task selected in the left column.
Referenced tasks – Displays a list of tasks that are either used by or use the selected task.
For each queue, the following items are tracked:
Item Description
Task – The name of the task as it is defined in the Core tab of the Task Monitor properties window. The task name is followed by the queues related to the task.
Queue Task – The Queue Task associated with this queue
Entries – Number of entries being processed or waiting to be processed
Exist Mean Time – Average time currently queued items have been waiting in the queue to be processed
Exist Min. Time – Minimum time an entry out of all currently queued items has been waiting in the queue to be processed
Exist Max. Time – Maximum time an entry out of all currently queued items has been waiting in the queue to be processed
History Mean Time – Average time all queued items have been waiting in the queue to be processed
History Min. Time – Minimum time an entry out of all queue entries has been waiting in the queue to be processed
History Max. Time – Maximum time an entry out of all queue entries has been waiting in the queue to be processed
Item Description
Integration – The name of the Integration Process that contains the Activity that has called the processing task
Activity – The name of the Activity that has called the processing task
Waiting – The number of queue entries that are waiting for processing
Processing – The number of entries that are currently being processed
In the page that appears, you can set various non-permanent settings for the Task Monitor.
Reports
The Task Monitor can produce two report types, which can either be copied to the clipboard or
exported as CSV files.
These statistics are calculated from the moment the integration engine was started.
Queue Monitor
Use Queue Monitor to view lists of queues on a selected integration engine.
l Queue
l Management
l Settings
Queue
The left pane of the Queue tab shows the queue servers running within the integration engine. The
order of the queues can differ from one installation to another, as well as on the same installation if
you removed the queue server, performed a force rebuild, and restarted all tasks.
The queues inside a queue server are listed in the order in which the queue names were created. This order is effectively random, because the position is determined by the start sequence and the IDs of the tasks.
After double-clicking the entry (or a single left click on the + sign in front of the entry), the queues
managed by this queue server are shown. Each queue entry is prefixed with the number of active
entries in the queue.
Queue Operations
Statistics – Displays the statistics of a specific queue:
Item Description
Number of messages in queue – Number of entries being processed or waiting to be processed
Current Mean Time – Average time currently queued items have been waiting in the queue to be processed
Current Min. Time – Minimum time an entry out of all currently queued items has been waiting in the queue to be processed
Current Max. Time – Maximum time an entry out of all currently queued items has been waiting in the queue to be processed
Total Mean Time – Average time all queued items have been waiting in the queue to be processed
Total Min. Time – Minimum time an entry out of all queue entries has been waiting in the queue to be processed
Total Max. Time – Maximum time an entry out of all queue entries has been waiting in the queue to be processed
Commit history – Shows the history of commits on the queue.
Clean queue – Deletes all entries from the selected queue. All serialized entries are put into the audit trace for reference.
Remove queue – This function has a special purpose: it deletes the queue from the disk. Use it only on deleted queues (as displayed on the queue server). When a task is deleted in the integration engine and the queue remains on disk, you can recover the disk space with this function. All entries from that queue are deleted forever. Caution: Be very careful when using this function because it cannot be undone.
View modes
l Original – This mode is the original mode and kept for backward compatibility. The order of the
queues can differ from one installation to another, as well as on the same installation if you
removed the queue server, performed a force rebuild, and restarted all tasks.
The queues inside the queue server are listed in the order in which the queue names were created. This order is effectively random, because the position is determined by the start sequence and the IDs of the tasks.
l Alphabetic sort on Queue Server – Sorts the queues in alphabetic mode (A-Z) inside the queue
servers.
l Sort by Queue Type – Sorts the queues (over the various queue tasks) by type. The types are derived from the internal functions of the queues (which can be determined from the queue directory). Those types are: MFP, HME, TG, and, for trans.-adapters (comm.-adapters), Expire, Receive, Send, and Trigger.
The queues are sorted alphabetically within each queue type.
l Sort by Functionality – Sorts the queues (over the various queue tasks) by functionality. This sorting is based on the type of processing or communication the queue supports. The types are: Hierarchical Message Processor (HME), Message Feed Processor (MFP), Trans-adapters (comm.-adapters), and Transfer Gateway.
The functions are sorted alphabetically within each queue type.
l Alphabetic sort using prefixes on Queue Server – To use this special type of sort, select the
Management tab.
Different profiles display in the top half of the split window. The content of the selected profile is
displayed as a tree in the lower half of the split window. Each profile contains folders which will
be displayed as servers in the first tree view (for example: old, new, and the rest of the queues).
Each folder contains a number of prefixes. Those prefixes are used to populate the folders from
the queue tab. The separator between the prefix and the name should be the underscore “_”.
This option sorts the queues (over the various queue tasks) by name (A-Z).
l Sort by depth of the queues on Queue Server – This type of sort is used by those who use the
Queue Monitor as a monitoring device to see congestion on the system. This type of monitoring
requires manual refresh of the system to avoid a slow-down from over-querying the system.
o Each manual refresh recreates the view and sorts the queues using the number of contained
messages as criteria.
o When you click Refresh, the entire view is rebuilt and the queue servers are displayed in a
collapsed format.
Column Description
Task – The name of the task as it is defined in the Core Task tab of the Axway Server properties window. The task name is followed by the queues related to the task.
Queue Task – Displays the Queue task that is related to each queue.
Entries – Displays how many entries are waiting or are being processed for each queue.
Exist Mean Time – Displays the mean time that the entries currently in the queue have been waiting in the queue.
Exist Min Time – Displays the minimum time that the entries currently in the queue have been waiting in the queue.
Exist Max Time – Displays the maximum time that the entries currently in the queue have been waiting in the queue.
History Mean Time – Displays the mean time that all queue entries processed since the queue task was started have waited in the queue.
History Min Time – Displays the minimum time that all queue entries processed since the queue task was started have waited in the queue.
l The Delete function deletes the entry.
l The Syncpoint function deletes the entry and creates a syncpoint entry in the message log with
the copy of that queue entry. This is used when an entry blocks a processing operation and the
user wants to save the data and re-send the modified data.
l The Syncpoint without delete function creates a SyncPoint entry in the message log with the
copy of the queue entry. This is used to create a copy of that data into the Message Log for
future resend or analysis.
l The Change priority function changes the priority of an entry. This is used especially with dynamic loading of MBCs, to prioritize an entry.
l The Open function opens the entry in read-only view.
Settings
Caution: If this value is set too large, the entire queue server may crash if messages are too large in
relation to allocated memory.
The color of the queues is changed with the following algorithm:
l White, if the queue is empty
l Green, if the queue has less than Multiplication factor elements
l Yellow, if the queue has between Multiplication factor and 5 * Multiplication factor elements
l Red, if the queue has more than 5 * Multiplication factor elements.
To bypass this algorithm, enter the exact threshold for each transition.
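The coloring rules above can be expressed as a small shell function. This is a sketch of the documented thresholds only; the FACTOR value stands for the Multiplication factor setting and 10 is an arbitrary example, not a product default.

```shell
#!/bin/sh
# Map a queue's entry count to the display color described above.
FACTOR=10   # example "Multiplication factor"; set from the Settings tab

queue_color() {
  n=$1
  if [ "$n" -eq 0 ]; then echo white                    # empty queue
  elif [ "$n" -lt "$FACTOR" ]; then echo green          # below the factor
  elif [ "$n" -le $((5 * FACTOR)) ]; then echo yellow   # factor .. 5*factor
  else echo red                                         # above 5*factor
  fi
}

queue_color 0    # white
queue_color 7    # green
queue_color 30   # yellow
queue_color 80   # red
```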
Remote Compiler
Use the Remote Compiler to compile, simulate, and register new/changed Message Builder (MB)
components on the server side.
Compilation options
Option Description
Trace – Generate code for source code trace. The compiler generates code that makes it possible for the Message Builder interpreter to keep track of the current source code file name and line number when executing the program. When a message is written to the log and the program has been compiled with this option, the file name and line number are output together with the rest of the message.
Note: A program compiled with this option executes slower than if it was compiled without it.
No Optimization – Do not optimize. By default, the compiler tries to replace constant expressions with constant values, remove non-reachable code, make some peephole optimizations, and so on. This option disables these actions.
Ignore Pragma Statements – Ignore the specified PRAGMA statements.
Check only, no output generated – Perform the compilation without producing any executable (".x4") file.
Output statistics – When the compilation has completed without errors, the number of read characters, lines, and files are written to the standard error output.
Output include and library information – If this option is specified, an informational message is output for each include and library file (see the description of the EDI_LIB environment variable, page 56). Include files that are ignored because of the ONCE modifier to the INCLUDE statement are also indicated as such.
Allow newlines in strings – New lines are, by default, not allowed in Message Builder string constants. For backward compatibility reasons, this option can be used to allow them.
Auto register – Automatically register the program after compiling.
Source files
Option Description
Input file – Specify the source file to be compiled. Up to 3 input files can be specified.
Library file(s) – Specify the libraries to be used for compilation.
Custom options – Provide additional compilation options. With the Browse button, you can obtain an overview of all compiler options.
Output file – Specify the location of the destination file. By default this is the same directory and file name as the source file, with a different extension.
To run a simulation, click the Simulate button. You can set several options:
Option Description
HierchEnv Trace (hierarchical environment) – Generates debugging information in a trace file.
Message File – Enables you to specify a test message.
MBC Configuration (UI) – Select String or Generic property stage (Generic is preferred because it enables you to specify individual values for the various component options).
Work files directory – Browse to the location where you want to store the simulation results.
Start – Runs the simulation.
To use the component, you must first register it. Click Register. The status bar will read "registered
successfully" and will show the path to the component.
SeqNum Utility
The SeqNum Utility enables you to view and manage the expected identities for exchanged EDI
documents with various partners. Use the SeqNum Utility to view and edit ID entries between
multiple partners in a single view.
Each entry in the SeqNum interface represents the next ID that is expected for an exchange with a
particular partner.
In normal circumstances, the management of EDI IDs is done in the B2Bi user interface, in the
configuration pages for EDIFACT and X12 inbound agreements.
You can use this utility to:
• View entries
• Add entries
• Modify entries
• Remove entries
The SeqNum Utility page opens and displays a tab for each of the following sequence types:
l Interchange X12
l Functional Group X12
l Transaction Set X12
l Interchange EDIFACT
l Functional Group EDIFACT
l Transaction Set EDIFACT
Each of these tabs displays a table with the following columns:
l ID number – This is an internal ID.
l Key – The key comprises the content of the fields corresponding to the identifiers for an EDI message Interchange or Functional Group.
l Sequence number – The next number expected in the sequence for this partner and
document format exchange.
l State – Indicates whether the next expected sequence number value has been set correctly
(OK/Undefined).
1. Click any entry to open an editing page for a specific EDI interchange, transaction set, or
functional group.
2. Modify the fields as required.
3. Click OK to save changes, or Cancel to close the editing page without modifying.
The environment variable changes that you make in System Profile Manager are automatically
updated in the file:
/shared/data/system/b2bi.properties
When you make any changes using System Profile Manager, the tool automatically restarts B2Bi so
that changes take effect in your runtime system.
l HME Config
l Logger Config
l Logger Searches
l Tasks Config
l Environment
l Queue Monitoring
Additionally, System Profile Manager has a "hidden" Advanced Properties page that you can use to control debugging and disk space allotment. See System Profile Manager advanced properties page on page 227.
Definitions
Hierarchical Messaging Environment (HME) – A B2Bi integration engine server task for
processing hierarchical messages such as EDIFACT, X12, and most inhouse formats.
Processing Engine (PE) – A B2Bi integration engine task that provides a general execution
environment for more specialized integration engine execution environments, such as the transfer
(adapter and gateway) and Hierarchical Messaging Environment.
An HME task is started by a PE task. The number of PE tasks defines how many instances of the HME
are started. You must define at least one PE for the HME.
Tab display
The HME Config tab displays a list of the HME Tasks running on the B2Bi integration engine server to
which you are currently connected. For each Task, the tab also displays the number of sessions and
number of Processing Engines (PE's) that are attributed to the Task.
Initial settings
During B2Bi Server installation the installer user is prompted to provide a value for the number of
CPUs to use for B2Bi Server operations. The installer uses this value to calculate initial settings for:
l Number of Processing Engines for HME1, HME2, and HME3
l Number of Logger Tasks
The following table shows a typical scaling per number of CPUs.
B2Bi automatically uses the available CPUs, to align with the scaling of the Processing Engines and Logger Tasks in the HMEs. When you assign additional Processing Engines, you ensure that more instances of the B2Bi message processing programs run in parallel.
To decide how to tune Processing Engines and Tasks, analyze the message handling behavior in the Queue Monitor. See Queue Monitor on page 202. For example, the HME
labeled "HME3" typically handles the mapping load. Adding Processing Engines to this HME is a
good strategy if you see a build up of messages in the HME3 queue.
Finding the ideal settings for your environment may require some trials and adjustments.
l Add a logger task on page 217
l Delete a logger task on page 217
Modify an HME
To modify an HME Task:
1. Double click any HME Task in the list of HME Tasks.
System Profile Manager displays the Modify HME Parameter page.
2. Enter new parameter values as required:
l Number of sessions – Enter an integer in the range 1-500.
l Number of PE (Processing Engine) – Enter an integer in the range 1-100. The number of PE tasks defines how many instances of the HME are started.
l MBC cache size – Enter a cache size for the Message Builder Component Cache.
Note: Default for HME 3 is 50 MB. This is the recommended minimum. If you run a large
number of dynamically loaded components you may want to increase the HME 3 cache
size limit.
3. Click OK.
Tab display
The Logger Config tab displays a list of B2Bi Logger tasks that are active on the integration engine to which you are currently connected. The number of default loggers in a B2Bi installation depends directly on the number of CPUs selected during installation.
1. Double click the name of a logger in the list of loggers.
System Profile Manager displays the Modify Logger Parameter page.
2. Select the Scheduling tab and complete the fields:
l Archive every – Use the selection boxes to set the frequency at which B2Bi evaluates
inactive logger files for potential archiving. This can be any time period from once an
hour to once every 10 years. Default = 1 day.
l Start archiving at – Use the selection boxes to set the start time of archiving
(month/day/hour/minute). Default = First minute of the selected frequency period.
l Skip archiving on the following days of the week – If there are any days of the
week for which you do not want to run the archiver, enter the days here. During periods
of high volumes of message transfers you may wish to avoid archiving to enhance
throughput performance.
l Archive inactive logs older than – Specify the selection criteria for an inactive file to be archived, based on the age of the file. Default = 30 days.
l Inactive every – Set the interval at which the active log file is inactivated.
l Starting at – Set the time at which the newly inactivated log is converted to an
archive.
l Print info traces when starting and finishing the archive process – Select this
option to have the integration engine write an informational entry to the trace viewer
when it starts the archive process and another one when it finishes the process.
l Force logger to switch to a new log file when archiving, even if the current log file isn't full
l Write debug information to trace log
3. Select the Script tab:
l Archiver script panel – In this panel you can develop the script used by the archiver.
For tips and examples on how to edit this script, see Manage log file archiver scripts on
page 218.
l Test script – Click to verify that the script runs as you expect. The test produces input files for the script. These test files cannot be used for anything else, which means that you cannot restore the test archive into the system.
When you click Test script, a confirmation message is displayed. Click OK to continue with the test or Cancel to abort.
When you start the script, a dialog shows that it is running. If the script is in error and
hangs, click Cancel to stop it.
l Exclude non-existing files from the archiver file list – During archiving, a list of
all files referenced in this archive is provided to the archiver script. This list includes all
of the message files. If for any reason a message file no longer exists, by default, the
archiver list contains a reference to this file. If your archiver script is not able to deal
with non-existing files, select this option to have the archiver remove the references to
the non-existing files.
Note: References to non-existing files indicate configuration errors, possibly caused by
MBCs or Maps that violate rules for correct copy/deletion of message data.
l Write warnings about non existing files to trace log – Write a warning to the
trace log when the archiver detects non-existing files.
4. Click OK.
1. Right-click anywhere in the Logger Config screen and select New.
System Profile Manager opens the configuration screen for the new logger task.
2. Select your preferred options (described in the previous section) and click OK.
When you add a new logger:
l B2Bi restarts all integration engine tasks
l A new B2Bi Archiver task is created, corresponding to the newly added logger.
l You can view the new logger task and its corresponding archiver tasks in Task Monitor. See Task
Monitor on page 229.
1. Make sure that all user tasks are stopped. You cannot delete a logger task that is actively in use.
2. Right-click the logger task you want to delete in the Logger Config screen. Only logger tasks
that were added after installation can be deleted.
3. Select Delete and then confirm the deletion when prompted.
When you delete a logger task:
l Related inactive log entries are archived.
l Related logger singleton tasks are deleted.
l Related archiver tasks are deleted.
l Indexes related to the logger are deleted.
l The deleted logger is no longer referenced in the HME.
You must never manually delete any of the internal, index, or external files. Manual deletions lead to faults in the archiving. Following a successful archiving process, the logger automatically deletes these files.
For ease of tracing archive files, the name of the archive file should contain the date. In scaled
systems, use a different technique, such as the date followed by a suffix denoting the name of the
logger.
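A minimal sketch of such a naming scheme, assuming a logger named Logger_1 (a hypothetical name) and the CORE_DATA variable used by the sample archiver scripts:

```shell
#!/bin/sh
# Build an archive file name from the date plus a logger-name suffix,
# as suggested for scaled systems.
logger=Logger_1
archive="$CORE_DATA/`date +%Y-%m-%d`_$logger.tar"
echo "$archive"
```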
Error handling
The physical archive file that the archive creates can be very large. If the file is too large for the
location where it should be stored, the script should exit with a non-zero exit status to abort the
archiving. This is typically done automatically since commands like tar exit with a non-zero exit
status in case of failure.
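This behavior can be sketched as a guard in the archiver script. The MIN_FREE_KB threshold and the check_space helper below are illustrative assumptions, not part of the product; a real script might derive the free space from df on the archive destination.

```shell
#!/bin/sh
# Abort the archiving with a non-zero exit status if the destination
# does not have enough free space for the archive.
MIN_FREE_KB=500000   # assumed threshold in kilobytes

check_space() {
  # $1 = free kilobytes on the archive destination; a non-zero return
  # status tells the caller to abort the run.
  [ "$1" -ge "$MIN_FREE_KB" ]
}

# A real script might obtain the free space with:
#   df -k "$CORE_DATA" | awk 'NR==2 {print $4}'
if check_space 120000; then
  echo "enough space, proceed with tar"
else
  echo "abort archiving" >&2
fi
```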
Deactivating outputs
If you do not want to archive the data, configure the archiver script to do nothing. The script then
simply discards the data.
Reference file
The first input parameter to the user-defined script is a file that contains file references to all files
that need to be part of the archive. The referenced files are absolute paths from the root directory of
the file system.
Default script
When you install the System Profile Manager (with the B2Bi tools client) the Archiver script has the
following form:
exit 0;
#java -jar $CORE_SOLUTIONS/java/lib/b2bi.archiver.jar -f $1 -p Logger_1_ -d
Where:
l exit 0; discards data without archiving.
l The commented java line, if un-commented, enables archiving to a zip file.
To enable archiving, comment out the exit statement and un-comment the java line:
#exit 0;
java -jar $CORE_SOLUTIONS/java/lib/b2bi.archiver.jar -f $1 -p Logger_1_ -d
When you enable archiving, you should check the Scheduling tab of the Modify Logger Parameters page, and set the logging schedule to correspond to your system requirements.
source=$1
archive=$CORE_DATA/`date +%Y-%m-%d`.tar
cd / ;tar cfv $archive `cat $source | sed -e s!/!!`
How this script works:
l Extracting files – The tar command extracts files from an archive using the same name as is
used when the files were inserted. This can create a conflict with the run-time system when you
try to restore an archive with the absolute paths, as specified in the input file to the script.
exit 0
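To illustrate the path handling, the following self-contained sketch archives a file the way the sample script does (leading slash stripped, so entries are stored relative to "/") and then restores it into a staging directory, avoiding the conflict described in the Extracting files note. All paths here are throwaway examples, not B2Bi defaults.

```shell
#!/bin/sh
# Archive-and-restore round trip demonstrating relative-path extraction.
work=$(mktemp -d)
echo "payload" > "$work/message.dat"

# The archiver receives a reference file listing absolute paths
listing="$work/filelist"
echo "$work/message.dat" > "$listing"

# Archive step, as in the sample script: strip the leading slash
archive="$work/backup.tar"
( cd / && tar cf "$archive" `cat "$listing" | sed -e 's!^/!!'` )

# Restore step: extract under a staging directory instead of "/"
staging="$work/staging"
mkdir -p "$staging"
( cd "$staging" && tar xf "$archive" )
echo "restored: `cat "$staging$work/message.dat"`"
```

Because the archive members are relative, extracting from the staging directory recreates the original tree under it instead of overwriting the live files.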
@echo off
for /F "tokens=2" %%T in ('Date /T') do (set DATE=%%T)
for /F "delims=/ tokens=1" %%T in ("%DATE%") do (set DAY=%%T)
for /F "delims=/ tokens=2" %%T in ("%DATE%") do (set MONTH=%%T)
for /F "delims=/ tokens=3" %%T in ("%DATE%") do (set YEAR=%%T)
set WINZIP="C:\Program Files\WinZip\wzzip"
set SOURCE=%1
set ARCHIVE=%CORE_DATA%\%YEAR%-%MONTH%-%DAY%.zip
%WINZIP% -a -P @%SOURCE% %ARCHIVE%
How this script works:
l Prerequisites – To run the sample script you need to have WinZip with the WinZip Command
Line Support Add-on installed. The parser for generating the archive file name also requires that
you have Windows command extensions enabled. Refer to WinZip and Windows help files for
description of the different commands.
exit 0
To create a custom logger search:
Message Feed Protocol (MFP) is an Axway proprietary protocol that you can use to submit files
directly into the B2Bi integration engine. The protocol also enables you to view a status report after
files are submitted. For details about MFP, see Message Feed Protocol on page 382.
Tab display
The Tasks Config tab displays:
l Fields for modifying MFP Task connections
l Fields for modifying the Interchange Connector
l A button for opening the XML Metadata Browser configuration page
l Fields for modifying how the integration engine sends events to Sentinel
1. Enter a new value in either of the displayed fields:
l Listen for connections on TCP/IP port – Specify the listening port for the MFP
Task.
l Max number of concurrent connections – Enter an integer value between 1 and
500.
2. Click OK.
1. Enter a new value in either of the displayed fields:
l Max number of concurrent server connections (outbound) – Enter an integer value 1-n (no upper limit).
l Max number of concurrent receive connections (inbound) – Enter an integer
value 1-n (no upper limit).
2. Click OK.
l Send mode - Select one of the two following modes for sending integration engine events to
Sentinel:
o Direct – Select this option if you want to send events directly to Sentinel without first writing them to the overflow file. This increases the rate of transfer of events to Sentinel; however, it can significantly reduce integration engine performance.
o Batch – By default, the integration engine sends events to Sentinel in batch mode, first
sending them to the overflow file. The integration engine then consumes these events from
the overflow file and sends them to Sentinel as processing resources allow. This consume-
and-send process can take several minutes, but reserves maximum integration engine
performance for message-processing tasks.
l Overflow file safety limit – Enter a percentage as a threshold for when the integration
engine will begin sending events to Sentinel from the overflow file. The default value is 80% of
the overflow buffer size. If this threshold percentage is reached, the integration engine begins
sending events to Sentinel, potentially reducing normal message-processing resources.
Environment tab
Use the Environment tab to control Message Log behavior, to set the message-processing
environment variables and to perform other miscellaneous tasks.
Manage logging
To manage Message Log logging behavior, use the following fields:
Miscellaneous settings
Use the Misc tab to set the following environmental variables:
Sequencing
Use the Sequencing tab to set the environment variable that controls how the integration engine handles out-of-sequence messages. Select one of the options in the Out of sequence action drop-down list:
The default behavior is to not generate any warnings or errors.
In this tab you can modify the following fields:
Relationship to Sentinel
If integration engine queue monitoring is activated, and B2Bi is also configured for Sentinel monitoring, the integration engine queue warnings and events are also sent to Sentinel.
Field descriptions
l Debug activation – select the servers to run in debug mode:
o Kernel server debug
o Kernel server client debug
o Kernel server system debug
o Deployment server debug
o MapStage server debug
o System profile manager debug
Task Monitor
Use Task Monitor to view the processes that are running on integration engines on all installed
nodes. The task definitions for these processes are stored in the run-time dataset on each node.
The top frame (B2Bi Node frame) displays an alphabetical list of all B2Bi nodes that are
currently running.
For each node, you can see:
o Name – Name of the node.
o Role – Role of the node in the cluster architecture. The role of the node can be System
and/or Primary Node.
o The status of the System Tasks and User Tasks. Tasks can have one of the following statuses:
o Stopped – The task is stopped.
o Stopping – The task is stopping and shutting down.
o Waiting stop – The task is ready to stop.
o Starting up – The task is starting and initializing.
o Running – The task is running.
l Tasks frame
The bottom frame (Tasks frame) displays a tab for each node listed in the top frame. Select a
tab to see a list of all tasks running on that node.
For each task, the frame has a column that displays:
o Task – Name of the task on the node.
o Process – Name of the process that supports the task.
o PID – The ID number the process has in the operating system.
o Status – The status of the process. Processes can have one of the following statuses:
o Standby – The process is in failover condition and is expected to start up.
o Stopped – The process has stopped.
o Stopping – The process is stopping and shutting down. The integration engine starter server
has called the STARTER_STOP statement in the stop program or has sent at least one signal
to the process and is waiting for it to terminate.
o Waiting stop – The process is waiting to be stopped. Other processes have to be stopped
before the process itself can be stopped.
o Waiting start – The process is waiting to be started. Other processes have to be started
before the process itself can be started.
o Starting – The process is starting and initializing. The integration engine starter server is
waiting for the process to create the file specified by the create-file parameter to the
STARTER.ProcessAdd statement, or waiting for a server to respond.
o Started – The process has started and is ready to execute processing.
o Terminated – This status occurs for a short while when a process has unexpectedly
terminated.
o Waiting to be run – The process is waiting to be executed.
o Running – The process has started and is executing.
o Finished running – The process has completed.
Relationship to Sentinel
If B2Bi is configured for Sentinel monitoring, changes in the status of a task in the integration
engine are also sent to Sentinel.
l All tasks – Displays all tasks of an integration engine in each integration engine tab.
l System – Displays only system tasks in each integration engine tab. The System processes
correspond to the tasks started when you start the integration engine on Windows by starting
the Integration Engine service in Services, or on UNIX by executing core_servers start.
You cannot start or stop these processes from within the integration engine, but you can view
details about them.
l User – (Default view) Displays only user tasks in each integration engine tab.
You cannot modify these properties, but you may need to display them for technical support
purposes.
This detail window has three tabs:
l General tab - Displays the following fields:
o Name – Displays the name of the task.
o Command – Displays the location and name of the command that drives the task.
l Start/Stop Parameters tab – displays the following fields:
o Start attempts
o Start delay (seconds)
o Start timeout (seconds)
o Minimum uptime (seconds)
o Create file
o Stop timeout (seconds)
o Stop delay (seconds)
o Stop program
l Other tab – displays the following fields:
o Dependencies
o Stop signals
The processes correspond to the tasks in the integration engine run-time dataset. One process exists
for each task on the Tasks tab.
Caution: Although you can use Task Monitor to start and stop processes from within the integration
engine, this is not recommended. We recommend you stop the B2Bi trading engine instead.
Stopping the B2Bi trading engine triggers the stop of the integration engine processes.
Stop a process
To manually stop a started process:
1. Right-click the name of a user task in the list of tasks for an integration engine
2. On the File menu, click Stop.
The integration engine stops the selected process. Its status changes to "Stopped" and the
stopped icon appears. The integration engine also stops any processes that are dependent on
the selected process.
Start a process
To manually start a stopped process:
1. Right-click the name of a user task in the list of tasks for an integration engine.
2. On the File menu, click Start.
The integration engine starts the selected process. After the process starts, its status changes
to "Started" and the started icon appears.
To create a variable dump file:
Trace Viewer
Use Trace Viewer to view system related trace messages generated as the integration engine runs.
A trace message is a non-structured message. It has no predefined form, and it is up to the program
generating the trace to freely create the message. Trace messages are typically generated in two
cases:
l If an error occurs in a program it might generate a trace message detailing the error.
l If a program has been started in debug mode it might write informative trace messages about its
actions. In this case, trace messages are continuously generated. To view them, you must define
filters to restrict the amount of information displayed.
Viewing trace messages is helpful to system administrators who are troubleshooting problems.
Trace Viewer is for monitoring system-related events. To view message-processing traces generated
on the integration engine, see Message Log on page 179.
Icon Type
Debug
Information
Warning
Error
Fatal
Audit
Resend
Custom
Trace Viewer displays all trace log entries that correspond to the search criteria.
1. On the File menu, click New.
Alternatively, you can click the New search... icon on the toolbar.
2. Enter a name for the filtered search.
3. Specify the filtering criteria for your search:
l Relative range – To specify a relative start period in the past as a point of departure for
search results, select the Relative range option, then enter a number of
days/hours/minutes/seconds as the beginning relative time for filtering.
l Severity – If you do not want to limit the trace display to the severity of warning
messages, choose Select All. To filter based on the severity of warning messages, and/or to
include audit event entries, select Specify Level, then select the check boxes for the items
you want to display:
o DEBUG
o INFO
o WARNING
o ERROR
o FATAL
o AUDIT
o CUSTOM
l Custom severity type – If you selected Specify Level, you can also enter a custom severity
type in the text box.
l Source type filter – Select a filter trace type:
o Select all – No source filter applied
o B2Bi – B2Bi traces only
o Specify – Select this filter option, then enter a string to use as a filter on traces.
l Program – Specify the program that generates the trace messages. For example, if you
enter cfgserver, only entries that refer to the cfgserver program are listed. If you enter
r4edi*, all entries referring to programs that begin with the string r4edi are listed. If you
enter *tool_toolbox*, all entries that refer to program names that contain the string
tool_toolbox are listed.
l Free text – Specify text that should be contained in the trace message. For example, if
you enter admin, only messages containing the text “admin” will be listed. If you enter
admin*, all messages that begin with the text “admin” will be listed. If you enter
*admin*, all messages that contain the text “admin” anywhere in the message will be
listed.
l Continuously collect trace information – Select this option to continuously collect and
update display information.
4. Click Apply to store the selections.
5. Click OK.
The selected trace log is displayed in Trace Viewer. It displays the date and time of each
occurrence, the node generating the occurrence, the program generating the occurrence, and
the message.
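The wildcard matching described for the Program and Free text filters behaves like shell-style globbing; as a rough sketch (assuming the match rules are exactly as documented, with a bare string matching exactly and `*` as a wildcard), Python's fnmatch reproduces it:

```python
from fnmatch import fnmatchcase

def trace_filter_matches(value, pattern):
    """Match a program name or message against a Trace Viewer-style pattern:
    'cfgserver' matches exactly, 'r4edi*' matches a prefix, and
    '*tool_toolbox*' matches the string anywhere in the value."""
    return fnmatchcase(value, pattern)

assert trace_filter_matches("cfgserver", "cfgserver")          # exact match
assert trace_filter_matches("r4edi_map", "r4edi*")             # prefix match
assert trace_filter_matches("x_tool_toolbox_y", "*tool_toolbox*")  # substring match
assert not trace_filter_matches("cfgclient", "r4edi*")
```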
3. Modify the search filters as required.
4. Click Apply.
User Manager
Use the User Manager to specify and control user access to the B2Bi tools on the local node. You
can create users and groups of users and assign different rights to them to control the tools and
features they can access.
Note: This tool is only available if Passport AM is not used as the identity provider.
Access rights are specified per tool, and access to a tool can be denied entirely. If access is
denied, the icon for the tool does not appear, so the tool cannot be run.
There are three access levels to each tool, and the access rights associated with the three levels vary
depending on the tool:
Access Description
Level
Admin Administration access rights are allowed. This access is typically unrestricted
and all of the functionality of the tool is available.
User User access rights are allowed. For some tools, this includes read-only access
to the information managed with the tool.
Other Access rights other than those for Admin or User are allowed. For some tools,
Other access allows partial access to the tool; some (but not all) of the
functionality is available.
l B2Bi Monitoring (Support Level 1):
o Trace Viewer
o Message Log – Read only (cannot edit or resend)
o EDI Tracker – Read only (cannot edit or resend)
o File Viewer
l B2Bi Operators (Support Level 2):
o Trace Viewer
o Message Log – Can edit and resend
o EDI Tracker – Can edit and resend
o File Viewer
o Performance Monitor
o Queue Monitor
o Logger Utility
o Alert Manager
o Task Monitor
l B2Bi Developers – Same as B2Bi Monitoring, plus:
o Remote Compiler
o Character Sets Manager
o Datamapper (link to start the Datamapper client on the local PC)
o Datamapper Builder
o Metadata Browser
l Administrators – All of the above, plus User Manager
Note: You cannot modify or delete the Administrators group.
l Users:
o Alert Manager (restricted access)
o Datamapper Builder (restricted access)
o Message Log
o Metadata Browser (restricted access)
o Trace Viewer
Delete a group
Before you delete a group, keep the following in mind:
l You cannot delete the Administrators group.
l You cannot delete a group that is associated with one or more logins. First modify each login to
remove the group association, then delete the group.
A change to a login (or to a group) takes effect immediately in the following sense:
l Any users who subsequently log in using the login have the modified access rights.
l Any users logged in using the login at the time of the change continue to have the original
access rights until they log off.
Note: The admin login is predefined. It cannot be deleted and its associated access rights cannot be
changed. However, you can change the password, which by default is "admin". Anyone logging in
to the B2Bi Tools Client with this login has unrestricted access to all of the tools and their functions.
Delete a login
Before you delete a login, keep the following in mind:
l You cannot delete the admin login.
l You cannot delete a login if a user is currently signed in using that login.
To delete a login:
1. Select the login that you want to delete.
2. On the File menu, click Delete.
3. Click OK to confirm that you want to delete the login.
If you use Windows, the names of these tools have an extension of .cmd (for example,
as1Tool.cmd). If you use UNIX, whether or not the tools have an extension depends on the
specific brand of UNIX or Linux OS (for example, as1Tool or as1Tool.sh). This does not apply to
scripts with an extension of .sql.
Depending on whether you use Windows or UNIX, some of these tools may not have been installed
with the application. If a tool changes a value in the database, restart the server for the change to
take effect.
Tool Description
as1Tool Packages, unpackages and dumps EDIINT messages. This
tool is not for use by end users.
as2Tool Packages, unpackages and dumps EDIINT messages. This
tool is not for use by end users.
as3Tool Packages, unpackages and dumps EDIINT messages. This
tool is not for use by end users.
asxTool Packages, unpackages and dumps EDIINT messages. This
tool is not for use by end users.
certScan Scans public-key certificates and reports information,
warnings and errors. See Analyze certificates for errors.
certStats Collects statistics about the certificates in the database.
crlPurgeHttpsClient Removes outdated CRLs from the CRL table and file system.
See Purge old CRLs.
dataMover Migrates data from one database to another. This tool is to
be used only by some Activator users under the supervision
of technical support.
Tool Description
deallocateClob De-allocates Character Large Object (CLOB) data types in
Oracle databases. Recommended for use only by database
administrators and experienced users of Oracle.
Before version 5.6, Interchange misallocated CLOB for
some database tables. This could result in a database
eventually running out of space. We recommend using this
tool if you use a version before 5.6 or are upgrading. The
tool performs a limited unit of work at each invocation. As
the amount of misallocated storage can vary from one
installation to another, you can run the tool repeatedly
until the tool generates messages indicating there is no
more unnecessary CLOB storage to remove.
Run the tool without parameters from a command line to
display instructions for use.
This tool is not necessary if you are using a current version
of Interchange.
deletePedigrees Deletes ePedigree records in the database. This tool is
intended for use in a development or test environment
only. Run the tool without parameters to display
instructions. Only users whose licenses support ePedigree
should use this tool.
derby_IJ Enables SQL queries of a Derby database. This tool is not
for use by end users.
diagnose When performing troubleshooting, this tool can be used to
compress and send log files to technical support. Technical
support often requests log files when helping users. Run
the tool from a command line and follow the menu
prompts.
For information about how to submit log files to technical
support through the user interface, see Send log files to
technical support.
diff The diff tool is similar to the Unix diff tool and the
Windows comp tool. It improves upon these tools by
reporting the offsets of differences even in binary files.
Also, it is platform independent, and it sets exit codes to
allow shell scripts or batch files to make use of the tool.
Very large files are processed using Java nio buffers for
efficiency. The tool provides help if you invoke it with -?.
dirTester Tests the Java temp or other specified directory by writing
an unbuffered and a buffered temp file. This tool is for use
only upon advice of technical support.
ebxmlCpaSchematronValidator Performs tests on the content of the ebXML CPA. The tool
makes sure matching elements in each PartyInfo
element of the ebXML CPA are consistent.
ebxmlCpaSecurityGuard Used for digitally signing CPAs. Its various functions all
relate to signing and verifying digital signatures of a CPA.
ebxmlCpaValidator Performs a schema validation on a CPA.
exportProfile Exports community and partner profiles to XML files.
Community profiles are exported as partner profiles.
Partner profiles also can be exported as partner profiles,
either singly or in a batch. Run exportProfile without
parameters to display directions for using the tool. This
tool is only for use with B2Bi 5.4 or later.
externalConfigBackupRestore.cmd For the use of this tool, see Back up and restore a custom
configuration on page 377.
extractSpecialTarFiles The tar command has a known limitation for archive entries
bigger than 8 GB in size.
Use this tool to unpack tar files created by B2Bi when they
have a size larger than 8GB.
Use this tool from the command line with the following
syntax:
extractSpecialTarFiles <source_file.tar.gz>
<destination_directory>
fillInMessageDirection For messages traded before upgrading to version 5.4 or
later of B2Bi, this tool adds metadata to the database
regarding the direction of traded messages (inbound,
outbound). Message direction displays in the search results
of Message Tracker.
Use of this tool is optional if you have used versions earlier
than 5.4. The tool is not needed if you did not use a
version earlier than 5.4.
Run this tool only when the server is not running. If the
database contains thousands of records, it may take hours
for the tool to run. If you start the tool and end the process
before it is completed, you can re-start the tool later and it
picks up where it left off.
Tool Description
ftpTester Verifies interoperability of the trading engine with FTP
servers.
httpTester Tests whether an HTTP client can connect to the HTTP
server. This tool is for use only upon advice of technical
support.
jmsTester Checks for proper configuration of JMS queues.
keyInfoWriter Extracts KeyInfo element information from a certificate for
use in a CPA for ebXML trading. See Extract KeyInfo
element for a CPA.
listTimeZones Lists all available time zones for the JRE in use on a
computer.
logViewer Interleaves multiple log files and sorts log entries
chronologically. It also can filter log categories, log levels
and threads. This tool is not for use by end users.
manageTrading Starts/stops trading engines and pauses trading engine
message consumption. This tool supports the following
options:
l manageTrading help
l manageTrading processing start [transient] – Starts all
trading engines on the local server. When "transient"
option is specified, the nodesmembership table does not
update the flag that indicates if a node is automatically
started or not at server startup.
l manageTrading processing stop [transient] – Stops all
trading engines on the local server. Same comment on
the "transient" option as for start.
l manageTrading consumption pause – Engages the
Pause Consumption system throttler across all trading
engines in the cluster.
l manageTrading consumption resume – Stops the Pause
Consumption system throttler across all trading engines
in the cluster.
mapProxyDeployer Enables you to list, deploy, and remove containers from
the deployment server Map Proxy.
For details of how to use this tool, see Command line
interface for the deployment server on page 79.
messagePurgeTool Immediately deletes all database records of traded
messages and all files in the backup directory. See Purge
trading engine manually on page 459.
mmdGenerator Used to generate all possible MMDs or a specific MMD for
an ebXML CPA.
modifyUIPorts Resets the HTTP user interface port in the event of a port
conflict that makes the UI inaccessible. This tool is for use
only upon advice of technical support. Run the tool
without parameters to display instructions for use. In
addition, read the port resetting instructions in the
startup.xml file at <install directory>\conf.
netInfo Finds network interfaces for a computer.
oracle_create_table Script for implementing Oracle custom tablespaces. See
Spaces.sql the Oracle custom tablespaces option of the Interchange
Installation Guide.
partyInfo Lists the names and details about the community, partner
and WebTrader profiles configured in B2Bi and the totals
for each profile type. This tool provides a way to obtain
information about profiles outside of the user interface.
passportIntegrationConfig Enables you to set the B2Bi / PassPort connection
parameters from a command line.
rejectInprocessMessages Sets B2Bi messages that are stuck in the in-process state to
a status of failed. This tool is for use only upon advice of
technical support, and only when all TE and CN nodes are
stopped. Run the tool without parameters to display a list
of valid parameters.
sftpTester Verifies the operation of the SFTP client in the trading
engine and a partner’s SFTP server.
sysInfo Displays system information such as the operating system,
memory statistics, JVM class path and JVM library path.
This information also writes to <install
directory>\logs\sysInfo.log.
systemBackupRestore Command line tool for backup and restore functions. See
System configuration backup/restore using the command
line on page 357.
Tool Description
treeScan Scans the collaboration and action trees for corruption.
uiSslConfig Backup tool for editing the ...conf\startup.xml file in
case the setup method described in Configure
UI connection does not work properly.
upgradeCompare Searches for and lists changes made to an installation tree
after a snapshot was taken with the upgradeList tool.
This tool is used when upgrading.
upgradeDiff Recursively compares the sizes of system files in the old
and new installation directory trees. This tool is used when
upgrading.
upgradeList Generates a snapshot of the entire application installation
directory tree. This tool is used when upgrading.
versionInfo Lists the version and build number of the installed
application. Run the tool without parameters to generate
the list.
l B2Bi I/O management on page 276
l Java memory usage on page 279
l Manage sockets and Oracle processes on page 286
l Network tuning on page 285
l Troubleshoot unexpected trading engine restarts on page 288
l Troubleshoot startup failure with large numbers of trading pickups on page 294
l Troubleshoot integration engine selects wrong IP address from backup network interface on
page 270
[integration_engine_install_directory]\solutions\4edi\pgm\b2bi_
diskaccesstime.x4
This tool returns two values: synchronized and unsynchronized.
Command recommendations:
l Run the command when there is no traffic.
l Run the command several times to obtain average values.
Example results
> r4edi diskaccess.x4
Synchronized Unsynchronized
B2BI_SHARE_DATA 1.220 ms 0.085 ms
CORE_ROOT 1.395 ms 0.085 ms
CORE_DATA 1.385 ms 0.050 ms
In the B2Bi server, set the limit for caching message content in memory (default = 4096 KB):
l When the message size is < limit, the message is cached in memory.
l When the message size is > limit, the message is written to a physical file in the integration
engine filer directory.
Example: Suppose you typically process a high load (more than 5 msg/sec) and messages are
around 10 KB in size.
1. Make sure that there is enough memory available on the system to handle the load.
2. Increase the limit in order to use more memory and limit the I/O usage.
To increase the limit, run the B2Bi installer in Configure mode. Select to configure the Server,
and reset the value Message size limit for caching message content in memory to a
higher value.
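The size-based caching decision above can be sketched as follows (illustrative only; the 4096 KB default comes from the text, the function name is invented):

```python
CACHE_LIMIT_KB = 4096  # default limit for caching message content in memory

def cached_in_memory(message_size_kb, limit_kb=CACHE_LIMIT_KB):
    """Messages below the limit are cached in memory; larger messages
    are written to a physical file in the integration engine filer directory."""
    return message_size_kb < limit_kb

assert cached_in_memory(10)        # a typical 10 KB message stays in memory
assert not cached_in_memory(5000)  # a 5000 KB message goes to the filer directory
```

Raising the limit trades memory for reduced I/O, which is why the text recommends first confirming the system has memory to spare.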
Example : Suppose you want to handle high loads of messages (more than 5 msg/sec) and have
configured B2Bi so that message entry creates a large number of entries in the Message Log.
Use System Profile Manager to control the logging behavior.
On the System Profile Manager > Environment tab > Message logging section, set the Message
logging level value to no logging.
Attention: Execute the following actions only with the assistance of an Axway expert or support.
1. Check Message Log for entries older than 7 days.
2. Check if there are fxxx files older than 7 days located in $B2BI_SHARED/data/filer .
Note: You can use filer_list.x4 and filerutil.x4 to list and remove filer files. See
procedures below.
3. Verify that there are no “old” active entries (old means that the message processing stopped for
different reasons) which can slow down the logger. If old entries exist, analyze the entries to
understand why the processing is not complete, and reprocess or inactivate active entries
accordingly. To inactivate entries, do one of the following:
l Use LOGGER_OPTIONS –R.
l Manually inactivate the entries in Message Log.
Using Filer_list
1. Run Integrator profile.
2. Run r4edi ~/filer_list.x4 $B2BI_SHARED_DATA/filer > $CORE_ROOT/filer_data
3. Check the results in filer_data.
Using Filerutil
1. To list all files older than <_no_of_days_>:
r4edi filerutil.x4 -a $CORE_LOCAL/config/passwd -r $B2BI_SHARED_DATA/data/filer/singleton/ -t _no_of_days_
2. To delete the files older than <_no_of_days_>:
r4edi filerutil.x4 -a $CORE_LOCAL/config/passwd -r $B2BI_SHARED_DATA/data/filer/singleton/ -t _no_of_days_ -D
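The age test that filerutil performs can be approximated in a few lines (a sketch only, not a replacement for filerutil.x4, which also handles authentication and the filer's internal structure; the file names in the demo are hypothetical):

```python
import os
import tempfile
import time

def files_older_than(root, days):
    """List files under root whose modification time is older than `days` days."""
    cutoff = time.time() - days * 86400
    old = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                old.append(path)
    return old

# Demo against a throwaway directory: one file backdated 10 days, one fresh.
root = tempfile.mkdtemp()
stale = os.path.join(root, "f0000001")
fresh = os.path.join(root, "f0000002")
for p in (stale, fresh):
    open(p, "w").close()
ten_days_ago = time.time() - 10 * 86400
os.utime(stale, (ten_days_ago, ten_days_ago))
assert stale in files_older_than(root, 7)
assert fresh not in files_older_than(root, 7)
```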
l "Failed to start filer directory"
l "Filer insert reply: Returned filer file xxx is not empty"
l "UTIL_FILESTATUS: failed to fetch status for file xxx"
l "Stale NFS file handle"
l "Filer insert reply xxx doesn't exist"
Attention: Execute the following actions only with the assistance of an Axway expert or support.
l Test IO coherence using http://wiki.samba.org/index.php/Ping_pong.
Note: This IO coherence test requires working fs locks.
l “no memory left”
l “java heap space”
To manage this type of error:
l Check the amount of memory that is available when you perform a load test. To do this you can
use a tool such as top, topas, free, nmon or Task Manager, depending on the OS. Make sure the
system has enough memory to handle the load. The minimum memory required is 4GB.
Depending on your configuration and processing load, this number varies. In many cases, 10 to
16 GB of memory is required.
l Check if Xmx values are high enough in the integration engine and in the trading engine.
o Integration engine – Set the Xmx value in the file $CORE_
LOCAL/config/java/jvm.cfg
o Trading engine – Open the file <trading_engine_install_
directory>/conf/jvmArguments.xml in a text editor, and modify the Xmx value for the
related node. For additional information see Tune the trading engine on page 280.
The place to change the actual Xms and Xmx values used for the Control Node (CN) and Trading
Engine (TE) JVM nodes is the Interchange/conf/jvmArguments.xml file. The best practice is
to keep Xms and Xmx equal within a given JVM; the values can differ between JVMs. For
example, the CN can have Xms512m and Xmx512m while the TE has Xms1024m and Xmx1024m.
Setting the optimal value for the JVM is often a trial and error process, based on transaction size,
transaction volumes, OS, hardware, etc. On most low-volume systems, no tuning is required. High-
volume systems can typically benefit from tuning.
The effective maximum value you can use for this setting is 2048 MB. Note that for values above
1024 MB, there is a risk that Java consumes more resources for memory management than it
gains from the additional memory allotment.
In any case, the values of the Executive, CN and TE Java heaps combined must not exceed the
physical memory of the machine. Additionally, be sure to reserve some free RAM for the OS to use
(a minimum of half a gigabyte is recommended).
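A quick budget check for the combined heaps can be scripted (a sketch only; the half-gigabyte OS reserve comes from the text above, and the example heap sizes are illustrative):

```python
def heaps_fit(physical_mb, heap_mbs, os_reserve_mb=512):
    """Return True if the combined maximum heaps (Executive, CN, TE) leave
    at least os_reserve_mb of RAM free for the operating system."""
    return sum(heap_mbs) + os_reserve_mb <= physical_mb

# Example: 4096 MB of RAM with Executive=512, CN=512, TE=1024 fits;
# pushing the TE heap to 3072 MB would starve the OS.
assert heaps_fit(4096, [512, 512, 1024])
assert not heaps_fit(4096, [512, 512, 3072])
```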
To change the default heap size settings:
1. Go to <Interchange_install_directory>/conf/ and open the jvmarguments.xml file
in an editor.
2. Locate the following lines in the file (TE example):
<NodeType type="TE"
class="com.axway.clusterold.startup.Boot">
<Option>${axway.haboob.heap.initial}</Option>
<Option>${axway.haboob.heap.maximum}</Option>
3. Replace the initial and maximum heap variables with values as shown in the following
examples:
<NodeType type="TE"
class="com.axway.clusterold.startup.Boot">
<Option>-Xms1024m</Option>
<Option>-Xmx1024m</Option>
4. Save the file.
tuning.properties file
You can manage a specific set of trading engine properties from the tuning.properties file,
located in [Interchange_install_directory]/conf. Trading engine properties not supported
in this file can be set in the System properties page, see System properties page on page 258.
The properties in this file are applied only to the node where the tuning.properties file is
located. You must set the property for each node of a cluster by modifying the file for each node.
By default, tuning.properties is empty, which indicates that all of its entries are operating at
their default values.
You can set the following tuning parameters by adding them to tuning.properties. As with
setting the JVM value, it is impossible to suggest optimum values for these settings. Any changes in
this file only take effect after stopping and restarting B2Bi.
The following is an example of a line that has been added to this file:
systemThrottle.pausePickups=true
The following table lists parameters that you can add to the tuning properties file.
l messagePurge.onceADayHour (default 0) – If messagePurge.onceADay is set to true, this
field controls the hour of the day to run the purge. Values are from 0 to 23 only.
l messagePurge.Tasks (default 5) – The number of tasks that will be scheduled to perform
message purge.
l startupSynchronizer.multicastAddress (default 232.92.56.63) – The multicast address used
to synchronize the startup of Control nodes.
l systemThrottle.minimumBytesFree (default 0) – The system throttle engages if available
heap falls below this number. Default value is zero, which indicates that the memory
should not be checked.
l systemThrottle.pausePickups (default true)
l taskScheduler.maxThreads (default 50) – This value should always be 1/3 of the
systemThrottle.maximumTaskQueueSize value (as illustrated by the default values
displayed in this table).
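Because tuning.properties is a plain key=value file, its entries can be sanity-checked with a short script. This sketch parses a sample file and verifies the documented 1/3 ratio between taskScheduler.maxThreads and systemThrottle.maximumTaskQueueSize (the property names come from the table; the sample value 150 for the queue size is an assumption chosen to satisfy the ratio, not a documented default):

```python
def parse_properties(text):
    """Parse simple key=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

sample = """
# local node overrides
systemThrottle.pausePickups=true
taskScheduler.maxThreads=50
systemThrottle.maximumTaskQueueSize=150
"""
props = parse_properties(sample)
assert props["systemThrottle.pausePickups"] == "true"
# maxThreads should be one third of maximumTaskQueueSize
assert int(props["taskScheduler.maxThreads"]) * 3 == int(props["systemThrottle.maximumTaskQueueSize"])
```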
Caution: Incorrectly modifying values on this page can severely degrade product behavior. Do not
modify values on this page without explicit guidance from Axway support.
To access the "hidden" default System Properties configuration page, point your browser to
http://<hostname>:6080/ui/core/SystemProperties. Trading engine properties not
displayed on this page can be set in the tuning.properties file; see tuning.properties file on
page 255.
In some cases, changes in this page take effect immediately, while in other cases a product stop and
restart are required. If you fail to see an immediate change in B2Bi behavior, try a stop/restart.
Network tuning
l Shared disk
l B2Bi database
l Integration engine database
An acceptable maximum response time to the shared disk is 15 seconds.
The response time to the remote database must be less than 5 seconds.
If the database response is over five seconds, the failover functionality shuts down the node. The
following error is generated:
A good first step is to confirm that the database failover is due to slow response time and network
issues. For normal operations (no forced stop) the B2BI_ISALIVE_TIMER setting of the integration
engine environment.dat file defines the maximum time the kernel server waits before shutting
down the node.
To resolve network issues:
l Make sure that the network interface is at least 1 Gbps.
l Make sure no external applications are heavily loading the machine where the database resides,
even during a short period of time.
To resolve this type of error:
Windows
On Windows, you can monitor the number of open files, using a tool such as Process Explorer in
"handle mode".
To modify socket availability you can:
l Modify Windows registry keys –
o Decrease TCPTimeWait (for example: from 240 to 60)
o Increase MaxUserPort to 20000
l Open the file $CORE_LOCAL/config/environment.dat in a text editor, and increase
BSOCKET_SOCKET_COUNT. The default value is 64. You may wish to increase the value to, for
example, 1024.
UNIX/Linux
On UNIX, to check the number of files open, use the command lsof | wc -l.
On Linux, the file /proc/sys/fs/file-nr provides the number of files allocated / available /
maximum.
For example, the file might indicate: 20055 (allocated) 0 (available) 6553600 (maximum).
To extend the maximum number of allowed open files, modify the value in
/proc/sys/fs/file-max.
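The three counters in /proc/sys/fs/file-nr can be read with a short script (Linux-only in practice; shown here parsing the example values from the text above rather than the live file):

```python
def parse_file_nr(text):
    """Parse the three whitespace-separated counters in /proc/sys/fs/file-nr:
    allocated handles, allocated-but-unused handles, and the maximum."""
    allocated, available, maximum = (int(x) for x in text.split())
    return {"allocated": allocated, "available": available, "maximum": maximum}

# The example values from the text above:
counters = parse_file_nr("20055 0 6553600")
assert counters == {"allocated": 20055, "available": 0, "maximum": 6553600}
```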
Oracle processes
When you reach the maximum number of processes connected to Oracle, you receive an error of
the type “tns no appropriate service handle found”.
To resolve this error, monitor the number of connected processes, and change the number of
processes in Oracle accordingly. A typical value is 1500 processes.
FATAL :20150813:05.47.17.72:procengine(procengine(hierchmsgenv)):memory allocation error
WARNING:20130813:05.47.17.74:procengine:A fatal error has ocurred in the interpreter, a dump of active coroutines has been written to the file procengine_fatal.dmp.
WARNING:20130813:05.47.18.63:procengine:A dump of interpreter variables has been written to the file procengine_fatal_vars.dmp.
FATAL :20151028:16.48.37.55:procengine(procengine(hierchmsgenv)):memory allocation error file "l:\\/src/4edi/interpreter/interp.c", line 30148
WARNING:20141028:16.48.43.16:procengine:A fatal error has ocurred in the interpreter, a dump of active coroutines has been written to the file procengine_fatal.dmp.
WARNING:20141028:16.49.33.53:procengine:A dump of interpreter variables has been written to the file procengine_fatal_vars.dmp
ERROR :20151210:04.09.30.24:procengine(procengine(transgateway)):../../transferlogger.s4:448:failed to write entry to logger server: the log entry is too big
These types of performance degradation and errors are often due to the high number of
simultaneous log commits done by the Hierarchical Messaging Environment task of the integration
engine.
To lower the memory requirement of the Hierarchical Messaging Environment task, and allow a
larger number of output messages to be created, execute the following tasks:
1. Start the B2Bi integration engine tools client.
See B2Bi integration engine management tools on page 87.
2. From the Dataset menu of the main tool page, select Force a rebuild of run-time data.
4. Restart the B2Bi integration engine.
Note: If the performance or errors continue to occur with this setting, you may need to
increase the value, depending on the number and size of the output messages.
l Operating system issues
l Product issues
l Configuration issues
Collect logs
The first step in troubleshooting is to collect trading engine logs, B2Bi logs, and integration engine
traces, and then analyze this data to identify the module that threw the first error.
Debug mode
If the logs and traces do not provide enough information, the next step is to run the servers in
debug mode.
4. Save the file.
5. Restart B2Bi.
3. Add CORE_HMEDATAIO_DEBUG=yes.
4. Save the file.
5. Restart the integration engine.
1. Open a Windows command console.
2. Run [integration_engine_install_directory]\profile.bat.
UNIX
1. Go to [integration_engine_install_directory]/bin
2. Run CORE_SETUP.
3. Launch the command:
r4edi task_setdebug.x4 -A password [-m host] [-c config server port] [-t name] [-d level]
Where:
–d = 2 (debug)
–t = The name of the task. For a list of tasks, look in the B2Bi user interface, System
Management page. Click Integrator tasks on an integration engine node. (Example task
name: “B2Bi HME 4 Task”)
Set debug on the starter server (when B2Bi server cannot start)
1. Open a Windows command console.
2. Run [integration_engine_install_directory]\profile.bat.
UNIX:
1. Go to [integration_engine_install_directory]/bin
2. Run CORE_SETUP.
4. Check the integration engine trace: Look for debug entries coming from starter.x4.
Windows
Use the Windows task manager to kill the cn (control node) and the te (trading engine) processes.
View the output files:
l <trading_engine_install_directory>/logs/[machine_name]_cn_console.log
l <trading_engine_install_directory>/logs/[machine_name]_b2b_console.log
UNIX
Use the following commands to kill the control node:
l ps -ef | grep cn
l kill -3 <pid>
View the output file: <trading_engine_install_directory>/logs/[machine_name]_cn_console.log
Use the following commands to kill the trading engine node:
l ps -ef | grep te
l kill -3 <pid>
View the output file: <trading_engine_install_directory>/logs/[machine_name]_b2b_console.log
Most startup scripts start 8 instances of nfsd (RPCNFSDCOUNT=8). You can increase this value to 30
threads (RPCNFSDCOUNT=30). Alternatively, you can change values when starting nfsd, using the
number of instances as a command line option.
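For example, on Red Hat-style distributions the count is typically read from /etc/sysconfig/nfs (the file location is an assumption; check your distribution):

```shell
# Raise the nfsd instance count from the default of 8:
RPCNFSDCOUNT=30

# Alternatively, pass the count on the command line when starting the daemon:
#   rpc.nfsd 30
```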
Procedure:
1. On each node, go to the /etc directory.
2. In the fstab file, set lookupcache=none.
For additional information about NFS fstab file formats and options, see
http://unixhelp.ed.ac.uk/CGI/man-cgi?nfs+5 .
To test for the existence of the NFS cache:
1. Go to the same folder on both nodes.
2. On node 1 execute: stat filename
If a filename does not exist, an error is returned.
3. On node 2 execute: touch filename
4. On node 1 execute again: stat filename
You may still obtain errors for a certain period of time.
5. Repeat the stat filename command until the file appears.
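The retry in steps 4-5 can be sketched as a small helper (the function name and the 30-attempt bound are assumptions, not part of the product):

```shell
# Poll for a file once per second, up to 30 attempts, to allow for
# NFS attribute-cache delay before the file becomes visible.
wait_for_file() {
    for i in $(seq 1 30); do
        if stat "$1" >/dev/null 2>&1; then
            echo "visible after attempt $i"
            return 0
        fi
        sleep 1
    done
    echo "still not visible after 30 attempts"
    return 1
}

# On node 1, after "touch cachetest" was run on node 2:
#   wait_for_file cachetest
```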
NFS read lease bug workaround for Linux distributions running kernel
version 2.x (2012/11/14)
NFS uses a read lease to guarantee that a client can perform local read opens without informing the
server. Axway tests show that under some circumstances this read lease is not updated correctly and
causes inconsistency on what the different nodes see on the shared file system. This can cause the
nodes in a cluster to stop and restart unexpectedly and repeatedly.
Because this issue is not currently resolved in the 2.x Linux kernel, Axway recommends that you turn
off the read lease function on the NFS server. This is done by setting a flag in the /proc/sys file
system to tell the kernel to not allow any use of this feature.
The following procedure provides an example of how to set the flag on a Red Hat machine acting as
NFS server. Similar procedures can be adapted for other distributions.
1. Important: Stop B2Bi before you perform this procedure.
2. As root, execute the command:
echo 0 > /proc/sys/fs/leases-enable
3. Restart the NFS daemon:
/etc/init.d/nfs restart
4. After you complete the previous steps, unmount and re-mount from the NFS clients.
Note The change implemented by this procedure disappears when you reboot the server. To
make this change persistent over machine restarts, add the following lines to a start-up
script that is executed before the NFS daemon is started. A good place for this is in
/etc/init.d/nfs in the "start" section, after the check for root UID but before the nfsd
is started (insertion in bold):
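The insertion itself is missing from this copy of the document; based on step 2 above, the added lines would presumably be:

```shell
# Disable NFS read leases before nfsd starts (reconstructed from step 2;
# the exact lines were lost from this copy of the document).
if [ -w /proc/sys/fs/leases-enable ]; then
    echo 0 > /proc/sys/fs/leases-enable
fi
```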
artStopMessage.execute(StartStopMessage.java:118)
at com.axway.cluster.messaging.MessageExecutionWrapper.execute(MessageExecutionWrapper.java:29)
at com.axway.cluster.bus.connection.Connection.handleMessage(Connection.java:249)
at com.axway.cluster.bus.comm.MessageExecutor.run(MessageExecutor.java:48)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
at com.axway.cluster.extensions.thread.EventedThread.primRun(EventedThread.java:102)
at com.axway.cluster.extensions.thread.EventedThread.run(EventedThread.java:80)
Caused by: java.lang.NullPointerException
at org.eclipse.jetty.server.handler.HandlerCollection.setHandlers(HandlerCollection.java:86)
at org.eclipse.jetty.server.handler.HandlerCollection.addHandler(HandlerCollection.java:155)
at com.axway.transport.http.server.Jetty.handlerAsCollection(Jetty.java:333)
at com.axway.transport.http.server.Jetty.getHandlersFromJettyConfig(Jetty.java:318)
at com.axway.transport.http.server.Jetty.createHandlersFromConfig(Jetty.java:182)
at com.axway.transport.http.server.Jetty.doStart(Jetty.java:71)
... 14 more
Solution: Working with your system administrator, modify the B2Bi user account on the operating
system where B2Bi is installed. Set the ‘open files’ property to a high number. The optimum setting
depends on the way your system is to be used. The greater the number of pickups, the larger the
setting should be.
For example, for a community configured with 2000 Trading pickups, begin with an 'open files'
setting of 102400, check the results and adjust accordingly.
Run the command in a loop while the system starts to determine if the existing setting for the "open
files" property is acceptable. If the errors listed above appear in the logs, the setting is not high
enough. If the system starts cleanly the setting is correct.
Example:
Use the lsof command to estimate an appropriate value for the ‘open files’ property:
Run the following loop while the nodes are restarting - (run as your user, not root).
This command reports the open file count for the specified user every 5 seconds. For example:
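The loop command itself is missing from this copy; a minimal sketch (bash, with a hypothetical helper name) is:

```shell
# Hypothetical helper (the original loop was lost from this copy of the
# document): count the files currently open by a user via lsof.
count_open_files() {
    lsof -u "${1:-$USER}" 2>/dev/null | wc -l
}

# Sample once:
count_open_files

# Repeat every 5 seconds while the nodes restart (stop with Ctrl+C):
#   while true; do count_open_files; sleep 5; done
```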
15387
15387
15387
15387
15387
15387
15387
15387
Working with the system administrator, use these results to set the 'open files' value. The values that
are returned will vary depending on the number of community pickups that are defined. The setting
you use for the 'open files' property should be higher than the maximum 'open file count' value that
is returned, allowing a buffer for additional resources.
In B2Bi Windows installations where you have set up two network adapters, the integration
engine retrieves the wrong IP address from the backup network interface.
Resolution
For Windows platforms, from B2Bi 2.1.0 SP6 patch 1, you must complete the following
procedure to ensure the retrieval of the IP addresses list in the correct order (primary vs
secondary) when two network adapters are set up:
1. Go to:
<B2Bi_installation_directory>/Integrator/local/config/
2. Open environment.dat in a text editor.
3. Set the value of CORE_BSOCKET_USE_ADDRINFO to "1".
Example:
CORE_BSOCKET_USE_ADDRINFO=1
4. Save the file and restart the integration engine.
Note When B2Bi is installed in a cluster, the variable needs to be set on all the nodes with
the same value.
When you run the installer in configure mode you have access to the original installation pages as
well as to additional pages that you can use to fine-tune the B2Bi performance and behavior.
1. In Windows Explorer, go to the root of the B2Bi Client or Server installation directory and right-
click configure64.exe.
2. Select Run as administrator.
Alternatively, you can:
1. Go to Start > All Programs > Axway Software > [B2Bi Client or Server installation
name] > Configure.
2. Right-click Configure and select Run as administrator.
UNIX/Linux:
Go to the root of the B2Bi Client or Server installation directory and launch configure.sh.
Configuring
After you start an installer in configure mode, you use it much as you do in installation mode.
You can work in either in a console display or a graphic interface display (Windows only) to view
your current installation settings and modify fields to meet your operating requirements.
Integrator
l License Number
l License Key
l SAP connector
o Library path
l Queue size
l Message size limit – Default = 16384
l Use B2Bi Visibility
l Enable online archive
l Enable Integration Manager
l WebEDI
l ALE
l FTP
l File system
l HTTP
l Email
l Secure Transport
l Enable migration
The recommended setup is to configure the B2Bi integration engine to run with fail-safe
operation enabled (synchronized write mode). This guarantees maximum message integrity and
overall system consistency.
l Enable EDIFACT batch file splitter – Enables the use of EDIFACT batches.
l Enable X12 batch file splitter – Enables the use of X12 batches.
l Use DML Maps – Enables the use of DML maps within B2Bi. Enabled by default. You should
only clear this option in exceptional cases (for example, when migrating environments).
l Generate acknowledgements for undefined X12 transaction sets – Enables the
generation of acknowledgements for non-defined transaction sets for specified partners within
your system. Disabled by default.
l Stop possible duplicates from being reprocessed flag – Messages that transit in B2Bi can
potentially be duplicated when the B2Bi system is unexpectedly halted before it can notify a
partner system that the communication is completed / confirmed. Use this option to control
B2Bi handling of possible duplicates.
Important: If you modify this setting (select or clear), for the changes to take effect you must
save the changes: open the System Profile Manager application of the B2Bi Client tool set,
and select File > Save Dataset.
Option not selected = The integration engine automatically reprocesses completed (but
uncommitted) transfers when the system restarts.
Option selected = The Integration Engine does not automatically reprocess messages which
potentially have been processed already. Instead, it marks these messages as possible duplicates.
The system administrator must decide whether or not to retransmit flagged messages manually.
B2Bi SAP exchanges require SAP version 3.0.9 libraries.
l B2Bi I/O management on page 276
l Java memory usage on page 279
l Manage sockets and Oracle processes on page 286
l Network tuning on page 285
l Troubleshoot unexpected trading engine restarts on page 288
l Troubleshoot startup failure with large numbers of trading pickups on page 294
l Troubleshoot integration engine selects wrong IP address from backup network interface on
page 270
[integration_engine_install_directory]\solutions\4edi\pgm\b2bi_
diskaccesstime.x4
This tool returns two values: synchronized and unsynchronized.
Command recommendations:
l Run the command when there is no traffic.
l Run the command several times to obtain average values.
Example results
> r4edi diskaccess.x4
Synchronized Unsynchronized
In the B2Bi server, set the limit for caching message content in memory (default = 4096 KB):
l When the message size is < limit, the message is cached in memory
l When the message size is > limit, the message is written to a physical file in the integration
engine filer directory
Example: Suppose you typically process a high load (more than 5 msg/sec) and messages are
around 10 KB in size.
1. Make sure also that there is enough memory available on the system to handle the load.
2. Increase the limit in order to use more memory and limit the I/O usage.
To increase the limit, run the B2Bi installer in Configure mode. Select to configure the Server,
and reset the value Message size limit for caching message content in memory to a
higher value.
Example: Suppose you want to handle high loads of messages (more than 5 msg/sec) and have
configured B2Bi so that message entry creates a large number of entries in the Message Log.
Use System Profile Manager to control the logging behavior.
On the System Profile Manager > Environment tab > Message logging section, set the Message
logging level value to no logging.
Attention: Execute the following actions only with the assistance of an Axway expert or support.
1. Check Message Log for entries older than 7 days.
2. Check if there are fxxx files older than 7 days located in $B2BI_SHARED/data/filer .
Note: You can use filer_list.x4 and filerutil.x4 to list and remove filer files. See
procedures below.
3. Verify that there are no “old” active entries (old means that the message processing stopped for
different reasons) which can slow down the logger. If old entries exist, analyze the entries to
understand why the processing is not complete, and reprocess or inactivate active entries
accordingly. To inactivate entries, do one of the following:
l Use LOGGER_OPTIONS –R.
l Manually inactivate the entries in Message Log.
Using Filerutil
1. To list all files older than <_no_of_days_>:
r4edi filerutil.x4 -a $CORE_LOCAL/config/passwd -r $B2BI_SHARED_DATA/data/filer/singleton/ -t _no_of_days_
2. To delete the files older than <_no_of_days_>:
r4edi filerutil.x4 -a $CORE_LOCAL/config/passwd -r $B2BI_SHARED_DATA/data/filer/singleton/ -t _no_of_days_ -D
l "Failed to start filer directory"
l "Filer insert reply: Returned filer file xxx is not empty"
l "UTIL_FILESTATUS: failed to fetch status for file xxx"
l "Stale NFS file handle"
l "Filer insert reply xxx doesn't exist"
Attention: Execute the following actions only with the assistance of an Axway expert or support.
l Test IO coherence using http://wiki.samba.org/index.php/Ping_pong.
Note: This IO coherence test requires working fs locks.
l “no memory left”
l “java heap space”
To manage this type of error:
l Check the amount of memory that is available when you perform a load test. To do this you can
use a tool such as top, topas, free, nmon or Task Manager, depending on the OS. Make sure the
system has enough memory to handle the load. The minimum memory required is 4 GB.
Depending on your configuration and processing load, this number varies. In many cases, 10 to
16 GB of memory is required.
l Check if Xmx values are high enough in the integration engine and in the trading engine.
o Integration engine – Set the Xmx value in the file $CORE_
LOCAL/config/java/jvm.cfg
o Trading engine – Open the file <trading_engine_install_
directory>/conf/jvmArguments.xml in a text editor, and modify the Xmx value for the
related node. For additional information see Tune the trading engine on page 280.
The place to change the actual Xms and Xmx values used for the Control Node (CN) and Trading
Engine (TE) JVM nodes is in the Interchange/conf/jvmArguments.xml file. The best practice is
to keep both parameters equivalent to each other's value for a given JVM. For example, the CN can
have Xms512m and Xmx512m while the TE has Xms1024m and Xmx1024m, because within a JVM
the values are equivalent.
Setting the optimal value for the JVM is often a trial and error process, based on transaction size,
transaction volumes, OS, hardware, etc. On most low-volume systems, no tuning is required. High-
volume systems can typically benefit from tuning.
The effective maximum value you can use for this setting is 2048 MB. Note that for values above
1024 MB, there is a risk that Java consumes more resources for memory management than it
benefits from the additional memory allotment.
In any case, the values of the Executive, CN and TE Java heaps combined must not exceed the
physical memory of the machine. Additionally, be sure to reserve some free RAM for the OS to use
(the recommended minimum is half a gigabyte).
To change the default heap size settings:
1. Go to <Interchange_install_directory>/conf/ and open the jvmArguments.xml file
in an editor.
2. Locate the following lines in the file (TE example):
<NodeType type="TE"
class="com.axway.clusterold.startup.Boot">
<Option>${axway.haboob.heap.initial}</Option>
<Option>${axway.haboob.heap.maximum}</Option>
3. Replace the initial and maximum heap variables with values as shown in the following
examples:
<NodeType type="TE"
class="com.axway.clusterold.startup.Boot">
<Option>Xms1024m</Option>
<Option>Xmx1024m</Option>
4. Save the file.
tuning.properties file
You can manage a specific set of trading engine properties from the tuning.properties file,
located in [Interchange_install_directory]/conf. Trading engine properties not supported
in this file can be set in the System properties page, see System properties page on page 284.
The properties in this file are applied only to the node where the tuning.properties file is
located. You must set the property for each node of a cluster by modifying the file for each node.
By default, tuning.properties is empty, which indicates that all of its entries are operating at
their default values.
You can set the following tuning parameters by adding them to tuning.properties. As with
setting the JVM value, it is impossible to suggest optimum values for these settings. Any changes in
this file only take effect after stopping and restarting B2Bi.
The following is an example of a line that has been added to this file:
systemThrottle.pausePickups=true
The following table lists parameters that you can add to the tuning properties file.
messagePurge.onceADayHour (default: 0) – If messagePurge.onceADay is set to true, this field
controls the hour of the day to run the purge. Values are from 0 to 23 only.
messagePurge.Tasks (default: 5) – The number of tasks that will be scheduled to perform
message purge.
startupSynchronizer.multicastAddress (default: 232.92.56.63) – The multicast address used to
synchronize the startup of Control nodes.
systemThrottle.minimumBytesFree (default: 0) – The system throttle engages if available heap
falls below this number. The default value of zero indicates that the memory should not be
checked.
systemThrottle.pausePickups (default: true)
taskScheduler.maxThreads (default: 50) – This value should always be 1/3 of the
systemThrottle.maximumTaskQueueSize value (as illustrated by the default values displayed in
this table).
Caution: Incorrectly modifying values on this page can severely degrade product behavior. Do not
modify values on this page without explicit guidance from Axway support.
To access the "hidden" default System Properties configuration page, point your browser to
http://<hostname>:6080/ui/core/SystemProperties. Trading engine properties not
displayed on this page can be set in the tuning.properties file; see tuning.properties file on
page 281.
In some cases, changes in this page take effect immediately, while in other cases a product stop and
restart are required. If you fail to see an immediate change in B2Bi behavior, try a stop/restart.
Network tuning
l Shared disk
l B2Bi database
l Integration engine database
An acceptable maximum response time to the shared disk is 15 seconds.
You must obtain a response time to the remote database of less than 5 seconds.
Within clustering, there are two main clustering concepts:
l Active / Passive – In an active / passive cluster, one node owns the services, while the other
one remains inoperative. Should the primary node fail, the secondary or backup node takes the
resources and reactivates the services, while the ex-primary remains in turn inoperative. This
concept is also known as a failover cluster.
l Active / Active – In an active / active setup, there is no concept of a primary or backup node:
both nodes provide services. Should one of these nodes fail, the other must assume both its
own services and the failed node's services.
B2Bi provides full active / active clustering capabilities. Active / passive is not supported.
The most common size for a high-availability (HA) cluster is a two-node cluster. This is the minimum
node requirement to provide redundancy. Should one node fail (for a hardware or software
problem), the other node acquires the resources that were previously managed by the failed node,
in order to re-enable access to these resources.
B2Bi supports two-node clustering.
l Vertical scaling – Adding resources to a single node in a system, typically through the addition
of CPUs or memory to a single node.
B2Bi supports vertical scaling. It can leverage more CPUs available on a single server to increase
the throughput.
o The trading engine is multi-threaded. It can be configured to scale automatically to specified
limits determined by hardware and configuration settings.
o The integration engine can be made more scalable by adapting its configuration to the
number of CPUs in the machine.
l Horizontal scaling – Clustering is a form of horizontal scaling: adding more nodes to a
system. However, clustering in B2Bi involves more than adding additional machines and
processing resources.
The effect of adding B2Bi nodes is not linear; adding a second node does not double the
throughput capacity. Many factors influence performance (CPU / memory / disk / network).
The results of horizontal scaling depend on the root causes of processing bottlenecks.
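Before deciding between vertical and horizontal scaling, it helps to record a node's current capacity. A quick check with standard Linux tools (these are OS utilities, not B2Bi commands):

```shell
# CPU cores available to this node:
nproc
# Total physical memory:
grep MemTotal /proc/meminfo
```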
For more information, see Tune and scale clusters on page 325.
Information summary
Concepts
B2Bi cluster features on page 298
Clustering for Axway products working with B2Bi on page 305
File systems for B2Bi clusters on page 308
B2Bi cluster behavior on page 309
Tasks
Install B2Bi cluster nodes on page 307
Deploy objects and configurations to clusters on page 319
Set and modify environment variables on page 321
View cluster node status on page 321
Add a cluster node on page 323
Delete a cluster node on page 323
Start a node on page 324
Stop a node on page 324
Tune and scale clusters on page 325
Best practices for cluster environment development on page 327
B2Bi clustering provides:
l Distributed processing across multiple physical servers
l High-availability processing continuity on failure of one or more nodes
l The ability to add and remove non-primary nodes without B2Bi stopping message
processing
l Protection against loss of in-process messages in the event of a single server failure
l Scaling per node, and extended capacity by adding nodes
l Multi-node synchronized configuration and object deployment
To set up clustering you install the B2Bi server on multiple hosts, where each host constitutes a
single service node. Trading engines and integration engines are linked together in a 1:1
relationship. You must install the trading engine and the integration engine of each node on the
same host machine.
The clustered nodes share:
l A database for the B2Bi server
l Access to a common file system
Binary resources used at runtime are shared through the shared file system (MBC and
Datamapper maps).
DML resources are deployed and synchronized to all nodes from Mapping Services.
The following figure illustrates a B2Bi cluster architecture on two nodes. Each cluster node consists
of exactly one trading engine and one integration engine:
About nodes
In the B2Bi active/active environment the term "node" is used in two ways:
l In the generic usage, a node is a computer that supports a set of related applications and is a
member of a high-availability cluster.
l In a B2Bi-specific usage, a node is a sub-server of the B2Bi implementation. Each node supports
a specific subset of the B2Bi system tasks. A single hosted instance of B2Bi requires three nodes:
o B2B – Provides contextual processing information to the integration engine
o Trading engine - Provides transport protocol and security handling
o Integration engine - Provides message-content handling
Two B2Bi-specific roles apply to integration engine nodes:
l System role – The integration engine operating in the System role runs the cluster singleton
tasks. Cluster singleton tasks are always running. They can only run on one integration engine
instance in the cluster. The integration engine with the System role is the first node that is
started in supervised mode.
l Primary node – The integration engine operating in the Primary role runs the integration
engine user singleton tasks. Integration engine singleton tasks support polling pickups,
dedicated processing environments (integration engine HMEs) for the multi-document
envelopers and sequencing activities. The integration engine with the primary role is the first to
start the user tasks.
These two B2Bi-specific roles can be attributed to the same cluster node or to different nodes;
however, it is best to configure them on different nodes.
Task distribution
Certain tasks, by their nature, can only be executed on a single B2Bi-specific node. This is the case,
for example, for the B2Bi application pickups provided by the integration engine that execute
polling (FTP, file system). B2Bi processes use cluster singletons to ensure that these types of tasks
run only on the node which was started first. An internal load balancer then distributes the messages
within the cluster if they are picked up as a result of the polling task.
Similarly, certain internal integration engine tasks can only run once in the cluster. For these tasks, a
new proxy reroutes the initial connection from a client task to the system singleton task on the
system node. All data exchange after that goes directly between the client task and the system
singleton task.
Singleton activities
For certain activities it is necessary to process messages in sequence and avoid any parallelism. This
is the case for enveloping multiple documents and for sequencing activities. These activities all use
dedicated processing environments (integration engine HMEs) which only run on the primary node.
HMEs are configured to process messages one by one. Enveloping of single documents takes place
in separate activities that do not have these restrictions.
An instance of the integration engine can only run when its associated trading engine instance is
running. If the trading engine on a specific cluster node fails, it attempts to stop the associated
integration engine instance. Additionally, the integration engine listens for a "keep-alive" message
from the trading engine. If the keep-alive is missing, the integration engine stops itself.
Similarly, if the integration engine associated with a trading engine fails, the trading engine on the
same cluster node will stop itself.
A senior clustering control agent, located on the principal cluster node, monitors and coordinates
node activity.
In the case of messages coming in through the integration engine, the messages are balanced over
the various integration engines across the multiple nodes. The integration engine typically pushes
the processed message to the local trading engine, which then delivers it to the final destination.
Load balancing
l For processes driven on the trading engine:
o Protocols where messages are “pulled” by B2Bi (FTP, file system, …) – B2Bi provides built-in
load-balancing.
o Protocols where messages are “pushed” to B2Bi (HTTP, …) – Load-balancing must be
provided externally (typically through hardware load-balancing).
l For processes driven on the integration engine:
o There is no active load balancing inside the integration engine. The queues are shared and
all cluster nodes have equal access to all messages once messages are in the queue. This
means that the processing can switch between cluster nodes for every activity.
o Protocols where messages are “pulled” by B2Bi (FTP, file system, …) – No load-balancing
mechanism (only the primary node polls).
Note: Once the message has been read, the processing of the message can switch between
cluster nodes.
o Protocols where messages are “pushed” to B2Bi (HTTP, …) from partner or application side –
Load-balancing must be provided externally (typically through hardware load-balancing).
o After message consumption – An automatic dispatching step on a single node distributes the
load between the nodes in the cluster.
o Messages received through the trading engine are processed on the node on which they
have been delivered.
B2Bi engines
Trading engine
In B2Bi, the trading engine "drives" the integration engine. The trading engine connects to a server
inside the integration engine (the kernel server). Whenever changes appear in the cluster, the
trading engine communicates these events to the integration engine.
Within the cluster there are control nodes and trading engine nodes. Control nodes host the B2Bi
user interface. Trading engine nodes host trading activity. The oldest node in the cluster runs
singletons that control the cluster.
Whenever you add a node, it is assigned processing work and becomes responsible for a set of
exchange points.
Integration engine
Within the cluster there are primary and secondary nodes. There is only one primary, typically the
oldest (first started) node in the cluster. The primary node runs certain singletons not running on
the secondary nodes and manages the “pull” protocols on application side. It also handles
sequencing activities through single-message dedicated processing environments (integration
engine HMEs). When the primary node fails, another node runs these particular singletons. When a
node is added to a running cluster, the new node is a secondary node that receives workload from
the primary node.
Limitations
Architectural limitations
l You can implement a maximum of one trading engine per physical server.
l You can implement a maximum of one integration engine per physical server.
l A single integration engine is associated with a single trading engine to form one cluster node
unit.
l Polling activities (example: FTP polling) and message consumption activities occur only on the
primary node. They are not load balanced over the multiple nodes. Once messages are
consumed, their processing is balanced across the cluster nodes.
Performance limitations
l In the event of an uncontrolled stop of one of the nodes in the cluster, the whole cluster needs
to “recycle”, causing a temporary outage due to a restart of both the trading engine and
integration engine on the remaining nodes.
l In the event of a controlled stop of the node running the integration engine in system mode, the
whole cluster needs to “recycle”, causing a temporary outage due to a restart of the remaining
trading engine and integration engine.
l In the event of a controlled stop of the non-system node (integration engine), there is no cluster
recycling.
l In the event of a stop of one of the nodes, there is a risk that messages in process at the
moment the node stops will fail. These messages need to be reprocessed in the trading engine
(from the Message Tracker).
Management features
In the B2Bi user interface, users with B2Bi administrative rights can view and manage cluster nodes
on the System Management page.
The following figure illustrates a System Management page view of the processing engines running
on a host server:
In this figure you see three B2Bi processing nodes (engines) running on a single host to provide a
cluster node:
l B2Bi engine (linux28_b2b)
l B2Bi trading engine (linux28_te)
l B2Bi integration engine
From this page you can add, delete, start, stop and restart engines, as well as view and manage
integration engine tasks.
B2Bi clusters can work with the following Axway products:
l Secure Relay
l PassPort
l Transfer CFT
Secure Relay
Secure Relay DMZ nodes in the demilitarized zone (DMZ) or perimeter network securely relay
messages from partners to the trading engine in the protected, internal network. For security
reasons, DMZ nodes do not initiate connections to the protected network. DMZ nodes also do not
connect to the database or write messages to the file system. As it is stateless, messages cannot be
lost when Secure Relay fails. Secure Relay, which uses Axway technology, supports inbound and
outbound proxying.
If there are multiple Secure Relay instances, an external load balancer is required in front of
Secure Relay in the DMZ.
If a Secure Relay DMZ node loses its connection to all trading engines, Secure Relay stops
listening for external, inbound connections coming from partners and waits for the “internal”
communication with at least one trading engine to be re-established.
PassPort
PassPort supports active / active clustering. PassPort requires manual configuration and management.
If you have multiple PassPort nodes, a load balancer is required between B2Bi and PassPort.
Transfer CFT
A B2Bi cluster can interact with Transfer CFT (active/passive) using PeSIT on both the partner side
and on the application side.
l For transfers initiated by B2Bi – another node takes over.
l For transfers initiated by Transfer CFT (sending, or polling) – Transfer CFT can define backup
addresses that enable it to connect to a secondary node.
Prerequisites
l A shared database is installed on your network.
l A shared file system is available on your network.
l A load balancer is available for the B2Bi user interface and for the integration engine client user
interface.
1. Set up node 1
To set up a cluster, you begin by installing the primary cluster node using the B2Bi server
installer. When the installer prompts you for the B2Bi database and the shared file system, you
must specify connections to database and file system that are located on an external host.
The first node you install is automatically identified as the senior control node of the cluster.
2. Set up node 2
Next, install the secondary cluster node. Use the same values for the external B2Bi database
connection properties and the shared file system.
3. Start all servers.
A B2Bi cluster uses two types of file system:
l Local file system (one for each node)
l Shared file system (shared between cluster nodes)
The integration engine uses environment variables to reference specific directories:
Important: The file system you use for B2Bi cluster file sharing must be a high-performance system
that allows a large number of files to be opened simultaneously.
The trading engine uses the /opt/axway/shared/common directory for the storage of runtime
data.
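The guide does not mandate a particular sharing technology. Assuming an NFS-backed shared file system, the mount could be declared along these lines (server name, export path, and mount options are placeholders):

```shell
# /etc/fstab entry making /opt/axway/shared a cluster-wide mount
# (server name, export path, and mount options are illustrative):
#   fileserver:/export/b2bi_shared  /opt/axway/shared  nfs  rw,hard  0 0

# Verify the mount (falls back to / if the path does not exist yet):
df -h /opt/axway/shared 2>/dev/null || df -h /
```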
In B2Bi, the trading engine has a central role in the cluster because it receives the notifications of
these events. When changes occur in the cluster, the trading engine communicates this information
to the integration engine. The trading engine and the integration engine are closely linked in a 1:1
relationship. Whenever you start or stop a trading engine, this results in the start / stop of the tasks
within the integration engine.
This topic describes the behavior and recovery mechanisms of B2Bi in the event of a cluster
membership change.
Definitions
Configuration Adapter Node
The Configuration Adapter Node provides services to the integration engines which enable
them to search and query for Agreements and Metadata Profiles to assist in the runtime
message processing. It also provides a number of other B2Bi-related search services for
looking up and validating Messaging ID existence, Detector functionality and other
information. This node is also responsible for publishing the current Application state to
the integration engines on startup.
Control Node
The Control Node manages the start and stop of the integration engine server, and runs the
user interface. It logs these activities in <hostname>cn.log. Each host in a cluster has a
single Control Node.
Trading Engine Node
Instance of the trading engine on a single cluster node. Each trading engine is paired with
an instance of the integration engine on a host.
Integration Engine Node
Instance of the integration engine on a single cluster node. Each integration engine is
paired with an instance of the trading engine on a host.
System Node
The System Node is the integration engine instance that runs the cluster singleton tasks.
Singleton tasks are always running and can only run on one integration engine instance in
the cluster.
Singleton
Code that supports a single instance of a service in the cluster. Singletons in B2Bi clusters
are classified as:
l Host singletons – There is a single instance of a host singleton for a specific host
machine. If the trading engine is stopped on a host, then the host singleton service is
stopped on that node, and an attempt is made to restart it on another trading engine in
the cluster.
l Cluster singletons – Unique instances of service controllers in the entire B2Bi cluster. A
cluster singleton can only run on a specific node type. There are cluster singletons that
can start on each type of node. When the node that hosts a cluster singleton is stopped
(due to a system failure), the oldest node (of the appropriate type) that can host the
singleton takes over its work.
Adding nodes
When you add a node to the cluster, it immediately starts to receive and process workload. The way
the B2Bi engines manage the change is different for the trading engine and the integration engine.
Trading engine
When a trading engine is stopped, a cluster membership change event message is received by each
cluster node.
Trading engine services are managed and distributed through cluster singletons. There can only
be one instance of any singleton-controlled service in the entire B2Bi cluster. Each cluster singleton
can only run on a specific node type. There are cluster singletons that can start on each type of
node. Some start only on Control Nodes, and some only on Trading Engine Nodes.
In case of an abrupt trading engine stop, B2Bi does not wait for the in-process messages to finish.
Processing work is immediately distributed to the remaining running trading engine nodes. If there
was a cluster singleton running on the trading engine that stopped, it is restarted on the oldest node
that supports that specific type of singleton.
In addition to cluster singletons, B2Bi has host singletons. There is only one instance of a host
singleton for a specific host machine (however, note that you can have multiple nodes on each
host: Control Node, Profile Node, Trading Engine Node). If the trading engine is stopped on a host,
then the host singleton service is stopped on that node.
Integration engine
An integration engine node can run in system mode and/or in primary or secondary mode. A single
integration engine can act in both system and in primary roles. When there is more than one active
integration engine in the cluster, B2Bi distributes the roles as described in the following paragraphs:
System role
Only one integration engine node can have the system role assigned to it. The integration
engine node that is assigned the system role runs the cluster singleton tasks. Singleton
tasks always run, and can only run, on one integration engine node in the cluster. The
system role is assigned to the first node that is started in supervised mode.
When the integration engine node that is assigned the system role fails, the remaining
integration engine nodes try to take over the system role. Only one node will succeed, and
become the new node running in the system role. Once the new system role assignment
has been established, all user tasks on the remaining nodes are restarted. There may be a
temporary interruption of service until the user tasks are restarted.
Primary/secondary role
There is only one primary node, which is typically the oldest (first started) node in the
cluster. The primary node runs certain singletons that do not run on the secondary nodes,
and also manages the “pull” protocols on the application side. When the node acting with
the primary role fails, another node must run these particular singletons. When a node is
added to a running cluster, the new node is initially assigned a secondary role. The new
node receives workload assignments from the primary node.
Removing nodes
The removal of a node can occur in two general ways:
l graceful / controlled
l unexpected / uncontrolled
This type of stop is initiated when the user stops the trading engine. The integration engine owns a
monitoring connection that detects the loss of service when the trading engine is stopped.
Trading engine:
Whenever a trading engine is stopped, a cluster membership change event message is received by
each node. If the stopped trading engine is paired with an integration system node, and if there are
remaining nodes in the cluster, another node becomes the system node.
Integration engine:
When you intentionally stop a trading engine, a cluster membership change event message is
received by each node. Each node verifies if its host name is still among the valid cluster members. If
not, the user tasks of that node are stopped. The stop is graceful, which means that all
active sessions within the integration engine are completed. If it is the primary node, the singletons
are stopped as well. If there are remaining nodes in the cluster, the next oldest member becomes the
primary node.
In the case where you are intentionally stopping the supporting machine, you must stop the
integration engines before powering down.
An unexpected / uncontrolled stop can occur in the following situations:
l The node was stopped by the user (not following the procedure as described in the graceful
method, but by killing the service or another vital main process).
l The server on which the node is running is powered down.
l Hardware failure occurs (local drive / CPU / ...)
l The node loses the connection to the network.
l The node cannot connect to the database or to the shared file system.
Of this list, the first two can be avoided with correct procedures and access rights. The third
(hardware failure) is less common.
The following paragraphs describe the remaining two items.
Network failure between B2Bi and its database or file system (or the
database / file system are not available)
Note: B2Bi also supports retry mechanisms between engines:
l The trading engine retries sends to the integration engine
l The integration engine retries sends to the trading engine
l A control node that manages the B2Bi user interface. An external load balancer provides
scalability for the B2Bi user interface and for the integration engine client user interface.
If a node fails, the load balancer detects the failure, and redirects the user to a new available
node. In this case, the user must reconnect to the user interface. The user may lose any
configuration data entered in the browser that was not saved by the node that failed.
l A single ISDN router node.
This node can only be active-passive.
New messages
For messages “pushed” to B2Bi (through protocols like HTTP and other listening protocols), the
external load-balancer detects that the B2Bi node (trading engine or integration engine) is
unavailable and redirects all new incoming messages to another available B2Bi node.
For messages “pulled” by B2Bi (through the FTP client), the B2Bi cluster detects that the B2Bi node
(trading engine or integration engine) is unavailable and assigns the pulling tasks to another
available node.
l Messages pushed to B2Bi – The partner or application must resend the message.
l Messages pulled by B2Bi – B2Bi (trading engine or integration engine) automatically restarts
the pulling tasks to try to get the message again from the partner or the application, from a
different node.
l All messages are processed.
l There is a limited risk of message duplicates, compensated by the trading engine, depending on
the format being used.
o For the protocols ebMS/EDIINT/RosettaNet, the trading engine checks received message IDs
for duplicates.
o If duplicates represent a risk, the administrator must check the Message Tracker to see if
messages were processed twice, or alternatively, specific code must be implemented to
check for duplicates.
l Processing automatically continues (restart) on a different node:
o Inbound message: start from the beginning (unpackaging…)
o Outbound message:
o If packaging completed: send to the partner
o If packaging not completed: create the package
The following behavior is applied to messages fully received and in progress in the integration
engine when a particular node fails. This includes synchronous acknowledgements if they are
expected by a partner.
l All messages are processed.
l No risk of message duplication.
l The integration flow automatically restarts on an alternative node in the cluster, from the last
processing step in the execution chain (registered in the service) at the time the node failed.
Note: For each B2Bi processing step, the integration engine reads input queues, executes the
activity, stores the output in internal queues, and then commits the message read. Each B2Bi
step not fully completed is automatically restarted from scratch, polling the uncommitted
message again from this internal queue.
If another integration engine is running on another cluster node and is already consuming messages
from the trading engine on that node, that integration engine begins consuming messages from
both trading engines, until the other integration engine returns to service.
If no other integration engine is ready to consume messages, the trading engine waits for one of
these conditions:
l 60 second timeout (default setting) for detection of an active integration engine
l Queue limit of 2000 messages (default setting) to send to an integration engine
When either of these conditions is met, the trading engine throttles message delivery, holding
messages in the queue until an integration engine becomes active.
You can view the status of all trading engine and integration engine nodes in a cluster from the B2Bi user
interface, on the System Management page.
From the System Management menu, click System Management to view the page. The following
image illustrates a cluster node in which the integration engine is stopped, and in which the trading
engine found no other integration engines to direct messages to. As a result, the trading engine has
gone into "throttled" status.
Acknowledgements (outbound)
Documents that did not successfully complete the step that detects, splits, and maps the message
and generates the acknowledgement are re-executed completely on another node, which generates
the acknowledgement.
Enveloping
Messages waiting to be enveloped (by the integration engine) on a node that fails before
enveloping is executed are enveloped on another available node.
Acknowledgements (inbound)
If a node fails before a pending acknowledgement has actually been received, the incoming
acknowledgement is handled and correlated to the original entry by one of the alternative nodes.
Message duplication
When nodes are removed from the cluster, there is a risk in the integration engine that messages
may be duplicated and processed twice. The system administrator should check the Message Log
and Message Tracker in the B2Bi user interface to look for duplicates and take action.
l The B2Bi node that failed automatically recovers under certain conditions: The trading engine
and integration engine automatically restart and process new messages and documents.
l For messages “pushed” to B2Bi (HTTP, …), the external load-balancer detects that the B2Bi node
(trading engine or integration engine) is available and pushes new incoming messages to this
node.
l For messages “pulled” by B2Bi (FTP, …), the B2Bi cluster detects that an additional B2Bi node is
available and load balances pulling tasks to this node again (this only applies to the trading
engine protocols, the integration protocols will continue to run on the primary node).
The following tools are available for tracking messages and documents:
l Message Tracker (trading engine)
l Message log (integration engine)
l EDI Tracker (integration engine)
l Sentinel
Resubmit messages
You can resubmit messages from Message Tracker even if the message failed on a node that is no
longer available.
You can resubmit documents from Message Log even if the documents have failed on a node that is
no longer available.
l Deploy objects and configurations to clusters on page 319
l Set and modify environment variables on page 321
l View cluster node status on page 321
l Add a cluster node on page 323
l Delete a cluster node on page 323
l Start a node on page 324
l Stop a node on page 324
The B2Bi configuration comprises the following layers:
l System Layer
l Partner Layer
l Flow Layer (processing flow)
l Resource Layer (modular resource code and binaries used in specific processing flows)
The information for the bottom three layers in the preceding figure is stored in the B2Bi shared
database. This information is used by all nodes.
The Resource Layer presents a special case.
About resources
To control the way B2Bi handles the messages and files that transit its servers, B2Bi uses
resources. Resources are small installable programs that extend the standard set of
message-handling processes provided by B2Bi.
A resource might enable B2Bi to converse with a specific type of remote application, or it might
modify the structure of a transiting file, or retrieve a specific data element from a database. The
functional possibilities of resources are virtually limitless.
In B2Bi there are two main categories of resources:
l DML maps
DML (Data Manipulation Language) is an Axway proprietary programming language.
l Other components
In the trading engine:
o Inline processors (Custom Java code to influence the behavior of the trading engine)
In the integration engine:
o Datamapper Maps
o Message Builder Components (MBCs)
The following table describes the correct placement:
Resource type        Shared location
Inline processors    %B2BI_SHARED%\ (1)
Datamapper Maps      %B2BI_SHARED%\local\4edi\component
MBC                  %B2BI_SHARED%\local\4edi\component
(1) Requires additional changes to be used from the shared location (class path / file registry).
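As an illustration, copying a Datamapper map into the shared component directory might look as follows (a sketch: the B2BI_SHARED shell variable, its /tmp fallback, and the map file name are placeholders, and Linux path notation is used in place of the Windows-style %B2BI_SHARED%):

```shell
# Place a Datamapper map where every cluster node can load it.
# B2BI_SHARED and the map name are placeholders for this sketch.
B2BI_SHARED=${B2BI_SHARED:-/tmp/b2bi_shared}
mkdir -p "$B2BI_SHARED/local/4edi/component"
touch /tmp/orders_map.dmp             # stand-in for a real map file
cp /tmp/orders_map.dmp "$B2BI_SHARED/local/4edi/component/"
ls "$B2BI_SHARED/local/4edi/component/"
```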
The cluster environment variables are defined in the following file:
<B2Bi_installation_directory>/Integrator/config/environment.dat
You can set and modify the following environment variables to influence cluster behavior:
B2BI_ISALIVE_TIMER
The maximum time to get a response from the database. If this time (in seconds) is
exceeded, the node(s) experiencing the time-out are stopped.
B2BI_CLUSTER_NODE_TIMER
Each cluster node registers a process at the System Node. The cluster node process is
started/stopped by the node when the trading engine for a node is started/stopped.
If a node cannot update/reset the timer or stop the process (uncontrolled shutdown or network
outage), the timer times out.
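Such entries might look as follows in environment.dat (the values shown are illustrative assumptions, not documented defaults):

```shell
# Illustrative environment.dat entries (values are examples only):
B2BI_ISALIVE_TIMER=30        # seconds to wait for a database response
B2BI_CLUSTER_NODE_TIMER=60   # seconds before a silent node times out
```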
To monitor nodes:
1. Open a session in the B2Bi user interface.
2. Click System management on the toolbar to open the System management page.
3. Make sure that the Processing nodes tab is selected.
The interface displays a view of all of the nodes that are currently running in your B2Bi
environment.
For each cluster node you can view the engines that are running, as in the following example:
4. On each cluster node you must have the following processing nodes running:
l B2Bi
l Trading engine
l Integration engine
An entry for each individual engine on each node shows the current status (running / stopped /
starting) of the engine.
5. For additional details about the integration engine sub-server task status, click Integrator
tasks on the Integration engine entry.
A page similar to the following figure is displayed:
Start a node
1. Open a session in the B2Bi user interface on an active node.
2. Click System management on the toolbar to open the System Management page.
3. Make sure that the Processing nodes tab is selected.
4. Select a trading engine node and click Start all nodes to start both the trading engine and the
integration engine.
The engines running on the nodes can be started and stopped using the Start / Stop / Restart
buttons next to each engine entry, with the exception of the integration engine. The tasks inside the
integration engine are started by the trading engine. This implies that when you start the trading
engine, the integration engine is started. Stopping the trading engine results in stopped tasks in the
integration engine.
Stop a node
1. Open a session in the B2Bi user interface on an active node.
2. Click System management on the toolbar to open the System Management page.
The engines running on the nodes can be started and stopped using the Start / Stop / Restart
buttons next to each engine entry, with the exception of the integration engine. The tasks inside the
integration engine are started by the trading engine. This implies that when you start the trading
engine, the integration engine is started. Stopping the trading engine results in stopped tasks in the
integration engine.
You can improve cluster performance in three ways:
l Distribution – Adding nodes to the cluster
l Scaling – Adding processing capacity to a node in the cluster
l Tuning – Task optimization and node configuration
Caution: Performance optimization must be done on a case-by-case basis. An implementation that
processes a low volume of large file size messages requires a different configuration from an
implementation that processes a high volume of small messages. Do not adjust parameters unless
you are experienced with B2Bi.
Processing factors
Before adjusting parameters, keep in mind that processing is heavily influenced by the following:
l Available (primary) memory
o Avoid swapping
l CPU/cores, speed and size
o Allow parallel processing
o Be aware that this is typically a bottleneck when processing large volumes of large files with
complex operations
l Shared disk speed
o B2Bi generates numerous synchronized disk writes to ensure data persistence; easily 50 disk
write actions for 5 activities (simple flow)
o Be aware that this is typically a bottleneck when processing large volumes of small messages
l Network capacity
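The cost of synchronized disk writes can be illustrated with standard file APIs. This is a generic sketch, not B2Bi code: each durable write flushes application buffers and then forces the bytes to the physical disk, which is why shared-disk latency, rather than CPU, tends to dominate when many small messages each trigger dozens of such writes.

```python
import os
import tempfile

def persist_step(path, data):
    # Durable (synchronized) write: flush Python's buffers, then ask the
    # OS to force the bytes to the physical disk before returning.
    with open(path, "ab") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

# A simple flow: each activity performs one or more such writes, so a
# 5-activity flow can easily produce tens of synchronized disk writes.
path = os.path.join(tempfile.mkdtemp(), "persist.dat")
for _ in range(5):  # one durable write per activity in this sketch
    persist_step(path, b"x")
```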
When the file requires integration (transformation, enrichment, validation, and so forth), the
following elements can impact performance:
l Complexity of the mapping (number of elements mapped)
l Number of steps required in the service (each step requires multiple disk write operations, which
consume time and resources)
l Global design of the mapping (optimizations depending on the bottleneck)
o Input file can be split into multiple sub-messages, resulting in better usage of the CPU but
requiring more I/O
B2Bi scaling
To improve throughput, you can install additional nodes on your network and then add them in the
System Management section of the B2Bi user interface.
In B2Bi implementations, there is a 1:1 relationship between trading engines and integration
engines: adding a trading engine on a specific node also adds an integration engine on that node.
The effect on performance of adding B2Bi nodes is not linear. That is, adding a second node does
not double the throughput capacity. There are many factors that have an influence on performance
(CPU / memory / disk / network). The results of adding nodes will depend on the root causes of
your message-processing bottlenecks.
Initial configuration
During B2Bi Server installation the installer user is prompted to provide a value for the number of
CPUs to use for B2Bi Server operations. The installer uses this value to calculate initial settings for:
l Number of Processing Engines for HME1, HME2, and HME3
l Number of Logger Tasks
B2Bi balances the number of Integration Engine processes with the number of processors.
Depending on the number of CPUs set during the installation, an implicit scaling is done, with the
following results:
CPU   #PE/HME1   #PE/HME2   #PE/HME3   #PE/HME4   Logger tasks
2     2          2          4          1          2
4     4          4          8          1          4
6     8          8          16         1          8
Caution: Turning fail safe operation off is generally not supported for cluster installations. It can be
turned off for single node clusters or certain Axway-verified and approved cluster installations with
no node-specific caching toward the shared file system.
Log level
Logging can become a performance bottleneck, especially in situations in which the system needs
to handle a high volume of small messages. To avoid this, you can change the log level to reduce
the amount of log information per transaction.
To do this, you set an environment variable in the file:
<B2Bi_installation_directory>/Integrator/config/environment.dat
Set this level through the following environmental variable:
B2BI_MESSAGE_LOG
The default level is 1. Valid values are 0, 1, or 2 (minimal, normal, full).
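The change can be scripted. The following sketch assumes environment.dat uses simple KEY=value lines, which is an assumption about the file format; verify against your actual file before using anything like this:

```python
import re

VALID_LEVELS = {"0", "1", "2"}  # 0 = minimal, 1 = normal (default), 2 = full

def set_message_log_level(env_text: str, level: str) -> str:
    # Replace an existing B2BI_MESSAGE_LOG line, or append one if absent.
    # Assumes KEY=value lines, a guess at the environment.dat format.
    if level not in VALID_LEVELS:
        raise ValueError("B2BI_MESSAGE_LOG must be 0, 1, or 2")
    pattern = re.compile(r"^B2BI_MESSAGE_LOG=.*$", re.MULTILINE)
    if pattern.search(env_text):
        return pattern.sub("B2BI_MESSAGE_LOG=" + level, env_text)
    return env_text.rstrip("\n") + "\nB2BI_MESSAGE_LOG=" + level + "\n"
```

Read the file, pass its text through this function, and write the result back.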
When the cluster changes (that is, when the node that is processing the message fails), the
integration message processing flow automatically restarts on an alternative node in the cluster,
from the last committed processing step in the execution sequence.
For each B2Bi processing step in a flow, the integration engine:
l Reads input queues
l Executes a processing activity
l Stores the output of the activity processing in internal queues
l Commits the message read
Any processing step that is not fully completed is automatically restarted from the beginning,
starting with polling the uncommitted message again from the internal queue.
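The read-execute-store-commit cycle above can be sketched as follows. This is an illustrative model of the at-least-once semantics described, not the integration engine's actual queue API: the input message is committed (removed) only after the output is stored, so a failure at any earlier point leaves the message in place to be polled again.

```python
from collections import deque

def run_step(in_queue: deque, out_queue: deque, activity, fail=False):
    """One processing step: read, execute, store output, then commit."""
    msg = in_queue[0]        # read the input queue without committing
    result = activity(msg)   # execute the processing activity
    if fail:                 # simulated node failure before the commit
        raise RuntimeError("node failed")
    out_queue.append(result) # store the output in an internal queue
    in_queue.popleft()       # commit the message read
```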
Because interrupted steps are restarted from the beginning, it is important to follow a basic
programming principle, illustrated here with a database insert: perform the commit of all changes at
the VERY END of the processing activity. If the node fails before the end of the activity, the process
restarts the activity, and the uncommitted database changes from the first try are automatically
rolled back. A more limited risk remains if the node fails between the database commit and the
activity commit.
The following example illustrates a database write activity structure, with the placement of the
commit at the end of the activity.
customprogram.java
-----------------------------------------
Start Activity
...
Write to Database
Other things
...
Commit
End Activity
-------------------------------------------
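The pseudocode above can be made concrete with any transactional database. The following sketch uses Python with sqlite3 standing in for the real database and activity framework (table and function names are illustrative, not B2Bi APIs): because the commit is the last action, a failure during the activity rolls back the insert, and the automatic restart does not produce a duplicate row.

```python
import sqlite3

def run_activity(conn: sqlite3.Connection, value: str, fail_before_commit=False):
    """Database write activity with the commit at the very end."""
    cur = conn.cursor()
    cur.execute("INSERT INTO audit (payload) VALUES (?)", (value,))
    # ... other activity work ...
    if fail_before_commit:
        conn.rollback()  # simulated node failure: uncommitted changes discarded
        raise RuntimeError("node failed before commit")
    conn.commit()        # commit at the VERY END of the activity
```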
For the use case described in this topic:
l WebSphere MQ is installed on two servers.
l One queue manager, QM1, has been created.
l One instance of QM1 is active, and is running on one server.
l The other instance of QM1 is running in standby mode on the other server. This server performs
no active processing, but is ready to take over from the active instance of QM1, if the active
instance fails.
It is important to take into account the time required for the standby instance of a multi-instance
queue manager to become active after the active instance becomes unavailable. Preliminary tests
indicate that the required time may vary between 30 seconds and two minutes, depending on
configuration and reason for failure.
Set up WebSphere MQ
To use a WebSphere MQ queue manager as a multi-instance queue manager:
1. Use the crtmqm command to create a single queue manager on one of the servers.
2. Place the queue manager data and logs in a shared network storage directory.
3. On the other server, rather than create the queue manager again, use the addmqinf command
to create a reference to the queue manager data and logs on the network storage.
All the connection information for the standby server must be the same as for the main server,
including the port.
Set up B2Bi
After you install the MQ multi-instance queue manager, you must configure B2Bi to use the multiple
instances. To do this you can create a new pickup or modify an existing pickup.
New pickup
During the MQ pickup or delivery creation, the creation wizard displays the Settings page. On
this page:
3. Enter settings for the main MQSeries server.
4. Select the option Multi-instance queue manager. When you select this option, the
following field is displayed: MQSeries standby server. Enter the standby server address.
5. Click Save.
Existing pickup
Alternatively, you can add the multi-queue instance to an existing IBM MQ pickup. To do this:
1. Open the modification page for an existing MQSeries pickup or delivery.
2. Select the IBM MQSeries settings tab.
3. Select the option Multi-instance queue manager. When you select this option, the
following field is displayed: MQSeries standby server.
4. Enter the standby server address.
5. Click Save changes.
During the period when neither of the WebSphere MQ server instances is available, B2Bi tries to
connect to the primary and to the secondary server on each retry. After a connection is successfully
made to a server instance, the WebSphere MQ cache is updated. In the cache, the last functional
host for the exchange point is specified, so that all future messages are exchanged directly with the
active WebSphere MQ server.
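The retry-and-cache behavior described above can be sketched as follows. This is an illustrative model, not the B2Bi connection code; host names and the try_connect callable are assumptions:

```python
def make_connector(primary: str, standby: str):
    """Connection helper for a multi-instance queue manager (sketch).

    On each retry both hosts are tried; the last functional host is
    cached so later exchanges go directly to the active instance.
    """
    cache = {"last_good": None}

    def connect(try_connect):
        hosts = [primary, standby]
        if cache["last_good"] in hosts:
            # Try the cached active instance first.
            hosts.remove(cache["last_good"])
            hosts.insert(0, cache["last_good"])
        for host in hosts:
            if try_connect(host):
                cache["last_good"] = host
                return host
        return None  # neither instance reachable; caller retries later

    return connect
```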
l Start and stop B2Bi on page 331
l Start B2Bi servers individually on page 333
l Start the Axway Database on page 334
l Start PassPort on page 336
l Start Sentinel on page 335
l Stop B2Bi servers individually on page 336
l Stop Sentinel on page 337
l Stop PassPort on page 338
l Stop the Axway Database on page 338
The tool is located in the /B2Bi directory of the B2Bi installation. It is named:
l UNIX/Linux – B2Bi
l Windows – B2Bi.bat
Note In a cluster you must run the B2Bi stop script on all machines of the cluster to stop the
product.
To open a command console in Windows:
1. In Windows Explorer, navigate to the root directory of your B2Bi installation.
2. Press Shift and right-click the root folder of your installation directory (for example
C:\Axway).
3. From the context menu, select Open command window here.
Syntax
B2Bi [command]
Possible commands:
l start [-T] – Starts B2Bi.
o Option [-T] starts only the B2Bi trading engine. By default (running the command
without the option), both engines are started.
l stop [-t <timeout>] – Stops B2Bi.
o Option [-t] <timeout> enables you to set the maximum time to wait for integration
engine user tasks to stop. The default timeout value is 120 seconds.
l status – Displays whether B2Bi nodes are running or stopped.
l help – Displays command usage information.
Examples
UNIX/Linux
Start B2Bi
./B2Bi start
Stop B2Bi
./B2Bi stop
Stop B2Bi and allow 240 seconds for integration engine tasks to stop
./B2Bi stop -t 240
Display the status of B2Bi nodes
./B2Bi status
Display command usage information
./B2Bi help
Windows
Start B2Bi
B2Bi start
Stop B2Bi
B2Bi stop
Stop B2Bi and allow 240 seconds for integration engine tasks to stop
B2Bi stop -t 240
Display the status of B2Bi nodes
B2Bi status
Display command usage information
B2Bi help
1. Start the Axway Database on page 334. If you are using a third-party database, refer to the
documentation provided with that product.
2. Start PassPort on page 336
3. Start Sentinel on page 335
4. Start B2Bi. See Start and stop B2Bi on page 331.
Start the Axway Database
Prerequisite
Install the Axway Database.
Procedures
Windows
For details about how to install the Axway Database as a Windows service, refer to the Axway
Database Administrator Guide.
To start the database with Windows already running:
l From the desktop, open the Windows Services manager and select:
Axway_Database_start
l From a DOS prompt, run:
net start AxwayDatabase
UNIX
From the directory <db_installation_root_directory> run:
Axway_Database start
Start Sentinel
Prerequisites
1. Install:
l Axway Database (or alternatively, install a third-party database)
l Sentinel
2. Start the database.
Procedures
Windows
If you need to start the Sentinel server with Windows running, navigate to the Sentinel
installation directory and enter the command: startserver.bat
UNIX
Navigate to the Sentinel installation directory and enter the command: ./startserver
The Sentinel server console (or the sentinel.log file if the console is redirected to a file) displays
the following message: "[STARTED] FrontEnd server"
Start PassPort
Prerequisites
1. Install:
l Axway Database (alternatively, you can install a third-party database)
l PassPort
2. Start the database.
Procedures
Windows
Do one of the following:
l Open a Windows command console from the Axway\PassPort\bin directory and enter the
server start command: startPassport.cmd
UNIX
Navigate to the PassPort installation directory and enter the command: startPassport
1. Stop B2Bi. See Start and stop B2Bi on page 331.
2. Stop Sentinel on page 337
3. Stop PassPort on page 338
4. Stop the Axway Database on page 338
Stop Sentinel
Prerequisites
Log off from the Sentinel monitoring interface.
Procedures
Windows
If you need to stop the Sentinel server with Windows running, enter the command:
stopserver.bat
For details of how to configure the Sentinel server to run as a Windows service, refer to the Axway 5
Suite Installation and Prerequisites Guide.
UNIX
Navigate to the Sentinel installation directory and enter the command: ./stopserver
Stop PassPort
Windows
Do one of the following:
l Open a Windows command console from the Axway\PassPort\bin directory and enter the
command: stopPassport.cmd
l From the desktop Start menu, select All Programs > Axway Software > Axway >
PassPort > Stop PassPort
UNIX
Enter the command: stopPassport
Stop the Axway Database
Windows
For details about how to install the Axway Database as a Windows service, refer to the Axway
Database Administrator Guide.
To stop the database with Windows already running:
l From the desktop, open the Windows Services manager and select:
Axway_Database_stop
l From a DOS prompt, run:
net stop AxwayDatabase
UNIX
From the directory <db_installation_root_directory> run:
Axway_Database stop
Introduction
The following topics describe a backup and restore strategy for a clustered B2Bi configuration,
running on a Windows environment.
These descriptions represent general good practices for preserving the ability to recover in case of
disaster.
For details of how to execute different types of B2Bi backups, see Backup and restore on page 344.
Example environment
In a comprehensive B2Bi backup and restore strategy, the following three elements must be
preserved:
Nodes
A typical two-node B2Bi cluster installation on Windows has the following base directories
(paths are examples and may be different in your installation):
Node 1
Installation directory: C:\Axway\B2Bi\..
Node 2
Installation directory: C:\Axway\B2Bi\..
Shared file system accessible from both nodes (%B2BI_SHARED%):
E:\Axway\B2Bi\shared\..
Shared database
The two B2Bi nodes rely on a shared database. This database can be either the Axway
Database or a third-party database. See the B2Bi Installation and Prerequisites Guide for a
list of supported databases.
l Restore the base shared file system in a stable state
l Preserve the typical changes to the system with the incremental partial backups
In case of disaster recovery, however, there are limitations to message recovery capabilities in the
integration engine, as the queues and files containing messages that are currently being processed
are not saved. To handle integration engine message recovery, see Integration engine message
recovery after node failure on page 447.
l After the initial installation
l Before a patch or service pack is applied to the operating system or to B2Bi
For the integration engine, backups of the shared file system have limited value. When the system is
running, the internal storage structures and the payload storage for the integration engine are likely
to be modified during the backup process, so it is likely that the snapshot of the system that is
provided by a backup would result in a corrupt system after restoration.
The trading engine uses the shared file system only to store the payloads while the rest of the
configuration is stored in the database. Typically, a backup of the shared file system results in
(slight) mismatches between the database and file system content. The magnitude of mismatches
depends on the synchronization between the two backups.
To make sure that you can recover your system in a stable state, we recommend the following
backup procedure:
1. Make an initial backup of the full shared file system when the system is not running. Repeat this
step whenever you apply patches and service packs to B2Bi, and whenever you change from
one version of B2Bi to the next.
2. Perform regular backups of the following directories on the shared file system. (You do not
need to shut down the system for these backups.)
Directory and content

\Axway\B2Bi\shared\common\conf
Private keys are stored here (in the keys subdirectory). These keys are referenced from the
database.

\Axway\B2Bi\shared\common\data\backup
Directory where the trading engine stores the payloads. This directory only needs to be backed
up in case you want to preserve "in flight" data and / or have the ability to resubmit and
reprocess messages from the trading engine after a restore. If this directory is backed up for
that purpose, you must align the database backup with a backup of this directory from a time
perspective.

\Axway\B2Bi\shared\common\b2bi\containers\cache
Deployed DML maps if you use DML maps. Instead of backing up this directory (and restoring it
in case of disaster recovery), you can redeploy maps from the Mapping Services interface. If you
use Java custom functions, you must also back up %B2BI_SHARED_LOCAL%\java\dmlfunctions.

\Axway\B2Bi\shared\b2bi\local
Custom components, including Datamapper maps and MBCs (%B2BI_SHARED_LOCAL%\4edi and
%B2BI_SHARED_LOCAL%\java). Custom configuration (attributes / property files) for the
integration engine (%B2BI_SHARED_LOCAL%). Custom document standards
(%B2BI_SHARED_LOCAL%\config\b2bx\repos).

\Axway\B2Bi\shared\b2bi\data\unique
Unique counters used for enveloping and other purposes.
To ensure that you can resubmit or reprocess messages from the trading engine after a restore, you
must schedule the database backups to occur at the same time as file system backups. This is
because the database contains references to files (payloads) on the shared file system.
Desynchronization between the database and the shared file system backup can potentially result in
failed messages (messages that were being processed at the moment of backup) or the inability to
resubmit and reprocess a message after the backup has been restored and the system is restarted.
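A simple consistency check between the two backups can be sketched as follows. This is an illustrative helper, not an Axway tool; it assumes you can extract the list of payload references from the database backup and the list of payload files from the file system backup:

```python
def check_backup_alignment(db_payload_refs, fs_files):
    """Compare payload references from a database backup with the files
    present in a shared file system backup.

    Returns (missing, orphans): references without a matching file are
    messages that cannot be resubmitted after a restore; files without
    a reference are harmless but indicate backup desynchronization.
    """
    refs, files = set(db_payload_refs), set(fs_files)
    return sorted(refs - files), sorted(files - refs)
```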
Additional recommendations
If you have not yet stored or otherwise persisted private keys outside of the application, it is a good
practice to export the private keys that are used for encryption, signing and SSL certificates, for
each community profile. Keep the key backups in a secure location.
The following are typical steps to follow:
1. Restore the B2Bi node(s) from the backup. Make sure the products are not started (stop them if
needed). You may see numerous errors at this stage, if the product is not stopped (as the
restore of the other pieces is not complete yet).
2. Restore the shared file system:
a. Restore the base shared file system (snapshot from a stopped B2Bi installation) – see
backup procedures above.
b. Restore the latest partial backup of the shared file system.
c. Run a “configure” on the restored B2Bi node(s). This will regenerate all necessary runtime
configuration files (logger, queuer, filer, table, ...).
3. Restore the backup of the database.
4. Restart the products.
l System backup – Creates a comprehensive backup of your entire B2Bi configuration.
See Back up a B2Bi system configuration on page 348.
l Community backup – Creates a backup of a specific Community and its related Partners.
See Back up a community and its partners on page 358.
l Partner backup – Creates a backup of a specific Partner and its associated configuration
setting and keys.
See Back up a partner on page 361.
l Selective object backup – Creates a backup of a single selected object or a group of objects
of the same object type.
See Back up selected objects on page 366.
After you create backups, you can use them in several ways:
Restoration:
l Restore all or part of the B2Bi system configuration to the B2Bi instance where you created the
backup.
l Restore a single community (and its partners) to the B2Bi instance where you created the
community backup.
l Restore a single partner to the B2Bi instance where you created the partner backup.
l Restore a single object, or restore a selected group of objects of the same type.
Deployment and promotion:
l Deploy or promote all or part of the B2Bi System configuration to another B2Bi environment.
l Deploy or promote a single community (and its partners) to another B2Bi environment.
l Deploy or promote a single community as a partner to another B2Bi environment.
l Deploy or promote a single partner to another B2Bi environment.
l Share B2Bi partner configurations with your trading partners (if they use Axway applications).
l Deploy or promote a single object, or selected group of objects of the same type.
For information comparing the Community backup and System backup, see Compare community
backup and system backup on page 346.
l Document agreements
l Outbound agreements
Note The contents of the system backup vary based on licensing and the features
enabled/configured in your implementation.
Contacts
Attributes templates
Routing IDs
Password policies
Trading pickup exchanges
Collaboration settings
Message validation settings
Partners
l Messaging IDs
l Routing IDs
l Contacts
l Certificates
l Delivery exchanges
Message Handler configuration
Message Validation settings
Collaboration settings
Application/Integration exchanges
l Global application pickups
l Global application deliveries
l DocTypes/Attributes
PassPort settings
Agreements
Metadata profiles
Services
Components and processing connections
Global system settings:
l Node configuration (trading engine, DMZ nodes/zones)
l Sentinel settings
l IP whitelists
l Global server settings
l Peer Network settings
Embedded servers
l Transport users and policies
User administration
l Roles/users
l Password policy
For a description of the content of the generated system backup ZIP file, see System configuration
backup file content on page 348.
For the procedure for restoring the system backup ZIP file, see Import a backed-up B2Bi system
configuration on page 354.
Procedure
Note You may notice temporarily reduced performance or throughput of your system while the
backup is running.
You can restore the backup of one system into another environment, although certain limitations
apply. (See Import a backed-up B2Bi system configuration on page 354.)
The following tables provide an overview of the various XML files you may need to customize prior
to performing the import in the destination environment.
Note The ZIP file content varies based on your product license and on the features that are
enabled and configured in your implementation.
File (columns: OS-specific references / Hostname references / Path references)
Webtraders directory (one XML file per WebTrader) - - -
Contains WebTrader partner and user configuration.
AgreementAttributeTemplates.xml - - -
Contains the attributes templates used by
agreements.
Categories.xml - - -
Contains the partner categories defined in
the system.
Certificates.xml - - -
Contains the various (public) certificates
defined in the system.
DefaultCollabSettings.xml - - -
Contains the collaboration rules defined in
the system.
EmbeddedServers.xml - Yes -
Contains the servers defined in the system.
ErrorProcessor.xml - - -
Contains the error processing configuration
defined in the system.
GlobalTransportSettings.xml - - Yes; GlobalBackupConfig
Contains the global transport settings (user retry and lock, backup) defined in the
system.
GlobalUiSettings.xml - - -
Contains the global UI settings defined in
the system. Session Management and global
user security settings.
MessageHandler.xml - - -
Contains the Message Handler configuration
defined in the system.
MetadataProfileAttributeTemplates.xml - - -
Contains the attribute templates used for
metadata profile definitions.
PartyAttributeTemplates.xml - - -
Contains the attribute templates used for
partner definitions.
PasswordPolicies.xml - - -
Contains the stored password policies in the
system.
PeerNetworking.xml - - -
Contains the peer network settings.
ProcessingNodes.xml - Yes -
Contains the defined nodes in the system.
PurgeConfig.xml - - -
Contains the purge settings defined in the
system.
ReportProfile.xml - - -
Contains the Reports defined in the system.
SentinelConfig.xml - Yes -
Contains the Sentinel configuration defined in
the system.
TslRetrievalProxyMode.xml - - -
Contains the configured Trust-service Status
List proxy mode.
UiConnectionConfig.xml - Yes -
Contains the configured UI connection
modes (HTTP/HTTPS).
UsersAndGroups.xml - - -
Contains the configured Users and Groups in
the system.
GlobalProcessingConfig.xml - Yes -
Contains the global processing configuration
defined in the system.
Each deployed container from Mapping Services - - -
Each certificate used for the connection with Secure Relay instances - Certificates are host specific -
To successfully import a system backup, the platform to which you are importing the backup must
have a valid trading engine node and a valid integration engine node.
l If you are importing to the same instance, all nodes are reconstructed with the configuration
contained in the zip file.
l If you are importing to a different instance (i.e. a different trading engine node name), none of
the processing node configuration is imported.
In addition, consider the following before performing the import:
l If the system configuration you are importing contains file path definitions defined in various
objects, the import attempts to re-create them. If the user running the trading engine has the
appropriate rights, it succeeds. If not, the import succeeds but the created objects refer to non-
existing paths that result in processing errors if used without modification (or manual creation of
the referenced paths).
l The system backup is a compressed file containing multiple XML files. Normally, you import the
compressed file. It is possible to uncompress the backup file and import the XML files
individually. However, such importing can be done only through the B2Bi profile management
API (PMAPI). You may need the help of Axway Professional Services consultants to use the API.
Peer Network auto-cloning will not occur for the objects you import through the system import
feature.
Workaround: After you execute a system import and restart the trading engine, if you have Peer
Network auto-cloning configured, you can force peer cloning by re-saving the objects that you want
to clone.
Procedure
5. Select whether to update, ignore, or replace existing objects. See Select the system import
mode on page 356 for more information.
6. Click Next to import the file. The Import entire system results page displays details about the
imported configuration.
7. Click Finish to exit.
l Replace existing objects if encountered (Default).
Imports all backed-up objects replacing (unmodified) previous content and imports objects that
do not exist.
l Ignore imported objects if existing objects are encountered.
If an identical existing object is encountered, it remains as is and is not imported, a message is
logged, and the import process moves on to evaluate the next object.
l Update (add objects that don't already exist, replace objects that do exist).
Adds new objects that do not exist. Adds backed-up content to objects that do exist
(incremental updates).
The Update option enables you to incrementally update certain objects. It is only effective for multi-
layer objects with parent-child associations. For flat/single object definitions, the Update mode
operates in the same way as the Replace mode. The following table lists the updates applicable to
multi-layer objects.
Community
Deliveries
l Add one or more additional deliveries to existing communities
l Assign a new default
Messaging IDs
l Add one or more additional messaging IDs
l Assign a new default
Certificates
l Add one or more additional certificates (personal, PGP, trusts, etc.) to
existing communities
l Assign a new default
Contact information
l Add additional (or updated primary or secondary) contact information
to existing communities
Attributes
l Change the set of assigned attributes
Partner
Messaging IDs
l Add one or more additional messaging IDs
l Assign a new default
Deliveries
l Add one or more additional deliveries to existing partners
l Assign a new default
Certificates
l Add one or more additional certificates to existing partners
l Assign a new default
Contact information
l Add additional (or updated primary or secondary) contact information
to existing partners
Attributes
l Change the set of assigned attributes
Inbound agreement
l Add one or more additional document agreements to existing agreements
l Add one or more additional grouping agreements to existing inbound agreements
l Change the set of assigned attributes
Outbound agreement
l Add one or more additional grouping agreements to existing outbound agreements
l Change the set of assigned attributes
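The three import modes can be modeled on simple name-to-object maps. This sketch only mirrors the rules described above (real B2Bi objects are far more complex): new objects are always added; REPLACE overwrites existing objects; IGNORE keeps them; UPDATE merges child entries into multi-layer objects and behaves like REPLACE for flat ones.

```python
def import_objects(existing: dict, backup: dict, mode: str = "REPLACE") -> dict:
    """Sketch of the REPLACE / IGNORE / UPDATE system-import modes."""
    result = dict(existing)
    for name, obj in backup.items():
        if name not in result:
            result[name] = obj                  # new object: always added
        elif mode == "IGNORE":
            continue                            # keep the existing object as is
        elif mode == "UPDATE" and isinstance(obj, dict) and isinstance(result[name], dict):
            merged = dict(result[name])
            merged.update(obj)                  # incremental child update
            result[name] = merged
        else:
            result[name] = obj                  # REPLACE (and flat UPDATE)
    return result
```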
You can use this tool to create backup files and to import these files to B2Bi environments.
Note As part of the restore procedure, the trading engine restarts.
For a complete list of tool options, open a command console and enter the command:
systemBackupRestore ?
Command options are:
l backup – Create an export/backup file.
l restore – Import/restore a backup file.
l user – The username to access the trading engine system.
l pwd – The password to access the trading engine system.
l certpwd – The export password, only used when importing private certificates.
l importmode – (Optional) Options are REPLACE/IGNORE/UPDATE (default: REPLACE). See
Select the system import mode on page 356 for details.
l backupfile – The file name for the exported compressed file. Defaults to ExportSystem.zip
if not specified.
Export example:
Import example:
Log files
The systemBackupRestore tool records information in two log files.
l To view a summary of import details, open this file:
<trading engine install directory>\logs\ImportOutput.log
l To view a complete log, open this file:
<trading engine install directory>\logs\systemBackupRestore.log
Also included are configurations for all deliveries, except community application pickups, which are
common to all communities rather than specific to a single one.
Message handler actions are not backed up, as these are set up in an area of the user interface
shared by all communities. Moreover, community and partner collaboration settings are backed up,
but not default collaboration settings.
You can back up a community and keep the file in reserve for disaster recovery or other reasons. The
file can be imported to an installation of the trading engine with a fresh database, providing instant
configuration of a community and its associated partners.
You cannot export a community from a trading engine running on one type of platform and import
it to a trading engine on another platform. For instance, if you export a community from a trading
engine instance on Windows, you must import it to the same or different instance also on Windows.
If you have configured any pluggable transports, these are saved to the backup file, except for
application pickups. Pluggable transports are customized message consumption and production
exchanges available to users of the Software Development Kit. Restoring pluggable transports upon
importing the backup file requires having a pluggabletransports.xml file in <install
directory>\conf that describes the transports. For example, if the backup saves two application
deliveries based on a custom transport configured in pluggabletransports.xml, the instance of
the trading engine that imports the backup also must have identical configuration for the custom
transport in its pluggabletransports.xml file.
Back up a community
Use this procedure to back up a community and all of its partners to a single compressed file. Also
see: Import a backed-up community.
To create a community backup, see Back up a community and its partners on page 358.
You can import a backed-up community file to a new instance of the trading engine with a fresh
database or to an existing instance.
You must work in the B2Bi user interface to import the community backup. You import the same
compressed file that was exported. Do not copy the file to the profiles\autoImport directory.
The file is not compatible with auto-importing.
If you have associated a certificate with the community, the certificate and public key is exported
with the profile.
If the community has more than one trading pickup for receiving messages from partners, the
default pickup is the preferred transport. The default pickup is the one that displays at the top of the
list on the community’s Trading pickups page. Before you export the community, you can change
the default trading pickup or disable a transport to refine the options available to your partner.
Distribute the profile to your trading partners by a secure means. If you send the profile file to trading
partners as an e-mail attachment, we recommend compressing it with WinZip or similar software to
ensure file integrity.
You can only export a community in a form usable as a partner profile. You cannot export the
community by itself as a usable community profile. You can, however, export an entire community
(the community profile and all associated partner profiles). For more information see Back up a
community and its partners on page 358.
If you give the profile file to a trading partner who uses Interchange or Activator 4.2.x or earlier, the
partner’s system prompts for additional information in the imported partner profile. The partner’s
system prompts for the community’s address, city and state or zip code. This information is not part
of a community profile. You can either provide your trading partner with this information or the
partner can determine what to enter upon importing the profile.
The trading engine generates a partner profile XML using the community characteristics.
Back up a partner
You can back up (or "export") a partner and public key certificate to an XML file. You can then use
the backup file (known as a partner profile) for deployment, promotion, or local restoration, or give it to
your trading partners who use B2Bi, Interchange or Activator. If your partners use other
interoperable trading software, consult with them on the data they require of you to establish
trading relationships.
Similarly, a trading partner who uses B2Bi, Interchange or Activator can export a partner
configuration and public key certificate to a file and give it to you to import as a partner. For trading
partners who use other interoperable software, collect the trading data you need and then manually
create a partner object to represent the trading partner in the user interface.
A backed up partner profile contains information such as the partner name and routing ID, as well as
the configured transports for sending messages to the partner. If there is more than one exchange,
the order of the transports is preserved. The first listed exchange is the default.
Other preferences included in exported partner profile files include:
l All advanced settings that display on transport maintenance pages in the user interface, such as
maximum concurrent connections, retries, connection, response and read time-outs.
l HTTP chunking and attempted restarts for HTTP.
l Transport friendly names.
l The transport’s setting for backing up files.
l The paths and file names of post-processing scripts, but not the scripts themselves.
l Information for inline processing Java classes is included in exported profiles, but not the Java
classes themselves.
l If the profile has an FTP transport with an alternate command set file, that preference is included
in the exported file, but not the command set file itself.
Import a partner
You can restore (or "import") a partner configuration and public key certificate that have been
backed up to a partner profile XML file.
For partner configuration backup procedure, see Back up a partner on page 361.
A backed up partner profile contains information such as the partner name and routing ID, as well as
the configured transports for sending messages to the partner. If there is more than one exchange,
the order of the transports is preserved. The first listed exchange is the default.
Other preferences included in exported partner profile files include:
l All advanced settings that display on transport maintenance pages in the user interface, such as
maximum concurrent connections, retries, connection, response and read time-outs.
l HTTP chunking and attempted restarts for HTTP.
l Transport friendly names.
l The transport’s setting for backing up files.
l The paths and file names of post-processing scripts, but not the scripts themselves.
l Information for inline processing Java classes is included in exported profiles, but not the Java
classes themselves.
l If the profile has an FTP transport with an alternate command set file, that preference is included
in the exported file, but not the command set file itself.
1. Go to the partner summary page and click Certificates in the navigation graphic at the top of
the page.
2. Click the certificate name and then click the Trusts tab.
3. Check the details of third-party certificates imported with profiles to make sure trusted roots are
present.
After importing a partner profile, go to the partner summary page for the imported partner and
check whether any tasks are required to complete the profile configuration.
The following objects can be exported and imported either individually or as a collection of similar
objects:
l Components
l Services
l Inbound agreements
l Document agreements
l Outbound agreements
Required permissions
The user must have a role that includes the permission to manage the object type that he or she
wants to back up.
General procedure
To export an object or a collection of objects of the same type to a backup file:
1. Go to the management page for the object type.
2. From the list of objects, select one or more objects to export.
3. From the action selection box located below the list of objects, select Export.
4. Click Selected.
5. B2Bi creates a compressed file of the object or objects in the default download location that is
defined for your browser.
The export package name has the format <object_type>-export.zip.
You can use the export file to move the selected objects from one environment to another, provided
that they are running the same version of B2Bi.
See Import / restore selected objects on page 368.
To view the backup (export) procedure, see Back up selected objects on page 366.
The following objects can be imported either individually or as a collection of similar objects:
l Components
l Services
l Inbound Agreements
l Document Agreements
l Outbound Agreements
Required permissions
The user must have a role that includes the permission to manage the object type that he or she
wants to import.
General procedure
The behavior of B2Bi when importing selected objects varies depending on the object that is being
handled. The following is a generic procedure that describes basic steps for importing all selected
objects backups. For details of importing specific object types, see Special conditions and
exceptions on page 369.
To import objects:
1. Go to the management page for the object type you want to import.
2. From the Related tasks list at the bottom of the page, select Add a <object_type> to open
the add wizard.
3. In the "Choose the source" page of the wizard, select the option Import one or more [object
type], and click Next.
4. In the Enter import file path page:
l Click Browse, and use the browse tool to locate and select the export file that contains
the objects you want to import. Click Open to select the file for import.
l Choose the import mode - Select the action for B2Bi if the import encounters any
existing objects with the same names as objects in the import file:
o Replace the objects.
o Ignore conflicting objects and do not import them.
5. Click Next.
6. Review the import summary and click Close.
l Click Browse, and use the browse tool to locate and select the export file that contains
the service-export.zip file you want to import. Click Open to select the file for
import.
l Choose the import mode – Select the action for B2Bi if the import encounters any
existing services with the same names as services in the import file:
o Replace the objects.
o Ignore conflicting objects and do not import them.
5. Click Next.
6. Review the import summary and click Close.
B2Bi executes the import and updates the list of services with the newly imported services.
B2Bi writes the results to the log4j file, located in <trading_engine_install_
directory>\logs.
Components
About Components
The role of a B2Bi Component object is to provide specific processing for a B2Bi Service. There are
many types of Components in B2Bi.
A Component has two types of dependent child objects:
l Connection
l Resource
Special considerations
Child dependencies:
When you execute a selective import of a Component, the child dependency of the Connection and
of the Resource must be re-established in the new environment, otherwise the import will result in
an “incomplete” component definition. This means if the Connection and the Resource do not
already exist in the target import system, the Component will be displayed in the UI as incomplete,
and will not be valid for use in a Service.
You must check that all child connections and resources that existed in the originating
configuration (from which you exported) also exist in the target system (to which you import).
Components are used in Services and Services can be used in multiple Agreements. When you
replace or update an existing Component by importing and overwriting an existing Component
definition, there can be an impact on the flow logic. For example, if the original component
produced 1 output and the new component produces 2 outputs, the Service is modified, resulting
in a non-reversible impact on the definition of the Service that uses the component, and on any
objects that use the Service.
During the import process, you select an import mode that controls the impact of these potential
object changes:
l Replace – (Default) Replace every current object definition with the imported definition.
l Ignore – Ignore any existing objects (of the same type) with the same name during the import.
If any Component in the export/import file is modified so that it contains invalid data, the import
fails and the failure is displayed in the import result summary page of the import wizard.
Services
About Services
The role of a B2Bi Service object is to specify the processing sequence for a message exchanged
between two or more exchange participants.
A Service has the following dependent child objects and parameters:
l Service Attributes
l Application Delivery
l Component
Special considerations
Service dependencies:
When you execute a selective import of a Service, the Service Attributes and Component parameters
(defined in the Service object) are maintained. However the child dependency of the Application
Delivery and the Component (with Resource and Connection) must be re-established in the new
environment, otherwise the import will result in an “incomplete” Service definition. This means if the
Application Delivery and Component (with Connection and the Resource) do not already exist in the
target import system, the Service will be displayed in the UI as incomplete, and will not be valid for
use in message handling.
You must check that all child Application Deliveries and Components that existed in the originating
configuration (from which you exported) also exist in the target system (to which you import).
Services can be used in multiple Agreements and in Metadata Profiles. When you replace or update
an existing Service by importing, there can be an impact on the flow logic.
Example 1:
The Service output in the target environment (before import) has a different delivery option
than the updated Service (e.g. deliver to partner vs. deliver to application). This
modification will have a non-reversible impact on the objects using this Service (Agreement /
Metadata Profile).
Example 2:
The Service output in the target environment (before import) uses a Component that
produces 1 output and the updated Service uses a Component with the same name
producing 2 outputs. This will have a non-reversible impact on flow logic.
Example 3:
The Service in the imported file introduces a required attribute value to an existing Attributes
Template. This generates an updated Attributes Template in the target environment. Any
existing Services that use that Template will be incomplete because they lack the required
value.
During the import process, you select an import mode that controls the impact of these potential
object changes:
l Replace – (Default) Replace every current object definition with the imported definition.
l Ignore – Ignore any existing objects (of the same type) with the same name during the import.
The following are conditions in which Service import may fail:
l Wrong import file type.
l Invalid object version.
l Wrong object data in import file.
l A referenced Service is not available in the target system of the import.
l A referenced Application Delivery is not available in the target system of the import.
If any Service in the export/import file contains an attribute that does not exist on the target system,
the attribute is created and a warning is displayed on the summary page of the import wizard.
Inbound Agreements
An inbound Agreement has the following dependent objects and parameters:
l Functional Group Agreements
l Document Agreements
l Outbound Agreements
l Document Services
l Object references:
o Agreement Attributes
o Messaging IDs of participants (partners or communities)
o Community
o Routing ID
Special considerations
Agreement import order:
Inbound Agreements, outbound Agreements and Document Agreements work together in B2Bi
message flows to provide such services as acknowledgements and receipts. To correctly maintain
relationships, export/import these objects in the following order:
1. Outbound Agreements
2. Inbound Agreements
3. Document Agreements
When you execute a selective import of an inbound Agreement, the Agreement Attribute and
Functional Group Agreement relationships are maintained. However the links to Outbound
Agreements (for acknowledgements), Services (for acknowledgements) and Document Agreements
must be re-established in the new environment, otherwise the import will result in an “incomplete”
Agreement definition. This means that if the linked outbound Agreement, Service or Document
Agreement do not already exist in the target import system, the Agreement will be displayed in the
UI as incomplete, and will not be valid for use in message processing.
You must check that all linked outbound Agreements, Services and Document Agreements that
existed in the originating configuration (from which you exported) also exist in the target system (to
which you import).
During the import process, you select an import mode that controls the impact of these potential
object changes:
l Replace – (Default) Replace each agreement with the same name with the imported definition. If
an agreement on the target import system contains child Functional Group Agreements (FGAs),
delete all existing FGAs and add the FGAs contained in the file.
l Ignore – Ignore any existing objects (of the same type) with the same name during the import.
The following are conditions in which inbound Agreement import may fail:
l Wrong import file type.
l Invalid object version.
l Wrong object data in import file.
l Referenced Partner or Community Messaging ID not found. If an inbound Agreement references
a Messaging ID that does not exist on the target import system, the import of the agreement will
fail.
l Referenced Community not found: If the dependency on the referenced Community is not
fulfilled, the import will fail.
l Referenced Routing ID not found: If the dependency on the referenced Routing ID is not
fulfilled, the import will fail.
l Referenced outbound Agreement not found: If the dependency on the referenced Outbound
Agreement is not fulfilled, the import will result in an incomplete inbound Agreement.
l Referenced Document Agreement not found: If the dependency on the referenced Document
Service (acknowledgment-related) is not fulfilled, the import will result in an incomplete
inbound Agreement.
If any Agreement in the export/import file contains an attribute that does not exist on the target
system, the attribute is created and a warning is displayed on the summary page of the import
wizard.
Document Agreements
A Document Agreement has the following dependent objects and parameters:
l Document Agreement Attributes
l Inbound Agreement
l Outbound Agreement
l Service
Special considerations
Agreement import order:
Document Agreements work together with inbound Agreements and outbound Agreements to
provide such services as acknowledgements and receipts. To correctly maintain relationships,
export/import these objects in the following order:
1. Outbound Agreements
2. Inbound Agreements
3. Document Agreements
When you execute a selective import of a Document Agreement, the Document Agreement Attribute
relationships are maintained. However the links to inbound Agreements, outbound Agreements (for
acknowledgements), and Services must be re-established in the new environment. If the linked
inbound Agreement, outbound Agreement or Service does not already exist in the target import
system, the Document Agreement is displayed in the UI as "incomplete", and will not be valid for use
in message processing.
You must check that all linked inbound Agreements, outbound Agreements and Services that
existed in the originating configuration (from which you exported) also exist in the target system (to
which you import).
During the import process, you select an import mode that controls the impact of these potential
object changes:
l Replace – (Default) Replace every current Document Agreement definition with its new
definition.
l Ignore – Ignore any existing objects (of the same type) with the same name during the import.
The following are conditions in which Document Agreement import may fail:
l Wrong import file type.
l Invalid object version.
l Wrong object data in import file.
l Referenced outbound Agreement not found: If the dependency on the referenced Outbound
Agreement is not fulfilled, the import will result in an incomplete Document Agreement.
l Referenced Service not found: If the dependency on the referenced Service is not fulfilled, the
import will result in an incomplete Document Agreement.
If any Document Agreement in the export/import file contains an attribute that does not exist on the
target system, the attribute is created and a warning is displayed on the summary page of the import
wizard.
Outbound Agreements
An outbound Agreement has the following dependent objects and parameters:
l Agreement Attributes
l Functional Group Agreements
Special considerations
Agreement import order:
Outbound Agreements, inbound Agreements and Document Agreements may work together in B2Bi
message flows to provide such services as acknowledgements and receipts. To correctly maintain
relationships, export/import these objects in the following order:
1. Outbound Agreements
2. Inbound Agreements
3. Document Agreements
When you execute a selective import of an outbound Agreement, the Agreement Attribute and
Functional Group Agreement relationships are maintained and included in the import.
During the import process, you select an import mode that controls the impact of these potential
object changes:
l Replace – (Default) Replace each agreement with the same name with the imported definition. If
an agreement on the target import system contains child Functional Group Agreements (FGAs),
delete all existing FGAs and add the FGAs contained in the file.
l Ignore – Ignore any existing objects (of the same type) with the same name during the import.
The following are conditions in which outbound Agreement import may fail:
l Wrong import file type.
l Invalid object version.
l Wrong object data in import file.
l Referenced Partner or Community Messaging ID not found. If an outbound Agreement
references a Messaging ID that does not exist on the target import system, the import of the
Agreement will fail.
l Referenced Community not found: If the dependency on the referenced Community is not
fulfilled, the import will fail.
If any outbound Agreement in the export/import file contains an attribute that does not exist on the
target system, the attribute is created and a warning is displayed on the summary page of the import
wizard.
<trading_engine_install_directory>\tools
The following table shows the types of customizations this tool can back up and restore, and where
they are typically located.
Note This tool is not limited to these directories and customization types.
Customization type Typical location
Property files and XML files <B2Bi_Install_Dir>\Interchange\conf
Inline processor (from one of the nodes) <B2Bi_Install_Dir>\Interchange\site\jars
Message Builder component (MBC) %B2BI_SHARED%\local\4edi\component
JTF %B2BI_SHARED%\local\BOM
TF-XSLT %B2BI_SHARED%\local\BOM
Procedure:
1. Customize the externalConfigBackupRestore.xml file to include the appropriate paths to
the custom components.
2. Run the externalConfigBackupRestore.cmd tool.
The tool provides information on the available commands.
If the trading engine is unable to import a profile file in the autoImport directory, the system
moves the file to \profiles\autoImport\rejected.
Note If you have exported a community and its partners to a backup file (see Back up a
community and its partners on page 358 ), do not use the autoImport directory to import
the file. The trading engine rejects the file.
The system also has some profile staging directories, as shown in the following illustration. The
system does not write to these directories, except during installation when a user is upgrading from
a previous version. The directories are a place to hold profile files before a user moves them to the
autoImport directory. The software installer, for instance, writes profile files to the staged
directories if the user during installation chooses to have profiles exported from an earlier version of
the trading engine.
\backup \community\partner
\staged \community\partner
The trading engine cannot import a password-protected profile through the autoImport directory.
The password protects the certificate private key. Make sure to securely handle a company profile
exported without a password.
Events for profile imports and rejects are written to the control node events log (hostname_cn_
events.log) in the logs directory. The events are:
l Administration.Configuration.Party.CommunityImported
l Administration.Configuration.Party.PartnerImported
l Administration.Configuration.Party.ImportFailedSecret1
Uniqueness in B2Bi
In order to process message flows in a coherent and predictable manner, B2Bi enforces the
uniqueness of processing configurations at two levels:
l Unique object names
l Unique exchange context
As a general rule, each object that you create in the B2Bi interface must have a unique name.
Object creation: If you try to create an object with a name that already exists, B2Bi displays a
message informing you of the naming conflict and instructing you to enter a unique name.
Object importing: If you import an object or set of objects with a name that is already used in a
configuration, the importer executes one of the following actions to ensure uniqueness:
l ignore - ignores the imported object and conserves the existing object
l replace - replaces the existing object with the imported version of the object
l update - modifies the existing object by adding, deleting or revaluing attributes from the
imported object
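These three actions can be modeled with a short sketch. This is an illustrative model of the documented semantics, not B2Bi code: objects are shown as plain dictionaries, and attribute deletion under "update" is not modeled.

```python
# Illustrative model of the importer's three conflict actions on a name
# clash. Not B2Bi code; objects are plain dicts, "update" is shown only
# as add/revalue (deletion of attributes is not modeled here).

def resolve_conflict(existing, imported, mode):
    """Return the object definition that survives a name clash."""
    if mode == "ignore":
        return existing                  # conserve the existing object
    if mode == "replace":
        return imported                  # take the imported definition
    if mode == "update":
        merged = dict(existing)          # start from the existing object
        merged.update(imported)          # add or revalue imported attributes
        return merged
    raise ValueError("unknown import mode: " + mode)
```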
Upgrades: If you upgrade from a previous version, the upgrade logic never deletes previous data.
When the upgrade logic detects or must generate a potentially duplicated object, it automatically
renames the duplicates with unique names. In some cases it may also have to disable the renamed
object to conserve the functional uniqueness of the configuration.
There can be no ambiguity in the handling of a message flow. When B2Bi consumes a specific
document format from a specific sender, it must execute a unique set of processing. When B2Bi
envelops and addresses a message to send to a partner, it must correctly conform to the receiving
partner's handling criteria.
Inbound agreement uniqueness is determined by:
l The message format (X12, EDIFACT, ...)
l Sender messaging ID
l Receiver messaging ID
l For X12 and EDIFACT formats, grouping agreement selection
l For VDA formats, the customer/supplier orientation is also applied to the agreement sender and
receiver IDs
l Unique document agreement (and embedded service) to apply for transformation and
processing
When B2Bi consumes a message, it selects an agreement to use from a list of applicable and enabled
agreements. B2Bi makes this selection by using the following priority:
1. Explicit – A specified messaging ID that matches that of the sending or receiving participant
(partner or community)
2. Implicit – Matching any of the enabled messaging IDs of a sending or receiving partner or
community
3. Semi-anonymous – Matching any defined messaging ID for any sending or any receiving
partner or community
4. Full anonymous – Matching any defined messaging ID for any sending and any receiving
partner or community
Only one inbound agreement at each of these priority levels can be enabled at the same time, so
there is only one enabled inbound document agreement context for B2Bi to select and apply.
In cases where there is more than one matching agreement at a level, during message processing
B2Bi selects the most explicit matching agreement and logs a warning indicating the names of other
similar agreements that exist.
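The four selection levels can be modeled with a short sketch. This is an illustrative simplification, not B2Bi code: "explicit" is reduced to an exact match on a single specified ID, "implicit" to membership in a set of enabled IDs, and "*" stands for any sending or receiving participant.

```python
# Illustrative sketch of the four-level agreement selection priority.
# Not B2Bi code: the data shapes and matching rules are simplified.

EXPLICIT, IMPLICIT, SEMI_ANONYMOUS, FULL_ANONYMOUS = 1, 2, 3, 4
ANY = "*"  # stands for "any sending/receiving participant"

def match_level(agreement, sender_id, receiver_id):
    """Return the level at which an agreement matches the message, or None."""
    s, r = agreement["sender"], agreement["receiver"]

    def side_matches(pattern, participant_id):
        return (pattern == ANY
                or participant_id == pattern
                or (isinstance(pattern, set) and participant_id in pattern))

    if not (side_matches(s, sender_id) and side_matches(r, receiver_id)):
        return None
    if s == ANY and r == ANY:
        return FULL_ANONYMOUS        # any sender and any receiver
    if s == ANY or r == ANY:
        return SEMI_ANONYMOUS        # any sender or any receiver
    if sender_id == s and receiver_id == r:
        return EXPLICIT              # exact match on both specified IDs
    return IMPLICIT                  # matched via a set of enabled IDs

def select_agreement(agreements, sender_id, receiver_id):
    """Pick the most explicit enabled matching agreement (lowest level wins)."""
    candidates = [(match_level(a, sender_id, receiver_id), a)
                  for a in agreements if a.get("enabled", True)]
    candidates = [(lvl, a) for lvl, a in candidates if lvl is not None]
    if not candidates:
        return None
    return min(candidates, key=lambda pair: pair[0])[1]
```

In this model, an explicit agreement always wins over a fully anonymous one for the same sender/receiver pair, mirroring the documented priority order.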
Outbound agreement uniqueness is determined by:
l The message format (X12, EDIFACT, ...)
l Sender messaging ID
l Receiver messaging ID
l For X12 and EDIFACT formats, grouping agreement selection
l For VDA formats, the customer/supplier orientation is also applied to the agreement sender and
receiver IDs
l Enveloping context (which is specified on the outbound agreement/enveloping tab)
There must be only one unique pattern of values to select from for the sender/receiver/enveloping
context.
Outbound agreement selection can be either manual (defined by the user to apply to one or more
inbound processing contexts) or automatic (selected when the user chooses the "Use Best
Outbound Agreement" mode). Note that in both cases the selected or resolved outbound agreement
is in the output format of the processed message.
In cases where B2Bi automatically selects the best outbound agreement to be used for enveloping
(similar to the inbound agreements selection), it makes the selection from a list of applicable and
enabled outbound agreements, by level of priority:
1. Explicit – B2Bi matches the specified “default” (or other) messaging ID with that of a sending
and receiving partner or community
2. Implicit – B2Bi matches the current “default” messaging ID of a sending or receiving partner or
community
3. Semi-anonymous – B2Bi resolves to the current default messaging ID for any sending or any
receiving partner or community
4. Full anonymous – B2Bi resolves to the current default messaging IDs for any sending and any
receiving partner or community
Only one unique outbound agreement at each of these priority levels can be enabled at the same
time, so there is only one enabled outbound context to be selected.
In cases where there is more than one matching agreement at a level, B2Bi displays a warning and
indicates the names of other similar agreements that exist.
When an exchange is configured in "Use Best Outbound Agreement" mode, B2Bi logs a warning
indicating the other similar agreements that could have been selected.
On upgrades, B2Bi analyzes duplicate inbound contexts and keeps one context active while
disabling the duplicates. Any document agreements (generated from B2Bi 1.5 document profiles) that
are found on the duplicate, disabled inbound agreement contexts are moved to the enabled
inbound agreement, so that processing is preserved in the active agreement.
Most B2Bi exchanges with the integration engine are communications-based and depend on transfer
adapters. MFP provides an application-based interaction with the integration engine, enabling direct
access to the inbuilt integrations inside B2Bi. This means that you do not have to define a pickup in
order to have your messages processed by B2Bi.
A native MFP server is built into B2Bi. The default B2Bi MFP server listening port is 8877. This port is
registered in the integration engine /etc/services file.
Communication methods
There are two principal methods for using Message Feed:
l Command line – See MFP command line tool on page 386.
l API – Use the APIs provided and access the Message Feed Server from utility programs. The API
is provided for C, Java and the Axway-proprietary Message Builder Language. See:
o Message Feeder C-language access on page 389
o Message Feed Message Builder access on page 404
o Message Feed Java access on page 417
The general order of processing events is the following:
1. A Message Feed client, located on an application, sends a message to the Message Feed server
in the B2Bi integration engine using the Message Feed API. The message is addressed with a
qualified name of the integration and activity that should process the message.
2. The Message Feed server translates the qualified activity name to an internal activity ID, and
sends the message to the activity using an integration engine queue. If the qualified activity is
not located, a negative result status is sent back to the application Message Feed client through
the Message Feed API.
3. The Message Feed server waits for the message to be processed in the integration engine
activity.
4. When message processing is complete a result status is sent back to the Message Feed client.
The result status can be one of the following:
l OK – The message was processed without being stopped.
l Partial error – Some of the messages created in the activity are stopped.
l Error – All of the messages being created in the activity are stopped.
5. The Message Feed server logs the submitted message and the reply status in the message log.
Both the Message Feed server and the Message Feed client can handle multiple concurrent requests.
Alternatively, any time after the original installation, you can add the Message Feed connector by
running the B2Bi installation program in configure mode and selecting the Message Feed Connector
option for the integration engine.
For details about B2Bi installation procedures, refer to the B2Bi Installation Guide.
An integration comprises one or more "activities". Each activity in an integration defines some step in
how the message is transmitted and processed.
When an activity is started using Message Feed, the number of instances of the activity that are
started is determined by the integration engine configuration, in which the HME runs in multiple
processing engines. For this reason, only static scaling of activities is possible. The default number of
processing engines for the HME used by the default inbound MFP activity (A02 MFP Inbound) is 2.
The maximum number of concurrent client sessions to the MFP server is set by default to 10.
1. Open Message Log: Go to Start > All programs > Axway Software > Axway B2Bi >
B2Bi Tools > B2Bi Tools Client, log on to the tool box and click the Message Log icon.
2. Right-click anywhere in the Message Log Favorite Searches pane, and from the context
menu, select New.
3. In the New Search window, click Message Feed.
4. Click Select.
5. In the General tab, choose filtering criteria for the message selection:
l User
l Message ID
6. Click Save As....
7. Click Close.
You can now select this search in the Message Log main window to display MFP-related messages.
l Command line – See MFP command line tool on page 386.
l Utility programs using Axway Message Feed APIs:
o C language – See Message Feeder C-language access on page 389.
o Message Builder – Message Feed Message Builder access on page 404
o Java – Message Feed Java access on page 417
To submit messages, use the Message Feed client program, mfp, located in the $CORE_ROOT/bin
directory. You can run this program either directly from the command line or from a shell script or
batch file.
UNIX
For UNIX / Linux systems, the formal syntax for the Message Feed client program is:
Windows
For Windows systems, the formal syntax for the Message Feed client program is:
mfp [/?] [/V] [/h host] /s service /u user [/p password-file | /P password]
/i integration /a activity [/m message-id] [/A attributes-file] /M message-
file [/t timeout]
Note: The UNIX syntax is also accepted on Windows.
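For example, a hypothetical Windows invocation based on the syntax above, using the integration and activity names documented for the arguments below; the service name, user, password, message ID, and file path are placeholders only:

```
mfp /s mfp-service /u b2biuser /P secret /i "B2Bi 1" /a "A02 MFP Inbound" /m PO-0001 /M C:\temp\order.edi /t 120
```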
Command arguments
Argument Description
? Show command syntax and exit.
V Show program revision and exit.
h TCP/IP host on which the Message Feed server is running. This is an optional
argument; the default value is localhost.
s TCP/IP service used to connect to Message Feed server. This must be the same
TCP/IP service as that defined for the Message Feed Server task.
u User identification. The user must be defined in the Message Feed Manager.
p File containing the password for the user. This option can be used instead of the
P option.
P Password for the user. This option can be used instead of the p option.
i The integration in which the activity is defined. This is a qualified name containing
any folders and integration name separated by the slash (/) character. The
sequence \\ is used to specify a single \ character in a folder or integration name.
The sequence \/ is used to specify a single / character in the folder or integration
name.
In B2Bi 1.x: Use “B2Bi Express 1”
In B2Bi 2.x: Use “B2Bi 1”
a The activity in which the submitted message will be processed.
In B2Bi: Use “A02 MFP Inbound”
m User defined message identifier which will be logged for the submitted message in
the Message Log. This is an optional argument.
A Path to a file that contains the attributes to set for the message being submitted.
The file should contain one attribute on each line. Each attribute line has an
attribute name followed by a = character, followed by the attribute value. The file
can contain remark lines beginning with a # character. This is an optional
argument.
M Path to the file that contains the data contents of the message being submitted.
t Timeout (in seconds) for the submitted message to be processed. This is an
optional argument and the default timeout is 60 seconds.
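To illustrate the A argument, a hypothetical attributes file could look like this (the attribute names are invented for the example):

```
# attributes for the submitted message
DocumentType=Invoice
Priority=high
```

Each line sets one attribute on the submitted message; lines beginning with a # character are remarks.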
Code Description
OK The submitted message was processed OK.
Partial The submitted message was processed with some errors – one or more of the
messages created from the submitted message were stopped.
Error The submitted message was processed with errors – all messages created from the
submitted message were stopped.
Timeout The submitted message was not processed within the timeout specified by the t
argument.
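Combining the arguments described above, a typical UNIX / Linux invocation might be the following (the host, service, user, and file names are hypothetical):

```
mfp -h localhost -s mfpservice -u mfpuser -p passwd.txt \
    -i "B2Bi 1" -a "A02 MFP Inbound" -m Order4711 \
    -A attrs.txt -M order.dat -t 120
```

This submits the contents of order.dat to the A02 MFP Inbound activity and waits up to 120 seconds for a processing status.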
The API is designed for use in both synchronous and asynchronous modes.
The include file mfp.h is located in the directory $CORE_ROOT/c/include. The library file mfp.o
or mfp.obj is located in the directory $CORE_ROOT/c/lib.
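A client program is then compiled against the include directory and linked with the library file. On UNIX / Linux a build command might look like the following sketch (compiler and program names are illustrative only):

```
cc -I$CORE_ROOT/c/include -o mfpclient mfpclient.c $CORE_ROOT/c/lib/mfp.o
```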
The following topics describe the C-language MFP functions:
mfp_Authenticate on page 390
mfp_Connect on page 392
mfp_Disconnect on page 393
mfp_FreeReply on page 393
mfp_GetLastError on page 394
mfp_GetReply on page 395
mfp_GetSocket on page 399
mfp_Revision on page 399
mfp_StatusToString on page 400
mfp_SubmitMessage on page 401
mfp_Authenticate
Description
Authenticates to the Message Feed server. Each client submitting messages to the Message Feed
server must authenticate itself with a name and a password.
Syntax
int mfp_Authenticate(h_mfp hMFP, int RequestId, char const* pUser, char
const* pPassword);
Parameters
l hMFP – The handle to the Message Feed connection.
l RequestId – Identifies this request. It is returned in the reply received by the mfp_GetReply
function in order to correlate a reply with its request.
l pUser – User name for the Message Feed client.
l pPassword – Password for the specified user.
The function returns 0 if the request was successfully sent to the Message Feed server, or -1 in
case of an error. Use the mfp_GetLastError function to get a textual description of the error.
Example
...
if(mfp_Authenticate(hMFP, 0, "user", "passwd") < 0) {
printf("failed to authenticate to Message Feed server: %s\n",
mfp_GetLastError());
exit(1);
}
pReply = mfp_GetReply(hMFP, 60);
if(pReply == NULL) {
printf("failed to get authentication reply: %s\n",
mfp_GetLastError());
exit(1);
}
if(pReply->Status != MFP_REPLYSTATUS_OK) {
printf("failed to authenticate: %s\n",
mfp_StatusToString(pReply->Status));
exit(1);
}
mfp_FreeReply(pReply);
...
Related commands
mfp_GetReply on page 395
mfp_GetLastError on page 394
mfp_Connect
Description
The mfp_Connect function establishes a connection to the Message Feed server.
The function returns an opaque handle for the connection. This handle must be freed using the
mfp_Disconnect function. If an error occurs, the function returns a NULL value.
Syntax
h_mfp mfp_Connect(char const* pHost, char const* pService, int Timeout);
Parameters
l pHost – Name of the TCP/IP host where the Message Feed server is located.
l pService – Name of the TCP/IP service which the Message Feed server listens to.
l Timeout – Timeout, in seconds, being used for socket access to the Message Feed server.
Use the mfp_GetLastError function to get a textual description of the error.
Example
#include "mfp.h"
...
h_mfp hMFP;
hMFP = mfp_Connect("localhost", "12345", 60);
if(hMFP == NULL) {
printf("failed to connect to Message Feed server: %s\n",
mfp_GetLastError());
exit(1);
}
...
if(mfp_Disconnect(hMFP) != 0) {
printf("failed to disconnect from Message Feed server: %s\n",
mfp_GetLastError());
exit(1);
}
...
Related commands
mfp_Disconnect on page 393
mfp_GetLastError on page 394
mfp_Disconnect
Description
The mfp_Disconnect function disconnects from the Message Feed server.
The function returns 0 if the disconnect was successful, or –1 in case of an error.
Use the mfp_GetLastError function to get a textual description of the error.
Syntax
int mfp_Disconnect(h_mfp hMFP);
Parameters
hMFP – The handle to the Message Feed connection to disconnect.
Example
See mfp_Connect on page 392.
Related commands
mfp_Connect on page 392
mfp_GetLastError on page 394
mfp_FreeReply
Description
Frees a reply received by the mfp_GetReply function.
Syntax
void mfp_FreeReply(t_mfpreply* pReply);
Parameters
pReply – The reply to free.
Example
See mfp_GetReply on page 395.
mfp_GetLastError
Description
Gets a textual description of the last error occurring in the MFP library.
The function returns a string describing the last error.
Syntax
char const* mfp_GetLastError();
Example
See mfp_Connect on page 392.
mfp_GetReply
Description
Gets the reply to a request to the Message Feed server.
The mfp_GetReply function is blocking. To avoid blocking the program while waiting for the
reply, use the mfp_GetSocket function to obtain a socket that can be used in the C select function
to wait for the socket to be ready for read or write access. To avoid memory leaks, use the
mfp_FreeReply function to free the reply structure.
Syntax
t_mfpreply* mfp_GetReply(h_mfp hMFP, int Timeout);
Parameters
l hMFP – Handle to the Message Feed connection.
l Timeout – Timeout, in seconds, to wait for the reply. This timeout is not the same timeout as
the socket access timeout which is defined in the mfp_Connect function.
The function returns a pointer to a structure of type t_mfpreply as defined below. This
structure must be freed using the mfp_FreeReply function. A NULL value is returned in case of
an error.
Use the mfp_GetLastError function to get a textual description of the error:
typedef struct {
int RequestType;
int RequestId;
int Status;
t_mfpreply_submitmsg SubmitMessage;
} t_mfpreply;
The RequestType field in the t_mfpreply structure is one of the following:
o MFP_REQUESTTYPE_AUTHENTICATE – if it is a reply to an mfp_Authenticate
request
o MFP_REQUESTTYPE_SUBMITMESSAGE – if it is a reply to an
mfp_SubmitMessage request.
l The RequestId field in the t_mfpreply structure is the same request id as supplied in the
mfp_Authenticate or mfp_SubmitMessage function.
The Status field in the t_mfpreply structure is the status of the request. You can use the
mfp_StatusToString function to get a textual description of the status. The status can be any of
the following:
o MFP_REPLYSTATUS_OK – The request was processed successfully.
o MFP_REPLYSTATUS_AUTHENTICATE_INVALIDDATA – An mfp_Authenticate
request contained invalid data.
o MFP_REPLYSTATUS_AUTHENTICATE_AUTHENTICATIONFAIL – The user or
password in an mfp_Authenticate function was invalid.
o MFP_REPLYSTATUS_SUBMITMESSAGE_INVALIDDATA – An
mfp_SubmitMessage request contained invalid data.
o MFP_REPLYSTATUS_SUBMITMESSAGE_NOTAUTHENTICATED – Trying to
submit a message without having authenticated to the Message Feed server.
o MFP_REPLYSTATUS_SUBMITMESSAGE_UNKNOWNACTIVITY – The Integration
or Activity parameters in the mfp_SubmitMessage function refer to an activity
that is not known by the Message Feed server.
o MFP_REPLYSTATUS_SUBMITMESSAGE_INVALIDFILE – The FilePath parameter
in the mfp_SubmitMessage function does not refer to a valid file.
o MFP_REPLYSTATUS_SUBMITMESSAGE_UNKNOWNQUEUE – The Integration
and Activity parameters in the mfp_SubmitMessage function refer to an activity
for which a queue does not exist. This error indicates an inconsistent run-time
configuration in the system where the Message Feed server runs.
o MFP_REPLYSTATUS_SUBMITMESSAGE_SERVERERROR – An internal error
occurred in the Message Feed server.
o MFP_REPLYSTATUS_SUBMITMESSAGE_TIMEOUT – No reply was received
from the Message Feed server within the specified timeout.
The SubmitMessage field in the t_mfpreply structure is valid when the reply is related to an
mfp_SubmitMessage request. This field is a structure of type t_mfpreply_submitmsg as
defined below.
typedef struct {
char* pFilePath;
char* pMessageId;
char* pLoggerId;
int Status;
} t_mfpreply_submitmsg;
l The pFilePath parameter in the t_mfpreply_submitmsg structure is only set if a reply
message was requested in the mfp_SubmitMessage function. In this case, this field specifies
the file path to the reply message.
l The pMessageId parameter in the t_mfpreply_submitmsg structure specifies the user-
defined message ID of the message being submitted. This is the same as the MessageId
parameter in the mfp_SubmitMessage function.
l The pLoggerId parameter in the t_mfpreply_submitmsg structure specifies the logger ID
assigned for the submitted message by the Message Feed server. The logger ID uniquely
identifies a message in the integration engine.
l The Status parameter in the t_mfpreply_submitmsg structure specifies the processing status
of the submitted message. The status can be any of the following:
o MFP_SUBMITMESSAGE_STATUS_OK - The submitted message was processed OK.
o MFP_SUBMITMESSAGE_STATUS_PARTIALERROR – The submitted message was processed
with some errors, i.e. one or more of the messages created from the submitted message
were stopped.
o MFP_SUBMITMESSAGE_STATUS_ERROR – The submitted message was processed with
errors, i.e. all messages created from the submitted message were stopped.
o MFP_SUBMITMESSAGE_STATUS_TIMEOUT – The submitted message was not processed
within the timeout specified in the mfp_SubmitMessage function.
Example
#include "mfp.h"
...
h_mfp hMFP;
t_mfpreply* pReply;
hMFP = mfp_Connect("localhost", "12345", 60);
if(hMFP == NULL) {
printf("failed to connect to Message Feed server: %s\n",
mfp_GetLastError());
exit(1);
}
if(mfp_Authenticate(hMFP, 0, "user", "passwd") < 0) {
printf("failed to authenticate to Message Feed server: %s\n",
mfp_GetLastError());
exit(1);
}
pReply = mfp_GetReply(hMFP, 60);
if(pReply == NULL) {
printf("failed to get authentication reply: %s\n",
mfp_GetLastError());
exit(1);
}
if(pReply->Status != MFP_REPLYSTATUS_OK) {
printf("failed to authenticate: %s\n",
mfp_StatusToString(pReply->Status));
exit(1);
}
mfp_FreeReply(pReply);
mfp_Disconnect(hMFP);
...
Related commands
mfp_FreeReply on page 393
mfp_Connect on page 392
mfp_GetSocket on page 399
mfp_Authenticate on page 390
mfp_SubmitMessage on page 401
mfp_StatusToString on page 400
mfp_GetLastError on page 394
mfp_GetSocket
Description
Gets the socket descriptor for a connection handle. The socket descriptor can be used in the
standard C-function select to wait for a socket to be ready for read or write access.
This function is only needed if the program using the Message Feed connection must not block
while waiting for a reply from the Message Feed server.
Note: Use the socket descriptor only for select function calls.
Syntax
int mfp_GetSocket(h_mfp hMFP);
Parameters
hMFP – The handle to the Message Feed connection. The function returns a socket descriptor, or –1
in case of an error.
Use the mfp_GetLastError function to get a textual description of the error.
Related commands
mfp_Connect on page 392
mfp_GetLastError on page 394
mfp_Revision
Description
Returns the revision of the MFP library. The revision is returned as a string with a functional version
number, a major version number and a minor version number, separated by a dot (.) character.
Syntax
char const* mfp_Revision();
Example
#include "mfp.h"
...
printf("MFP revision is %s\n", mfp_Revision());
...
The example prints the revision of the MFP library.
mfp_StatusToString
Description
Gets a textual description of a status code returned by the mfp_GetReply function.
The function returns a description of the status code defined by the Status parameter.
Syntax
char const* mfp_StatusToString(int Status);
Example
See mfp_GetReply on page 395.
mfp_SubmitMessage
Description
Submits a message to the Message Feed server. Before submitting a message, the mfp_
Authenticate function must be called to authenticate to the server. A message is submitted to a
specified activity. The message consists of data and attributes and is optionally identified by a
message ID.
Syntax
int mfp_SubmitMessage (h_mfp hMFP, int RequestId, char const* pIntegration,
char const* pActivity, char const* pFilePath, t_mfpattribute**
ppAttributes, char const* pMessageId, int Acknowledge, int Timeout);
Parameters
l hMFP – The handle to the Message Feed connection.
l RequestId – Identifies this request. It is returned in the reply received by the mfp_GetReply
function in order to correlate a reply with its request.
l pIntegration – Specifies in which integration the activity exists. This is a qualified name
containing any folders and integration name separated by the slash (/) character. The sequence
\\ is used to specify a single \ character in a folder or integration name. The sequence \/ is used
to specify a single / character in the folder or integration name.
l pActivity – Specifies the name of the activity in which the submitted message should be
processed.
l pFilePath – The file path to the data part of the message being submitted. The referenced file
is automatically deleted after it has been submitted.
l ppAttributes – An array of pointers to attribute structures of type t_mfpattribute as
defined below. The array must be ended with a NULL pointer. This parameter can be NULL in
case no attributes are provided.
typedef struct {
char* pName;
char* pValue;
} t_mfpattribute;
l pMessageId – Identifies the message with a user-defined message identifier. This parameter can
be NULL in case no message identifier is provided.
l Acknowledge – Specifies the type of acknowledgement being expected for the submitted
message. It can be any of:
o MFP_SUBMITMESSAGE_ACKNOWLEDGE_COMMIT – Expect that the submitted message is
successfully queued and secured in the Message Feed server.
o MFP_SUBMITMESSAGE_ACKNOWLEDGE_STATUS – Expect a reply status for the submitted
message.
o MFP_SUBMITMESSAGE_ACKNOWLEDGE_STATUSMESSAGE – Expect a reply message for the
submitted message. This functionality is not available in version 1.0 of Message Feed.
l Timeout – Specifies the timeout, in seconds, for processing the submitted message in the
specified activity.
Example
...
t_mfpattribute Attribute;
t_mfpattribute* ppAttributes[2];
t_mfpreply* pReply;
char const* pStatusDescription;
Attribute.pName = "MyAttribute";
Attribute.pValue = "somedata";
ppAttributes[0] = &Attribute;
ppAttributes[1] = NULL;
if(mfp_SubmitMessage(hMFP, 0, "MyFolder/MyIntegration", "MyActivity",
"msg.dat", ppAttributes, "Id123", MFP_SUBMITMESSAGE_ACKNOWLEDGE_STATUS,
600) != 0) {
printf("failed to submit message: %s\n", mfp_GetLastError());
exit(1);
}
pReply = mfp_GetReply(hMFP, 60);
if(pReply == NULL) {
printf("failed to get submit message reply: %s\n",
mfp_GetLastError());
exit(1);
}
if(pReply->Status != MFP_REPLYSTATUS_OK) {
printf("failed to submit message: %s\n",
mfp_StatusToString(pReply->Status));
exit(1);
}
switch(pReply->SubmitMessage.Status) {
case MFP_SUBMITMESSAGE_STATUS_OK:
pStatusDescription = "Ok";
break;
case MFP_SUBMITMESSAGE_STATUS_PARTIALERROR:
pStatusDescription = "Partial error";
break;
case MFP_SUBMITMESSAGE_STATUS_ERROR:
pStatusDescription = "Error";
break;
case MFP_SUBMITMESSAGE_STATUS_TIMEOUT:
pStatusDescription = "Timeout";
break;
default:
pStatusDescription = "Unknown";
break;
}
printf("LoggerId\t%s\n", pReply->SubmitMessage.pLoggerId);
printf("Status\t%s\n", pStatusDescription);
mfp_FreeReply(pReply);
...
Related commands
mfp_GetReply on page 395
mfp_GetLastError on page 394
This chapter describes how to access the Message Feed server using the Message Builder language.
The access to the Message Feed server is socket based, and the API is designed to be used in both
synchronous and asynchronous modes. The API is implemented using a Message Builder extension
module to access the Message Feed server.
l The include file mfp.s4h is located in the directory $CORE_ROOT/4edi/include.
l The library file mfp.s4m is located in the directory $CORE_ROOT/4edi/lib.
l The extender shared library, mfp.so on UNIX / Linux systems or mfp.dll on Windows systems is
located in the directory $CORE_ROOT/4edi/load.
The following topics describe the Message Builder MFP functions:
MFP.Authenticate on page 405
MFP.Connect on page 407
MFP.Disconnect on page 408
MFP.GetReply on page 409
MFP.GetSocket on page 412
MFP.Revision on page 413
MFP.StatusToString on page 413
MFP.SubmitMessage on page 414
MFP.Authenticate
Description
Authenticates to the Message Feed server. Each client submitting messages to the Message Feed
server must authenticate itself with a name and a password.
Syntax
MFP.Authenticate hMFP [RequestId request-id] User user Password password;
Parameters
l hMFP – The handle to the Message Feed connection. This is a record of type MFP.Handle.
l request-id – Identifies this request. It is returned in the reply received by the MFP.GetReply
statement in order to correlate a reply with its request. This is an optional expression of type
integer.
l user – The user name for the Message Feed client. This is an expression of type string.
l password – The password for the specified user. This is an expression of type string.
Error handling
When an error occurs, the function throws an exception of type MFP.$Exception and sets the
$Error reserved variable to one of the following values:
l MFP.$Error_ErrorAuthenticateInvalidid – The hMFP parameter refers to an unknown
connection handle.
l MFP.$Error_ErrorAuthenticateFailed – Failed to send the authenticate request to the
Message Feed server.
Example
...
MFP.Authenticate $hMFP User "user" Password "passwd";
MFP.GetReply $hMFP Reply $Reply;
IF $Reply.$Status <> MFP.$ReplyStatus_Ok {
LOG FORMAT("failed to authenticate: %s",
MFP.StatusToString($Reply.$Status));
EXIT 1;
}
...
Related commands
MFP.GetReply on page 409
MFP.Connect
Description
Establishes a connection to the Message Feed server.
The function returns a handle of type record MFP.Handle for the connection. This handle must be
freed using the MFP.Disconnect statement.
Syntax
MFP.Connect(host, service, timeout)
Parameters
l host – The name of the TCP/IP host on which the Message Feed server is located.
l service – The name of the TCP/IP service to which the Message Feed server listens.
l timeout – The timeout, in seconds, being used for socket access to the Message Feed server.
Error handling
When an error occurs, the function throws an exception of type MFP.$Exception and sets the
$Error reserved variable to the following value:
MFP.$Error_ErrorConnectConnectfail – The connection to the Message Feed server failed.
Example
INCLUDE "mfp.s4h";
...
DECLARE $hMFP RECORD MFP.Handle;
$hMFP = MFP.Connect("localhost", "12345", 60);
...
MFP.Disconnect $hMFP;
...
Related commands
MFP.Disconnect on page 408
MFP.Disconnect
Description
The MFP.Disconnect statement disconnects from the Message Feed Server.
Syntax
MFP.Disconnect hMFP;
Parameter
The hMFP parameter is the handle to the Message Feed connection to disconnect. This is a record of
type MFP.Handle previously returned by the MFP.Connect function.
Error handling
When an error occurs, the function throws an exception of type MFP.$Exception and sets the
$Error reserved variable to the following value:
MFP.$Error_ErrorDisconnectInvalidid – The hMFP parameter refers to an unknown
connection handle.
Example
See MFP.Connect on page 407.
Related commands
MFP.Connect on page 407
MFP.GetReply
Description
Gets the reply to a request to the Message Feed server.
Note: The MFP.GetReply statement is blocking. To avoid blocking the program while waiting for
the reply, use the MFP.GetSocket function to obtain a socket that can be used with the Message
Builder REGISTER module to wait for the socket to be ready for read or write access.
Syntax
MFP.GetReply hMFP [Timeout timeout] Reply reply;
Parameters
l hMFP – The handle to the Message Feed connection.
l Timeout – The timeout, in seconds, to wait for the reply. This timeout is not the same timeout
as the socket access timeout defined in the MFP.Connect function. This is an optional
expression of type integer. The default value is 60 seconds.
l reply – An output parameter of type record MFP.Reply:
Where:
$RequestType is:
o MFP.$RequestType_Authenticate – if it is a reply to an MFP.Authenticate request
o MFP.$RequestType_Submitmessage – if it is a reply to an MFP.SubmitMessage request.
$RequestId is the same request ID as supplied in the MFP.Authenticate or
MFP.SubmitMessage statements.
$Status is the status of the request. Use the MFP.StatusToString function to get a textual
description of the status. The status can be one of the following:
o MFP.$ReplyStatus_Ok – The request was processed successfully.
o MFP.$ReplyStatus_AuthenticateInvaliddata – A MFP.Authenticate request
contained invalid data.
o MFP.$ReplyStatus_AuthenticateAuthenticationfail – The user or password in an
MFP.Authenticate statement was invalid.
o MFP.$ReplyStatus_SubmitmessageInvaliddata – A MFP.SubmitMessage request
contained invalid data.
o MFP.$ReplyStatus_SubmitmessageNotauthenticated – Trying to submit a message
without having authenticated to the Message Feed server.
o MFP.$ReplyStatus_SubmitmessageUnknownactivity – The integration or activity
parameters in the MFP.SubmitMessage statement refer to an activity that is not known by the
Message Feed server.
o MFP.$ReplyStatus_SubmitmessageInvalidfile – The file-path parameter in the
MFP.SubmitMessage statement does not refer to a valid file.
o MFP.$ReplyStatus_SubmitmessageUnknownqueue – The integration and activity
parameters in the MFP.SubmitMessage statement refer to an activity for which a queue does
not exist. This error indicates an inconsistent run-time configuration in the system where the
Message Feed server runs.
o MFP.$ReplyStatus_SubmitmessageServererror – An internal error occurred in the
Message Feed server.
o MFP.$ReplyStatus_SubmitmessageTimeout – No reply was received from the Message
Feed server within the specified timeout.
$SubmitMessage is valid when the reply is related to an MFP.SubmitMessage request. This
field has a structure of type MFP.ReplySubmitMessage as defined here:
Where:
$FilePath is only set when a reply message was requested in the MFP.SubmitMessage
statement. In this case, this field specifies the file path to the reply message.
$MessageId specifies the user-defined message ID of the message being submitted. This is the
same as the message-id parameter in the MFP.SubmitMessage statement.
$LoggerId specifies the logger ID assigned for the submitted message by the Message Feed
server. The logger ID uniquely identifies a message in the integration engine.
$Status specifies the processing status of the submitted message. The status can be:
o MFP.$SubmitMessage_Status_Ok – The submitted message was processed OK.
o MFP.$SubmitMessage_Status_Partialerror – The submitted message was processed
with some errors, i.e. one or more of the messages created from the submitted message
were stopped.
o MFP.$SubmitMessage_Status_Error – The submitted message was processed with
errors: all messages created from the submitted message were stopped.
o MFP.$SubmitMessage_Status_Timeout – The submitted message was not processed
within the timeout specified in the MFP.SubmitMessage statement.
Error handling
When an error occurs, the function throws an exception of type MFP.$Exception and sets the
$Error reserved variable to one of the values:
l MFP.$Error_ErrorGetreplyInvalidid – The hMFP parameter refers to an unknown
connection handle.
l MFP.$Error_ErrorGetreplyFailed – Failed to get the reply from the Message Feed server.
Example
INCLUDE "mfp.s4h";
...
DECLARE $hMFP RECORD MFP.Handle;
DECLARE $Reply RECORD MFP.Reply;
$hMFP = MFP.Connect("localhost", "12345", 60);
MFP.Authenticate $hMFP User "user" Password "passwd";
MFP.GetReply $hMFP Reply $Reply;
IF $Reply.$Status <> MFP.$ReplyStatus_Ok {
LOG FORMAT("failed to authenticate: %s",
MFP.StatusToString($Reply.$Status));
EXIT 1;
}
MFP.Disconnect $hMFP;
...
Related commands
MFP.Connect on page 407
MFP.GetSocket on page 412
MFP.Authenticate on page 405
MFP.SubmitMessage on page 414
MFP.StatusToString on page 413
MFP.GetSocket
Description
Gets the socket descriptor for a connection handle. The socket descriptor can be used with the
Message Builder REGISTER module to wait for a socket to be ready for read or write access. This
function is only needed if the program using the Message Feed connection must not block while
waiting for a reply from the Message Feed server.
The function returns a socket descriptor.
Note: Do not attempt to use the socket descriptor for any other purpose than waiting for read or
write access using the REGISTER or equivalent module.
Syntax
MFP.GetSocket(hMFP);
Parameter
hMFP is the handle to the Message Feed connection. It is a record of type MFP.Handle.
Error handling
When an error occurs, the function throws an exception of type MFP.$Exception and sets the
$Error reserved variable to one of the values:
l MFP.$Error_ErrorGetSocketInvalidid – The hMFP parameter refers to an unknown
connection handle.
l MFP.$Error_ErrorGetSocketFailed – Failed to get a valid socket descriptor for the
specified handle.
Related commands
MFP.Connect on page 407
MFP.Revision
Description
Returns the revision of the MFP library. The revision is returned as a string with a functional version
number, a major version number and a minor version number, separated by a dot (.) character.
Syntax
MFP.Revision()
Example
INCLUDE "mfp.s4h";
...
PRINT FORMAT("MFP revision is %s\n", MFP.Revision());
...
The example prints the revision of the MFP library.
MFP.StatusToString
Description
Gets a textual description of a status code returned by the MFP.GetReply function.
The function returns a description of the status code defined by the status parameter.
Syntax
MFP.StatusToString(status);
Example
See MFP.GetReply on page 409.
MFP.SubmitMessage
Description
Submits a message to the Message Feed server. Before submitting a message, the
MFP.Authenticate statement must be called to authenticate to the server.
A message is being submitted to a specified activity. The message consists of data and attributes and
is optionally identified by a message ID.
Syntax
MFP.SubmitMessage hMFP [RequestId request-id] Integration
integration Activity activity FilePath file-path [Attributes
attributes] [MessageId message-id] [Acknowledge acknowledge]
[Timeout timeout];
Parameters
l hMFP – The handle to the Message Feed connection. It is a record of type MFP.Handle.
l request-id – Identifies this request. It is returned in the reply received by the MFP.GetReply
statement in order to correlate a reply with its request. It is an optional expression of type
integer.
l integration – Specifies in which integration the activity exists. This is a qualified name
containing any folders and integration name separated by the slash (/) character. The sequence
\\ is used to specify a single \ character in a folder or integration name. The sequence \/ is used
to specify a single / character in the folder or integration name. This parameter is an expression
of type string.
l activity – Specifies the name of the activity in which the submitted message should be
processed. This parameter is an expression of type string.
l file-path – The file path to the data part of the message being submitted. The referenced file
is automatically deleted after it has been submitted. This parameter is an expression of
type string.
l attributes – An optional array of attribute records of type MFP.Attribute, defined as:
l message-id – Identifies the message with a user-defined message identifier. This is an optional
expression of type string.
l acknowledge – Specifies the type of acknowledgement expected for the submitted message. It
can be:
o MFP.$SubmitMessage_Acknowledge_Commit – Expect that the submitted message is
successfully queued and secured in the Message Feed Server.
o MFP.$SubmitMessage_Acknowledge_Status – Expect a reply status for the submitted
message. This is the default value.
o MFP.$SubmitMessage_Acknowledge_Statusmessage – Expect a reply message for
the submitted message.
l timeout – Specifies the timeout, in seconds, for processing the submitted message in the
specified activity. This is an optional expression of type integer. The default value is 60
seconds.
Error handling
When an error occurs, the function throws an exception of type MFP.$Exception and sets the
$Error reserved variable to one of the values:
l MFP.$Error_ErrorSubmitMessageInvalidid – The hMFP parameter refers to an unknown
connection handle.
l MFP.$Error_ErrorSubmitMessageFailed – Failed to send the submit message request to
the Message Feed server.
Example
...
DECLARE $Attributes[] RECORD MFP.Attribute;
DECLARE $Reply RECORD MFP.Reply;
DECLARE $StatusDescription STRING;
$Attributes[1].$Name = "MyAttribute";
$Attributes[1].$Value = "somedata";
MFP.SubmitMessage $hMFP
Integration "MyFolder/MyIntegration" Activity "MyActivity"
FilePath "msg.dat" Attributes $Attributes
MessageId "Id123";
MFP.GetReply $hMFP Reply $Reply;
IF $Reply.$Status <> MFP.$ReplyStatus_Ok {
LOG FORMAT("failed to submit message: %s",
MFP.StatusToString($Reply.$Status));
EXIT 1;
}
CASE $Reply.$SubmitMessage.$Status
WHEN MFP.$SubmitMessage_Status_Ok {
$StatusDescription = "Ok";
}
WHEN MFP.$SubmitMessage_Status_Partialerror {
$StatusDescription = "Partial error";
}
WHEN MFP.$SubmitMessage_Status_Error {
$StatusDescription = "Error";
}
WHEN MFP.$SubmitMessage_Status_Timeout {
$StatusDescription = "Timeout";
}
WHEN OTHERS {
$StatusDescription = "Unknown";
}
PRINT FORMAT("LoggerId\t%s\n", $Reply.$SubmitMessage.$LoggerId);
PRINT FORMAT("Status\t%s\n", $StatusDescription);
...
Related commands
MFP.GetReply on page 409
$CORE_ROOT/java/lib/core/mfp.
The native class shared object, libmfp_jni.so or libmfp_jni.sl on UNIX / Linux systems, or
mfp_jni.dll on Windows systems is located in the directory $CORE_ROOT/java/load.
The following topics describe the Message Feed Java methods and classes:
Mfp.revision on page 418
Mfp.constructor on page 419
Mfp.authenticate on page 421
Mfp.submitMessage on page 423
Mfp.close on page 425
MfpAttribute.constructor on page 426
MfpAttribute.setName on page 427
MfpAttribute.setValue on page 428
MfpAttribute.getName on page 429
MfpAttribute.getValue on page 429
MfpAttribute.toString on page 431
MfpSubmitMessageReply.getStatus on page 432
MfpSubmitMessageReply.getLoggerId on page 433
MfpException.toString on page 433
Mfp.revision
Description
Returns the revision of the MFP package. The revision is returned as a string with a functional
version number, a major version number and a minor version number, separated by a dot (.)
character.
Class
core.mfp.Mfp
Method
revision
Syntax
public static String revision()
Example
import java.io.*;
import core.mfp.*;
...
System.out.println("MFP revision is " + Mfp.revision());
...
The example prints the revision of the MFP package.
Mfp.constructor
Description
The constructor establishes a connection to the Message Feed server.
To avoid memory leaks, the Mfp object must be disconnected from the Message Feed server, using
the method close, before being finalized.
Class
core.mfp.Mfp
Method
constructor
Syntax
public Mfp(String host, String service, int timeout) throws Exception,
MfpException
Parameters
l host – Name of the TCP/IP host on which the Message Feed server is located.
l service – Name of the TCP/IP service on which the Message Feed server listens.
l timeout – Timeout, in seconds, for socket access to the Message Feed server.
Error handling
Mfp.constructor throws an Exception when system resources are exhausted, or an MfpException
when the connection to the Message Feed server fails.
Use the MfpException.toString method to get a textual description of the exception.
Example
import core.mfp.*;
...
Related topics
Mfp.close on page 425
MfpException.toString on page 433
Mfp.authenticate
Description
Authenticates the client to the Message Feed server. Each client submitting messages to the Message
Feed server must authenticate with a name and a password.
Class
core.mfp.Mfp
Method
authenticate
Syntax
public void authenticate(String user, String password) throws MfpException
Parameters
l user – The user name of the Message Feed client.
l password – Password for the specified user.
Error handling
The authenticate method throws an MfpException when authentication fails.
Use the MfpException.toString method to get a textual description of the exception.
Example
import core.mfp.*;
...
Mfp mfp = new Mfp("localhost", "12345", 60);
mfp.authenticate("user", "passwd");
...
mfp.close();
Related topics
MfpException.toString on page 433
Mfp.submitMessage
Description
Submits a message to the Message Feed server. Before submitting a message, the authenticate
method must be called to authenticate to the server.
A message can be submitted either using a file path or an input stream, hence the two alternative
method syntaxes.
A message is submitted to a specified integration engine activity. The message consists of data and
attributes and is optionally identified by a message ID.
The method returns the reply as an object of class MfpSubmitMessageReply.
Class
core.mfp.Mfp
Method
submitMessage
Syntax
public MfpSubmitMessageReply
public MfpSubmitMessageReply
Parameters
l integration – Specifies in which integration the activity exists. This is a qualified name
containing any folders and integration name separated by the slash (/) character. The sequence
\\ is used to specify a single \ character in a folder or integration name. The sequence \/ is used
to specify a single / character in the folder or integration name.
l activity – Name of the activity in which the submitted message should be processed.
l filePath – The file path to the data part of the message being submitted. The file being
referred to is automatically deleted after it has been submitted.
l dataStream – An input stream used to stream the data to the Message Feed server rather than
using a regular file. The data from the input stream is read from the current position until the end
of the input stream. The input stream must be opened and closed by the method invoking the
submitMessage method.
l attributes – An array of MfpAttribute objects containing the attributes for the
submitted message.
l messageId – Identifies the message with a user-defined message identifier.
l acknowledge – Specifies the type of acknowledgement expected for the submitted
message. It can be one of the following:
o Mfp.ACKNOWLEDGE_COMMIT – Expect that the submitted message is successfully queued
and secured in the Message Feed server.
o Mfp.ACKNOWLEDGE_STATUS – Expect a reply status for the submitted message.
o Mfp.ACKNOWLEDGE_STATUSMESSAGE – Expect a reply message for the submitted message.
l timeout – Specifies the timeout, in seconds, for processing the submitted message in the
specified activity.
Error handling
The submitMessage method throws an MfpException when the submission fails.
Use the MfpException.toString method to get a textual description of the exception.
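The escaping rules documented above for the integration parameter can be sketched as a small helper. This is purely illustrative; the class IntegrationPath and its methods are not part of the MFP API:

```java
public class IntegrationPath {
    // Escape a single folder or integration name per the documented rules:
    // a literal backslash becomes \\ and a literal slash becomes \/.
    public static String escapeSegment(String name) {
        return name.replace("\\", "\\\\").replace("/", "\\/");
    }

    // Join folder names and the integration name into a qualified name,
    // using the slash (/) character as the separator.
    public static String build(String... segments) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < segments.length; i++) {
            if (i > 0) sb.append('/');
            sb.append(escapeSegment(segments[i]));
        }
        return sb.toString();
    }
}
```

For example, a folder literally named "A/B" containing integration "Orders" would be passed as "A\/B/Orders".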
Mfp.close
Description
The close method disconnects from the Message Feed server. It must be called before the
Mfp object is finalized, to avoid resource leaks.
Class
core.mfp.Mfp
Method
close
Syntax
public void close() throws MfpException
Parameters
None.
Error handling
The close method throws an MfpException when the disconnection fails.
Use the MfpException.toString method to get a textual description of the exception.
Example
See Mfp.constructor on page 419.
Related topics
Mfp.constructor on page 419
MfpException.toString on page 433
MfpAttribute.constructor
Description
The MfpAttribute constructor creates an attribute instance to be used when submitting a message.
Class
core.mfp.MfpAttribute
Method
constructor
Syntax
public MfpAttribute(String name, String value)
Parameters
l name – Name of the attribute.
l value – Value of the attribute.
Example
See Mfp.submitMessage on page 423.
MfpAttribute.setName
Description
Sets the name for an attribute.
Class
core.mfp.MfpAttribute
Method
setName
Syntax
public void setName(String name)
Parameters
name – Name of the attribute.
Related topics
Mfp.constructor on page 419
Mfp.submitMessage on page 423
MfpAttribute.setValue
Description
Sets the value of an attribute.
Class
core.mfp.MfpAttribute
Method
setValue
Syntax
public void setValue(String value)
Parameters
value – Value of the attribute.
Related topics
Mfp.constructor on page 419
Mfp.submitMessage on page 423
MfpAttribute.getName
Description
Gets the name of an attribute.
Class
core.mfp.MfpAttribute
Method
getName
Syntax
public String getName()
Related topics
Mfp.constructor on page 419
MfpAttribute.setName on page 427
MfpAttribute.getValue
Description
Gets the value of an attribute.
Class
core.mfp.MfpAttribute
Method
getValue
Syntax
public String getValue()
Related topics
Mfp.constructor on page 419
MfpAttribute.setValue on page 428
MfpAttribute.toString
Description
Returns a text representation of the attribute. The returned string contains the name and value of
the attribute, separated by an equal (=) sign.
Class
core.mfp.MfpAttribute
Method
toString
Syntax
public String toString()
Related topics
Mfp.constructor on page 419
MfpAttribute.setName on page 427
MfpAttribute.setValue on page 428
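The documented name=value formatting can be sketched with a stand-in helper (this is not the real core.mfp.MfpAttribute class, only an illustration of the documented output):

```java
public class AttributeText {
    // Format an attribute the way MfpAttribute.toString is documented to:
    // the name and value of the attribute, separated by an equal (=) sign.
    public static String toText(String name, String value) {
        return name + "=" + value;
    }
}
```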
MfpSubmitMessageReply.getStatus
Description
Use the getStatus method to get the status of a message submitted to the Message Feed server.
The method returns a value that specifies the processing status of the submitted message. The
status can be:
l MfpSubmitMessageReply.STATUS_OK – The submitted message was processed OK.
l MfpSubmitMessageReply.STATUS_PARTIALERROR – The submitted message was processed
with some errors. One or several of the messages created from the submitted message were
stopped.
l MfpSubmitMessageReply.STATUS_ERROR – The submitted message was processed with
errors. All messages created from the submitted message were stopped.
l MfpSubmitMessageReply.STATUS_TIMEOUT – The submitted message was not processed
within the timeout specified in the Mfp.submitMessage method.
Class
core.mfp.MfpSubmitMessageReply
Method
getStatus
Syntax
public int getStatus()
Example
See Mfp.submitMessage on page 423.
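The status values can be mapped to the same descriptions used by the WHEN block at the start of this section. The numeric values below are stand-ins, because the actual values of the MfpSubmitMessageReply constants are not documented here:

```java
public class SubmitStatus {
    // Stand-in values; the real constants come from
    // core.mfp.MfpSubmitMessageReply and their numeric
    // values are not documented in this guide.
    public static final int STATUS_OK = 0;
    public static final int STATUS_PARTIALERROR = 1;
    public static final int STATUS_ERROR = 2;
    public static final int STATUS_TIMEOUT = 3;

    // Map a status code to a human-readable description, mirroring
    // the WHEN block shown earlier in this section.
    public static String describe(int status) {
        switch (status) {
            case STATUS_OK:           return "Ok";
            case STATUS_PARTIALERROR: return "Partial error";
            case STATUS_ERROR:        return "Error";
            case STATUS_TIMEOUT:      return "Timeout";
            default:                  return "Unknown";
        }
    }
}
```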
MfpSubmitMessageReply.getLoggerId
Description
Use the getLoggerId method to get the logger ID of a message submitted to the Message Feed
server. The logger ID uniquely identifies a message in the B2Bi integration engine.
Class
core.mfp.MfpSubmitMessageReply
Method
getLoggerId
Syntax
public String getLoggerId()
Example
See Mfp.submitMessage on page 423.
MfpException.toString
Description
Returns a text description of the exception.
Class
core.mfp.MfpException extends Exception
Method
toString
Syntax
public String toString()
l Check database connections on page 435
l Deploy and promote B2Bi configurations on page 439
l Create audit files of UI object changes on page 442
l Enable non-standard version/release/ID codes for X12 message exchanges on page 446
l Integration engine message recovery after node failure on page 447
l Naming conventions on page 454
l Manage B2Bi purges on page 458
l Manage licenses and keys on page 463
l Manage the system throttle on page 466
l Modify HTTP upload file size limit on page 469
l Set the log file archiving schedule on page 470
l Set up Message Tracker on page 472
Use dbConfig to:
l Test database connections (Windows only)
l Change from one database to another
l Change or correct a database connection parameter
Warning: The database must be configured to allow at least 800 concurrent connections.
B2Bi database
The B2Bi database is used by the trading engine to read and write data. B2Bi can use any of the
database types the trading engine supports.
If you deploy B2Bi in a cluster of multiple computers, all instances of B2Bi use the same database.
The B2Bi database driver and connection information is stored in a file named
datastoreconfig.xml. By default, this file is located in the directory <Interchange_install_
directory>\conf\db.
To view or change the information in this file, use the database configuration tool, located in
<Interchange_install_directory>\bin. This tool is named dbConfig.cmd in Windows
installations, and dbConfig in UNIX installations.
Note: The B2Bi database password is encrypted in datastoreconfig.xml, so you must use the
tool to change the password. You cannot manually edit datastoreconfig.xml to change the
password.
PassPort database
If you installed PassPort for access management, you can use the dbConfig tool to check PassPort
database driver and connection information.
Database driver and connection information for PassPort is in a file named datastoreconfig.xml
in <Passport_install_directory>\conf.
To view or change the information in this file, use the database configuration tool, located in
<PassPort_install_directory>\bin. This tool is named dbConfig.cmd in Windows
installations, and dbConfig in UNIX installations.
Note: The PassPort database password is encrypted in datastoreconfig.xml, so you must use
the tool to change the password. You cannot manually edit datastoreconfig.xml to change the
password.
Run dbConfig
Windows
To run dbConfig on Windows machines:
4. Save.
Unix / Linux
B2Bi database
1. Stop the B2Bi server.
2. Using the cd command, go to <Interchange_install_directory>/bin.
3. Execute the command:
./dbConfig [-? | -help] [-d (sqlsrvr|ora|db2|derby|mysql|synchrony)] [-
p port] [-h host] [-n dbname] [-u username] [-pd password] -b2bx true
4. If necessary, modify connection details.
5. Save.
PassPort databases
1. Stop the trading engine or PassPort server.
2. Using the cd command, go to <PassPort_install_directory>/bin.
3. Execute the command ./dbConfig.
4. If necessary, modify connection details.
5. Save.
l Partner – A partner backup (or "partner profile") contains the definition of a single partner.
l Community (with associated partners) – A community backup (or "community profile")
contains the backup of a single community and all of its associated partners.
l Selected objects – Back up and restore individual objects or selected groups of objects of the
same object type:
o Components
o Services
o Inbound agreements
o Document agreements
o Outbound agreements
l System – The system backup is the most complete backup, containing (among other objects):
o Communities
o Partners
o Agreements
o Services
o Components
o Connections
o Collaboration Settings
o Embedded Servers
o Global Transport Settings
o Metadata Profiles
o Password Policies
o Processing Nodes
o Deployed Maps
o Application Pickups
o Application Deliveries
Additionally, the system backup enables you to selectively restore your files, by selecting a
restoration mode:
o Replace – Imports all backed-up objects, replacing previous content, and imports
objects that do not exist
o Ignore – If an existing object is encountered, it remains as is and is not imported; a message
is logged, and the restore moves on to evaluate the next import object
o Update – Adds new objects that do not exist, and adds backed-up content to objects that do
exist (incremental updates)
For details about using the backup / restore tools, see Backup and restore on page 344.
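The three restoration modes can be sketched as a merge over a set of named objects. This is an illustration of the documented semantics only, not the B2Bi restore tool itself; objects are modeled as name-to-content pairs, and the "+" merge in Update mode is an assumed stand-in for an incremental update:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RestoreModes {
    public enum Mode { REPLACE, IGNORE, UPDATE }

    // Apply a backup (backedUp) to an existing object set (current)
    // under one of the three documented restoration modes.
    public static Map<String, String> restore(Map<String, String> current,
                                              Map<String, String> backedUp,
                                              Mode mode) {
        Map<String, String> result = new LinkedHashMap<>(current);
        for (Map.Entry<String, String> e : backedUp.entrySet()) {
            boolean exists = result.containsKey(e.getKey());
            switch (mode) {
                case REPLACE:
                    // Replace existing content; import missing objects.
                    result.put(e.getKey(), e.getValue());
                    break;
                case IGNORE:
                    // Existing objects remain as-is; import only missing ones.
                    if (!exists) result.put(e.getKey(), e.getValue());
                    break;
                case UPDATE:
                    // Add missing objects; add backed-up content to existing
                    // ones (incremental update, modeled here as concatenation).
                    if (!exists) result.put(e.getKey(), e.getValue());
                    else result.put(e.getKey(),
                                    result.get(e.getKey()) + "+" + e.getValue());
                    break;
            }
        }
        return result;
    }
}
```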
Trading engine
Inline processors
To deploy and promote inline processors to each node in a cluster, put inline processors in the
following directory on the nodes.
<B2Bi_Install_Dir>\Interchange\site\jars
After you deploy the inline processor you must restart the trading engine.
Integration engine
l MBCs – Message Builder Components
l Datamapper maps – Maps created with Datamapper
l DML maps – Maps created with DML
To deploy and promote MBCs and Datamapper maps to a B2Bi environment, place them in the
appropriate folder on the shared file system, as indicated in the following table:
Item Location
MBC %B2BI_SHARED%\local\4edi\component
Datamapper map %B2BI_SHARED%\local\4edi\component
To deploy DML maps, use the Mapping Services deployment tool.
Introduction
B2Bi provides tools for generating audit logs of the changes that users perform on objects in the UI.
This includes changes implemented when creating objects (using the add wizards) and when
modifying objects (working in modification pages). The resulting logs show which user made
changes, at what time and date the change was made, and provide details of the change.
Audit information is collected for changes to the following objects:
l Parties (Partners and Communities)
l Contacts
l Routing IDs
l Messaging IDs
l Certificates
l PGP Certificates
l Delivery Exchanges
l Attributes
Configuration file
The information that B2Bi audits is controlled by the audit_config.xml file located in <B2Bi_
install_directory>/Interchange/conf.
Log files
By default, auditing is disabled on system startup. However, the CSV audit file, <machine_name>_
cn_audit.csv, is created for each node in the directory <B2Bi_install_
directory>/Interchange/logs. When you activate object change auditing, B2Bi writes the
audit information to this file.
Additionally, by activating an option in the audit_config.xml file, you can generate audit logs to
<machine_name>_cn_audit.xml. This XML formatted file provides raw trace data that you can
use for additional fine tuning of information, which you can then convert to the CSV format. When
activated, the XML version of this log file is also located in the <B2Bi_install_
directory>/Interchange/logs directory.
To activate logging to the XML file, see Activate logging to XML file on page 444.
When both output types are enabled, the logger formats and outputs information to both the CSV
and XML files. You must enable at least one of the output types to enable auditing.
l User – Account name of the user who implemented the change.
l Timestamp – Time and date of the change.
l Transaction ID – Unique ID used to group a set of auditing changes. When a user implements
more than one change on a single object and then saves the changes (for example, modifies
several fields in an object configuration page), the modifications are displayed in the audit log as
several actions that share a single Transaction ID.
l Object ID – Database ID of the object that has been changed.
l Object Type – Type of object that was changed (Partner, Certificate, Attribute, ...).
l Object Name – Display name of the object that was changed in the UI.
l Action – Nature of the change (Added, Updated, Deleted, ...).
l Related Object ID – Database ID of the object’s parent.
l Related Object Type – Object type of changed object's parent.
l Related Object Name – Display name of the changed object's parent.
l Attribute Name – Database name (not UI display name) of object's changed attribute.
l Old Value – Value of the attribute before the change.
l New Value – Value of the object after the change. If the modified value is an element of a list, the
entire list is recorded as the new value.
1. Go to <Interchange_install_directory>/conf.
2. Open audit_config.xml in a text editor.
3. Set the following attribute to "true" as in the following line:
<NodeType type="CN" enabled="true">
4. Save the file.
5. Restart B2Bi.
1. Go to <Interchange_install_directory>/conf.
2. Open audit_config.xml in a text editor.
3. Remove the comment from the line:
<!--<AuditedTransactionHandler
class="com.cyclonecommerce.persistence.audit.LogXmlTransactionHandler"/>-->
4. Save the file.
5. Restart B2Bi.
To control the information that is generated to the output files, you can modify the attributes of the
audit_config.xml file.
By default, the configuration is set to audit specific partner-related configuration changes made in
the UI.
The audit_config.xml configuration file controls which objects are logged, based on the
following class settings:
<IncludedClasses regex=".*ExchangePoint"/>
<IncludedClasses regex=".*PropertyFieldValue"/>
<IncludedClasses regex="com.cyclonecommerce.collaboration.*Party"/>
<IncludedClasses regex="com.cyclonecommerce.collaboration.messagingids.*MessagingId"/>
<ExcludedClasses regex="com.cyclonecommerce.cachet.administration.*"/>
<ExcludedClasses regex="com.cyclonecommerce.cachet.security.session.*"/>
<ExcludedClasses regex="com.cyclonecommerce.alerts.*"/>
<ExcludedClasses regex="com.cyclonecommerce.tradingengine.alerts.*"/>
<ExcludedClasses regex="com.cyclonecommerce.collaboration.alerts.*"/>
Removing any of the above settings affects the type of objects that are logged.
To log all objects, remove the comment markers from the following line:
<!--<IncludedClasses regex=".*"/>-->
Enabling the above setting, and commenting out the “Included/Excluded Classes” settings,
results in the capture of all activity persisted in the database, and enables logging of activities in the
default CSV log file.
Note Only the partner-specific objects are formatted properly in the log file, based on
configuration file settings. All other objects are logged without formatting, and in most
cases will derive names from database naming instead of UI display naming.
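The effect of the IncludedClasses/ExcludedClasses patterns can be sketched as a simple regex filter. The patterns are taken from the audit_config.xml example above, but the class AuditFilter and the exact evaluation order inside B2Bi are assumptions for illustration:

```java
public class AuditFilter {
    // Patterns from the audit_config.xml example above.
    static final String[] INCLUDED = {
        ".*ExchangePoint",
        ".*PropertyFieldValue",
        "com.cyclonecommerce.collaboration.*Party",
        "com.cyclonecommerce.collaboration.messagingids.*MessagingId",
    };
    static final String[] EXCLUDED = {
        "com.cyclonecommerce.cachet.administration.*",
        "com.cyclonecommerce.cachet.security.session.*",
        "com.cyclonecommerce.alerts.*",
        "com.cyclonecommerce.tradingengine.alerts.*",
        "com.cyclonecommerce.collaboration.alerts.*",
    };

    // A class is audited when its name matches an IncludedClasses pattern
    // and no ExcludedClasses pattern. (Sketch of the filtering idea only.)
    public static boolean isAudited(String className) {
        for (String ex : EXCLUDED) {
            if (className.matches(ex)) return false;
        }
        for (String in : INCLUDED) {
            if (className.matches(in)) return true;
        }
        return false;
    }
}
```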
3. Make a copy of the 004010.txt file, and then rename it to 4010.txt.
2. Open the elecod.dsc file in a text editor.
3. Add the following text:
480 4010 Draft Standards Approved for Publication by ASC X12 Procedures
Review Board through October 1997
4. Save the file.
Before you execute these procedures, restart the system and make sure that the integration engine is
up and running again.
l Date and time of crash
l Date and time of restart
l Select all severity levels.
l Select the option: Only active entries
5. Click Save.
6. Click Close.
7. Inspect the new search display results.
6. Click Close.
7. Inspect the new search display results.
For message queue resolution we deal with two categories of queue elements:
l Single queue elements
l Queue elements with a log parent
Use the B2Bi Queue Monitor to delete these elements. See Delete a queue element on page 450.
In all other cases the message elements have an entry in the B2Bi Message Log.
• Syncpoint – (Recommended) When you select this option, the element is moved from the queue
to Message Log. This enables you to resend the data after correcting the message error. See Apply a
syncpoint to a queue element on page 450.
• Delete – (Not recommended) When you select this option, the message is deleted. Even if the
failure was due to a known and correctable error, it is not possible to resend the data. See Delete a
queue element on page 450.
Use the syncpoint option when the flow contains only one entry in the queue. The syncpoint
moves the entry into the Message Log and inactivates the active log entry that is the parent of this
element. Use this option only if you are sure that there is only one entry in the queue.
1. In Queue Monitor, select the Queue tab to view the list of current queues.
2. Select a queue that corresponds to transactions that may have failed messages.
3. Right-click on an entry, and from the context menu select Delete.
8. Message Log displays a list of any message entries that have the logger ID.
9. Right-click each entry, and from the context menu select Inactivate entire flow.
The integration engine:
l Creates the hierarchical tree of the flow
l Finds any queue entries that have a parent log ID from the flow, and places them in
Message Log as a syncpoint.
l Inactivates all active logs from the flow
Before you delete queues, run:
r4edi clean_queues.x4 /P
This command returns the list of queues that are not actively being used. These are queues that are
left after you delete a Hierarchical Message Environment, Transfer Gateway or Transfer Adapter from
the runtime dataset.
Then run the command:
or,
The command removes the inactive queues.
In logger, all files beginning with axxxxxx (active log files) and ixxxxxxx (inactive log files) must
have the same size (typically 10 MB). The size of these files must match the size of the preactv and
preiactv files in that same directory.
In the queue directory (if the system is running in the default "fail-safe" mode), all files except the
cleanflag and uncleanflag files should have the same size.
To obtain the correct file size, delete the incorrectly sized files (note: this results in potential data
loss). The integration engine will regenerate these files.
After fixing files that have an incorrect size, apply the soft crash procedures below.
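The consistency check described above (all files in the queue directory the same size, except the flag files) can be sketched as follows. This is an illustrative helper operating on a name-to-size map; the class QueueSizeCheck is not a B2Bi tool:

```java
import java.util.Map;
import java.util.Set;

public class QueueSizeCheck {
    // Given file names mapped to their sizes, report whether every file,
    // other than the listed flag files, has the same size. Files flagged
    // false here are candidates for deletion and regeneration.
    public static boolean allSameSize(Map<String, Long> fileSizes,
                                      Set<String> skip) {
        Long expected = null;
        for (Map.Entry<String, Long> e : fileSizes.entrySet()) {
            if (skip.contains(e.getKey())) continue;     // ignore flag files
            if (expected == null) expected = e.getValue();
            else if (!expected.equals(e.getValue())) return false;
        }
        return true;
    }
}
```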
First empty the cache of the porter server: delete the file $CORE_DATA/porter/porter.mem and
then recreate it as an empty file.
Check if the integration engine services are still running. You must stop these services because, in
some cases, integration engine tasks hang without stopping properly. Monitor both the open
connections in listen state and the established connections. At the same time, check the
connections in "close wait" status and validate with the OS administrator what time the connection
was closed. The "close wait" time is the minimum time that the system was down.
Exception: If the queue does not have the option -c activated, all entries are logged as possible
duplicates when the queue crashes. If you notice errors of the type "uncommitted file … " when the
queue is started, enable -c temporarily. To do this:
1. Go to the directory $CORE_ROOT/config.
2. Open the environment.dat file in an editor.
3. Set the value of the environmental variable QUEUE_OPTIONS to -c.
4. Save the file.
5. Restart the integration engine.
Duplicate handling
1. Go to the directory $CORE_ROOT/config.
2. Open the environment.dat file in an editor.
3. Set the value of the environmental variable B2BI_STOP_POSSIBLE_DUPLICATES to 0.
4. Save the file.
5. Restart the integration engine.
This enables all messages to be processed and lets the back-end system deal with duplicates. This
also generates multiple Message Log entries. The integration engine behavior then depends on how
the back-end system is set and where the fail-over occurred.
Naming conventions
To be reused easily, B2Bi objects should respect the following naming conventions:
l To make object names easier to read, use "mixed case" notation and an underscore "_"
l Start most object names with an upper case "object type" prefix followed by an underscore "_"
and then use lower-case characters, separating words with an underscore "_"
l All applications, communities, and partners can be any kind of business or IT entity
l To make the entity type clearly visible, start these object names with a one-character prefix (a, c,
p)
l Start all types of Pickups (<x>P) and Deliveries (<x>D) with a two-character "object type"
prefix and end them with a "P" or a "D"
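The prefix conventions above can be sketched as simple name builders. This is an illustration only; the class B2biNames and its methods are hypothetical helpers, not part of B2Bi:

```java
public class B2biNames {
    // Build an Application Pickup name following the convention
    // AP_[protocol]_[application name], for example AP_fs_cLSON_M3.
    public static String applicationPickup(String protocol, String application) {
        return "AP_" + protocol + "_" + application;
    }

    // Build an Application Delivery name following the convention
    // AD_[protocol]_[application name].
    public static String applicationDelivery(String protocol, String application) {
        return "AD_" + protocol + "_" + application;
    }

    // Build an Inbound Agreement name following the convention IA_[name].
    public static String inboundAgreement(String name) {
        return "IA_" + name;
    }
}
```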
Applications [application name]
Application Pickups AP_[protocol]_[application name](_xy)
l Example: AP_fs_cLSON_M3
o Type: File system
Application Deliveries AD_[protocol]_[application name](_xy)
l Example: AD_FS_cLSON_M3
o Type: File system
l Example: AD_ftp_cLSON_M3
o Type: FTP
Metadata Profile MDP_[C|A|P]_<type>_[criteria(s)=value(s)]
l Example: MDP_A_mig_[DocumentType=MIG1]
l Manage metadata profiles
Service – Metadata Processing SMP_[C|A|P]_[action]
l Example: SMP_A_t_CopyMessage_LSN
o Type: Metadata
Service – Partner Processing SPP_[P|A]_[format]_[srcDocType]_[formatVersion]_to_
[dstDocType]_(xy)
l Example: SPP_A_4010_862_to_CopyMessage_LSON
Component - Custom Delivery CUD_[delivery type]
l Example: CUD_DeliverToApplicationX
o Type: Custom Delivery
l Component: B2BX Application/B2Bi Application X Delivery
Component - Detector DET_[action]
l DET_[format][_subFormat/type]_<sequence ID>_[_partner/flow][_variant]
l Example: DET_EMB_M2
o Type: Detector
l Component: MAPSERVICE.B2BiEMBDETECTOR.B2BiIDetector_M3
Component - Document DOC_
l Example: DOC_AMTrixMigration_Override
o Type: Document
l B2BX Application/AMTrixMigration_OR
Component - Enveloper ENV_[format][_subFormat/type][_variant]
l Example: ENV_EBM_XML
o Type: Enveloper
o Component: MAPSERVICE.B2BiEBMEnveloper.B2BiEnveloper_M3
Component - Map MAP_[srcFormat]_[srcDocType] --> [dstFormat][_subFormat/type][_variant]
l Example: MAP_4010_862_to_EBMXML
o Type: Map
o Component: MAPSERVICE.DEMO.Edi2XML_004010_862
Component - Post Enveloping PEN_[action]
l Example: PEN_MPDP_Override
o Type: Post enveloping
o Component: B2BX Application/MPDP_ORDetection
Component - Post Transfer Failed PTF_[action]
Component - Post Transfer Success PTS_[action]
Connection CON_[type]_[name]
Community [community name]
Community Routing ID Routing IDs are generated automatically for Messaging IDs. If created
manually, use the following pattern:
l CRID_[community name]
l + others automatically inherited from Messaging Profiles
Trading Pickup & delivery exchange TP_[protocol]_[community name](_xy)
Partner [partner name]
Partner RoutingID PA_[partner name]
l + others automatically inherited from Messaging
Profiles
Partner Messaging ID Standard EDI identifiers + custom identifiers
Partner Delivery PD_[protocol]_[partner name](_xy)
Inbound Agreement and Outbound agreement IA_[name] / OA_[name]
l Example: IA_TESTAGREEMENT
l Example: OA_TESTAGREEMENT2
Document Agreement (DA) DA_[docType]
l Example: DA_ORDERS
Document Agreement Attributes Any pair of Tag | Value
l for setting GI attributes use prefix “B2BXGISend_“
l for accessing AMTrix “agreement parameters” use
prefix “B2BIDOC_“
Configure automated trading engine purge on page 458
Configure event purge on page 459
Purge trading engine manually on page 459
Purge integration engine manually on page 461
To configure automatic purging:
Notes:
If you use a database other than Oracle, an event such as the following is written to the event log file
when a message is deleted:
With Oracle, events are handled with a stored procedure, and events such as this are not written to
the log.
We recommend setting the age for deleting messages to the same value as the age for deleting
message-related events. For more information, see Configure event purge on page 459.
The default configuration is to delete database records for message-related events after 45
days. The system checks every 15 minutes to delete events that have reached the age threshold.
We recommend setting the age for deleting message-related events to the same value as the age
for deleting trading engine messages. For more information, see Configure automated trading
engine purge on page 458.
5. Click Save changes.
This tool has special options for use in DB2 environments. See parameter descriptions below.
Caution Make sure the trading engine server is turned off before using messagePurgeTool. This
includes servers on all machines if you have a cluster environment.
Parameters
Run messagePurgeTool with one of the following parameters. You can only use one parameter at
a time.
deleteAll
Deletes all records in the database and related backup files regardless of age or state.
Depending on the volume of records in the database and backup files, the utility may take a
while to delete everything.
Using with DB2 databases:
For DB2 database environments only, this parameter can be run with the -noLog option.
Use the -noLog option to disable transaction logging in DB2. If you do not use the -noLog
option, transaction logging remains enabled.
In a single-host DB2 environment, the only impact of -noLog is to reduce the log space
used on the DB2 system.
Warning: Do not use the -noLog option in clustered DB2 implementations. The
transaction log is needed to maintain the synchronization of clustered databases.
deleteAllSkipFiles
Deletes all records in the database, but does not delete backup files. If you have a large
number of records, this option works faster than the deleteAll option. However, if you use
it, you must manually delete backup files (for example, by deleting the backup directory).
Using with DB2 databases:
For DB2 database environments only, this parameter can be run with the -noLog option.
Use the -noLog option to disable transaction logging in DB2. If you do not use the -noLog
option, transaction logging remains enabled.
In a single-host DB2 environment, the only impact of -noLog is to reduce the log space
used on the DB2 system.
Warning: Do not use the -noLog option in clustered DB2 implementations. The
transaction log is needed to maintain the synchronization of clustered databases.
resetMessages
Changes the purge dates of records in the database and of documents in the back-up
directory. When messages are processed, the trading engine assigns future purge dates,
based on the age interval set on the purge configuration page in the user interface. If you
change the age interval, you can use the resetMessages option to change purge dates of
existing records and documents in line with the changed interval.
For example, if the interval is 45 days, the system sets the purge date for messages
processed today as the date 45 days in the future. On day 44, you change the interval to
90 days and then run the resetMessages option. The system recalculates the purge dates
of existing messages as 90 days after the messages' origination dates. This means that on
day 44, the purge date is set ahead 46 additional days, for a total of 90 days before
purging.
The utility may take a while to run with this option if there are a large number of records to
reset. See Configure automated trading engine purge on page 458.
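The purge-date recalculation described above can be expressed as simple date arithmetic. This is a sketch of the documented behavior (the class PurgeDates is illustrative, not part of messagePurgeTool):

```java
import java.time.LocalDate;

public class PurgeDates {
    // Recompute a message's purge date from its origination date and the
    // new age interval, as resetMessages is described to do: the new purge
    // date is the origination date plus the new interval, regardless of
    // the interval that was in effect when the message was processed.
    public static LocalDate recalculatedPurgeDate(LocalDate originationDate,
                                                  int newIntervalDays) {
        return originationDate.plusDays(newIntervalDays);
    }
}
```

Using the example in the text: for a message originated 44 days ago, changing the interval from 45 to 90 days pushes the purge date 46 additional days out.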
Example
To delete all records in the database and all related backup files, run the following command:
messagePurgeTool -deleteAll
Event logging
Events related to running the messagePurgeTool are written to the messagePurgeTool.log file
in the <trading engine install directory>/logs directory.
The following procedure shows the UNIX commands. Equivalent commands are available for
Windows.
1. Open a terminal session.
2. Navigate to <B2BI_install_directory>/Integrator.
3. Run the command:
profile
4. Navigate to <B2BI_install_directory>/Integrator/solutions/4edi/pgm.
5. Run the command:
r4edi b2bi_clean.x4
6. The program asks you to stop the integrator service.
7. Open a second terminal session.
8. Navigate to <B2Bi_Install_directory>/Administration/bin.
9. Run the command:
Admin_Integrator stop
10. Return to the first terminal session and confirm that the integration engine has been stopped by
pressing Enter.
11. Close the first terminal session.
12. Return to the second terminal session and run the command:
Admin_Integrator start
13. Close the second terminal session.
Caution: Do not use the command xib_clean.x4 (located in <B2BI_install_
directory>\Integrator\4edi\pgm). This program wipes out all deployed maps and creates an
empty non-B2Bi-compatible dataset!
l Trading engine (Interchange)
l Integration engine (Integrator)
l Axway Database (optional)
The license key for the trading engine is an XML file. The license keys for the integration engine and
the Axway Database are more traditional keys with a number of license bits.
Typically, a license is valid for one or more years. At the end of the validity period, you must update
or replace expired keys. In some cases you may need to replace a key in order to change your license
to enable more or less B2Bi functionality.
1. Stop the B2Bi trading engine. For the stop procedure, see Start and stop B2Bi on page 331.
2. Go to the directory [B2Bi installation name]/Interchange/conf and locate the file
license.xml.
3. Rename the old license file from license.xml to license.old.
4. Copy the new license into the directory, and rename the new file license.xml.
5. Start the engine. For the start procedure, see Start and stop B2Bi on page 331.
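Steps 2 through 4 above can be sketched as shell commands. The paths and file names come from the procedure; the scratch layout created first is purely illustrative, so that the sketch is self-contained and runnable anywhere. In a real installation you would work directly in [B2Bi installation name]/Interchange/conf.

```shell
# Illustrative sketch of steps 2-4. A throwaway layout is created first;
# the contents of the license files here are placeholders only.
B2BI_HOME="$(mktemp -d)"
mkdir -p "$B2BI_HOME/Interchange/conf"
echo '<License>expired</License>' > "$B2BI_HOME/Interchange/conf/license.xml"
echo '<License>renewed</License>' > "$B2BI_HOME/new-license.xml"

cd "$B2BI_HOME/Interchange/conf"
mv license.xml license.old                   # step 3: keep the expired key
cp "$B2BI_HOME/new-license.xml" license.xml  # step 4: install the new key
```

After restarting the trading engine (step 5), it reads the new license.xml.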
1. Stop all B2Bi servers. See Stop B2Bi servers individually on page 336.
2. Go to [B2Bi installation name]/Axway_Database/license.
3. Open the license.key file in a text editor, and replace the old license details with the new
information.
4. Start B2Bi. See Start B2Bi servers individually on page 333.
The system throttle can be engaged for different reasons:
l Intentionally engaged at startup (via a tuning.properties file setting).
l Intentionally engaged during runtime (via a UI control button).
l The trading engine Task Scheduler load exceeds the default limit of 150.
l The JVM (trading engine or control node) experiences an OutOfMemory exception.
l The file system takes too long to write a test file to the backup directory. In rare instances, Java
Garbage Collection on the control node or a trading engine node can suspend activity for so long
that the timer for the file system health check (when the check is invoked just before the onset of
a Garbage Collection event) exceeds its 15-second default limit before the Garbage Collection
completes.
When a system throttle is engaged, consumption of new inbound work is halted. The trading engine
stops polling and returns a 503 error to your trading partners when they try to connect.
The trading engine then re-prioritizes all traffic based upon these rules:
l New outbound files are consumed, packaged, and sent before any other work.
l Inbound connections are then opened and inbound messages are processed.
l Failed outbound/inbound messages are then processed.
When the system throttle is invoked you may receive a log entry similar to the following
(TaskScheduler load example):
When a B2Bi trading engine node detects that none of the connected integration engine nodes are
started, it queues messages and places them in scheduled production until the
b2bi.integrator.transferqueue.max.size property value (default = 2000) is reached. When that
threshold is reached, B2Bi engages the system throttle on that specific trading engine node. As soon
as at least one integration engine starts (if the system throttle was engaged only because the transfer
queue threshold was reached), B2Bi disengages the system throttle and un-queues messages by
sending them to the integration engine. This default threshold of 2000 messages applies per
trading engine node, so each trading engine node can queue up to 2000 messages before it
engages the system throttle.
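For example, to let a node queue up to 5000 messages before throttling, the override would be a single key=value line. Note that this section does not say which properties file holds this setting, and the value 5000 is purely illustrative:

```
b2bi.integrator.transferqueue.max.size=5000
```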
You manage general system throttle settings from the tuning.properties file. These settings are
described in the following paragraphs. Additionally, you can manually engage/disengage system
throttling at runtime in the UI. For information on UI system throttle controls, see the "Manage B2Bi
nodes" chapter of the B2Bi Administrator Guide.
The properties in this file are applied only to the node where the tuning.properties file is
located. You must set the property for each node of a cluster by modifying the file for each node.
By default, tuning.properties is empty. This indicates that all of its possible entries are
operating at their default values.
This chapter describes how to use the two properties related to system throttling that you can set in
this file:
l systemThrottle.pausePickups – Default=false
l systemThrottle.maximumTaskQueueSize – Default=150
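In tuning.properties, these settings are plain key=value lines. The following sketch shows both at their documented default values (because these are the defaults, an empty file behaves identically):

```
systemThrottle.pausePickups=false
systemThrottle.maximumTaskQueueSize=150
```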
To force startup in throttled mode:
1. Go to <Interchange_install_directory>/conf and open the tuning.properties file
in a text editor.
2. Add the property:
systemThrottle.pausePickups
3. Set the value of the property to "true":
systemThrottle.pausePickups=true
4. Save the file.
5. Restart B2Bi.
When the system restarts, it will operate in throttled mode. This status is displayed for trading
engine nodes on the System Management page of the UI. You can temporarily override throttling by
clicking Restart pickups for all trading engine nodes. Each time you restart B2Bi, it will restart in
throttled mode until you modify this property.
We recommend that you change this property value in increments of 25.
To change the Task Scheduler load limit:
1. Go to <Interchange_install_directory>/conf and open the tuning.properties file
in a text editor.
2. Add the property:
systemThrottle.maximumTaskQueueSize
3. Set the value of the property to the desired value, for example:
systemThrottle.maximumTaskQueueSize=175
4. Save the file.
5. Restart B2Bi.
When the system restarts, it will engage throttling at the new load limit threshold.
1. Go to <Interchange_install_directory>/webapps/ui/WEB-INF/.
2. Open the file web.xml in a text editor.
3. Modify the value of the following lines, resetting the value as required:
<!-- Specify maximum allowed size of the upload. Defaults to 50 Mb. -->
<init-param>
    <param-name>upload-max-size</param-name>
    <param-value>52428800</param-value>
</init-param>
4. Save the file.
5. Restart B2Bi for the change to take effect.
Note: This value is not conserved on upgrades. If you upgrade your B2Bi installation you will need
to reset this value in the upgraded file.
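The param-value is expressed in bytes: the shipped default of 52428800 corresponds to 50 Mb (50 × 1024 × 1024). A quick shell check is handy when computing a new limit:

```shell
# upload-max-size is given in bytes; 50 Mb is the shipped default.
mb=50
bytes=$((mb * 1024 * 1024))
echo "$bytes"   # 52428800
```

To double the limit to 100 Mb, for example, you would set param-value to 104857600.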
l For log files – [B2Bi_common_directory]/data/logger/
l For message payloads – [B2Bi_common_directory]/data/filer/
When B2Bi first starts, it logs processing-event data to an active log file named a0000000. Active
logs have a size limit of 10 Mb. If a0000000 still contains active entries when it reaches that
limit, B2Bi creates another active log file named a0000001.
When all entries in a0000000 reach the “inactive” state, which means that the processing is
complete, B2Bi renames the file i0000000.
In this manner, B2Bi logs continuously to active log file a000000x, and generates inactive logs in
the pattern:
i0000000
i0000001
i0000002
i0000003
...
i000000n
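The naming pattern above can be sketched with a small shell function. The zero-padded seven-digit index is inferred from the file names shown; the helper itself is purely illustrative:

```shell
# Build a B2Bi-style log file name: "a" prefix for active logs,
# "i" for inactive, followed by a zero-padded seven-digit index.
log_name() {
  printf '%s%07d\n' "$1" "$2"
}
log_name a 0   # a0000000 - the first active log
log_name i 3   # i0000003
```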
As inactive log files are generated, the log directory grows with each new active or inactive file that
is added. If B2Bi logged indefinitely with no control on log file life, the log directory would become
very large and eventually cause data storage issues.
The rate at which your log directory grows depends on the amount of file exchange activity and on
the amount of data you elect to log on your exchanges.
By default, the archiver deletes inactive files; it behaves as a purging tool. You can use the log file
archiving schedule page to control the volume of inactive log files that are saved in the B2Bi
directories by setting the purge schedule. You may want to set or modify the archiving schedule if
you find that your inactive log files are occupying too much disk space.
The archiver behavior is controlled by a script. It is possible to modify this script to enable the saving
of inactive log files for retrieval.
generated to the trace log and log archives are not generated.
For details of how to use the System Profile Manager to manage logger archiving, see Use System
Profile Manager on page 214.
Query restrictions
l Number of days for default searches – The value in this field is the number of days that
appears in the search option Within the last [n] days under the date area on the custom
search panel of the Message Tracker page.
This also is the value for a search condition on the message attributes tab of the message details
page. The value applies to the option Restrict search to messages that originated within
the last [n] days at the bottom of the tab.
Additionally, if you click Find without specifying search conditions on the Message Tracker
search page, the search finds all messages traded within this number of days.
This field's value also sets the number of days for the default searches Failed messages and
Negative responses on the Message Tracker menu on the top toolbar.
l Maximum search results allowed to delete – The maximum number of search results users
can delete with a single delete action on the Message Tracker page. For example, if more than
the maximum number of search results are returned and you try to delete all results, the trading
engine does not delete any of the messages. Instead you are prompted to define a search that
returns fewer records than the maximum allowed to delete. Limiting the number of search results
that can be deleted at once helps maintain database performance.
l Default number of search results to return – The number in this field is the default value
of the Maximum # of search results field on the custom search panel of the Message Tracker
page.
l Maximum number of search results to return – The number in this field is the highest
allowed value of the Maximum # of search results field on the custom search panel of the
Message Tracker page. If you are performing a search and enter a number in the Maximum # of
search results field that is larger than this value, an error message displays.
l Default days for before or after date searches – The number in this field is the default
value for the on a maximum [n] days period field for the After and Before option under date
searches on the custom search panel of the Message Tracker page.