August 1999
Copyright © 1999
GE Information Services, Inc.
Table of Contents
Preface.................................................................................................................................... iii
Section 6: Translating and Debugging.........................................................................307
Overview of Translating and Debugging............................................................................... 308
Translating Using Workbench ................................................................................................. 311
Translating at the Command Line........................................................................................... 319
If the Translator Does Not Execute Successfully ................................................................... 328
Using the Translation Trace Log.............................................................................................. 330
Understanding the Trace Output ............................................................................................ 336
Viewing Input and Output Files.............................................................................................. 366
Using Trade Guide Reporting Features to Debug................................................................. 375
Glossary...............................................................................................................................403
Index.....................................................................................................................................417
Release Notes
    Provides an overview of the latest release.

System Configuration Requirements Document (UNIX only)
    Provides information on system resource requirements and procedures for configuring the Control Server on various UNIX platforms.

TRADACOMS Standards Implementation Guide
    Provides information on using Application Integrator in a TRADACOMS standards implementation.

Trade Guide for System Administration Guide
    Provides instructions on using the Trade Guide for setting up and running translations, and administering the system.

Transaction Modeler Workbench User's Guide (this manual)
    Provides information and procedures for developing data models. Used together with both the Trade Guide for System Administration Guide and the applicable standards implementation guide.

UN/EDIFACT Standards Implementation Guide
    Provides information on using Application Integrator in a UN/EDIFACT standards implementation.

User Exit Extension User's Guide
    Provides information on using the Application Integrator feature that allows you to write user-defined functions which can be invoked (like standard Application Integrator functions) during data modeling.

ODBC Developer's Guide (if licensed)
    Provides information about connecting to different Structured Query Language (SQL) databases and creating Open Database Connectivity (ODBC) data sources.
About the Workbench User's Guide
This guide is designed to give you a working understanding of the operation and capabilities of the software. The information in this document is arranged so you can quickly and easily understand the Workbench features, menus, and operations.
The Transaction Modeler Workbench User's Guide is divided into the following sections and appendixes:
Section 1: Overview of Data Modeling
    Provides an overview of the Workbench features and terminology.

Section 2: Creating Source and Target Data Models
    Provides complete procedures for creating data models.

Section 3: Building Rules into Data Models
    Provides procedures for using RuleBuilder® and MapBuilder™ to incorporate logic into data models.

Section 4: Creating Environments
    Provides instructions for creating the environment for electronic commerce.

Section 5: The Data Modeling Process
    Provides a discussion and steps to the data modeling process, including Application Integrator conventions and tips.

Section 6: Translating and Debugging
    Provides procedures for translating and debugging data models.

Section 7: Migrating to Test and Production Environments
    Provides background information and steps for migrating models to a test or production functional area.

Appendix A: Application Integrator Files
    Lists all the files shipped with Application Integrator, including Workbench files.

Appendix B: Application Integrator Model Functions
    Lists the complete functions and keywords used with Workbench to create powerful data models.

Appendix C: ASCII Character Set
    Lists the various forms of the ASCII character set (for example, decimal and hexadecimal).

Appendix D: Quick Reference Sheets
    Provides quick reference sheets of the Workbench features.

Appendix E: Application Integrator Utilities
    Provides an explanation of Application Integrator scripts and programs.

Appendix F: Runtime Errors
    Provides a list and description of the error codes you might encounter while modeling, translating, and debugging.

Glossary
    Provides a description of the Application Integrator terminology used in this manual.

Index
    Provides an alphabetical list of subjects and the corresponding page numbers where information can be found.
Prerequisites for Workbench

Documentation Conventions

Typographical Conventions

User Input
In this document, anything printed in Courier and boldface type should be entered exactly as written. For example, if you need to enter the term "userid," it will be shown in the documentation as userid.

Notes, Hints, and Cautions
Notes provide additional information and are boxed inside the text, using the following format.
Screen Images
The screen images in this manual were taken from the Windows version of Workbench running on a Windows 95 platform. If you are running Workbench on Windows NT or UNIX, the actual screens (windows and dialog boxes) may differ slightly in appearance. Differences between platforms are noted throughout this manual where appropriate.
Mouse and Keyboard Conventions
In most cases, you can use either a mouse or a keyboard to enter, view, and manipulate the windows and dialog boxes of Application Integrator.

Keyboard Shortcuts
In many instances, you have the option of using either the mouse or the keyboard to perform an action. Keyboard shortcuts are shown to the right of the menu item. For example, to use the Find shortcut, you press and hold the Ctrl key and then press the F key. This shortcut is shown as Ctrl+F on the drop-down menu, and that is how shortcuts are listed in this manual.
On-line Help
Workbench comes with a Help system to assist you on-line while setting up and maintaining translations and administering the system. The Application Integrator on-line Help system opens a Help window with search and navigation features.

To get Help on Help:
1. From the Workbench main menu, choose Help. The Help menu appears. From the Help menu, choose Error Code Reference.
2. From the dialog box, choose the Help menu and then choose the Help on the Browser option. You can also press Ctrl+H.
A Help entry on the Help system itself appears.
❖ Note: The Section value entry box on the Help window provides a means to quickly access all the major Help menus. Click the indicator (in UNIX a small rectangle; in Windows a down arrow) and a list of all major Help menus appears. Click any menu name to move between them.
To use Application Integrator Help:
1. From the Workbench main menu, choose Help. The Help menu appears.
2. From the Help menu, choose one of the following options:
   • Error Number Reference
   • Keyword/Function Reference
   • About
To close the Help entry and Help browser, from the File menu, choose Close Browser. You can also press Ctrl+W.
Calling for Customer Support
To help Customer Support assist you more effectively when you call, follow these steps:
1. If possible, attempt to resolve the question internally or through the Application Integrator documentation, including the print, on-line, and training documentation.
2. Make sure you have copied down the exact Application Integrator version for which you are seeking assistance. The version is found by selecting the About option on the Help menu of Workbench. Also copy down the compile version date of the Control Server (called cservr in UNIX and cservr.exe in Windows) and the translator (called otrans in UNIX and otrans.exe in Windows).
UNIX Users
To access these dates, type at the command line:

    strings <program name> | grep otBuild

where <program name> is either cservr or otrans.
Windows Users
To access these dates:
a. From File Manager or Windows Explorer (Windows 95 or NT 4.0), right-click the filename cservr.exe.
b. With the filename highlighted, choose Properties from the File menu. The Properties dialog box opens and displays information such as the filename, path, last change, version, copyright, size, and other attributes.
5. Make sure the person placing the support call has taken
Application Integrator training and has a thorough background
on the issue for which you are seeking assistance.
6. Call the GE Information Services Application Integrator Help
Desk for support at (248) 324–1242.
Sending a Copy of Your Files to Customer Support
At times, it may be necessary to send an example of your problem to the support staff. For consistency and compatibility across the various platforms that execute Application Integrator, send files to Customer Support using the tar command.
To restore the files sent from Customer Support, use the tar command with the -xvf options, for example:

    tar -xvf /dev/fd0
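The archive-and-restore workflow above can be sketched with Python's standard tarfile module. This is an illustration only: the file names and temporary directory below are hypothetical, not names from the manual, and on UNIX the archive path would typically be the diskette device (for example, /dev/fd0) rather than a file.

```python
# Sketch of bundling model files for Customer Support with Python's tarfile
# module (the equivalent of tar -cvf). File names here are hypothetical.
import tarfile
import pathlib
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "mymodel.src").write_text("source data model\n")
(workdir / "mymodel.tgt").write_text("target data model\n")

archive = workdir / "models.tar"
with tarfile.open(archive, "w") as tar:
    for name in ("mymodel.src", "mymodel.tgt"):
        tar.add(workdir / name, arcname=name)

# Restoring (the equivalent of tar -xvf) reads the archive back:
with tarfile.open(archive) as tar:
    print(tar.getnames())  # -> ['mymodel.src', 'mymodel.tgt']
```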
Windows Users
There is no method for reviewing the contents of the Windows
installation CDs. To review the contents of the installation media,
you must first install the product and then use File Manager,
Windows Explorer, or the MS-DOS dir command to list the
contents of the \Trandev, \Trantest, or \Tranprod directories.
Defining Items
Defining items are the lowest-level descriptors in the data model. Examples of defining items include elements or fields. They define a data string's characteristics, such as size and type. Some examples of item type characteristics are:
• Alpha characters (letters only [A-Z] [a-z])
• Numeric characters (numbers only [0-9])
• Alphanumeric characters (a combination of numbers and letters [A-Z] [a-z] [0-9] and the "space" character)
• Date
• Time
You can specify that defining items are variable in length by using an item type that includes delimiters to denote the end of one field and the start of the next. Or you can define items fixed in length by specifying the number of characters in the field (in which case, no delimiters are necessary).
Numeric, date, and time item types need a format definition to describe how the field should be parsed or constructed. For example, a date might be formatted as MMDDYYYY, MM/DD/YYYY, or YYYY-MM-DD. During the building of the data structure, Workbench provides an easy method of masking for the appropriate format.
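As an illustration of format masking only (these are Python strptime masks, not Workbench's masking mechanism), the three formats named above can all parse the same date:

```python
# Sketch: parse one date written in each of the three formats named above.
# The mask strings are Python strptime masks, not Workbench format masks.
from datetime import date, datetime

samples = {
    "MMDDYYYY":   ("08151999", "%m%d%Y"),
    "MM/DD/YYYY": ("08/15/1999", "%m/%d/%Y"),
    "YYYY-MM-DD": ("1999-08-15", "%Y-%m-%d"),
}
for label, (text, mask) in samples.items():
    parsed = datetime.strptime(text, mask).date()
    print(label, "->", parsed)  # each parses to 1999-08-15
```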
Tag Items
Tag items enable you to identify different records or segments. The "tag" is the string of data at the beginning of the record/segment. A record delimiter in the input or output may separate tag items. If multiple types of records exist in a file, there is normally a "tag" referenced to differentiate each type. For example, a heading record may begin with an 'H' in the input stream and a detail record may begin with a 'D.'
Tag items can be fixed length or variable length.
Fixed Length Record
In a fixed-length record, you determine the number of characters allowed in each field. If the data is not long enough to fill each field, space characters are added (either to the beginning or the end of the field, depending on whether you have right- or left-alignment specified in each field).

Record: D 3 5 0 0 1 A B C 4
(a tag of 1 character, a field of 5 characters, a field of 3 characters, and a field of 1 character)
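Because fixed-length fields need no delimiters, the record in the figure above can be split purely by position. A rough sketch in plain Python (this is illustrative slicing, not Application Integrator parsing):

```python
# Sketch: slice the fixed-length record from the figure into its tag and fields.
record = "D35001ABC4"

tag    = record[0:1]    # 1-character tag: 'D'
field1 = record[1:6]    # field of 5 characters: '35001'
field2 = record[6:9]    # field of 3 characters: 'ABC'
field3 = record[9:10]   # field of 1 character: '4'

print(tag, field1, field2, field3)  # -> D 35001 ABC 4
```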
Variable Length Record
A variable-length tag item uses delimiters to denote the end of one field and the start of the next. You determine the minimum and maximum number of characters to be used in each field. If there is no data available for a particular field, the field's two delimiters will appear next to each other with no spaces between them. (See Field 4 in the example below.)

Record: BEG*00*NE*00123**010197
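Splitting the record above on its '*' element delimiter shows the empty Field 4 produced by the two adjacent delimiters. A sketch in plain Python (the delimiter handling here is illustrative, not the translator's):

```python
# Sketch: split the delimited record on '*'; adjacent delimiters yield an
# empty string, which is Field 4 in the example above.
record = "BEG*00*NE*00123**010197"
tag, *fields = record.split("*")

print(tag)     # -> BEG
print(fields)  # -> ['00', 'NE', '00123', '', '010197']
```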
Container Items
Like a tag, a container is used to group two or more defining items. Unlike a tag, a container does not include a tag (or match value) at the beginning, and in its absence a placeholder in the data stream is usually required. For example, in an X12 translation session, when a composite element is used to determine a measurement based on data within the input stream (height, width, and length, for MEA04), it is defined using a container in Application Integrator.
Depending on your standards implementation, containers are not used as often as the other data modeling items. A list of the container item types found in the access models supplied by Application Integrator is given in the appendixes of each of the standards implementation guides.
Group Items
When two or more items have the ability to repeat or "loop," you use the group item to define this characteristic. The group item does not reference any data in the input or output and, therefore, has no value associated with it. For example, when an invoice contains a series of individual line items, each line item would be characterized as a record and the records would be grouped together by a group item.
The Relationship of Data Model Items
Organizing these data model items (defining, tag, container, and group) in a hierarchy further defines the data structure.
The highest level of data is the parent data model item. The next level of data is the child data model item. Children on the same hierarchical level are called siblings.
In the example below, "Heading_Record" is the parent item, and "Heading_Document_Number" and "Heading_Document_Date" are siblings.
Source and Target Data Models
The four data model items (defining, tag, group, and container) are used to describe the structure of the input data in the source data model and the structure of the output data in the target data model. These two data models also contain actions to be performed on the data to correctly map it from the source to the target.
In addition to creating a data model from scratch, you can open an existing data model, modify the data model items and rules, and save the model under a new name to create a new data model. This is described in Section 2. As you define the data structure, attributes (such as size, format, and occurrence) of each field or record/segment are specified. GE Information Services supplies data model templates for the major public standards, including ASC X12, UN/EDIFACT, and TRADACOMS.
Both the input (source) and output (target) data require a data
model. Usually, the input data can be parsed using one source data
model and the output can be constructed using one target data
model. However, in some cases, multiple source or target data
models are involved in a single transaction.
For example, the public standard X12 utilizes enveloping segments
(records) to enclose documents which are part of the data structure.
(See the following figure.) The Application Integrator X12
implementation utilizes one data model to parse/construct the
envelope segments, and another data model for the processing of
the documents contained within the envelopes.
[Figure: X12 enveloping. Application Integrator Data Model 1 parses/constructs the envelope segments; Data Model 2 processes the documents contained within the envelopes.]
The Access Model
Each data model must be associated with an access model. The access model contains a definition of the data model item types available for the defining, tag, and container items that will be associated within this data model. The access model information describes the items that will be parsed (input) or constructed (output) in the data streams. (The group type is always available, although it is not specifically described in the access model.)
For example, a data model item named "InvoiceDate" could be assigned an item type of "DateFld" in the source data model. By examining the access model associated with the data model, we would find that item type "DateFld" is defined by an Application Integrator function #DATE where either spaces or all zeros in the data field are valid.
Application Integrator supplies access models for the standards implementation. See Section 2, "Creating Source and Target Data Models," for a list of these access models and more information on the item types in these files.
Associating Input Data with Output Data
The second part of data modeling uses variables and rules to map data between the input (source) and output (target) data models. Workbench provides two graphical tools, known as RuleBuilder and MapBuilder, for defining rules and associating data model items with variables.
In this example, a date function was used to accept the input date and calculate a new date.
Profile Database
The Profile Database is a resource of values that can be accessed during a translation session. The Profile Database stores:
• Communication and trading partner profiles
• Substitutions, used to replace a label with a value
• Cross-references, used to replace a value with another value
• Verifications, used to verify a value against a specified code list
Refer to the Trade Guide for System Administration User's Guide for more information on the Profile Database.
Administration Database
The Administration Database consists of one or more files that are used to capture information from translation sessions. The Administration Database provides you with information for:
• Process tracking, recording information on all translation sessions
• Archive tracking, recording information on archived documents
• Message tracking, recording each outbound document translation along with a status
• Bypass tracking, recording information on all exception data (errors)
Refer to Section 6, "Translating and Debugging," for hints on using the Administration Database reporting features for debugging. Refer to the Trade Guide for System Administration User's Guide for more information on the setup and full reporting features of the Administration Database.
Environment Files
An environment file (given the extension ".env") can be used to enhance the current configuration of the translator. It declares user-defined environment variables with their associated values, for example:

    ACTIVITY_TRACK_SUM="DM_ActS"
    ACTIVITY_TRACK_DET="DM_ActD"
    MESSAGE_TRACK_IN="DM_MsgI"
    MESSAGE_TRACK_OUT="DM_MsgO"
    EXCEPTION_TRACK_SUM="DM_BypS"
    EXCEPTION_TRACK_DET="DM_BypD"

❖ Caution: Do not change the tsid file; doing so may corrupt administration reporting.
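The declarations above follow a simple KEY="value" shape. As a rough sketch of how such a file could be read (this is not the translator's actual parser; the parsing rules shown are an assumption for illustration):

```python
# Sketch: parse KEY="value" lines like those in the .env example above.
# The quoting and whitespace rules are assumptions, not the translator's.
ENV_TEXT = '''ACTIVITY_TRACK_SUM="DM_ActS"
ACTIVITY_TRACK_DET="DM_ActD"
MESSAGE_TRACK_IN="DM_MsgI"
'''

env = {}
for line in ENV_TEXT.splitlines():
    if not line.strip():
        continue
    key, _, value = line.partition("=")
    env[key.strip()] = value.strip().strip('"')

print(env["ACTIVITY_TRACK_SUM"])  # -> DM_ActS
```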
Trace Logs
When a translation is run, depending on the data model functionality and process, the system can automatically create up to three trace logs:

To access Workbench in UNIX
At the UNIX command line, type:

    oTTg w

(Only Workbench will be running.) The main Workbench window should appear. See the next page for details.
Workbench Main Window
[Figure: the main Workbench window, with callouts for the Control Menu, Title Bar, Minimize/Maximize Buttons, Menu Bar, Tool Bar, and Work Area, and toolbar icons for Open, Save, and MapBuilder]

Work Area
The Workbench display and work area. Minimized source data models, target data models, and map component files will display in this area. Select the icon and use the right mouse button to open a menu to work with these files. The minimize file icons are:
Workbench - File Menu
The figure below shows the Workbench File menu:

New (Map Components, Data Model)
    Allows you to create a new map component file (environment definition) or data model. The Map Components file contains a list of resource names that are brought together for a specific translation.

Open
    Allows you to open existing map component files or data models, to maintain, test, and debug resources. Ctrl+O is the keyboard shortcut for performing the same command.

Save All
    Allows you to save all the open data models and map component files. Throughout the use of Workbench, all changes are maintained in memory until saved. The source and target data models are saved independently. Alt+A is the keyboard shortcut for performing the same command.

Minimize All
    Minimizes all the open data models and map component files. Ctrl+M is the keyboard shortcut for performing the same command.

Print Setup
    Displays the Windows Print Setup dialog box. This dialog box allows you to change your default printer settings for the item to be printed.

Exit
    Exits from Workbench, prompting you to save if any changes were made. Returns to the Trade Guide main menu. Ctrl+Q is the keyboard shortcut for performing the same command.
Workbench - Tools Menu

MapBuilder
    Opens the MapBuilder window to allow you to map data from source to target data models. The keyboard shortcut Ctrl+L performs the same function. See the "Using MapBuilder" section in Section 2 for more details.

MapBuilder Preferences
    Opens the MapBuilder Preferences dialog box, which allows you to review or change the default settings within MapBuilder.
Workbench - Help Menu
This menu provides access to the on-line Help system. See the Preface for instructions on using the Help system.

Error Codes Reference
    Provides Help on the various Application Integrator error codes.

Keyword/Functions Reference
    Provides Help on Application Integrator environment keywords and data model functions.

About Workbench
    Opens a dialog box that lists the current Workbench release number, release date, server path, and copyright.
The access model sets three conditions for each item type: the pre-condition, the base, and the post-condition. The pre-condition describes any rules about the data that precedes this item, for example, a leading delimiter or "tag." The base describes the value or character set allowed for this item, for example, a set of alphabetic characters A-Z and a-z. The post-condition describes any rules about the data that follows this item, for example, a trailing delimiter.
If you review any of the access models supplied by Application Integrator, you will see these item type specifications in the format:

where:
ElementA is the name assigned to an item type. This name appears in the Item Type list box when defining data model items, and its meaning is established by its complete access model definition (pre-condition, base, and post-condition).
(Elem_delim) refers to a pre-condition delimiter set earlier in the access model.
^(Alpha) is the base condition. The caret (^) determines two things: 1) the value parsed is to be passed back to the data model, and 2) the item type should appear in the Item Type list box when this access model is associated with a data model. (Alpha) refers to another non-display base element in the access model which sets a range of acceptable alphabetic values. A base value preceded by a pound sign (#), such as #CHARSET, refers to access model functions that precisely describe the data with which they are associated.

❖ Note: The caret (^) must appear in front of the base condition for it to appear in the Item Type list and be used in the data model.
Tag Type
    Sets the size of the tag and the post delimiter.

Defining Type
    Sets the pre-condition value (delimiter before data), the character set, and any special formatting.

Container Type
    Sets the base value to CONTAINER. See the description of each "composite" item in the appropriate standards manual for examples.
When defining each data model item in your data model, you will specify an item type. The list of item types available is based on the access model you associated with the data model. The complete set of attributes associated with each data model item (such as the possible format or maximum occurrence) is also related to the item type.
Pre-Condition Values
The following is a list of some pre-condition values. Certain values may not apply to your standards implementation.

Elem_delim
    Defined by the access function #SECOND_DELIM or the data model function SET_SECOND_DELIM.
Base Values
The following is a list of some base values. Certain values may not apply to your standards implementation.

Seg_term
    Defined by the access function #FIRST_DELIM or the data model function SET_FIRST_DELIM.

RecordDelim
    Defined by the access function #FIRST_DELIM or the data model function SET_FIRST_DELIM.
For a list of all data model item types, refer to the appendixes of the Application Integrator standards implementation manuals, for example, the ASC X12 Standards Implementation Guide.
Parsing Blank or Empty Records
Data models must be modified to enable them to read through empty records, that is, records that contain only the record delimiter character (line feed). For version 3.0, the items LineFeedDelimContainer and AnyCharO have been added to the OTFixed.acc access model.
Use the OTFixed.acc access model with the following examples. Use the following input data, where (l/f) represents a single character, the line feed:

    1AB(l/f)
    2ABCD(l/f)
    (l/f)
    4DEFG(l/f)
    (l/f)
Example 1: In versions before 3.0, the following data model would
parse and display each record:
Init {
    []
    SET_FIRST_DELIM(10)
}*1 .. 1
Group {
    Record { LineFeedDelimRecord ""
        Field { AnyChar @0 .. 10 none }*0 .. 1
        []
        SEND_SMSG(1, STRCAT("READ: ", Field))
    }*0 .. 10
}*1 .. 1
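The parsing behavior described above can be mimicked in plain Python (an illustration only, not the data model language): split the input on the line-feed record delimiter, where an empty string then represents a record that contains only the delimiter, and read through those empty records.

```python
# Sketch: read the sample input, where '\n' (decimal 10, as in
# SET_FIRST_DELIM(10)) is the record delimiter. Empty strings correspond
# to records that contain only the delimiter.
data = "1AB\n2ABCD\n\n4DEFG\n\n"

records = data.split("\n")[:-1]  # drop the empty tail after the final delimiter
read = [r for r in records if r != ""]  # read through (skip) empty records
for r in read:
    print("READ:", r)
# -> READ: 1AB
#    READ: 2ABCD
#    READ: 4DEFG
```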
Viewing the Access Model
You can view the contents of the access model associated with the data model from the Layout Editor.

To view the access model associated with your data model:
1. From the View menu of the Layout Editor window, choose Access Model.
   A text display window appears with the access model displayed. Use the scroll bars to see the entire contents of the model.
2. You can search for text within the access model by using the Find option.
   To do this, choose the Find button to open a search-type dialog box. Type the text in the Find What value entry box and choose the Find Next button. Choose the Cancel button to exit this dialog box.
❖ Note: In most cases, the term for which you are searching is highlighted and the system scrolls to the first reference. To see every reference to a term, choose Find Next again. However, if the file you are viewing has long line lengths requiring horizontal scrolling, the system highlights the term but does not automatically scroll to it. Your indicator that the term has been found is the lack of a return message of Not found or Search wrapped around in a file. Usually, stretching the display window as large as possible reveals the highlighted terms; in other cases, manually scrolling through the file isolates the term.
[Figure: the Layout Editor window, with callouts for the Menu Bar, Tool Bar, Field Labels, Collapse/Expand Button, Access Icons, and RuleBuilder Icon]
Workbench Layout Editor Window
The Layout Editor window includes the following components:

Access Icons
    These icons represent the item type of the data model item: group, tag, container, and defining item.

Attribute Bar
    Contains column buttons for the model. You can drag and drop these buttons to rearrange the columns.

Collapse/Expand Button
    As a minus sign (-), it collapses the model to show only the highest hierarchical level of data models. As a plus sign (+), it expands the model to show all data model items.

Field Labels
    Field labels for the information entered for each data model item.

Highlighted Item
    Indicates the active data model item.

Horizontal and Vertical Scroll Bars
    Move the content of the work area left/right and up/down: drag the scroll bar to the desired position, click the unshaded area to jump the scroll bar up/down page by page, or click a scroll bar arrow to move the window one line at a time.

Menu Bar
    Contains menus that provide access to various operations within the modeler.

RuleBuilder Icon
    This icon shows that rules have been assigned to the data model item (when it appears colored). To open RuleBuilder, click the icon.
Collapsing/Expanding the Data Model Items
Two options allow you to control the amount of information you see in the Layout Editor window. One option allows you to collapse the entire data model, or a family (parent with siblings) within the data model, to see only the highest level entries. The reverse option allows you to expand the entire model or a family of data model items.

To view the entire data model:
• From the Layout Editor Data Model menu, choose Expand All Levels.
• Click the Expand All Levels icon from the topmost data model item.

To collapse the data model so that only the highest level is displayed:
• From the Layout Editor Data Model menu, choose Collapse All Levels.
• Click the Collapse All Levels icon.

To display the children of a data model item:
• Select the Expand icon to the left of the data model item.

To collapse the children of a data model item:
• Select the Collapse icon to the left of the data model item.
Layout Editor Menus
After creating a new data model or map component file, or after opening an existing data model or map component file, the Layout Editor window appears for the source and/or target data models with the following menus:

Layout Editor File Menu
The figure below shows the Layout Editor File menu:

Minimizing the Layout Editor
You may want to minimize a particular editing session for a source or target data model if you have several data models open at once. This avoids confusion among them.
Restoring the Layout Editor
To restore the Layout Editor for the current data model:
1. From the Workbench work area, click the data model icon to select it.
   To select and open a range of models, hold the Shift key down while clicking the other data models to select them.
2. Once you have selected the model(s) to open, click the right mouse button to display a menu.
3. Choose Restore from this menu.
Layout Editor Edit Menu
The figure below shows the Layout Editor Edit menu:

Undo and Redo Data Model Actions
The Undo and Redo functions can be performed on data model actions. The undo function allows you to reverse or cancel actions you've performed. Redo allows you to repeat actions that you've canceled. The undo and redo functions are found on the Edit drop-down menu and on the toolbar.
Undo and redo functions can be performed when the drop-down menu selections or icons on the toolbar are active (sensitive). Actions are either undone or redone, depending on the function selected. The types of actions that can be undone or redone include:
• cut
• copy
• paste
• duplicate
• change an item's label
• change an item's attributes: Item Type, Occ Min/Max, Size Min/Max, Format, Match Value, Version, File Sort, Increment
• insert or append an item
• change an item's level left or right
Multiple actions are tracked, allowing for several actions to be undone and then redone as necessary. However, this list of tracked actions is cleared when any of the following occurs:
• when the Layout Editor is first opened for the data model
• when MapBuilder is used
• when the Layout is saved
• when the Rules Editor is opened
• when applying rules
A separate list of actions is tracked for each Layout and Rules Editor combination. When the Rules Editor is opened for a Layout Editor, both editors place their actions on the same list, which can be undone or redone as necessary.
Cutting, Copying, and Cut, Copy, and Paste Clipboard functions can be performed on
Pasting Data Model individual data model items as well as an entire data model item
Items hierarchy. Cut or Copy puts the selected information on the
Clipboard. Paste takes the information from the Clipboard to the
location specified in your data model.
Duplicating Data Model Items
Duplicating performs the same operation as using Copy, then
Paste, but involves only one step. Duplicate does not copy and
paste the children of the selected item.
Finding Data Model Items
The Find option displays the Find dialog box, which provides the
ability to locate a Data Model Item by label. Fill in the appropriate
value entry boxes using the following table as a guide.
Find What Any part of the label can be entered. You can
narrow the search by selecting from the three toggle
buttons.
Find Next Locates the first occurrence of the item being
searched for. Additional clicks will locate additional
occurrences.
Cancel Stops the search and exits the Find dialog box.
Toggle Buttons You can narrow your search by determining
whether to:
• Match whole word only: This option looks for the entire
  character string entered in the Find What value entry box,
  not parts of words.
• Match case: This option looks for text with the same
  capitalization as the text entered in the Find What value
  entry box.
• Regular expression: This option is a shorthand way of
  specifying text patterns within the Workbench model,
  including literal characters, wild card characters, and
  repeated regular expressions.
❖ Note: If the file you are viewing has long line lengths requiring
horizontal scrolling, the system highlights the term, but does
not automatically scroll to it. Your indicator that the term has
been found is the lack of a return message of Not Found or
Search wrapped around file. Usually, stretching the display
window as large as possible reveals the highlighted terms; in
other cases, manually scrolling through the file will isolate the
term.
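The three toggles narrow a search in complementary ways. The following Python sketch illustrates the idea; it is not Workbench code, and the labels and `find` function are invented for illustration:

```python
import re

# Hypothetical sketch of how the Find toggles narrow a search
# over data model item labels.
labels = ["InvoiceNumber", "invoice_date", "Invoice", "LineItem"]

def find(pattern, whole_word=False, match_case=False, regex=False):
    flags = 0 if match_case else re.IGNORECASE
    if not regex:
        pattern = re.escape(pattern)          # treat the text literally
    if whole_word:
        pattern = r"^" + pattern + r"$"       # match the entire label only
    return [l for l in labels if re.search(pattern, l, flags)]

print(find("invoice"))                                    # substring, any case
print(find("Invoice", whole_word=True, match_case=True))  # exact label only
print(find(r"Invoice(Number)?", regex=True))              # regular expression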
Layout Editor Data Model Menu
The figure below shows the Layout Editor Data Model menu.
Layout Editor View Menu
The figure below shows the Layout Editor View menu.
Option Usage
Access Icons Allows you to view the icons representing the
item type and the rules for each data model item.
Selecting this entry again hides the access icons.
Grid Lines Allows you to view the grid lines between data
model items in the current Layout Editor
window. Selecting this entry again hides the grid
lines.
Input File Allows you to open the input file associated with
the current source and target data model. When
you select this menu entry, a dialog box appears
displaying the input file. You can also search for
a character string or regular expression by
selecting the Find button on this dialog box.
Output File Allows you to view the output file associated
with the current source and target data model
after a translation has taken place. You can view
the output file in either text or hexadecimal
mode. You can also search for a character string
or regular expression by selecting the Find
button on this dialog box.
Toggling Access Icons Off/On
Each data model item has an icon placed to the left of the
RuleBuilder icon which represents the item type classification
(group, tag, container, or defining). These icons are referred to as
access icons and are placed in the Layout Editor window by default.
❖ Note: Access icon settings are session-specific; they are not kept
for multiple sessions.
Toggling Grid Lines Off/On
By default, the system separates individual data model items in the
Layout Editor window with grid lines. These grid lines can be
turned off and on at will.
Layout Editor Debug Menu
The figure below shows the Layout Editor Debug menu.
Option Usage
Run Allows you to start a translation. The Run list box
displays translation configurations, from which a
translation can be invoked. The work area window
provides information about the translation's status,
including start/end time, session number, and
results. The keyboard shortcut Ctrl+U performs
the same operation.
Information about using the Run dialog box can be
found in Section 6.
Source to Target Allows you to generate a report detailing the
Map Listing mappings between source and target data models.
Data Model Allows you to generate a report of any data models
Listing in the current directory.
❖ Note: The Layout Editor Help menu provides the same menu
options as the Workbench main menu. Refer to the section,
Workbench Help Menu, in Section 1 for details on this menu
and using on-line Help.
Overview of Data Model Item Attributes
Depending on the item type, you will specify for each data model
item some subset of the following attributes:
Attribute Description
Occ Min/Max The minimum and maximum occurrence value
(Occurrence) controls the number of times a data model item
is required (min) and can be present (max) in
the data stream.
Size Min/Max The minimum and maximum size value controls
the data model item’s field size in the data
stream. Minimum=maximum for fixed length
data.
Format Sets the input or output format for data model
items defined as a date or time, or defines the
format for a numeric value.
Match Value For tag items, this optional feature compares a
value to the data in the input stream or defines a
value for the output stream.
During processing of the source data model, the
value in the Match Value box is compared to the
characters at the beginning of the record in the
input stream.
During processing of the target data model, the
value in the Match Value box will be
constructed in the output stream at the
beginning of the record.
Verify For defining items, you can enter the name of the
(Verification list from the Profile Database against which the
List) data for this item will be verified.
File The File option is only available for group items
(in both source and target data models). The File
option allows data to be parsed from or
constructed to the filename specified by this
option.
Sort The Sort option is only available for group items
in a target data model. The Sort option allows
you to set a sort sequence for the defining items
in a group, for example, to sort all detail records
by invoice number or date.
Increment The Increment/Non-Increment option is only
(Increment available on group items and only pertains to
Counter) MetaLink variables. Set to increment, this
option will increment the instance on all
MetaLink variables used with the group item
and its children items.
Creating a Data Model
This section describes how to create a new data model using these
basic processes:
• Defining a new data model.
• Opening an existing data model.
• Defining a data model item, assigning attributes to data
  model items, and assigning an item type.
• Establishing data hierarchy.
• Assigning rules to the data model item.
• Adding data model items until the data model is complete
  for either the source or target side.
• Saving the data model.
Both a source and target data model are necessary for a translation;
one or both may be standard models supplied by Application
Integrator. You can also define a data model based on a standard
model (copied and then modified).
To define a new data model
1. From the Transaction Modeler Workbench File menu, choose
New.
2. From the New menu, choose Data Model. The New Model
Definition dialog box will appear.
6. From the File menu, choose Save As. Since this is a new model,
a dialog box appears for you to specify a directory path and
type a new name. Complete this dialog box.
You are now ready to define the structure of the model.
Opening an Existing or Standard Data Model
To return to a model previously created for further data modeling,
or to open a standard model shipped with an Application Integrator
standards implementation, follow these directions.
❖ Note: Standard models and ID code files are found in the working
directory.
To open an existing data model or standard model
1. From the File menu, choose Open to display the following
dialog box.
The current path is displayed in the top center button (UNIX) or
in the Look in box (Windows 95 and NT 4.0).
4. From the Layout Editor of the new data model, make all the
changes you need, for example, copying or duplicating data
model items.
Defining a Data Model Item
Adding Data Model Items
You have several options for adding new data model items to a
data model. The following section describes each method.
Note: The default hierarchy level for a new item is the same as that
of the original data model item.
Changing the Name of a Data Model Item
Each data model item you add has a default name of NewItem.
Assigning an Item Type
For each data model item you add to your model, you must assign
an item type. The options you view in the Item Type selection list
are based on the access model associated with your model.
❖ Hint: If you are unsure of the exact definitions of the item types,
you can view the access model associated with your model. To do
this, from the Layout Editor View menu, choose Access Model.
Data Model Item Structures
There are four major data model item structures: group, tag,
defining, and container items. One or more item type names may
be associated with each of these structures, based on your access
model. All data model items default to the item type Group. Once
you define the item type, the leftmost icon in the Layout Editor
(referred to as the access icon) will change to reflect the major
structural type of the item:
Group
Tag
Defining
Container
Assigning Attributes to Data Model Items
Occ Min/Max
Setting Minimum and Maximum Occurrence
The minimum and maximum occurrence value controls the number
of times a data model item must and can be present in the data
stream. The minimum and maximum occurrence value of a new
data model item is user-defined. The default is 1. The minimum
occurrence value must be less than or equal to the maximum
occurrence value. A minimum occurrence of 0 indicates that the
data model item is optional. The maximum value can be set with
the asterisk (*) wild card to specify a variable amount.
2. For each box, type a numeric value that specifies the minimum
and maximum occurrence.
3. To accept the values entered, click outside the box or press Tab
to move to the next option.
Size Min/Max
Setting Minimum and Maximum Size
The minimum and maximum size value controls the data model
item’s field size in the data stream. The minimum and maximum
size value of a new data model item is user-defined. The minimum
size value must be less than or equal to the maximum size value.
The size maximum value cannot exceed 4092.
2. For each box, type a numeric value that specifies the minimum
and maximum size allowable for data mapped to this item.
3. To accept the values entered, click outside the box or press Tab
to move to the next option.
Format
Defining the Data Model Item’s Format
The data model item Format box is only available if the data model
item is defined as a date, time, or numeric item type.
❖ Note: See the on-line Help for examples of possible numeric, date,
and time formats.
Ø To add a format
1. Select the data model item to be modified and select the Format
box.
2. Type the format for the date, time, or numeric field using the
numeric and sign masking characters described in this section,
for example, you might type “MM/DD/YYYY” for a date item.
For a numeric field, be sure to consider the decimal placement,
positive or negative sign, and alignment desired.
3. To accept the new format, click outside the box or press Tab to
move to the next option.
Mask Example
9 Zero fill whole leading or decimal trailing zero digits
Examples:
123 → “99999” → “00123”
1.1 → “99.99” → “01.10”
Z Space fill whole leading or decimal trailing zero digits
Examples:
123 → “ZZZZZ” → “  123”
1.1 → “ZZ.ZZ” → “ 1.1 ”
F Suppress whole leading or decimal trailing zero digits
(variable length)
Examples:
000123 → “FFFFF” → “123”
1.100 → “FF.FF” → “1.1”
$ Monetary symbol, treated like the “F” mask character,
but inserts the dollar sign at the beginning of the string
(variable length)
Examples:
134567 → “$ZZZ,Z99.99” → “$ 134,567.00”
134567 → “$FFF,F99.99” → “$134,567.00”
1.25 → “$$$,$$$.99” → “$1.25”
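The zero-fill, space-fill, and suppress behaviors can be sketched for a whole-number field as follows. This is illustrative Python, not the translator's implementation, and it handles only uniform masks without decimal points:

```python
# Hypothetical sketch of the "9", "Z", and "F" fill behaviors
# for a whole-number field.
def fill(value: int, mask: str) -> str:
    digits = str(value)
    width = len(mask)
    if mask[0] == "9":            # zero-fill leading positions
        return digits.rjust(width, "0")
    if mask[0] == "Z":            # space-fill leading positions
        return digits.rjust(width, " ")
    if mask[0] == "F":            # suppress leading zeros (variable length)
        return digits
    raise ValueError("unsupported mask")

print(fill(123, "99999"))   # "00123"
print(fill(123, "ZZZZZ"))   # "  123"
print(fill(123, "FFFFF"))   # "123"
```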
Sign (Masking) Character — Explanation and Examples

N Displays a negative sign for a negative value. No character
is used to indicate a positive value.
Negative: -123 → “99999N” → “00123-”
Positive: 123 → “99999N” → “00123”

- (use the hyphen character) Displays a negative sign for a
negative value. Displays a space for a positive value.
Negative: -123 → “99999-” → “00123-”
Positive: 123 → “99999-” → “00123 ”

None No character is used to indicate a positive or negative
value.
Negative: -123 → “99999” → “00123”
Positive: 123 → “99999” → “00123”

+ (use the plus sign character) Displays a negative sign for a
negative value. Displays a plus sign for a positive value.
Negative: -123 → “99999+” → “00123-”
Positive: 123 → “99999+” → “00123+”

_ (use the underscore character) Displays a negative sign for a
negative value. Displays a zero (0) for a positive value. The zero
is dropped when only whole digits and right justified.
Negative: -123 → “99999_” → “00123-”
Positive: 123 → “_99999” → “000123”
123 → “999.99_” → “001.230”
123 → “99999_” → “000123” (right justified)

A (must be placed in the rightmost position) The ASCII
overpunch table is used to indicate a negative or positive value.
Negative: -123 → “99999A” → “00012s”
Positive: 123 → “99999A” → “000123”

E (must be placed in the rightmost position) The EBCDIC table
is used to indicate a negative or positive value.
Negative: -123 → “9999E” → “0012L”
Positive: 123 → “9999E” → “0012C”
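Overpunch folds the sign into the last digit. The sketch below reproduces the manual's "A" and "E" examples using common overpunch conventions; the actual tables used by the translator may differ, so treat this as a hedged illustration only:

```python
# Hedged sketch of trailing sign overpunch. The character tables are
# assumed from common conventions and match the manual's examples.
ASCII_NEG = "pqrstuvwxy"      # digit 0..9 when the value is negative
EBCDIC_POS = "{ABCDEFGHI"     # digit 0..9 when the value is positive
EBCDIC_NEG = "}JKLMNOPQR"     # digit 0..9 when the value is negative

def overpunch(value: int, width: int, table: str) -> str:
    digits = str(abs(value)).rjust(width, "0")
    last = int(digits[-1])
    if table == "A":          # ASCII: only negatives are overpunched
        sign_char = ASCII_NEG[last] if value < 0 else digits[-1]
    else:                     # EBCDIC: both signs are overpunched
        sign_char = (EBCDIC_NEG if value < 0 else EBCDIC_POS)[last]
    return digits[:-1] + sign_char

print(overpunch(-123, 6, "A"))   # "00012s"
print(overpunch(-123, 5, "E"))   # "0012L"
print(overpunch(123, 5, "E"))    # "0012C"
```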
The mask characters indicate byte order: lowercase b/p for LSB
platforms, uppercase B/P for MSB platforms.

Platform Binary Packed
Intel/NT (LSB) b p
Intel/SCO‡ (LSB) b p
Intel/Linux‡ (LSB) b p
DEC Alpha OSF/1 (LSB) b p
Digital UNIX (LSB) b p
HP PA–RISC (MSB) B P
Sun SuperSparc (MSB) B P
IBM (MSB) B P
IBM PowerPC‡ (MSB) B P
SGI MIPS (MSB) B P
Motorola 680x0 (Macintosh)‡ (MSB) B P
‡ As of date of publication, Application Integrator is not available on these
platforms.
For example, to format a field for mainframe data that will contain
the packed values of +123, -123, and 123, you would use the format
‘PP’. The translator would read and store the values as follows:
Mask Usage
:L Left justify
:R Right justify
Examples:
12 → “ZZZZZ.ZZ” → “ 12 ”
12 → “ZZZZZ.ZZ:L” → “12 ”
12 → “ZZZZZ.ZZ:R” → “ 12”
, or . Triads: “,” or “.” can be used with “9”, “F”, “Z”, and “$”,
but not with “R”, for the thousand position placement
character.
@ Escapes literal characters defined within the format.
(Escape) Example:
“@For: $ZZ,ZZZ” → escapes the “F” literal.
Mask Usage
M Date location for month, requires two Ms
Example:
19940902 → “MM/DD/YY” → “09/02/94”
D Date location for day of month, requires two Ds
Example:
19940902 → “DD/MM/YYYY” → “02/09/1994”
Y Date location for year, requires one, two, or four Ys
Example:
19940902 → “YMMDD” → “40902”
m Replaces leading month digit (if zero) with space
Example:
19940902 → “mM/DD/YY” → “ 9/02/94”
d Replaces leading day digit (if zero) with space
Example:
19940902 → “dD/MM/YY” → “ 2/09/94”
0 Defines a date of all zeros to be constructed
(# DATE_NA)
y Date location for variable length year, must be in this
form: “yyYY”
<space> A space “ ” as a leading character in a mask defines a
date of all spaces to be parsed or constructed
(# DATE_NA)
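Applying a date mask amounts to slicing the internal YYYYMMDD value into the positions named by the mask. The following sketch is illustrative Python (not the translator's code) and covers only the two-digit M/D and two- or four-digit Y tokens:

```python
# Hypothetical sketch of applying a date mask to an internal
# YYYYMMDD value.
def format_date(yyyymmdd: str, mask: str) -> str:
    parts = {"YYYY": yyyymmdd[0:4], "YY": yyyymmdd[2:4],
             "MM": yyyymmdd[4:6], "DD": yyyymmdd[6:8]}
    out = mask
    for token in ("YYYY", "MM", "DD", "YY"):   # longest year token first
        out = out.replace(token, parts[token])
    return out

print(format_date("19940902", "MM/DD/YY"))     # "09/02/94"
print(format_date("19940902", "DD/MM/YYYY"))   # "02/09/1994"
```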
Mask Usage
H Time location for mandatory hours, requires two Hs
(Required) Example:
120959 → “HH:MM:SS” → “12:09:59”
M Time location for mandatory minutes, requires two Ms
(Required) Example:
120959 → “HH:MM:SS” → “12:09:59”
S Time location for mandatory seconds, requires two Ss
Example:
120959 → “HH:MM:SS” → “12:09:59”
s Time location for optional seconds, requires two s’s
(Source Examples:
only) 1209 → “HH:MM:ss” → “12090000”
120959 → “HH:MM:ss” → “12095900”
D Time location for mandatory decimal seconds,
requires two Ds
Example:
12095900 → “HH:MM:SS:DD” → “12095900”
d Time location for optional decimal seconds, requires
(Source two ds
only) Examples:
120959 → “HH:MM:SS:dd” → “12095900”
1209591 → “HH:MM:SS:dd” → “12095910”
12095912 → “HH:MM:SS:dd” → “12095912”
<space> A space “ ” as a leading character in a mask defines
a time of all spaces to be parsed or constructed
(# TIME_NA). The value parsed and passed back to
the source data model will be spaces, not zeros.
Target Processing
• H, M, S, and D are the target formatting characters. Use of
  the source masking characters ‘s’ or ‘d’ will be taken as
  literals and output as such, for example, “12:14:ss:dd.”
• The value received is first converted to an 8-digit number
  by adding trailing zeros and then output based on the format
  definition. If the value is a single digit (e.g., “2”), a leading
  zero is first inserted before the trailing zeros are added (e.g.,
  “02”).
• A value of more than 8 digits generates error code 146.
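The target-side normalization described above can be sketched directly. This is a hypothetical illustration, not the translator's code; the function name is invented:

```python
# Hypothetical sketch of target-side time normalization:
# pad the received value to 8 digits before formatting.
def normalize_time(value: str) -> str:
    if len(value) > 8:
        raise ValueError("error code 146: value longer than 8 digits")
    if len(value) == 1:          # single digit gets a leading zero first
        value = "0" + value
    return value.ljust(8, "0")   # then trailing zeros pad to 8 digits

print(normalize_time("1209"))    # "12090000"
print(normalize_time("2"))       # "02000000"
```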
3. From the List box, select each defining item by which to sort the
data model items in the group. To place the item in the Sort
box, choose the >> button. To remove an item from the Sort,
select it and choose the << button, returning it to the List box.
The first item you select is the primary sort, the second item
becomes the secondary sort, and so forth.
4. Choose the Apply button to save your sort order for the group
or choose the Cancel button to return to the Layout Editor
window without specifying a sort order.
Once you return to the Layout Editor and select another area of
the window, the sort order appears in the Sort box. To review
or edit the complete list of defining items (since only the first
few characters of the primary sort appear in the box), select the
Sort box and click the ellipses to return to the Sort dialog box.
Establishing Data Hierarchy
When a new data model item is inserted or appended, it is placed
into the data model at the same hierarchy level as the original data
model item.
Parent Item 1
    Child Item 1
    Child Item 2
    Child Item 3
Parent Item 2
The hierarchy also determines processing flow, child to sibling and
then back to parent.
Parent (3)
    Child (1)
    Sibling (2)
Refer to the “Understanding Environments” section in Section 4 for
a discussion of processing flow.
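The child → sibling → parent order shown above is a post-order walk of the item tree. The following sketch is illustrative Python, not Workbench code, and the item structure is invented:

```python
# Hypothetical sketch of the child -> sibling -> parent processing
# order, as a post-order walk of the item tree.
def process(item, visit):
    for child in item.get("children", []):   # children first, in sibling order
        process(child, visit)
    visit(item["label"])                     # then back to the parent

order = []
model = {"label": "Parent", "children": [{"label": "Child"}, {"label": "Sibling"}]}
process(model, order.append)
print(order)   # ['Child', 'Sibling', 'Parent']
```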
Including Files in Data Models
The Include… option allows you to attach Include files to your data
models. Include files contain rules that you can reference from
your data model so you can use them once or multiple times. The
Include file’s extension is “.inc”. The rules are in the form of
declare statements.
To access the Include option
1. From the Layout Editor or the RuleBuilder File main menu,
choose File. The File drop-down menu appears.
2. Choose Include… The Include dialog box appears.
The Include dialog box displays Available Files on the left side
and Included Files on the right. The Available Files are those
Include files that are available to this data model. The
Available Files cannot be accessed by the data model until they
are linked to the data model. This is done by moving the
filename from the Available Files list into the Included Files list,
then applying the change and saving the data model. The
following table describes the items found on the Include dialog
box.
Item Description
Available Files This list box displays the filenames of the
Include files available to this data model.
Included Files This list box displays the filenames of the
Include files that will be or are linked to
the data model.
<< Choosing this button will move the filename
from the Included Files list to the Available
Files list.
>> Choosing this button will move the filename
from the Available Files list to the Included
Files list.
Apply Saves the changes.
View Allows you to review the highlighted file.
Cancel Exits the Include dialog box.
To link an Include file to a data model
1. From the Layout Editor or the RuleBuilder File main menu,
choose File. The File drop-down menu appears.
2. Choose Include… The Include dialog box appears.
3. In the Available Files list box, highlight the filename of the
Include file to be linked to the data model.
4. Choose the >> button. The filename will move from the
Available Files list box to the Included Files list box.
5. To complete the entry, choose the Apply button.
To unlink an Include file from a data model
1. From the Layout Editor or the RuleBuilder File main menu,
choose File. The File drop-down menu appears.
2. Choose Include… The Include dialog box appears.
3. In the Included Files list box, highlight the filename of the
Include file to be unlinked from the data model.
4. Choose the << button. The filename will move from the
Included Files list box to the Available Files list box.
5. To complete the entry, choose the Apply button.
To view an Include file
1. From the Include dialog box, highlight the filename of the file to
be viewed.
2. Choose the View button. The View dialog box will appear.
Assigning Rules to Data Model Items
When you assign rules to data model items, you are adding
processing logic to your model. Information about adding rules
can be found in Section 3, Building Rules into Data Models.
Saving a Data Model
It is good modeling practice to save your model frequently during
development. It is recommended that you save your data models
and map component files to the working directory.
Note: During the Save operation in the Layout Editor, a set of rules
[ ]
; mapbuilder predefined do not remove this line
To avoid this problem, choose the OK or the Not Used button on the
message box. Insert another Group type item under the top level
item and define the sort on it. Save the data model.
To save a data model
1. Activate the Layout Editor window of the data model to be
saved. (Click the title bar of the window to activate it.)
2. Save the data model in one of the following ways:
• Menu – From the File menu, choose Save.
• Toolbar Icon – Click the Save icon.
3. If you have already named your data model, the application
will save the work under the current name. If you have not
named the data model, a dialog box appears for you to enter a
path and name.
;; MapBuilder (predefined)
Do Not Remove This Line
VAR>OTTargetSuccessful
VAR>OTSourceSuccessful
PERFORM(OTSrcEnd)
PERFORM(OTTrgEnd)
To save a data model under a new name
1. Activate the Layout Editor window of the data model to be
saved with a new name. (Click the title bar of the window to
activate it.)
2. From the File menu, choose Save As.
3. In the Save As dialog box that appears (see the figures below to
note differences between operating systems), type the new
name in the box provided.
To completely exit from the Layout Editor
1. Activate the Layout Editor window to be exited. (Click the title
bar of the window to activate it.)
2. From the File menu, choose Close Editor or press Ctrl+Q.
If changes were made but not saved or applied (rules), a
prompt displays asking if you want to apply and save changes.
Format Representation Report
You can create a report that shows the different formats used in
defining a data model by running a translation that contains a
model and map component file which contain examples of each of
the valid formats. They have been provided for the evaluation of
target formatting.
In UNIX, the translation can be executed by running the following
script at the command line:
OTFmt.sh
Output is automatically displayed to the screen from the file
OTFmt.out. (This filename is defined in the map component file
OTFmt.att).
In Windows, the translation is executed by issuing the command:
otrun.exe -at OTFmt.att -cs %OT_QUEUEID% -I
You can then use a Windows editor, such as MS Word or Notepad,
to view the file OTFmt.out. In Windows, the program must print to
a file named OTFmt.out.
Standard Data Model Listings
Two listings are available to show the contents of data models
derived from a standard data model. The Group/Tag listing
includes the Group and Tag data model item types. The All Data
Model Items listing includes all data model item types: Group,
Tag, and Defining.
For Group item types, the description will be the data model item
label. For Tag type items, the description will be a cross-reference
of the match value to a database-stored description.
For Defining type data model items, the cross-reference is based on
the element number (included in the data model item label) to a
database-stored description. If the data model item label does not
include the element number (as in standard data models created
before version 3.0), the description is the data model item label. All
descriptions are based upon the Standard: ASC X12,
UN/EDIFACT, or TRADACOMS.
Running the All Data Model Items Listing
Ø To run the All Data Model Items listing
Overview of Rules Entry
Rules allow for the movement of data from the source to the target
data model. Rules can be placed on any type of data model item in
the data model (group, tag, container, or defining items) to
describe how data is referenced, assigned, and/or manipulated.
In the source data model (input side), the rules are normally placed
on the parent item (tag) to ensure the entire tag has been parsed in
and validated before any rules are executed and data mapping
occurs. In the target data model (output side), the rules are placed
on the defining items in order to specify from which variables
values are to be mapped. (These variables have been assigned
via rules in the source data model.)
Modes for Processing Rules
There are three modes for processing rules, available for all data
model items within a data model. They are performed in the
following sequence:
Mode Description
PRESENT Rules will be performed when entering rules
processing with a status of 0
(no errors).
ABSENT Rules will be performed when entering rules
processing if one of the following statuses is found:
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
PRESENT mode with the same statuses.
ERROR Rules will be performed when entering rules
processing with any status other than the following:
0-okay
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
ABSENT mode processing with a non-zero status.
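The mode selection above is a three-way dispatch on the status code. The following sketch is illustrative Python, not Workbench code; the function name is invented:

```python
# Hypothetical sketch of selecting the rule-processing mode from
# the status codes listed above.
ABSENT_STATUSES = {138, 139, 140, 171}   # item/value/instance/children missing

def select_mode(status: int) -> str:
    if status == 0:
        return "PRESENT"                 # no errors
    if status in ABSENT_STATUSES:
        return "ABSENT"
    return "ERROR"                       # any other status

print(select_mode(0))     # "PRESENT"
print(select_mode(140))   # "ABSENT"
print(select_mode(146))   # "ERROR"
```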
Types of Rule Conditions
Each rule consists of a condition with one or more actions. There
are two types of conditions: Null and Conditional Expression.
A Null condition is always true, so the actions will always be
performed. It is also referred to as No Condition.
With a Conditional Expression, the condition must evaluate to true
before the actions will be performed. Any data model item can have
one or more conditions, and each condition can have one or more
actions.
Variables
Variables are the links between the source and target data model
items. There are three types of variables supported by Application
Integrator, as noted in the following table:
Variable Description
Variable This type of variable is a single value, also referred to
as a temporary Variable. If more than one
assignment is made to the same variable name, the
last assigned value is the value that will be
referenced. A variable is useful for referencing the
same value multiple times, as a counter, or in a
concatenation.
Array This type of variable is a list of values. Manual
controls are recommended with this variable
whenever multiple levels in the data model are
mapped. These controls are used to ensure that the
proper data stays together, such as: detail records
with the proper header record or sub-detail records
with the proper detail records. There are a set index
and a reference index associated with the list of
values. The set index points to the last value placed
on the list and the reference index points to the next
value to be referenced from the list. The reference
index can be reset to the top of the list by using the
data model keyword RESET_VAL.
MetaLink This type of variable is a list of values. A data model
item’s instance and its parent’s instance are
maintained with each value placed on the variable.
These instances eliminate the need for manual
controls to ensure that the proper data stays together,
such as: detail records with the proper header record
and sub-detail records with the proper detail record.
Only a source data model can assign values to the
MetaLink. If the target data model attempts to assign
a value to the MetaLink, the last obtained value will
be overwritten with the new value. Only the target
data model can reference a value from the MetaLink.
[Figure: a list variable holding the values Prod_1 through Prod_5]
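The Array variable's set and reference indexes can be sketched as follows. This is a hypothetical Python illustration, not Workbench code; the class and method names are invented, and only RESET_VAL's rewind behavior is modeled:

```python
# Hypothetical sketch of Array variable semantics: a set index tracks
# the last value placed on the list, a reference index walks the list,
# and RESET_VAL rewinds the reference index to the top.
class ArrayVariable:
    def __init__(self):
        self.values = []
        self.ref = 0                  # reference index: next value to read

    def assign(self, value):          # set index advances with each value
        self.values.append(value)

    def reference(self):              # read the next value from the list
        value = self.values[self.ref]
        self.ref += 1
        return value

    def reset_val(self):              # data model keyword RESET_VAL
        self.ref = 0

products = ArrayVariable()
for p in ["Prod_1", "Prod_2", "Prod_3"]:
    products.assign(p)
print(products.reference())   # "Prod_1"
print(products.reference())   # "Prod_2"
products.reset_val()
print(products.reference())   # "Prod_1" again
```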
Two Methods for Creating Rules
There are two different methods for creating rules to map your
data:
• RuleBuilder
• MapBuilder
RuleBuilder allows you to create customized mapping rules. Using
RuleBuilder, you have access to the full functionality of the
Workbench rules system. Depending on the expertise of the
developer, rule definition can be done either in a free-format text
editor or through prompting via the RuleBuilder interface. When
using the RuleBuilder, the order that the rules appear in is the order
in which they will be executed during a translation session.
Children data model items are acted on before parent data model
items, hence the re-ordering of the rules to match the execution
order.
RuleBuilder provides a series of tabbed pages or “tabs” which
organize the components of rules (conditions, data model items,
functions, variables and so forth) into categories. Using the mouse
or keyboard shortcuts, you can quickly build the data model logic.
The RuleBuilder interface is described in the “Using RuleBuilder”
section.
MapBuilder is an automated way of applying rules on data model
items. MapBuilder uses a drag and drop feature to map from
source to target data model. The rules are placed on the defining
items only and are a NULL condition (that is, the actions will
always be performed). In the source data model, MapBuilder
creates a rule that assigns a data model item’s value to a variable.
In the target data model, MapBuilder creates a rule that references
the variable for its value and assigns it to the data model item.
MapBuilder is an efficient way to map from source to target data
models when the input and output stream are the same, or
extremely similar, in structure. Refer to the “Using MapBuilder”
section for details.
RuleBuilder Window
RuleBuilder provides an interface for quickly defining null and
conditional expressions. The following illustration shows the
RuleBuilder interface:
[Figure: the RuleBuilder window, with callouts for the Rule Notebook, toolbar, and tabs]
The RuleBuilder Window has two parts: the Rule Notebook (the left
portion of the window) and the Rule Edit Workspace (the right
portion of the window). You can directly type lines of rule
expressions in the Rule Edit Workspace or you can use the Rule
Notebook options to create expressions via mouse selection.
The Rule Notebook contains tab pages and each page represents
different rule components. You can bring any page forward by
clicking its tab, for example, the illustration shows the DM Items
tab. You can also use the arrow keys at the bottom of the Rule
Notebook to move between pages.
The Rule Edit Workspace is a list of rules to be processed during
translation. When you open RuleBuilder, the entire file displays in
the order in which the existing rules are to be executed (not the order of
the data model items as displayed in the Layout Editor window).
The insertion point is placed on the data model item from which
you opened RuleBuilder. The Present, Absent, and Error mode
rules display for each data model item.
RuleBuilder Toolbar
The RuleBuilder toolbar provides buttons to quickly add rules
logic. The description of the function of each button can be found in
the RuleBuilder–File Menu and the RuleBuilder–Edit Menu
sections.
[Figure: RuleBuilder toolbar buttons — Cut, Paste, Insert Assignment, Insert Return, Apply]
Menu Option Toolbar Icon Keyboard Shortcut Description
File→Includes… Displays the Include
dialog box.
File→Apply Ctrl+A Provides options for
applying the rules.
RuleBuilder – Edit Menu
The Edit menu provides options for working with the Rule Edit
Workspace.
The following table indicates the drop-down menu options, their
corresponding toolbar icons and keyboard shortcuts, and
descriptions for each of the RuleBuilder items.
RuleBuilder Help Menu
The Help menu provides options for working with the Help
facility. Additional information about using the Help system can
be found in the Preface.
Accessing RuleBuilder
To display the 1. From the Layout Editor dialog box, highlight the data model
RuleBuilder dialog box item to which you want to add rules.
Adding/Modifying Rules Once you open the RuleBuilder window, you are ready to add or
modify rules of the data model. As entries and selections are made
from the Rule Notebook, they appear in the Rule Edit Workspace.
When you open the RuleBuilder window, the focus is on the
present mode rules of the data model item you have currently
selected.
The same methods are used to insert PRESENT, ABSENT, or
ERROR mode rules.
Rules for any data model item can be displayed or hidden by
clicking the Collapse or Expand icons to the right of the data model
item's name in the Rule Edit Workspace. Empty brackets [ ]
indicate that no rules are defined for the data model item. An
equal sign within the brackets [=] indicates that rules are defined
for the data model item. An example of an expanded rule is shown
in the following illustration.
To insert an Assignment
1. Either highlight the data model item to add rules to and open
RuleBuilder, or once in the Rule Edit Workspace, move the
insertion pointer to the data model item to which you want to
add rules. Be sure to move the insertion pointer to the mode to
which you want to add rules (Present, Absent, or Error).
2. To insert an Assignment = at the insertion point, use one of the
following methods:
• Menu: From the Edit menu, choose Insert Assignment.
• Toolbar Icon: Click the Insert Assignment icon.
• Keyboard Shortcut: Press Ctrl+G.
• Keyboard: Press = (equal sign).
3. Insert the appropriate statements for the rule by either typing
them directly into the Rule Edit Workspace or by following the
procedure for inserting a Rule Notebook option.
4. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
• Menu: From the File menu, choose Apply.
• Toolbar Icon: Click the Apply icon.
• Keyboard Shortcut: Press Ctrl+A.
❖ Caution: Rules are not saved in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.
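As a sketch of the result, an applied assignment on a Defining item's Present mode might appear in the Rule Edit Workspace as follows (the item and variable names here are hypothetical):

```
PartNo { AlphaNumericFld @1 .. 10 none
[]
VAR->PartNo = PartNo
}*1 .. 1
```

The empty brackets [ ] form a null condition, so the assignment executes whenever the item is present.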
To insert a Condition
1. Either highlight the data model item to add rules to and open
RuleBuilder, or once in the Rule Edit Workspace, move the
insertion pointer to the data model item to which you want to
add rules. Be sure to move the insertion pointer to the mode to
which you want to add rules (Present, Absent, or Error).
2. To insert a conditional expression at the insertion point, use one
of the following methods:
• Menu: From the Edit menu, choose Insert Condition.
• Toolbar Icon: Click the Condition icon.
• Keyboard Shortcut: Press Ctrl+I.
• Keyboard: Type [=] (bracket, equal sign, bracket).
3. Insert the appropriate statements for your rule by either typing
them directly into the Workspace or following the procedure
for inserting a RuleBuilder Tab option.
4. Save your changes to the Rule Edit Workspace. Changes to the
Rule Edit Workspace are not complete until they are applied,
using one of the following methods:
• Menu: From the File menu, choose Apply.
• Toolbar Icon: Click the Apply icon.
• Keyboard Shortcut: Press Ctrl+A.
❖ Caution: Rules are not saved in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.
To insert a Rule Notebook option
1. Make sure the insertion point is placed in the Rule Edit
Workspace under the desired data model item name and under
the label for the mode for which you are adding a rule
expression.
2. Select the Rule Notebook tab from which you want to select
options.
3. Double-click the desired option or value. For example, to place
a keyword into the rule, click the Keyword tab, scroll through
the list, and double-click the desired keyword.
4. To add new entries for Arrays, Variables, MetaLinks and
Substitutions, click the appropriate tab, click in the entry box at
the top of the list, type in the desired value and press Enter.
The Rule Notebook will be refreshed to display the new entry.
5. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
• Menu: From the File menu, choose Apply.
• Toolbar Icon: Click the Apply icon.
• Keyboard Shortcut: Press Ctrl+A.
❖ Caution: Rules are not recorded in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.
To insert literals
1. Use any of the following methods to insert a literal:
• Menu: From the Edit menu, choose Insert Literal.
• Toolbar Icon: Click the Literal icon.
• Keyboard Shortcut: Press Ctrl+L.
• Keyboard: Type "<Literal>" (enclose the text of the Literal
between the left and right quotation marks).
2. The insertion cursor will be placed between the quotation
marks. Type the text to be interpreted literally.
3. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
• Menu: From the File menu, choose Apply.
• Toolbar Icon: Click the Apply icon.
• Keyboard Shortcut: Press Ctrl+A.
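For instance, an applied rule assigning a literal to a temporary variable might read as follows (the variable name and literal text are hypothetical):

```
VAR->RecordType = "HDR"
```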
To insert comments into a data model
Comments can be inserted into a data model to describe the process
being modeled, to identify modifications to models, or to explain
rules. Comments can be placed on individual lines or immediately
following a rule.
1. Make sure the insertion point is placed in the Rule Edit
Workspace under the desired data model item name and under
the label for the mode for which you are adding a comment.
2. To insert a comment on its own line,
a. Place the cursor in the first position of an empty line.
b. Type a semicolon character (;). This indicates to
RuleBuilder that a comment will follow and that all text
appearing after the semicolon should be ignored.
c. Type the comment immediately after the semicolon. You
can enter any character into the comment except the
following special characters: {, }, @, *, and |.
d. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.
3. To insert a comment immediately following a rule,
a. When you reach the end of the rule, type a space followed
by the semicolon character (;). This indicates to
RuleBuilder that a comment will follow and that all text
appearing after the semicolon should be ignored.
b. Type the comment immediately after the semicolon. You
can enter any character into the comment except the
following special characters: {, }, @, *, and |.
c. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.
❖ Caution: Rules are not recorded in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.
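Combining both placements, a commented rule might look like this sketch (the item name follows the loop control example later in this section):

```
; assign the header date for use in the target model
ARRAY->HeadDate = HeadDate ; trailing comment after a rule
```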
RuleBuilder Tabs The RuleBuilder tabs allow you to easily add information to your
set of rules.
Conditions Tab From the Conditions tab, you can select a conditional expression
type for analyzing your data. RuleBuilder places the complete
syntax of the expression in the Rule Edit Workspace.
Ø To add conditions
1. From the Rule Edit Workspace, insert a condition.
2. From the RuleBuilder tabs, choose the Conditions tab. The
available conditions will appear in the rule notebook.
3. From the rule notebook, double-click the type of conditional
expression to add. For example, if you double-click the Equals
option (to select and paste it), RuleBuilder places the expression
in the Rule Edit Workspace:
<operand> = <operand>
4. You can replace the <operand> prompts by typing over them
with the correct operands, or press Ctrl+F, or use the Next
Parameter icon ( ) to move from parameter to parameter
completing the conditional expression. (The system
automatically deletes the prompts.)
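For example, completing the pasted Equals expression with hypothetical operands might produce a rule such as:

```
[DocType = "PO"]
VAR->IsPurchaseOrder = "Y"
```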
Data Model Items Tab The Data Model (DM) Items tab provides a list of all the items in
your data model for use in constructing expressions. This list can be
sorted alphabetically by data model item name, or hierarchically by
placement in the data model.
The default sort option is “by Name.”
Operators Tab From the Operators tab, you select one of the following operators
for use in the rules for manipulating data.
Arrays Tab The Arrays RuleBuilder tab displays the array names available for
the chosen data model or allows you to add arrays to the data model.
Refer to the “Variables” section earlier in this section for a
discussion of Application Integrator arrays.
Variables Tab The Variables tab displays a list of the temporary variables
available in this data model or allows you to add temporary
variables to the rules. Refer to the “Variables” section earlier in this
section for a discussion of Application Integrator temporary
variables.
MetaLinks Tab The MetaLinks tab displays the MetaLinks available in this data
model or is used to add MetaLinks to the data model. Refer to the
“Variables” section earlier in this section for a discussion of
Application Integrator MetaLinks.
Substitutions Tab The substitution variable is a single value variable where a label is
replaced with a value. Each label is associated with a value entry
box of the Trading Partner Profile dialog box (or other Application
Integrator value entry box). During processing, the value entered
in the box is returned when that label is used as a substitution. The
dollar sign ($) precedes all substitution variable names.
The Substitutions tab displays the substitution labels available in
this data model or is used to add substitution labels to the data
model.
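As a sketch, a rule referencing a substitution label might read as follows ($SenderID is a hypothetical label associated with a Trading Partner Profile entry box):

```
VAR->Sender = $SenderID
```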
Functions Tab Predefined functions can be used in the rules for manipulating the
data. Some of the operations performed by these functions include:
cross referencing, verifying against a code list, entering system date
and time information, extracting a substring, checking the current
error code value, outputting various log records, and resetting a
MetaLink pointer. For a complete list of predefined functions, refer
to Appendix B of this manual.
The Functions tab displays the Functions available in this data
model and is used to add Functions to the data model rules.
Keywords Tab Rule keywords provide you with a means to alter the natural
processing flow within an item, among items in a data model, and
within an environment.
The Keywords tab displays the Keywords available in this data
model and is used to add Keywords to the data model rules.
Keyword Description
ATTACH Causes a new map component file to be opened,
changing the environment configuration for the
translation session.
BREAK The file pointer status remains unchanged. The
error status is reset to 0 and no additional actions
are processed. Occurrence validation is skipped.
The flow control proceeds to the next sibling
data model item.
CLEAR_VAL Clears all values from the MetaLink and Array
variable lists. Once cleared, RESET_VAL
keyword will not restore the value.
CONTINUE The file pointer status remains unchanged. The
error status is reset to 0 and no additional actions
are processed. Occurrence validation is checked.
The flow control depends on the occurrence
maximum — repeat if not greater than the
maximum or proceed to the next sibling data
model item.
EXEC Provides the ability within a data model to
execute a process outside of Application
Integrator. The executed process can be another
program, translation, or shell script.
EXIT The file pointer status remains unchanged. The
error status is as specified. Occurrence validation
is skipped. Flow control returns to the parent
environment, unless an error status of zero is
specified on the source side, in which case flow
control proceeds onto the target side.
EXPORT Provides the ability for a Variable (temporary),
MetaLink, or Array variable to exist beyond its
normal scope. A Variable, MetaLink, or Array
variable comes into existence when it’s first
declared by reference in a data model. It is then
available for reference in the current and all
children environments. Once the current
environment is exited, the variable no longer
exists. Using EXPORT, the variable can be
extended back to the parent environment.
REJECT The file pointer status is reset. The error status is
138-not found (source data model) or 139-no
value (target data model). Occurrence
validation is checked. Flow control depends on
the occurrence minimum — return back to the
parent data model item if not greater than the
minimum, or proceed to the next sibling data
model item.
RELEASE The file pointer status is reset. The error status is
reset to 0. Occurrence validation is skipped.
Flow control proceeds to the next sibling data
model item.
RESET_VAL Will reset the MetaLink and Array variable list
pointer to the top of the list. If CLEAR_VAL
keyword was used, RESET_VAL keyword will
not restore the value.
RETURN The file pointer status remains unchanged. The
error status is reset to 0. Occurrence validation
is skipped. Flow control returns back to the
parent data model item.
SET_ERR Provides the ability to set an error value to force
an error condition.
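As an illustration of altering processing flow, a hypothetical Present mode rule could return to the parent item early with the RETURN keyword (the item name and literal are invented for this sketch):

```
LineRec { AlphaNumericFld @1 .. 3 none
[LineRec == "END"]
RETURN
}*1 .. 100
```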
Cutting, Copying, and Pasting Rules
Cut, Copy, and Paste Clipboard functions can be performed on
rules on individual data model items for any of the modes: Present,
Absent, or Error. Cut or Copy assigns the selected information to
the Clipboard. Only one mode at a time can be copied or cut and
placed on the Clipboard.
Paste takes the information from the Clipboard to the location you
specify in the data model rules.
To cut text from the Rule Edit Workspace
1. Highlight the text to cut in the Rule Edit Workspace.
2. Use any of these methods to cut the text:
• Menu: From the RuleBuilder Edit menu, choose Cut.
• Toolbar Icon: Click the Cut icon.
• Keyboard Shortcut: Press Ctrl+X.
The text is assigned to the Clipboard until something else is
assigned which replaces it.
To copy text from the Rule Edit Workspace
1. Highlight the text to copy in the Rule Edit Workspace.
2. Use any of these methods to copy the text:
• Menu: From the Edit menu, choose Copy.
• Toolbar Icon: Click the Copy icon.
• Keyboard Shortcut: Press Ctrl+C.
The text is assigned to the Clipboard until something else is
assigned which replaces it.
To paste text from the Clipboard into the Rule Edit Workspace
1. Move the insertion pointer to the place to paste in the Rule Edit
Workspace.
2. Use any of these methods to paste the text:
• Menu: From the Edit menu, choose Paste.
• Toolbar Icon: Click the Paste icon.
• Keyboard Shortcut: Press Ctrl+V.
Until you make another copy or cut, this text will remain on the
Clipboard, allowing you to paste several copies of the current
text.
Finding the Next Parameter
The system makes it easy for you to enter the parameters to
functions, conditions, and keywords by prompting you for the next
required parameter. Individual parameters of a parameter list can
be selected by repeatedly choosing Find Next Parameter.
To find the next parameter
1. Insert the cursor after the point where you want the system to
begin the parameter search.
2. Use one of the following methods to issue the command:
• Menu: From the Edit menu, choose Find Next Parameter.
• Toolbar Icon: Click the Next Parameter icon.
• Keyboard Shortcut: Press Ctrl+F.
3. Complete the parameter as per the instructions.
Checking the Syntax of Rules
Workbench provides a utility for checking the syntax of the rules
during rule entry.
To check the syntax
1. Use any of these methods to call the rule checking utility:
• Menu: From the Edit menu, choose Check Syntax.
• Toolbar Icon: Click the Check Syntax icon.
• Keyboard Shortcut: Press Ctrl+K.
2. If any errors are found, an Errors in Parse dialog box is
displayed showing the line numbers of the errors. You can
select the Go to Error button or double-click the line in the list
box to transfer automatically to the line containing the error. If
no errors are found, a message is noted on the status line.
Syntax Error Checking Syntax checking catches the first syntax error on each data model
item. A second or subsequent error will not be listed in the Errors
in Parse dialog box until the first is corrected.
The following types of errors are checked in the rules during syntax
checking or when applying the rules (using the Apply command):
1. Invalid constructed variable, for example,
VaR-> (lowercase vs. uppercase 'A')
Array- (missing '>' and lowercase 'rray')
M_L> (missing '-')
2. Invalid (label) or undeclared data model item
• Checks spelling
• Checks character case (for example, 'a' vs. 'A')
3. Forgetting to define the condition before the action ([ ])
4. Incorrect number of parentheses ( ')' ) or quotation marks ( '"' )
• Checks for too many
• Checks for not enough
Parsing Syntax Checking
Workbench will catch errors when it parses the model or map
component file and will also catch errors before it saves.
Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -I
Invalid Examples:
otrun -at -cs dv -DINPUT_FILE=OTIn.Flt -I (missing
string for -at argument)
otrun -aa OTRecogn.att -cs dv -dINPUT_FILE -I (-aa
spelling and -d case sensitivity errors)
otrun.exe -at OTRecogn.att OTEnvelp.att -cs dv
-DINPUT_FILE=OTIn.Flt -I (two strings for -at)
• Requires that one of the following is an argument: -at (map
component file), -s (source data model), or -t (target data
model).
Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith' -I
Invalid Example:
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA=Bob Smith -I
(There is a space between Bob and Smith; the string should be in quotation marks.)
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith -I
(Missing closing quotation mark.)
Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A VAR->B
*1 .. 1
(missing ",", ")", and "}" characters)
• Checks for the item type and that all components are
present. For example, Definings require the following
syntax: label, open brace, access item label, '@' sign,
minimum, .., maximum, optional format, verify list ID,
closing brace, *, minimum occurrence, .., maximum
occurrence.
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI { alphanumericfld @5 .. 5
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}1 .. 1
(Missing verify list ID and “*” for occurrence.)
Valid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
}*1 .. 1
Invalid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
}*1 .. 1
(DMI_B is being referenced out of scope – before it comes
into existence)
• Checks that data model item labels are not referenced in
include files
Valid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A)
}*1 .. 1
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = Dmi
*1 .. 1
(Dmi is not a defined data model item label)
Rule Execution Syntax Checking
Workbench will catch errors when it executes rules in the translator
and at runtime.
Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B, VAR->C)
*1 .. 1
(STRCAT() only has two arguments, not three.)
Valid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(&DMI)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(DMI)
VAR->Tmp = STRCAT(&VAR->A, &VAR->B)
*1 .. 1
(GET_FILEPOS() requires ‘&’, STRCAT() does not.)
• Consistent use of ampersand (&) with arguments between
the PERFORM() and its declaration in the include file.
Valid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A, VAR->Tmp)
}*1 .. 1
Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBS("ABCDEF", 2, 4)
}*1 .. 1
(The string in STRSUBS() cannot be a string literal.)
Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBSX(VAR->Tmp 2, 4)
}*1 .. 1
(The function STRSUBSX() is not an Application Integrator
function or User Exit Extension function.)
Syntax Checking That Does Not Occur
The following are not verified during syntax checking.
• Labels are not checked for consistent use of upper- and
lowercase letters throughout the data model.
• Reference to a variable's value before it was set with a value
is not checked.
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->Tmp, DMI)
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->TMP, DMI)
*1 .. 1
(Does not catch VAR->TMP, was not previously assigned.)
Valid Example:
DMI {
[]
CLOSE_INPUT()
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = CLOSE_INPUT()
*1 .. 1
(CLOSE_INPUT() does not return a value. The translator
will attempt to obtain a value off the stack, which can cause
a stack underflow error if no values are on the stack.)
• User entry (outside of Workbench) of the proper sequence
of rule modes (PRESENT, ABSENT, then ERROR) is not
checked.
Valid Example:
DMI {
[]
CLOSE_INPUT()
:ABSENT
[]
VAR->Error = ERRCODE()
:ERROR
[]
VAR->Error = ERRCODE()
}*1 .. 1
Invalid Example:
DMI {
[]
CLOSE_INPUT()
:ERROR
[]
VAR->Error = ERRCODE()
:ABSENT
[]
VAR->Error = ERRCODE()
}*1 .. 1
(ABSENT and ERROR are in the wrong sequence.)
Valid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B, VAR->C)
*1 .. 1
(Using three arguments, but telling the function it’s using
only two.)
Invalid Example:
DMI_INFO(DMI, &ARRAY)
Error code 144 is returned in cases where the data model item has a
value assigned to it before the function is executed. If there is no
value assigned to the variable at runtime and the ampersand is
missing, the translator tries to evaluate the variable. It returns error
code 139 when it is unable to evaluate it.
A check mark appears beside the drop-down menu option to
indicate when MapBuilder mode is running. Also, the MapBuilder
icon on the toolbar appears pressed in.
MapBuilder
MapBuilder allows you to drag and drop rules between data
models using predefined settings for Variable Type, Variable
Name, Link Type, Select Data Assignment Type, and Prompt with
Loop Control Warning Message. The following table shows the
predefined settings that are used when running MapBuilder.
Option                        Setting
Variable Type                 Array
Variable Name                 Both
Link Type                     Tag-To-Defining
                              Defining-To-Defining
Select Data Assignment Type   Use DEFAULT_NULL() on Source EDI
                              Use STRTRIM() on Source non-EDI
                              Use NOT_NULL() on Target EDI
Accessing the
MapBuilder Function
Ø To enable MapBuilder
1. Open the source and target data models that you wish to map.
2. Open MapBuilder using one of the following methods:
• Menu: From the Workbench main menu, choose Tools.
Then select MapBuilder.
• Toolbar Icon: Click the MapBuilder icon.
• Keyboard Shortcut: With the Workbench main window
selected, press Ctrl+L.
❖ Note: First, map Defining data model items, then perform loop
control procedures. The loop control rules are inserted at the
beginning of PRESENT/ABSENT mode. These rules must be
executed before performing a data assignment, to maintain the
integrity of all mappings.
Accessing MapBuilder Preferences
The MapBuilder Preferences dialog box displays with default
settings already set. These default settings should serve most of
your mapping needs. If you change the default settings, you can
restore the original settings using the Reset button.
Mapping Data In most cases, rules are placed in PRESENT mode as a null
condition (that is, actions are always performed). In the source
data model, MapBuilder creates a rule that assigns a data model
item’s value to a variable. In the target data model, MapBuilder
creates a rule that references the variable for its value and assigns it
to the data model item. Shown here are some hints to help your
modeling session:
• Map Defining items first, then continue with Group, Tag,
and Container items. This is because the loop control
feature places rules at the beginning of PRESENT and
ABSENT mode, and the loop control rules must be
processed before any data assignments are made.
❖ Note: To use the drag and drop feature, click the data model
item to highlight it, drag it to the new location, and release the
mouse button.
Notice that when the source data model item is dragged to the
target data model item, the mouse pointer (with the defining item
name) changes to a bull's-eye in UNIX, while in Windows it
remains the unique mouse pointer overlaid with the MapBuilder
symbol.
❖ Note: You can also drag and drop from the target to the
source defining items, however, MapBuilder always creates the
rules from source to target, maintaining the name of the
variable as <source defining item name>_<target defining
item name>.
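Under the default Array settings, the pair of rules MapBuilder generates might look like the following sketch (LinePONo is a hypothetical Defining item present in both models, giving the variable name LinePONo_LinePONo):

```
Source data model (Present mode):
ARRAY->LinePONo_LinePONo = LinePONo

Target data model (Present mode):
LinePONo = ARRAY->LinePONo_LinePONo
```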
❖ Note: The MapBuilder rules are immediately applied to the
layout. There is no need to enter the RuleBuilder Editor and
apply the rules. However, rules are not permanently saved to
disk until the data model is saved.
Loop Control The loop control feature provides the code to create processing
loops when one of the data model items is a Group, Tag, or
Container. Loop control ensures that detail records are kept
together with the proper header record. Also, subdetail records are
kept together with the appropriate detail records.
Loop control automates the process of mapping complex data
structures that repeat. Loop control automatically adds PRESENT
mode rules, ABSENT mode rules, or group data model items to
both the source and target layouts. These rules or Group items
contain Array variable assignments. Control of the array variable
automatically occurs during the MapBuilder loop control process.
Normally you would not apply loop control from Defining to
Defining. However, in the rare case when this is necessary, you
would first enable the “Enable Loop Control when mapping
Defining to Defining” option on the MapBuilder Preferences dialog
box.
Here are some points to remember when applying loop control:
• Loop control is needed on items where the maximum
occurrence is greater than the minimum.
• When indicating the target occurrences for loop control, be
sure the maximum is 1 greater than the maximum intended.
The last iteration is used by the loop control rules to break
out of the looping process. Loop control automatically
checks for and corrects these situations.
• The Undo function is not operable when the Loop Control
Link Type is enabled. Refer to the "Undoing Loop Control"
section for more information about correcting errors.
Troubleshooting There are several warning or error messages that can appear when
mapping using loop control.
Illegal map messages will appear in the status bar of the
MapBuilder dialog box during the mapping session. When a
message such as this appears, the rules are not updated. For
example, if you try to map a source item to a source item the
following message will appear in the status bar, “Illegal map.
Source equals target.”
You cannot map or apply loop control to the topmost Group item
because it is a parent to all other Group items and their children. If
the source item does not have a parent, the following error message
appears.
If you attempt to perform loop control on the same items more than
once, the following error message appears.
Loop Control Needed The Loop Control Needed dialog box appears when MapBuilder
finds the items that do not have loop control applied, and the
“Prompt with Loop Control warning message” check box is
selected. When you drag and drop an item onto another,
MapBuilder checks all the items appearing above and, if it finds an
item whose maximum is greater than the minimum and is not the
topmost item, it will display the Loop Control Needed dialog box.
You can drag and drop loop control on any type of item: Group,
Tag, Container, or Defining. However, you must have the “Enable
Loop Control when mapping Defining to Defining” check box
selected to enable loop control on Defining items.
In this example, the two looping items were caught because they
had multiple iterations that were greater than 1 and they were not
the topmost item. If mapping was performed on an item appearing
higher on the map, for example, LinePONo to LinePONo, only
SrcLineLoop and TLineLoop would appear in the Loop Control
Needed dialog box because the mapping occurred higher in the
map. Remember, the system checks for the item needing loop
control above the items on which mapping occurred.
Undoing Loop Control Loop control clears the Undo list. Also, Undo does not function in
loop control. If you apply loop control to the wrong item, you must
remove the loop control code manually. This section shows the
actual code for loop control in the event you must remove it from
your map.
Document {
HeadRec { LineFeedDelimRecord "H"
HeadDate { DateFld @6 .. 6 "YYMMDD" none }*1 .. 1
HeadDocNo { AlphaNumericFld @1 .. 10 none }*1 .. 1
[]
ARRAY->HeadDate = HeadDate
ARRAY->HeadDocNo = HeadDocNo
}*1 .. 1
SrcLineLoop {
LineRec { LineFeedDelimRecord "L"
LinePart { AlphaNumericFld @10 .. 10 none }*1 .. 1
LinePONo { AlphaNumericFld @10 .. 10 none }*1 .. 1
LineQty { NumericFld @05 .. 05 "99999" none }*1 .. 1
[]
ARRAY->LinePart = LinePart
ARRAY->LinePONo = LinePONo
ARRAY->LineQty = LineQty
}*1 .. 1
SrcSubLineLoop {
SubLineRec { LineFeedDelimRecord "S"
SubLineQty { NumericFld @05 .. 05 "99999" none }*1 .. 1
SubLineStore { AlphaNumericFld @1 .. 10 none }*1 .. 1
[]
ARRAY->SubLineQty = SubLineQty
ARRAY->SubLineStore = SubLineStore
}*1 .. 1
[]
ARRAY->LoopCtl_SrcSubLineLoop_TSubLineLoop = "Data"
}*0 .. 100
[]
ARRAY->LoopCtl_SrcSubLineLoop_TSubLineLoop = "Stop"
[]
ARRAY->LoopCtl_SrcLineLoop_TLineLoop = "Data"
}*1 .. 100
[]
ARRAY->LoopCtl_SrcLineLoop_TLineLoop = "Stop"
}*1 .. 100
Document {
HeadRec { LineFeedDelimRecord "H"
HeadDate { DateFld @6 .. 6 "YYMMDD" none [] HeadDate = ARRAY->HeadDate }*1 .. 1
HeadDocNo { AlphaNumericFld @1 .. 10 none [] HeadDocNo = ARRAY->HeadDocNo }*1 .. 1
}*1 .. 1
TLineLoop {
TLineLoopLoopCtrl {
[]
VAR->LoopCtl_SrcLineLoop_TLineLoop = "Stop"
VAR->LoopCtl_SrcLineLoop_TLineLoop = ARRAY->LoopCtl_SrcLineLoop_TLineLoop
[VAR->LoopCtl_SrcLineLoop_TLineLoop == "Stop"]
REJECT
}*1 .. 1
LineRec { LineFeedDelimRecord "L"
LineQty { NumericFld @05 .. 05 "99999" none [] LineQty = ARRAY->LineQty }*1 .. 1
LinePONo { AlphaNumericFld @10 .. 10 none [] LinePONo = ARRAY->LinePONo }*1 .. 1
LinePart { AlphaNumericFld @10 .. 10 none [] LinePart = ARRAY->LinePart }*1 .. 1
}*1 .. 1
TSubLineLoop {
TSubLineLoopLoopCtrl {
[]
VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop = "Stop"
VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop = ARRAY->LoopCtl_SrcSubLineLoop_TSubLineLoop
[VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop == "Stop"]
REJECT
}*1 .. 1
SubLineRec { LineFeedDelimRecord "S"
SubLineStore { AlphaNumericFld @10 .. 10 none
[] SubLineStore = ARRAY->SubLineStore
}*1 .. 1
SubLineQty { NumericFld @05 .. 05 "99999" none
[] SubLineQty = ARRAY->SubLineQty
}*1 .. 1
}*1 .. 1
:ABSENT ; ABSENT rule tests for "Stop"
[VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop == "Stop"]
REJECT
}*0 .. 100 ; |-- end TSubLineLoop --|
:ABSENT
[VAR->LoopCtl_SrcLineLoop_TLineLoop == “Stop”]
REJECT
}*1 .. 100 ; |-- end TLineLoop --|
}*1 .. 100 ; |-- end Document --|
[Illustration: Input Data -> Source Access Model (parsing) -> Target Access Model (construction) -> Output Data]
The data structure may contain only group items, with no input or
output occurring, or just rule processing logic. Information
obtained in the parent and grandparent environments can be
referenced in child environments.
Processing Flow within the Model
Within a source or target data model, processing flows down the hierarchy from parent to child (starting with the first child encountered) and then back to the parent, as per the following illustration:
In each case, the current status is returned from the child to the
parent data model item. The process moves down the data
structure from child to child; once the children are read, process
returns to the parent item, then proceeds to the next parent item.
Single Environment Process Flow
In a single environment process, once the map component file input and output streams are read or opened, environment processing begins with the source data model. Once source processing is completed, target data model processing occurs. Once target processing is completed, the environment ceases to exist.
[Illustration: environment layers created by ATTACH statements, starting from the Recognition Environment at layer 1]
Recommended Naming Convention
When naming the map component files, keep the following considerations in mind:
• Use ".att" for the suffix.
• Use no more than 8 characters in the base map component filename.
• Do not use the prefix "OT," since it will conflict with names already assigned in the Application Integrator application. The prefix "OT" is a reserved prefix for Application Integrator application files. Using it can compromise the software's performance.
• Use upper- and lowercase letters and the underscore "_" only.
• Do not use spaces.
Defining a New Map Component File
The Map Component Editor dialog box is used to define and modify map component files.
The colors appearing in the value entry boxes indicate whether data is mandatory or optional. When a new Map Component Editor dialog box appears, the Source and Target value entry boxes are blue for mandatory or white for optional. (The map component file must have at least the source or target model defined.) Once one is defined, its paired Access value entry box illuminates blue and the other model is turned white. Whenever one of the two (Model or Access) value entry boxes contains a string, the other illuminates blue. Both value entry boxes in the Source and Target areas are white when the other set contains strings.
The lists of the model drop down list boxes are populated with
*S.mdl and *T.mdl files depending on whether the source or the
target model drop down list is being accessed. The value entry box
is editable so you can type the preferred model name.
It is recommended that you save your map component files and
data models to the working directory.
3. Type the appropriate values for the map component file in the
Data section.
❖ Hint: You must have both the icon and text highlighted
to work with the map component file.
7. Either choose Save from the File menu to save your changes;
– or –
Choose Save As from the same menu to save the map
component file under a new name. In the dialog box that
appears (as shown below for UNIX systems), type a new name
for the map component file. Choose the OK button (UNIX,
Windows NT 3.51) or Save button (Windows 95 and NT 4.0) to
save to the new name and close the dialog box.
8. To minimize the Map component file dialog box, from the File
menu, choose Minimize.
[ ]
ATTACH VAR->map_component_filename (variable name)
When ATTACH is encountered during a translation session, the
current environment’s processing stops. The map component file
associated with the ATTACH statement is opened and processing
begins. Processing continues in this environment until processing
completes successfully, an error is returned, or the data model
keyword ATTACH is encountered again.
Processing returns to the parent environment immediately
following the data model keyword ATTACH. The error code
returned to the parent environment can be captured and errors
handled, as per the following example:
[ ]
ATTACH “OTX12NxtStd.att”
[ ]
VAR->RtnStatus=ERRCODE( )
[VAR->RtnStatus > 0]
<actions to recover from error>
Common ATTACH Errors Encountered
During translation processing, the following errors are commonly found when there are problems with the map component file definition. Refer to Appendix F or the on-line Help for a complete description of these errors.
Processing Using OTRecogn.att (De-enveloping)
The OTRecogn.att file is a map component file typically used for processing public standards into application data.
The rule logic necessary to perform the extraction of the values from the input stream is already included in OTRecogn.mdl. It also automatically sets the HIERARCHY_KEY environment variable. To use this feature, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.
Recognizing the Trading Partner
The process of recognizing which trading partner needs to be read from the Profile Database is handled with the generic model OTRecogn.mdl. This model is designed to allow multiple interchanges from multiple trading partners within the input file.
To define the trading partner at the interchange level, the model sets an environment variable XREF_KEY to the value “ENTITY.” This means, in the simplest terms, that whenever the translator attempts to do a cross-reference from the database, it will look for a line or record within the database that starts with “ENTITY,” until the environment variable XREF_KEY is changed to another value.
Once the environment variable XREF_KEY has been set, the model uses the functions STRCAT and STRTRIM to concatenate the Sender’s Qualifier, Sender’s ID, Receiver’s Qualifier, and the Receiver’s ID that it has read from the input file, in that sequence, and assigns the result to a temporary variable VAR->OTICRecognID. At this point a cross-reference is performed using the function XREF, passing in the required parameters as per the function:
XREF(“ENTITY”, VAR->OTICRecognID, &VAR->OTICHierarchyID, “N”)
Processing Using OTEnvelp.att (Enveloping)
The OTEnvelp.att file is a map component file typically used for processing application data into the public standards (enveloping). Unlike the processing of public standards, where generic models are provided, each application system requires customized models. When the models are created, the entity lookup logic must be included. To do this, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.
Refer to the section on enveloping in the appropriate standards
implementation guide (for example, the ASC X12 Standards
Implementation Guide) for instructions on using this environment.
The logic to perform extraction, concatenation, and entity lookup, to
obtain a trading partner view into the database, needs to be
included in the custom application model. The logic is represented
below:
[ ]
;sets the cross reference view into the database as "ENTITY"
SET_EVAR("XREF_KEY", "ENTITY")
[VAR->OTXRefStatus != 0]
;entity lookup failure
EXIT 501
[ ]
;sets the trading partner's substitution view into the database
SET_EVAR("HIERARCHY_KEY", VAR->OTHierarchyID)
The concatenated lookup must be specified in the Application
Cross-reference value entry box of the Outbound X12 Values dialog
box exactly as it is in the model. (This dialog box is opened at the
Message Level of the Trading Partner Profile dialog box.)
Changes to Source Model Processing
The following table identifies the error numbers returned from access model parsing. Two levels of error reporting are offered, with RECOVERY active and with RECOVERY inactive.
Migrating otrans
With all the changes that occurred in the translator, some process flow changes are addressed here to simplify migration from previous versions to version 3.0. This section explains the changes in the translator and helps clarify any issues or concerns. It is intended only for technical users who specify special error routines to capture errors and unique data flow, who are already using previous versions of Application Integrator, and who wish to understand the detailed changes made to the translator in version 3.0.
Translator Process Flow Before Version 3.0
Before the 3.0 release, otrans was limited to a few errors returning to the data model. Error code 138 was returned when something was wrong with a data element. Even though an error existed, processing continued to the next sibling if the occurrence of the element was met. This meant that if an element was too long and optional, no hard error was recognized on the item and processing would continue to the next. The errors returned in versions before 3.0 are shown in the following table.
Error Code  Description
-1          End Of Stream
0           OK
138         Element error: either missing, invalid character, too long, or post condition failure
139         Item not instantiated
140         No more values on ARRAY
141         Missing TAG
146         Incorrect format
152         Lookup ID failure
171         No Children
176         Element is too short
When the error occurred, the file position was at the position where the error occurred. This meant that the post condition of the element, if defined, was not read. If the error remained after the element went through Rule Mode Processing, the file position was reset to the position where the element was entered.
Version 3.0 Translator Process Flow
The Version 3.0 otrans improves the way the translator processes data and makes the rules consistent for the process flow. This translator allows multiple errors to be caught because there is a built-in Recovery routine that positions the file pointer at the next item. The areas of the translator are:
• Data Access Parsing
• Rule Mode Processing
• Occurrence Validation
• Process Flow between elements
Data Access Parsing and Recovery
With version 3.0 of otrans, new error codes return more descriptive meanings. Also, with Recovery on, the file position is reset so that the next element can be read if the error is cleared. Recovery deals with Tag, Container, and Defining type items only.
To enable Recovery, you would type:
VAR->OTPriorEvar = SET_EVAR("RECOVERY", "Yes")
By default in the enveloping and de-enveloping generic models, recovery is set to "Yes".
The following table shows which error codes are returned depending on whether recovery is set to "Yes" or "No", and which mode of rules is entered upon returning from the access model with the specific error code.
Error Code  Description                     Recovery "No"  Recovery "Yes"  Rule Mode Entered  Difference from Prior Version
-1          End Of File*                    Returned       Returned        ABSENT             ERROR
0           OK                              Returned       Returned        PRESENT
138         Hard Error                      Returned                       ERROR              ABSENT
141         Missing TAG*                    Returned       Returned        ABSENT
146         Incorrect format                Returned       Returned        ERROR
152         Lookup ID failure               Returned       Returned        ERROR
171         No Children*                    Returned       Returned        ABSENT
176         Element is too short            Returned       Returned        ERROR
177         Element is too long                            Returned        ERROR
190         Element is missing*             Returned       Returned        ABSENT
191         Invalid character in the data                  Returned        ERROR
192         Post condition failure          Returned       Returned        ERROR
200         Unrecoverable error                            Returned        ERROR
*Represents soft errors, as opposed to errors where data was parsed and is in error (hard errors).
Refer to the Trade Guide for System Administration User’s Guide for more descriptive explanations of each error code.
With recovery set to “No”, the error codes are returned as in versions prior to 3.0, except for error 138, which previously included “Element is missing”. Now 190 is returned for “Element is missing” whether recovery is set to “Yes” or “No”.
For two of the error values, -1 and 138, the mode of rules entered upon returning from access model parsing has changed. For -1 (EOF), if any characters are parsed for the base access definition, 176 (too short) or the appropriate error is returned in place of -1. When -1 is returned, the element/field is absent, so the ABSENT rules are performed. And 138 now represents errors, not “Element missing”, so the ERROR rules are performed.
Data Access Parsing is the part of the translator that reads the data stream character by character, verifies the character set and validity of the data, formats the value (Numeric or Date/Time), and executes recovery (if needed and turned on). This is all done inside otrans and is hidden from the user.
If the data has a problem and recovery is set to “Yes”, otrans will go through specific rules to try to recover as much of the data as possible.
Rules of Recovery
While reading character by character, the translator hits a character outside the character set, hits end of file, or reaches the maximum element/field size as defined:
1. If no data was read, then read the post condition. If the post condition is read or no post condition is defined for the item, error 141 is returned for TAGs and error 190 is returned for Composites and Definings.
2. If the minimum size was met and no post condition is defined, error 0 is returned to the data model.
3. If the minimum size was not met and a post condition is defined, the next character is read for the post condition. If the post condition is read, error 176 is returned.
4. If the post condition is not read, continue reading until the post condition, end of file, or a size of 4096 is read. Return 191 if the post condition is finally read; otherwise return 200.
5. If the maximum defined size is met, read the next character for the post condition. If no post condition is defined, return 0. If one is defined but not present, read until the post condition; return error 177 if it is finally read, or 200 if it is not.
6. If the item is a Date/Time/Numeric, check the format. If the check fails, return 146.
7. If the item is a TAG or composite and it fails the post condition, return 192.
During parsing, when an element/field contains an invalid character, the character is removed from the string of that element. This means that if an element has the data “ABC^DE”, where ^ is outside the character set, and you get the value of the item, it will be ABCDE with an error of 191. Defined delimiters are not considered invalid characters and are not automatically removed from the string during parsing.
Recovery only happens on elements. TAGs and Composites cannot execute recovery. Therefore, if the post condition is invalid on a TAG, 192 is returned but processing is not set to the next TAG or Composite. When this error occurs, processing should stop. It is considered a hard error.
Examples of Recovery
Rule Mode Processing
There are three types of mode processing: PRESENT, ABSENT, and ERROR. These Version 3.0 changes deal only with source processing. When an item other than a group parses data, it enters one of the three modes of processing:
• PRESENT mode: When the error code is 0.
• ABSENT mode: When the error code is -1, 139, 140, 141, 171, or 190.
• ERROR mode: Any other error.
The modeler is able to put rules on any of these modes. “Clearing the Error” means the last action in the mode results in a 0 error code. (A “null condition” by itself will clear the error, resetting it to zero.) The following code shows how an error 190 would change into 0, in the case where an element BIG_01 is missing.
BIG_01 { ElementAN @1 .. 10 none
[]
ARRAY->Big_01 = BIG_01
:ABSENT
[]
ARRAY->Big_01 = ""
}*1 .. 1
With this code, you are defaulting a value on the variable instead of raising an error for a missing required element. When processing is done with ABSENT mode, the error is zero.
Occurrence Validation
The codes [-1, 139, 140, 141, 171, 190] enter ABSENT mode rules. If no ABSENT mode rules are defined, the error remains at its value; the error is not cleared. When processing is done with PRESENT or ABSENT (whether rules are defined or not), and the error code is -1, 139, 140, 141, 171, or 190, occurrence validation is checked. If this occurrence of the item is optional, the error is reset to zero and processing proceeds to the next sibling. If this occurrence of the item is mandatory, the error code is taken into ERROR mode.
Like ABSENT, ERROR mode can also clear the error. If the error is not cleared, processing converts it to a hard error (138).
Process Flow Between Elements
After Rule Mode Processing and Occurrence Validation, if a non-zero error value remains, the error is changed to 138 and is returned to the parent of the item. This means that a hard error was found and, no matter what the occurrence of the parent, processing stops.
Soft errors are missing-value type items. They check Occurrence Validation before they go into the ERROR mode rules, as previously stated. These errors first go into ABSENT mode rules. If the error is cleared, processing continues to the next element. If the same error code remains or no ABSENT rules are defined, Occurrence Validation is checked first. If the minimum is not met, the item enters its ERROR mode. If the minimum is met, the item is not considered an error.
Processing for groups is broken down into two categories:
• Groups only within groups
• Groups that have access items (TAGs, Containers, Definings)
For the first type, the group will loop until the maximum occurrence has been reached, unless a keyword like BREAK or RETURN is used to leave the group or the group rules processing ends with error 139 or 140. For example, if you have a group that goes 1 to 100 and just counts, it will loop 100 times. If you want to break out of the group after 50, the code would look like:
Group1 {
[VAR->OTCount == 50]
BREAK
[]
VAR->OTCount = VAR->OTCount + 1
}*1 .. 100
For the second type, the group will loop either until the maximum occurrence is reached or until no more data is read. When this happens, an error 171 is returned to the group. The processing flow enters the ABSENT mode rules. If the error remains after ABSENT mode and the minimum occurrence has been met, the looping stops and processing continues to the next element. If the minimum was not met, processing goes to the ERROR mode rules and then acts like a hard error by going to the group’s parent with error 138.
Common Syntax
The common syntax for specifying an extended access device type is:
“<dev_specific_name> [<dev_name>] <add_dev_specific_params>”
where
<dev_specific_name> specifies the data stream: a file, program, socket address, or message queue ID. Refer to the individual device specifications that follow for details on each access method.
<dev_name> is the name of the access method to be used: fifo, pipe, socket, or msgq.
<add_dev_specific_params> is a list of specific modifiers that control how the devices behave.
This syntax is used for the INPUT_FILE and OUTPUT_FILE
specifications within the map component file, by way of command
line parameters or defines, or as the parameter to the file attribute
of a group item type in a data model.
Example:
INPUT_FILE = “/tmp/fifo [fifo]”
OUTPUT_FILE = “/tmp/output.exp”
When the device type specification is absent, the translator will
default to the standard device of file. Spaces must delimit the fields
of the specification.
Application Integrator Sockets Examples
Socket Interface Notes
Application Integrator sockets are used to read and write data from a TCP/IP network. Sockets use the client/server model to initiate the connection. The client/server model divides all communicating applications into two types depending on what they do to facilitate the connection. The application that waits (or listens) is called the server, and the one that initiates the connection is called the client. Usually there is only one server, but there can be many clients.
The connection mode determines which machine is the client and which is the server. Receiving data from another program does not establish the client or server relationship as it relates to sockets. This is discussed further in this section.
The main advantage of the client/server model is that it makes the translation session machine independent. The server and the client can run on separate machines, and the two ends of a connection can be on different types of machines. For example, one end can be a Windows-based PC and the other can be a UNIX-based HP machine.
Sockets Specifications
Defining a Socket
Before a socket can be used, both parties, sender and receiver, must agree on the address and configuration of both. If you will be receiving data, you must define an INPUT_FILE, and if you are sending data, you must define an OUTPUT_FILE. If you will be both sending and receiving, two unidirectional socket ports or a single bidirectional port is required.
Specify the socket device type with the following syntax:
“<host_name>:<port_number> [socket] passive persistent <#retries> <retry_time_period> &”
where
<host_name> refers to the machine name or an alias of the computer name. If the translation involves a computer outside your local network, the fully qualified domain name or the IP address must be used. Your fully qualified domain name is “<hostname value>.<domain value>”. Refer to the procedure “To locate the hostname of your computer” for more information.
<port_number> refers to the socket port through which the data transfer will occur. It should be a number greater than 5000.
[socket] identifies the access type.
UNIX users
At the UNIX command prompt, type
uname -n
– or –
hostname
If your computer is not configured to use these commands, contact
your system or network administrator to identify the hostname of
your computer.
Windows 95 users
1. From the Settings menu, select Control Panel. Click the
Network icon. The Network dialog box will appear.
[Screen captures: the Network dialog box Properties button, and the TCP/IP Properties IP Address tab with its “Obtain an IP address automatically” and “Obtain an IP address from a DHCP server” radio buttons and IP Address value entry box]
4. Select the DNS tab. In the Host Name value entry box, locate
and make a note of your hostname.
Socket Attributes
Socket attributes may be set to persistent or single-attempt connect. The default is single attempt. This setting is applicable to both active (client) and passive (server) sockets.
If a socket is created without the persistent attribute, it will stop after a single connection attempt. If a socket is created with the persistent attribute, it will continue to retry the connection at one-second intervals, indefinitely. After connecting, the only way to close a persistent socket is to exit the environment in which it was created or to kill the process.
Besides the retry attribute, a retry time period can be specified. If a number of retries is specified for a persistent socket, it will reconnect after a close for the number of retries specified. The default wait period between retries is one second. Both the retry count and the retry time period can be very large numbers.
For passive sockets, the persistent attribute causes the socket to “re-listen” after receiving a close. Retry time limits do not apply to passive sockets.
Data Transfer Mode There are two ways in which data can be transferred through
sockets:
1. Bidirectional
2. Unidirectional
A bidirectional socket allows data movement into and out of the same socket address. Using a bidirectional socket, the client can write data to the server and the server can also write data to the client on the same socket.
The biggest advantage of a bidirectional socket is that only one socket is required for a two-way transfer of data. This may be very important because each socket uses system resources. As the number of sockets used increases, the load on the system increases and system performance is affected. This is especially important on a heavily loaded machine (such as a multi-user UNIX file server).
A unidirectional socket allows data movement in one direction only, either into or out of the socket address. If you want data to flow in two directions, you need two unidirectional sockets, one for each direction.
The main advantage of a unidirectional socket is the simplicity of developing programs to use it. Programmers are already familiar with the concept of files, in which they read from one file and write to another. Programming is simpler with the use of unidirectional sockets.
Specifying a Socket
Socket type devices can be specified for any I/O item within Application Integrator. Usually I/O devices are specified as INPUT_FILE or OUTPUT_FILE in a map component file. To create a socket, the attribute [socket] is added to the declaration of the file item, along with any connect mode, number of retries, and retry period, as required. When an I/O device is a socket, the file handle is used to specify the hostname and port address. Both hostname and port address may have an alias. The hostname is the name of the system, and the port address is a number assigned by the system administrator.
Sockets are widely used by the operating system for various tasks. The O/S uses port numbers below a certain number. When specifying the port number, users should select a number greater than 5000.
Theory of Operation
The sockets interface was first developed to help UNIX programmers use existing TCP/IP protocols for network communications. While it was being developed, some of the concepts also found their way into UNIX, and sockets became fully integrated with the operating system.
The Windows™ sockets (Winsock™) specification was based on
UNIX sockets. It includes UNIX socket routines and extensions
specific to Windows. Winsock is supplied as a dynamic link library
(DLL). This DLL has to be loaded before any socket operations are
performed. An important issue in Winsock is the version number.
The first Winsock version was 1.0, followed by 1.1 and 2.0. For
reasons of compatibility, most implementations support all versions
and allow the socket application to select the version it needs to use.
Both of these issues can be taken care of by the socket program
during initialization.
These details are transparent to the Application Integrator user.
However, the user must ensure that wsock32.dll (the 32-bit
Winsock) is in the right path and the DLL is the latest version.
Sockets Examples
This section describes six examples of how to use sockets. For details on how to set up the examples, refer to Preparing to Run the Examples. For details on running the examples, refer to the following sections:
• Example 1: Input from a client socket and output to a file.
• Example 2: Input from a file and output to a server socket.
• Example 3: Input from a server socket and output to a file.
• Example 4: Input from a unidirectional server socket and output to a unidirectional server socket.
• Example 5: Input from a persistent client socket and output to a file.
• Example 6: Input and output through a persistent bidirectional server socket.
File Description
OTsoc.in Input data file
OTsocm.att Master map component file
OTsoc.att Map component file that defines the source and
target data model
OTsocm.mdl Master model
OTsocs.mdl Source data model
OTsoct.mdl Target data model
OTclose.in File containing data required to close a persistent
socket
Preparing to Run the Examples
Before running the sockets examples, you must edit the script file or batch file and compile the example programs. The following procedure describes how to compile the programs for Example 1. Substitute the appropriate information to compile programs for the other examples.
UNIX
For UNIX customers, use the standard C compiler cc.
Example 1: Input from a client socket and output to a file
Example 1 demonstrates input from a client socket and output to a file on a client machine. A connection mode has not been defined for the socket, so it defaults to the active mode, making it a client socket. Figure 1 illustrates this example.
File Description
OTsoc1.c C source code.
OTsoc1 UNIX program created by the compile that creates
a server socket. Once the client connects to the
program, it sends data to the client.
OTsoc1.exe Windows program that creates a server socket.
Once the client connects to the program, it sends
data to the client.
OTsoc1.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5510 [socket]”
OUTPUT_FILE = “soc1.out”
OTsoc1.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5510 [socket]”
OUTPUT_FILE = “soc1.out”
Performing Example 1
Windows
- or -
For systems using Windows ‘95, NT 3.51, or NT 4.0 locate the
OTsoc1.exe filename in Explorer and double-click.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc1.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.
UNIX
1. Open the vi Editor and modify OTsoc1.sh to specify your
hostname in the -DINPUT_FILE parameter. Save the changes
and exit the vi Editor.
2. Compile the OTsoc1.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2.
4. To start the Application Integrator Control Server, type
otstart
5. At the command prompt in Window 1, type
OTsoc1
Press the Enter key. This starts the example, opens the client
socket, and establishes the client/server relationship.
6. In Window 1, at the “Enter the Server port number” prompt,
type
5510
Press the Enter key. This identifies the server socket port
number.
7. In Window 1, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
8. In Window 2, at the command prompt, type
OTsoc1.sh
Press the Enter key. This starts the shell script to process the
example.
9. In Window 1, the Opened input file message should appear to
indicate you have successful communication and the example is
executing.
When both windows return to the command prompt without
error messages, the example executed successfully.
(window 1) (window 2)
$ OTsoc1
Enter the Server port number: 5510
Enter the input filename: OTsoc.in
$ OTsoc1.sh
Opened input file Session# 000102 started
$ Session# 000102 completed successfully.
$
Example 2: Input from a file and output to a passive socket
Example 2 demonstrates input from a file that resides on the server machine and output to a passive socket. The output socket is a passive socket, which means that it will function as a server socket, waiting for a connection from a client somewhere on the network.
File Description
OTsoc2.c C source code
OTsoc2 UNIX program created by the compile that creates
a client socket. It connects to the server socket
created by the translator and receives data.
OTsoc2.exe Windows program that creates a client socket. It
connects to the server socket created by the
translator and receives data.
OTsoc2.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the OUTPUT_FILE variable.
INPUT_FILE = “OTsoc2.in”
OUTPUT_FILE = “host_name:5520 [socket]
passive”
OTsoc2.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the OUTPUT_FILE variable.
INPUT_FILE = “OTsoc2.in”
OUTPUT_FILE = “host_name:5520 [socket]
passive”
(Figure Socket2.vsd: the translator reads the input file OTsoc.in and
writes to the passive output socket; the client program receives the
data and writes it to soc2.out.)
Performing Example 2
Windows
Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the output filename”
prompt, type
soc2.out
Press the Enter key. This identifies the output filename.
7. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the session output dialog box.
UNIX
1. Open the vi Editor. Modify OTsoc2.sh to specify your
hostname in the -DOUTPUT_FILE parameter. Save the
changes and exit the vi Editor.
2. Compile the OTsoc2.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the server window and
Window 2 will be the client window.
4. Start the Application Integrator Control Server by typing
otstart
5. In Window 1, at the command prompt, type
OTsoc2.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc2
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
the host_name of the machine running the translator (the server
machine). Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5520
Press the Enter key. This identifies the server socket port
number.
9. In Window 2, at the “Enter the output filename” prompt, type
soc2.out
Press the Enter key. This identifies the output filename.
In Window 1 and Window 2, processing messages like those
shown below should appear indicating the example is
processing successfully.
When both windows return to the command prompt without
error messages, the example executed successfully.
(window 1)
$ OTsoc2.sh
Session# 000103 started
Session# 000103 completed successfully.
$

(window 2)
$ OTsoc2
Enter the remote host name: host_name
Enter the remote port number: 5520
Enter the output filename: soc2.out
Identified remote host
Connected to remote host
Opened output file
$
Example 3: Input from a passive socket and output to a file

Example 3 demonstrates input from a passive socket and output to
a file. The passive argument causes the socket to function as a
server socket and wait for a connection to be made by a client
socket somewhere on the network. In this example, no retry
attributes were specified, so the socket makes only one connection
attempt. If the connection attempt fails, the user must rerun the
program to retry the connection.
File Description
OTsoc3.c C source code.
OTsoc3 UNIX program created by the compile that creates
a client socket. It connects to the server socket
created by the translator and sends data.
OTsoc3.exe Windows program that creates a client socket. It
connects to the server socket created by the
translator and sends data.
OTsoc3.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5530 [socket] passive”
OUTPUT_FILE = “soc3.out”
OTsoc3.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5530 [socket] passive”
OUTPUT_FILE = “soc3.out”
(Figure Socket3.vsd: the client program sends data to the
translator's passive input socket; the translator writes the output
file soc3.out.)
Performing Example 3
Windows
Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
7. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the Session Output dialog box.
UNIX
1. Open the vi Editor and modify OTsoc3.sh to specify your
hostname in the -DINPUT_FILE parameter. Save the changes
and exit the vi Editor.
2. Compile the OTsoc3.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the server window and
Window 2 will be the client window.
4. Start the Application Integrator Control Server by typing
otstart
5. In Window 1, at the command prompt, type
OTsoc3.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc3
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
the host_name of the machine running the translator (the server
machine). Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5530
Press the Enter key. This identifies the server socket port
number.
(window 1)
$ OTsoc3.sh
Session# 000104 started
Session# 000104 completed successfully.
$

(window 2)
$ OTsoc3
Enter the remote host name: host_name
Enter the remote port number: 5530
Enter the input filename: OTsoc.in
Identified remote host
Connected to remote host
Opened input file
$
Example 4: Input from a unidirectional passive socket and output
to a unidirectional passive socket

Example 4 is a two-phase exercise that demonstrates input from a
unidirectional passive socket and output to a unidirectional passive
socket. Unidirectional sockets process data in one direction only;
therefore, for this example, two sockets must be established: one for
input from one client machine and one for output to a different
client machine. Because passive was specified as the connection
mode, the translator's sockets on the server machine function as
server sockets, and the programs on the client machines create the
client sockets that connect to them.
File Description
OTsoc4i.c C source code—input file.
OTsoc4o.c C source code—output file.
OTsoc4i UNIX program created by the compile that
creates a client socket. It connects to the
corresponding server socket created by the
translator and sends input data.
OTsoc4i.exe Windows program that creates a client socket. It
connects to the corresponding server socket
created by the translator and sends input data.
OTsoc4o UNIX program created by the compile that
creates a client socket. It connects to the
corresponding server socket created by the
translator and receives the results of the
translation.
OTsoc4o.exe Windows program that creates a client socket. It
connects to the corresponding server socket
created by the translator and receives the results
of the translation.
OTsoc4.sh UNIX shell script that executes the example. You
must specify your machine name at the
host_name argument of the INPUT_FILE and
OUTPUT_FILE variables.
INPUT_FILE = “host_name:5540 [socket]
passive”
OUTPUT_FILE = “host_name:5541 [socket]
passive”
OTsoc4.bat Windows batch file that executes the example.
You must specify your machine name at the
host_name argument of the INPUT_FILE and
OUTPUT_FILE variables.
INPUT_FILE = “host_name:5540 [socket]
passive”
OUTPUT_FILE = “host_name:5541 [socket]
passive”
(Figure Socket4.vsd: each client initiates a socket connection from
its client socket to the corresponding passive unidirectional server
socket, #5540 for input and #5541 for output.)
Performing Example 4
Windows
Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
7. Start phase 2 of the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc4o.exe. Choose OK.
- or -
For systems using Windows 95 or NT 4.0, run the example by
locating the OTsoc4o.exe filename in Windows Explorer and
double-clicking it.
This program creates the client socket.
8. At the DOS window, at the “Enter the remote hostname”
prompt, type your host_name for the client machine.
Press the Enter key.
9. At the DOS window, at the “Enter the remote port number”
prompt, type
5541
Press the Enter key. This identifies the server socket port
number.
10. At the DOS window, at the “Enter output filename” prompt,
type
soc4.out
Press the Enter key. This identifies the output filename.
11. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the Session Output dialog box.
UNIX
1. Open the vi Editor. Modify OTsoc4.sh to specify your
hostname in the -DINPUT_FILE and the -DOUTPUT_FILE
parameters. Save the changes and exit the vi Editor.
2. Compile the OTsoc4i.c and the OTsoc4o.c programs for this
example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the server window and
Window 2 will be the client window.
4. Start the Application Integrator Control Server by typing
otstart
5. In Window 1, at the command prompt, type
OTsoc4.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc4i
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
the host_name of the machine running the translator (the server
machine). Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5540
Press the Enter key. This identifies the server port number.
9. In Window 2, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
10. In Window 2, at the command prompt, type
OTsoc4o
Press the Enter key. This program creates the client socket.
11. In Window 2, at the “Enter the remote hostname” prompt, type
the host_name of the machine running the translator (the server
machine).
(window 1)
$ OTsoc4.sh
Session# 000105 started
Session# 000105 completed successfully.
$

(window 2)
$ OTsoc4i
Enter the remote host name: host_name
Enter the remote port number: 5540
Enter the input filename: OTsoc.in
Identified remote host
Connected to remote host
Opened input file
$ OTsoc4o
Enter the remote host name: host_name
Enter the remote port number: 5541
Enter the output filename: soc4.out
Identified remote host
Connected to remote host
Opened output file
$
File Description
OTsoc5.c C source code.
OTsoc5 UNIX program created by the compile that creates
a server socket. The translator's active persistent
socket connects to it, and the program sends data.
OTsoc5.exe Windows program that creates a server socket.
The translator's active persistent socket connects
to it, and the program sends data.
OTsoc5.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5550 [socket]
persistent 5 60”
OUTPUT_FILE = “soc5.out”
OTsoc5.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5550 [socket]
persistent 5 60”
OUTPUT_FILE = “soc5.out”
(Figure Socket5.vsd: in active persistent mode, the translator's
client socket connects to the server socket #5550, receives the input
data, and writes the output file soc5.out.)
Performing Example 5
Windows
UNIX
1. Open the vi Editor and modify OTsoc5.sh to specify your
hostname in the -DINPUT_FILE parameter. Save the changes
and exit the vi Editor.
2. Compile the OTsoc5.c program to create the executable for this
example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2.
4. Start the Application Integrator Control Server by typing
otstart
5. At the command prompt in Window 1, type
OTsoc5
Press the Enter key. This starts the example, opens the program's
server socket, and establishes the client/server relationship.
6. In Window 1, at the “Enter the Server port number” prompt,
type
5550
Press the Enter key. This identifies the server socket port
number.
7. In Window 1, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
8. In Window 2, at the command prompt, type
OTsoc5.sh
Press the Enter key. This starts the shell script to process the
example.
In Window 1, the “Opened input file” message should appear,
indicating that communication is established and the example is
executing.
When both windows return to the command prompt without error
messages, the example executed successfully.
(window 1)
$ OTsoc5
Enter the Server port number: 5550
Enter the input filename: OTsoc.in
Opened input file
$

(window 2)
$ OTsoc5.sh
Session# 000106 started
Session# 000106 completed successfully.
$
File Description
OTsoc6.c C source code.
OTsoc6 UNIX program created by the compile that creates
a bidirectional client socket. It connects to the
bidirectional persistent server socket created by the
translator and sends and receives data.
OTsoc6.exe Windows program that creates a bidirectional
client socket. It connects to the bidirectional
persistent server socket created by the translator
and sends and receives data.
OTsoc6.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE and OUTPUT_FILE
variables.
INPUT_FILE = “host_name:5560 [socket] passive
persistent”
OUTPUT_FILE = “host_name:5560 [socket]&”
OTsoc6.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE and OUTPUT_FILE
variables.
INPUT_FILE = “host_name:5560 [socket] passive
persistent”
OUTPUT_FILE = “host_name:5560 [socket]&”
(Figure Socket6.vsd: clients initiate socket connections to the
passive persistent server socket #5560; data flows in both directions
over the bidirectional connection.)
Performing Example 6
Windows
5560
Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
7. At the DOS window, at the “Enter the output filename”
prompt, type
soc6.out
Press the Enter key. This identifies the output filename.
8. Start phase 2 of the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc6.exe. Choose OK.
- or -
For systems using Windows 95 or NT 4.0, run the example by
locating the OTsoc6.exe filename in Windows Explorer and
double-clicking it.
9. At the DOS window, at the “Enter remote hostname” prompt,
type
your host_name for the server machine.
Press the Enter key.
10. At the DOS window, at the “Enter the remote port number”
prompt, type
5560
Press the Enter key. This identifies the server port number.
11. At the DOS window, at the “Enter input filename” prompt,
type
OTclose.in
Press the Enter key. This identifies the input filename.
12. At the DOS window, at the “Enter output filename” prompt,
type
soc.out
UNIX
1. Open the vi Editor. Modify OTsoc6.sh to specify the hostname
in the -DINPUT_FILE and -DOUTPUT_FILE parameters. Save
the changes and exit the vi Editor.
2. Compile the OTsoc6.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the server window and
Window 2 will be the client window.
4. Start the Application Integrator Control Server by typing
otstart
5. In Window 1, at the command prompt, type
OTsoc6.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc6
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the server machine.
Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5560
Press the Enter key. This identifies the server socket port
number.
9. In Window 2, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
10. In Window 2, at the “Enter the output filename” prompt, type
soc6.out
Press the Enter key. This identifies the output filename.
11. In Window 2, at the command prompt, type
OTsoc6
Press the Enter key. This creates the bidirectional client socket.
12. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the server machine.
Press the Enter key.
13. In Window 2, at the “Enter the remote port number” prompt,
type
5560
Press the Enter key. This identifies the server socket port
number.
14. In Window 2, at the “Enter the input filename” prompt, type
OTclose.in
Press the Enter key. This identifies the input filename.
15. In Window 2, at the “Enter the output filename” prompt, type
soc.out
Press the Enter key. This identifies the output filename.
In Window 1 and Window 2, processing messages like those
shown below should appear indicating the example is
processing successfully.
When both windows return to the command prompt without
error messages, the example executed successfully.
(window 1)
$ OTsoc6.sh
Session# 000107 started
Session# 000107 completed successfully.
$

(window 2)
$ OTsoc6
Enter the remote host name: host_name
Enter the remote port number: 5560
Enter the input filename: OTsoc.in
Enter the output filename: soc6.out
Identified remote host
Connected to remote host
Opened input file
Opened output file
$ OTsoc6
Enter the remote host name: host_name
Enter the remote port number: 5560
Enter the input filename: OTclose.in
Enter the output filename: soc.out
Identified remote host
Connected to remote host
Opened input file
Opened output file
$
Configuring a Socket

To determine how to configure a socket, you will need the
following information:
Problem: Will you be sending or receiving data using a
unidirectional socket?
Resolution: Sending data requires defining an OUTPUT_FILE.
Receiving data requires defining an INPUT_FILE.

Problem: Who will listen and who will connect using a
unidirectional socket?
Resolution: Listening requires configuring the socket as passive.
Initiating the connect requires configuring the socket as active (the
default).

Problem: Will you be sending and receiving data? Do you have
limited socket resources (the number of sockets being used is high
or the system is heavily loaded)?
Resolution: Use a bidirectional socket, where messages can travel
to and from either end. Define both INPUT_FILE and
OUTPUT_FILE with the same socket number.

Problem: Who will listen and who will connect using a
bidirectional socket?
Resolution: Listening requires configuring the input socket as
passive and the output socket as bidirectional. Initiating the
contact requires configuring the input socket as active and the
output socket as bidirectional.

Problem: What is the hostname?
Resolution: For an OUTPUT_FILE, use the name of the host you
will be sending to. For an INPUT_FILE, use the name of the host
running Application Integrator.

Problem: Should the socket remain open after the first connect?
Resolution: Use persistent to keep the socket open and retry the
connect. Use the default to disconnect after one connection
attempt.

Problem: How many retries?
Resolution: For continuous retries, use the default. For a specific
number of retries, specify the retry count and retry time period.
Problem: Are application level protocols for control defined?
Resolution: If application level headers are to be used, they must
be modeled within Application Integrator to allow for their
processing.

Application level protocols are headers, trailers, or other strings of
data that you agree to use with the other parties to the sockets to
tell each other to perform specific activities. For example, a header
and trailer surrounding the data may be included by the sender to
allow the receiver to validate that all the records sent were received
and that some routing request or other element of service is
indicated. They may also be used to synchronize a bidirectional
socket. Typical uses of header information include sender and
receiver IDs, control numbers, record counts, dates, version
indicators, and identifying strings.
Error Messages

The following table contains information about the types of errors
that might occur when working with sockets. For additional
information regarding errors, refer to the Trade Guide For System
Administration User’s Guide, Appendix B, "Application Integrator
Runtime Errors."
UNIX FIFO Specifications

Specify the FIFO device type with the following syntax:
“<fifo_dev_name> [fifo]”

Example:
INPUT_FILE = “/tmp/fifo1 [fifo]”
The example above opens the FIFO file “/tmp/fifo1” as the
translator's input byte stream. FIFOs can be used to connect the
output of a program to the input of the translator. Even when you
do not have the source of a program and cannot change its output
destination, a FIFO can be used to “pipe” its output into the
translator, because a FIFO appears to be an ordinary file to
applications.
UNIX Pipes Specifications

Specify the pipe device type with the following syntax:
“<piped_program_name> [pipe] <program_parameters>”

Example:
INPUT_FILE = “/usr/home/arrpt1 [pipe] -Prmslp”
This example invokes the program “/usr/home/arrpt1” with the
parameter “-Prmslp”; its standard output is piped to the
translator’s input byte stream. The program can be any program
that writes to standard output, such as a command line SQL query.
UNIX Message Queues Specifications

Specify the message queue device type with the following syntax:
“<msg_key>:<msgq_subkey> [msgq] buffer <max_msg_size>
create <open_mode>”
Examples:
1. INPUT_FILE = “0x3874252A [msgq]”
2. INPUT_FILE = “0x389F2340:03 [msgq] buffer
4096 create 0644”
The first example connects to a message queue with the key value
“0x3874252A” using default values for subkey ID and buffer size.
Messages passed supply the translator’s input byte stream. As the
create option has not been specified, the translator will report an
input file error if the message queue does not already exist.
The second example shows the use of the subkey ID option (03 in
this case), the buffer size option, and the create option with the
read/write mode specified. If the message queue already exists,
the translator will report an input file error. Subkey IDs are used as
the “mtype” field in the system message structure “msgbuf” and
allow the “sharing” of a message queue among different I/O
streams, reducing the number of system message queue IDs
required.
Step 1: Obtain the Translation Definition Requirements

Obtain all available documentation that explains the syntax,
structure, and mapping rules that apply to the translation you are
going to model. The syntax defines the characteristics of the
components, such as the character sets, fixed or delimited data, and
identification and tag definitions. The structure defines the
relationships between the components and the occurrence
constraints. The mapping rules define the semantic meaning of the
components to accurately associate the source with the target.
If documentation is unavailable, find the person in your
organization or at your trading partner’s organization who can
provide an understanding of the content and structure of the data
for electronic commerce. From this person, obtain the syntax,
structure, and data mapping requirements.
If neither documentation nor a contact person is available to relay
the data and translation requirements, you must obtain the
information by examining the data files. This method, of course,
makes translation definition a process of trial-and-error. The more
complicated the translation requirement, the less assured you can
be of an accurate modeling definition.
Character Set

When an item has a unique character set that differs from other
items, a different type of item is used.

For example, five different types of items would be used for the
following:

• Numeric - which allows for 0-9, -, +, .
• Date - which allows for 0-9 with valid month and day values
• Text - which allows for all printable characters
• Alpha - which allows for the character set range of A-Z
• Segment - which contains a tag at the beginning and a
delimiter at the end
Pre- and Post-Conditions

Delimited data typically allows for the use of several delimiters.
These delimiters can be used either as the pre-conditions or the
post-conditions of items. You need to understand the delimiters
used in your input file to know when another type of item must be
defined.
For example, consider fixed length data within variable length
records. The data fields have no pre- or post-condition. The
record, however, has a post-condition of a delimiter (possibly a line
feed, or carriage return/line feed).
Another example would be the UN/EDIFACT standard which
specifies delimited data using three different delimiter characters.
The rules specifying which delimiters can precede or follow which
item types are very specific in this standard’s syntax.
By defining many items to accommodate specific character sets and
pre- or post-conditions, a more accurate translation definition can
be modeled. Using only a few items offers less of a guarantee that
the proper item has been recognized (source) or constructed
(target) during translation. Also, imprecise item type definitions
may cause invalid item recognition, causing the translation to fail
on a later item or the wrong data to be mapped.
Sequence

Sequence is the order in which items can appear. It can be rigid or
random. A rigid example is when a standard requires records to
appear in a certain order (record type A cannot appear after record
type B). A random example is when records can appear in any
order (record type A can appear before or after record type B).
Once the semantic meaning of the item’s value is known, it can be
properly mapped.

Sometimes data has to be manipulated or changed from how it
appears in the source to how it is output in the target. Conversion
between field sizes and types of items occurs automatically within
Application Integrator. Fields are either padded or truncated to
adjust the size. For item type differences, the field is converted; for
example, a source alphanumeric type might be converted to a
target implied decimal type.
Other types of manipulation have to be performed manually
through the use of rules (mapping rules). Functions are provided
which perform the following:
Function Description
Case convert Change to all uppercase letters or lowercase
letters
String manipulate Trim, concatenate, replace characters,
substring
Code verification Verify the value is contained within a list of
acceptable codes
Cross-reference Replace a value with another value
Step 2 should generate:

• A list containing the types of items needed for the source.
• A list containing the types of items needed for the target.

The following are example lists of types of items for source and
target.
Source
Type: Fixed length fields in delimited records

Item          Character Set                      Pre-Condition  Post-Condition
Alphanumeric  any character between ‘ ’ and ‘~’  none           none
Alpha         A-Z, a-z, space                    none           none
Numeric       Special Numeric Function           none           none
Date          Special Date Function              none           none
Time          Special Time Function              none           none
Record tag:   Alpha                              none           line feed character
Target
Type: Variable length fields in delimited records
Item Character Set Pre-Condition Post-Condition
Alphanumeric any character between ‘ ’ elem-delimiter elem-delimiter or sgmt delimiter
and ‘~’
Alpha A-Z, a-z, space elem-delimiter elem-delimiter or sgmt delimiter
Numeric Special Numeric Function elem-delimiter elem-delimiter or sgmt delimiter
Date Special Date Function elem-delimiter elem-delimiter or sgmt delimiter
Time Special Time Function elem-delimiter elem-delimiter or sgmt delimiter
Record tag: Alphanumeric sgmt delimiter sgmt delimiter
Step 3: Obtain the Test Input File(s)

When obtaining an input file for testing the data models, the
volume or size of the input file is not as important as having an
input file that contains all acceptable variations of the input
structure. This includes not just expected variations, but all
possible variations as defined per the structure definition. The goal
is to be able to test all possible structure and content combinations
to ensure that the translation definition will not fail once placed
into production mode.
If you are unable to obtain a fair representation of input data, you
will have to use a text editor to create the input file. You will either
have to take an existing file and add alterations to the structure and
content, or create the file from scratch. You must complete Step 2,
Analyze the Definition Requirements, before this file can be created
or modified.
Step 3 should generate:

• Test input file(s), containing all possible data variations.
Step 4: Lay Out the Environment Flow

The layout of the environment flow is a pictorial representation of
the various elements that need to be brought together to configure
the translator to process in a certain way (for example, it shows the
input files, output files, and other components) and the order in
which they are used. Each environment provides the ability to alter
the configuration of the translator and allows for the modular
creation of data models. Refer to Section 4 for a further discussion
and illustrations of environments.
Changing environments during translation can affect the following
configuration components:
• Access models
Changing the access models allows you to add, change, or
remove item type definitions. This includes adding,
changing, or removing access delimiter characters and
changing the use of access model COUNTERs.

• Input and output files
Changing the input file allows you to bring different data
into the translation. By changing the output file, the output
data can be filtered to different files.

• Profile Database key prefixes
Changing the database key prefixes provides different
views within the Profile Database. Different views may be
required at various points in the translation.

• Find match limit
Changing the scan forward limit (FINDMATCH_LIMIT) in
the generic model OTNxtStd.mdl allows you to reduce or
increase the scope of searching for a specific character
sequence. Refer to the section on # FINDMATCH in
Appendix B for more details.
Step 6: Create Source and Target Data Model Declarations

The syntax and structure of the translation will be modeled in this
step. First, define the data models as per the Application Integrator
Model worksheet. Refer to a copy of the worksheet at the end of
this section for assistance. Then enter the definitions into
Workbench. The rules for mapping will be created in Steps 8 and 9.
You can work on the source and target data models independently
of each other. One modeler can work on the source data models
while another modeler defines the target data models. The power
of Application Integrator allows the two sides to be brought
together at runtime for binding. The relationship between the two
sides is established in the mapping process, through the use of
mapping variables.
Create an Application Integrator Model Worksheet for each data
model defined on each Environment Definition Worksheet:
For each item contained within the data structure, create a line item
entry:
Section Name: Data Model Item Name
Instructions: Assign a label name, unique within this data model,
by which this item will be identified. The name must begin with a
letter, and should not begin with the two letters “OT.” Use the
various columns under Item Name to represent the various
hierarchical levels in the structure definition. (Used for all types of
items: group, tag, container, and defining item.)
For example:
Message_Loop
    Heading_Record
        Heading_Rec_Field_1
        Heading_Rec_Field_2
        Heading_Rec_Field_3
    Detail_Line_Item_Loop
        Detail_Record_1
            Detail_Rec_1_Field_1
            Detail_Rec_1_Field_2
            Detail_Rec_1_Field_3
        Detail_Record_2
            Detail_Rec_2_Field_1
            Detail_Rec_2_Field_2
            Detail_Rec_2_Field_3
Step 7: Create a Map Component File for Each Environment

Create a map component file for each Environment Definition
Worksheet created in Step 5. Create a new map component file by
completing the Map component file dialog box opened from the
New Map Component option of the Workbench File menu. Refer
to Section 4 for instructions.
Step 8: List Source to Target Mapping Variables

Using the Application Integrator Variable Worksheet (an example
of which is found at the end of this section), complete a line item for
each piece of data that will be mapped from the source to the
target. Once this worksheet is completed, the source data modeler
will be able to begin creating the rules described in Step 9,
independent of the target data modeler. The type of variable and
its ID (label) is all that the source and target data modelers need to
know.
Create the Application Integrator Variable Worksheet as follows:
Section Name    Instructions
Type Identify the type of variable to be used. Each
variable type has different mapping attributes.
Label Assign a label for the variable that will be unique
throughout the total translation session. The label
must begin with a letter and should not begin with
the letters “OT.”
Description Enter a description of what the variable type and
label represent.
Step 9: Create Data Model Rules
Using Workbench, you can now apply rules to the data models. Since the source and target are independent of each other and runtime-bound via the variables, either side can be done first or independently.
The source assigns its data model item values to specific variables,
for example, VAR->PONumber = HeadingRec_PONumber. The
target assigns the variable values to its data model items, for
example, Rec1-PONo = VAR->PONumber.
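The runtime binding described above can be pictured as a shared variable pool that the source side writes and the target side later reads. The following is a minimal sketch in Python, not Application Integrator rule syntax; the record and variable names are hypothetical, taken from the examples above.

```python
# Sketch only: a dict stands in for the VAR-> pool bound at runtime.
variables = {}

def run_source_rules(heading_rec):
    # Source side, analogous to: VAR->PONumber = HeadingRec_PONumber
    variables["PONumber"] = heading_rec["HeadingRec_PONumber"]

def run_target_rules():
    # Target side, analogous to: Rec1-PONo = VAR->PONumber
    return {"Rec1-PONo": variables["PONumber"]}

run_source_rules({"HeadingRec_PONumber": "PO-1001"})
print(run_target_rules())  # {'Rec1-PONo': 'PO-1001'}
```

Because the two sides share only the variable type and label, either side can be modeled first, as the text describes.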
The primary use of the rules is to create the movement of data from
the source to the target, establishing the desired format in the
process. However, rules are also used for the following purposes:
• Logging of information for audit, message tracking, and reporting, using the functions LOG_REC( ) and ERR_LOG( ), for example.
• Capturing of information for later acknowledgment creation.
• Performing error recovery, such as defaulting when a value is absent.
• Verifying relational conditional compliance — a relational condition may exist among two or more siblings within the same parent based on the presence or absence of one of those siblings.
• Obtaining or changing values in the Profile Database. (Make sure the key prefixes are set in the map component file or through data model rules.)
• Using keywords to alter the natural processing flow – ATTACH, EXIT, BREAK, RELEASE, CONTINUE, REJECT, RETURN.
• Performing string manipulation, for example, sub-string, trim, concatenate, replace characters, and case conversion.
• Performing computations: +, -, *, /.
• Obtaining or changing the active character sets, decimal notation, and release characters.
• Obtaining or changing values associated with access counters.
• Obtaining or changing the system’s date or time.
• Obtaining or changing the current error status.
Guidelines for Rule Creation
a. If a rule action fails, the balance of the actions contained in the rule is not executed. For example, if a variable or data model item is referenced for its value, and a value has not yet been assigned to it, the action will fail. (This can occur in a tag item’s rule that references an optional child defining item that was not present in the tag item.) Whenever this possibility exists, immediately start a new rule for the balance of the actions.
Example:
Incorrect way to model:
Tag_A
   Defining_A1 (optional)
   Defining_A2 (optional)
   []
   VAR->Fld1 = Defining_A1
   VAR->Fld2 = Defining_A2

Correct way to model:
Tag_A
   Defining_A1 (optional)
   Defining_A2 (optional)
   []
   VAR->Fld1 = Defining_A1
   []
   VAR->Fld2 = Defining_A2
   []
   SET_ERR( )
In the incorrect way, if the first action (VAR->Fld1 = Defining_A1) fails because no value is available for the reference to Defining_A1, the balance of the rule is not performed, and Defining_A2 is not assigned to VAR->Fld2.
In the correct way, the third rule is added to set the error status to zero, so that if the second rule fails, the error of the failure is not carried forward to the occurrence validation of Tag_A.
b. Remember to always follow an ATTACH keyword action with
a new rule to capture the map component file’s returned status.
If you immediately follow the ATTACH keyword action with
another action, the second action would only occur if the map
component file returned a status of zero.
For example:
[ ]
ATTACH “OTX12Env.att”
[ ]
VAR->OTAttachRtnStatus = ERRCODE( )
c. Some rules can easily become complex. Take the time to lay out
on paper all complex rules, since the content of all rules cannot
be viewed within Workbench at one time. Once on paper, the
rules can easily be entered into Workbench, with minimal
editing.
d. On the source side, rules are usually placed on the tag item
rather than on each defining item. This is because often one or
more items qualify one or more other items to provide their
semantic meanings. On the target side, the rules must be
placed on the defining items for them to obtain their values.
All conditional rules should be entered before the null
conditional rules. If the last rule on an item is a conditional
rule, and the condition fails, the status the item will take on for
occurrence validation will be a failure status. To reset the
status, add a null condition rule with the function SET_ERR( ).
For example:
[ ]
SET_ERR(0)
Profile Database Lookups
The following rules should be included in your data models to set up the views into the database so that information can be accessed.
When accessing the Profile Database, your model must contain
rules that tell the translator what type of information you are going
to access and from where to access it. The type of information you
might access could be cross-references, code list verifications, or
substitutions. You could access this information from any
hierarchy level of the trading partner profile or from any of the
standard version levels. Refer to Section 4 of the Trade Guide for
System Administration User’s Guide for details on setting up or
modifying the trading partner profile and standard version code
lists.
The SET_EVAR data model function allows you to set the environment variables for these types of lookups. Refer to the description of the SET_EVAR function in Appendix B for more details.
For information on the generic model used in trading partner
recognition, refer to the “Map Component Files for Enveloping/De-
enveloping” section in Section 4.
Database Key and Inheritance
Each level in the trading partner hierarchy is represented by a database key. Each database key is delimited by the pipe ( | ) character, and is a maximum of 12 characters long.
The key is derived from the information keyed into the trading
partner’s profile, as follows:
Inheritance
When looking up a value, the lookup value is appended to the key prefix before the read. A cross-reference lookup of a part number, for example, might be:
ABCIC|ABCFG|ABC850|part_a
By using this approach, the property of “inheritance” can be easily
applied, where inheritance denotes the use of values from higher
levels (ancestors) in the hierarchy when a specific value is not
found at the current level.
To continue with the example, if the cross-reference value of part_a
is not found at the message level ABCIC|ABCFG|ABC850, the
system automatically removes levels until the value is found or all
levels are exhausted.
ABCIC|ABCFG|ABC850|part_a
ABCIC|ABCFG|part_a
ABCIC|part_a
part_a
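The fallback sequence above can be sketched as a loop that strips one key level per failed lookup. This is an illustration only, assuming the Profile Database behaves like a simple key-value table; the table and its stored value are hypothetical.

```python
# Sketch of inheritance: strip pipe-delimited levels off the key prefix
# until the lookup value is found or all levels are exhausted.
def inherited_lookup(key_prefix, lookup, table):
    levels = key_prefix.split("|")
    while True:
        key = "|".join(levels + [lookup]) if levels else lookup
        if key in table:
            return table[key]
        if not levels:
            return None  # all levels exhausted, value not found
        levels.pop()  # fall back to the ancestor level

# Hypothetical cross-reference stored at the trading partner level only.
xrefs = {"ABCIC|part_a": "INTERNAL-PART-7"}
print(inherited_lookup("ABCIC|ABCFG|ABC850", "part_a", xrefs))
```

The call tries ABCIC|ABCFG|ABC850|part_a, then ABCIC|ABCFG|part_a, then finds ABCIC|part_a, mirroring the sequence shown above.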
The property of inheritance exists for the types of values that can be stored in the Profile Database:
• Substitutions
• Cross-references
• Verification code lists
Inheritance can lessen the redundancy in the Profile Database, since all levels of a trading partner hierarchy (for example, all divisions and/or all messages of the trading partner) may use the same cross-references and codes.
Inheritance can be turned on/off as a parameter to database
functions. Whether or not the inheritance feature should be used is
determined by the developer during data modeling.
For details on setting up the trading partner hierarchy, refer to the
Trade Guide for System Administration User’s Guide.
Cross-references
The environment keyword XREF_KEY is used to set the view into the database to perform cross-reference lookups. Once you have identified the type of information and where to access it, the next step is to identify the category under which the data is stored, along with the value in the input stream to be cross-referenced. The XREF data model function allows you to do this.
In addition to cross-referencing information, you have the ability to
manipulate the cross-reference information stored in the database.
The SET_XREF data model function allows you to update values
into the cross-reference portion of the Profile Database. The
DEL_XREF data model function allows you to delete a Profile
Database cross-reference.
Verification Code Lists
The environment keyword LOOKUP_KEY is used to set the view into the database to perform verification lookups. Once you have identified the type of information and where to access it, the next step is to identify the verify list ID under which the data is stored. The verify list ID is keyed into the Verify field of the data model item. The verify list ID is also known as the Category in the Xrefs/Codes dialog box. The next step is to construct a rule identifying the data model item for which the verification is to occur and the value to be looked up. The LKUP or DEF_LKUP data model functions allow you to do this.
In addition to verifying code list information, you have the ability
to manipulate the code list information stored. The SET_LKUP
data model function allows you to update values into the
verification portion of the Profile Database. The DEL_LKUP data
model function allows you to delete a Profile Database code list
value.
Step 9 should generate:
• All data model rules using Workbench, for both the source and target.
• A completed Profile Database Interface Worksheet.
Step 10: Enter the Profile Database Values
All lookups into the Profile Database should have been listed on the Profile Database Interface Worksheet, as part of Step 9 when creating rules within Workbench.
Using Trade Guide for System Administration, enter all of the Profile Database lookups, including values to be used in substitutions, cross-references (x/refs), and verification code lists.
Procedures for entering this information are found in Section 4 of
the Trade Guide for System Administration User’s Guide.
Cross-references
1. From the Xrefs/Codes dialog box, add the category under which the cross-reference will occur to the category file, if not already present.
2. Select the category from within the list box, and enter the
extracted values from the input stream in the Value field. If a
string of values is used, these values should be trimmed of
trailing spaces and concatenated together using the pipe ‘|’
character as a delimiter between the fields.
3. Enter the value to be used for the cross-reference in the
Description field.
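Step 2 above can be sketched as follows, assuming the extracted values arrive as separate strings; the sample values are hypothetical.

```python
# Sketch: trim trailing spaces from each extracted value and join them
# with the pipe character, as described for the Value field above.
def build_xref_value(extracted_values):
    return "|".join(v.rstrip() for v in extracted_values)

print(build_xref_value(["ABC   ", "850 ", "part_a"]))  # ABC|850|part_a
```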
Step 10 should generate:
• All trading partner Profile Database lookups for substitutions, cross-references, and verification code lists.
Step 11: Run Test Translations and Debug
Using the input file from Step 3, translate the file(s). The files can be translated through the Run dialog box of Workbench, or at the command line, by invoking the inittrans command in UNIX or the otrun.exe command in Windows. Refer to Section 6 of this manual for detailed instructions.
During test translation, it may be beneficial to set a high trace level.
The trace facility set at a high level generates a step-by-step log of
the translation. Various levels of the trace can be turned on and off
as needed. Continue testing and debugging until your data model
is free of error and ready to migrate to an official test or production
area.
Step 11 should generate:
• Production-ready data models and map component files.
Step 12: Make Backup Files
Once the translation has been modeled, the following modified or created files should be backed up:
• Map component files — .att
• Data model files — .mdl
• Test and input files
• Access model files — .acc
• Profile Database — sdb.dat & sdb.idx
All worksheets and flowcharts should also be packaged together for
later reference. Refer to Section 7 of the Trade Guide for System
Administration User’s Guide for details on scheduling backups.
Step 13: Migrate the Data
Once you are through with the complete data mapping process, including testing, you are ready to migrate your application to an official test or production functional area. Refer to Section 7 of this manual for suggestions on migrating to a different functional area.
Assigning Names
Consider the following when assigning names to models and files.
Operating System Naming Conventions
When assigning filenames, consider naming conventions for all operating systems under which you expect to operate. Use the least common denominator. That is, if you are expecting to use Windows, limit the base portion of the filename to eight characters and the extension to three characters. Note also that the Windows operating system is not case-sensitive, but UNIX is. If you intend to develop applications for these platforms, consider using all lowercase or uppercase characters to avoid any problems following migration. FTP will migrate files, maintaining the case it encounters.
Application Integrator Reserved Prefixes
When assigning names to models, variables, etc., use a prefix other than the two-character “OT” or “ot.” All provided Application Integrator models (.mdl, .acc), generic variables, and utility shell scripts begin with the two letters “OT” or “ot.”
Extension Description
.att Map component filename (required)
.env Environment filename (required)
.acc Access model filename (required)
.mdl Data model filename (required)
.sh Shell script file
.std Standard data models, such as ASC X12,
UN/EDIFACT or TRADACOMS
.dat Data portion of an ISAM file (Profile or
Administration Database)
.idx Index portion of an ISAM file (Profile or
Administration Database)
.tmp Temporary/work files
.in Test input file to Application Integrator
.out Test output file to Application Integrator
Variable Length
Access model variable 40 characters
Data model variable 40 characters
Variable (temporary) variable 40 characters
MetaLink variable 40 characters
Array variable 40 characters
User-defined environment variable 40 characters
Substitution variable 254 characters
Verification list ID 254 characters
Using Application Integrator Models and Files
The models and files supplied by Application Integrator and Application Integrator standards implementation packages have names beginning with “OT” and “ot.” These files are read-only, in most cases.
If you desire to modify an Application Integrator file, such as a
sample data model, copy the file to a file with a new name and then
modify the copy.
References to Files
References to files and models within the map component files and data models can be either relative or explicit. Relative referencing
means every file and model being used is located in the same
directory. These include source, target, and access models and map
component files. Using relative referencing, they can be moved to
another file system without changes. This allows the models to be
easily distributed to other users, as they are to Application
Integrator Customer Support, if necessary.
Explicit referencing means the files and models being used are located in different directories; therefore, a path is necessary to locate these files. Using explicit referencing, the same directory structure (path) is always required, or the map component files and models must be modified.
We recommend relative referencing because it is more structured
and easier to track. The following examples illustrate both types of
referencing. Notice that the explicit filenames begin with either a
forward slash (/) or a backslash (\), whereas the relative filenames
do not.
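The distinction can be sketched as a one-line test, assuming a reference is explicit exactly when it begins with a forward slash or backslash, as stated above; the filenames are hypothetical.

```python
# Sketch: classify a file/model reference as explicit or relative.
def is_explicit(reference):
    # Explicit references begin with / or \; relative ones do not.
    return reference.startswith(("/", "\\"))

print(is_explicit("/usr/maps/OTX12Env.att"))  # True  (explicit)
print(is_explicit("OTX12Env.att"))            # False (relative)
```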
Data Model Item Name   Item Type   Occurrence Min/Max   Size Min/Max   Format   Match Value   Verify List ID   Sort File Yes/No   Increment
Application Integrator Variable Worksheet
Type: VAR->
ARRAY->
M_L->
$SUBS
ENV Label Description
Profile Database Interface Worksheet
Description of Lookup   Side (S/T)   Type (S/X/V)   Label (S) / Category (X) / Verify List ID (V)   Hierarchy Level (Key Prefix)
S/T = Source/Target
S/X/V = Substitution, Cross-reference (x-ref), Verification
Section 6
Translating and Debugging
Before Translating
4. In the Map Component File value entry box, type the name of
the map component file to be used in this translation.
5. In the Copy File value entry box, type the name of the data file
from which you are going to copy. This file will not be
removed after the translation is complete. It will remain in its
current location.
6. In the Trace Level value entry box, type the numeric value for
the trace level desired;
- or -
Select the level using the dialog box options associated with this
box. The trace level will default to zero if a trace level is not
entered.
Refer to the “Setting a Trace Level” section later in this section
for a complete description of this option.
7. In the Additional Parameters group box, enter any additional
parameters needed for this translation. These parameters could
include input filename, output filename, and user-defined
environment variables.
Using the Name and Value box entries, the system creates an
additional parameter statement, such as
“INPUT_FILE=OTX12I.txt”.
❖ Note: If the data model has been modified since you last
ran a translation, you will be prompted to save these
changes.
10. To close the Run dialog box, choose the Close button.
Setting a Trace Level
The trace level controls the content of the translation trace log file. The trace level is set using a numeric value. This value represents which options of the trace will be set. You can manually enter the trace level options, or use a dialog box for making selections.
The following table describes the trace levels.
Trace Level    Description
0    No Trace Setting or a Trace Setting of Zero (0)
     • otrans or otrun.exe version, compile date and time
     • date/time translation began and ended
     • loading of libraries, with their compiled date/time
     • translation ending status
1    Data Model Item Values Listing
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • Names of source access and data models
     • “SOURCE VALUES”
     • The last source item values parsed
     • “M_L VALUES” - declared within this source data model
     • “VAR VALUES” - declared within this source data model
     • “ARRAY VALUES” - declared within this source data model
     • Names of target access and data models
     • “TARGET VALUES”
     • “M_L VALUES” - declared within the source and this target data model
     • “VAR VALUES” - declared within the source and this target data model
     • “ARRAY VALUES” - declared within the source and this target data model
2    Value Table Listing
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • “VALUE STACK” - target item labels with the values assigned to them
     • “VSTK” - values being referenced off the value stack in target Phase 2 processing
4    Source Data Model Items
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • Two lines for each source item, as processed - “DM: ItemLabel”, “FINISHED ItemLabel”
8    Target Data Model Items
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • One line for each target item, as processed - “DM: ItemLabel”
     • One line each time processing returns to a parent level - for example, “FINISHED ItemLabel”
16   Rule Execution
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • One line when entering rules on an item - only if the item has rules defined
     • One line reporting rule execution status - all items
32   Rule Functions (This level does not output on its own. Refer to 48.)
48   Rule Functions (48, which is 16+32)
     • Includes #NUMERIC/#DATE/#TIME access function output, for example:
       Function NUMERIC_in: dm PhoneNumber pic “No format” dm left 10 .. 10 right 0 .. 0 radix
       Function NUMERIC_in returns value: “3255550961”
     • Execution of rules - assignment, functions, etc.
64   Source Access Items
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • Source TAG item matching - “pre condition Rec_Code met”
     • Source parsed values being returned back to the source data model
128  Error Details
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • The clearing of the error stack - “err_dump( )”
     • The capturing of an error - “err_push( )”
256  IO Detail - pertains to source only
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • Shows file position entering each item
     • Shows each character read and checks if it is within the defined character set
     • Function NUMERIC_in: dm PhoneNumber pic “No format” dm left 10 .. 10 right 0 .. 0 radix
     • Function NUMERIC_in returns value: “3255550961”
512  Write Output Detail
     • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
     • The order the items are written out and the access items used
1023 Complete Trace
2. Select each type of trace level. To select all options, choose the
Set All button. To deselect all options, choose the Clear All
button.
3. Choose the Apply button to return the selected trace level;
- or -
Choose the Cancel button to exit the trace settings without
making changes.
❖ Hint: You can also subtract the number of the items you do
not want to show on the trace from the total trace value of
1023. For example, to show a total trace minus access
items, you would subtract 64.
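The trace levels behave as additive flags, so the arithmetic in the hint above can be sketched as follows; combining distinct power-of-two levels by addition is the same as a bitwise OR. This is a minimal sketch of the numbering scheme, not part of the product.

```python
# Sketch: trace settings are sums of power-of-two level numbers.
FULL_TRACE = 1023  # sum of all levels, a complete trace

def combine(*levels):
    total = 0
    for level in levels:
        total |= level  # adding distinct power-of-two levels == bitwise OR
    return total

print(combine(16, 32))   # 48: Rule Functions
print(combine(4, 8))     # 12: source + target data model items
print(FULL_TRACE - 64)   # 959: complete trace minus source access items
```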
Invoking the Translation Processes - UNIX
The translation program inittrans has the following command syntax in a UNIX development area. Refer to the following table for a complete list of arguments.
inittrans -at <initial map component file> -cs <Control Server queue id> -tl <trace level> -I
Invoking the Translation Process - Windows
The translation program otrun.exe has the following command syntax in a Windows development area. Refer to the “Available Arguments to inittrans/otrun.exe” table for a complete list of arguments.
Available Arguments to inittrans/otrun.exe
Code             Description of Item                        Defined Environment Variable Resources
-a               source access file                         S_ACCESS
-A               target access file                         T_ACCESS
-at              initial map component file                 none
-cs              specify Control Server queue ID            none
-D               declare user-definable environment         none
                 variable
-hk              hierarchy key prefix                       HIERARCHY_KEY
-i               input file                                 INPUT_FILE
-I               interactive, foreground versus             none
                 background (default) processing
-lg <filename>   specifies a file for the translation       none
                 session output, allowing you to translate
                 in the background instead of displaying
                 the output in the Session Output dialog box
Parameter Explanations
The code for an initial map component file “-at” is exclusive of -i, -o, -a, -A, -s, -t, -hk, -xk, and -lk. If -at is used, these others cannot be used. If the other codes are used with -at, they will have no impact.
The following sets are paired: -a with -s (source access and data
model), and -A with -t (target access and data model).
The parameter -lg <filename> generates translation session
feedback to a background file. This parameter writes to the
filename specified the translation session feedback usually
displayed in the Session Output dialog box. If the file cannot be
opened or created, the Session Output message box will be
displayed. It is good practice to keep this filename consistent for all
translations thus making any cleanup easier by handling only one
file.
The -D code is used to declare Application Integrator environment variables (or user-defined variables) from the command line. Defined environment variables will override values defined in the map component file.
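The override behavior can be sketched as a dictionary merge, assuming the map component file supplies the defaults; the variable names here are hypothetical.

```python
# Sketch: command-line -D definitions override map component file values.
def effective_environment(map_component_vars, command_line_vars):
    merged = dict(map_component_vars)
    merged.update(command_line_vars)  # -D definitions win on conflict
    return merged

print(effective_environment({"TRACE_LEVEL": "0", "INPUT_FILE": "a.in"},
                            {"TRACE_LEVEL": "1023"}))
# {'TRACE_LEVEL': '1023', 'INPUT_FILE': 'a.in'}
```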
Ways to Set the Trace Log
You can alter the amount of detail contained in the trace in four places:
• The Workbench Run dialog box
• The environment (map component file)
• Within the data model rules
• At the command line
Through the Run Dialog Box
You can set the trace level through the Workbench Run dialog box by accessing the Trace Settings dialog box (accessed from the ellipses next to the Trace Level box). This method sets the trace throughout the complete translation session. Refer to the procedures earlier in this section for instructions on completing this dialog box.
Through the Environment
You can enter a trace level in the Other Environment Variables area of the Map Component Editor dialog box. This level is set throughout the complete translation session. It must be specified on the initial environment. Specifying a trace log level on any child environment will have no effect.
Using Data Model Rules
For any item in the translation session, you can define the trace reporting detail. Use the function SET_EVAR( ) and the keyword environment variable TRACE_LEVEL to do this.
[ ]
SET_EVAR(“TRACE_LEVEL”, 1023)
At the Command Line
You can specify a trace level when invoking a translation by using the -tl parameter and passing a numeric trace level value. For example, a complete trace (1023) is specified with the following line:
In UNIX, type:
inittrans -at train1.att -cs $OT_QUEUEID -tl 1023 -I
In Windows, type:
otrun.exe -at train1.att -cs %OT_QUEUEID% -tl 1023 -I
Refer to the earlier section, “Translating at the Command Line,” for
more details on command line parameters.
Viewing the Trace Log
You can view the trace log through Workbench, the UNIX command line, or by opening the trace log via an editor.
3. Choose View Trace and then select the trace log from the list
box.
A window displaying the trace log opens.
4. You can search for text within the trace log by using the Find
option. To do this, choose the Find button to open a Find
dialog box. Type the text in the Find What value entry box and
choose Find Next.
UNIX Trace Output
The name of the trace file is “$OT_QUEUEID.tr00000.log”, where:
• $OT_QUEUEID is a UNIX environment variable of the range [“pr”, “pp”, “pt”, “ts”, “01” ... “50”]
• “00000” is a sequential number beginning with zero, for the sequence of translations executed since the Control Server was started. Restarting the Control Server resets this sequence number back to zero.
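The naming scheme can be sketched as follows, assuming the five-digit sequence number is zero-padded as shown in the examples above; this is an illustration, not product code.

```python
# Sketch: build the trace log filename from the queue ID and the
# translation sequence number (which resets when the Control Server restarts).
def trace_log_name(queue_id, sequence):
    return f"{queue_id}.tr{sequence:05d}.log"

print(trace_log_name("ts", 0))   # ts.tr00000.log
print(trace_log_name("01", 12))  # 01.tr00012.log
```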
The trace files are placed into the log directory specified as part of the “otstart” or “ottg” shell scripts, which started the Control Server.
cservr $OT_QUEUEID -ld $LOGDIR > $LOGDIR/cservr.$OT_QUEUEID 2>&1&
The argument “-ld $LOGDIR” stands for log directory, which is previously set in the script to the current directory (LOGDIR=`pwd`) — the Control Server’s home directory. If the “-ld” argument is not supplied, the log directory defaults to “/tmp.”
The other translation logs, Session Log and Error Log, are also
placed in the log directory (-ld).
Session Log - $OT_QUEUEID.s00000.log, see LOG_REC( )
Error Log - $OT_QUEUEID.e00000.log, see ERR_LOG( ) and TRUNCATION_FLG
Using Windows Trace Log Output
A trace log file is generated for each translation session. The name of the trace file is “%OT_QUEUEID%.tr00000.log”, where %OT_QUEUEID% is a Windows environment variable.
“00000” is a sequential number for the sequence of translations executed since the Control Server was started. Restarting the Control Server resets this sequence number to zero.
The trace log files are placed into the log directory specified as part of the cservr.exe command that began the Control Server. The trace log file is created with the -cs argument stated in the command. For example:
Windows Trace Output
Windows uses the file named trace.log as the output for any translation. The file resides in the same directory where the Control Server was started. This file is produced when executing:
1. otrun.exe
2. the Workbench Run process
3. Trade Guide
To display the trace log, choose the View Trace button on the Run dialog box. View Trace looks in the working directory for the trace files. When the log directory (-ld) is set to something other than the working directory, you must manually locate the trace files to display them. They will be found in the directory specified by the “-ld” Control Server parameter. If this parameter is not set, the log directory will default to the working directory.
Each time you run a translation or run Trade Guide, the file trace.log is overwritten. For example, if you run a translation in Workbench that produces a complete trace.log, and you then run Trade Guide to see the Activity Tracking report of how the data was processed, your translation trace.log will be overwritten.
Source Processing:
Source data model processing - only if a source data model is declared in this environment
• Source data and access model filenames — “get_source acc OTFixed.acc model Example1s.mdl”
• Source item processing — repeat for each item
   Data model item label
   Access parsing / character reading - tags, containers, defining items
   Rules performed, modes (Present/Absent/Error), conditions and actions
• Source values (values are not seen if the data model is exited, i.e., “EXIT 503”)
   Data model item values
   MetaLink values (M_L->) declared within this source data model, in sequence declared
   Array values (ARRAY->) declared within this source data model, in sequence declared
   Temporary variable values (VAR->) declared within this source data model, in sequence declared
Debugging Hints Using the Trace Log
The following are hints on using the trace log for debugging.
To See Full Trace
Consider setting a full trace (1023) to see all details. To see values as they are formatted and written out off the value stack, you must use the full trace level setting.
Excessive Looping Concern
Use a trace level value of “12”; this shows only the items’ labels with their occurrences.
Pinpointing Rule Execution Errors
Set the trace level value to “16”; this shows the items with rule errors.
LastName - PRESENT rules
<3> returning eval not instantiated
LastName: ERROR->> status after PRESENT rules 139
vs.
LastName - PRESENT rules
LastName: status OK after PRESENT rules
Keyword Meaning
DM: XXXXXXX Identifies the initial reference to an
instance of a data model item.
ENTER fill_sdm( ): XXXXXXX parent->instance n Identifies the instance of parent group item XXXXXXX. The parent instance is incremented when a new hierarchical level is encountered.
Equal returns True/False Indicates the result of an evaluation or condition statement.
err_dump( ) Indicates that the content of the error stack is being discarded or reset to a non-error state.
err_push( ) status nnn msg “xxxxxxxx” Assigns an internal setting of an error value to alter the state of processing, depending on the keyword used, data encountered, or data not encountered.
ERROR Indicates that the rules that follow will
be performed if the item is found to be in
error. The translator determines which
mode it is in by evaluating the current
error code value after parsing a data
model item (if it is a defining item).
error number For example, “184”
Evaluated value Displays the contents of the item just
before performing rules associated with
the item.
exec_ops status Identifies the status returned by a
condition or actions.
FINISHED XXXXXXX: Identifies the end of processing of an occurrence of a data model item.
fp_save set to nnn in xxxxxxx Identifies the last position read from the input stream.
FUNC XXXXXXX set to value xxxx <nnnn> Identifies where an access model item is being set to a value of xxxx, where the decimal value of xxxx is nnnn. The data model “SET” functions (SET_DECIMAL, SET_RELEASE, SET_FIRST_DELIM, SET_SECOND_DELIM, SET_THIRD_DELIM, SET_FOURTH_DELIM, and SET_FIFTH_DELIM) set the access model items.
Function xxxxxx_in returns value: “yyyyyy” Displays the source access model function (xxxxxx_in) used to return a value of yyyyyy from the data input stream.
Function xxxxxx_out: dm yyyyyy pic FFFFFFF Displays the target data model output function called (xxxxxx_out) to define the item type for data model item yyyyyy. The output format for the item is defined with the mask format FFFFFFF.
get_source Where source data model processing
occurs.
infile “xxxxxxx” Specifies the name of the file used to
supply input data for processing.
initial pre condition Indicates that an access model function
XXXXXXX met call has returned a valid value for an
item pre-condition and assigned its
value to data model item XXXXXXX.
instance nnn Identifies which instance of the item was
acted upon. The instance is incremented
by a group. Instances are not
incremented by defining items or tag
items. Instances are reset when control
passes back to the parent item.
IO: char “x”, out of Indicates that data found in the input
char set stream is outside the range of the valid
characters defined for the item.
IO: func XXXXXXX Indicates that the access model function
called XXXXXXX is being called to return a
value to the data model.
level nnn Starting from the top of the data model
and working downward, the level is a
reference to the number of unique data
model group items encountered. This
keyword allows you to more easily
determine where in the data model the
item occurs. The level is also referenced
by the pipe ‘|’ symbols along the left
side of the trace file as each data model
item is encountered and when
processing of the item is finished. One
‘|’ is displayed for each level, i.e., “|||”
would be displayed at level 3.
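Because one ‘|’ is printed per level, the nesting level of any trace line can be recovered by counting its leading pipes. A small illustrative shell helper (not part of the product; the function name is an assumption):

```shell
# Illustrative helper: return the nesting level of a trace line by
# counting its leading '|' characters (one per data model level).
trace_level() {
  # Delete everything from the first non-pipe character onward,
  # leaving only the leading run of '|' characters, then count them.
  pipes=$(printf '%s' "$1" | sed 's/[^|].*$//')
  printf '%s\n' "${#pipes}"
}
```

For example, `trace_level '|| DM: FirstName instance 0 level 2'` prints 2, and a line with no leading pipes yields 0.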
litpush “X” Defines the value of a literal constant
usually assigned to a variable.
M_L VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all Meta_link variables on
the source and target data models.
Meta_link values are listed separately
for the source and target data models.
Matching XXXXXXX Shows the comparison performed for a
to YYYYYYY tag item. The data taken from the input
stream (XXXXXXX) is compared against
the tag (YYYYYYY) for a match.
max occurrence nnnn Identifies the maximum number of times
that an item has been defined to occur
successively in a looping sequence.
occurrence nnnn Identifies the number of times the item
was encountered within a looping
sequence.
outfile “xxxxxxx” Specifies the name of the file used to
write processed data out to the disk.
pre_cond xxxxxxx An item is identified if the data found
(not) found meets the requirements of pre_condition
- item - post_condition. This statement
indicates whether the pre_condition has
been found.
PRESENT Indicates that the rules that follow will
be performed if the item is found to be
present. The translator determines
which mode it is in by evaluating the
current error code value after parsing a
data model item (if it is a defining item).
An error code value of “0” defines
present mode - the data model item is
present.
put_target Where target data model processing
occurs.
radix Defines the character to be used to
signify the decimal place within a
numeric item definition.
read_set returning Displays the returned value of a
xxxx character string read in from the data
stream.
resetting fp to nnn Indicates that the data file pointer is
being reset to value “nnn,” usually the
last value referenced for functions such
as # SET_FIRST_DELIM,
# SET_SECOND_DELIM,
# SET_THIRD_DELIM,
# SET_FOURTH_DELIM, and
# SET_FIFTH_DELIM or when an item is
in error. The file pointer may also be
reset to the beginning of the file when
keywords REJECT or RELEASE are
encountered in the data model.
right nn .. nn For a data model item defined as a
numeric, this indicates the minimum to
maximum number of digits which may
appear to the right of the decimal point.
SENTINEL sequence Indicates that sentinels occur between
each grouping of occurrences.
SOURCE VALUES Indicates that keywords are marking the
start of a summarization of source
values. This summarization comes at
the end of the trace file listing for the
source data model.
SOURCE_TARGET Keywords specifying the names of the
put_target acc target access and data models being
xxxxxxx model loaded at this point in the
yyyyyyy.mdl trace/translation.
status Identifies the status returned by
operations performed on the parent
item.
statusc Identifies the status returned by
operations performed on an item’s
children.
TARGET VALUES Indicates that keywords are marking the
start of a summarization of target values.
This summarization comes at the end of
the trace file listing for a target data
model.
TIME_in The “_in” (for source data model
TIME_out processing) or “_out” (for target data
model processing) is added to function
names such as TIME and DATE to
indicate access model functions.
VALUE STACK Values parsed or constructed and placed
on the value stack at the beginning of
Phase 2 target processing.
VAR VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all temporary variables on
the source and target data models.
Variable values are listed separately
for the source and target data models.
VSTK The writing of each value off the value
stack during Phase 2 of target processing.
VSTK->dm xxxxxxx Shows a walkthrough of data model
dm xxxxxxx value value assignments and matching
nnnn attempts.
XXXXXX read_set err Indicates that a value read in from the
input stream is outside of the character
set XXXXXX.
XXXXXXX: status OK Indicates that rules on data model item
after PRESENT rules XXXXXXX were performed successfully.
Debugging when 1. Insert messages in the model using the SEND_SMSG function
Processing a Large to mark progress during the translation at a high level, for
Volume of Data example:
SEND_SMSG(2, “Field Content:”, Field1)
2. Turn the trace on and off from within the model to get more
detail:
SET_EVAR("TRACE_LEVEL", 1023)
Refer to Appendix B of this manual for a complete description
of these functions.
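When a model sends marker messages as described above, the shell can be used to find how far processing got in a large trace file. A minimal sketch, assuming the markers are written into a log file (the function and file names here are illustrative, not product options):

```shell
# Illustrative helper: print the last few occurrences (with line
# numbers) of a progress marker in a trace log, to show how far a
# large translation progressed before a problem occurred.
last_markers() {
  marker=$1
  logfile=$2
  grep -n "$marker" "$logfile" | tail -5
}
```

Running `last_markers 'Field Content:' trace.log` would list the line numbers of the last five markers emitted before the translation stopped.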
Example Trace Log The following is an example using trace level 1023, with all
options selected.
(Output begins here)
Application Integrator(tm) translator [ Version 1.41 - compiled:
01/15/96 16:28:28 ] Translation
(c) Copyright 1992-95 by GE Information Services & Initialization
TCS Enterprises, Inc.
All rights reserved.
Start of translation at Fri Jun 7 11:06:49 1996 Start Time
assignment to M_L->first
+++ EVAL +++ LastName
Evaluated value “Jones "
assignment to M_L->last
+++ EVAL +++ PhoneNumber
Evaluated value “3135401600”
assignment to M_L->phone
exec_ops status 0
| END RULE |
InputRecord: status OK after PRESENT rules
err_dump()
| FINISHED InputRecord: status 0 statusc 0 occurrence 3 max occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 66 in InputRecord
InputRecord Matching to dm->dh 1074430864
fill_sdm initial pre condition Rec_Code met
err_dump()
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 66 in FirstName
IO: func CHARSET called
IO: c = 83 <S> Set start_char 32 end_char 126
IO: c = 117 <u> Set start_char 32 end_char 126
IO: c = 101 <e> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Sue "
err_dump()
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 71 in LastName
IO: func CHARSET called
IO: c = 87 <W> Set start_char 32 end_char 126
IO: c = 105 <i> Set start_char 32 end_char 126
IO: c = 108 <l> Set start_char 32 end_char 126
IO: c = 108 <l> Set start_char 32 end_char 126
IO: c = 105 <i> Set start_char 32 end_char 126
IO: c = 115 <s> Set start_char 32 end_char 126
IO: read_set returning “Willis"
err_dump()
LastName: status OK after PRESENT rules
err_dump()
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 77 in PhoneNumber
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->first
Evaluated value “John "
assignment to FirstName
exec_ops status 0
| END RULE |
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->phone
Evaluated value “3135401600"
assignment to PhoneNumber
exec_ops status 0
| END RULE |
PhoneNumber: status OK after PRESENT rules
err_dump()
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->last
Evaluated value “Jones "
assignment to LastName
exec_ops status 0
| END RULE |
LastName: status OK after PRESENT rules
err_dump()
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->first
Evaluated value “Bob "
assignment to FirstName
exec_ops status 0
| END RULE |
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
vstk 1074524556
vstk dm PhoneNumber val 2125551212
VSTK-> dm PhoneNumber dm OutputRecord value 2125551212
vstk 1074524556
vstk dm PhoneNumber val 2125551212
VSTK-> dm PhoneNumber dm PhoneNumber value 2125551212
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: (999) 999@-9999
dm value “2125551212"
NUMERIC_out: return value “(212) 555-1212"
vstk 1074524568
vstk dm LastName val Green
VSTK-> dm LastName dm LastName value Green
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 1074524580
vstk dm FirstName val John
VSTK-> dm FirstName dm FirstName value John
Writing dm FirstName acc AlphaNumericFld
Writing dm FirstName acc CHARSET
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc INTEGER OBJ
vstk 1074524592
vstk dm PhoneNumber val 3135401600
VSTK-> dm PhoneNumber dm OutputRecord value 3135401600
vstk 1074524592
vstk dm PhoneNumber val 3135401600
VSTK-> dm PhoneNumber dm PhoneNumber value 3135401600
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: (999) 999@-9999
dm value “3135401600"
NUMERIC_out: return value “(313) 540-1600”
vstk 1074524604
vstk dm LastName val Jones
VSTK-> dm LastName dm LastName value Jones
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 1074524616
vstk dm FirstName val Bob
VSTK-> dm FirstName dm FirstName value Bob
Writing dm FirstName acc AlphaNumericFld
Writing dm FirstName acc CHARSET
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc INTEGER OBJ
vstk 1074524628
Viewing Input You can view the contents of the input and output files from within
and Output Files the Layout Editor of Workbench. Reviewing the actual data parsed
or constructed can be a helpful tool in debugging.
Source to Target Two new options on the Debug File drop-down menu are Source to
Mappings Report Target Map Listing and Data Model Listing. Each option opens its
own dialog box.
The Source to Target Maps dialog box is used to create a report that
shows the source data model item labels, the associated variable
labels, and the target data model item labels. The report can be
displayed on the screen, printed, or sent to a file.
The Data Model Listing dialog box is used to create a report that
shows the data model and offers the option of printing the data
model with or without rules. The report can be displayed on the
screen, printed, or sent to a file.
3. At the Source Data Model value entry box, type the filename of
the source data model to be used in the report;
– or –
Select the down arrow button to display a drop down menu.
Highlight the desired source data model.
4. In the Target Data Model value entry box, type the filename of
the target data model to be used in the report;
– or –
Select the down arrow button to display a drop down menu.
Highlight the desired target data model.
5. In the Label Sequence group box, choose the sequence in which
the report should appear. The option chosen will appear in the
first column of the report.
6. In the Report Format group box, choose the format of the
report. You may choose as many check boxes as necessary to
format the listing. The group box will default to “Source to
Target Direct.”
Source Not Report will list the data model items that
Mapped appear in the source model but do not get
mapped to a variable or to a target data
model item.
Source Indirect Report will list the data model items that
get mapped to a variable in the source
model but do not get carried over to the
target model.
Source To Report will list the data model items that
Target Direct get mapped to a variable and then mapped
to a target data model item.
Target Indirect Report will list the data model items in the
target model that get assignments from
variables, but the variables do not appear
in the source model.
Target Not Report will list data model items that
Mapped appear in the target model but have not
been mapped from a variable or source
data model item.
AmendmentSeqNo
ConsolidatedShipment
HSHRecordTypeID
MasterSeqNo
MessageType
ReceiverID
SenderID
TDECSeqNo
Source Indirect
The Source Indirect format lists the data model items that get
mapped to a variable in the source model but do not get carried
over to the target model.
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:01 Source to Target Mappings Page: 1
SOURCE INDIRECT
Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl
ArrivalDate ARRAY->DTM_2380
BusinessRegistrationNo ARRAY->RFF_1154
CountryofDispatch ARRAY->LOC_C157_3225
CountryofUltDestination ARRAY->LOC_C157_3225
CountryofOrigin ARRAY->FTX_4441_12
CustomsControlPoint ARRAY->LOC_C517_3225
DepartureDate ARRAY->DTM_2380
FOBValue ARRAY->MOA_5004
FlightNo VAR->TDT_8028
GoodDescriptionNo1 ARRAY->FTX_4440_1d
GoodDescriptionNo2 ARRAY->FTX_4440_2d
GoodDescriptionNo3 ARRAY->FTX_4440_3d
DIRECT
Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl
Target Indirect
This report lists the data model items in the target model that get
assignments from variables, but the variables do not appear in the
source model.
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:00 Source to Target Mappings Page: 1
TARGET INDIRECT
Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl
ARRAY->CNT_6069 CNT_01_01_6069
ARRAY->COM_3155_1 COM_01_02_3155
ARRAY->DTM_2005 DTM_01_01_2005
ARRAY->FTX_1131 FTX_03_02_1131
ARRAY->FTX_4440_5 FTX_04_05_4440
ARRAY->FTX_4451d FTX_01_4451_001
ARRAY->LOC_3227 LOC_01_3227
ARRAY->MOA_5025 MOA_01_01_5025
ARRAY->QTY_6063 QTY_01_01_6063
ARRAY->RFF_1153 RFF_01_01_1153
ARRAY->RFF_2_1153 RFF_2_01_01_1153
VAR_CNI_1490 CNI_1490
BGM-01_04_1000
BGM_04_4343
CNI_02_02_1373
CNI_02_03_1366
CNI_02_04_3453
CNI_03_1312
CNT_01_03_6411
CST_03_01_7361
CST_03_02_1131
CST_03_03_3055
CST_04_01_7361
CST_04-02_1131
Generating User- Beyond using the reports defined for you, Application Integrator
Defined Reports has provided the groundwork for application-specific report
generation. All Application Integrator reports are developed using
the same structures as data mapping (map component files and
models). To ease your generation of application-specific reports,
Application Integrator includes a set of models, map component
files, and other files that serve as templates for handling both
reports where the reporting data is known (for example, you are
reporting on an output file of X12 invoices (810s)) and others for
reporting on data where the content is not known in advance.
These files contain the logic to deal with the common report
characteristics:
r Pages are set to 66 lines in length
r Each report prints 57 lines per page
r Report headings include:
− Company name, centered
− Date and time, report title, centered page number
− Up to six lines of heading information
r Clean up of temporary files generated by the report
r Automatic calculation of report width: 80 column, 132
column, etc.
Reporting Where the The following diagram shows the flow of the generic report system
Data Content is Known for reports where data is pre-identified.
Printing and Generating A report is printed through the shell script OTReport.sh
Generic Reports (OTReport.bat on Windows), which can be invoked from the
command line using the following syntax:
In UNIX, type:
OTReport.sh <D/P> <specific_report.att> <columns>
In Windows, type:
OTReport.bat <D/P> <specific_report.att> <columns>
For the arguments:
<D/P> — Enter either ‘D’ for display or ‘P’ for printing of report.
<specific_report.att> — Enter the name of the specific report map
component file.
<columns> — Enter the number of columns the report is to be
printed in. This argument is optional and defaults to 132.
Examples:
For UNIX, type:
OTReport.sh P OTActR1.att 80
For Windows, type:
OTReport.bat P OTActR1.att 80
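The argument conventions above (a required D/P mode, a required report map component file, and an optional column count defaulting to 132) can be sketched as simple shell validation. This illustrates the calling convention only; it is not the shipped script, and the function name is an assumption:

```shell
# Illustrative sketch of the OTReport argument conventions:
#   $1 = 'D' (display) or 'P' (print)
#   $2 = specific report map component file (.att)
#   $3 = optional column count, defaulting to 132
check_report_args() {
  mode=$1
  report=$2
  columns=${3:-132}
  case "$mode" in
    D|P) ;;
    *) echo "usage: OTReport.sh <D/P> <specific_report.att> [columns]" >&2
       return 1 ;;
  esac
  printf '%s %s %s\n' "$mode" "$report" "$columns"
}
```

For example, `check_report_args P OTActR1.att 80` echoes the resolved arguments, while `check_report_args D OTActR1.att` falls back to 132 columns.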
UNIX
The shell script OTReport.sh invokes a translation, such as the
following:
inittrans -at OTRpt.att -cs $OT_QUEUEID
-DREPORT=$2 -DSESSION_NO=$$ -P 1 -DCOLUMNS=$3 -I
where
inittrans — Program that passes a request for translation to the
Control Server.
-at OTRpt.att — Specifies the first map component file with which
to begin the translation.
-DREPORT=$2 — Passes the second OTReport.sh argument (for
example, OTActR1.att) into the translation session, using the
environment variable “REPORT.”
-P 1 — Defines the translation queue priority to be 1.
(1=low priority, 99=high priority).
-cs $OT_QUEUEID — Identifies the Control Server with which to
communicate.
-DCOLUMNS=$3 — Passes the third OTReport.sh argument into
the translation session, using the environment variable
“COLUMNS.”
-I — Invokes the translation to run interactively (in the foreground).
The OTRpt.att map component file then attaches to the
environment specified in the environment variable “REPORT”
(OTActR1.att). Both source and target data models are defined
within this map component file. The source data model is used to
extract/gather information that will be used to construct the report.
The target data model then uses this information to generate the
body of the report. The report is output into a temporary file called
“<SESSION_NO>.tmp,” where <SESSION_NO> is the process ID
number. Once the body of the report is generated, processing
returns to OTRpt.att.
Windows
The batch file OTReport.bat invokes a translation, such as the
following:
otrun.exe -at OTRpt.att -cs %OT_QUEUEID%
-DREPORT=%2 -DSESSION_NO=$$ -P 1 -DCOLUMNS=%3 -I
where
otrun.exe — Program that passes a request for translation to the
Control Server.
-at OTRpt.att — Specifies the first map component file with which
to begin the translation.
-cs %OT_QUEUEID% — Identifies the Control Server with which
to communicate.
-DREPORT=%2 — Passes the second OTReport.bat argument (for
example, OTActR1.att) into the translation session, using the
environment variable “REPORT.”
-P 1 — Defines the translation queue priority to be 1.
(1=low priority, 99=high priority).
-DCOLUMNS=%3 — Passes the third OTReport.bat argument into
the translation session, using the environment variable
“COLUMNS.”
-I — Invokes the translation to run interactively (in the foreground).
OTRpt.att then attaches to the environment specified in the
environment variable “REPORT” (OTActR1.att). Both source and
target data models are defined within this map component file. The
source data model is used to extract/gather information that will be
used to construct the report. The target data model then uses this
information to generate the body of the report. The report is output
into a temporary file called “<SESSION_NO>.tmp,” where
<SESSION_NO> is the process ID number. Once the body of the
report is generated, processing returns to OTRpt.att.
Reporting Where the The following diagram shows the flow of the generic report system
Data Content is for reports where data has not been pre-identified.
Unknown
❖ Note: All printing using the lp command uses the shell
script OTPrint.sh. If necessary, the command in the shell
script can be modified to add the -d option to control the
printer or class of printer to which the report is to be sent.
Once you have completed the source and target data models, the
map component files, and the development testing, you are ready
to migrate your Application Integrator electronic commerce
application to a test area or production area.
This section provides background and instruction on migrating
from development to test or from development or test areas to
production areas. It includes procedures for importing and
exporting Profile Databases.
Permission The UNIX operating system assigns permissions to each file. For
Guidelines new and replacement files, the required permissions for owner,
group, and other must be specified for read, write, and execute.
For replacement files, it is customary to match the existing target
permissions unless other permissions are specifically indicated.
To delete an existing file, the user performing the migration will
need sufficient authority.
For Windows, the operating system controls the access permissions
and passwords for each file. These are set up by the system
administrator.
Profile Database The Profile Database (sdb.dat and sdb.idx files) can be updated via
Guidelines one of three methods: replacement, manual maintenance, or
export/import.
Replacement To replace the Profile Database, two files must be replaced. They
are:
r sdb.dat — data file of the Profile Database
r sdb.idx — index file of the Profile Database
When replacing these files, they must be replaced together as a
set; replacing one without the other will create a serious
problem.
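Because sdb.dat and sdb.idx must move together, a migration script can refuse to copy one without the other. A hedged sketch (the function name, directory layout, and ".bak" naming are assumptions, not product conventions):

```shell
# Illustrative sketch: replace the Profile Database pair as a set.
# Refuses to run unless BOTH new files exist, and backs up any
# existing copies first, so a partial replacement cannot occur.
replace_profile_db() {
  src=$1   # directory holding the new sdb.dat and sdb.idx
  dst=$2   # target Profile Database directory
  [ -f "$src/sdb.dat" ] && [ -f "$src/sdb.idx" ] || return 1
  for f in sdb.dat sdb.idx; do
    if [ -f "$dst/$f" ]; then
      cp "$dst/$f" "$dst/$f.bak"
    fi
  done
  cp "$src/sdb.dat" "$src/sdb.idx" "$dst/"
}
```

The up-front existence check is the point of the sketch: the copy never starts unless the full pair is available.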
Manual Maintenance of Substitutions, cross-references, and verification lists, all part of the
Xrefs/Codes Profile Database, can be maintained through Trade Guide menu
options by an operator or system administrator. In cases where
changes are not minor, using the Trade Guide export and import
features is the recommended method of migration.
❖ Note: Use the export and import features for trading partner
profiles and standards to migrate minor changes to the trading
partner profiles and cross-reference lists.
Export/Import You can update the Profile Database by exporting the entire
database, exporting portions of the database, and then importing
the database or database portions. Refer to the section, “Importing
and Exporting Profile Databases” later in this section for complete
instructions.
To Migrate *.acc Files 1. Obtain a list of access model files that are to be migrated.
Access model files will have a suffix of “.acc.”
2. Determine the migration mode. Which of these access models
are new, replacements, or are to be deleted from the target
directory?
3. Make a backup copy of any access files in the target directory
that will be affected (overwritten) by the migration.
4. Perform the migration. New access models are copied from the
source directory into the target directory.
Changed access models can be either replaced or edited.
When removing old access models, be sure they are not still in use
by any map component files or in use through an environment
variable.
In UNIX, to determine where access models are being used, use the
UNIX grep command to search for the name of the access model
throughout the entire directory. For example, the following
command returns a list of all files that reference
OTFixed.acc:
grep OTFixed.acc *
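When only the filenames are needed (for example, to feed a later script), grep's -l option trims the output to one name per matching file. An illustrative variant of the command above (the function name is an assumption):

```shell
# Illustrative variant: list only the names of the files in a given
# directory that reference a particular access model.
find_references() {
  model=$1
  dir=${2:-.}
  # -l suppresses the matching lines and prints each filename once;
  # errors from non-file entries in the glob are discarded.
  grep -l "$model" "$dir"/* 2>/dev/null
}
```

Running `find_references OTFixed.acc` in a map directory lists each file that mentions the access model, one per line.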
Considerations
r Data models can attach to other map component files. It is
important to identify all map component files that will be
affected by the use of any data model to be migrated. To
determine whether a data model performs an ATTACH,
use the UNIX grep command or an editor such as vi and
search the data model for the keyword ATTACH. If the
data model contains the keyword ATTACH, then the model
must be examined to determine what other processing will
occur and how it will affect production processing.
r Data models have the ability to execute external programs
or shells. When migrating a data model, it is important to
consider what effect it may have on existing processing in
the target area. To determine what external executions are
being performed by a data model, use the UNIX grep
command or an editor such as vi, and search for the string
EXEC.
r When Application Integrator generic data models are
modified, or general-purpose models are written, these
models should be copied into each development seat and
into the models’ release-version directory (/u/aidev/OT), so
that all current and future development seats use the latest
data models.
Map Component Map component files identify all resources (other files) used during
File (*.att) a processing session. When migrating the map component files, it
Guidelines is important to look at the resources referenced by the map
component file to determine what else will be affected.
Considerations
r If the map component file being migrated contains
references to data models, access models or other resources
already in production, then any dependency between them
must be identified.
r Map component files are usually small and can be viewed
or printed with little effort, which eases the process of
analysis.
r Map component files may contain key prefixes for
substitutions and cross-references (xrefs). If key prefixes or
other environment variables are used, the effect on existing
Profile Database entries must be considered.
Environment File
(*.env) Guidelines
Considerations If existing environment files change, then any model that uses those
files must be considered. Use the UNIX grep command to check for
these references.
In general, if you are unsure of whether or not a file has been
changed, you can use the UNIX diff command or the Windows fc
command to compare the files:
diff <first file> <second file> (UNIX)
fc <first file> <second file> (Windows)
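To compare an entire development area against its production counterpart before migrating, diff can also be run on the directories themselves; -q reports only which file pairs differ. A sketch assuming two flat directories (the function name is illustrative):

```shell
# Illustrative sketch: report which files differ between two areas
# before migrating. diff -q prints one line per differing pair
# instead of the full line-by-line differences; the '|| true'
# keeps diff's nonzero status from aborting a calling script.
changed_files() {
  diff -q "$1" "$2" || true
}
```

Only the files that actually changed appear in the output, so identical files produce no noise.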
❖ Note: If you are exporting the entire database, use the Export
option in the File menu, but if you are moving portions of the
database, such as a trading partner or code list, use the
Export option within the Trading Partner, Standards,
Application Recognition, or Post Processing dialog boxes.
Exporting Complete
Profile Databases
To export data 1. From the Trade Guide File menu, choose Export Values. The
dialog box shown below appears.
The File Name value entry box appears with a default filename
of the user’s ID followed by the “.exp” extension. You can
specify any filename.
2. Select Append To Output File to append to the existing file;
- or -
Select Overwrite Output File to overwrite the existing file.
3. Choose the OK button to export the Profile Database;
- or -
Choose Cancel to return to the Trade Guide main menu.
If you are overwriting a file when exporting data, the dialog
box shown below appears to verify that you want to overwrite
the existing file.
To import complete or 1. From the Trade Guide File menu, choose Import Values. The
selected portions of the dialog box shown below appears.
Profile Database
The File Name value entry box appears with a default filename
of the user’s ID followed by the “.exp” extension. You can
specify any filename you wish to import the data from.
2. Select Replace Existing & Add New Records to overwrite the
existing Profile Database with the new values;
- or -
Select Add New Records Only to add the new values to the
Profile Database.
3. Choose the OK button to import the file into the Profile
Database;
- or -
Choose Cancel to return to the Trade Guide main menu.
When you are importing data, a Question dialog box appears to
verify that you want to load the entries.
4. Choose the Yes button to load the entries. If you choose Yes, a
working dialog box appears showing entries loading. (You
may terminate the load process early by choosing the Stop
button);
- or -
Choose the No button to return to the Import Values dialog
box.
5. Once all records have been loaded, the Stop button changes to
Continue. Choose the Continue button to return to the Import
Values dialog box.
Exporting Selected You can migrate the following selected portions of the Profile
Portions of the Database:
Profile Database
r Trading Partner Profiles
r Cross-references (Xrefs)/Codes Standards (a complete
standard)
r Cross-references (Xrefs)/Codes Standards (a selected
version)
r Cross-references (Xrefs)/Codes Standards (a selected
category)
r Cross-references (Xrefs)/Codes Application Recognition
(for example, 840-Catering/DAL840s.att)
r Cross-references (Xrefs)/Codes Standards Post Processing
(for example, GMC/gmupd)
Export the selections following the instructions below. To import
this information into a second database, follow the procedures
provided earlier in this section.
To export a trading 1. From the Profiles menu of Trade Guide, choose Trading
partner profile Partners.
2. Select the trading partner and choose the Export button. The
Export Values dialog box appears:
The File Name value entry box appears with a default filename
of the user's ID followed by the “.exp” extension. You can
specify any filename.
To export 1. From the Xrefs/Codes menu of the Trade Guide, choose either:
cross-references/codes
r Standards
r Application Recognition
r Post Processing
If you select Application Recognition or Post Processing, skip to
Step 3.
2. From the Standards dialog box, do one of the following:
r Select a standard to export.
r Double-click a standard to see a list of versions and then
select a version to export.
r Double-click a version to see a list of categories and then
select a category to export.
3. With the information to export highlighted, choose the Export
button. A dialog box similar to the following appears.
The File Name value entry box appears with a default filename
of the user's ID followed by the “.exp” extension. You can
specify any filename.
4. Select Append To Output File to append to the existing file;
- or -
Select Overwrite Output File to overwrite the file.
Access model A collection of item type definitions and declarations (access model
items) used during a translation in conjunction with an
environment, as specified in a map component file (.att).
Access model item The definition of an item type used to recognize data in an input
stream and construct data for the output stream. The definition of
an item type may include the information that precedes or follows
the character set and any special function processing.
Action One of two parts of a rule. Once a condition is met, the operations
(actions) defined with the rule, such as assigning a value to a
variable, are performed.
See also condition.
AODL Access Object Definition Language. The language that gives the
access model the ability to describe components or types of items
that are both fixed and variable in length (delimited by specific
characters).
Base condition Part of the access model definition of an item type that describes
the allowable value set, for example, all alphabetic characters of the
set A-Z and a-z.
See also pre-condition and post-condition.
Code verification list A value parsed from the input file is verified for existence against a
list of valid codes using a code list label (lookup ID) specified in the
data model item's declaration (verify list ID).
The lookup has a prefix key associated with it, which is referenced
in the environment (Verify Key Prefix). The Profile Database is
hierarchically structured, with the levels of the hierarchy
developer-defined. This prefix key defines where in the structure of
the database the lookup is to occur. Because the prefix is always
combined with the lookup value, the same “Verify List ID” can be
used throughout the database hierarchy.
Composite/component Composite or component item types, like tag and defining item
item type types, are declared in the access model by the developer. These
item types are assigned a base value of “Container” in the access
models. Composite items in the X12 and UN/EDIFACT
environments would be defined by these “container” item types in
Application Integrator.
See also defining item type and tag item type.
Control Server The hub of the Application Integrator system, this process server
controls, resolves conflicts for, and schedules the many resources
used throughout the system, including translators, databases, input
and output files, etc. All requests (invoking a translator, database
lookups, status inquiries, etc.) go through the Control Server's
messaging system, which consists of an input (request) queue and
multiple output (response) queues; a response queue is established
for every requestor.
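The request/response queue arrangement can be sketched conceptually. This is an assumption-laden illustration of the pattern, not the Control Server's actual implementation; the requestor IDs and message strings are invented:

```python
from queue import Queue

# One shared input (request) queue; one response queue per requestor.
request_queue: Queue = Queue()
response_queues: dict[str, Queue] = {}

def connect(requestor_id: str) -> None:
    """A response queue is established for every requestor."""
    response_queues[requestor_id] = Queue()

def send_request(requestor_id: str, request: str) -> None:
    """All requests funnel through the single input queue."""
    request_queue.put((requestor_id, request))

def serve_one() -> None:
    """The server pops one request and replies on the caller's own queue."""
    requestor_id, request = request_queue.get()
    response_queues[requestor_id].put(f"done: {request}")

connect("trade_guide")
send_request("trade_guide", "invoke translator")
serve_one()
print(response_queues["trade_guide"].get())  # -> done: invoke translator
```

Giving each requestor its own response queue keeps replies from being misdelivered even though every request shares one inbound queue.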
Cross-reference A value parsed from the input file is looked up in the Profile
Database to be replaced with another value.
The lookup is accomplished by using the # XREF data model
function.
Data model item A representation of one component of a message. Each data model
item consists of two parts –declaration and rules. The declaration
part, which is required, defines the item’s label, reference to the
access model item type, size, occurrence, and other attributes. The
rules, which are optional, define actions to be performed that are
associated with the item.
Declare The first use of or reference to an item or variable, which brings it
into existence. Item definitions in the access and data models declare
the item so that it can be referenced later in the translation
definition process. Variables (environment, temporary, Array, and
VAR–>) are declared the first time they are used; at that point they
come into existence.
Defining item A data model item assigned a defining item type. Defining items
are the lowest level item in a data model. No other type of data
model item can exist below defining items hierarchically.
See also container item, group item, and tag item.
Defining item type An item type that identifies a data string's characteristics (name,
type, length, occurrence and format/mask).
Defining item types are developer declared in an access model. The
developer has control to create any and all types of defining items
that are necessary to properly identify/construct items within a
data stream. Once the environment associates an access model with
a data model, the defining item types can then be used in the data
model declarations to model the data structure.
Some common types of defining item types declared in the access
model are: numeric, alphanumeric, alpha, date, and time.
See also composite/component item type and tag item type.
Entity The name given to the concatenated string that identifies a trading
partner in the Profile Database. Each level of the trading partner
profile (interchange, functional group, or message) includes an
ENTITY statement in the Profile Database. Also referred to as the
entity lookup ID, database lookup, or simply lookup.
Environment file (.env file) An optional file that can be used to enhance the configuration of the
translator. Environment files declare user-defined environment
variables, for example:
ACTIVITY_TRACK_SUM=“DM_ActS”
ACTIVITY_TRACK_DET=“DM_ActD”
An environment file is loaded by using the function ENVIRON_LD( )
in the data model.
Functions Special processing routines that bring non-input-stream data into
the translation process, manipulate data, or output log messages.
Functions are used in both the access and data models. There are
two types of functions that can be invoked: standard and
user-written.
The standard Application Integrator functions fall into many
categories, including the following:
Category Functions Performed
Database Cross-referencing, code verification, update
database
Date/time Get system date and time
String Concatenate, determine a string's length, extract
a substring
Environment Get and set an environment variable's value
Error Check the current error code value, set the value
Logging Output various log records - archive, error,
general messages
Other Reset a MetaLink list pointer, declare character
set
Hierarchy Refers to the tiered structure of data model items in the data
structure. Data model items are defined in relationship to other
items (parent, child, sibling); in this way, relationships such as
header records to detail records are established.
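The parent/child/sibling relationships can be sketched as a simple tree. This is a generic illustration in Python, not Application Integrator's modeling notation; the item labels are invented:

```python
# Each item records its parent; siblings are items sharing a parent.
class Item:
    def __init__(self, label: str, parent: "Item | None" = None):
        self.label = label
        self.parent = parent
        self.children: list[Item] = []
        if parent is not None:
            parent.children.append(self)

# A header-to-detail relationship falls out of the tree structure:
header = Item("InvoiceHeader")
detail = Item("InvoiceDetail", parent=header)
qty = Item("Quantity", parent=detail)

assert detail in header.children     # detail records belong to the header
assert qty.parent.parent is header   # ancestry traces back to the header
```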
Map component file (.att file) A file that defines the resources that are pulled together for a
portion of, or a complete, translation process – an environment.
Multiple environments are typically brought together to form a
complete translation session.
Example of a map component file’s content:
Input File =
Output File =
Source Access Model =
Source Data Model =
Target Access Model =
Target Data Model =
Substitution Key Prefix =
Xref Key Prefix =
Verify Key Prefix =
Trace Level =
Additional Parameters =
(Additional Parameters represent an unlimited number of
developer-defined environment variables.)
See also environment.
Modeling The creation of models that describe the content of input and
output data streams and define and control the processing of each.
Pre-condition Part of the access model definition of an item type that describes
any rules about the data that precedes the item, for example, a
leading delimiter.
Post-condition Part of the access model definition of an item type that describes
any rules about the data that follows this item, for example, a
trailing delimiter.
Scope Refers to the range or period when data is available during the
translation process — from the point the data is declared (comes
into existence) to the point when the data ceases to exist (the data is
not associated with a label any more).
Source data model Data model that applies to the input (incoming) data to be
processed.
Substitution value A value that is associated with a reference label. The label becomes
a database lookup key, allowing substitution of trading partner-
specific information into a generic data model or environment
definition.
Tag item A data model item assigned a tag item type. A tag item in a data
model identifies different records or segments in the data stream.
An application record typically contains a record type or code.
Records in the public standards X12 and UN/EDIFACT contain a
segment ID that is the “tag.”
See also container item, defining item, and group item.
Target data model Data model that applies to the output (outgoing) data to be
processed.
Testing functional area Also referred to as a testing system or testing area. The name given
to an Application Integrator runtime system that has been set aside
for the formal testing of data models before migration to a
production area. This area exists only in UNIX environments.
See also development functional area and production functional area.
Trace log The log of the translation process. A trace log (also called a trace)
shows the process flow through the data model(s), including the
assignment of variables and their associated values, conditions and
actions, and map component files. The trace log can be set to
various levels from minimal to full details (for example comparing
a character to the current character set). The trace log provides an
immediate and detailed debugging tool.
Translation session ID The number of the last translation session used, stored in the
Translation Session ID file (tsid). The session number is used to
create unique administration records and filenames.
User slot A connection slot on the Application Integrator Control Server. A
user slot is opened each time the Trade Guide is invoked, a
translation session is started, or a Control API (C object model) is
linked to the Control Server.
Value stack A list of values in memory where constructed data is stored. In
Phase 1, values assigned to a defining item are stored on the value
stack in the order assigned. Phase 2 references the value stack to
write the values out while following the flow of the data model
structure. The values in the value stack consist of the actual data
and the label associated with the defining item.
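The two-phase behavior described above can be sketched conceptually. This is an assumption about the mechanism for illustration only (the function names, labels, and values are invented; it is not the product's actual code): Phase 1 pushes (label, value) pairs onto the stack in assignment order, and Phase 2 walks the output structure emitting the stacked values.

```python
# The value stack holds (label, actual data) pairs, as the entry describes.
value_stack: list[tuple[str, str]] = []

def phase1_assign(label: str, value: str) -> None:
    """Phase 1: store each constructed value, in the order assigned."""
    value_stack.append((label, value))

def phase2_write(structure: list[str]) -> list[str]:
    """Phase 2: follow the data model structure and write stacked values.

    Simplification: duplicate labels keep only the last value here.
    """
    by_label = dict(value_stack)
    return [by_label[label] for label in structure if label in by_label]

phase1_assign("PartNumber", "X100")
phase1_assign("Quantity", "12")
print(phase2_write(["PartNumber", "Quantity"]))  # -> ['X100', '12']
```

Note how the output order follows the target structure handed to Phase 2, not the assignment order, which is the point of separating the two phases.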
Verify list ID A key value that can be looked up in the Profile Database for
verification purposes. A value parsed from the input file is verified
for existence against a list of valid codes using a code list label
(Verify List ID) specified in the data model item's declaration.
#CHARSET function
    defining, 27
#DATE function, 7, 81, 287, 316
#DATE_NA function, 27, 81
#FIFTH_DELIM function
    defining, 27
#FINDMATCH function
    defining limit, 278
#FIRST_DELIM function
    base values for, 27
    post-condition values, 29
#FOURTH_DELIM function, 27
#LOOKUP function, 27, 287
#NUMERIC function, 28, 73, 287, 316
#NUMERIC_NA function, 28, 73
#SECOND_DELIM function, 26, 27
#SET_FIFTH_DELIM function, 346
#SET_FIRST_DELIM function, 346
#SET_FOURTH_DELIM function, 346
#SET_SECOND_DELIM function, 346
#SET_THIRD_DELIM function, 346
#THIRD_DELIM function, 26, 27
#TIME function, 28, 82, 287, 316
#TIME_NA function, 28, 82, 83

$

$ (dollar symbol), 74, 80
$ (substitution) function, 28, 133, 142, 224, 284

A

About
    Application Integrator help option, xv
About Workbench information box, 21
Absent mode rules, 104
Access icon
    description of, 34
    toggling display of, 51
Access model
    base, 24
    definition of, 403
    list of standard models, 24
    overview, 7, 24
    post-condition, 24, 29
    pre-condition, 24
    viewing contents of, 31
Access Model dialog box, 49
Access model item
    definition of, 403
Action
    definition of, 403
Administration database, 9
    definition of, 403
    overview, 11
Aliases, 222
AODL (Access Object Definition Language)
    definition of, 403
L

Labels
    data modeling limits, 300
Layout Editor
    closing, 99
    Data Model menu, 48
    Debug menu, 55
    Edit menu, 40
    File menu, 37
    Help menu, 55
    menus, 37
    overview, 34
    overview of window, 33
    restoring, 39
    toggling access icons, 51
    toggling grid lines of, 53
    View menu, 49
    window, 17
Layout Editor dialog box, 17
Listing disk/tape contents, xx
Literals
    inserting into rules, 124
    masking for, 80
LKUP function, 85, 287, 295
LOG_REC function, 289, 336
LOOKUP_KEY keyword environment variable, 85, 171, 295, 322
Loop control
    enabling, 156
    Loop Control Needed dialog box, 164
    overview, 162
    source loop rules, 165
    target loop rules, 167
    troubleshooting, 163
    undoing, 162, 165
    using, 162
Loop Control Needed
    dialog box, 164
    using, 164

M

Map component file
    naming conventions, 280
MapBuilder, 20, 42, 110, 161
    default values, 151
    definition of, 410
    icon, 15
    loop control, 162
    mapping data, 157
    modeling session helps, 157
    overview, 2, 110, 151
    Preferences dialog box, 151, 153
    processing messages, 161
    symbol, 158
    using, 152, 158
Mask
    definition of, 411
Masking characters
    for dates, 81
    for justifying, 78
    for literals, 78
    for numbers, 70
    for positive/negative sign, 75
    for time, 82
    for triads, 78
Math operators
    inserting into rules, 130
Message queues
    specifications, 267
    using, 211
MetaLink
    as a variable, 406, 415
    clearing values, 135
    comparing to Array, 107
    counters for, 90
    declaring references, 172
    definition of, 8, 107, 411, 414
    EXPORT, 136
    increment, 287
    description of, 113
Rule Notebook
    description of, 112
RuleBuilder, 110
    accessing, 117
    Arrays tab, 131
    Conditions tab, 127
    Data model items tab, 129
    Declarations tab, 138
    definition of, 413
    dialog box, 17, 48, 111, 118
    Edit menu, 115
    File menu, 114
    Functions tab, 134
    Help menu, 117
    icon, 34
    Keywords tab, 135
    MetaLinks tab, 132
    Operators tab, 130
    overview, 2, 104, 110
    Rule Notebook, 111
    Substitutions tab, 133
    tabs, 127
    toolbar, 113
    Variables tab, 131
    window, 17
Rules
    adding, 119
    applying changes, 120
    checking the syntax of, 140
    completing via prompts, 140
    copying to Clipboard, 139
    cutting to Clipboard, 139
    definition of, 413
    methods of creating, 110
    modifying, 119
    overview, 9, 104
    reasons for using, 9
    types of conditions, 106
    using to set trace level, 332
Run dialog box, 55, 229, 235, 240, 246, 247, 252, 257, 258, 297, 311, 317, 330
Runtime
    syntax checking, 146

S

Save As dialog box, 62, 98
Saving
    data model, 97
    standard data models under new names, 62
Scope
    definition of, 413
Scroll bars, 34
Search-Type dialog box, 366
Segment
    data mapping for, 4
Semantic
    definition of, 414
Session log
    overview, 13
Session number, 13, 55, 171, 309, 328, 375
    translation session ID, 415
Session Output dialog box, 231, 235, 236, 240, 241, 246, 247, 252, 253, 257, 259, 323, 325
SET_CHARSET function, 27
SET_DECIMAL function, 73, 344
SET_ERR keyword, 137, 290, 291
SET_EVAR function
    enveloping, 189
    explanation of, 292
    rules, 332
    using with trace, 332, 348
SET_FIFTH_DELIM function, 344
SET_FIRST_DELIM function, 29, 344
    description, 27
SET_FOURTH_DELIM function, 344
SET_LKUP function, 295
    Help menu, 21
    Layout Editor window, 34
    main window, 15
    overview, 1, 2
    saving changes in, 37
    Tools menu, 20
    window components of, 15

X

XREF function, 187, 188, 189, 295
XREF_KEY keyword environment variable, 171, 187, 189, 295, 322
Xrefs/Codes dialog box, 295, 296, 297