5125.71-7

EDI Application Integrator


Transaction Modeler Workbench
User’s Guide
(Release 3.1)

August 1999

Copyright © 1999
GE Information Services, Inc.

This document is produced by General Electric Company, U.S.A., which is not connected with the General Electric Company p.l.c. of England.
DISCLAIMER
Information in this document is subject to change without notice. GE Information Services reserves the right to change (upgrade) data to provide the most accurate, reliable quality product available. Specific mention of a product in this document is not a guarantee by GE Information Services of complete hardware and software compatibility with your data processing system. If you have questions about hardware and/or software compatibility, please contact your GE Information Services representative.
The following electronic data interchange (EDI) standards are developed and maintained by these organizations:

TRADACOMS     Article Number Association
X12           Accredited Standards Committee (ASC) and Data Interchange Standards Association (DISA), the secretariat and administrative branch for ASC X12
UN/EDIFACT    Data Interchange Standards Association (DISA), the secretariat and publisher for the Pan American EDIFACT Board (PAEB)
TRADEMARKS
Trade Guide, Trade Guide for System Administration, Transaction Modeler Workbench, MapBuilder, RuleBuilder, Interactive Gateway Extension, and User Exit Extension are trademarks of RMS Electronic Services, Inc.
c-tree and c-tree Plus are trademarks of FairCom Corporation.
Cleo is a registered trademark of Interface Systems, Inc.
DEC, Digital Alpha Station, and Digital UNIX are trademarks of Digital
Equipment Corporation.
eXceed is a trademark of Hummingbird Communications, Inc.
HP, Hewlett-Packard, and HP-UX are registered trademarks of Hewlett-Packard.
Intel is a registered trademark of Intel Corporation.
IBM, AIX, RISC System/6000, and RS6000 are trademarks of International Business Machines Corporation.
SCO is a trademark of the Santa Cruz Operation, Inc.
SPARC is a trademark of SPARC International, Inc.
Sun, Sun Microsystems, and Solaris are registered trademarks of Sun Microsystems, Inc.
XENIX, NT, Windows and Microsoft are registered trademarks and WinSock is a
trademark of Microsoft Corporation.
All product names and corporations mentioned in this document may be
trademarks, registered trademarks, or copyrighted by their respective owners.
Table of Contents

Preface.................................................................................................................................... iii

Section 1: Overview of Data Modeling ............................................................................ 1


Overview of Workbench Features ...............................................................................................2
Defining the Structure of the Input and Output Data...............................................................3
Associating Input Data with Output Data..................................................................................8
Other Data Modeling Components............................................................................................11
Accessing Workbench..................................................................................................................14
Workbench Menus .......................................................................................................................19

Section 2: Creating Source and Target Data Models ................................................... 23


Understanding the Role of the Access Model ..........................................................................24
Overview of the Layout Editor Window ..................................................................................33
Creating A Data Model ...............................................................................................................58

Section 3: Building Rules into Data Models ............................................................... 103


Overview of Rules Entry ...........................................................................................................104
Using RuleBuilder ......................................................................................................................111
Using MapBuilder......................................................................................................................151

Section 4: Creating Environments................................................................................. 169


Understanding Environments ..................................................................................................170
Defining a Map Component File..............................................................................................176
Map Component Files for Enveloping/De-enveloping ....................................187
Compliance Checking................................................................................................................192
Using Extended Access Device Types.....................................................................................211
Application Integrator Sockets Examples ...............................................................................212

Section 5: The Data Modeling Process ......................................................................... 269


List of Steps to Data Modeling .................................................................................................270
Notes on Data Model Development ........................................................................................299
Application Integrator Model Worksheets .............................................................................303

Workbench User’s Guide i
Section 6: Translating and Debugging.........................................................................307
Overview of Translating and Debugging............................................................................... 308
Translating Using Workbench ................................................................................................. 311
Translating at the Command Line........................................................................................... 319
If the Translator Does Not Execute Successfully ................................................................... 328
Using the Translation Trace Log.............................................................. 330
Understanding the Trace Output ............................................................................................ 336
Viewing Input and Output Files.............................................................................................. 366
Using Trade Guide Reporting Features to Debug................................................................. 375

Section 7: Migrating to Test and Production Functional Areas...............................383


Planning Development Migration ........................................................................................... 384
Migrating Applications ............................................................................................................. 387
Importing and Exporting Profile Databases........................................................................... 396

Glossary...............................................................................................................................403

Index.....................................................................................................................................417



Preface

The Transaction Modeler Workbench User’s Guide, hereafter referred to as the Workbench User’s Guide, provides information on using the Application Integrator development product. This preface contains information on:
• Application Integrator documentation
• Workbench documentation
• Prerequisites for Workbench
• Documentation conventions
• Mouse and keyboard conventions
• Keyboard shortcuts
• On-line Help
• Application Integrator Customer Support

❖ Note: Application Integrator software is sold and supported as a stand-alone (single user) system on the Windows 95, Windows 98, and Windows NT 4.0 operating systems. Application Integrator software is sold and supported as a concurrent (multiple user) system on the UNIX and Windows NT 4.0 operating systems. Your system or network administrator can tell you whether your Application Integrator software is a single or concurrent user system.

Within the Application Integrator documentation, items directed specifically to the Application Integrator Windows single user system will be referred to as “single user” and to the Application Integrator Windows multiple user system as “multiple user.”


About Application Integrator Documentation

The Application Integrator documentation set consists of the following manuals:

ASC X12 Standards Implementation Guide
    Provides information on using Application Integrator in an ASC X12 Standards Implementation.

CII/EIAJ Standards Implementation Guide (if licensed)
    Provides information about using Application Integrator in a CII/EIAJ Standards Implementation.

Communication Link Documents (if licensed)
    Provides information on Application Integrator software that supports communication with major value-added networks (VANs).

Interactive Gateway Extension for Java User’s Guide (if licensed)
    Provides information on using an optional Application Integrator component that allows for direct communication to the Control Server from within a Java application program.

Interactive Gateway Extension User’s Guide
    Provides information on using an optional Application Integrator component that allows for direct communication to the Control Server from within a C application program.

Migration to Application Integrator
    Provides procedures for migrating applications/databases from version 2.0 to version 3.0. It also contains procedures for installing the update software.

Application Integrator Installation Guide
    Provides installation procedures for the various platforms and operating systems supported by the product.

Application Integrator Localization to Japanese Handbook (if licensed)
    Provides information about making Application Integrator models operate properly on Japanese computers.


Release Notes
    Provides an overview of the latest release.

System Configuration Requirements Document (UNIX only)
    Provides information on system resource requirements and procedures for configuring the Control Server on various UNIX platforms.

TRADACOMS Standards Implementation Guide
    Provides information on using Application Integrator in a TRADACOMS Standards Implementation.

Trade Guide for System Administration Guide
    Provides instructions on using the Trade Guide for setting up and running translations, and administering the system.

Transaction Modeler Workbench User’s Guide (this manual)
    Provides information and procedures for developing data models. Used together with both the Trade Guide for System Administration Guide and the applicable standards implementation guide.

UN/EDIFACT Standards Implementation Guide
    Provides information on using Application Integrator in a UN/EDIFACT Standards Implementation.

User Exit Extension User’s Guide
    Provides information on using the Application Integrator feature that allows you to write user-defined functions which can be invoked (like standard Application Integrator functions) during data modeling.

ODBC Developer’s Guide (if licensed)
    Provides information about connecting to different Structured Query Language (SQL) databases and creating Open Database Connectivity (ODBC) data sources.


About the Workbench User’s Guide

This guide is designed to give you a working understanding of the operation and capabilities of the software. The information in this document is arranged so you can quickly and easily understand the Workbench features, menus, and operations.
The Transaction Modeler Workbench User’s Guide is divided into the following sections and appendixes:

Section 1: Overview of Data Modeling
    Provides an overview of the Workbench features and terminology.

Section 2: Creating Source and Target Data Models
    Provides complete procedures for creating data models.

Section 3: Building Rules into Data Models
    Provides procedures for using RuleBuilder® and MapBuilder™ to incorporate logic into data models.

Section 4: Creating Environments
    Provides instructions for creating the environment for electronic commerce.

Section 5: The Data Modeling Process
    Provides a discussion and steps to the data modeling process, including Application Integrator conventions and tips.

Section 6: Translating and Debugging
    Provides procedures for translating and debugging data models.

Section 7: Migrating to Test and Production Functional Areas
    Provides background information and steps for migrating models to a test or production functional area.

Appendix A: Application Integrator Files
    Lists all the files shipped with Application Integrator, including Workbench files.

Appendix B: Application Integrator Model Functions
    Lists the complete functions and keywords used with Workbench to create powerful data models.


Appendix C: ASCII Character Set
    Lists the various forms of the ASCII character set (for example, decimal and hexadecimal).

Appendix D: Quick Reference Sheets
    Provides quick reference sheets of the Workbench features.

Appendix E: Application Integrator Utilities
    Provides an explanation of Application Integrator scripts and programs.

Appendix F: Runtime Errors
    Provides a list and description of the error codes you might encounter while modeling, translating, and debugging.

Glossary
    Provides a description of the Application Integrator terminology used in this manual.

Index
    Provides an alphabetical list of subjects and corresponding page numbers where information can be found.


Prerequisites for Workbench

System Prerequisites

For information on the prerequisite hardware and software necessary to run Application Integrator on UNIX systems, including the Workbench and Trade Guide components, refer to the Application Integrator System Configuration Requirements document.
For information on the prerequisite hardware and software necessary to run Application Integrator on Windows systems, including the Workbench and Trade Guide components, refer to Section 2 of the Application Integrator Installation Guide.

❖ Note: We recommend using the Windows default colors for your Application Integrator Windows applications. Changing or customizing your display colors may result in readability problems when using Workbench in a Windows environment.

User Prerequisites

It is helpful to have the following background before using Workbench:
• Mouse and graphical user interface (GUI) experience – windows and dialog boxes
• Basic knowledge of your operating system and an on-line editor
• Program concept knowledge, including
  − an understanding of data organization
  − an understanding of data manipulation
  − an understanding of program process flow
  − an understanding of testing and debugging
• Knowledge of electronic data interchange, database management, and systems reporting
• Knowledge of the standards implementation applicable to your environment


Documentation Conventions

Typographical Conventions

Regular    This text style is used in general.
Courier    This text style is used for system output and syntax examples.
Italic     This text style is used for book titles, new terms, and emphasis words.

User Input

In this document, anything printed in Courier and boldface type should be entered exactly as written. For example, if you need to enter the term “userid,” it will be shown in the documentation as userid.

Notes, Hints, and Cautions

Notes provide additional information and are boxed inside the text, using the following format.

❖ Note: This is a note.

Hints provide helpful tips on performing operations in a quicker manner. They are formatted in the same way.

❖ Hint: This is a hint.

Cautions provide information on practices or places where you could possibly overwrite data or program files. They appear in the following format.

❖ Caution: This is a caution.


Screen Images

The screen images in this manual were taken from the Windows version of Workbench running on a Windows 95 platform. If you are running Workbench on Windows NT or UNIX, the actual screens (windows and dialog boxes) may differ slightly in appearance. Differences between platforms are noted throughout this manual where appropriate.

Tables

Tables appear frequently in this manual and are indicated by headings followed by dark underlining, then the body of the table without gridlines (in most cases). The end of the table is indicated by double underlines.

Mouse and Keyboard Conventions

In most cases, you can use either a mouse or a keyboard to enter, view, and manipulate the windows and dialog boxes of Application Integrator.

Keyboard Shortcuts

In many instances, you will have the option of using either the mouse or the keyboard to perform an action. Keyboard shortcuts are shown to the right of the menu item. For example, to use the Find shortcut you would press and hold the Ctrl key and then press the F key. This shortcut is shown as Ctrl+F on the drop-down menu, and that is how shortcuts are listed in this manual.


On-line Help

Workbench comes with a Help system to assist you on-line while setting up and maintaining translations and administering the system. The Application Integrator on-line Help system opens a Help window with search and navigation features.

To Get Help on Help

1. From the Workbench main menu, choose Help. The Help menu appears. From the Help menu, choose Error Number Reference.
2. From the dialog box, choose the Help menu and then choose the Help on the Browser option. You can also press Ctrl+H.
   A Help entry on the Help system itself appears.

[Screen image: the Help window, with the Section value entry box labeled]

3. Click a hypertext entry (any underlined text) or use the Section value entry box to learn more about on-line Help.
4. To close the Help entry and Help browser, from the File menu, choose Close Browser. You can also press Ctrl+W.

❖ Note: The Section value entry box on the Help window provides a means to quickly access all the major Help menus. Click the indicator (in UNIX, a small rectangle; in Windows, a down arrow) and a list of all major Help menus appears. Click any menu name to move between them.


To Use Application Integrator Help

1. From the Workbench main menu, choose Help. The Help menu appears.
2. From the Help menu, choose one of the following options:
   • Error Number Reference
   • Keyword/Function Reference
   • About

► If you choose Error Number Reference . . .
   • When a listing of error code ranges appears, click the range on which to obtain Help.
   • When a listing of error codes appears, click once on the exact error message for assistance on it.
   A Help entry, such as the following, appears:

To search for a Help entry

a. From the Navigate menu of any Help entry, choose Search.
b. From the Search window, such as the one below, either scroll to find a keyword matching your inquiry, or start typing a keyword and then double-click the keyword in the list box that matches your inquiry.


c. Click the Show Entries button. A Search Navigator window, similar to the following, appears next to the actual Help entry.
d. Double-click the desired Help entry, or highlight the desired Help entry and choose Go To. The appropriate Help entry appears. The following is the actual Help entry for the exception processing option.


► If you choose Keyword/Function Reference
   • Scroll to the desired function or topic.
   • Click on it.
   A Help entry, such as the following, appears.


► If you choose About
A Help entry, such as the one below, appears when you choose the menu selection.

To close the Help entry and Help browser, from the File menu,
choose Close Browser. You can also press Ctrl+W.


Application Integrator Customer Support

Application Integrator support is offered through a separate Support Services Agreement. The type of support offered is described within the contract. The support services cover:
• Application Integrator program updates
• Application Integrator model updates (for example, generic and public standards-specific data models)
• Application Integrator standards updates (for example, ASC X12 annual version updates)
• Help Desk Support

Calling for Customer Support

To more effectively help you when you call in for support, follow these steps:
1. If possible, attempt to resolve the question internally or through the Application Integrator documentation, including the print, on-line, and training documentation.
2. Make sure you have copied down the exact Application Integrator version for which you are seeking assistance. The version is found by selecting the About option on the Help menu of Workbench. Also copy down the compile version date of the Control Server (called cservr in UNIX and cservr.exe in Windows) and the translator (called otrans in UNIX and otrans.exe in Windows).

UNIX Users
To access these dates, type at the command line:
strings <program name> | grep otBuild
where <program name> is either “cservr” or “otrans”

Windows Users
To access these dates:
a. From File Manager or Windows Explorer (Windows 95 or
NT 4.0), right-click the filename cservr.exe (in Windows).
b. With the filename highlighted, choose Properties from the
File menu. The “Properties” dialog box opens and displays
information such as the filename, path, last change, version,
copyright, size, and other attributes.


3. Copy down the exact release number of the operating system under which you are running Application Integrator, for example: HP10.01 (include interim release numbers, not simply HP or HP10) or Windows NT 4.0 (include interim release numbers, not just Windows NT).
4. Print the Updates log found by choosing the Updates option on the Help menu of Trade Guide. This log shows the installation sequence, as in the following illustration:

5. Make sure the person placing the support call has taken
Application Integrator training and has a thorough background
on the issue for which you are seeking assistance.
6. Call the GE Information Services Application Integrator Help
Desk for support at (248) 324–1242.


Sending a Copy of Your Files to Customer Support

At times, it may be necessary for an example of your problem to be sent to the support staff. For consistency and compatibility across the various platforms that execute Application Integrator, files should be sent to Customer Support as follows:

Backing Up and Restoring UNIX Files

UNIX users should back up models using the UNIX “tar” command. Try to back up with relative paths, not absolute paths, so that Customer Support can restore the files into any directory.
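For instance, a relative backup could be made as follows. The directory and file names here are illustrative only; substitute your own model directory and media device:

```shell
# Create a sample model directory and archive it with relative paths.
# Archiving "." (rather than an absolute path) lets the recipient
# restore the files into any directory they choose.
mkdir -p /tmp/models_demo
echo "sample model" > /tmp/models_demo/invoice.mdl
cd /tmp/models_demo
tar -cvf /tmp/models_backup.tar .
tar -tvf /tmp/models_backup.tar    # list the archive to verify its contents
```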

Media     Load Command

CD-ROM    mount <devname> <mountpoint>
          cp <mountpoint>/<OTsys>/* .
          umount <mountpoint>   (after the copy is complete)

Tape      tar -xvf <devname>
          (either 150 MB ¼-inch tape, 4 mm DAT tape, or 8 mm DAT tape)

where:

<mountpoint>   specifies the directory of your system where the device will be mounted, e.g., “/CDROM”

❖ Note: Sun operating systems do not require a mount to be performed. Sun automatically performs an automount.

<devname>      specifies the device name of the media drive of your system, e.g., “/dev/dsk/c0t5d0” or “/dev/rmt0”

<OTsys>        specifies the name of the directory that stores the files on the CD-ROM, either TRANDEV for development systems or TRANPROD for a production/test system.

To restore the files sent from Customer Support, use the tar
command with the -xvf options, for example,
tar -xvf /dev/fd0


Backing Up and Restoring Windows Files

Microsoft Windows has various backup programs available, depending on the particular version of the software you are running. Users should either compress all the files together using a program such as winzip.exe, or send the files uncompressed (assuming they are small in number and size). Refer to your Windows documentation for more information. Be sure to inform Customer Support of the backup/compression program used to send the files, and supply them with the decompression program when necessary.
Customer Support will return the files in a similar format.


Listing the Contents of the Disk or Tape

UNIX Users
To list the contents of the disk/tape, use the tar command with the -tvf options, for example:
tar -tvf /dev/fd0

Windows Users
There is no method for reviewing the contents of the Windows
installation CDs. To review the contents of the installation media,
you must first install the product and then use File Manager,
Windows Explorer, or the MS-DOS dir command to list the
contents of the \Trandev, \Trantest, or \Tranprod directories.



Section 1
Overview of Data Modeling

The Transaction Modeler Workbench (hereafter referred to as


Workbench) is a graphical user interface tool that enables you to
create translation models for electronic commerce transactions.
This section provides an overview of the features and terminology
of Workbench. This section also provides instructions on accessing
Workbench.
Workbench is the development component of the Application
Integrator system. For a complete overview of Application
Integrator, see Section 1 of the Trade Guide for System Administration
User’s Guide.


Overview of Workbench Features

Workbench provides you with a graphical user interface (GUI) in which to map data, or in Application Integrator terms, create data models. These models represent the data structure and the necessary rules for processing the input and output data files of your electronic commerce. Workbench supports public standards, such as ASC X12, UN/EDIFACT, and TRADACOMS, as well as proprietary and other non-standard formats.
Workbench Development Tools

Data modeling, sometimes referred to as data mapping, consists of two processes:
• Defining the structure of the input data and the output data
• Associating input data with output data
Workbench provides many graphical features for easily defining
the structure and characteristics of the data in the input and output
files. These features include applying standard or custom formats,
setting minimum and maximum data length and occurrences, and
using the standard features of the Windows Clipboard (Cut, Copy,
and Paste).
In addition to these features, Workbench provides a powerful tool,
called RuleBuilder®, for defining mapping rules. With RuleBuilder
you can determine how data from the input file and/or the output
file will be referenced, assigned, and/or manipulated. These rules
may be as simple as moving a field from the input to the output, or
as complex as combining data from many sources with conditional
comparisons, cross-references, and mathematics.
In the cases where input and output files have the same or very
similar structures, Workbench provides an even more automated
mapping tool called MapBuilder™ to ease this process.
Workbench, used in conjunction with Trade Guide, provides you
with all the functionality necessary to easily develop, perform test
translations, and debug transaction models. Once the models are
production ready, they are used with Trade Guide to conduct
electronic commerce.


Defining the Structure of the Input and Output Data

Workbench uses a building block approach to defining data structures. These building blocks are referred to as data model items, or simply items. Each type of data model item is represented by an icon in RuleBuilder. There are four basic classes of items:
• Defining
• Tag
• Container
• Group
Defining Items

Defining items are the lowest level descriptors in the data model. Examples of defining items include elements or fields. They define a data string’s characteristics, such as size and type. Some examples of item type characteristics are:
• Alpha characters (letters only [A-Z] [a-z])
• Numeric characters (numbers only [0-9])
• Alphanumeric characters (a combination of numbers and letters [A-Z] [a-z] [0-9] and the “space” character)
• Date
• Time
You can specify that defining items are variable in length by using an item type that includes delimiters to denote the end of one field and the start of the next. Or you can define items fixed in length by specifying the number of characters in the field (in which case, no delimiters are necessary).
Numeric, date and time item types need a format definition to
describe how the field should be parsed or constructed. For
example, a date might be formatted as MMDDYYYY,
MM/DD/YYYY, or YYYY-MM-DD. During the building of the
data structure, Workbench provides an easy method of masking for
the appropriate format.
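To make the masking idea concrete, the three sample date formats above can be expressed as parse patterns. The following Python sketch is purely illustrative and is not part of Workbench; the mask names mirror the examples in the text:

```python
from datetime import datetime

# Illustrative only: each sample date mask from the text maps to a
# strptime format string that parses (or validates) the field.
MASKS = {
    "MMDDYYYY": "%m%d%Y",
    "MM/DD/YYYY": "%m/%d/%Y",
    "YYYY-MM-DD": "%Y-%m-%d",
}

def parse_date(value, mask):
    """Parse a date string according to one of the masks above."""
    return datetime.strptime(value, MASKS[mask]).date()
```

For example, `parse_date("08151999", "MMDDYYYY")` yields the date August 15, 1999, while a value that does not fit the mask raises an error, which is the essence of format checking during parsing.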


Tag Items

Tag items enable you to identify different records or segments. The “tag” is the string of data at the beginning of the record/segment. A record delimiter in the input or output may separate tag items. If multiple types of records exist in a file, there is normally a “tag” referenced to differentiate each type. For example, a heading record may begin with an ‘H’ in the input stream and a detail record may begin with a ‘D.’
Tag items can be fixed length or variable length.

Fixed Length Record

In a fixed length record, you determine the number of characters allowed in each field. If the data is not long enough to fill each field, space characters will be added (either to the beginning or the end of the field, depending on whether you have right- or left-alignment specified in each field).
Record:  D 3 5 0 0 1 A B C 4

         Tag (1 character), then a field of 5 characters, a field of
         3 characters, and a field of 1 character.
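The slicing of a fixed length record like the one above can be sketched in a few lines. This is an illustrative Python fragment, not Workbench functionality; the field names are hypothetical:

```python
# Illustrative layout for the record shown above: a 1-character tag,
# then fields of 5, 3, and 1 characters. No delimiters are needed
# because every width is fixed.
LAYOUT = [("tag", 1), ("field_1", 5), ("field_2", 3), ("field_3", 1)]

def parse_fixed(record):
    """Slice a fixed-length record into named fields by position."""
    fields, pos = {}, 0
    for name, width in LAYOUT:
        fields[name] = record[pos:pos + width]
        pos += width
    return fields
```

Applied to the record above, `parse_fixed("D35001ABC4")` produces the tag `'D'` and the fields `'35001'`, `'ABC'`, and `'4'`.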

Variable Length Record

A variable length tag item uses delimiters to denote the end of one field and the start of the next. You determine the minimum and maximum number of characters to be used in each field. If there is no data available for a particular field, the field’s two delimiters will appear next to each other with no spaces between them. (See Field 4 in the example below.)

Record

B E G * 0 0 * N E * 0 0 1 2 3 * * 0 1 0 1 9 7

Tag Field 1 Field 2 Field 3 Field 4 Field 5


Container Items

Like a tag, a container is used to group two or more defining items. Unlike a tag, a container does not include a tag (or match value) at the beginning, and in its absence a place holder in the data stream is usually required. For example, when an X12 translation session uses a composite element to determine a measurement from data within the input stream (height, width, and length, for MEA04), that composite is defined using a container in Application Integrator.
Depending on your standards implementation, containers are not used as often as the other data modeling items. The container item types found in the access models supplied by Application Integrator are described in the appendixes of each of the standards implementation guides.
Group Items When two or more items have the ability to repeat or “loop,” you
use the group item to define this characteristic. The group item
does not reference any data in the input or output and, therefore,
has no value associated with it. For example, when an invoice
contains a series of individual line items, each line item would be
characterized as a record and the records would be grouped
together by a group item.

The Relationship of Data Model Items Organizing these data model items–defining, tag,
container and group–in a hierarchy further defines the data structure.
The highest level of data is the parent data model item. The next
level of data is the child data model item. Children on the same
hierarchical level are called siblings.
In the example below, “Heading_Record” is the parent item and
“Heading_Document_Number” and “Heading_Document_Date”
are siblings.

Parent Heading Record

Child Heading _Document_Number

Child Heading_Document_Date
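A hypothetical Python sketch of the parent/child/sibling terminology, using the item names from the example. The nested-dictionary representation is an assumption for illustration, not a product structure:

```python
# A minimal tree of data model items; names follow the example above.
model = {
    "name": "Heading_Record",  # parent item
    "children": [
        {"name": "Heading_Document_Number", "children": []},  # child
        {"name": "Heading_Document_Date", "children": []},    # child (sibling)
    ],
}

def siblings_of(tree, name):
    """Return the names of the other children that share a parent with `name`."""
    for child in tree["children"]:
        if child["name"] == name:
            return [c["name"] for c in tree["children"] if c["name"] != name]
    return []
```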


Group items indicate an association among child items (tags and
defining items). Tag items further define a parent-child
relationship. Defining items cannot have any child items.
The hierarchy of the data model also determines the processing
flow. See the “Environments” topic later in this section for more
information on process flow. See Section 4, “Creating
Environments,” for further details.

Source and Target Data Models The four data model items–defining, tag, group, and
container–are used to describe the structure of the input data in the
source data model and the structure of the output data in the target data model.
These two data models also contain actions to be performed on the
data to correctly map it from the source to the target.
In addition to creating a data model from “scratch,” you can open
an existing data model, modify the data model items and rules,
and save the model under a new name to create a new data model.
This is described in Section 2. As you define the data structure,
attributes (such as size, format, and occurrence) of each field or
record/segment are specified. GE Information Services supplies
data model templates for the major public standards, including
ASC X12, UN/EDIFACT, and TRADACOMS.
Both the input (source) and output (target) data require a data
model. Usually, the input data can be parsed using one source data
model and the output can be constructed using one target data
model. However, in some cases, multiple source or target data
models are involved in a single transaction.
For example, the public standard X12 utilizes enveloping segments
(records) to enclose documents which are part of the data structure.
(See the following figure.) The Application Integrator X12
implementation utilizes one data model to parse/construct the
envelope segments, and another data model for the processing of
the documents contained within the envelopes.


(Figure: An X12 translation uses two data models. The X12
enveloping segments flow through Application Integrator Data
Model 1, which parses the envelope segments; the data within the
envelope flows through Application Integrator Data Model 2,
which processes it.)

The Access Model Each data model must be associated with an access model. The
access model contains a definition of the data model item types
available for the defining, tag, and container items that will be
associated within this data model. The access model information
describes the items that will be parsed (input) or constructed
(output) in the data streams. (The group type is always available,
although it is not specifically described in the access model.)
For example, a data model item named “InvoiceDate” could be
assigned an item type of “DateFld” in the source data model. By
examining the access model associated with the data model, we
would find that item type “DateFld” is defined by an Application
Integrator function # DATE where either spaces or all zeros in the
data field are valid.
Application Integrator supplies access models for the standards
implementation. See Section 2, “Creating Source and Target Data
Models,” for a list of these access models and more information on
the item types in these files.


Associating Input Data with Output Data The second part of data modeling uses variables
and rules to map data between the input (source) and output
(target) data models. Workbench provides two graphical tools,
known as RuleBuilder and MapBuilder, for defining rules and
associating data model items with variables.

Variables A variable is a named area in memory where a value is stored and
referenced by a label. By referring to the name, you can access the
value to use in an evaluation, computation, string manipulation, or
assignment, or to pass it as an argument to a function during a
translation session. There are six types of variables:
1. Data model item (Defining Type)
2. MetaLink (M_L–>)
3. Temporary (VAR–>)
4. Array (ARRAY–>)
5. Environment (keyword and user-defined)
6. Database (state: substitutions and domain)

The types of variables differ from one another in:
• scope (when they can be referenced or updated)
• length of existence
• number of values associated
• ability to maintain instances along with the value


Rules A rule in its most basic form is an assignment of a value from an
input field to a variable, then an assignment from that variable to
an output field. From this simple form, you can create more
complex rules to manipulate the data as necessary. See the figure
below for an example of using rules and variables:

Input Field    Variable    Date Calc. (+5 days)    Output Field
03/12/95       03/12/95    03/12/95                03/17/95

In this example, a date function was used to accept the input date
and calculate a new date.

❖ Note: The date calculation function (DATE_CALC) can be used
before or after storing the date in the variable field.
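The date calculation in the figure can be sketched in Python. The MM/DD/YY format and the five-day offset come from the example; the function name date_calc is hypothetical and is not the product's DATE_CALC function:

```python
from datetime import datetime, timedelta

def date_calc(value: str, days: int, fmt: str = "%m/%d/%y") -> str:
    """Parse a date string, shift it by `days`, and format it back."""
    return (datetime.strptime(value, fmt) + timedelta(days=days)).strftime(fmt)

input_field = "03/12/95"
variable = input_field              # value stored in the temporary variable
output_field = date_calc(variable, 5)
```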

You can use rules for:
• Moving data from the source to the target data model
• Writing data to or utilizing data from a database (such as a
partner profile, cross-reference, or code verification from
the Profile Database, Administration Database values, or
user-defined database values)
• Altering the natural processing flow
• Absent handling (defaulting data)
• Error handling
• Compliance checking (X12: exclusion, paired, required,
conditional, and list conditional)
• Changing character sets, triad (thousand position separator
character), release and delimiter characters
• Obtaining/modifying stream position
• Computations (math, data)
• Manipulating a string of data (concatenation, substring,
case conversion, etc.)
• Converting dates
• Logging data
• Placing conditional expressions around selected rules


Environments The final step in mapping the data is to create an environment. An
environment defines all the pieces that need to be brought together
to configure the translator to process in a certain way. An
environment consists of components that control what data will be
translated, such as the input/output files and the source, target,
and access models to be used.
In a Transaction Modeler Workbench application, an environment
is referred to as a “map component file,” since the environment
definition is “attached” to the translator. You can attach another
environment definition (using the keyword ATTACH) to
reconfigure the translator during processing. Environment files are
given the suffix “.att” (for example, OTRecogn.att, OTEnvelp.att)
and are referred to as “map component files.”
Several examples of the functions of a translation environment are:
• Processing fixed length data
• Processing variable length data
• Bypassing data
• Generating acknowledgments
• Recognizing data
• Enveloping data
• Committing output streams
Multiple environments are typically brought together to complete a
translation session. By using multiple environments, you can do
such operations as: making use of generic models (for enveloping,
de-enveloping, bypassing, and acknowledging), dynamically
reconfiguring the process flow, and constructing multiple output
streams from one input stream. During a translation session, the
environment can be changed through the use of different map
component files.



Other Data Modeling Components

During the translation session, other components of Application
Integrator provide information crucial to processing, tracking, or
reporting on activities. Among these components are the Profile
Database and the Administration Database.

Profile Database The Profile Database is a resource of values that can be accessed
during a translation session. The Profile Database stores:
• Communication and trading partner profiles
• Substitutions, used to replace a label with a value
• Cross-references, used to replace a value with another value
• Verifications, used to verify a value against a specified code
list
Refer to the Trade Guide for System Administration User’s Guide for
more information on the Profile Database.

Administration The Administration Database consists of one or more files that are
Database used to capture information from translation sessions. The
Administration Database provides you with information for:
• Process tracking, recording information on all translation
sessions
• Archive tracking, recording information on archived
documents
• Message tracking, recording each outbound document
translation along with a status
• Bypass tracking, recording information on all exception
data (errors)
Refer to Section 6, “Translating and Debugging,” for hints on using
the Administration Database reporting features for debugging.
Refer to the Trade Guide for System Administration User’s Guide for
more information on the set up and full reporting features of the
Administration Database.


Environment Files An environment file (given the extension “.env”) can be used to
enhance the current configuration of the translator. It declares
user-defined environment variables with their associated values,
for example:

ACTIVITY_TRACK_SUM="DM_ActS"
ACTIVITY_TRACK_DET="DM_ActD"
MESSAGE_TRACK_IN="DM_MsgI"
MESSAGE_TRACK_OUT="DM_MsgO"
EXCEPTION_TRACK_SUM="DM_BypS"
EXCEPTION_TRACK_DET="DM_BypD"

An environment file is loaded into the translation session by using
the function ENVIRON_LD( ) in the data model, as in this example
statement:
[ ]
ENVIRON_LD("OTDMDB.env")
where ENVIRON_LD is the Application Integrator function and
"OTDMDB.env" is the environment file that defines additional
environment variables specifying the names of the Administration
Database files used for message tracking and activity tracking. By
placing the names of these files in this definition file, they can be
changed in one place, affecting all references to them.
An environment file may contain any keyword environment
variables except the following environment variables which are
reserved by Application Integrator:
INPUT_FILE
OUTPUT_FILE
S_ACCESS
T_ACCESS
S_MODEL
T_MODEL
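A Python sketch of how such an environment file might be read and the reserved keywords rejected. The parsing details are assumptions for illustration, not the translator's actual loader:

```python
RESERVED = {"INPUT_FILE", "OUTPUT_FILE", "S_ACCESS", "T_ACCESS",
            "S_MODEL", "T_MODEL"}

def load_environ(text: str) -> dict:
    """Parse KEY="value" lines, refusing the keywords reserved by the translator."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        if key in RESERVED:
            raise ValueError(f"{key} may not appear in an environment file")
        env[key] = value.strip().strip('"')
    return env

sample = 'ACTIVITY_TRACK_SUM="DM_ActS"\nMESSAGE_TRACK_IN="DM_MsgI"\n'
env = load_environ(sample)
```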


Translation Session Files and Trace Logs
Translation Session
The Translation Session ID (tsid) file contains the next session
number to be used. Each time a session is run, the tsid file is
updated. The session number is used to create unique
administration records and trace log filenames.

❖ Caution: Do not change the tsid file; doing so may corrupt
administration reporting.

For UNIX users, the format of the translation session control
number is user-definable. See Section 2 of the Trade Guide for
System Administration User’s Guide for details on setting the
OT_SNFMT environment variable to do this.
For Windows 95 and Windows NT 4.0 users, the session number
has the format 6C (six places long, numeric only).
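The tsid behavior can be sketched in Python: hand back the current session number in the six-digit Windows format described above, then advance the counter. The in-memory dictionary stands in for the tsid file and is purely illustrative:

```python
def next_session_number(state: dict) -> str:
    """Return the current session number (six digits, numeric only) and
    advance the stored counter, as the tsid file does on each run."""
    current = state.get("tsid", 1)
    state["tsid"] = current + 1
    return f"{current:06d}"

state = {"tsid": 41}
first = next_session_number(state)
second = next_session_number(state)
```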

Trace Logs
When a translation is run and depending on the data model
functionality and process, the system automatically can create up to
three trace logs:

<queue_ID>.trace#.log  Main trace log providing feedback on the
translation session; the amount of feedback provided is
user-definable. Refer to Section 6 of this manual for details.
For UNIX, trace# is an incremental number that starts at zero
for every Control Server started up.
e<session_no>trace.log  The error log.
s<session_no>trace.log  The session log.


Accessing Workbench
Workbench is accessed differently depending on your operating
system.

To access Workbench
in UNIX
From Do this
Command Line At the UNIX command line, type:
(Only Workbench oTTg w
will be running.)
The main Workbench window should appear.
See the next page for details.

To access Workbench in Windows
Running the Workbench application from a server machine on a
client computer, when there is more than one development
directory, is not supported by GEIS. There could be conflicts if the
same models, map component files, and so on are modified by
more than one person.
Do one of the following:
Do one of the following:
From Do this Icon
Windows Desktop Double-click the Application Integrator
Development program folder icon on your
Application Integrator
desktop. Then double-click the Workbench
Developer
icon.
- or -
From the Start button, choose Programs and
then Application Integrator Development.
From Application Integrator Development,
choose Workbench.

❖ Note: Windows users must start a Control Server before accessing
Workbench in order to debug and run translation sessions. If a
Control Server is not running, you can continue to use Workbench
off-line, but Debug and Run will be disabled.

The Workbench main window appears.


Workbench Main Window
(Figure: the main Workbench window, with callouts for the Control
Menu, Title Bar, Minimize/Maximize Buttons, Menu Bar, Tool Bar,
and Work Area.)

The main Workbench window areas include:

Window Area Description


Title Bar Displays the product name.
Menu Bar Displays the Workbench main menu. Click the
menu name to drop down the menu and choose
commands.
Tool Bar Displays command icons. Click once on any
command icon to initiate the action.
The main icons are:

Open


Window Area Description

Save

MapBuilder
Work Area The Workbench display and work area.
Minimized source data models, target data
models, and map component files will display
in this area. Select the icon and use the right
mouse button to open a menu to work with
these files. The minimized file icons are:

Minimized map component file

Minimized source/target data model

Minimized access, input, output files


Control Menu Box  Provides options for controlling the display of
the entire window.
Minimize Buttons  Use the Minimize button to reduce the
Transaction Modeler Workbench window to the following desktop
icons:

Note: In Windows 95 and NT 4.0, when you minimize a
Workbench window, you cannot restore it from “TMBENCH” on
the taskbar. You can only restore Workbench from the minimized
group box, using the Restore button.

Maximize Buttons  Use the Maximize button to enlarge the
Workbench window to the complete desktop display.
Restore Buttons  Use the Restore button to return the Workbench
window to its last size.


Other Workbench windows and dialog boxes include the Layout
Editor window, the RuleBuilder window, and the Map Component
Editor dialog box, as shown below. Procedures for accessing and
working in these areas are described in later sections.

The Layout Editor window provides the graphical means to build
source and target data models. See Section 2 for more details.

The RuleBuilder window provides a work area, toolbar, and
“notebook” for easily inserting logic into the data models. See
Section 3 for more details.


The Map Component Editor dialog box provides a means to create
a data modeling environment. See Section 4 for more details.



Workbench Menus

The main window of the Transaction Modeler Workbench has
three menus: File, Tools, and Help. These menus lead to all the
secondary windows for this data modeling tool.

Workbench - File The figure below shows the Workbench File menu:
Menu

Component Description
New (Map Components, Data Model)  Allows you to create a new
map component file (environment definition) or data model. The
map components file contains a list of resource names that are
brought together for a specific translation.
Open Allows you to open existing map component files
or data models, to maintain, test, and debug
resources. Ctrl+O is the keyboard shortcut for
performing the same command.
Save All Allows you to save all the open data models and
map component files. Throughout the use of
Workbench, all changes are maintained in
memory until saved. The source and target data
models are saved independently. Alt+A is the
keyboard shortcut for performing the same
command.
Minimize All Minimizes all the open data models and map
component files. Ctrl+M is the keyboard shortcut
for performing the same command.


Component Description
Print Setup Displays the Windows Print Setup dialog box.
This dialog box allows you to change your
default printer settings for the item to be printed.
Exit Exits from Workbench, prompting you to save if
any changes were made. Returns to the Trade
Guide main menu. Ctrl+Q is the keyboard
shortcut for performing the same command.

Workbench - Tools
Menu

Component Description
MapBuilder Opens the MapBuilder window to allow you to map data
from source to target data models. The keyboard
shortcut Ctrl+L performs the same function. See the
“Using MapBuilder” section in Section 2 for more
details.
MapBuilder Opens the MapBuilder Preferences dialog box which
Preferences allows you to review or change the default settings
within MapBuilder.


Workbench - Help This menu provides access to the on-line Help system. See the
Menu Preface for instructions on using the Help system.

Component Description
Error Codes Provides Help on the various Application
Reference Integrator error codes.
Keyword/Functions Provides Help on Application Integrator
Reference environment keywords and data model
functions.
About Workbench Opens a dialog box that lists the current
Workbench release number, release date,
server path, and copyright.



Section 2
Creating Source and Target Data Models

Section 1 described how mapping data using Workbench consists of
two major tasks:
1. Defining the structure and the attributes of the input data in a
source data model and defining the structure and attributes of
the output data in a target data model.
2. Defining which input fields are to be associated with the
specified output fields using rules.
This section leads you through the process of defining source and
target data models. This section begins with background
information on access models to assist in your understanding of
data model items and how they are parsed and constructed by
Application Integrator.


Understanding the Role of the Access Model
Each data model must be associated with an access model. The
access model contains a generic definition of each type of data
model item for which data is parsed or constructed. (The group
item is not defined by the access model.)
Application Integrator supplies access models per standards
implementation.

Access Model Standards


OTFixed.acc Generic fixed length data access model
OTX12S.acc ASC X12 source access model
OTX12T.acc ASC X12 target access model
OTEFTS.acc UN/EDIFACT source access model
OTEFTT.acc UN/EDIFACT target access model
OTANAS.acc TRADACOMS source access model
OTANAT.acc TRADACOMS target access model

❖ Note: Other access models are installed with each standards
implementation. These models are used by the basic data models
and for customized work. Access models must be installed in the
directory in which the Control Server is started.

The access model sets three conditions for each item type: the
pre-condition, the base, and the post-condition. The pre-condition
describes any rules about the data that precedes this item, for
example, a leading delimiter or “tag.” The base describes the value
or character set allowed for this item, for example, a set of
alphabetic characters A–Z and a–z. The post-condition describes
any rules about the data that follows this item, for example, a
trailing delimiter.
If you review any of the access models supplied by Application
Integrator, you will see these item type specifications in the format:

<Item Type>= Pre-Condition Base Post-Condition
Example:
ElementA= (Elem_delim) ^(Alpha) ?


where
ElementA is the name assigned to an item type. This name appears
in the Item Type list box when defining data model items and its
meaning is established by its complete access model definition (pre-
condition, base, and post-condition).
(Elem_delim) refers to a pre-condition delimiter set earlier in the
access model.
^(Alpha) is the base condition. The caret (^) determines two things:
1) the value parsed is to be passed back to the data model and 2)
the item type should appear in the Item Type list box when this
access model is associated with a data model. (Alpha) refers to
another non-display base element in the access model which sets a
range of acceptable alphabetic values. A base value preceded by a
pound sign (#), such as #CHARSET, refers to access model
functions that precisely describe the data with which they are
associated.

❖ Note: The caret (^) must appear in front of the base condition for it
to appear in the Item Type list and be used in the data model.

The question mark (?) in the post-condition is a wildcard indicating
that all characters are valid.
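As an illustration only, the ElementA specification can be re-expressed as a regular expression in Python. The choice of ‘*’ as the element delimiter is an assumption, and this is not how the translator itself evaluates access models:

```python
import re

# Re-expression of:  ElementA= (Elem_delim) ^(Alpha) ?
#   pre-condition: a leading element delimiter ('*' assumed here)
#   base:          a run of alphabetic characters, passed back to the model
#   post:          any character may follow
ELEMENT_A = re.compile(r"\*([A-Za-z]+)")

def parse_element_a(stream: str, pos: int = 0):
    """Return the parsed base value and the new stream position, or None."""
    m = ELEMENT_A.match(stream, pos)
    if m is None:
        return None
    return m.group(1), m.end()

result = parse_element_a("*NE*00123", 0)
```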
Access model statements specify the following information for each
data model item that can parse or construct data:

Tag Type  Sets the size of the tag and the post delimiter.
Defining Type  Sets the pre-condition value (delimiter before data),
the character set, and any special formatting.
Container Type  Sets the base value to CONTAINER. See the
description of each “composite” item in the appropriate standards
manual for examples.

When defining each data model item in your data model, you will
specify an item type. The possible list of item types available will
be based on the access model you associated with the data model.
The complete set of attributes associated with each data model item
(such as, the possible format or maximum occurrence) is also
related to the item type.


Pre-Condition Values The following is a list of some pre-condition values. Certain values
may not apply to your standards implementation.

Pre-Condition  Description
Elem_delim  Defined by the access function #SECOND_DELIM or
the data model function SET_SECOND_DELIM.
Note: Can also be a post-condition.
Comp_delim  Defined by the access function #THIRD_DELIM or
the data model function SET_THIRD_DELIM.
Note: Can also be a post-condition.
Seg_label  Defines the character set as A–Z, 0–9, and the space
character, allowing one through three repetitions, followed by an
element delimiter (Elem_delim).
Rec_code  Defines the character set as space through tilde (~),
allowing one through five repetitions, followed by an element
delimiter (Elem_delim).
Note: Rec_code in OTFixed.acc allows 1–15 repetitions, not
followed by an element delimiter.


Base Values The following is a list of some base values. Certain values may not
apply to your standards implementation.

Base Value  Description
#FIRST_DELIM  Access model function; causes a character to be
read and compared to the value set with the access function
#SET_FIRST_DELIM or the data model function
SET_FIRST_DELIM. This character is removed from the active
character set.
#SECOND_DELIM, #THIRD_DELIM, #FOURTH_DELIM,
#FIFTH_DELIM  Access model functions that work in the same
manner as #FIRST_DELIM, for the second, third, fourth, and fifth
delimiter, respectively.
#CHARSET  Access model function; defines the character set as
space through tilde (~) unless overwritten by the data model
function SET_CHARSET.
#LOOKUP  Access model function; performs automatic value
verification each time the data model item is referenced in the data
model. The character set is defined by the #SET_CHARSET access
function.
#DATE/#DATE_NA  Access model function; verifies a valid
month, day of month, and year. Uses a default format or a format
defined in the data model. #DATE_NA will parse or construct a
date of all zeros or spaces.
#TIME/#TIME_NA  Access model function; verifies a valid hour,
minute, and second. Uses a default format or a format defined in
the data model. #TIME_NA will parse or construct a time of all
zeros or spaces.
#NUMERIC/#NUMERIC_NA  Access model function; defines the
character set as zero through nine, decimal notation character (.),
comma (,), dollar sign ($), negative (–), and plus sign (+). Uses a
default format or a format defined in the data model.
#NUMERIC_NA will parse or construct a numeric value of all
zeros or spaces.
CONTAINER  Defined by the children data model items. Base
component for item type Composite.
TAG  Only the pre- and post-conditions are used when a data
model item has a base value of TAG; it is defined by the children
data model items of the tag item. Base component for item types
Segment, FixedLgthRecord, LineFeedDelimRecord, and
VariableLgthRecord.
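The #DATE/#DATE_NA distinction can be sketched in Python: a plain check rejects an all-zeros or all-spaces field, while the _NA variant accepts it. The YYMMDD format and the function name are assumptions for illustration, not the access model functions themselves:

```python
from datetime import datetime

def check_date(value: str, fmt: str = "%y%m%d", allow_empty: bool = False) -> bool:
    """Validate a date field. With allow_empty (the _NA behavior in the
    manual), an all-zeros or all-spaces field is also accepted."""
    if allow_empty and (value.strip() == "" or set(value) == {"0"}):
        return True
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False
```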


Post-Condition Values The following is a list of some post-condition values. Certain


values may not apply to your standards implementation.

Value  Description
Seg_term  Defined by the access function #FIRST_DELIM or the
data model function SET_FIRST_DELIM.
RecordDelim  Defined by the access function #FIRST_DELIM or
the data model function SET_FIRST_DELIM.
For a list of all data model item types, refer to the appendixes of
the Application Integrator standards implementation manuals, for
example, the ASC X12 Standards Implementation Guide.

Parsing Blank or Empty Data models must be modified to enable them to read through
Records empty records, that is, records that contain only the record
delimiter character (line feed). For version 3.0, the items
LineFeedDelimContainer and AnyCharO have been added to the
OTFixed.acc access model.
Use the OTFixed.acc access model with the following examples.
Use the following input data, where (l/f) represents a single
character, the line feed:
1AB(l/f)
2ABCD(l/f)
(l/f)
4DEFG(l/f)
(l/f)
Example 1: In versions before 3.0, the following data model would
parse and display each record:
Init {
[]
SET_FIRST_DELIM(10)
}*1 .. 1
Group {
Record { LineFeedDelimRecord ""
Field { AnyChar @0 .. 10 none }*0 .. 1
[]
SEND_SMSG(1, STRCAT("READ: ", Field))
}*0 .. 10
}*1 .. 1


Example 2: With version 3.0, the container
LineFeedDelimContainer is used in place of the tag
LineFeedDelimRecord. The change to processing is this: if the tag
does not have a MatchValue defined and no children are parsed,
then the access model post-condition (the line feed character) is not
read. With a container, however, even if no children of the
container are parsed, the post-condition is read:
Init {
[]
SET_FIRST_DELIM(10)
}*1 .. 1
Group {
Record { LineFeedDelimContainer
Field { AnyChar @0 .. 10 none }*0 .. 1
[]
SEND_SMSG(1, STRCAT("READ: ", Field))
}*0 .. 10
}*1 .. 1
When using the access model COUNTER function to automatically
count the number of LineFeedDelimContainer items parsed, use the
defining item AnyCharO (Any Character Optional) in place of
AnyChar. AnyChar returns error code 171 (no children parsed)
back to the container which prevents COUNTER from being
incremented for empty containers. AnyCharO returns an error
code of 0, so the COUNTER for LineFeedDelimContainer is always
incremented.
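The counting difference can be sketched in Python using the sample input above: a container-style pass counts every record, including empty ones, while an AnyChar-style pass skips them. This is a conceptual sketch, not the access model COUNTER mechanism:

```python
def count_records(data: str):
    """Count line-feed delimited records, with and without empty ones.

    Mirrors the manual's point: a container-style parse still consumes the
    line feed of an empty record, so its counter includes empty records.
    """
    records = data.split("\n")
    if records and records[-1] == "":   # drop the artifact of a trailing line feed
        records.pop()
    total = len(records)                        # AnyCharO-style: every record counts
    non_empty = sum(1 for r in records if r)    # AnyChar-style: empties are skipped
    return total, non_empty

data = "1AB\n2ABCD\n\n4DEFG\n\n"
total, non_empty = count_records(data)
```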


Viewing the Access Model You can view the contents of the access model
associated with your data model from the Layout Editor.

To view the access model associated with your data model
1. From the View menu of the Layout Editor window, choose
Access Model.
A text display window appears with the access model
displayed. Use the scroll bars to see the entire contents of the
model.

❖ Note: The following screen illustration shows a Windows
interface. If you are running in UNIX, your view window may
differ slightly.

2. You can search for text within the access model by using the
Find option.
To do this, choose the Find button to open a search-type dialog
box. Type the text in the Find What value entry box and choose
the Find Next button. Choose the Cancel button to exit this
dialog box.


❖ Note: In most cases, the term for which you are searching is
highlighted and the system scrolls to the first reference. To
see every reference to a term, choose Find Next again.

However, if the file you are viewing has long line lengths
requiring horizontal scrolling, the system highlights the term,
but does not automatically scroll to it. Your indicator that the
term has been found is the absence of a return message of “Not
found” or “Search wrapped around in a file.” Usually,
stretching the display window as large as possible reveals the
highlighted terms; in other cases, manually scrolling through
the file isolates the term.

3. You can narrow your search by choosing whether to:
• Search up or down: select the appropriate radio button.
• Match case: select the Match case box.
• Use a regular expression.
4. Choose the Minimize button to minimize the text display to
review again. Choose the Close button to shut the text display
dialog box.

❖ Note: You cannot modify the access model through this
viewing option.



Overview of the Layout Editor Window

You define or modify data models in the Layout Editor window.
From this window you can also open the RuleBuilder window to
add rules to the data model items. The following sections describe
the components of the Layout Editor window and its menus. Refer
to Section 3 for details on using RuleBuilder and MapBuilder.

(Figure: the Layout Editor window, with callouts for the Title Bar,
Menu Bar, Attribute Bar, Tool Bar, Work Area, Field Labels,
Collapse/Expand Button, Access Icons, RuleBuilder Icon,
Highlighted Item, Scroll Bars, and Scroll Bar Arrows.)

This window appears when you:
• Open a new model
• Open an existing model
• Restore an existing model minimized in the main
Workbench window


Workbench Layout Editor Window

The Layout Editor window includes the following components:
Access Icons: These icons represent the item type of the data
model item: group, tag, container, and defining item.

Attribute Bar: Contains column buttons for the model. You can
drag and drop these buttons to rearrange the columns.

Collapse/Expand Button: As a minus sign (–), it collapses the
model to show only the highest hierarchical level of the data
model. As a plus sign (+), it expands the model to show all data
model items.

Field Labels: Field labels for the information entered for each
data model item.

Highlighted Item: Indicates the active data model item.

Horizontal and Vertical Scroll Bars: Move the content of the work
area left/right and up/down. Drag the scroll bar to the desired
position, click the unshaded area to jump the scroll bar page by
page, or click a scroll bar arrow to move the window one line at
a time.

Menu Bar: Contains menus that provide access to various
operations within the modeler.

RuleBuilder Icon: When it appears colored, this icon shows that
rules have been assigned to the data model item. To open
RuleBuilder, click the icon.


Title Bar: Shows the current data model you are working on,
including the filename indicator and whether the model is a
source or a target data model. If you have opened a number of
windows, clicking the title bar of a window brings that window
to the front, making it the active window.

Tool Bar: Provides icons for quickly performing commands, such
as saving the model, appending or inserting data model items, or
performing Clipboard operations.

Work Area: The portion of the window where you build your
models.


Collapsing/Expanding the Data Model Items

Two options allow you to control the amount of information you
see in the Layout Editor window. One option allows you to
collapse the entire data model, or a family (a parent with its
children) within the data model, to see only the highest-level
entries. The reverse option allows you to expand the entire model
or a family of data model items.

To view the entire data model
• From the Layout Editor Data Model menu, choose Expand
  All Levels.
• Click the Expand All Levels icon from the topmost data
  model item.

To collapse the data model so that only the highest level is displayed
• From the Layout Editor Data Model menu, choose Collapse
  All Levels.
• Click the Collapse All Levels icon.

To display the children of a data model item
• Select the Expand icon to the left of the data model item.

To collapse the children of a data model item
• Select the Collapse icon to the left of the data model item.


Layout Editor Menus

After creating a new data model or map component file, or after
opening an existing data model or map component file, the Layout
Editor window appears for the source and/or target data models
with the following menus:

Layout Editor — File Menu

The figure below shows the Layout Editor File menu:

Open: Allows you to open existing map component files or data
models. The keyboard shortcut Ctrl+O performs the same
operation.

Include: Allows you to specify Include files, which contain sets of
rules to be executed one or more times using the data model
function PERFORM.

Save: Writes all changes to translation resources to disk, for a
permanent copy. Throughout the use of Workbench, all changes
are maintained in memory until saved. The source and target data
models are saved independently. The keyboard shortcut Ctrl+S
performs the same operation.

Save As: Allows you to save a modified data model under
another filename. The keyboard shortcut Ctrl+A performs the
same operation.


Minimize Editor: Minimizes the current Layout Editor window,
allowing you to continue data modeling. The current source or
target data model appears as an icon in the Transaction Modeler
Workbench work area. The keyboard shortcut Ctrl+M performs
the same operation.

Close Editor: Quits the Layout Editor window, prompting you to
save if any changes were made to the model, and then returns to
the Workbench main menu. The keyboard shortcut Ctrl+Q
performs the same operation.


Minimizing the Layout Editor

You may want to minimize a particular editing session for a source
or target data model if you have several data models open at once.
This avoids confusion among them.

Ø To minimize the Layout Editor for the current data model


1. Click the title bar of the window to activate the Layout Editor
window to be minimized.
2. From the File menu, choose Minimize Editor. This will reduce
the data model to an icon within the Workbench window.

Restoring the Layout Editor

Ø To restore the Layout Editor for the current data model

1. From the Workbench work area, click the data model icon to
select it.
   To select and open a range of models, hold the Shift key down
   while clicking the other data models to select them.
2. Once you have selected the model(s) to open, click the right
mouse button to display a menu.
3. Choose Restore from this menu.


Layout Editor — Edit Menu

The figure below shows the Layout Editor Edit menu:

Undo: Allows you to reverse or cancel previously performed
actions. The drop-down menu lists the action to be undone. The
keyboard shortcut Alt+Backspace performs the same action.

Redo: Allows you to redo previously undone actions. The
drop-down menu lists the action to be redone. The keyboard
shortcut Ctrl+Y performs the same action.

Cut: Allows you to remove the current data model item. The item
is removed from the model and stored on the Clipboard. You may
then paste it in another position. The keyboard shortcut Ctrl+X
performs the same operation.

Copy: Allows you to make a copy of the current data model item.
The item is copied and stored on the Clipboard until you paste it.
The keyboard shortcut Ctrl+C performs the same operation.


Paste: Allows you to paste a copy of the stored item. Until you
perform another cut or copy operation, this item remains stored,
allowing you to paste several copies of it. The keyboard shortcut
Ctrl+V performs the same operation.

Duplicate: Performs the same functions as the copy and paste
operations, but requires only one step rather than two. The
keyboard shortcut Ctrl+D performs the same operation.

Find: Provides the ability to locate a data model item by its label.
The keyboard shortcut Ctrl+F performs the same action.

Undo and Redo Data Model Actions

The Undo and Redo functions can be performed on data model
actions. The Undo function allows you to reverse or cancel actions
you've performed; Redo allows you to repeat actions that you've
canceled. The Undo and Redo functions are found on the Edit
drop-down menu and on the toolbar.

Undo and Redo can be performed when the drop-down menu
selections or toolbar icons are active (sensitive). Actions are either
undone or redone, depending on the function selected. The types
of actions that can be undone or redone include:
• cut
• copy
• paste
• duplicate
• change an item's label
• change an item's attributes: Item Type, Occ Min/Max, Size
  Min/Max, Format, Match Value, Version, File, Sort,
  Increment
• insert or append an item
• change an item's level left or right

Multiple actions are tracked, allowing several actions to be
undone and then redone as necessary. However, this list of tracked
actions is cleared when any of the following occurs:


• when the Layout Editor is first opened for the data model
• when MapBuilder is used
• when the Layout is saved
• when the Rules Editor is opened
• when rules are applied

A separate list of actions is tracked for each Layout and Rules
Editor combination. When the Rules Editor is opened for a Layout
Editor, both editors place their actions on the same list, which can
be undone or redone as necessary.

Ø To undo a data model action

Use one of the following options to undo:
• Menu – From the Layout Editor Edit menu, choose Undo.
• Toolbar icon – Click the Undo icon.
• Keyboard shortcut – Press Alt+Backspace.

The last action on the actions list is undone. Undo now points to
the previous action in the list, and Redo points to the action just
undone.

Ø To redo a data model action

Use one of the following options to redo:
• Menu – From the Layout Editor Edit menu, choose Redo.
• Toolbar icon – Click the Redo icon.
• Keyboard shortcut – Press Ctrl+Y.

The last undone action is redone, and Redo now points to the next
action to be redone.


Cutting, Copying, and Pasting Data Model Items

Cut, Copy, and Paste Clipboard functions can be performed on
individual data model items as well as on an entire data model
item hierarchy. Cut or Copy puts the selected information on the
Clipboard; Paste takes the information from the Clipboard to the
location specified in your data model.

Ø To cut a data model item

1. Select the data model item.
   To cut a range of items, select the first item of the range and,
   while holding down the Shift key, select the last item of the
   range. You cannot cut all of the items from the model.
2. Use one of the following options to cut the data model item:
   • Menu – From the Layout Editor Edit menu, choose Cut.
   • Toolbar icon – Click the Cut icon.
   • Keyboard shortcut – Press Ctrl+X.

The data model item is placed on the Clipboard until you paste it.
If the selected data model item contains child data model items,
they are also cut.

Ø To copy a data model item

1. Select the data model item.
   To copy a range of items, select the first item of the range and,
   while holding down the Shift key, select the last item of the
   range.
2. Use one of the following options to copy the data model item:
   • Menu – From the Layout Editor Edit menu, choose Copy.
   • Toolbar icon – Click the Copy icon.
   • Keyboard shortcut – Press Ctrl+C.

The data model item is copied onto the Clipboard until you paste
it. If the selected data model item contains child data model items,
they are also copied.

Ø To paste a data model item

1. Select the location after which you want the data model item to
be placed.
2. Use one of the following options to paste the data model item:


   • Menu – From the Layout Editor Edit menu, choose Paste.
   • Toolbar icon – Click the Paste icon.
   • Keyboard shortcut – Press Ctrl+V.

Until you use the Cut or Copy command again, this data model
item remains on the Clipboard, allowing you to paste several
copies of the current data model item. The data model item name
is appended with "_001", which is incremented by one with each
additional copy.
If the selected item contains child items, they are also pasted.

❖ Note: Only data model items can be pasted. These Clipboard
features are not implemented for individual boxes of the Layout
Editor window; for example, you cannot copy one item's
format and paste it into the next item's Format box.


Duplicating Data Model Items

Duplicating performs the same operation as using Copy, then
Paste, but involves only one step. Duplicate does not copy and
paste the children of the selected item.

❖ Note: Only one data model item can be duplicated at a time.

Ø To duplicate a data model item

1. Select the data model item.
2. Use one of the following options to duplicate the data model
item:
   • Menu – From the Layout Editor Edit menu, choose
     Duplicate.
   • Keyboard shortcut – Press Ctrl+D.

The new data model item is appended below the first with the
same name plus the suffix "_001." This suffix is incremented by
one with each additional copy.
If you duplicate a data model item and then attempt to change its
hierarchical level (move it to the right), a warning dialog appears.
This dialog also clears the undo list, so previously performed
actions cannot be reversed.
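The "_001" naming convention shared by Paste and Duplicate can be sketched as follows. The helper function is hypothetical; it only illustrates the suffix behavior described above, and the item names in the assertions are invented:

```python
import re

def next_copy_name(name):
    """Return the name a pasted or duplicated copy would get:
    append "_001", or bump an existing three-digit suffix by one
    (illustrative sketch, not Application Integrator code)."""
    m = re.fullmatch(r"(.*)_(\d{3})", name)
    if m:
        return f"{m.group(1)}_{int(m.group(2)) + 1:03d}"
    return name + "_001"

assert next_copy_name("InvoiceLine") == "InvoiceLine_001"
assert next_copy_name("InvoiceLine_001") == "InvoiceLine_002"
```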


Finding Data Model Items

The Find option displays the Find dialog box, which provides the
ability to locate a data model item by its label.

Ø To display the Find dialog box

Use one of the following options to display the Find dialog box:
• From the Edit menu, choose Find.
• Keyboard shortcut – Press Ctrl+F.

Fill in the appropriate value entry boxes using the following table
as a guide.

Find What: Any part of the label can be entered. You can narrow
the search by selecting from the three toggle buttons.

Find Next: Locates the first occurrence of the item being searched
for. Additional clicks locate additional occurrences.

Cancel: Stops the search and exits the Find dialog box.

Toggle Buttons: You can narrow your search by determining
whether to:
• Match whole word only: This option looks for the entire
  character string entered in the Find What value entry box,
  not parts of words.
• Match case: This option looks for text with the same
  capitalization as the text entered in the Find What value
  entry box.
• Regular expression: This option treats the entry as a regular
  expression, a shorthand way of specifying text patterns
  within the Workbench model, including literal characters,
  wild card characters, and repeated regular expressions.
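As an illustration of the Regular expression toggle, the sketch below shows how one pattern can match several item labels at once. The labels are hypothetical, and Workbench's regular-expression dialect may differ in detail from the Perl-style syntax Python uses here:

```python
import re

# Hypothetical data model item labels, used only for illustration.
labels = ["InvoiceHeader", "InvoiceLine_001", "InvoiceLine_002", "Total"]

# "InvoiceLine_" followed by exactly three digits.
pattern = re.compile(r"InvoiceLine_\d{3}")

matches = [label for label in labels if pattern.fullmatch(label)]
assert matches == ["InvoiceLine_001", "InvoiceLine_002"]
```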


Ø To find data model items


1. Type the text you want to find in the Find What value entry
box.
2. Choose the Find Next button. Find will locate the first
occurrence of the text.
3. Choose the Find Next button again to locate additional
occurrences of the text. As necessary, the levels within the
structure are expanded to reveal the label being searched for.

❖ Note: If the file you are viewing has long line lengths requiring
horizontal scrolling, the system highlights the term, but does
not automatically scroll to it. Your indicator that the term has
been found is the lack of a return message of “Not Found” or
“Search wrapped around file.” Usually, stretching the display
window as large as possible reveals the highlighted terms; in
other cases, manually scrolling through the file will isolate the
term.


Layout Editor — Data Model Menu

The figure below shows the Layout Editor Data Model menu.

Add Item — Insert or Append: Leads to a submenu that allows you
to insert (add above) or append (add below) relative to the current
data model item. The keyboard shortcut Ctrl+Up Arrow performs
the Insert; Ctrl+Down Arrow performs the Append.

Add/Edit Rules: Opens the RuleBuilder dialog box, allowing you
to add or modify rules for the data model.

Change Level Left: Allows you to move the currently highlighted
data model item one level left to restructure the model's data
hierarchy. The keyboard shortcut Ctrl+Left Arrow performs the
same operation.

Change Level Right: Allows you to move the currently
highlighted data model item one level right to restructure the
model's data hierarchy. The keyboard shortcut Ctrl+Right Arrow
performs the same operation.

Expand All Levels: Allows you to see all levels of the data model.

Collapse All Levels: Allows you to see only the highest level of the
data model.


Layout Editor — View Menu

The figure below shows the Layout Editor View menu.

Access Icons: Allows you to view the icons representing the item
type and the rules for each data model item. Selecting this entry
again hides the access icons.
Note: This option is for the current session only.

Grid Lines: Allows you to view the grid lines between data model
items in the current Layout Editor window. Selecting this entry
again hides the grid lines.
Note: This option is for the current session only.

Access Model: Allows you to view the access model associated
with the current source and target data model. You can view the
access model in either text or hexadecimal mode. You can also
search for a character string or regular expression by selecting the
Find button on this dialog box.


Input File: Allows you to open the input file associated with the
current source and target data model. When you select this menu
entry, a dialog box appears displaying the input file. You can also
search for a character string or regular expression by selecting the
Find button on this dialog box.

Output File: Allows you to view the output file associated with
the current source and target data model after a translation has
taken place. You can view the output file in either text or
hexadecimal mode. You can also search for a character string or
regular expression by selecting the Find button on this dialog box.


Toggling Access Icons Off/On

Each data model item has an icon placed to the left of the
RuleBuilder icon that represents the item type classification
(group, tag, container, or defining). These icons are referred to as
access icons and are placed in the Layout Editor window by default.

❖ Note: Access icon settings are session-specific; they are not kept
across multiple sessions.

Ø To display the access icons

1. From the Layout Editor menu bar, choose View.
2. Select the Access Icons box.

The access icons are located to the left of the data model items.

Ø To remove the access icons

1. From the Layout Editor window menu bar, choose View.
2. Clear the Access Icons box to deselect it.

The access icons placed to the left of the data model items are
removed. The RuleBuilder icon remains to the left of the data
model item name.


Toggling Grid Lines Off/On

By default, the system separates individual data model items in
the Layout Editor window with grid lines. These grid lines can be
turned off and on at will.

❖ Note: Grid line settings are session-specific; they are not
maintained across multiple sessions.

Ø To display the grid lines

1. From the Layout Editor window menu bar, choose View.
2. Select the Grid Lines box.

Grid lines are placed between each row of data model items.

Ø To turn the grid lines off


1. From the Layout Editor menu, choose View.
2. Clear the Grid Lines box to deselect it. The grid lines between
the data model item attributes are removed.


Layout Editor — Debug Menu

The figure below shows the Layout Editor Debug menu.

Run: Allows you to start a translation. The Run list box displays
translation configurations, from which a translation can be
invoked. The work area window provides information about the
translation's status, including start/end time, session number, and
results. The keyboard shortcut Ctrl+U performs the same
operation. Information about using the Run dialog box can be
found in Section 6.

Source to Target Map Listing: Allows you to generate a report
detailing the mappings between source and target data models.

Data Model Listing: Allows you to generate a report of any data
models in the current directory.

❖ Note: The Layout Editor Help menu provides the same menu
options as the Workbench main menu. Refer to the section,
“Workbench — Help Menu,” in Section 1 for details on this menu
and using on-line Help.


Overview of Data Model Item Attributes

Depending on the item type, you will specify for each data model
item some subset of the following attributes:

Occ Min/Max (Occurrence): The minimum and maximum
occurrence value controls the number of times a data model item
is required (min) and can be present (max) in the data stream.

Size Min/Max: The minimum and maximum size value controls
the data model item's field size in the data stream. Minimum
equals maximum for fixed-length data.

Format: Sets the input or output format for data model items
defined as a date or time, or defines the format for a numeric
value.

Match Value: For tag items, this optional feature compares a
value to the data in the input stream or defines a value for the
output stream. During processing of the source data model, the
value in the Match Value box is compared to the characters at the
beginning of the record in the input stream. During processing of
the target data model, the value in the Match Value box is
constructed in the output stream at the beginning of the record.

Verify (Verification List): For defining items, you can enter the
name of the list from the Profile Database against which the data
for this item will be verified.

File: The File option is only available for group items (in both
source and target data models). It allows parsing from, or
constructing to, the filename specified by this option.


Sort: The Sort option is only available for group items in a target
data model. It allows you to set a sort sequence for the defining
items in a group, for example, to sort all detail records by invoice
number or date.

Increment (Increment Counter): The Increment/Non-Increment
option is only available on group items and only pertains to
MetaLink variables. Set to Increment, this option increments the
instance on all MetaLink variables used with the group item and
its child items.

Additional information about entering values for the data model
attributes can be found in "Assigning Attributes to Data Model
Items," later in this section. You will also apply rules to further
define the data map. Information about applying rules is found in
Section 3, Building Rules into Data Models.
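The Match Value behavior described in the attribute table can be sketched as follows. The function names and sample records are hypothetical, not Application Integrator code:

```python
def tag_matches(record, match_value):
    """Source side: a tag item matches when the input record
    begins with the item's Match Value (illustrative sketch)."""
    return record.startswith(match_value)

def build_record(match_value, payload):
    """Target side: the Match Value is constructed at the
    beginning of the output record (illustrative sketch)."""
    return match_value + payload

# Hypothetical records: "INV"-tagged input is recognized,
# and an output record is built with its Match Value prefix.
assert tag_matches("INV00123", "INV")
assert not tag_matches("ORD00123", "INV")
assert build_record("INV", "00123") == "INV00123"
```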


Creating A Data Model

This section describes how to create a new data model using these
basic processes:

• Defining a new data model
• Opening an existing data model
• Defining a data model item, assigning attributes to data
  model items, and assigning an item type
• Establishing data hierarchy
• Assigning rules to the data model item
• Adding data model items until the data model is complete
  for either the source or target side
• Saving the data model

Both a source and a target data model are necessary for a
translation; one or both may be standard models supplied by
Application Integrator. You can also define a data model based on
a standard model (copied and then modified).

❖ Note: The same procedures apply to source and target data
models.


Defining a New Data Model

Use this procedure to define a new data model.

Ø To define a new data model

1. From the Transaction Modeler Workbench File menu, choose
New.
2. From the New menu, choose Data Model. The New Model
Definition dialog box appears.
3. Set this new model to either source or target by clicking the
arrow and selecting Source or Target. This selection must
coincide with the selection in the Access Model box.
   The access model OTFixed.acc can be used for either source or
   target data models. Most other access models follow the
   convention XXXXXXY.acc, where Y is S for source or T for
   target.
4. Select an access model by clicking the arrow and choosing the
appropriate access model for the source or target data model.
The access model determines the set of item types available for
this model.

❖ Hint: If you do not see a list of access models, check the
directory structure of the system to make sure that all files
with the extension ".acc" have been placed in the same
directory in which the Control Server was started.
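The XXXXXXY.acc naming convention described in step 3 can be sketched as a simple classification check. The filenames in the assertions are hypothetical examples of the pattern, not actual access models shipped with the product:

```python
def access_model_role(filename):
    """Classify an access model file by the XXXXXXY.acc convention:
    Y == 'S' means source, Y == 'T' means target, and OTFixed.acc
    can be used for either (illustrative sketch only)."""
    stem = filename[:-4] if filename.lower().endswith(".acc") else filename
    if stem == "OTFixed":
        return "source or target"
    suffix = stem[-1].upper()
    return {"S": "source", "T": "target"}.get(suffix, "unknown")

# Hypothetical filenames following the convention:
assert access_model_role("X12V40S.acc") == "source"
assert access_model_role("X12V40T.acc") == "target"
assert access_model_role("OTFixed.acc") == "source or target"
```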


5. Choose the OK button to open the new model.
   The Layout Editor window opens and a default parent data
   model item called NewItem displays. This is the topmost data
   model item in your model; you cannot make it a sibling or
   child data model item. Follow the directions for changing a
   data model item name to rename this topmost data model item
   in the data structure.
6. From the File menu, choose Save As. Since this is a new model,
a dialog box appears for you to specify a directory path and
type a new name. Complete this dialog box.

You are now ready to define the structure of the model.

❖ Hint: It is good modeling practice to save your model
frequently. You must always have at least one data model item
in the model to save it.


Opening an Existing or Standard Data Model

To return to a model previously created for further data modeling,
or to open a standard model shipped with an Application
Integrator standards implementation, follow these directions.

❖ Note: Standard models and ID code files are found in the working
directory.

Ø To open an existing data model or standard model

1. From the File menu, choose Open.
   The current path is displayed in the top center button (UNIX) or
   in the Look in box (Windows 95 and NT 4.0).

[Figures: the UNIX Open dialog box and the Windows 95 Open
dialog box.]


2. To view a different directory and/or list of data model files,
select the data model from the current directory or use the
sorting and filter options (UNIX);
   - or -
   Select the file type option (given various names in Windows
   versions) on the Open dialog box menus.

❖ Hint to UNIX users: If the data models are stored in a
different directory, use the side arrow buttons to move up (<)
or down (>) in the directory hierarchy. To see the data model
files only, choose the Filters menu and select Data Models
(.mdl). Refer to the section on Open/Save As menus in
Section 1 for details.

The Open dialog box refreshes to display all available data
model files (source and target).

3. Select the data model to be opened.
4. To open the selected data model and close the dialog box,
choose the OK button (UNIX);
   - or -
   Choose the Open button (Windows 95 or NT 4.0).

❖ Note: If you open a second copy of the same data model or a
standard (.std) model, the second copy opens in read-only
mode. You can, however, save the second copy under a new
name using the File – Save As command and then modify it.

Ø To copy and use a standard model


1. Open a standard model, as described in the previous
procedure.
2. From the File menu, choose Save As. The Save As dialog box
appears.
3. Type the new name for the data model you want to create. Be
sure to give the file the extension ".mdl"; a file with the
extension ".std" will be saved in read-only mode. Choose OK.


4. From the Layout Editor of the new data model, make all the
changes you need, for example, copying or duplicating data
model items.
5. From the File menu, choose Save to keep your changes to the
data model.

The Application Integrator standard models for ASC X12,
UN/EDIFACT, and TRADACOMS are distributed separately
from the Application Integrator programs. They are loaded into
the development working directory and have a file extension of
".std". The ID code lookups and descriptions (of the ID codes) for
each of the standard versions are included with the .std files.

For file naming conventions and specific information about the
standard models, refer to the appropriate standards
implementation guide, such as the ASC X12 Standards
Implementation Guide.


Defining a Data Model Item

Adding Data Model Items

You have several options for adding new data model items to a
data model. The following section describes each method.

❖ Note: The default hierarchy level for a new item is the same as the
currently selected data model item.

Ø To add a new data model item

1. In the Layout Editor window, select the data model item above
or below which you would like to add a new data model item.
If this is your first data model item, select the default data
model item.
2. To append (add below) a data model item, do one of the
following:
   • Menu – From the Layout Editor Data Model menu, choose
     Add Item; then, from the Add Item menu, choose Append.
   • Toolbar icon – Click the toolbar Append icon.
   • Keyboard shortcut – Press Ctrl+Down Arrow.
3. To insert (add above) a data model item, do one of the
following:
   • Menu – From the Layout Editor Data Model menu, choose
     Add Item; then, from the Add Item menu, choose Insert.
   • Toolbar icon – Click the toolbar Insert icon, shown at the
     left.
   • Keyboard shortcut – Press Ctrl+Up Arrow.
4. To further define the data model item, click the row containing
the data model item and specify the additional attributes
required. Refer to the following sections for more information.


Changing the Name of a Data Model Item

Each data model item you add has the default name NewItem.

Ø To change the name of a data model item


1. Highlight the name and type over the default or existing name
with a new name.
2. To accept the new name, click outside the data model item
name box;
- or -
Press Tab to go to the next option.


Assigning an Item Type

For each data model item you add to your model, you must assign
an item type. The options you see in the Item Type selection list
are based on the access model associated with your model.

❖ Hint: If you are unsure of the exact definitions of the item types,
you can view the access model associated with your model. To do
this, from the Layout Editor View menu, choose Access Model.

Data Model Item Structures

There are four major data model item structures: group, tag,
defining, and container items. One or more item type names may
be associated with each of these structures, based on your access
model. All data model items default to the item type Group. Once
you define the item type, the leftmost icon in the Layout Editor
(referred to as the access icon) changes to reflect the major
structural type of the item:

• Group
• Tag
• Defining
• Container


Ø To assign an item type

1. Select the Item Type option for the data model item you want to
modify.

❖ Note: Refer to the appendix in each standards implementation
manual, such as the ASC X12 Standards Implementation
Guide, for a list of item types that apply to the standard.

2. Click the Item Type arrow. A selection list appears.
3. Select the item type from the list.
4. To accept the selection, click outside the Item Type box or press
Tab to move to the next box. The icon to the left of the
RuleBuilder icon changes to reflect the item type.


Assigning Attributes to Data Model Items

Occ Min/Max
Setting Minimum and Maximum Occurrence
The minimum and maximum occurrence value controls the number
of times a data model item must and can be present in the data
stream. The minimum and maximum occurrence value of a new
data model item is user-defined. The default is 1. The minimum
occurrence value must be less than or equal to the maximum
occurrence value. A minimum occurrence of 0 indicates that the
data model item is optional. The maximum value can be set to the
asterisk (*) wildcard to allow an unlimited number of occurrences.
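The occurrence check amounts to a simple range test. The sketch below is illustrative only; the function name and the use of a string "*" for the unbounded maximum are assumptions made for the example, not Application Integrator code.

```python
def occurrence_ok(count, occ_min, occ_max):
    # A minimum of 0 makes the item optional; "*" as the
    # maximum means there is no upper limit.
    if count < occ_min:
        return False
    return occ_max == "*" or count <= occ_max

print(occurrence_ok(0, 0, 1))    # True: optional item, absent
print(occurrence_ok(5, 1, "*"))  # True: unbounded maximum
print(occurrence_ok(2, 1, 1))    # False: occurs too often
```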

Ø To modify the minimum and/or maximum occurrence values


1. Select the data model item and click in the Occ Min/Max box.
The Minimum and Maximum boxes display.

2. For each box, type a numeric value that specifies the minimum
and maximum occurrence.
3. To accept the values entered, click outside the box or press Tab
to move to the next option.


Size Min/Max
Setting Minimum and Maximum Size
The minimum and maximum size value controls the data model
item’s field size in the data stream. The minimum and maximum
size value of a new data model item is user-defined. The minimum
size value must be less than or equal to the maximum size value.
The size maximum value cannot exceed 4092.

❖ Note: Size is not available for numeric Date or Time fields. You
must specify the exact size through correct masking in the Format
box. See the next section on formatting for instructions.

Ø To modify the minimum and/or maximum values


1. Select the data model item to be modified and click in the
Size Min/Max box.
The Minimum and Maximum boxes display.

2. For each box, type a numeric value that specifies the minimum
and maximum size allowable for data mapped to this item.

❖ Note: The minimum must be greater than zero.

3. To accept the values entered, click outside the box or press Tab
to move to the next option.


Format
Defining the Data Model Item’s Format
The data model item format box is only available if the data model
item is defined as a date, time, or numeric item type.

❖ Note: See the on-line Help for examples of possible numeric, date,
and time formats.

Ø To add a format
1. Select the data model item to be modified and select the Format
box.
2. Type the format for the date, time, or numeric field using the
numeric and sign masking characters described in this section.
For example, you might type “MM/DD/YYYY” for a date item.
For a numeric field, be sure to consider the decimal placement,
positive or negative sign, and alignment desired.
3. To accept the new format, click outside the box or press Tab to
move to the next option.

Numeric Formatting and Masking Characters


The tables on the following pages list the numeric formatting
characters for floating point, whole numbers, numeric signs
(positive or negative), decimal characters, and alignment. Consider
the following limitations to numeric handling before setting your
numeric formats.
A. Numeric Handling Description
Application Integrator supports unlimited numeric lengths in the
parsing and constructing of data within your input or output
streams. During these processes, numeric values are handled as
strings, conforming to the format you set up in your data models.
There is, however, a limit to the number of digits that Application
Integrator supports during computation processing. In these cases,
Application Integrator converts the string into a numeric. The
following is a brief description of this limitation:


• The Application Integrator limit per number is 15 digits, not
including the decimal character, sign character, or triads
(thousands separator characters). This limit applies to each
element in the equation and to the result.
For example, if you have a number greater than 15 digits,
such as 1234567890123456, the system returns
1234567890123458, where the 16th and greater positions are
populated with random numbers from the memory stack.
• If a number has more than 6 digits after the decimal point, it
will round to the 6th decimal place. For example,
0.123456789 returns in memory 0.123457
0.123454321 returns in memory 0.123454
• Decimal values must be preceded by a whole number value
or zero (0); otherwise, a syntax error occurs on parsing.
For example,
VAR→ VALUE=.12345 + .12345 returns an error message
VAR→ VALUE=0.12345 + 0.12345 returns 0.24690

❖ Hint: Should your application require the computation of
extremely large numeric values or the carrying of lengthy
decimal values, the Application Integrator User Exit
Extension product provides support for user-defined
functions to handle these numerics and/or equations.
These functions can be invoked like standard Application
Integrator functions during data modeling.
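The 15-digit and 6-decimal limits above are characteristic of IEEE-754 double-precision arithmetic, which carries roughly 15 significant decimal digits. The manual does not document the internal representation, so treating it as a double is an assumption; this Python snippet only illustrates the same class of behavior.

```python
# A 17-digit integer cannot be held exactly in a double-precision
# value; the trailing digits come back altered.
value = float("12345678901234567")
print(int(value))             # 12345678901234568, not ...567

# Rounding to 6 decimal places mirrors the documented behavior.
print(round(0.123456789, 6))  # 0.123457
print(round(0.123454321, 6))  # 0.123454
```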


B. Floating Explicit Decimal
In floating explicit decimal, the format does not define the position
of the decimal. The data stream must contain a decimal in order to
output a decimal. Use the following masking characters to format
floating explicit decimals. These examples depict values being
target formatted.
Mask Usage
N
Non-space-taking sign. Includes a negative sign for a
negative value. No character is used to indicate a
positive value.
Examples:
-0.12 → “NRRRRR” → “-.12”
+0.12 → “NRRRRR” → “.12”
R
Floating number with an explicit decimal when required
Example:
0.12 → “RRRRR” → “.12”
r
Used with “R” to indicate decimal precision
Example:
0.12 → “RRRrrrr” → “.1200”
0
Used with “R” to specify that a whole zero digit is required
for a decimal value
Example:
0.12 → “0RRRRR” → “0.12”
:n
Minimum size, where “n” is from 1 to 9
Example:
0.12 → “0RRRRR:5” → “000.12”
:,
Decimal notation defined in the format
Example:
0.12 → “RRRRR:,” → “,12”
:rn
Maximum decimal size, where “n” is from 1 to 5
Example:
0.12 → “RRRRR:r3” → “.120”


C. Notes on the Floating Explicit Decimal Masking Characters (Rr0)
The following notes pertain to the floating explicit decimal masking
characters R, r, and 0:
• They apply to the # NUMERIC and # NUMERIC_NA access
model functions.
• A “0” used in the format must precede the “R”s, for example,
“0RRRR.”
• The “0” is not counted in the length.
• All “r”s must be to the right of all “R”s.
• It is invalid to use the following in the format: period (.), comma
(,), and caret (^).
• The automatic insertion of the decimal notation character is
not counted in the length.
• The decimal notation character is only output when needed.
• When the decimal notation is not defined within a format
(“RRRRR:,”), the decimal will default from
SET_DECIMAL( ) if set, or else will default to the “.”
character.
• To define both a minimum size and decimal notation, be sure
to introduce each with its own colon, for example:
“RRRRR:2:,”


D. Other Than Floating Explicit Decimal
In non-floating decimal, the format defines the implied or explicit
position of the decimal. Per the format, the value will always
contain the format-defined number of decimal places. An explicit
decimal defined in the format requires the decimal to be
parsed/constructed in the data stream. An implied decimal
defined in the format requires that the decimal not be
parsed/constructed in the data stream. Use the following masking
characters for numerics other than floating explicit decimals. The
examples below depict values being target formatted.

Mask Usage
9
Zero-fill whole leading or decimal trailing zero digits
Examples:
123 → “99999” → “00123”
1.1 → “99.99” → “01.10”
Z
Space-fill whole leading or decimal trailing zero digits
Examples:
123 → “ZZZZZ” → “  123”
1.1 → “ZZ.ZZ” → “ 1.1 ”
F
Suppress whole leading or decimal trailing zero digits
(variable length)
Examples:
000123 → “FFFFF” → “123”
1.100 → “FF.FF” → “1.1”
$
Monetary symbol, treated like the “F” mask character,
but inserts the dollar sign at the beginning of the string
(variable length)
Examples:
134567 → “$ZZZ,Z99.99” → “$ 134,567.00”
134567 → “$FFF,F99.99” → “$134,567.00”
1.25 → “$$$,$$$.99” → “$1.25”
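For whole numbers, the “9”, “Z”, and “F” behavior can be approximated with ordinary Python format strings. This is an illustrative sketch only; the helper names are invented for the example and do not cover decimal or triad handling.

```python
def mask_9(value, width):
    # "99999": zero-fill leading positions
    return f"{value:0{width}d}"

def mask_Z(value, width):
    # "ZZZZZ": space-fill leading positions
    return f"{value:{width}d}"

def mask_F(value):
    # "FFFFF": suppress leading zeros (variable length)
    return str(value)

print(mask_9(123, 5))   # 00123
print(mask_Z(123, 5))   # "  123"
print(mask_F(123))      # 123
```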


E. Sign Masking Characters


Use the following masking characters to return the appropriate sign
(or no sign):

Sign (Masking) Character Explanation and Examples

N
Displays a negative sign for a negative value. No character is
used to indicate a positive value.
Negative: -123 → “99999N” → “00123-”
Positive: 123 → “99999N” → “00123”

- (the hyphen character)
Displays a negative sign for a negative value. Displays a space
for a positive value.
Negative: -123 → “99999-” → “00123-”
Positive: 123 → “99999-” → “00123 ”

None
No character is used to indicate a positive or negative value.
Negative: -123 → “99999” → “00123”
Positive: 123 → “99999” → “00123”

+ (the plus sign character)
Displays a negative sign for a negative value. Displays a plus
sign for a positive value.
Negative: -123 → “99999+” → “00123-”
Positive: 123 → “99999+” → “00123+”

_ (the underscore character)
Displays a negative sign for a negative value. Displays a zero
(0) for a positive value. The zero is dropped when only whole
digits are output and the value is right justified.
Negative: -123 → “99999_” → “00123-”
Positive: 123 → “_99999” → “000123”
123 → “999.99_” → “001.230”
123 → “99999_” → “000123” (right justified)

A (must be placed in the rightmost position)
The ASCII overpunch table is used to indicate a negative or
positive value.
Negative: -123 → “99999A” → “00012s”
Positive: 123 → “99999A” → “000123”

E (must be placed in the rightmost position)
The EBCDIC table is used to indicate a negative or positive
value.
Negative: -123 → “9999E” → “0012L”
Positive: 123 → “9999E” → “0012C”


F. Notes Pertaining to Sign Masking Characters (-_+NAE)
• The masking format character for a sign is only valid at the
beginning or end of the format, except for “A” and “E,”
which can only be placed at the end of the format.
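A trailing “N” sign, for instance, can be sketched as follows. This is illustrative only; the function name is invented for the example and it only covers the “99999N”-style case shown in the table above.

```python
def format_trailing_n(value, width):
    # Zero-fill the digits, then append "-" only for negatives;
    # positives get no sign character at all.
    digits = f"{abs(value):0{width}d}"
    return digits + ("-" if value < 0 else "")

print(format_trailing_n(-123, 5))  # 00123-
print(format_trailing_n(123, 5))   # 00123
```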

G. Decimal Masking Characters
^
The caret (^) is used for implied decimal position
formatting.
Example:
1.2 → “99^99” → “0120”

Explicit decimal notation can be defined in several ways:
1. A colon followed by the “.” or “,” character defines the decimal
notation within the format string.
Example:
“RRRRR:.” → defines “.” as the decimal notation.
2. A single occurrence of “,” or “.” in a format defines it as the
decimal notation.
Examples:
“ZZZ.ZZZ” → defines “.” as the decimal notation.
“FFF,FFF” → defines “,” as the decimal notation.
3. Multiple occurrences of the same character denote a triad, with
the other character (. or ,) defined as the decimal notation.
Examples:
“ZZZ,ZZZ,ZZZ” → defines “.” as the decimal notation.
“FFF.FFF.ZZZ” → defines “,” as the decimal notation.
4. One occurrence of each character (, and .) defines the rightmost
character as the decimal notation.
Examples:
“ZZZ,ZZZ.ZZZ” → defines “.” as the decimal notation.
“FFF.FFF,ZZZ” → defines “,” as the decimal notation.
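The four rules can be collapsed into one decision procedure. The sketch below is an illustrative reading of those rules, not product code; the function name is invented for the example.

```python
def decimal_char(mask):
    # Rule 1: an explicit ":." or ":," wins outright.
    if ":." in mask:
        return "."
    if ":," in mask:
        return ","
    dots, commas = mask.count("."), mask.count(",")
    # Rule 4: one of each - the rightmost is the decimal notation.
    if dots == 1 and commas == 1:
        return "." if mask.rindex(".") > mask.rindex(",") else ","
    # Rule 2: a single occurrence defines the decimal notation.
    if dots == 1:
        return "."
    if commas == 1:
        return ","
    # Rule 3: a repeated character is a triad separator, so the
    # other character is the decimal notation.
    if dots > 1:
        return ","
    return "."  # repeated commas, or no decimal character at all

print(decimal_char("ZZZ,ZZZ.ZZZ"))  # .
print(decimal_char("FFF.FFF.ZZZ"))  # ,
```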


H. Notes Pertaining to the Decimal Masking Character
• When no decimal digits are being output, the decimal
notation is not output.
I. Binary and Packed Decimal Masking Characters
In order to take advantage of a computer central processing unit’s
(CPU) processing cycles, the significant byte order must be taken
into account. The most significant byte (MSB) stores data in the low
order and least significant byte (LSB) stores data in the high order.
In simplest terms, this means that MSB data is read from right-to-
left and LSB data is read from left-to-right. Because of the MSB
versus LSB situation, data models built using MSB cannot be
directly used on computers with LSB. Similarly, the profile databases
can be directly copied from MSB to MSB computers or from LSB to
LSB computers. However, if the databases are to be copied from
MSB to LSB, they must be exported and then imported using Trade
Guide.
The purpose of the binary and packed decimal masking characters
is to allow data to be read and processed across different CPU
architectures. Packed decimal and binary data formats are supported
by Application Integrator and enhance its use with legacy
applications (such as COBOL application data). For example, data
created on a Hewlett-Packard PA-RISC computer system could be
read on an Intel/NT computer system.
When you are modeling a source data model, you must know
where the input data was created.
• If the input data is created with an Intel-based CPU, use
‘p’ or ‘b’ because it is LSB.
• If the input data is created with a non-Intel-based CPU, use
‘P’ or ‘B’ because it is MSB.
When you are modeling a target data model, you must know where
the output data will be going.
• If the output data is going to an Intel-based CPU, use
‘p’ or ‘b’ because it is LSB.
• If the output data is going to a non-Intel-based CPU, use
‘P’ or ‘B’ because it is MSB.
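The MSB/LSB distinction can be seen with Python's struct module (illustrative only, not Application Integrator code): "<" packs least-significant-byte-first, as an Intel CPU stores values, while ">" packs most-significant-byte-first.

```python
import struct

# The same signed 16-bit value, -123 (0xFF85), in both byte orders.
lsb = struct.pack("<h", -123)  # LSB-first, "bb"-style data
msb = struct.pack(">h", -123)  # MSB-first, "BB"-style data
print(lsb.hex())  # 85ff
print(msb.hex())  # ff85
```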


The following table shows various types of platforms and the
formats used for each. Use this table to determine the masking
character for binary and packed decimal numbers in your inbound
or outbound data models.

Platform                       Byte Order   Binary   Packed
Intel/NT                       LSB          b        p
Intel/SCO‡                     LSB          b        p
Intel/Linux‡                   LSB          b        p
DEC Alpha OSF/1                LSB          b        p
Digital UNIX                   LSB          b        p
HP PA-RISC                     MSB          B        P
Sun SuperSparc                 MSB          B        P
IBM                            MSB          B        P
IBM PowerPC‡                   MSB          B        P
SGI MIPS                       MSB          B        P
Motorola 680x0 (Macintosh)‡    MSB          B        P

‡ As of the date of publication, Application Integrator is not available on these
platforms.

The translator converts the formatted input or output into the
Application Integrator internal numeric format; that is, negative
numbers are preceded with a hyphen (-). When the number
includes a decimal notation character, the decimal is explicit. When
a fractional number occurs, a leading zero is placed before the
explicit decimal. The character set for numerics is 0–9, “.”, “-”.

❖ Note: Unsigned numerics are not supported by Application
Integrator.


Binary data can be stored in 1, 2, or 4 bytes (8, 16, and 32
bits, respectively). Therefore, the data modeler would represent
binary data as:

MSB Mask   LSB Mask   Numeric Value Range
B          b          -128 to +127
BB         bb         -32,768 to +32,767
BBBB       bbbb       -2,147,483,648 to +2,147,483,647

Packed decimal (or Comp) data stores data in 1 to 7 bytes. Seven
bytes will hold a decimal value containing 13 digits, which is the
maximum number of numeric characters allowed for Application
Integrator.

MSB Mask   LSB Mask   Length of Numeric   Storage Required (Bytes)
P          p          1                   1
PP         pp         2                   2
PP         pp         3                   2
PPP        ppp        4                   3
PPP        ppp        5                   3
PPPP       pppp       6                   4
PPPP       pppp       7                   4
PPPPP      ppppp      8                   5
PPPPP      ppppp      9                   5
PPPPPP     pppppp     10                  6
PPPPPP     pppppp     11                  6
PPPPPPP    ppppppp    12                  7
PPPPPPP    ppppppp    13                  7


For example, to format a field for mainframe data that will contain
the packed values of +123, -123, and 123, you would use the format
‘PP’. The translator would read and store the values as follows:

Value   Stored in Two Bytes (Hex)   COBOL Picture Clause
+123    12 3C                       S9999 COMP-3
-123    12 3D                       S9999 COMP-3
123     12 3F                       9999 COMP-3 (unsigned is not supported)

The value of +1234567890, stored in six bytes and modeled as
PPPPPP, would be:

Value         Stored in Six Bytes (Hex)   COBOL Picture Clause
+1234567890   01 23 45 67 89 0C           S999999999 COMP-3
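The nibble layout behind these hexadecimal values can be sketched in a few lines. This is an illustrative encoder only; the function name is invented, and the sign nibbles C and D follow the COMP-3 convention shown in the tables above.

```python
def pack_comp3(value):
    # Two decimal digits per byte; the final nibble is the sign
    # (C = positive, D = negative).
    sign = 0xD if value < 0 else 0xC
    digits = str(abs(value))
    if len(digits) % 2 == 0:
        digits = "0" + digits  # pad so the sign fills the last byte
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(pack_comp3(123).hex())         # 123c
print(pack_comp3(-123).hex())        # 123d
print(pack_comp3(1234567890).hex())  # 01234567890c
```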

J. Other Masking Characters


Use the following masking characters for justification, triads, and
literals:

Mask Usage
:L
Left justify
:R
Right justify
Examples:
12 → “ZZZZZ.ZZ” → “ 12 ”
12 → “ZZZZZ.ZZ:L” → “12 ”
12 → “ZZZZZ.ZZ:R” → “ 12”
triads
“,” or “.” can be used with “9”, “F”, “Z”, and “$”, but
not with “R”, as the thousands-position placement
character.
@ (Escape)
Escapes literal characters defined within the format.
Example:
“@For: $ZZ,ZZZ” → escapes the “F” literal.


K. Notes on Other Formatting Characters
• Multiple colon formatting characters can be combined in a
format string, but each must be introduced with its own
colon. For example: “ZZZZ:L:,” not “ZZZZ:L,”

L. Date Masking Characters (# DATE, # DATE_NA)
Use the following masking characters to establish a date format:

Mask Usage
M
Date location for month; requires two Ms
Example:
19940902 → “MM/DD/YY” → “09/02/94”
D
Date location for day of month; requires two Ds
Example:
19940902 → “DD/MM/YYYY” → “02/09/1994”
Y
Date location for year; requires one, two, or four Ys
Example:
19940902 → “YMMDD” → “40902”
m
Replaces a leading month digit (if zero) with a space
Example:
19940902 → “mM/DD/YY” → “ 9/02/94”
d
Replaces a leading day digit (if zero) with a space
Example:
19940902 → “dD/MM/YY” → “ 2/09/94”
0
Defines a date of all zeros to be constructed
(# DATE_NA)
y
Date location for a variable-length year; must be in the
form “yyYY”
<space>
A space “ ” as a leading character in a mask defines a
date of all spaces to be parsed or constructed
(# DATE_NA)
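The date masks behave much like strftime directives. The Python comparison below is illustrative only; strftime is not the product's masking engine, it merely produces the same outputs for the two table examples.

```python
from datetime import datetime

# The internal date 19940902 rendered through two of the masks above.
d = datetime.strptime("19940902", "%Y%m%d")
print(d.strftime("%m/%d/%y"))  # 09/02/94   (like "MM/DD/YY")
print(d.strftime("%d/%m/%Y"))  # 02/09/1994 (like "DD/MM/YYYY")
```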


M. Notes on Date Masking Characters
Source Processing
• When using a variable-length year (“yyYY”), literals may be
used in the format for masking; however, no escape
characters may be used. This format, “The @date is :
yyYYMMDD”, is not permitted. This format, “Birth:
yyYYMMDD”, is correct.

N. Time Masking Characters (# TIME, # TIME_NA)
The Application Integrator internal time format consists of 8 digits
for valid hours, minutes, seconds, and decimal seconds. Hours and
minutes must be specified in the mask (a mask must be at least 4
digits). Use the following masking characters to establish a time
format:

Mask Usage
H (Required)
Time location for mandatory hours; requires two Hs
Example:
120959 → “HH:MM:SS” → “12:09:59”
M (Required)
Time location for mandatory minutes; requires two Ms
Example:
120959 → “HH:MM:SS” → “12:09:59”
S
Time location for mandatory seconds; requires two Ss
Example:
120959 → “HH:MM:SS” → “12:09:59”
s (Source only)
Time location for optional seconds; requires two s's
Examples:
1209 → “HH:MM:ss” → “12090000”
120959 → “HH:MM:ss” → “12095900”
D
Time location for mandatory decimal seconds;
requires two Ds
Example:
12095900 → “HH:MM:SS:DD” → “12095900”
d (Source only)
Time location for optional decimal seconds; requires
two ds
Examples:
120959 → “HH:MM:SS:dd” → “12095900”
1209591 → “HH:MM:SS:dd” → “12095910”
12095912 → “HH:MM:SS:dd” → “12095912”


Mask Usage
<space> A space “ ” as a leading character in a mask defines
a time of all spaces to be parsed or constructed
(# TIME_NA). The value parsed and passed back to
the source data model will be spaces, not zeros.

O. Notes on Time Masking Characters
Source Processing
• A time parsed by the source access model is supplied back
to the source data model in the Application Integrator
internal format of 8 digits, regardless of whether the time
was parsed as 4, 6, 7, or 8 digits. Additional 0 digits are
added to the parsed value to construct the internal format.
• A minimum of 4 masking characters is required. A mask
must be 4, 6, or 8 characters in length, not counting the
<space> mask character.

Target Processing
• H, M, S, and D are the target formatting characters. The
source masking characters ‘s’ or ‘d’ will be taken as
literals and output as such, for example, “12:14:ss:dd.”
• The value received is first converted to an 8-digit number
by adding trailing zeros and is then output based on the
format definition. If the value is a single digit (e.g., “2”), a
leading zero is first inserted before the trailing zeros are
added (e.g., “02”).
• A value of more than 8 digits generates error code 146.
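The conversion to the 8-digit internal form can be sketched as follows. This is illustrative only; the function name is invented, and the odd-length check generalizes the single-digit rule stated above.

```python
def to_internal_time(value):
    # A single leading digit first gets a leading zero, then
    # trailing zeros pad the value out to 8 digits (HHMMSSdd).
    if len(value) % 2:
        value = "0" + value
    return value.ljust(8, "0")

print(to_internal_time("1209"))    # 12090000
print(to_internal_time("120959"))  # 12095900
print(to_internal_time("2"))       # 02000000
```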


Match Value Setting a Match Value


Setting a match value is optional and is only available on a tag data
model item.
The Match Value box takes a literal value and has a maximum size
of 15 characters when using OTFixed.acc as the access model; or a
maximum size of 3 characters when using OTX12S.acc,
OTX12T.acc, OTEFTS.acc, OTEFTT.acc, OTANAS.acc or
OTANAT.acc as the access model.
During processing of the source data model, the value in the Match
Value box is compared to the characters at the beginning of the
record in the input stream. If the match value is not encountered,
processing continues at the next record/segment.
During processing of the target data model, the value in the Match
Value box will be constructed in the output stream at the beginning
of the record.

❖ Note: To define the match value as case insensitive, preface the
value with a caret (^), for example, “^from.” The “^” character is
not counted as a character during parsing.
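The matching behavior, including the caret prefix, can be sketched as follows. This is illustrative only; the function name is invented, and the record/match semantics are a reading of the description above, not product code.

```python
def match_record(record, match_value):
    # A leading "^" requests a case-insensitive compare and is
    # not itself counted as a match character.
    if match_value.startswith("^"):
        pattern = match_value[1:]
        return record[:len(pattern)].lower() == pattern.lower()
    return record.startswith(match_value)

print(match_record("FROM: ACME", "^from"))  # True
print(match_record("From: ACME", "FROM"))   # False
```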

Ø To define a data model item’s match value


1. Select the tag to be modified and select the Match Value box.

2. Type the appropriate match value.


3. To accept the value, click outside the box or press Tab to move
to the next box.


Verify Specifying a Verification List


The Verify box is only available on a defining item. In this box, you
enter the name of the list against which the data for this item will be
verified. The verification list is created via the Trade Guide from
the Standards dialog box of the Xrefs/Codes menu. Refer to
Section 3 of the Trade Guide for System Administration User’s Guide
for details on entering these lists.
Besides specifying this list, verification requires additions to the
data model rules: the lookup key must be specified (using the
environment keyword variable “LOOKUP_KEY”), and the
appropriate lookup value must be defined in the source and/or
target data models (using the functions “DEF_LKUP( )” or
“LKUP( )”) so that the verification list values can be found in the
Profile Database. For details, see the explanations of the keyword
“LOOKUP_KEY” and the functions “DEF_LKUP( )” and “LKUP( )”
in Appendix B of this manual.

Ø To define a data model item’s verification list


1. Select the data model item to be modified and select the Verify
box.

2. Type the name of the list.


3. To accept the list entry, click outside the box or press Tab to
move to the next box.


File Specifying a Secondary Input or Output File


Workbench supports the ability to read from or write to a
secondary file during either inbound or outbound processing. The
File option can alter the input or output stream within an
environment. This option provides a means to parse from or
construct data to a second file. For example, you could specify that
selected output go to a secondary file for analyzing or reporting
purposes.
This feature is set up by specifying a file (or an environment
variable) to read from/write to during data modeling.
The following parameters apply to using the File option of the
Layout Editor:
r The File option is only available for group items in either a
source or target data model.
r On the source side, the File option specifies a file to read
from, opening the file upon the initial read of the
environment, reading continuously until the source data
model processing is complete. If the secondary file is not
found on the initial read of the model, an error occurs.
r On the target side, the File option specifies a file to
construct. The I/O name specified in the File value entry
box for the group item is resolved in the following
sequence:
1. The name is determined at model parsing time versus
at execution time; the name cannot be altered within the
data model in which it is used.
2. The I/O name is attempted to be resolved by treating it
as a user-defined environment variable.
r If an environment variable of the same name does not exist,
the filename is taken as a string literal.
r All items hierarchically contained within the group item are
parsed/constructed per the specified input/output stream.
The parsing/constructing of the data continues until control
returns to the parent data model item of the group data
model item.


• Outside of the group item, the system uses the
environment's specified input/output stream (specified in
the map component file or at the command line) during
parsing/constructing, unless the File option is set for
another group item.
• An attached environment specified within a “File” group
item inherits the previous environment's specified
input/output stream, not the secondary stream specified by
the File group item.
• In a single-layered environment, within a File group item,
the data will be appended to the specified output stream.
• In a multiple-layered environment, within a File group
item, the data will be overwritten in the specified output
stream, since the File option is reprocessed each time the
environment where the File group item is specified is
reattached.

❖ Note: Refer to Section 4 for more details on environments.
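The two-step I/O name resolution described above amounts to an environment-variable lookup with a literal fallback. The sketch below is illustrative only; the function name and the "REPORT_FILE" variable are invented for the example.

```python
import os

def resolve_io_name(name):
    # Try the name as an environment variable; if it is not
    # defined, use the name itself as a literal filename.
    return os.environ.get(name, name)

os.environ["REPORT_FILE"] = "/tmp/report.out"  # hypothetical variable
print(resolve_io_name("REPORT_FILE"))  # /tmp/report.out
print(resolve_io_name("report.dat"))   # report.dat
```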

Ø To specify a secondary input/output file


1. Select the group item to be modified and click in the File box.

2. Type the appropriate filename (including the full path, if


necessary).
3. Click outside the box to accept the value.


Sort Sorting Defining Item Output


This option provides a means to reorder a section of the output
stream for reporting or other purposes. The output for the group
will be in the order based on the sort order (primary, secondary,
etc.) specified. A sorted group can be output to a secondary file;
see the File option, described earlier in this section, for details.

❖ Note: The Sort option is available on items within a group in a
target data model only. This option provides a method of sorting
selected defining items associated within (children of) the group
item.

Ø To define a group item’s sort value


1. Select the group item to be modified and select the Sort box.
2. Click the ellipsis (…) that appears in the box to open a dialog
box that allows you to select the sort sequence of the defining
items within the selected group. The Sort dialog box displays
two areas labeled “List” and “Sort.”

3. From the List box, select each defining item by which to sort the
data model items in the group. To place the item in the Sort
box, choose the >> button. To remove an item from the Sort,
select it and choose the << button, returning it to the List box.
The first item you select is the primary sort, the second item
becomes the secondary sort, and so forth.


4. Choose the Apply button to save your sort order for the group
or choose the Cancel button to return to the Layout Editor
window without specifying a sort order.
Once you return to the Layout Editor and select another area of
the window, the sort order appears in the Sort box. To review
or edit the complete list of defining items (since only the first
few characters of the primary sort appear in the box), select the
Sort box and click the ellipsis to return to the Sort dialog box.


Increment Setting an Increment/Non-Increment Counter


The Increment/Non-Increment option is only available on group
items and only pertains to MetaLink variables. Set to Yes, this
option increments the instance on all MetaLink variables used with
the group item and its child items. The default value is Yes.
Refer to the “Variables” section in Section 3 for a description of
these types of variables and their usage.

Ø To set the counter to increment


1. Select the group item to be modified and click the Increment
box.

2. Click the arrow and select Yes to increment (the default) or No


to avoid incrementing.
3. To accept the value, click outside the box or press Tab to move
to the next box.


Establishing Data When a new data model item is inserted or appended, it is placed
Hierarchy into the data model at the same hierarchy level as the original data
model item.

Hierarchical The relationship between parent, child, and sibling items is
Relationships shown below:

Parent Item 1

Child Item 1

Child Item 2

Child Item 3

Parent Item 2
The hierarchy also determines processing flow, child to sibling and
then back to parent.
Parent (3)
Child (1)
Sibling (2)
Refer to the “Understanding Environments” section in Section 4 for
a discussion of processing flow.
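The "child to sibling and then back to parent" return flow can be sketched as a post-order tree walk. This is illustrative only; the data structure and function name are invented for the example.

```python
# Each node is (name, [children]); processing visits children in
# sibling order before control returns to the parent.
def process(name, children, order):
    for child_name, grandchildren in children:
        process(child_name, grandchildren, order)
    order.append(name)  # the parent completes after its children

order = []
process("Parent Item 1",
        [("Child Item 1", []), ("Child Item 2", []), ("Child Item 3", [])],
        order)
print(order)
# ['Child Item 1', 'Child Item 2', 'Child Item 3', 'Parent Item 1']
```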

Ø To change a data model item hierarchy level


1. Select the data model item you want to modify.
2. To make an item a parent (top hierarchy level), a child (one
hierarchy lower than a parent data model item), or a sibling
(same level), you have several options. Choose one of the
following options:
• Menu – From the Layout Editor Data Model menu, choose
Change Level Right to make the selected data model item a
child or sibling;
- or -
Choose Change Level Left to make the current data model
item a parent or sibling.


• Toolbar Icon – Click the Change Level Right icon to
restructure the selected item's hierarchical level to a lower
level (child or sibling);
- or -
Click the Change Level Left icon to restructure the selected
item's hierarchical level to a higher level (parent or
sibling).
• Keyboard Shortcut – Press Ctrl+Right Arrow to make the
selected data model item a child or sibling;
- or -
Press Ctrl+Left Arrow to make the selected data model item
a parent or sibling.

❖ Note: If you attempt to modify the last top-level sibling
item (for example, the last tag item in a source data
model) after opening RuleBuilder for any data model item
in the model, you will receive the following message:

In order for Workbench to coordinate changes to the data
model layout and the data model rules, changes to the
data model layout should be made before opening
RuleBuilder; in the case of the last “parent” item, this is
required.


Including Files in The Include… option allows you to attach Include files to your data
Data Models models. Include files contain rules that you can reference from
your data model so you can use them once or multiple times. The
Include file’s extension is “.inc”. The rules are in the form of
declare statements.

To access the Include… 1. From the Layout Editor or RuleBuilder main menu, choose
option File. The File drop-down menu appears.
2. Choose Include… The Include dialog box appears.

The Include dialog box displays Available Files on the left side
and Included Files on the right. The Available Files are those
Include files that are available to this data model. The
Available Files cannot be accessed by the data model until they
are linked to the data model. This is done by moving the
filename from the Available Files list into the Included Files list,
then applying the change and saving the data model. The
following table describes the items found on the Include dialog
box.

Item Description
Available Files This list box displays the filenames of the
Include files available to this data model.
Included Files This list box displays the filenames of the
Include files that will be or are linked to
the data model.
<< Choosing this button will move the filename
from the Included Files list to the Available
Files list.


Item Description
>> Choosing this button will move the filename
from the Available Files list to the Included
Files list.
Apply Saves the changes.
View Allows you to review the highlighted file.
Cancel Exits the Include dialog box.

To link an Include file to 1. From the Layout Editor or RuleBuilder main menu, choose
a data model File. The File drop-down menu appears.
2. Choose Include… The Include dialog box appears.
3. In the Available Files list box, highlight the filename of the
Include file to be linked to the data model.
4. Choose the >> button. The filename will move from the
Available Files list box to the Included Files list box.
5. To complete the entry, choose the Apply button.

To unlink an Include file from a data model

1. From the Layout Editor or the RuleBuilder main menu,
choose File. The File drop-down menu appears.
2. Choose Include… The Include dialog box appears.
3. In the Included Files list box, highlight the filename of the
Include file to be unlinked from the data model.
4. Choose the << button. The filename will move from the
Included Files list box to the Available Files list box.
5. To complete the entry, choose the Apply button.


To view an Include file

1. From the Include dialog box, highlight the filename of the file to
be viewed.
2. Choose the View button. The View dialog box will appear.

3. To locate a specific item in the file, choose the Find button.


Refer to the “Finding Data Model Items” section of Section 2 in
this manual for instructions on using this function.
4. Close the viewing dialog box by choosing the Close button.
You can also view an Include file using an on-line editor.
Include files contain declarations (declare statements) that are
loaded with the INCLUDE keyword and invoked with the
PERFORM() function. These declarations can be used any number
of times in the data model without having to duplicate the code.
Information about adding declarations to a data model can be
found in the “Declarations Tab” section of Section 3 of this manual.
Refer to Appendix B, “Application Integrator Model Functions” for
additional information about using the INCLUDE data model
keyword and the PERFORM() data model function.
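For illustration, a rule in a data model might invoke a declaration loaded from an Include file with PERFORM(). This is a hypothetical minimal sketch: the declaration name "FmtDate" is invented, and only the semicolon comments, the [ ] Null condition, and the PERFORM() call follow forms shown elsewhere in this guide.

```
; invoke a declaration loaded from an Include file
[ ]
PERFORM("FmtDate") ; "FmtDate" is a hypothetical declaration name
PERFORM("FmtDate") ; the same declaration can be invoked again
```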


Assigning Rules to Data Model Items

When you assign rules to data model items, you are adding
processing logic to your model. Information about adding rules
can be found in Section 3, "Building Rules into Data Models."

Saving a Data Model

It is good modeling practice to save your model frequently during
development. It is recommended that you save your data models
and map component files to the working directory.

❖ Note: During the Save operation in the Layout Editor, a set of rules
may be placed in the topmost data model item. If the topmost item is a
Group type item that uses the Sort function, rule execution on a sort
item is not allowed; therefore, the output data will not be created.
Shown here is an example of a rule that could be inserted and the
message box:

[ ]
; mapbuilder predefined do not remove this line

To avoid this problem, choose the OK or the Not Used button on the
message box. Insert another Group type item under the top level
item and define the sort on it. Save the data model.


To save a data model 1. Activate the Layout Editor window of the data model to be
saved. (Click the title bar of the window to activate it.)
2. Save the data model in one of the following ways:
r Menu – From the File menu, choose Save.
r Toolbar Icon – Click the Save icon.
3. If you have already named your data model, the application
will save the work under the current name. If you have not
named the data model, a dialog box appears for you to enter a
path and name.

❖ Note: If any of the following items is in the top level of the
model, the Layout Editor will close normally. The Insert Rules
information box will not appear.

;; MapBuilder (predefined)
Do Not Remove This Line
VAR->OTTargetSuccessful
VAR->OTSourceSuccessful
PERFORM("OTSrcEnd")
PERFORM("OTTrgEnd")


To save a data model under a new name

1. Activate the Layout Editor window of the data model to be
saved with a new name. (Click the title bar of the window to
activate it.)
2. From the File menu, choose Save As.
3. In the Save As dialog box that appears (see the figures below to
note differences between operating systems), type the new
name in the box provided.

UNIX Save As Dialog Box

Windows 95 Save As Dialog Box


4. Choose the OK button (UNIX and Windows NT 3.51) or Save
button (Windows 95 or NT 4.0) to save to the new name and
close the dialog box.

❖ Hint for UNIX Users: It is also possible to print out a data
model definition, with or without rules, using the UNIX script
OTmdl.sh. See Appendix F, "Application Integrator Utilities,"
for a complete description of this program.


Closing the Editor

To completely exit from the Layout Editor

1. Activate the Layout Editor window to be exited. (Click the title
bar of the window to activate it.)
2. From the File menu, choose Close Editor (or press Ctrl+Q).
If changes were made, but not saved or applied (rules), a
prompt displays asking if you want to apply and save changes.

Format Representation Report

You can create a report that shows the different formats used in
defining a data model by running a translation that contains a
model and a map component file with examples of each of
the valid formats. These have been provided for the evaluation of
target formatting.
In UNIX, the translation can be executed by running the following
script at the command line:
OTFmt.sh
Output is automatically displayed to the screen from the file
OTFmt.out. (This filename is defined in the map component file
OTFmt.att).
In Windows, the translation is executed by issuing the command:
otrun.exe -at OTFmt.att -cs %OT_QUEUEID% -I
You can then use a Windows editor, such as MS Word or Notepad,
to view the file OTFmt.out. In Windows, the program must print to
a file named OTFmt.out.

Additional values can be run through the model by adding the
values to the "Initialization" item of the model OTFmtT.mdl.
Assign the values as strings to the variable "ARRAY->Number."


Standard Data Model Listings

Two listings are available to show the contents of data models
derived from a standard data model. The Group/Tag listing
includes the Group and Tag data model item types. The All Data
Model Items listing includes all data model item types: Group,
Tag, and Defining.
For Group item types, the description will be the data model item
label. For Tag type items, the description will be a cross-reference
of the match value to a database-stored description.
For Defining type data model items, the cross-reference is based on
the element number (included in the data model item label) to a
database-stored description. If the data model item label does not
include the element number (as in standard data models created
before version 3.0), the description is the data model item label. All
descriptions are based upon the standard: ASC X12,
UN/EDIFACT, or TRADACOMS.

Running the Group/Tag Listing

To run the Group/Tag listing:

UNIX operating system


From the command line, type the following:
OTmdl1.sh <data_model_name> <rules? y/n> <output_device>

Windows operating system


From the Run dialog box, type the following:
OTmdl1.bat <data_model_name> <rules? y/n> <output_device>
where
<data_model_name> indicates the name of the data model to be
documented.
<rules? y/n> indicates whether rules are to be included in the
listing. If set to ‘Y’ or ‘y’, the rules associated with the labels will be
included. If set to ‘N’ or ‘n’, the rules will not be printed.
<output_device> indicates the output preference. This argument is
optional; if it is omitted, the output will be
directed to the default printer. If 'display' is entered, the output
will be directed to the monitor. If any other value is entered, output
will be directed to a file in overwrite mode.
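The handling of the optional third argument can be summarized in a short sketch. Python is used here purely for illustration (the actual listing programs are shell and batch scripts); the function `resolve_output` is not part of the product.

```python
def resolve_output(output_device=None):
    """Sketch of how the optional <output_device> argument is interpreted."""
    if output_device is None:
        return ("default printer", None)   # argument omitted
    if output_device == "display":
        return ("monitor", None)           # directed to the screen
    return ("file", output_device)         # written in overwrite mode
```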


Running the All Data Model Items Listing

To run the All Data Model Items listing:

UNIX operating system


From the command line, type the following:
OTmdl2.sh <data_model_name> <rules? y/n> <output_device>

Windows operating system


From the Run dialog box, type the following:
OTmdl2.bat <data_model_name> <rules? y/n> <output_device>
where
<data_model_name> indicates the name of the data model to be
documented.
<rules? y/n> indicates whether rules are to be included in the
listing. If set to ‘Y’ or ‘y’, the rules associated with the labels will be
included. If set to ‘N’ or ‘n’, the rules will not be printed.
<output_device> indicates the output preference. This argument is
optional; if it is omitted, the output will be
directed to the default printer. If 'display' is entered, the output
will be directed to the monitor. If any other value is entered, output
will be directed to a file in overwrite mode.



Section 3
Building Rules into Data Models

This section describes how to use RuleBuilder to add processing


logic to your data models. This section also describes using
MapBuilder, the tool for automating data mapping, when source
and target data models have the same or nearly the same structure.


Overview of Rules Entry

Rules allow for the movement of data from the source to the target
data model. Rules can be placed on any type of data model item in
the data model (group, tag, container, or defining items) to
describe how data is referenced, assigned, and/or manipulated.
In the source data model (input side), the rules are normally placed
on the parent item (tag) to ensure the entire tag has been parsed
and validated before any rules are executed and data mapping
occurs. In the target data model (output side), the rules are placed
on the defining items in order to specify the variables from which
values are to be mapped. (These variables were assigned
via rules in the source data model.)

Modes for Processing Rules

There are three modes for processing rules, available for all
data model items within a data model. They are performed in the
following sequence:

Mode Description
PRESENT Rules will be performed when entering rules
processing with a status of 0
(no errors).
ABSENT Rules will be performed when entering rules
processing if one of the following statuses is found:
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
PRESENT mode with the same statuses.

❖ Note: You cannot use an Absent rule in fixed-length data.


Mode Description
ERROR Rules will be performed when entering rules
processing with any status other than the following:
0-okay
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
ABSENT mode processing with a non-zero status.

❖ Note: An error typically occurs when an invalid
date, time, or numeric function is used.
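The mode selected on entry to rules processing can be sketched as follows. This is an illustrative simplification in Python, not product code; the follow-on cases for leaving PRESENT or ABSENT mode, described above, are not modeled.

```python
# Statuses that trigger ABSENT mode: item not found, no value,
# no instance, no children found.
ABSENT_STATUSES = {138, 139, 140, 171}

def mode_on_entry(status):
    """Return which rule mode fires when rules processing is entered."""
    if status == 0:
        return "PRESENT"            # status 0: no errors
    if status in ABSENT_STATUSES:
        return "ABSENT"
    return "ERROR"                  # any other non-zero status
```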

Refer to the “Understanding Environments” section in Section 4 for


more details on rules processing.


Types of Rule Conditions

Each rule consists of a condition with one or more actions. There
are two types of conditions: Null and Conditional Expression.
A Null condition is always true, so its actions will always be
performed. It is also referred to as No Condition.
With a Conditional Expression, the condition must be true before
the actions will be performed. Any data model item can have one
or more conditions, and each condition can have one or more
actions.
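The two condition types behave as sketched below. Python is used for illustration only; in the product, a Null condition is written [ ] and a conditional expression [=], as described later in this section.

```python
def run_rule(condition, actions):
    """A Null condition (None) is always true; a Conditional Expression
    (modeled here as a callable) must be true before the actions run."""
    results = []
    if condition is None or condition():
        for action in actions:
            results.append(action())
    return results
```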

Variables

Variables are the links between the source and target data model
items. There are three types of variables supported by Application
Integrator, as noted in the following table:

Variable Description
Variable This type of variable is a single value, also referred to
as a temporary Variable. If more than one
assignment is made to the same variable name, the
last assigned value is the value that will be
referenced. A variable is useful for referencing the
same value multiple times, as a counter, or in a
concatenation.
Array This type of variable is a list of values. Manual
controls are recommended with this variable
whenever multiple levels in the data model are
mapped. These controls are used to ensure that the
proper data stays together, such as: detail records
with the proper header record or sub-detail records
with the proper detail records. There are a set index
and a reference index associated with the list of
values. The set index points to the last value placed
on the list and the reference index points to the next
value to be referenced from the list. The reference
index can be reset to the top of the list by using the
data model keyword RESET_VAL.


Variable Description
MetaLink This type of variable is a list of values. A data model
item’s instance and its parent’s instance are
maintained with each value placed on the variable.
These instances eliminate the need for manual
controls to ensure that the proper data stays together,
such as: detail records with the proper header record
and sub-detail records with the proper detail record.
Only a source data model can assign values to the
MetaLink. If the target data model attempts to assign
a value to the MetaLink, the last obtained value will
be overwritten with the new value. Only the target
data model can reference a value from the MetaLink.

❖ Note: MetaLinks are only intended to be used when
the looping of source and target data is the same or
closely related.

Use the example below for assistance in selecting between a


MetaLink and an Array.
Assume some data exists with the following information...

Source Structure    Data Model Type    Input Data

Document Loop       Group              1Doc_1
HeadRec             Tag (1)            200500Prod_1
HeadNo              Defining           200125Prod_2
DetailLoop          Group              200840Prod_3
DetailRec           Tag (2)            1Doc_2
DetQty              Defining           200225Prod_4
DetPartNo           Defining           200100Prod_5


...then the following values will be placed on the MetaLink
variables, where P-Inst is the Parent Instance and I-Inst is the Item
Instance.

M_L->HeadNo                    M_L->DetPartNo

Value  P-Inst  I-Inst          Value   P-Inst  I-Inst
Doc_1  0       0               Prod_1  0       0
                               Prod_2  0       1
                               Prod_3  0       2
Doc_2  0       1               Prod_4  1       1
                               Prod_5  1       2

... and on the Array variable ARRAY->DetPartNo:

Value
Prod_1
Prod_2
Prod_3
Prod_4
Prod_5
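The behavior of the two list variables can be sketched in a few lines. This is an illustration of the semantics described above, not the product's implementation; the class and method names are invented for the sketch.

```python
class Array:
    """Flat list of values with a set index and a reference index."""
    def __init__(self):
        self.values = []   # set index: position of the last value appended
        self.ref = 0       # reference index: next value to be referenced

    def assign(self, value):
        self.values.append(value)

    def reference(self):
        value = self.values[self.ref]
        self.ref += 1
        return value

    def reset_val(self):
        self.ref = 0       # like the RESET_VAL data model keyword


class MetaLink:
    """Each value carries the item's instance and its parent's instance."""
    def __init__(self):
        self.entries = []

    def assign(self, value, parent_inst, item_inst):
        self.entries.append((value, parent_inst, item_inst))
```

With the input data shown above, Prod_4 would be stored on the MetaLink together with the instances (1, 1), so the target model can keep it with the second header record; the Array holds only the flat list Prod_1 through Prod_5.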


Keywords and Functions

Application Integrator provides a library of keywords and
functions.
Keywords provide a means to alter the natural processing flow
within an item, among items in a data model, and within an
environment. For example, the keyword BREAK leaves the file
pointer unchanged and processing moves to the next sibling item to
process.
Application Integrator's functions fulfill numerous tasks, including
manipulating data in the input/output stream, assigning and
referencing values in the Administration Database, and performing
date, time, and mathematical calculations.
Refer to Appendix B for a complete reference to the Application
Integrator keywords and functions.


Two Methods for Creating Rules

There are two different methods for creating rules to map your
data:
r RuleBuilder
r MapBuilder
RuleBuilder allows you to create customized mapping rules. Using
RuleBuilder, you have access to the full functionality of the
Workbench rules system. Depending on the expertise of the
developer, rule definition can be done either in a free-format text
editor or through prompting via the RuleBuilder interface. When
using the RuleBuilder, the order that the rules appear in is the order
in which they will be executed during a translation session.
Children data model items are acted on before parent data model
items, hence the re-ordering of the rules to match the execution
order.
RuleBuilder provides a series of tabbed pages or “tabs” which
organize the components of rules (conditions, data model items,
functions, variables and so forth) into categories. Using the mouse
or keyboard shortcuts, you can quickly build the data model logic.
The RuleBuilder interface is described in the “Using RuleBuilder”
section.
MapBuilder is an automated way of applying rules on data model
items. MapBuilder uses a drag and drop feature to map from
source to target data model. The rules are placed on the defining
items only and are a NULL condition (that is, the actions will
always be performed). In the source data model, MapBuilder
creates a rule that assigns a data model item’s value to a variable.
In the target data model, MapBuilder creates a rule that references
the variable for its value and assigns it to the data model item.
MapBuilder is an efficient way to map from source to target data
models when the input and output stream are the same, or
extremely similar, in structure. Refer to the “Using MapBuilder”
section for details.



Using RuleBuilder

RuleBuilder is accessed from the Layout Editor window. The
RuleBuilder window displays rules in the order that they are
executed during a translation session.

RuleBuilder Window

RuleBuilder provides an interface for quickly defining null and
conditional expressions. The following illustration shows the
RuleBuilder interface:

[Illustration: the RuleBuilder window, with callouts for the
RuleBuilder toolbar, the Rule Notebook tabs, the Rule Notebook
paging arrows, and the Rule Edit Workspace.]

The RuleBuilder Window has two parts: the Rule Notebook (the left
portion of the window) and the Rule Edit Workspace (the right
portion of the window). You can directly type lines of rule
expressions in the Rule Edit Workspace or you can use the Rule
Notebook options to create expressions via mouse selection.


The Rule Notebook contains tab pages and each page represents
different rule components. You can bring any page forward by
clicking its tab, for example, the illustration shows the DM Items
tab. You can also use the arrow keys at the bottom of the Rule
Notebook to move between pages.
The Rule Edit Workspace is a list of rules to be processed during
translation. When you open RuleBuilder, the entire file displays in
the order in which the existing rules are to be executed (not the order of
the data model items as displayed in the Layout Editor window).
The insertion point is placed on the data model item from which
you opened RuleBuilder. The Present, Absent, and Error mode
rules display for each data model item.


RuleBuilder Toolbar

The RuleBuilder toolbar provides buttons to quickly add rules
logic. The description of the function of each button can be found
in the RuleBuilder - File Menu and the RuleBuilder - Edit Menu
sections.

[Illustration: the RuleBuilder toolbar, with buttons for Undo, Redo,
Cut, Copy, Paste, Insert Assignment, Insert Literal, Insert Null
Condition, Insert Condition, Insert Return, Find Next Parameter,
Check Syntax, and Apply.]


RuleBuilder - File Menu

The RuleBuilder File menu has three options:

Menu Option        Keyboard Shortcut   Description
File → Includes…                       Displays the Include dialog box.
File → Apply       Ctrl+A              Provides options for applying the
                                       rules.

❖ Caution: Rules are not recorded in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.

File → Minimize    Ctrl+M              Minimizes the RuleBuilder window
                                       and returns you to the Layout
                                       Editor for the given data model.


RuleBuilder - Edit Menu

The Edit menu provides options for working with the Rule Edit
Workspace.

The following table indicates the drop-down menu options, their
corresponding toolbar icons and keyboard shortcuts, and
descriptions for each of the RuleBuilder items.

Menu Option    Keyboard Shortcut   Description
Edit → Undo    Alt+Backspace       Reverses or cancels the previously
                                   performed actions.
Edit → Redo    Ctrl+Y              Redoes the previously undone actions.
Edit → Cut     Ctrl+X              Cuts the highlighted text to the
                                   clipboard.
Edit → Copy    Ctrl+C              Copies the highlighted text to the
                                   clipboard.
Edit → Paste   Ctrl+V              Moves the cut or copied text from the
                                   clipboard to the current insertion point.


Menu Option                 Keyboard Shortcut   Description
Edit → Insert Assignment    Ctrl+G              Places a single equal sign (=) into
                                                the current expression to do an
                                                assignment.
Edit → Insert Literal       Ctrl+L              Places the quotation marks to start
                                                the entry of literal text in the
                                                current expression.
Edit → Insert Null          Ctrl+N              Adds the empty brackets [ ] to
Condition                                       start a Null condition.
Edit → Insert Condition     Ctrl+I              Adds the brackets with an equal
                                                sign [=] to add a conditional
                                                expression.
Edit → Insert Return        Enter/Return        Behaves as if pressing the
                                                Enter/Return key. Adds a return
                                                at the cursor position.
Edit → Find Next            Ctrl+F              Prompts you for the next
Parameter                                       parameter to complete the current
                                                expression.
Edit → Check Syntax         Ctrl+K              Parses the current rules and
                                                notifies you of any syntax errors.


RuleBuilder Help Menu

The Help menu provides options for working with the Help
facility. Additional information about using the Help system can
be found in the Preface.

Accessing RuleBuilder

To display the RuleBuilder dialog box

1. From the Layout Editor dialog box, highlight the data model
item to which you want to add rules.


2. To display the RuleBuilder dialog box for the chosen data


model item, select one of the following options:

Menu - From the Data Model menu, choose Add/Edit Rules.
Icon - Click the RuleBuilder icon to the left of the data
model item label. If rules have been added to the
data model item, the icon is highlighted to indicate
the addition of rules.

The RuleBuilder window for the highlighted data model item


appears.


Adding/Modifying Rules

Once you open the RuleBuilder window, you are ready to add or
modify rules of the data model. As entries and selections are made
from the Rule Notebook, they appear in the Rule Edit Workspace.
When you open the RuleBuilder window, the focus is on the
present mode rules of the data model item you have currently
selected.
The same methods are used to insert PRESENT, ABSENT, or
ERROR mode rules.
Rules for any data model item can be displayed or hidden by
clicking the Collapse or Expand icons to the right of the data model
item's name in the Rule Edit Workspace. The empty brackets [ ]
indicate that no rules are defined for the data model item. An equal
sign within the brackets [=] indicates that there are rules defined
for the data model item. An example of an expanded rule is shown
in the following illustration.

[Illustration: click the Collapse/Expand symbol to display or hide
the rules assigned to a data model item.]


To insert an Assignment
1. Either highlight the data model item to add rules to and open
RuleBuilder, or once in the Rule Edit Workspace, move the
insertion pointer to the data model item to which you want to
add rules. Be sure to move the insertion pointer to the mode to
which you want to add rules (Present, Absent, or Error).
2. To insert an Assignment = at the insertion point, use one of the
following methods:
r Menu-From the Edit menu, choose Insert Assignment.
r Toolbar Icon-Click the Insert Assignment icon.
r Keyboard Shortcut-Press Ctrl+G.
r Keyboard–Press = (equal sign).
3. Insert the appropriate statements for the rule by either typing
them directly into the Rule Edit Workspace or by following the
procedure for inserting a Rule Notebook option.
4. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
r Menu-From the File menu, choose Apply.
r Toolbar Icon-Click the Apply icon.
r Keyboard Shortcut-Press Ctrl+A.

❖ Caution: Rules are not "saved" in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.

5. To minimize the RuleBuilder window, from the RuleBuilder


File menu, choose Minimize (or Ctrl+M).


To insert a Null condition


1. Either highlight the data model item to add rules to and open
RuleBuilder, or once in the Rule Edit Workspace, move the
insertion pointer to the data model item to which you want to
add rules. Be sure to move the insertion pointer to the mode to
which you want to add rules (Present, Absent, or Error).
2. To insert a Null condition [ ] at the insertion point, use one of
the following methods:
r Menu-From the Edit menu, choose Insert Null
Condition.
r Toolbar Icon-Click the Null Condition icon.
r Keyboard Shortcut-Press Ctrl+N.
r Keyboard-Press [space] (left bracket space right bracket).
3. Insert the appropriate statements for the rule by either typing
them directly into the Rule Edit Workspace or by following the
procedure for inserting any Rule Builder Tab option.
4. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
r Menu-From the File menu, choose Apply.
r Toolbar Icon-Click the Apply icon.
r Keyboard Shortcut-Press Ctrl+A.

❖ Caution: Rules are not "saved" in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.

5. To minimize the RuleBuilder window, from the RuleBuilder


File menu, choose Minimize (or Ctrl+M).


To insert a Condition
1. Either highlight the data model item to add rules to and open
RuleBuilder, or once in the Rule Edit Workspace, move the
insertion pointer to the data model item to which you want to
add rules. Be sure to move the insertion pointer to the mode to
which you want to add rules (Present, Absent, or Error).
2. To insert a conditional expression at the insertion point, use one
of the following methods:
r Menu-From the Edit menu, choose Insert Condition.
r Toolbar Icon-Click the Condition icon.
r Keyboard Shortcut-Press Ctrl+I.
r Keyboard-[=] (bracket equal sign bracket)
3. Insert the appropriate statements for your rule by either typing
them directly into the Workspace or following the procedure
for inserting a RuleBuilder Tab option.
4. Save your changes to the Rule Edit Workspace. Changes to the
Rule Edit Workspace are not complete until they are applied,
using one of the following methods:
r Menu-From the File menu, choose Apply.
r Toolbar Icon-Click the Apply icon.
r Keyboard Shortcut-Press Ctrl+A.

❖ Caution: Rules are not "saved" in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.

5. To minimize the RuleBuilder window, from the RuleBuilder
File menu, choose Minimize (or Ctrl+M).


To insert a Rule 1. Make sure the insertion point is placed in the Rule Edit
Notebook option Workspace under the desired data model item name and under
the label for the mode for which you are adding a rule
expression.

2. Select the Rule Notebook tab from which you want to select
options.
3. Double-click the desired option or value. For example, to place
a keyword into the rule, click the Keyword tab, scroll through
the list, and double-click the desired keyword.
4. To add new entries for Arrays, Variables, MetaLinks and
Substitutions, click the appropriate tab, click in the entry box at
the top of the list, type in the desired value and press Enter.
The Rule Notebook will be refreshed to display the new entry.
5. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
r Menu-From the File menu, choose Apply.
r Toolbar Icon-Click the Apply icon.
r Keyboard Shortcut-Press Ctrl+A.


❖ Caution: Rules are not recorded in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.

6. To minimize the RuleBuilder window, from the RuleBuilder


File menu, choose Minimize (or Ctrl+M).

To insert literals
1. Use any of the following methods to insert a literal:
r Menu-From the Edit menu, choose Insert Literal.
r Toolbar Icon-Click the Literal icon.
r Keyboard Shortcut-Press Ctrl+L.
r Keyboard-“<Literal>” (enclose the text of the Literal
between the left and right quotation marks)
2. The insertion cursor will be placed between the quotation
marks. Type the text to be interpreted literally.
3. Apply your changes to the Rule Edit Workspace. Changes to
the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
r Menu-From the File menu, choose Apply.
r Toolbar Icon-Click the Apply icon.
r Keyboard Shortcut-Press Ctrl+A.


To insert comments into a data model

Comments can be inserted into a data model to describe the process
being modeled, to identify modifications to models, or to explain
rules. Comments can be placed on individual lines or immediately
following a rule.
1. Make sure the insertion point is placed in the Rule Edit
Workspace under the desired data model item name and under
the label for the mode for which you are adding a comment.
2. To insert a comment on its own line,
a. Place the cursor in the first position of an empty line.
b. Type a semicolon character (;). This indicates to
RuleBuilder that a comment will follow and that all text
appearing after the semicolon should be ignored.
c. Type the comment immediately after the semicolon. You
can enter any character into the comment except the
following special characters: {, }, @, *, and |.
d. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.

3. To insert a comment on the line following a rule,


a. Insert the rule according to the appropriate procedure.


b. When you reach the end of the rule, type a space followed
by the semicolon character (;). This indicates to
RuleBuilder that a comment will follow and that all text
appearing after the semicolon should be ignored.
c. Type the comment immediately after the semicolon. You
can enter any character into the comment except the
following special characters: {, }, @, *, and |.
d. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.

4. Apply your changes to the Rule Edit Workspace. Changes to


the Rule Edit Workspace are not complete until they are
applied, using one of the following methods:
r Menu-From the File menu, choose Apply.
r Toolbar Icon-Click the Apply icon.
r Keyboard Shortcut-Press Ctrl+A.

❖ Caution: Rules are not recorded in RuleBuilder until they are
applied. Rules, when applied, are updated to the Layout memory
area; however, rules are not permanently saved to disk until the
data model is saved.
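For example, a standalone comment and a trailing comment might look like the sketch below. The rule and the variable name VAR->HdrCount are hypothetical; only the semicolon comment syntax is prescribed.

```
; Updated 08/1999 - initialize the header counter (standalone comment)
[ ]
VAR->HdrCount = "1" ; trailing comment placed after a rule
```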

Using RuleBuilder

RuleBuilder Tabs
The RuleBuilder tabs allow you to easily add information to your set of rules.

[Figure: The RuleBuilder tabs and the rule notebook]

Conditions Tab
From the Conditions tab, you can select a conditional expression type for analyzing your data. RuleBuilder places the complete syntax of the expression in the Rule Edit Workspace.

The following conditions are available:

== Equals (compares values before executing the next instruction)
!= Not Equal
< Less Than
> Greater Than
<= Less Than or Equal
>= Greater Than or Equal
&& AND Condition
|| OR Condition

❖ Hint: These expressions can also be typed from the keyboard.

Ø To add conditions
1. In the Rule Edit Workspace, place the insertion point where the condition should go.
2. From the RuleBuilder tabs, choose the Conditions tab. The
available conditions will appear in the rule notebook.
3. From the rule notebook, double-click the type of conditional
expression to add. For example, if you double-click the Equals
option (to select and paste it), RuleBuilder places the expression
in the Rule Edit Workspace:
<operand> == <operand>
4. You can replace the <operand> prompts by typing the correct
operands over them. Press Ctrl+F or click the Next Parameter
icon to move from parameter to parameter while completing
the conditional expression. (The system automatically deletes
the prompts.)
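For example, once the operand prompts are completed, a conditional expression guards the actions that follow it (a sketch; the labels and values are illustrative):

```
DMI { AlphaNumericFld @5 .. 5 none
[VAR->Count >= 1 && DMI != "00000"]
VAR->Tmp = DMI
}*1 .. 1
```

Here the action is performed only when both comparisons are true.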

Data Model Items Tab
The Data Model (DM) Items tab provides a list of all the items in your data model for use in constructing expressions. This list can be sorted alphabetically by data model item name, or hierarchically by placement in the data model. The default sort option is "by Name."

Ø To change the sort option
Click the "by Hierarchy" radio button in the "Sort by" box at the top of the DM Items list box.

Ø To select a data model item on the DM Items tab
Use one of the following methods:
• Double-click the data model item label in the rule notebook.
• Select an item in the list and press the Spacebar to select and insert the text in the Rule Edit Workspace.
• Start typing the name of the item (the search is not case sensitive) and, once the item appears in the list, select it. Type with minimal delay between characters to pinpoint a specific item; too long a delay restarts the search.

Operators Tab
From the Operators tab, you select one of the following operators for use in the rules for manipulating data.

= Equals (to assign or move a value; operates from right to left)
+ Addition
- Subtraction
* Multiplication
/ Division
( Open Parenthesis (used with ')' to set precedence)
) Close Parenthesis (used with '(' to set precedence)
&& AND (only used in a conditional expression)
|| OR (only used in a conditional expression)
% Modulus (returns the remainder of a division calculation)
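As a sketch, several of these operators can be combined in one set of actions (the variable names are illustrative only):

```
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Total = (VAR->Price * VAR->Qty) + VAR->Freight
VAR->Remainder = VAR->Total % 10
}*1 .. 1
```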

Arrays Tab
The Arrays RuleBuilder tab displays the array names available for the chosen data model or allows you to add arrays to the data model. Refer to the "Variables" section earlier in this section for a discussion of Application Integrator arrays.

Ø To use the Array variable (ARRAY->) in an action


1. Click the Arrays tab.
2. Enter the array name in the Array Name box and press Enter.
If the value already exists in the Arrays list box, double-click
the appropriate Array name. The array is placed into the rule
at the insertion point.
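In a rule, the inserted Array variable appears with the ARRAY-> prefix. For example, assuming (as described in the "Variables" section) that each assignment to an Array adds a value to its list:

```
DMI { AlphaNumericFld @5 .. 5 none
[]
ARRAY->ItemCodes = DMI ; assumed: each occurrence adds a value to the list
}*1 .. 1
```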

Variables Tab
The Variables tab displays a list of the temporary variables available in this data model or allows you to add temporary variables to the rules. Refer to the "Variables" section earlier in this section for a discussion of Application Integrator temporary variables.

Ø To use a temporary Variable (VAR->) in an action


1. Click the Variables tab.
2. Enter the Variable name in the Temporary Name box and press
Enter. If the value already exists in the Variables list box,
double-click the appropriate Variable’s name. The Variable is
placed into the rule at the insertion point.
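In a rule, the inserted temporary Variable appears with the VAR-> prefix; because only the last assigned value is kept, a Variable works well as a counter (a sketch; the names are illustrative):

```
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Count = VAR->Count + 1
VAR->LastValue = DMI
}*1 .. 1
```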

MetaLinks Tab
The MetaLinks tab displays the MetaLinks available in this data model or is used to add MetaLinks to the data model. Refer to the "Variables" section earlier in this section for a discussion of Application Integrator MetaLinks.

Ø To use the MetaLink variable (M->L) in an action


1. Click the MetaLinks tab.
2. Enter the MetaLink name in the MetaLink Name box and press
Enter. If the value already exists in the MetaLink list box,
double-click the appropriate MetaLink name. The MetaLink is
placed into the rule at the insertion point.
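In a rule, the inserted MetaLink appears with the M-> prefix shown in the heading above (a sketch; the MetaLink name is illustrative only):

```
DMI { AlphaNumericFld @5 .. 5 none
[]
M->PartnerID = DMI
}*1 .. 1
```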

Substitutions Tab
The substitution variable is a single-value variable in which a label is replaced with a value. Each label is associated with a value entry box of the Trading Partner Profile dialog box (or other Application Integrator value entry box). During processing, the value entered in the box is returned when that label is used as a substitution. The dollar sign ($) precedes all substitution variable names.
The Substitutions tab displays the substitution labels available in this data model or is used to add substitution labels to the data model.

For a listing of the substitution labels used, refer to the appropriate standards implementation guide, such as the ASC X12 Standards Implementation Guide, the UN/EDIFACT Standards Implementation Guide, or the TRADACOMS Standards Implementation Guide.

Ø To use the substitution variable in an action


1. Click the Substitutions tab.
2. Enter the substitution label in the Substitution Name value
entry box and press Enter. If the value already exists in the
Substitution list box, double-click the appropriate Substitution
label.
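In a rule, a substitution label is referenced with the dollar-sign prefix. The label below is illustrative only; refer to the appropriate standards implementation guide for the actual labels:

```
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = $SENDER_ID ; hypothetical substitution label
}*1 .. 1
```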

Functions Tab
Predefined functions can be used in the rules for manipulating the data. Some of the operations performed by these functions include: cross-referencing, verifying against a code list, entering system date and time information, extracting a substring, checking the current error code value, outputting various log records, and resetting a MetaLink pointer. For a complete list of predefined functions, refer to Appendix B of this manual.
The Functions tab displays the functions available in this data model and is used to add functions to the data model rules.

Ø To use a function in an action


1. Click the Functions tab.
2. Double-click the appropriate function and it will be added to
the Rule Edit Workspace under the selected mode and the
selected data model item.
3. Complete the function by following the provided prompts. Use
the Find Next Parameter command of the Edit menu to
complete the function.
Functions that return a value can be used within other functions
for condition and action statements. Refer to Appendix B for a
complete description of the Application Integrator functions.
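For example, a function's return value can be assigned to a variable or passed to another function (a sketch using functions that appear elsewhere in this section):

```
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Sub = STRSUBS(VAR->Tmp, 2, 4)
VAR->Tmp = STRCAT(VAR->Sub, VAR->Tmp) ; a returned value used inside another action
}*1 .. 1
```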

Keywords Tab
Rule keywords provide a means to alter the natural processing flow within an item, among items in a data model, and within an environment.
The Keywords tab displays the keywords available in this data model and is used to add keywords to the data model rules.

Keyword Description
ATTACH Causes a new map component file to be opened,
changing the environment configuration for the
translation session.
BREAK The file pointer status remains unchanged. The
error status is reset to 0 and no additional actions
are processed. Occurrence validation is skipped.
The flow control proceeds to the next sibling
data model item.
CLEAR_VAL Clears all values from the MetaLink and Array
variable lists. Once cleared, RESET_VAL
keyword will not restore the value.
CONTINUE The file pointer status remains unchanged. The
error status is reset to 0 and no additional actions
are processed. Occurrence validation is checked.
The flow control depends on the occurrence
maximum — repeat if not greater than the
maximum or proceed to the next sibling data
model item.

EXEC Provides the ability within a data model to
execute a process outside of Application
Integrator. The executed process can be another
program, translation, or shell script.
EXIT The file pointer status remains unchanged. The
error status is as specified. Occurrence validation
is skipped. Flow control returns to the parent
environment, unless an error status of zero is
specified on the source side, in which case flow
control proceeds onto the target side.
EXPORT Provides the ability for a Variable (temporary),
MetaLink, or Array variable to exist beyond its
normal scope. A Variable, MetaLink, or Array
variable comes into existence when it’s first
declared by reference in a data model. It is then
available for reference in the current and all
children environments. Once the current
environment is exited, the variable no longer
exists. Using EXPORT, the variable can be
extended back to the parent environment.
REJECT The file pointer status is reset. The error status is
138-not found (source data model) or 139-no
value (target data model). Occurrence
validation is checked. Flow control depends on
the occurrence minimum — return back to the
parent data model item if not greater than the
minimum, or proceed to the next sibling data
model item.
RELEASE The file pointer status is reset. The error status is
reset to 0. Occurrence validation is skipped.
Flow control proceeds to the next sibling data
model item.
RESET_VAL Will reset the MetaLink and Array variable list
pointer to the top of the list. If CLEAR_VAL
keyword was used, RESET_VAL keyword will
not restore the value.

RETURN The file pointer status remains unchanged. The
error status is reset to 0. Occurrence validation
is skipped. Flow control returns back to the
parent data model item.
SET_ERR Provides the ability to set an error value to force
an error condition.

Ø To use a keyword in an action


1. Click the Keywords tab.
2. Double-click the appropriate keyword and it will be added to
the Rule Edit Workspace under the selected mode and the
selected data model item.
3. Certain keywords require additional values; complete the
keyword by following any provided prompts.
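As a sketch, a keyword is typically placed as an action guarded by a condition (illustrative; RETURN and ERRCODE() are described in this section and in Appendix B):

```
DMI { AlphaNumericFld @5 .. 5 none
[ERRCODE() != 0]
RETURN ; flow control returns to the parent data model item
}*1 .. 1
```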

Declarations Tab
A set of rules is defined and referenced by the DECLARE statement. One or more declarations are contained in an Include (.inc) file, which may be included in the current data model. Each set of rules appears as a selectable function on the Declarations tab. Selecting the function automatically inserts a PERFORM statement into the rules at the cursor position. There is no limit to the number of times a function can be selected.
The Declarations tab displays the declarations available in this data model and is used to add declarations to the data model.

Ø To use a declaration in an action


1. Click the Declarations tab.
2. Double-click the appropriate declaration label and it will be
added to the Rule Edit Workspace under the selected mode and
the selected data model item.
3. Certain declarations require additional values; complete the
declaration by following any provided prompts.
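Selecting a declaration inserts a PERFORM statement that names the declared rule set; a minimal sketch (mirroring the include-file examples shown later in this section) might look like:

```
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A)
}*1 .. 1
```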

Cutting, Copying, and Pasting Rules
Cut, Copy, and Paste Clipboard functions can be performed on rules on individual data model items for any of the modes: Present, Absent, or Error. Cut or Copy assigns the selected information to the Clipboard; only one mode at a time can be cut or copied to the Clipboard.
Paste places the information from the Clipboard at the location you specify in the data model rules.

To cut text from the Rule Edit Workspace
1. Highlight the text to cut in the Rule Edit Workspace.
2. Use any of these methods to cut the text:
• Menu: From the RuleBuilder Edit menu, choose Cut.
• Toolbar Icon: Click the Cut icon.
• Keyboard Shortcut: Press Ctrl+X.
The text is assigned to the Clipboard until something else is assigned which replaces it.

To copy text from the Rule Edit Workspace
1. Highlight the text to copy in the Rule Edit Workspace.
2. Use any of these methods to copy the text:
• Menu: From the Edit menu, choose Copy.
• Toolbar Icon: Click the Copy icon.
• Keyboard Shortcut: Press Ctrl+C.
The text is assigned to the Clipboard until something else is assigned which replaces it.

To paste text from the Clipboard into the Rule Edit Workspace
1. Move the insertion pointer to the place to paste in the Rule Edit Workspace.
2. Use any of these methods to paste the text:
• Menu: From the Edit menu, choose Paste.
• Toolbar Icon: Click the Paste icon.
• Keyboard Shortcut: Press Ctrl+V.
Until you make another copy or cut, this text remains on the Clipboard, allowing you to paste several copies of the current text.

Finding the Next Parameter
The system makes it easy for you to enter the parameters to functions, conditions, and keywords by prompting you for the next required parameter. Individual parameters of a parameter list can be selected by repeatedly choosing Find Next Parameter.

To find the next parameter
1. Place the cursor at the point where you want the system to begin the parameter search.
2. Use one of the following methods to issue the command:
• Menu: From the Edit menu, choose Find Next Parameter.
• Toolbar Icon: Click the Next Parameter icon.
• Keyboard Shortcut: Press Ctrl+F.
3. Complete the parameter as instructed by the prompts.

Checking the Syntax of Rules
Workbench provides a utility for checking the syntax of the rules during rule entry.

To check the syntax
1. Use any of these methods to call the rule-checking utility:
• Menu: From the Edit menu, choose Check Syntax.
• Toolbar Icon: Click the Check Syntax icon.
• Keyboard Shortcut: Press Ctrl+K.
2. If any errors are found, the Errors in Parse dialog box is displayed, showing the line numbers of the errors. You can select the Go to Error button or double-click the line in the list box to go directly to the line containing the error. If no errors are found, a message is noted on the status line.

❖ Hint for UNIX Users: It is also possible to print a data model definition, with or without rules, using the UNIX script OTmdl.sh. Refer to Appendix F, "Application Integrator Utilities," for a complete description of this program.

Syntax Error Checking
Syntax checking catches the first syntax error on each data model item. A second or subsequent error will not be listed in the Errors in Parse dialog box until the first is corrected.
The following types of errors are checked in the rules during syntax
checking or when applying the rules (using the Apply command):
1. Invalid constructed variable, for example,
VaR-> lower- vs. uppercase ‘A’
Array- missing ‘>’ and lowercase ‘rray’
M_L> missing ‘-’
2. Invalid (label) or undeclared data model item
• Checks spelling
• Checks character case (for example, 'a' vs. 'A')
3. Forgetting to define the condition before the action ([ ])
4. Incorrect number of parentheses ( ')' ) or quotation marks ( '"' )
• Checks for too many
• Checks for not enough

Workbench User’s Guide 141


Section 3. Building Rules into Data Models

5. Function expecting an identifier (variable or data model item) for the parameter, for example:
DM_READ("DM_X", "Y", 0, 1, $GET_GCOUNT(1)), where the GET_GCOUNT() function is not an identifier.

❖ Note: Errors are checked and listed in the sequence of the data model items in the Layout Editor window, not in the order of parsing the rules.

Parsing Syntax Checking
Workbench catches errors when it parses the model or map component file, and also checks for errors before it saves.

Command Line Syntax Checking — otrun.exe and inittrans
The following items are checked:
• All arguments are checked to verify they are valid defined codes or strings associated with the code. An example of a code with its associated string is "-at OTRecogn.att". Codes are case sensitive. Codes that expect an argument must contain an argument and not another code. If no argument is passed on the command line, a segmentation fault is returned.

Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -I
Invalid Examples:
otrun -at -cs dv -DINPUT_FILE=OTIn.Flt -I (missing
string for -at argument)
otrun -aa OTRecogn.att -cs dv -dINPUT_FILE -I (-aa
spelling and -d case sensitivity errors)
otrun.exe -at OTRecogn.att OTEnvelp.att -cs dv
-DINPUT_FILE=OTIn.Flt -I (two strings for -at)
• Requires that one of the following is an argument: -at (map component file), -s (source data model), or -t (target data model).

142 Workbench User’s Guide


Using RuleBuilder

• If the code does not require an argument, the presence of an argument is not checked.
• Checks for closing quotation marks when opening quotation marks are present. Also checks for the presence of spaces in a string when the string is not enclosed in quotation marks.

Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith' -I
Invalid Examples:
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA=Bob Smith -I
(There is a space between Bob and Smith; the string should be in quotation marks.)
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith -I
(Missing closing quotation mark.)

Translator and Workbench Syntax Checking, for the following types of files: .mdl, .att, .acc, .inc
• Checks for the proper use of characters, such as closing or balanced use of parentheses '()', brackets '[]', and braces '{}', single quotation marks (''), double quotation marks (""), and the use of commas (,) where required.

Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A VAR->B
*1 .. 1
(Missing ",", ")", and "}" characters.)
• Checks for the item type and that all components are present. For example, Definings require the following syntax: label, open brace, access item label, '@' sign, minimum, .., maximum, optional format, verify list ID, closing brace, *, minimum occurrence, .., maximum occurrence.

Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI { alphanumericfld @5 .. 5
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}1 .. 1
(Missing verify list ID and "*" for occurrence.)

• Checks for valid in-scope use of data model item labels.

Valid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
}*1 .. 1
Invalid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
}*1 .. 1
(DMI_B is being referenced out of scope – before it comes
into existence)
• Checks that data model item labels are not referenced in include files.

Valid Example:

DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A)
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1(&defining) {
[]
CLEAR_VAL ARRAY->Tmp
VAR->Tmp = DMI_INFO(&defining, &ARRAY->Tmp)
}
Invalid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1")
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1() {
[]
CLEAR_VAL ARRAY->Tmp
VAR->Tmp = DMI_INFO(&DMI_A, &ARRAY->Tmp)
}
(Attempted to reference a data model item label within an include file.)
• Reference to an undefined data model item label.

Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = Dmi
}*1 .. 1
(Dmi is not a defined data model item label.)

Rule Execution Syntax Checking
Workbench catches errors when rules are executed in the translator and at runtime.

Translator Only Syntax Checking
• Correct number of arguments for those functions that contain a fixed number of arguments.

Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B, VAR->C)
}*1 .. 1
(STRCAT() has only two arguments, not three.)

Translator Runtime Syntax Checking
• Proper use of the ampersand (&) when required and not required in a function.

Valid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(&DMI)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(DMI)
VAR->Tmp = STRCAT(&VAR->A, &VAR->B)
}*1 .. 1
(GET_FILEPOS() requires '&'; STRCAT() does not.)
• Consistent use of the ampersand (&) with arguments between the PERFORM() and its declaration in the include file.

Valid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A, VAR->Tmp)
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1(&defining, temporary) {
[]
VAR->Tmp = STRCAT(defining, temporary)
}
Invalid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", DMI_A, &VAR->Tmp)
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1(&defining, temporary) {
[]
VAR->Tmp = STRCAT(defining, temporary)
}
(The ampersand character is not used consistently between the PERFORM and the DECLARE.)
• Argument type is checked.

Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBS("ABCDEF", 2, 4)
}*1 .. 1
(The string in STRSUBS() cannot be a string literal.)

r A valid defined function is either an internal Application


Integrator function or a User Exit Extension function.

Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBSX(VAR->Tmp, 2, 4)
}*1 .. 1
(The function STRSUBSX() is not an Application Integrator function or a User Exit Extension function.)

Syntax Checking That Does Not Occur
The following are not verified during syntax checking:
• Labels are not checked for consistent use of upper- and lowercase letters throughout the data model.
• Reference to a variable's value before it was set with a value is not checked.

Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->Tmp, DMI)
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->TMP, DMI)
}*1 .. 1
(Does not catch that VAR->TMP was not previously assigned.)

• Assigning a value from a function that does not return a value is not checked.

Valid Example:
DMI {
[]
CLOSE_INPUT()
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = CLOSE_INPUT()
}*1 .. 1
(CLOSE_INPUT() does not return a value. The translator will attempt to obtain a value off the stack, which can cause a stack underflow error if no values are on the stack.)
• User entry (outside of Workbench) of the proper sequence of rule modes (PRESENT, ABSENT, then ERROR) is not checked.

Valid Example:
DMI {
[]
CLOSE_INPUT()
:ABSENT
[]
VAR->Error = ERRCODE()
:ERROR
[]
VAR->Error = ERRCODE()
}*1 .. 1
Invalid Example:
DMI {
[]
CLOSE_INPUT()
:ERROR
[]
VAR->Error = ERRCODE()
:ABSENT
[]
VAR->Error = ERRCODE()
}*1 .. 1
(ABSENT and ERROR are in the wrong sequence.)

• Correct number of arguments for those functions that contain a variable number of arguments is not checked. (A model can be verified for correct argument count using OTCheck.bat on Windows or OTCheck.sh on UNIX.)

Valid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B, VAR->C)
}*1 .. 1
(Using three arguments, but telling the function it is using only two.)

Function Checking
Function checking is performed on functions that expect data model item addresses. If a function is passed the wrong type of information, an error is reported at runtime. This can happen if the ampersand (&) character does not appear before the variable.
Valid Example:
DMI_INFO(&DMI, &ARRAY)

Invalid Example:
DMI_INFO(DMI, &ARRAY)
Error code 144 is returned in cases where the data model item has a
value assigned to it before the function is executed. If there is no
value assigned to the variable at runtime and the ampersand is
missing, the translator tries to evaluate the variable. It returns error
code 139 when it is unable to evaluate it.

Using MapBuilder

MapBuilder automates the process of mapping data between like source and target data models. In one step, MapBuilder creates the rules that both assign the source data model items to variables and then make the assignments from these variables to the target data model items. MapBuilder allows you to drag and drop data model item rules between source and target data models and, in doing so, automatically create the rules for both source and target data models.
MapBuilder is accessed from the Tools drop down menu. There are two options: MapBuilder and MapBuilder Preferences. The MapBuilder Preferences dialog box allows mapping options to be set.

A check mark will appear beside the drop down menu option to
indicate when MapBuilder mode is running. Also, the MapBuilder
icon on the toolbar will appear to be pressed in.

MapBuilder
MapBuilder allows you to drag and drop rules between data models using predefined settings for Variable Type, Variable Name, Link Type, Select Data Assignment Type, and Prompt with Loop Control Warning Message. The following table shows the predefined settings that are used when running MapBuilder.

Variable Type: Array
Variable Name: Both
Link Type: Tag-To-Defining and Defining-To-Defining
Select Data Assignment Type: Use DEFAULT_NULL() on Source EDI; Use STRTRIM() on Source non-EDI; Use NOT_NULL() on Target EDI

Refer to the "MapBuilder Preferences" section for a table that contains an explanation of these settings.

Accessing the MapBuilder Function
Ø To enable MapBuilder
1. Open the source and target data models that you wish to map.
2. Open MapBuilder using one of the following methods:
• Menu: From the Workbench main menu, choose Tools. Then select MapBuilder.
• Toolbar Icon: Click the MapBuilder icon.
• Keyboard Shortcut: With the Workbench main window selected, press Ctrl+L.

❖ Note: Refer to the "Loop Control" section for specific information about using the loop control function.

When you activate MapBuilder in Windows, the mouse pointer changes to a pointer overlaid with the MapBuilder symbol. The pointer does not change on UNIX operating systems.

❖ Note: First, map Defining data model items, then perform loop
control procedures. The loop control rules are inserted at the
beginning of PRESENT/ABSENT mode. These rules must be
executed before performing a data assignment, to maintain the
integrity of all mappings.

MapBuilder Preferences
MapBuilder Preferences allows you to customize the settings for Variable Type, Variable Name, Link Type, Select Data Assignment Type, and Prompt with Loop Control Warning Message. When MapBuilder Preferences is chosen from the Tools drop down menu, the MapBuilder Preferences dialog box appears.

During the building of the rules, you determine whether the variable type should be a Variable or an Array variable. For a complete discussion of these types of data structures and how Workbench uses them, refer to the "Variables" section earlier in this section.
You also have the option to establish the rules on either a source
Tag item or the individual source Defining items during the
mapping process (rules are always placed on Defining items on the
target side). Establishing the rules on the Tag item of the source
data model is the usual Application Integrator method, providing a
means to parse and check the complete tag before mapping
individual defining items to variables.

Accessing MapBuilder Preferences
The MapBuilder Preferences dialog box displays with default settings already set. These default settings should serve most of your mapping needs. If you change the default settings, they can be reset to their original values using the Reset button.

Ø To access the MapBuilder Preferences dialog box


1. Open the source and target data models that you wish to map.
2. From the Workbench main menu, choose Tools. Select the
MapBuilder Preferences option.
3. The MapBuilder Preferences dialog box appears. Using the
following table as a guide, enable the settings you wish to be in
effect when you are mapping.

Variable Type
  Array: A variable that is a list of values. Manual controls are recommended with this variable whenever multiple levels in the data model are mapped or with loop control.
  Variable: A single value; also referred to as a temporary Variable. If more than one assignment is made to the same variable name, the last assigned value is the value that will be referenced. A Variable is useful for referencing the same value multiple times, as a counter, or in a concatenation.
Variable Name
  Both: Indicates that the source and target labels will be concatenated to form the Variable Name.

  Source: Indicates that the source label will be used as the Variable Name.
  Target: Indicates that the target label will be used as the Variable Name.
Link Type
  Tag To Defining: Create rules on the parent Tag item of the defining items you are mapping. Per the suggested method for establishing rules using Application Integrator, establishing rules on the tag (parent) item lets the system parse and check the complete set of defining items (children) before applying the rules. When you select Tag To Defining, you drag and drop from source defining item to target defining item. The system places the rules on the source data model tag item that is the parent of the selected data model item. If no parent is found, an error message is generated and the action is canceled.
  Defining To Defining: Create rules on both the source and target defining items. When you select Defining To Defining, the rules are placed on the data model item itself in both the source and target data models.

Select Data Assignment Type
  Use DEFAULT_NULL() on Source EDI: EDI source data models will have the DEFAULT_NULL() function inserted as part of the created rules. Refer to Link Type.
  Use STRTRIM() on Source non-EDI: Non-EDI source data models will have the STRTRIM() function inserted as part of the created rules. Refer to Link Type.
  Use NOT_NULL() on Target EDI: EDI target data models will have the NOT_NULL() function inserted as part of the created rules. Refer to Link Type.
Enable Loop Control when mapping Defining to Defining
  Choose this check box to apply loop control to two Defining data model items (for example, if you have a Defining data model item that repeats). Normally, you would not apply loop control on two Defining items; however, should the need arise, this check box must be enabled to apply loop control to two Defining items.

Prompt with Loop Control warning message
    Choose this check box to display the Loop Control Needed dialog box
    during a MapBuilder loop control session. When the box is not checked,
    MapBuilder applies loop control automatically without displaying the
    Loop Control Needed dialog box.

Store
    Choose the Store button to save the changes made to the MapBuilder
    Preferences dialog box.

Close
    Choose the Close button to exit the MapBuilder Preferences dialog box.

Retrieve Stored
    Choose the Retrieve Stored button to restore the last settings saved
    with the Store button.

Reset
    Choose the Reset button to restore the original default settings.

Mapping Data

In most cases, rules are placed in PRESENT mode with a null condition (that
is, the actions are always performed). In the source data model, MapBuilder
creates a rule that assigns a data model item's value to a variable. In the
target data model, MapBuilder creates a rule that references the variable for
its value and assigns it to the data model item. Here are some hints to help
your modeling session:
• Map Defining items first, then continue with Group, Tag, and Container
  items. This is because the loop control feature places rules at the
  beginning of PRESENT and ABSENT mode, and the loop control rules must be
  processed before any data assignments are made.


• MapBuilder places messages on the status bars of both the Source Layout
  dialog box and the Target Layout dialog box. These messages keep you
  informed of MapBuilder's processing and whether the mapping was
  successful.
• You can view the rules created by MapBuilder by clicking the RuleBuilder
  icon. This opens RuleBuilder and displays the data model items you mapped
  so you can identify the MapBuilder and loop control rules.
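The two-phase variable hand-off described above can be sketched in Python (a simplified illustration with hypothetical item names; the translator itself uses its own rule language, not Python):

```python
# Sketch of MapBuilder's mapping pattern: the source phase assigns each
# item's value to a named variable, and the target phase references that
# variable. Variable names follow the "<source item>_<target item>" form.
def map_items(source_values, links):
    """source_values: {source_item: value}; links: [(source_item, target_item)]."""
    variables = {}
    # Source model PRESENT rules: assign item values to variables.
    for src, tgt in links:
        variables[f"{src}_{tgt}"] = source_values[src]
    # Target model PRESENT rules: reference the variables for their values.
    target_values = {}
    for src, tgt in links:
        target_values[tgt] = variables[f"{src}_{tgt}"]
    return variables, target_values

variables, target = map_items(
    {"PhoneNumber": "555-0100"},
    [("PhoneNumber", "PhoneNumber")],
)
print(variables)  # {'PhoneNumber_PhoneNumber': '555-0100'}
print(target)     # {'PhoneNumber': '555-0100'}
```

This mirrors the PhoneNumber example shown later in this section, where the source rule fills VAR->PhoneNumber_PhoneNumber and the target rule reads it back.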

Ø To map data between source and target models

❖ Note: MapBuilder will operate in either direction, source to target or
target to source.

1. From the Tools menu, choose MapBuilder.

2. To create rules using MapBuilder, select the desired data model item in
the source data model by holding down the left mouse button, and drag the
data model item to the desired target data model item. Place the pointer on
the target data model item's label or name and release the left mouse button.

❖ Note: To use the drag and drop feature, click the data model item to
highlight it. Move the mouse pointer to the new location and release the
mouse button.

❖ Caution: Be sure the item to be dragged and dropped is highlighted or has
focus. If not, the item previously highlighted will be used.

Notice that when the source data model item is dragged to the target data
model item, the defining item name and mouse pointer change to a bull's-eye
in UNIX, and remain the unique mouse pointer overlaid with the MapBuilder
symbol in Windows.


3. The Variable Name is automatically created in the MapBuilder window by
concatenating the source data model item name and the target data model item
name, separated by an underscore (_).

❖ Note: You can also drag and drop from the target to the source defining
items; however, MapBuilder always creates the rules from source to target,
maintaining the name of the variable as "<source defining item name>_<target
defining item name>."

4. As data model items are mapped using MapBuilder, they appear in the
RuleBuilder Workspace under the PRESENT mode for the selected Defining items
or Tag item on the source data model and the selected Defining items on the
target data model. The following figure shows a RuleBuilder example for the
source data model:

The figure below shows a RuleBuilder example for the target data model, where
the data model item PhoneNumber is assigned from the variable
VAR->PhoneNumber_PhoneNumber.


In the previous examples, notice that in the source model the value in
PhoneNumber is assigned to a temporary variable, and that in the target model
the value of the temporary variable is referenced and assigned to
PhoneNumber. For more information on the Rule Edit Workspace, refer to the
"Using RuleBuilder" section earlier in this section.

❖ Note: MapBuilder processing messages appear in the Status Bar of the data
model window.

5. After you have completed mapping between the source and target data
models, close MapBuilder by using one of the following methods:
• Menu: From the Transaction Modeler Workbench main menu, choose Tools.
  Then select MapBuilder.
• Toolbar Icon: Click the MapBuilder icon again.

6. Save the data model.

❖ Note: The MapBuilder rules are immediately applied to the layout. There is
no need to enter the RuleBuilder Editor and apply the rules. However, rules
are not permanently saved to disk until the data model is saved.


Loop Control

The loop control feature provides the code to create processing loops when
one of the data model items is a Group, Tag, or Container. Loop control
ensures that detail records are kept together with the proper header record,
and that subdetail records are kept together with the appropriate detail
records.

Loop control automates the process of mapping complex data structures that
repeat. It automatically adds PRESENT mode rules, ABSENT mode rules, or Group
data model items to both the source and target layouts. These rules or Group
items contain Array variable assignments. Control of the array variable
occurs automatically during the MapBuilder loop control process.

Normally you would not apply loop control from Defining to Defining. However,
in the rare case when this is necessary, you would first enable the "Enable
Loop Control when mapping Defining to Defining" option on the MapBuilder
Preferences dialog box.

Here are some points to remember when applying loop control:
• Loop control is needed on items where the maximum occurrence is greater
  than the minimum.
• When indicating the target occurrences for loop control, be sure the
  maximum is 1 greater than the maximum intended; the last pass through the
  loop enters the loop control rules to break out of the looping process.
  Loop control automatically checks for and corrects these situations.
• The Undo function is not operable when the Loop Control Link Type is
  enabled. Refer to the "Undoing Loop Control" section for more information
  about correcting errors.

Ø To add loop control to your data model


1. Open the source and target data model Layout Editors.
Arrange the Editors to be side-by-side on the screen.
2. From the Tools menu, choose MapBuilder.
3. Select the desired Tag or Group data model item from the
source/target model by holding down the left mouse button
and dragging the data model item to the desired Tag or Group
target/source data model item. Release the left mouse button.
4. Save the change or cancel.


Troubleshooting

There are several warning or error messages that can appear when mapping
using loop control.

Illegal map messages appear in the status bar of the MapBuilder dialog box
during the mapping session. When such a message appears, the rules are not
updated. For example, if you try to map a source item to a source item, the
following message appears in the status bar: "Illegal map. Source equals
target."

You cannot map or apply loop control to the topmost Group item because it is
a parent to all other Group items and their children. If the source item does
not have a parent, the following error message appears.

If you attempt to perform loop control on the same items more than once, the
following error message appears.


Loop Control Needed

The Loop Control Needed dialog box appears when MapBuilder finds items that
do not have loop control applied and the "Prompt with Loop Control warning
message" check box is selected. When you drag and drop an item onto another,
MapBuilder checks all the items appearing above it; if it finds an item whose
maximum is greater than its minimum and that is not the topmost item, it
displays the Loop Control Needed dialog box.

You can drag and drop loop control on any type of item: Group, Tag,
Container, or Defining. However, you must have the "Enable Loop Control when
mapping Defining to Defining" check box selected to enable loop control on
Defining items.

In this example, the two looping items were caught because they had maximum
iterations greater than 1 and they were not the topmost item. If mapping were
performed on an item appearing higher on the map, for example, LinePONo to
LinePONo, only SrcLineLoop and TLineLoop would appear in the Loop Control
Needed dialog box, because the mapping occurred higher in the map. Remember,
the system checks for items needing loop control above the items on which
mapping occurred. Those items appear in the Loop Control Needed dialog box.
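The ancestor check described above can be sketched as a small Python function (a simplified model with invented item records; MapBuilder's actual traversal is internal to the product):

```python
# Each item records its parent and its (min, max) occurrences. Walking up
# from the mapped item, any ancestor that is not the topmost item and whose
# maximum occurrence exceeds its minimum still needs loop control.
def items_needing_loop_control(item, parents, occurs):
    """parents: {item: parent or None}; occurs: {item: (min, max)}."""
    needed = []
    current = parents.get(item)
    while current is not None:
        lo, hi = occurs[current]
        is_topmost = parents.get(current) is None
        if hi > lo and not is_topmost:
            needed.append(current)
        current = parents.get(current)
    return needed

parents = {"LineQty": "SrcLineLoop", "SrcLineLoop": "Document", "Document": None}
occurs = {"SrcLineLoop": (1, 100), "Document": (1, 1)}
print(items_needing_loop_control("LineQty", parents, occurs))  # ['SrcLineLoop']
```

Here SrcLineLoop (occurring 1 to 100 times) is flagged, while the topmost Document item is skipped, matching the behavior the manual describes.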

Ø To apply loop control


1. In the Loop Control Needed dialog box, highlight the item to
which you intend to apply loop control.
2. Choose the Go to Item button. The source and target maps will
appear with the layout items highlighted.
3. Apply loop control as indicated in the “Loop Control” section.


Undoing Loop Control

Loop control clears the Undo list, and Undo does not function in loop
control. If you apply loop control to the wrong item, you must remove the
loop control code manually. This section shows the actual code for loop
control in the event you must remove it from your map.

Source Loop Control Rules

The following code contains the actual loop control rules that are inserted
into the source model when loop control is applied. This example represents a
source Group item being dragged and dropped onto a target Group item. Should
loop control rules be applied in error, the code (shown in the example as
bold text) would have to be located in the rules and deleted manually.

The assignment of 'Data' is placed in the PRESENT rules of the item selected.
The assignment of 'Stop' is placed in the PRESENT rules of the parent of the
item selected.


Document {
    HeadRec { LineFeedDelimRecord "H"
        HeadDate { DateFld @6 .. 6 "YYMMDD" none }*1 .. 1
        HeadDocNo { AlphaNumericFld @1 .. 10 none }*1 .. 1
        []
        ARRAY->HeadDate = HeadDate
        ARRAY->HeadDocNo = HeadDocNo
    }*1 .. 1
    SrcLineLoop {
        LineRec { LineFeedDelimRecord "L"
            LinePart { AlphaNumericFld @10 .. 10 none }*1 .. 1
            LinePONo { AlphaNumericFld @10 .. 10 none }*1 .. 1
            LineQty { NumericFld @05 .. 05 "99999" none }*1 .. 1
            []
            ARRAY->LinePart = LinePart
            ARRAY->LinePONo = LinePONo
            ARRAY->LineQty = LineQty
        }*1 .. 1
        SrcSubLineLoop {
            SubLineRec { LineFeedDelimRecord "S"
                SubLineQty { NumericFld @05 .. 05 "99999" none }*1 .. 1
                SubLineStore { AlphaNumericFld @1 .. 10 none }*1 .. 1
                []
                ARRAY->SubLineQty = SubLineQty
                ARRAY->SubLineStore = SubLineStore
            }*1 .. 1
            []
            ARRAY->LoopCtl_SrcSubLineLoop_TSubLineLoop = "Data"
        }*0 .. 100
        []
        ARRAY->LoopCtl_SrcSubLineLoop_TSubLineLoop = "Stop"
        []
        ARRAY->LoopCtl_SrcLineLoop_TLineLoop = "Data"
    }*1 .. 100
    []
    ARRAY->LoopCtl_SrcLineLoop_TLineLoop = "Stop"
}*1 .. 100


Target Loop Control Rules

The following code contains the actual loop control rules that are inserted
into the target model when loop control is applied. Should loop control rules
be applied in error, the code (shown in the example as bold text) would have
to be located in the rules and deleted manually.

In this code, a child Group item is created within the target item selected
for loop control, to test for 'Stop'. The label of the created item is the
selected target item's label appended with 'LoopCtrl'. An ABSENT rule is
added to the target item selected, to test for 'Stop'.

❖ Note: If the target item selected is a Defining item (field, element), then
no child Group item is created. Instead, the rules that would have been
placed in this Group item are placed in the PRESENT rules of the Defining
item, and the subsequent ABSENT rule is not required or applied.


Document {
    HeadRec { LineFeedDelimRecord "H"
        HeadDate { DateFld @6 .. 6 "YYMMDD" none [] HeadDate = ARRAY->HeadDate }*1 .. 1
        HeadDocNo { AlphaNumericFld @1 .. 10 none [] HeadDocNo = ARRAY->HeadDocNo }*1 .. 1
    }*1 .. 1
    TLineLoop {
        TLineLoopLoopCtrl {
            []
            VAR->LoopCtl_SrcLineLoop_TLineLoop = "Stop"
            VAR->LoopCtl_SrcLineLoop_TLineLoop = ARRAY->LoopCtl_SrcLineLoop_TLineLoop
            [VAR->LoopCtl_SrcLineLoop_TLineLoop == "Stop"]
            REJECT
        }*1 .. 1
        LineRec { LineFeedDelimRecord "L"
            LineQty { NumericFld @05 .. 05 "99999" none [] LineQty = ARRAY->LineQty }*1 .. 1
            LinePONo { AlphaNumericFld @10 .. 10 none [] LinePONo = ARRAY->LinePONo }*1 .. 1
            LinePart { AlphaNumericFld @10 .. 10 none [] LinePart = ARRAY->LinePart }*1 .. 1
        }*1 .. 1
        TSubLineLoop {
            TSubLineLoopLoopCtrl {
                []
                VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop = "Stop"
                VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop = ARRAY->LoopCtl_SrcSubLineLoop_TSubLineLoop
                [VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop == "Stop"]
                REJECT
            }*1 .. 1
            SubLineRec { LineFeedDelimRecord "S"
                SubLineStore { AlphaNumericFld @10 .. 10 none
                    [] SubLineStore = ARRAY->SubLineStore
                }*1 .. 1
                SubLineQty { NumericFld @05 .. 05 "99999" none
                    [] SubLineQty = ARRAY->SubLineQty
                }*1 .. 1
            }*1 .. 1
            :ABSENT                 ; ABSENT rule tests for "Stop"
            [VAR->LoopCtl_SrcSubLineLoop_TSubLineLoop == "Stop"]
            REJECT
        }*0 .. 100 ; |-- end TSubLineLoop --|
        :ABSENT
        [VAR->LoopCtl_SrcLineLoop_TLineLoop == "Stop"]
        REJECT
    }*1 .. 100 ; |-- end TLineLoop --|
}*1 .. 100 ; |-- end Document --|
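The LoopCtl flags in the source and target rules above implement a simple sentinel protocol: the source emits one "Data" flag per loop iteration and a final "Stop", and the target consumes rows until it sees "Stop", at which point it REJECTs out of the loop. A rough Python analogy (invented record values; this is not the translator's actual runtime):

```python
# Source side: emit one LoopCtl flag per line-loop iteration, then "Stop".
def source_phase(lines):
    array = []
    for line in lines:
        array.append({"LinePart": line, "LoopCtl_SrcLineLoop_TLineLoop": "Data"})
    array.append({"LoopCtl_SrcLineLoop_TLineLoop": "Stop"})
    return array

# Target side: consume rows until the flag reads "Stop".
def target_phase(array):
    output = []
    for row in array:
        if row["LoopCtl_SrcLineLoop_TLineLoop"] == "Stop":
            break  # corresponds to REJECT in the TLineLoopLoopCtrl group
        output.append(row["LinePart"])
    return output

print(target_phase(source_phase(["PART-A", "PART-B"])))  # ['PART-A', 'PART-B']
```

This also shows why the target's maximum occurrence must be one greater than intended: the final pass exists only to read the "Stop" sentinel and break out.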



Section 4
Creating Environments

This section describes how to define an environment. An environment defines
all the specifications that must be brought together to configure the
translator for processing. This is done by creating a new map component file,
either from scratch or based on an existing or standard map component file.

This section also describes how to change environments during a translation
by inserting a call to a second environment from within the data model. A
description of the additional access methods for specifying input and output
files appears at the end of this section.

Once you have completely defined the source and target data models and
environments (map component files), you are ready to translate. Procedures
for performing a translation, viewing a trace log, and debugging data models
are found in Section 6 of this manual.


Understanding Environments

An environment consists of components that control what data will be
translated, such as the input/output files, the source and target data
models, and the access models to be used.

In a Transaction Modeler Workbench application, an environment is referred to
as a map component file, which is attached to the translator. The name comes
from attaching another environment definition (using the data model keyword
ATTACH) to reconfigure the translator during processing. Environment files
are given the suffix ".att" (for example, OTRecogn.att and OTEnvelp.att,
which are two of the standard files used to de-envelope and envelop data).

Function of Environments

Several examples of the use of translation environments (map component files)
are:
• User-defined ASC X12 mappings
• User-defined UN/EDIFACT mappings
• Processing fixed length data
• Processing variable length data
• Bypassing data
• Generating acknowledgments
• Recognition of data
• Enveloping of data
• Committing output streams

A map component file must specify at least a source or a target definition,
or it can specify both. The data model structure and rules define the
processing to be performed within the environment, as the following
illustration shows:
[Illustration: An Environment contains the source data model declarations and
rules, which map to variables, and the target data model declarations and
rules, which map from variables. Input data is parsed through the source
access model; output data is constructed through the target access model.]

The data structure may contain only group items, with no input or
output occurring, or just rule processing logic. Information
obtained in the parent and grandparent environments can be
referenced in child environments.


Environment Sequence of Parsing

The following list describes the parsing sequence:

1. The environment file (for example, OTRecogn.att) is opened and read in.

2. The following may be set by the environment (the map component file). You
must use the value entry boxes provided in the Map Component Editor dialog
box:

   input file (INPUT_FILE)
   output file (OUTPUT_FILE)
   source access model (S_ACCESS)
   source data model (S_MODEL)
   target access model (T_ACCESS)
   target data model (T_MODEL)

   You must specify the following in the Other Environment Variables area of
   the Map Component Editor dialog box:

   trace level (TRACE_LEVEL)
   find match limit (FINDMATCH_LIMIT)
   substitution key prefix (HIERARCHY_KEY)
   cross-reference key prefix (XREF_KEY)
   code list verification key prefix (LOOKUP_KEY)

   All other (user-defined) environment variables are set when they are
   referenced in:
   • The definition of other keywords or user-defined variables. For example:
     SESSION_NO="$$"
     OUTPUT_FILE="(SESSION_NO).tmp"
   • Data model rules. For example:
     VAR->SessionNo=GET_EVAR("SESSION_NO")

3. The input file (INPUT_FILE) is opened and read into memory.

4. The output file (OUTPUT_FILE) is created or opened in append mode. (To
open in append mode, a plus sign (+) is added to the end of the filename when
it is entered.)

5. The source access model (S_ACCESS) is opened and parsed.


6. The source data model (S_MODEL) is opened and parsed. This includes:
   • Data model syntax checking
   • Verifying references to source access model items
   • Declarations of first-time references to Temporary Variables, Arrays,
     and MetaLink Variables

7. Source mode processing occurs.

8. The target access model (T_ACCESS) is opened and parsed.

9. The target data model (T_MODEL) is opened and parsed. This includes:
   • Data model syntax checking
   • Verifying references to target access model items
   • Declarations of first-time references to Temporary Variables, Arrays,
     and MetaLink Variables

10. Target mode processing occurs.
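Two conventions from the sequence above, the "(NAME)" references inside environment variable values and the trailing "+" that requests append mode, can be illustrated with a small Python sketch (a loose approximation written for this guide, not the translator's actual parser):

```python
import re

def expand(value, env):
    # Replace each "(NAME)" with the variable's value. Per the manual, a
    # parenthesized string that is not a defined environment variable is
    # ignored (dropped here for simplicity).
    return re.sub(r"\(([A-Za-z0-9_]+)\)",
                  lambda m: env.get(m.group(1), ""), value)

def open_mode(filename):
    # A trailing "+" on the entered filename means "open in append mode".
    if filename.endswith("+"):
        return filename[:-1], "a"
    return filename, "w"

env = {"SESSION_NO": "12345"}
print(expand("(SESSION_NO).tmp", env))  # 12345.tmp
print(open_mode("audit.log+"))          # ('audit.log', 'a')
```

So with SESSION_NO set to a session number, OUTPUT_FILE="(SESSION_NO).tmp" yields a per-session temporary file, and entering the name with a trailing plus sign opens it for appending instead of overwriting.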


Processing Flow within the Model

Within a source or target data model, processing flows down the hierarchy
from parent to child (starting with the first child encountered) and then
back to the parent, as the following illustration shows:

Data Model Structure           X12 Example      Processing Order
Group (Parent)                 Initialization   (8)
  Group (Parent/Child)         Document 1       (7)
    Tag 1 (Parent/Child)       BIG              (3)
      Defining (Child)         BIG_01           (1)
      Defining (Child)         BIG_02           (2)
    Tag 2 (Parent/Child)       N1               (6)
      Defining (Child)         N1_01            (4)
      Defining (Child)         N1_02            (5)

In each case, the current status is returned from the child to the parent
data model item. The process moves down the data structure from child to
child; once the children are read, processing returns to the parent item,
then proceeds to the next parent item.

Single Environment Process Flow

In a single environment process, once the map component file input and output
streams are read or opened, environment processing begins with the source
data model. Once source processing is completed, target data model processing
occurs. Once target processing is completed, the environment ceases to exist.

[Illustration: An Environment Layer containing the Source Model followed by
the Target Model.]


Multiple Environments

Multiple environments are typically brought together to complete a
translation session. During a translation session, the environment is changed
through the use of different map component files. These environments (map
component files) are called from within the data model by means of the data
model keyword ATTACH.

Multiple environments allow for:
• Use of generic models and modular modeling.
  − Includes enveloping, de-enveloping, bypassing errors, and generating
    acknowledgments
  − Once written, eliminates rewriting, testing, and debugging among
    multiple translations
  − Allows one-time modifications for multiple common translations
• The ability to dynamically reconfigure the translator to parse the input
  stream (eliminating the need for a preparsing program).
  − Once identified, X12 utilizes the X12 syntax models
  − Once identified, UN/EDIFACT utilizes the UN/EDIFACT syntax models, and
    so forth through the various standards
• The ability to dynamically reconfigure the translator to construct the
  output stream.
  − Determines the recipient from a batch of application documents
  − Determines the appropriate target data model for the standard, and the
    implementation within the standard
• The ability to parse the input stream once and construct multiple output
  streams.

If a file contains multiple documents to be processed:
• Using one map component file, the source processing would have to parse in
  all documents before switching over to the target. The target would then
  output all of the translated documents from memory.
• Using multiple map component files, the parent environment could
  optionally repeat the child environment for each document processed. The
  child environment would then read one document and output one document per
  pass through the environment.


The following illustration shows the use of multiple map component files in
the processing flow for an X12 application. The data model keyword ATTACH
calls a second environment, which in turn calls another environment. In each
case, the current status is returned from the child environment to the parent
environment, just like child-to-parent data model items within a data model.

[Illustration:
Recognition Environment (Source Model only)
  ATTACH 1 → X12 De-Enveloping Environment (Source Model only)
    ATTACH → X12 Message Processing Environment (Source Model and Target Model)
  ATTACH 2 → X12 Acknowledgment Environment (Target Model only)]

Refer to the "Changing Environments During a Translation" section later in
this section for details on using ATTACH to call a second environment.
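The child-to-parent status return between nested environments can be sketched like this (invented environment names and a toy status code; the real translator drives this from the ATTACH keyword in the data model rules):

```python
# Each environment runs, may ATTACH child environments, and returns a status
# code (0 = success) to its parent, mirroring the child-to-parent status
# return between data model items.
def run_environment(name, children, fail=frozenset()):
    for child_name, grandchildren in children:
        status = run_environment(child_name, grandchildren, fail)
        if status != 0:
            return status  # propagate the child's error up to the parent
    return 1 if name in fail else 0

chain = [("X12 De-Enveloping", [("X12 Message Processing", [])]),
         ("X12 Acknowledgment", [])]
print(run_environment("Recognition", chain))  # 0 (all environments succeed)
print(run_environment("Recognition", chain,
                      fail={"X12 Message Processing"}))  # 1 (error propagates)
```

In the product itself, the parent captures this status with ERRCODE() immediately after the ATTACH statement, as shown later in this section.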


Defining a Map Component File

A key step in mapping data and preparing for translation using Application
Integrator is to create an environment by defining and saving a map component
file. As described earlier, an environment consists of components that
control how the data will be translated, such as the input/output files and
the models to be used. In a Transaction Modeler Workbench application, an
environment is referred to as a "map component file," and the environment
definition is "attached" to the translator.

Recommended Naming Convention

When naming map component files, keep the following considerations in mind:
• Use ".att" for the suffix.
• Use no more than 8 characters in the base map component filename.
• Do not use the prefix "OT," since it will conflict with names already
  assigned in the Application Integrator application. The prefix "OT" is
  reserved for Application Integrator application files; using it can
  compromise the software's performance.
• Use upper- and lowercase letters and the underscore "_" only.
• Do not use spaces.

❖ Note: When implementing public standard EDI messages, the Application
Integrator generic processing method is normally invoked. The generic
processing method appends the suffix ".att" to the base filenames before
attaching the map component file to the translator. Although the translator
does not require the ".att" file extension, the generic method does, so we
recommend using the extension ".att" in all map component filenames.
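The considerations above can be expressed as a small validation helper (a hypothetical utility written for illustration; the product itself does not ship such a function):

```python
import re

def valid_map_component_name(filename):
    """Check a map component filename against the recommended convention."""
    if " " in filename:
        return False                 # no spaces
    if not filename.endswith(".att"):
        return False                 # use ".att" for the suffix
    base = filename[:-len(".att")]
    if len(base) > 8:
        return False                 # at most 8 characters in the base name
    if base.startswith("OT"):
        return False                 # "OT" is reserved for product files
    # upper- and lowercase letters and underscore only
    return re.fullmatch(r"[A-Za-z_]+", base) is not None

print(valid_map_component_name("MyEnv.att"))     # True
print(valid_map_component_name("OTRecogn.att"))  # False (reserved prefix)
print(valid_map_component_name("my env.att"))    # False (contains a space)
```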


Defining a New Map Component File

The Map Component Editor dialog box is used to define and modify map
component files.

The colors appearing in the value entry boxes indicate whether data is
mandatory or optional. When a new Map Component Editor dialog box appears,
the Source and Target value entry boxes are blue for mandatory or white for
optional. (The map component file must have at least the source or target
model defined.) Once one model is defined, its paired Access value entry box
turns blue and the other model's box turns white. Whenever one of the two
(Model or Access) value entry boxes contains a string, the other turns blue.
Both value entry boxes in the Source and Target areas are white when the
other set contains strings.

The model drop-down list boxes are populated with *S.mdl and *T.mdl files,
depending on whether the source or the target model drop-down list is being
accessed. The value entry box is editable, so you can type the preferred
model name.

It is recommended that you save your map component files and data models to
the working directory.

Ø To define a map component file

1. From the Workbench File menu, choose New.

2. From the New menu, choose Map Component. The Map Component Editor dialog
box displays.

3. Type the appropriate values for the map component file in the Data
section.

❖ Note: Do not type double quotation marks (" ") around text in the Map
Component Editor dialog box. The system automatically places the quotation
marks around the necessary text within the map component file.

• Input: Type the name of the input stream to be used by this map component
  file.
• Output: Type the name of the output stream to be used by this map
  component file.

❖ Note: Refer to the "Using Extended Access Device Types" section at the end
of this section for information on additional methods for accessing input or
output data.

4. Enter the appropriate values in the Source section.

• Model: Enter either the explicit source data model name or an environment
  variable name.
• Access: Select either the explicit source access model name or the
  description from the Access drop-down list box.

To enter an exact model name, type, or select from the list, the name of the
data model to be used by this map component file. (Both source and target
data models display in the list.) Be sure to add the extension ".mdl" for the
data model.

To reference a model name by an environment variable, enclose the variable
name in parentheses. Multiple variables, or a combination of string and
variable names, can be entered on the line. Add the extension if it is not
specified in the variable reference; the system automatically concatenates
the name.


5. Enter values in the Target section.

• Model: Enter either the explicit target data model name or reference an
  environment variable.
• Access: Select either the explicit target access model name or the
  description from the Access drop-down list box.

To enter an exact model name, type, or select from the list, the name of the
access or data model to be used by this map component file. (Both source and
target data models display in the list.) Be sure to add the extension ".acc"
for the access model or ".mdl" for the data model.

To reference a model name by an environment variable, enclose the variable
name in parentheses. Multiple variables, or a combination of string and
variable names, can be entered on the line. Add the extensions if they are
not specified in the variable reference; the system automatically
concatenates the name.

6. In the Other Environment Variables section, type additional information to
use in translating, if needed (for example, user-defined variables such as
"SESSION_NO=$$").

• Name: Type the variable name.
• Value: Type the value assigned to the variable.

❖ Note: Assigning a value to an environment variable that contains
parentheses will cause the string within the parentheses to be ignored if it
is not an environment variable.

There is no limit to the number of environment variables that can be used.
Refer to the "Invoking the Translation Process" section in Section 6 for more
details on environment variables specified during translation.

7. Choose the Add button to accept the entered values;
– or –
Choose the Delete button to delete the highlighted entries.


8. Complete the entry.

a. To save the map component file for the first time or under a new name,
choose the Save button. Save writes the map component file to disk, then
minimizes the Map Component Editor dialog box onto the main window. If the
Layout Editors are not open, they will be opened. If this is a new map
component file, Save automatically performs a Save As and prompts for a new
filename.

❖ Note: Leaving the Save As dialog box open allows you to change and save the
map component file again.

b. To exit the Map Component Editor dialog box without saving, choose the
Cancel button. If this is not a new map component file, Cancel reloads the
previous map component file, then minimizes it on the main dialog box.


Modifying an Existing Map Component File

An existing map component file can be modified or used as a template instead
of creating a new map component file. To modify an existing map component
file, select and open a map component file that is similar to the one to be
created. To edit a map component file, follow the instructions below and save
the file under the same name.

Ø To create a map component file from an existing map component file

1. From the Workbench File menu, choose Open.

2. Select the map component file you want to modify and choose the OK or Open
button, depending on your operating system. A Layout Editor window will open
for each data model defined by the map component file.

3. From each Layout Editor File menu, choose Minimize Editor. This minimizes
the Layout Editors to icons in the Workbench work area, as shown on the
following page.

❖ Hint: You may need to scroll upward in the Workbench window to see these
minimized model and map component file icons.


4. From the Workbench window, select the Map Component File icon of the map
component file to be opened (restored).

❖ Hint: You must have both the icon and its text highlighted to work with the
map component file.

5. Click the right mouse button to display a menu. From this menu, choose
Restore. The selected Map Component Editor dialog box displays, as shown
below.

6. Make the changes, using the same techniques described in the previous
section, "Defining a New Map Component File."


7. Either choose Save from the File menu to save your changes;
– or –
Choose Save As from the same menu to save the map component file under a new
name. In the dialog box that appears (as shown below for UNIX systems), type
a new name for the map component file. Choose the OK button (UNIX, Windows NT
3.51) or the Save button (Windows 95 and NT 4.0) to save to the new name and
close the dialog box.

When using the Save As command to modify an existing map component file, the
new map component file you have created will be opened.

❖ Note: Leaving the Save As dialog box open allows you to change and save the
map component file again.

8. To minimize the Map Component Editor dialog box, from the File menu,
choose Minimize.


Changing Environments During a Translation

For a new environment to be introduced into the translation session, the keyword ATTACH must be encountered in the data model. ATTACH requires one argument, the map component file.

An environment can be attached in the rules for either the source or target data model, using either a specific name or a variable, allowing for the substitution of map component files.
[ ]
ATTACH “OTX12Env.att” (specific file)

[ ]
ATTACH VAR->map_component_filename (variable name)
When ATTACH is encountered during a translation session, the
current environment’s processing stops. The map component file
associated with the ATTACH statement is opened and processing
begins. Processing continues in this environment until processing
completes successfully, an error is returned, or the data model
keyword ATTACH is encountered again.
Processing returns to the parent environment immediately
following the data model keyword ATTACH. The error code
returned to the parent environment can be captured and errors
handled, as per the following example:
[ ]
ATTACH “OTX12NxtStd.att”
[ ]
VAR->RtnStatus=ERRCODE( )
[VAR->RtnStatus > 0]
<actions to recover from error>

❖ Note: It is good modeling practice to follow these recommendations for error handling:

1. Capture the returned status of the ATTACH data model keyword with a new rule. This new rule must be defined immediately following the ATTACH statement.

2. Define actions in the new rule to recover from any possible non-zero status return (error). If ATTACH returns an error, the balance of the actions in the current rule will not be executed.
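The environment switch described above can be pictured as a nested call: the parent suspends at ATTACH, the child environment runs to completion or error, and a status comes back. The following Python fragment is only an illustrative simulation of that control flow; the rule representation and function name are invented for this example and are not part of Application Integrator.

```python
# Illustrative simulation (not product code) of the ATTACH control
# flow: the parent suspends, the child environment runs, and its
# status is available to the parent immediately after the ATTACH,
# much as ERRCODE() is used in the rules above.

def run_environment(rules):
    """Run rules in order; each rule is ('attach', child_rules) or
    ('status', code). Return the environment's final status code."""
    last_status = 0
    for kind, value in rules:
        if kind == "attach":
            last_status = run_environment(value)  # child environment runs
        elif kind == "status":
            if value != 0:
                return value                      # error stops this environment
            last_status = 0
    return last_status

child = [("status", 134)]        # child fails, e.g. a data model syntax error
parent = [("attach", child)]     # parent sees 134 right after the ATTACH
print(run_environment(parent))   # -> 134
```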


Ø To change environments using an ATTACH statement


1. Open the data model for which you want to add an ATTACH
statement.
2. Select the data model item for which you want to edit the rules.
3. From the Layout menu, choose Add/Edit Rules (or click the
RuleBuilder icon to the left of the data model item name). The
RuleBuilder window displays, as shown below:

4. Place the pointer in the Rules Edit workspace in the appropriate mode.
5. If necessary, define a null condition or conditional expression.
6. Select the Keywords tab, then select ATTACH from the
function list.
7. Complete the function by typing the name of a new map
component file (.att) to be used.
8. From the RuleBuilder File menu, choose Apply to enter the
changes to the data model.
9. From the Layout Editor File menu, choose Save to save the changes made to your model.


Common ATTACH Errors Encountered

During translation processing, the following errors are commonly found when there are problems with the map component file definition. Refer to Appendix F or the on-line Help for a complete description of these errors.

Error Code Description


133 Source Access Syntax Error
134 Source Data Model Syntax Error
135 Target Access Syntax Error
136 Target Data Model Syntax Error
137 ATTACH Error
138 Data Model Item Not Found
139 Data Model Item - No Value Found
145 Parse Environment Error
160 Error Opening Infile
161 Error Opening Outfile
169 Data Model Type Not Found In Model
170 Command Line (-at) ATTACH Error
172 Data Model Type Not Found in Access Model
173 Improper Access Model Item Definition



Map Component Files for Enveloping/De-enveloping

Application Integrator provides map component files (and associated models) for enveloping and de-enveloping data from/to public standards. During the course of data modeling, consider the use of these map component files for your application.

Processing Using OTRecogn.att (De-enveloping)

The OTRecogn.att file is a map component file typically used for processing public standards into application data. The rule logic necessary to perform the extraction of the values from the input stream is already included in the OTRecogn.mdl. It also automatically sets the HIERARCHY_KEY keyword environment variable. To use this feature, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.

Recognizing the Trading Partner

The process of recognizing which trading partner needs to be read from the Profile Database is handled with the generic model OTRecogn.mdl. This model is designed to allow multiple interchanges from multiple trading partners within the input file. To define the trading partner at the interchange level, the model sets an environment variable XREF_KEY to the value “ENTITY.” This means, in the simplest terms, that whenever the translator attempts to do a cross-reference from the database, it will look for a line or record within the database that starts with “ENTITY,” until the environment variable XREF_KEY is changed to another value.

Once the environment variable XREF_KEY has been set, the model uses the functions STRCAT and STRTRIM to concatenate the Sender’s Qualifier, Sender’s ID, Receiver’s Qualifier, and the Receiver’s ID that it has read from the input file, in that sequence, and assigns the result to a temporary variable VAR->OTICRecognID. At this point a cross-reference is performed using the function XREF, passing in the required parameters:

XREF(“ENTITY”, VAR->OTICRecognID, &VAR->OTICHierarchyID, “N”)


where the arguments are:

“ENTITY”           Category of XREF, or where to look in the Profile Database
OTICRecognID       Value that was concatenated together to perform the actual cross-reference
OTICHierarchyID    Variable to which the return value of the cross-reference will be assigned
Y/N                Determines whether (Y) or not (N) to turn on inheritance
The model logic accesses the Profile Database, searching for a line
or record that starts with “ENTITY” and has the concatenated value
assigned to the variable VAR->OTICRecognID. For example:
“ENTITY|X|ENTITY|C|(<sender qualifier>~<sender id>~<receiver qualifier>~<receiver id>)” “TP|NAME”

where

<sender qualifier>     02
<sender id>            SENDER ID
<receiver qualifier>   ZZ
<receiver id>          RECEIVER ID

“ENTITY|X|ENTITY|C|(02~SENDER ID~ZZ~RECEIVER ID)” “TP|NAME”

therefore,

VAR->OTICRecognID = 02~SENDER ID~ZZ~RECEIVER ID
The “ENTITY” clause is the XREF lookup key and value; the “TP”
clause is the value to be returned to the XREF.
A return status is then tested to make sure the cross-reference was
successful.
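The concatenation performed by STRCAT and STRTRIM above can be sketched outside the product as follows. This Python fragment is illustrative only; the function name and the use of rstrip() as a stand-in for a trailing-blank STRTRIM are assumptions, not product code.

```python
# Illustrative (non-product) equivalent of the STRCAT/STRTRIM logic:
# trim trailing blanks from each envelope field, then join the four
# fields with "~" in the order described above.

def build_recognition_id(sender_qual, sender_id, receiver_qual, receiver_id):
    fields = (sender_qual, sender_id, receiver_qual, receiver_id)
    return "~".join(f.rstrip() for f in fields)

key = build_recognition_id("02", "SENDER ID      ", "ZZ", "RECEIVER ID    ")
print(key)   # -> 02~SENDER ID~ZZ~RECEIVER ID
```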
The “ENTITY” line/record is automatically placed in the Profile
Database when you save a trading partner profile through the
Trade Guide. Depending on the standard selected, an “ENTITY”
record might exist for each level of the trading partner profile
hierarchy:
• IC - Interchange level
• FG - Functional group level
• Message - Document level
Refer to the section on de-enveloping in the appropriate standards
implementation guide (for example, the ASC X12 Standards
Implementation Guide) for instructions on using this environment.


Processing Using OTEnvelp.att (Enveloping)

The OTEnvelp.att file is a map component file typically used for processing application data into the public standards (enveloping). Unlike the processing of public standards, where generic models are provided, each application system requires customized models. When the models are created, the entity lookup logic must be included. To do this, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.

Refer to the section on enveloping in the appropriate standards implementation guide (for example, the ASC X12 Standards Implementation Guide) for instructions on using this environment.

The logic to perform extraction, concatenation, and entity lookup, to obtain a trading partner view into the database, needs to be included in the custom application model. The logic is represented below:
[ ]
;sets the cross-reference view into the database as
;“ENTITY”
SET_EVAR(“XREF_KEY”, “ENTITY”)

;ids are extracted and concatenated together
VAR->OTRecognID = STRCAT(STRTRIM(PartnerID “T”, “ ”),
    STRCAT(“~”, STRTRIM(SetID “T”, “ ”)))

;entity cross-reference lookup
VAR->OTXRefStatus = XREF(“ENTITY”, VAR->OTRecognID,
    &VAR->OTHierarchyID, “N”)

[VAR->OTXRefStatus != 0]
;entity lookup failure
EXIT 501

[ ]
;sets the trading partner’s substitution view
;into the database
SET_EVAR(“HIERARCHY_KEY”, VAR->OTHierarchyID)
The concatenated lookup must be specified in the Application
Cross-reference value entry box of the Outbound X12 Values dialog
box exactly as it is in the model. (This dialog box is opened at the
Message Level of the Trading Partner Profile dialog box.)


In the syntax example, the value entered in the Application Cross-reference value entry box would be “PartnerID~SetID”, representing the trading partner’s ID/name and the document type, which may or may not be the same as the Set-ID field.


An example of this logic can be reviewed in the ASC X12 model provided, called OTX12SOS.mdl. Refer to the Present mode rules on the group item “DocRead.”

Refer to the section on de-enveloping in the appropriate standards implementation guide (for example, the ASC X12 Standards Implementation Guide) for instructions on using this environment.

❖ Note: It is good modeling practice for the entity lookup to include the Trading Partner name and the document type.


Compliance Checking

The Application Integrator translator contains a compliance checking capability which captures the majority of parsing errors. The error handling code within data models can be reduced by using the compliance checking data model functions, keywords, and keyword environment variables. The error handling code ensures that the proper error code is captured and that the natural processing flow of the translation session continues to the next element.
The following data model functions are used in compliance checking:

• DMI_INFO()
• ON_ERROR()
• PERFORM()

The following data model keywords are used in compliance checking:

• INCLUDE
• STOP

The following keyword environment variable is used in compliance checking:

• RECOVERY

Specific information about each of these items can be found in Appendix B, Application Integrator Model Functions.
The DMI_INFO() data model function obtains data model item
information associated with the specified data model item and
updates an array variable. The data model item can be a Group,
Tag, Container, or Defining item.
The STOP data model keyword is used to alter the normal
translation processing flow in the PERFORM() declarations.
Processing stops and returns to the data model with a status of
zero.
Error handling routines which are used to capture the envelope
header errors appear in all the generic models supplied with
Application Integrator. However, the user-defined message models
must be modified to include the necessary error handling code to
capture errors at the message level. Modifying the user-defined
models is discussed later in this section.


The PERFORM() data model function provides the ability to modularize the data models for error handling and database access.
This is useful to applications that use an external database rather
than the Application Integrator Administration Database for
tracking information. The code for accessing the external database
can be placed in the PERFORM() procedures to replace existing
Administration Database access. The rules associated with the
PERFORM() functions are defined in an external file called an
INCLUDE file.
The INCLUDE files are declared in a data model in a Group item
labeled DECLARATIONS. As the data model is parsed, so are all
declared INCLUDE files. The INCLUDE files contain only
procedures; structure is not allowed in INCLUDE files. By having
the procedures in an external file, the same PERFORM() routines
can be shared by many data models.
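The INCLUDE/PERFORM arrangement described above amounts to a registry of named procedures that is loaded once and then shared. The sketch below is a non-product Python illustration of that idea; the names load_include and perform are invented for this example.

```python
# Non-product sketch of the INCLUDE/PERFORM idea: procedures declared
# in include files form a name -> callable registry that many data
# models can share.

def load_include(registry, include_file):
    """Merge the procedures declared in an include file into the registry."""
    registry.update(include_file)
    return registry

def perform(registry, name, *args):
    """Dispatch a named procedure, as PERFORM() dispatches a declaration."""
    if name not in registry:
        # analogous to referencing a declaration before it is loaded
        raise KeyError("declaration not loaded before reference")
    return registry[name](*args)

error_includes = {"LogError": lambda code: "logged %d" % code}
registry = load_include({}, error_includes)
print(perform(registry, "LogError", 138))   # -> logged 138
```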
The translator contains recovery routines that are enabled when an
error occurs. The purpose of the recovery routines is to allow
processing to continue instead of exiting the data model on the first
error encountered. Multiple errors can be captured and reported.
The recovery routines are activated and deactivated using the
RECOVERY keyword environment variable.
Recovery occurs during parsing of the input data. It is
implemented in both the access model and the data model. Each
data element is allowed one fault/error. When an error is
encountered, the translator performs the appropriate recovery
routine to correct the first error in the data element, reports the
appropriate error code to the data model and sets the file position
to the start of the next item. However, if the data element contains
more than one error, the translator returns processing flow to the
data model with error code 200. Error code 200 indicates to the
data model that the data problem is unknown and the translator is
unable to recover.


The ON_ERROR data model function defines a standard ERROR mode PERFORM routine to be invoked on an item when no ERROR
mode rules have been defined. The PERFORM must be loaded
using an INCLUDE statement before it is referenced, otherwise, a
134 parse error will be returned. When the INCLUDE file
containing the ON_ERROR PERFORM declaration is loaded, it is
inherited into the child environment and can be used without
having to reload the INCLUDE file. If an INCLUDE file is loaded
into a child environment which has the same declaration
name/label, the second will override the first declaration for this
child environment only.
RECOVERY and ON_ERROR() are independent of each other, that
is, one can be applied without the other. RECOVERY has to do
with source access model parsing of the data. ON_ERROR has to
do with the execution of default data model error rules when
ERROR mode is not defined on the specific data model item. This
table shows the four settings of RECOVERY and ON_ERROR().
RECOVERY ON_ERROR()
Yes Yes
Yes No
No Yes
No No

During processing of inbound files, errors that occur during parsing of the envelope headers (ISA, GS, or ST in ASC X12) cause processing of the envelope segments to stop. When Reject or Bypass exceptions on the envelope are encountered, parsing stops and the set action is taken on the unit, based on which envelope it is.

When source errors are encountered during processing of the messages on either inbound or outbound files, target processing will not occur. Instead, code can be placed on the source model to force it to exit with a specific error code, or to continue on to the target side by modifying the supplied source models to attach to the target environment and generate an application file.

❖ Note: It is recommended that the source model not be modified unless it is truly necessary.


Changes to Source Model Processing

The following table identifies the error numbers returned from access model parsing. Two levels of error reporting are offered: with RECOVERY active and with RECOVERY inactive.

Error   Description              Before        RECOVERY Inactive,   RECOVERY Active,
Code                             Version 3.0   version 3.0          version 3.0
 -1     End of stream            Returned      Returned             Returned
  0     Parsed OK                Returned      Returned             Returned
138     Parsing error†           Returned      Returned
141     Tag precondition failure Returned      Returned             Returned
146     Format error             Returned      Returned             Returned
152     Lookup error             Returned      Returned             Returned
176     Too short                Returned      Returned             Returned
177     Too long                                                    Returned
190     Missing/no data                        Returned             Returned
191     Out of character set                                        Returned
192     Post condition failure                 Returned             Returned
200     Unrecoverable error                                         Returned

† In versions before 3.0, error code 138 was returned for Missing/no data, Out of character set, Post condition failure, and Too long. Now, in version 3.0, when RECOVERY is set to inactive, Missing/no data is reported as error code 190. Error code 138 represents only Out of character set and Too long.

• The following applies when the data model item was attempted to be parsed. Referencing a data model item in the rules that has missing data or has no data associated with it no longer returns error code 139 (Data Model Item - No Value Found). Instead, a null value is assigned in the access model to those items and an error code 190 (Missing/no data) is returned. To return error code 139, you must use RETURN to go back a level before the balance of the children are access processed.


• Depending on the error code value, the mode of rules entered when returning from access model parsing, or when leaving another mode of rules, has changed in some instances. The following table shows the error values returned from access model processing of Definings, Tags, and Containers.
Error   Description              Processing Flow       Processing Flow
Code                             before Version 3.0    for Version 3.0
 -1     End of stream            ERROR                 ABSENT
  0     Parsed OK                PRESENT               PRESENT
138     Parsing error            ABSENT                ERROR
141     Tag precondition failure ABSENT                ABSENT
146     Format error             ERROR                 ERROR
152     Lookup error             ERROR                 ERROR
176     Too short                ERROR                 ERROR
177     Too long                 Was not returned      ERROR
190     Missing/no data          Was not returned      ABSENT
191     Out of character set     Was not returned      ERROR
192     Post condition failure   Was not returned      ERROR
200     Unrecoverable error      Was not returned      ERROR
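For quick reference, the version 3.0 behavior in the table above can be expressed as a simple lookup from error code to the rule mode entered. This Python fragment is an illustrative summary, not product code.

```python
# Illustrative summary (not product code): the version 3.0 rule mode
# entered for each error code returned from access model parsing.

V30_RULE_MODE = {
    -1: "ABSENT",    # end of stream
    0: "PRESENT",    # parsed OK
    138: "ERROR",    # parsing error (hard error)
    141: "ABSENT",   # tag precondition failure
    146: "ERROR",    # format error
    152: "ERROR",    # lookup error
    176: "ERROR",    # too short
    177: "ERROR",    # too long
    190: "ABSENT",   # missing/no data
    191: "ERROR",    # out of character set
    192: "ERROR",    # post condition failure
    200: "ERROR",    # unrecoverable error
}

print(V30_RULE_MODE[190])   # -> ABSENT
```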

• The following table shows the error values returned when moving from one rule mode to another.

Error Code        Processing Flow          Processing Flow
                  before Version 3.0       for Version 3.0

Leaving PRESENT Mode
 -1               ERROR                    ABSENT
  0               Occurrence violation     Occurrence violation
133, 134, 135,    ERROR                    ERROR
136, 137
138               ABSENT                   ERROR
139               ERROR                    ERROR
140               ABSENT                   ERROR
144               ERROR                    ERROR
167               ERROR                    ERROR

Leaving ABSENT Mode
 -1               ERROR                    Handled like error code 190,
                                           depending on whether the item is
                                           optional or mandatory, whether
                                           previous children parsed, and
                                           whether the Tag has a match value.
  0               Occurrence violation     Occurrence violation
138               ERROR                    ERROR
190 (instance     Was not returned         Occurrence violation, changed to 0
optional)
190 (instance     Was not returned         If the Tag has a match value
mandatory)                                 defined or previous children are
                                           present, ERROR. If the Tag has no
                                           match value and no previous child
                                           is present, Occurrence violation
                                           (leave value at 190). A Container
                                           is treated the same as a Tag with
                                           no match value.


• Error codes returned to the parent after child processing:

Error   Processing Flow          Processing Flow
Code    before Version 3.0       for Version 3.0
  0     PRESENT                  PRESENT
138     ABSENT                   ERROR
171     ABSENT                   ABSENT

• Incrementing access model counters has been simplified to the following rules:

a. Counters are incremented only when returning from access model parsing.

b. Counters 1–5 are incremented for valid parsed data in the input data stream. Data is parsed in the item’s base definition. Counters 6–10 are incremented for each item defined in the data model structure for which the access model attempted to parse it.

c. Counters 1–10 are incremented (as designated in the access model) when error code 0 is returned from access model parsing. When the returned value is not 0, only counters 6–10 are incremented.

d. Since only counters 1–5 are incremented by a 0 status from the parser, if the error is corrected in the rules and the item is to be counted, you must add code to the model to increment counters 1–5 for the item. (The DMI_INFO() function will return the counters that were to be updated for the item.) An exception to this is when an error code of 192 is returned from the access model when reading the post condition of a Tag or Container. If the error code 192 is set back to 0 when leaving ERROR rules, counters 1–5 are incremented automatically.

e. With the exception of REJECT and RELEASE, if keywords are used, counter values must be decreased manually. The use of REJECT or RELEASE automatically decreases the values of counters 1–5 when they have been previously incremented. Counters 6–10 will always decrease with the use of REJECT or RELEASE.


f. Tags with a null match value and Container counters 1–5 are not updated when no values are parsed for any child other than positional delimiters.

g. Tag and Container counters are updated after returning from post condition processing.

h. Counters are decremented if the keywords RELEASE or REJECT are used in any rule mode: counters 1–5 are decremented only if they were previously incremented, and counters 6–10 are always decremented.

❖ Note: Upon RELEASE, counters 6–10 must be reincremented within the rules.
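Rules (c) and (h) above can be illustrated with a small simulation. In this non-product Python sketch, a single pair of fields stands in for counters 1–5 and 6–10; the class and method names are invented for the example.

```python
# Illustrative sketch (not product code) of counter rules (c) and (h):
# "parsed" stands in for counters 1-5 and "attempted" for counters 6-10.

class ItemCounters:
    def __init__(self):
        self.parsed = 0       # counters 1-5: valid parsed data only
        self.attempted = 0    # counters 6-10: every parse attempt
        self._counted = False

    def after_access_parse(self, status):
        # rule (c): 6-10 always increment; 1-5 only on status 0
        self.attempted += 1
        self._counted = (status == 0)
        if self._counted:
            self.parsed += 1
        return status

    def reject(self):
        # rule (h): 1-5 back out only if previously incremented;
        # 6-10 always back out
        if self._counted:
            self.parsed -= 1
            self._counted = False
        self.attempted -= 1

c = ItemCounters()
c.after_access_parse(0)       # valid item: both counter groups move
c.after_access_parse(176)     # error: only the attempt counters move
print(c.parsed, c.attempted)  # -> 1 2
c.reject()                    # backs out the failed attempt
print(c.parsed, c.attempted)  # -> 1 1
```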

• Error code 141, No match, is returned if the Tag’s Elem_delim fails. In ASC X12, there is the following access model definition for a Segment:

  Seg_label = ? ^(([‘A’..’Z’, ‘0’..’9’, ‘’]){1..3}) (Elem_delim)
  Segment = (%Seg_label) ^(TAG) (Seg_term)
• Error code 138 is now considered a hard error. It represents a serious problem during the parsing/processing of the data. Typically, it means that the error code 138 must be carried up the structure hierarchy. All nonzero error values, upon leaving an item, are changed to error code 138, which represents errors such as: Too short, Too long, Invalid character, Format error, etc.

  Optional items change error code 190 to 0 before leaving so they do not become a hard error. Mandatory items change error code 190 to 138 upon leaving. The 138 causes child processing to discontinue and to return to the parent item. If the parent is a Tag with a null match value, or a Container, and no other children have been processed yet, the hard error of 138 is changed to error code 171, No children. If other children have been parsed or the Tag’s match value has been parsed, the 138 is returned to the parent as a hard error, representing a problem in parsing one of the existing parent’s children.
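The propagation rules above (optional 190 becomes 0, mandatory 190 becomes a hard 138, and a hard error with nothing yet parsed becomes 171) can be summarized in a small function. This is an illustrative Python sketch, not product logic.

```python
# Illustrative sketch (not product logic) of the error propagation
# rules above, as seen from an item's parent.

def propagate(code, mandatory, any_child_or_match_parsed):
    if code == 190:                  # missing/no data
        if not mandatory:
            return 0                 # optional: cleared before leaving
        code = 138                   # mandatory: becomes a hard error
    elif code != 0:
        code = 138                   # any other non-zero becomes hard 138
    if code == 138 and not any_child_or_match_parsed:
        return 171                   # soft "No children" to the parent
    return code

print(propagate(190, mandatory=False, any_child_or_match_parsed=True))   # -> 0
print(propagate(176, mandatory=True, any_child_or_match_parsed=True))    # -> 138
print(propagate(176, mandatory=True, any_child_or_match_parsed=False))   # -> 171
```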


A child is considered parsed when the current stream position is greater than the position upon entry of the item and rule processing ends with an error status of 0, BREAK, RETURN, or CONTINUE. The following two cases also represent a parsed child:

a. For a Tag: (**) two delimiters noting the position of a ‘no data’ element or composite.
b. For a Container: (+: ) delimiters noting the position of a ‘no data’ component.

❖ Note: The PRESENT() function would return False for these elements/composites/components.

• A Tag is considered PRESENT once its match value has been parsed (the match value must have a string value length greater than 0). If its first child is mandatory, it will be treated as a hard error, but the first mandatory child of a Container or null Tag returns a soft error of 171 to the parent. Child requirements are handled by requirements defined on each child (optional or mandatory) and Tag/Container dependencies (conditional) through the use of the CONDITIONAL() function. Error code 171 can be returned from the children of a Tag with a null match value, a Tag with a match value, or a Container, when none of the children had characters parsed.
• End-of-file is reported from the access model as a –1 value. If no characters are read for the item’s precondition, or no characters are read for the item’s base, then –1 is returned. If any characters are read for the base, the appropriate error code is returned.


Migrating otrans

With all the changes that occurred in the translator, there are some process flow changes that are addressed here to simplify end-user migration from previous versions to version 3.0. This section explains the changes in the translator and helps clarify any issues or concerns. It is intended solely for technical users who are already using previous versions of Application Integrator, who specify special error routines to capture errors and unique data flow, and who wish to understand the detailed changes made to the translator in version 3.0.

Translator Process Flow Before Version 3.0

Previous to the 3.0 release, otrans was limited to a few errors returning to the data model. Error code 138 was returned when something was wrong with the data element. Even though an error existed, processing continued to the next sibling if the occurrence of the element was met. This meant that if you had an element that was too long and optional, no hard error was recognized on the item and processing would continue to the next. The errors returned in versions before 3.0 are shown in the following table.

Error Code   Description
 -1          End Of Stream
  0          OK
138          Element error - missing, invalid character, too long, or post condition failure
139          Item not instantiated
140          No more values on ARRAY
141          Missing TAG
146          Incorrect format
152          Lookup ID failure
171          No Children
176          Element is too short

When the error occurred, the file position was at the position where the error occurred. This meant that the post condition of the element, if defined, was not read. If the error remained after the element went through Rule Mode Processing, the file position was reset to the position where the element was entered.


Version 3.0 Translator Process Flow

The version 3.0 otrans improves the way the translator processes data and makes the rules consistent for the process flow. This translator allows multiple errors to be caught because there is a built-in recovery routine that positions the file pointer to the next item. The areas of the translator are:

• Data Access Parsing
• Rule Mode Processing
• Occurrence Validation
• Process Flow between elements

Data Access Parsing and Recovery

With version 3.0 of otrans, new error codes return more descriptive meanings. Also, with recovery on, the file position is reset so that the next element can be read if the error is cleared. Recovery deals with Tag, Container, and Defining type items only. To enable recovery, you would type:

VAR->OTPriorEvar = SET_EVAR(“RECOVERY”, “Yes”)

By default in the enveloping and de-enveloping generic models, recovery is set to “Yes”.

The following table shows which error codes are returned depending on whether recovery is set to “Yes” or “No”, and which mode of rules is entered upon returning from the access model with the specific error code.

Error   Description                Recovery   Recovery   Rule mode   Difference from
Code                               “No”       “Yes”      entered     prior version
 -1     End Of File*               Returned   Returned   ABSENT      ERROR
  0     OK                         Returned   Returned   PRESENT
138     Hard Error                 Returned              ERROR       ABSENT
141     Missing TAG*               Returned   Returned   ABSENT
146     Incorrect format           Returned   Returned   ERROR
152     Lookup ID failure          Returned   Returned   ERROR
171     No Children*               Returned   Returned   ABSENT
176     Element is too short       Returned   Returned   ERROR
177     Element is too long                   Returned   ERROR
190     Element is missing*        Returned   Returned   ABSENT
191     Invalid character in                  Returned   ERROR
        the data
192     Post condition failure     Returned   Returned   ERROR
200     Unrecoverable error                   Returned   ERROR

*Represents soft errors, versus data that was parsed and is in error (hard error).

Refer to the Trade Guide for System Administration User’s Guide for more descriptive explanations of each error code.

With recovery set to “No”, the error codes are returned as in versions before 3.0, except for error 138, which formerly included “Element is missing”. Now 190 is returned for “Element is missing” whether recovery is set to “Yes” or “No”.

For two of the error values, –1 and 138, the mode of rules entered upon returning from access model parsing has changed. For –1 (EOF), if any characters are parsed for the base access definition, 176 (too short) or the appropriate error is returned in place of –1. When –1 is returned, the element/field is absent, so the ABSENT rules are performed. And 138 now represents errors, not “Element missing”, so the ERROR rules are performed.
Data Access Parsing refers to the part of the translator that reads the data stream character by character, verifies the character set and validity of the data, formats the value (Numeric or Date/Time), and executes recovery (if needed and turned on). This is all done inside otrans and is hidden from the user.

If the data has a problem and recovery is set to “Yes”, otrans will go through specific rules to try to recover the data as much as possible.


Rules of Recovery

While reading character by character, the translator hits a character out of the character set, end of file, or reaches the maximum element/field size as defined:

1. If no data was read, then read the post condition. If the post condition is read, or no post condition is defined for the item, error 141 is returned for TAGs and error 190 is returned for Composites and Definings.

2. If the minimum size was met and no post condition is defined, error 0 is returned back to the data model.

3. If the minimum size was not met and a post condition is defined, the next character is read for the post condition. If the post condition is read, error 176 is returned.

4. If the post condition is not read, continue reading until the post condition, end of file, or a size of 4096 is read. Return 191 if the post condition is finally read; else return 200.

5. If the maximum defined size is met, read the next character for the post condition. If no post condition is defined, return 0. If defined but not present, read until the post condition; return error 177 if it is finally read, or return 200 if it is not read.

6. If Date/Time/Numeric, check the format. If it fails, return 146.

7. If a TAG or composite fails its post condition, return 192.
During parsing, when an element/field contains an invalid character, the character is removed from the string of that element. This means that if an element has the data “ABC^DE”, where ^ is out of the character set, and you get the value of the item, it will be ABCDE with an error of 191. Defined delimiters are not considered invalid characters and are not automatically removed from the string during parsing.

Recovery only happens on elements. TAGs and Composites cannot execute recovery. Therefore, if the post condition is invalid on a TAG, 192 is returned, but processing is not set to the next TAG or Composite. When this error occurs, processing should stop. It is considered a hard error.
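Several of the recovery rules above can be exercised with a simplified simulation of element parsing: invalid characters are stripped first, and the size checks are then applied to what remains, which is why an element can report 176 (too short) even when an out-of-character-set character was also seen. The Python sketch below is illustrative only; it ignores pre/post conditions and delimiters and is not product code.

```python
# Simplified, non-product simulation of element recovery: characters
# outside the character set are stripped first, then the size checks
# are applied to what remains. The error codes are those described above.

def parse_element(raw, min_len, max_len, charset):
    value = "".join(ch for ch in raw if ch in charset)
    if not raw:
        return value, 190            # element is missing
    if len(value) < min_len:
        return value, 176            # too short (reported even if an
                                     # invalid character was also seen)
    if len(value) > max_len:
        return value, 177            # too long
    if len(value) != len(raw):
        return value, 191            # invalid character was removed
    return value, 0

AN = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ")
print(parse_element("12^34", 1, 20, AN))    # -> ('1234', 191)
print(parse_element("12^34", 20, 20, AN))   # -> ('1234', 176)
print(parse_element("1234", 1, 2, AN))      # -> ('1234', 177)
```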


Examples of Recovery

Example 1
  Data Model:  Element1 {ElementAN @1 .. 20 none }*1 .. 1
  Data:        N2**
  Result:      Returns error 190 to ABSENT mode rules.

Example 2
  Data Model:  Tag1 { SEGMENT “N1” }*1 .. 1
  Data:        N2*Value
  Result:      Returns error 141 to ABSENT mode rules of Tag1.

Example 3
  Data Model:  Element1 {ElementAN @1 .. 20 none }*1 .. 1
  Data:        N2*1234
  Result:      Returns 0 to PRESENT mode rules; does not do recovery because the minimum was met.

Example 4
  Data Model:  Element1 {AlphaNumericFld @1 .. 20 none }*1 .. 1
  Data:        N21234
  Result:      Returns 0 to PRESENT mode rules; does not do recovery because the minimum was met.

Example 5
  Data Model:  Element1 {ElementAN @20 .. 20 none }*1 .. 1
  Data:        N2*1234*A
  Result:      Returns 176 to ERROR mode rules; recovery positions to “A”.

Example 6
  Data Model:  Element1 {ElementAN @20 .. 20 none }*1 .. 1
  Data:        N2*12^34*A, where ^ is out of the character set
  Result:      Returns 176 to ERROR mode rules; recovery positions to “A”. The last error, minimum not met, is the one returned. If you reference the DMI, the value is 1234 because the ^ is removed from the data.
Section 4. Creating Environments
Example 7
Data Model: Element1 {ElementAN @1 .. 20 none }*1 .. 1
Data: N2*12^34*A, where ^ is out of the character set
Result: Returns 191 to ERROR mode rules; recovery positions to “A”. If you reference the DMI, the value is 1234 because the ^ is removed from the data.

Example 8
Data Model: Element1 {ElementAN @1 .. 2 none }*1 .. 1
Data: N2*1234*A
Result: Returns 177 to ERROR mode rules; recovery positions to “A”.

Example 9
Data Model: Element1 {ElementAN @1 .. 6 none }*1 .. 1
Data: N2*12^345678*A, where ^ is out of the character set
Result: Returns 177 to ERROR mode rules; recovery positions to “A”. The last error returned is maximum not met. If you reference the DMI, the value is 1234 because the ^ is removed from the data.

Example 10
Data Model: Element1 {ElementAN @1 .. 4 none }*1 .. 1
Data: N2*12^34*A, where ^ is out of the character set
Result: Returns 191 to ERROR mode rules; recovery positions to “A”. The last error is not maximum not met, because the ^ is removed from the data.

Example 11
Data Model: Tag1 { LineFeedDelimRecord “N2”
  Element1 {NumericFld @4 .. 4 none }*1 .. 1
  Element2 {AlphaNumericFld @4 .. 4 none }*1 .. 1
Data: N21234ABCD<lf>
Result: Returns 0 to PRESENT mode rules; no recovery occurs because the elements met their definitions.
Example 12
Data Model: Tag1 { LineFeedDelimRecord “N2”
  Element1 {NumericFld @4 .. 4 none }*1 .. 1
  Element2 {AlphaNumericFld @4 .. 4 none }*1 .. 1
Data: N21234ABCD (no line feed record after the D)
Result: Returns error 192 to ERROR mode rules on Tag1.

Example 13
Data Model: Tag1 { LineFeedDelimRecord “N2”
  Element1 {NumericFld @5 .. 5 none }*1 .. 1
  Element2 {AlphaNumericFld @4 .. 4 none }*1 .. 1
Data: N21234ABCD<lf>
Result: Returns error 146 to ERROR mode rules for Element1, and 176 to ERROR mode rules for Element2 if the error was cleared on Element1.

Example 14
Data Model: Element1 {ElementN @1 .. 8 none }*1 .. 1
Data: N2*12^34*A, where ^ is out of the character set
Result: Returns 191 to ERROR mode rules; recovery positions to “A”. The last error is not a format error, because the ^ is not part of the character set and is removed from the data.

Example 15
Data Model: Element1 {ElementN @1 .. 8 none }*1 .. 1
Data: N2*12B34*A, where B is part of # CHARSET
Result: Returns 146 to ERROR mode rules; recovery positions to “A”. The error is a format error (146) rather than an invalid character error, because the B is part of # CHARSET and is not removed from the data.
Rule Mode Processing   There are three types of mode processing: PRESENT, ABSENT, and ERROR. This Version 3.0 change deals only with source processing. When an item other than a group parses data, it enters one of the three modes of processing:
• PRESENT mode: when the error code is 0.
• ABSENT mode: when the error code is -1, 139, 140, 141, 171, or 190.
• ERROR mode: any other error.
The modeler is able to put rules on any of these modes. “Clearing the error” means that the last action in the mode results in a 0 error code. (A “null condition” by itself will clear the error, resetting it to zero.) The following code shows how an error 190 changes into 0, in the case where element BIG_01 is missing.
BIG_01 { ElementAN @1 .. 10 none
[]
ARRAY->Big_01 = BIG_01
:ABSENT
[]
ARRAY->Big_01 = ""
}*1 .. 1
With this code, you default a value into the variable instead of raising an error for a missing required element. When ABSENT mode processing is done, the error is zero.
Occurrence Validation
The codes -1, 139, 140, 141, 171, and 190 enter ABSENT mode rules. If no ABSENT mode rules are defined, the error keeps its value; it is not cleared. When processing is done with PRESENT or ABSENT mode (whether rules are defined or not) and the error code is -1, 139, 140, 141, 171, or 190, occurrence validation is checked. If this occurrence of the item is optional, the error is reset to zero and processing proceeds to the next sibling. If this occurrence of the item is mandatory, the error code is taken into ERROR mode.
Like ABSENT mode, ERROR mode can also clear the error. If the error is not cleared, processing converts it to a hard error (138).
Process Flow Between Elements   After Rule Mode Processing and Occurrence Validation, if a non-zero error value remains, the error is changed to 138 and is returned to the parent of the item. This represents a hard error: no matter what the occurrence of the parent is, processing stops.
Soft errors are missing-value type items. As previously stated, they check Occurrence Validation before entering ERROR mode rules. These errors first go into ABSENT mode rules. If the error is cleared, processing continues to the next element. If the error code is unchanged, or no ABSENT rules are defined, Occurrence Validation is checked. If the minimum is not met, the item enters its ERROR mode. If the minimum is met, the item is not considered an error.
Processing for groups is broken down into two categories:
• Groups only within groups
• Groups that have access items (Tags, Containers, Defining)
For the first type, the group loops until the maximum occurrence has been reached, unless a keyword like BREAK or RETURN is used to leave the group or the group rules processing ends with error 139 or 140. For example, if you have a group that goes 1 to 100 and just counts, it loops 100 times. If you want to break out of the group after 50 iterations, the code would look like:
Group1 {
[VAR->OTCount == 50]
BREAK
[]
VAR->OTCount = VAR->OTCount + 1
}*1 .. 100
For the second type, the group loops either until the maximum occurrence is reached or until no more data is read. When no more data is read, an error 171 is returned to the group, and the processing flow enters ABSENT mode rules. If the error remains after ABSENT mode and the minimum occurrence has been met, the looping stops and processing continues to the next element. If the minimum was not met, processing goes to the ERROR mode rules and then acts like a hard error, going to the group's parent with error 138.
Sometimes missing required elements are not considered hard errors (138). If the first item in a group is missing but required, the error 171 is returned to the parent. That way, occurrence validation is checked to verify whether the occurrence was met; if so, processing continues to the sibling. For example:
Group1 {
Group2 {
Tag1 { Segment “BIG”
}*1 .. 1
}*1 .. 1
Group3 {
}*1 .. 1
}*0 .. 1
If the BIG segment is missing, even though it is required, error 171 is returned to Group1. Since the occurrence validation is 0 .. 1 (an optional group), there is no hard error.
Using Extended Access Device Types
Application Integrator's extended access provides additional methods beyond the standard file access method. The additional access device types include:
• TCP/IP sockets, referred to as “socket”
• UNIX FIFO devices, referred to as “fifo”
• Pipe, referred to as “pipe”
• UNIX System V message queues, referred to as “msgq”
❖ Note: Only the file access device type supports archiving, regardless of the settings. The extended access types described in this section are not supported by Application Integrator archiving.
Common Syntax   The common syntax for specifying an extended access device type is:
“<dev_specific_name>[<dev_name>]<add_dev_specific_params>”
where
<dev_specific_name> specifies the data stream: a file, program, socket address, or message queue ID. Refer to the individual device specifications that follow for details on each access method.
<dev_name> is the name of the access method to be used: fifo,
pipe, socket, or msgq.
<add_dev_specific_params> is a list of specific modifiers that
control how the devices behave.
This syntax is used for the INPUT_FILE and OUTPUT_FILE
specifications within the map component file, by way of command
line parameters or defines, or as the parameter to the file attribute
of a group item type in a data model.
Example:
INPUT_FILE = “/tmp/fifo [fifo]”
OUTPUT_FILE = “/tmp/output.exp”
When the device type specification is absent, the translator defaults to the standard file device. Spaces must delimit the fields of the specification.
Application Integrator Sockets Examples
Sockets Description   Sockets are communication endpoints, or service access points. Before a socket can be accessed across a network, it must be bound to an address. The socket address consists of a fully-qualified domain name or Internet Protocol (IP) address and a port address. Stream sockets, like those used with Application Integrator, are appropriate for transferring large volumes of data reliably. A connection is set up, data is transferred, and each packet is checked at the receiving end to verify accurate transmission. Stream sockets use the Transmission Control Protocol (TCP).
Socket Interface Notes   Application Integrator sockets are used to read and write data over a TCP/IP network. Sockets use the client/server model to initiate the connection. The client/server model divides all communicating applications into two types, depending on what they do to facilitate the connection: the application that waits (or listens) is called the server, and the one that initiates the connection is called the client. Usually there is only one server, but there can be many clients.
The connection mode determines which machine is the client and which is the server. Receiving data from another program does not establish the client or server relationship as it relates to sockets. This is discussed further in this section.
The main advantage of the client/server model is that it makes the translation session machine independent. The server and the client can run on separate machines, and the two ends of a connection can be on different types of machines. For example, one end can be a Windows-based PC and the other a UNIX-based HP machine.
The socket-based approach makes Application Integrator applications Internet-ready. If the clients are systems distributed all over the Internet and the server is at one location accessible through the Internet, the session can be carried out over the Internet. However, although the architecture supports wide area network (WAN) distributed processing, Application Integrator is supported and quality assured in a local area network (LAN) environment only.
❖ Note: Generic processing is not supported using sockets.

Sockets Specifications
Defining a Socket   Before a socket can be used, both parties, sender and receiver, must agree on the address and configuration of both ends. If you will be receiving data, you must define an INPUT_FILE; if you will be sending data, you must define an OUTPUT_FILE. If you will be both sending and receiving, either two unidirectional socket ports or a single bidirectional port is required.
Specify the socket device type with the following syntax:
“<host_name>:<port_number> [socket] passive persistent <#retries> <retry_time_period> &”
where
<host_name> is the machine name or an alias of the computer name. If the translation involves a computer outside your local network, the fully qualified domain name or the IP address must be used. Your fully qualified domain name is “<hostname value>.<domain value>”. Refer to the procedure “To locate the hostname of your computer” for more information.
<port_number> is the socket port through which the data transfer will occur. It should be a number greater than 5000.
[socket] identifies the access type.
passive, if specified, makes the socket a server socket: it waits for a client to connect to it and, after completing a translation, waits for a new client to connect. If passive is not specified, the socket becomes active and acts as a client socket.
persistent, if specified, makes the translator stay up, depending upon the data model structure. If an active socket fails to connect to the server, it keeps trying until a connection is made or until a specified time elapses. The default for active persistent sockets is to keep trying to connect to the server at 1-second intervals, indefinitely. If persistent is not specified, a single connection attempt occurs.
<#retries> is the number of attempts an active persistent socket makes to connect to a server before an error is returned.
<retry_time_period> is the number of seconds the socket waits before retrying.
& specifies that the socket is bidirectional. If & is not specified, a unidirectional socket is used.
Examples:
“hpmachine:5560 [socket] persistent 50 5”
“hpmachine:5550 [socket] &”
The following paragraphs discuss socket specifications in detail.
Ø To locate the hostname of your computer

UNIX users
At the UNIX command prompt, type
uname -n
– or –
hostname
If your computer is not configured to use these commands, contact
your system or network administrator to identify the hostname of
your computer.

Windows 95 users
1. From the Settings menu, select Control Panel. Click the
Network icon. The Network dialog box will appear.
2. From the Network dialog box, select the Configuration tab. In the select list, locate the TCP/IP entry and highlight it.
3. Choose the Properties button. The TCP/IP Properties dialog box will appear.
4. Select the DNS Configuration tab.
a. If the Enable DNS radio button is on, note the entry that
appears in the Host value entry box. This is your TCP/IP
hostname.
b. If the Disable DNS radio button is on, select the IP Address tab. At the IP Address tab, if the Specify an IP Address radio button is on, note the entry that appears in the IP Address value entry box.
However, if the “Obtain an IP address automatically” radio button is on, contact your system or network administrator to obtain the hostname for your machine.
5. Choose the Cancel button to exit each of the dialog boxes.

Windows NT 4.0 users
1. From the Settings menu, select Control Panel. Click the
Network icon. The Network dialog box will appear.
2. From the Network dialog box, select the Protocols tab. In the
select list, locate the TCP/IP entry and highlight it. Choose the
Properties button. The TCP/IP Properties dialog box will
appear.
3. At the IP Address dialog box, if the Specify an IP Address radio button is on, note the entry that appears in the IP Address value entry box.
However, if the “Obtain an IP address from a DHCP server”
radio button is on, contact your system or network
administrator to obtain the hostname for your machine.
4. Select the DNS tab. In the Host Name value entry box, locate
and make a note of your hostname.

5. Choose the Cancel button to exit each of the dialog boxes.
Connection Mode   In Application Integrator, the connection mode determines the client/server relationship. Sockets may be active or passive. Active mode causes a socket to act as a client and initiate the connection to another, server socket; it is the default mode. Passive mode causes the socket to function as a server and be in a listening mode, waiting for a connection to be initiated by an active socket somewhere else in the network.
❖ Note: The mode of connection determines which machine is the client and which is the server. Receiving data from another program does not establish the client or server relationship in Application Integrator.
Socket Attributes Socket attributes may be set to persistent or single attempt connect.
The default is single attempt. This setting is applicable for both
active (client) and passive (server) sockets.
If a socket is created without the persistent attribute, it will stop
after a single connection attempt. If a socket is created with the
persistent attribute, it will continue to retry the connect at one
second intervals, indefinitely. After connecting, the only way to
close a persistent socket is to exit the environment in which it was
created or to kill the process.
Besides the persistent attribute, a retry count and a retry time period can be specified. If a number of retries is specified for a persistent socket, it reconnects after a close, up to the number of retries specified. The default wait period between retries is one second. Both the retry count and the retry time period can be very large numbers.
For passive sockets, the persistent attribute causes the socket to “re-listen” after receiving a close. Retry time limits do not apply to passive sockets.
Data Transfer Mode There are two ways in which data can be transferred through
sockets:
1. Bidirectional
2. Unidirectional
A bidirectional socket allows data movement into and out of the same socket address. Using a bidirectional socket, the client can write data to the server, and the server can write data to the client, on the same socket.
The biggest advantage of a bidirectional socket is that only one socket is required for a two-way transfer of data. This can be very important because each socket uses system resources: as the number of sockets in use increases, the load on the system increases and system performance is affected. This is especially significant on a heavily loaded machine (such as a multi-user UNIX file server).
A unidirectional socket allows data movement in one direction only, either into or out of the socket address. If you want data to flow in two directions, you need two unidirectional sockets, one for each direction.
The main advantage of a unidirectional socket is the simplicity of developing programs to use it. Programmers are already familiar with the concept of reading from one file and writing to another, and programming is simpler with unidirectional sockets.

Miscellaneous Details   The socket interface provided is a unidirectional or bidirectional data-only socket that allows the transfer of byte streams. The socket device does not provide any command-level interface. Socket devices in Application Integrator are implemented in TCP only; they do not support the User Datagram Protocol (UDP), nor do they provide any out-of-band control.
At the physical level, the Application Integrator socket device reads or writes streams of data up to 1024 bytes per transfer. This transfer attribute is transparent to network users of the socket as well as to Application Integrator modelers, and is mentioned here only for the sake of completeness.
Specifying a Socket   Socket-type devices can be specified for any I/O item within Application Integrator. Usually, I/O devices are specified as INPUT_FILE or OUTPUT_FILE in a map component file. To create a socket, the [socket] attribute is added to the declaration of the file item, along with any connect mode, number of retries, and retry period, as required. When an I/O device is a socket, the file handle is used to specify the hostname and port address. Both hostname and port address may have an alias. The hostname is the name of the system, and the port address is a number assigned by the system administrator.
Sockets are widely used by the operating system for various tasks, and the operating system reserves port numbers below a certain value. When specifying the port number, select a number greater than 5000.

Theory of Operation The sockets interface was first developed to help UNIX
programmers use existing TCP/IP protocols for network
communications. While it was being developed, some of the
concepts also found their way into UNIX and sockets became fully
integrated with the operating system.
The Windows™ sockets (Winsock™) specification was based on
UNIX sockets. It includes UNIX socket routines and extensions
specific to Windows. Winsock is supplied as a dynamic link library
(DLL). This DLL has to be loaded before any socket operations are
performed. An important issue in Winsock is the version number.
The first Winsock version was 1.0, followed by 1.1 and 2.0. For
reasons of compatibility, most implementations support all versions
and allow the socket application to select the version it needs to use.
Both of these issues can be taken care of by the socket program
during initialization.
These details are transparent to the Application Integrator user. However, the user must ensure that wsock32.dll (the 32-bit Winsock) is in the right path and that the DLL is the latest version.
Sockets Examples   This section describes six examples of how to use sockets. For details on how to set up the examples, refer to Preparing to Run the Examples. For details on running the examples, refer to the following sections:
• Example 1: Input from a client socket and output to a file.
• Example 2: Input from a file and output to a server socket.
• Example 3: Input from a server socket and output to a file.
• Example 4: Input from a unidirectional server socket and output to a unidirectional server socket.
• Example 5: Input from a persistent client socket and output to a file.
• Example 6: Input and output through a persistent bidirectional server socket.

List of Common Files Required by the Examples

File          Description
OTsoc.in      Input data file
OTsocm.att    Master map component file
OTsoc.att     Map component file that defines the source and target data models
OTsocm.mdl    Master model
OTsocs.mdl    Source data model
OTsoct.mdl    Target data model
OTclose.in    File containing data required to close a persistent socket
Preparing to Run the Examples   Before running the sockets examples, you must edit the script file or batch file and compile the example programs. The following procedure describes how to compile the programs for Example 1. Substitute the appropriate information to compile the programs for the other examples.

UNIX
For UNIX customers, use the standard C compiler cc.

Ø To edit the UNIX shell script
Modify the OTsoc1.sh file to contain your hostname, using an on-line editor such as vi.
Ø To compile the C programs
Type the following command, using the appropriate filenames where indicated by brackets. Do not type the brackets.
cc <C source code filename> -o <output filename>
$ cc OTsoc1.c -o OTsoc1
On some UNIX systems, socket functions are supplied by the socket
compatibility library /lib/libsocket.a. This has to be
explicitly included during compilation. For such systems, the above
command will produce unresolved symbol errors. When they
occur, type the following command to correct them.
cc <C source code filename> -o <output filename> -lsocket
$ cc OTsoc1.c -o OTsoc1 -lsocket

Windows (95 and NT 4.0)
For information about using Visual C++, refer to the Microsoft documentation included with the software. In addition to the default libraries, wsock32.lib must be included when linking; this library implements the 32-bit version of Windows Sockets.
For Windows customers, the socket examples have already been compiled for you; however, the procedures necessary to perform a compilation are documented here. The following procedures are used to compile programs for Windows using MS Visual C++, Version 4.0 or Version 5.0.
Ø To edit the Windows batch file
Modify the OTsoc1.bat file to contain your hostname, using an on-line editor such as Notepad.
Ø To prepare for compiling using MS Visual C++, Version 4.0 or Version 5.0
Create a backup copy of the OTsoc1.exe file (found in the same directory as Application Integrator), for example:
copy OTsoc1.exe OTsoc1ex.bak

Ø To create a project workspace using Microsoft Visual C++, Version 4.0
1. From the Microsoft Developer Studio File menu, select New.
The New dialog box will appear.
2. Choose Project Workspace. Choose the OK button. The New
Project Workspace dialog box will appear.
3. At the Name value entry box, type
OTsoc1
4. At the Type area, choose Console Application. Choose the
Create button.

Ø To create a project workspace using Microsoft Visual C++, Version 5.0
1. From the Microsoft Developer Studio File menu, select New.
The New dialog box will appear.
2. Choose the Projects tab. Choose Win 32 Console Application.
3. At the Project Name value entry box, type
OTsoc1
4. Choose the OK button.

Ø To set project settings using Microsoft Visual C++, Version 4.0
1. From the Microsoft Developer Studio Build menu, select
Settings. The Project Settings dialog box will appear.
2. Choose the C/C++ tab.
3. From the Category pull down, choose General.
4. In the Preprocessor definition’s value entry box append the
value “,_WINDOWS”.
5. Choose the OK button.
Ø To set project settings using Microsoft Visual C++, Version 5.0
1. From the Microsoft Developer Studio Project menu, select
Settings. The Project Settings dialog box will appear.
2. Choose the C/C++ tab.
3. From the Category pull down, choose General.
4. In the Preprocessor definition’s value entry box append the
value “,_WINDOWS”.
5. Choose the OK button.

Ø To insert files into the project using Microsoft Visual C++, Version 4.0
1. From the Microsoft Developer Studio Insert menu, select Files
into Project. The Insert Files into Project dialog box will appear.
For the List Files of Type pull down menu, choose Library Files
(*.lib).
2. In the directory tree, point the path to the \msdev\lib directory that contains the correct version of WSOCK32.LIB. From the list of files, select
WSOCK32.LIB
Choose the OK button.
3. From the Microsoft Developer Studio Insert menu, select Files
into Project. The Insert Files into Project dialog box will appear.
4. In the directory tree, point the path to the Application
Integrator directory. From the list of files, select
OTsoc1.c
Choose the OK button.

Ø To insert files into the project using Microsoft Visual C++, Version 5.0
1. From the Microsoft Developer Studio Project menu, select Add
to Project. Choose Files. The Insert Files into Project dialog box
will appear. For the List Files of Type pull down menu, choose
Library Files (*.lib).
2. In the directory tree, point the path to the \msdev\lib directory that contains the correct version of WSOCK32.LIB. From the list of files, select
WSOCK32.LIB
Choose the OK button.
3. From the Microsoft Developer Studio Project menu, select Add to Project. Choose Files. The Insert Files into Project dialog box will appear.
4. In the directory tree, point the path to the Application
Integrator directory. From the list of files, select
OTsoc1.c
Choose the OK button.

Ø To compile and build using Microsoft Visual C++, Version 4.0 or Version 5.0
1. From the Microsoft Developer Studio, select either Debug or
Release from the Project Configuration pull down.
2. From the Microsoft Developer Studio, open the OTsoc1 files
folder to display the entries you inserted into the project. Select
OTsoc1.c to identify the program to compile.
3. From the Microsoft Developer Studio Build menu, select Compile OTsoc1.c. This checks the C code for errors.
4. From the Microsoft Developer Studio Build menu, select Build OTsoc1.exe.
5. From the \msdev\projects\otsoc1\debug or release directory, copy OTsoc1.exe to your Application Integrator development directory.
6. Test your functions for proper parameter passing, etc.
Example 1: Input from a client socket and output to a file   Example 1 demonstrates input from a client socket and output to a file on a client machine. A connection mode has not been defined for the socket, so it defaults to the active mode, making it a client socket. Figure 1 illustrates this example.

List of Additional Files Needed

File         Description
OTsoc1.c     C source code.
OTsoc1       UNIX program created by the compile. It creates a server socket; once the client connects, it sends data to the client.
OTsoc1.exe   Windows program that creates a server socket. Once the client connects, it sends data to the client.
OTsoc1.sh    UNIX shell script that executes the example. You must specify your machine name as the host_name argument of the INPUT_FILE variable.
             INPUT_FILE = “host_name:5510 [socket]”
             OUTPUT_FILE = “soc1.out”
OTsoc1.bat   Windows batch file that executes the example. You must specify your machine name as the host_name argument of the INPUT_FILE variable.
             INPUT_FILE = “host_name:5510 [socket]”
             OUTPUT_FILE = “soc1.out”

In Example 1, your machine is the server, and you establish the socket by running OTsoc1. You are prompted to enter the server port number and the input filename.
The client machine connects to the server by running OTsoc1.sh. This program creates the client socket, connecting to the server machine, and establishes the client/server relationship for the input data stream. When the connection is made, the server sends the input data to the translator through the socket. The data is processed through the translator and output to a file on the client machine. Closed messages appear in the server and client windows, indicating the socket is closed and the translation session has ended successfully.
[Figure 1. Sockets Example 1 (Socket1.vsd): OTsoc1 on your machine (the server) creates server socket #5510; the Application Integrator Translator on the outside machine (the client) initiates the socket connection in active mode, and data flows from OTsoc1 through the translator to soc1.out.]
Performing Example 1

Windows

❖ Note: The dialog boxes that appear when running the examples in Windows are shown in Example 1 only and are typical of those that would appear in each of the examples.
1. If you are a Windows user, open the Notebook Editor. Modify OTsoc1.bat to specify your hostname in the –DINPUT_FILE parameter. Add –cs <Control Server number>, where <Control Server number> is “pr” for production, “dv” for development, or “ts” for test. Save the changes and exit the Notebook Editor.
2. Start the example.
Display the Windows Run dialog box according to the procedures for your version of Windows. In the value entry box, type the path and the OTsoc1.exe filename. Choose the OK button.
- or -
For systems using Windows ‘95, NT 3.51, or NT 4.0 locate the
OTsoc1.exe filename in Explorer and double-click.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.

3. At the DOS window, at the “Enter the Server port number” prompt, type
5510
Press the Enter key. This identifies the server socket port number.
4. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
5. Start the translation.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc1.bat filename. Choose the OK button.


- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc1.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.

6. At the Session Output dialog box, a “Session ended: err: 0” message should appear indicating the example ran successfully.
Choose the Quit button to close the Session Output dialog box.


UNIX
1. Open the vi Editor and modify OTsoc1.sh to specify your
hostname in the -DINPUT_FILE parameter. Save the changes
and exit the vi Editor.
2. Compile the OTsoc1.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2.
4. Start the Application Integrator Control Server; type
otstart
5. At the command prompt in Window 1, type
OTsoc1
Press the Enter key. This starts the example, opens the client
socket, and establishes the client/server relationship.
6. In Window 1, at the “Enter the Server port number” prompt,
type
5510
Press the Enter key. This identifies the server socket port
number.
7. In Window 1, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
8. In Window 2, at the command prompt, type
OTsoc1.sh
Press the Enter key. This starts the shell script to process the
example.
9. In Window 1, the Opened input file message should appear to
indicate you have successful communication and the example is
executing.
When both windows return to the command prompt without
error messages, the example executed successfully.
(window 1)
$ OTsoc1
Enter the Server port number: 5510
Enter the input filename: OTsoc.in
Opened input file
$

(window 2)
$ OTsoc1.sh
Session# 000102 started
Session# 000102 completed successfully.
$
Example 2: Input from a file and output to a passive socket

Example 2 demonstrates input from a file that resides on the server machine and output to a passive socket. The output socket is passive, which means that it will function as a server socket, waiting for a connection from a client somewhere on the network.

List of Additional Files Needed

File Description
OTsoc2.c C source code
OTsoc2 UNIX program created by the compile that creates
a client socket. It connects to the server socket
created by the translator and receives data.
OTsoc2.exe Windows program that creates a client socket. It
connects to the server socket created by the
translator and receives data.
OTsoc2.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the OUTPUT_FILE variable.
INPUT_FILE = “OTsoc2.in”
OUTPUT_FILE = “host_name:5520 [socket]
passive”
OTsoc2.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the OUTPUT_FILE variable.
INPUT_FILE = “OTsoc2.in”
OUTPUT_FILE = “host_name:5520 [socket]
passive”

In Example 2, your machine is the server and you start the translation session by running OTsoc2.bat or OTsoc2.sh. In this script, you have identified the input file, which is stored on your machine, and the hostname, port number, and attributes for the output server socket.
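The INPUT_FILE and OUTPUT_FILE values above pack the hostname, port, device type, and connection attributes into a single string. As an illustration only (this is not the translator's own parsing code, and the structure and field names are assumptions), a small routine can show how such a spec string breaks apart:

```c
/* Hypothetical parser for a "host:port [socket] attributes" spec string,
   as used by the INPUT_FILE/OUTPUT_FILE variables in these examples.
   Illustrative only; not the translator's own parser. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sock_spec {
    char host[64];
    int  port;
    int  passive;      /* 1 if the "passive" attribute is present */
    int  persistent;   /* 1 if the "persistent" attribute is present */
};

/* Returns 0 on success, -1 if the spec is not host:port [socket] ... */
int parse_sock_spec(const char *spec, struct sock_spec *out)
{
    memset(out, 0, sizeof *out);

    const char *colon = strchr(spec, ':');
    if (colon == NULL || colon == spec ||
        (size_t)(colon - spec) >= sizeof out->host)
        return -1;                       /* no host:port form */

    memcpy(out->host, spec, (size_t)(colon - spec));
    out->host[colon - spec] = '\0';

    out->port = atoi(colon + 1);
    if (out->port <= 0)
        return -1;

    if (strstr(spec, "[socket]") == NULL)
        return -1;                       /* not a socket device */
    out->passive    = strstr(spec, "passive")    != NULL;
    out->persistent = strstr(spec, "persistent") != NULL;
    return 0;
}
```

A plain filename such as soc2.out has no colon or [socket] marker, so the same routine would reject it and the value would be treated as a file instead.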


The client machine connects to the server machine by running OTsoc2. This program creates the unidirectional client socket on
the server machine and establishes the client/server relationship for
the output data stream. When the connection is made, the server
sends the translated data to the client through the socket and closes
the connection. Closed messages appear on the server and client
windows indicating the socket is closed and the translation session
has ended successfully.

[Diagram: Server Socket #5520 (Passive Mode) and a Client Socket, joined when the client initiates the socket connection. Data flows from OTsoc.in through the Application Integrator Translator on your machine (server) to OTsoc2 and soc2.out on the outside machine (client).]

Figure 2. Sockets Example 2


Performing Example 2

Windows

❖ Note: Samples of the dialog boxes can be found in Example 1

1. If you are a Windows user, open the Notebook Editor. Modify OTsoc2.bat to specify your hostname in the –DOUTPUT_FILE parameter. Add –cs <Control Server number> where <Control Server number> is “pr” for production, “dv” for development, or “ts” for test. Save the changes and exit the Notebook Editor.
2. Start the translation.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc2.bat. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc2.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.
3. Start the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc2.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc2.exe filename in Windows Explorer and
double-clicking it.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.
4. At the DOS window, at the “Enter remote host name” prompt,
type your host_name for the client machine.
Press the Enter key.
5. At the DOS window, at the “Enter the Server port number”
prompt, type
5520


Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the output filename”
prompt, type
soc2.out
Press the Enter key. This identifies the output filename.
7. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the session output dialog box.

UNIX
1. Open the vi Editor. Modify OTsoc2.sh to specify your
hostname in the -DOUTPUT_FILE parameter. Save the
changes and exit the vi Editor.
2. Compile the OTsoc2.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the client window and
Window 2 will be the server window.
4. Start the Application Integrator Control Server; type
otstart
5. In Window 1, at the command prompt, type
OTsoc2.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc2
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the client machine.
Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5520


Press the Enter key. This identifies the server socket port
number.
9. In Window 2, at the “Enter the output filename” prompt, type
soc2.out
Press the Enter key. This identifies the output filename.
In Window 1 and Window 2, processing messages like those
shown below should appear indicating the example is
processing successfully.
When both windows return to the command prompt without
error messages, the example executed successfully.

(window 1)
$ OTsoc2.sh
Session# 000103 started
Session# 000103 completed successfully.
$

(window 2)
$ OTsoc2
Enter the remote host name: host_name
Enter the remote port number: 5520
Enter the output filename: soc2.out
Identified remote host
Connected to remote host
Opened output file
$


Example 3: Input from a passive socket and output to a file

Example 3 demonstrates input from a passive socket and output to a file. The passive argument causes the socket to function as a server socket and wait for a connection to be made by a client socket somewhere on the network. In the example, no retry attributes were specified, so the socket makes only one connection attempt. If the attempt fails, the user must rerun the program to retry the connection.

List of Additional Files Needed

File Description
OTsoc3.c C source code.
OTsoc3 UNIX program created by the compile that creates
a client socket. It connects to the server socket
created by the translator and sends data.
OTsoc3.exe Windows program that creates a client socket. It
connects to the server socket created by the
translator and sends data.
OTsoc3.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5530 [socket] passive”
OUTPUT_FILE = “soc3.out”
OTsoc3.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5530 [socket] passive”
OUTPUT_FILE = “soc3.out”

In Example 3, your machine is the server and you start the translation session by running OTsoc3.bat or OTsoc3.sh. In this script, you have identified the server hostname and the port number that will act as the server socket, defined the attributes of the socket, and identified the output filename that will exist on the server machine.


The client machine connects to the server by running OTsoc3. This program connects to the server socket at port 5530 and establishes
the socket’s client/server relationship. At the prompts, the server’s
hostname, port number, and input filenames are entered. The
client machine sends the input file to the translator where it is
processed. The output is stored on the server, the session is ended,
and the port is closed.
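The shipped OTsoc3.c source is not shown in this guide. As a rough sketch of what such a client does, the following assumes standard BSD calls: resolve the server's hostname, connect to the listening port, and push the input file through the socket, looping because write() may accept only part of a buffer. All names here are illustrative.

```c
/* Hypothetical OTsoc3-style client: resolve the server's hostname,
   connect to its listening socket, and send data.  A sketch, not the
   shipped OTsoc3.c source. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Resolve host:port and connect; returns the socket or -1 on error. */
int connect_to(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;                       /* could not identify remote host */

    int fd = -1;
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                       /* connected to remote host */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

/* write() may send fewer bytes than asked; loop until all are out. */
int send_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n <= 0)
            return -1;
        buf += (size_t)n;
        len -= (size_t)n;
    }
    return 0;
}
```

A main() in this shape would prompt for the hostname, the port (5530), and the input filename (OTsoc.in), then read the file in chunks and hand each chunk to send_all.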

[Diagram: Server Socket #5530 (Passive Mode) and a Client Socket, joined when the client initiates the socket connection. Data flows from OTsoc3 on the outside machine (client) through the Application Integrator Translator to soc3.out on your machine (server).]

Figure 3. Sockets Example 3


Performing Example 3

Windows

❖ Note: Samples of the dialog boxes can be found in Example 1

1. If you are a Windows user, open the Notebook Editor. Modify OTsoc3.bat to specify your hostname in the -DINPUT_FILE parameter. Add –cs <Control Server number> where <Control Server number> is “pr” for production, “dv” for development, or “ts” for test. Save the changes and exit the Notebook Editor.
2. Start the translation.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc3.bat. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc3.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.
3. Start the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc3.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc3.exe filename in Windows Explorer and
double-clicking it.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.
4. At the DOS window, at the “Enter remote hostname” prompt,
type your host_name for the client machine.
Press the Enter key.
5. At the DOS window, at the “Enter the Server port number”
prompt, type
5530


Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
7. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the Session Output dialog box.

UNIX
1. Open the vi Editor and modify OTsoc3.sh to specify your
hostname in the -DINPUT_FILE parameter. Save the changes
and exit the vi Editor.
2. Compile the OTsoc3.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the client window and
Window 2 will be the server window.
4. Start the Application Integrator Control Server; type
otstart
5. In Window 1, at the command prompt, type
OTsoc3.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc3
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the client machine.
Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5530
Press the Enter key. This identifies the server socket port
number.


9. In Window 2, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
In Window 1 and Window 2, processing messages like those
shown below should appear indicating the example is
processing successfully.
When both windows return to the command prompt without
error messages, the example executed successfully.

(window 1)
$ OTsoc3.sh
Session# 000104 started
Session# 000104 completed successfully.
$

(window 2)
$ OTsoc3
Enter the remote host name: host_name
Enter the remote port number: 5530
Enter the input filename: OTsoc.in
Identified remote host
Connected to remote host
Opened input file
$


Example 4: Input from a unidirectional passive socket and output to a unidirectional passive socket

Example 4 is a two-phase exercise which demonstrates input from a unidirectional passive socket and output to a unidirectional passive socket. Unidirectional sockets process data in one direction only; therefore, for the example, two sockets must be established, one for input from one client machine and one for output to a different client machine. Because passive was specified as the connection mode, the sockets on the server machine will be server sockets, making the sockets on the client machines client sockets.

List of Additional Files Needed

File Description
OTsoc4i.c C source code—input file.
OTsoc4o.c C source code—output file.
OTsoc4i UNIX program created by the compile that
creates a client socket. It connects to the
corresponding server socket created by the
translator and sends input data.
OTsoc4i.exe Windows program that creates a client socket. It
connects to the corresponding server socket
created by the translator and sends input data.
OTsoc4o UNIX program created by the compile that
creates a client socket. It connects the
corresponding server socket created by the
translator and receives the results of the
translation.
OTsoc4o.exe Windows program that creates a client socket. It
connects the corresponding server socket created
by the translator and receives the results of the
translation.
OTsoc4.sh UNIX shell script that executes the example. You
must specify your machine name at the
host_name argument of the INPUT_FILE and
OUTPUT_FILE variables.
INPUT_FILE = “host_name:5540 [socket]
passive”
OUTPUT_FILE = “host_name:5541 [socket]
passive”


File Description
OTsoc4.bat Windows batch file that executes the example.
You must specify your machine name at the
host_name argument of the INPUT_FILE and
OUTPUT_FILE variables.
INPUT_FILE = “host_name:5540 [socket]
passive”
OUTPUT_FILE = “host_name:5541 [socket]
passive”

In Example 4, your machine is the server and you start the translation session by running OTsoc4.bat or OTsoc4.sh. In this script, you have identified your hostname, the port number and attributes for the input server socket, and the port number and attributes for the output server socket.
In phase 1, the first client machine connects to the server by running OTsoc4i. This program creates the unidirectional client socket, connects to the corresponding server socket on the server machine, and establishes the client/server relationship for the input data stream. After the file is processed through the translator, the connection is closed and the client screen returns to the command prompt.
In phase 2, the second client machine connects to the server by running OTsoc4o. This program creates the unidirectional client socket, connects to the corresponding server socket on the server machine, and establishes the client/server relationship for the output data stream. When the connection is made, the translator sends the output to the client machine and closes the connection. Closed messages appear on the server and client windows indicating the sockets are closed and the translation session has ended successfully.
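The phase 2 receive step can be sketched in the same hedged spirit: read from the connected descriptor until the translator closes the socket, writing everything that arrives to the output file. This is an illustration of the pattern, not the shipped OTsoc4o.c; the function name and buffer size are assumptions.

```c
/* Hypothetical sketch of the phase 2 receive step (OTsoc4o-style):
   read from a connected descriptor until end-of-file and write what
   arrives to the named output file. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

/* Returns the number of bytes written to `path`, or -1 on error. */
long recv_to_file(int fd, const char *path)
{
    FILE *out = fopen(path, "wb");
    if (out == NULL)
        return -1;

    char buf[4096];
    long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        if (fwrite(buf, 1, (size_t)n, out) != (size_t)n) {
            fclose(out);
            return -1;
        }
        total += n;
    }
    fclose(out);
    return n < 0 ? -1 : total;   /* n == 0 means the server closed the socket */
}
```

The translator closing the connection is what ends the loop: read() returns 0 at end-of-stream, the file is closed, and the client returns to the command prompt.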


[Diagram: two unidirectional passive server sockets on your machine (server). Server Socket #5540 carries the input data flow from OTsoc4i on the first outside machine (client) into the Application Integrator Translator; Server Socket #5541 carries the output data flow to OTsoc4.out on the second outside machine (client). Each client initiates its own socket connection.]

Figure 4. Sockets Example 4


Performing Example 4

Windows

❖ Note: Samples of the dialog boxes can be found in Example 1

1. If you are a Windows user, open the Notebook Editor. Modify OTsoc4.bat to specify your hostname in the -DINPUT_FILE and -DOUTPUT_FILE parameters. Add –cs <Control Server number> where <Control Server number> is “pr” for production, “dv” for development, or “ts” for test. Save the changes and exit the Notebook Editor.
2. Start the translation.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc4.bat. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc4.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.
3. Start phase 1 of the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc4i.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc4i.exe filename in Windows Explorer and
double-clicking it.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.
4. At the DOS window, at the “Enter remote hostname” prompt,
type your host_name for the client machine.
Press the Enter key.
5. At the DOS window, at the “Enter the Server port number”
prompt, type
5540


Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
7. Start phase 2 of the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc4o.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc4o.exe filename in Windows Explorer and
double-clicking it.
This program creates the client socket.
8. At the DOS window, at the “Enter the remote hostname”
prompt, type your host_name for the client machine.
Press the Enter key.
9. At the DOS window, at the “Enter the remote port number”
prompt, type
5541
Press the Enter key. This identifies the server socket port
number.
10. At the DOS window, at the “Enter output filename” prompt,
type
soc4.out
Press the Enter key. This identifies the output filename.
11. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the Session Output dialog box.


UNIX
1. Open the vi Editor. Modify OTsoc4.sh to specify your
hostname in the -DINPUT_FILE and the -DOUTPUT_FILE
parameters. Save the changes and exit the vi Editor.
2. Compile the OTsoc4i.c and the OTsoc4o.c programs for this
example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the client window and
Window 2 will be the server window.
4. Start the Application Integrator Control Server; type
otstart
5. In Window 1, at the command prompt, type
OTsoc4.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc4i
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the client machine.
Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5540
Press the Enter key. This identifies the server port number.
9. In Window 2, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
10. In Window 2, at the command prompt, type
OTsoc4o
Press the Enter key. This program creates the client socket.
11. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the client machine.


Press the Enter key.


12. In Window 2, at the “Enter the remote port number” prompt,
type
5541
Press the Enter key. This identifies the server port number.
13. In Window 2, at the “Enter the output filename” prompt, type
soc4.out
Press the Enter key. This identifies the output filename.
14. In Window 1 and Window 2, processing messages like those
shown below should appear indicating the example is
processing successfully.
When both windows return to the command prompt without
error messages, the example executed successfully.

(window 1)
$ OTsoc4.sh
Session# 000105 started
Session# 000105 completed successfully.
$

(window 2)
$ OTsoc4i
Enter the remote host name: host_name
Enter the remote port number: 5540
Enter the input filename: OTsoc.in
Identified remote host
Connected to remote host
Opened input file
$ OTsoc4o
Enter the remote host name: host_name
Enter the remote port number: 5541
Enter the output filename: soc4.out
Identified remote host
Connected to remote host
Opened output file
$


Example 5: Input from an active persistent socket and output to a file

Example 5 demonstrates an active persistent socket on a server machine. As an active socket, it acts as a client socket and attempts a connection to another machine. In the example, the persistent attribute identifies the interval that the translator will use in attempting connection to the server machine. As a persistent socket, it will attempt connection until a connection is made, until the time and retry attributes are met, or until the process is killed.

List of Additional Files Needed

File Description
OTsoc5.c C source code.
OTsoc5 UNIX program created by the compile that creates
a client socket. It connects to the server socket
created by the translator and sends data.
OTsoc5.exe Windows program that creates a client socket. It
connects to the server socket created by the
translator and sends data.
OTsoc5.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5550 [socket]
persistent 5 60”
OUTPUT_FILE = “soc5.out”
OTsoc5.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE variable.
INPUT_FILE = “host_name:5550 [socket]
persistent 5 60”
OUTPUT_FILE = “soc5.out”

In Example 5, you start the example by running OTsoc5.exe or OTsoc5 on your machine, the server. This program requires you to enter the input filename and the server port number that will act as the server socket.


On the client machine, the script OTsoc5.sh is run, which starts the translation session and attempts connection to the server socket. If the connection attempt fails, the session will wait for 60 seconds and try again. It will repeat up to five times, or until a connection is successful. Figure 5 illustrates that the client makes two failed attempts at connection before successfully connecting to the server. After connecting, the translator accepts input from OTsoc5, processes it, and outputs the data to soc5.out, located on the client machine.
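The persistent 5 60 schedule described above (retry up to five times, waiting 60 seconds between attempts) can be sketched generically. The attempt itself is abstracted behind a function pointer so the retry logic can be shown without a live server; all names here are illustrative assumptions, not the translator's internals.

```c
/* Sketch of a "persistent <retries> <interval>" schedule: attempt a
   connection, and on failure wait for the interval and try again, up
   to the retry limit.  Illustrative only. */
#include <unistd.h>

typedef int (*attempt_fn)(void *ctx);   /* returns 0 on success */

/* Returns 0 once an attempt succeeds, -1 after `retries` failures. */
int attempt_with_retry(attempt_fn try_once, void *ctx,
                       int retries, int interval_sec)
{
    for (int i = 0; i < retries; i++) {
        if (try_once(ctx) == 0)
            return 0;
        if (i + 1 < retries)
            sleep((unsigned)interval_sec);   /* wait before the next try */
    }
    return -1;
}

/* Demonstration stub matching Figure 5's scenario: the first two
   attempts fail and the third succeeds. */
int demo_try(void *ctx)
{
    int *attempts = (int *)ctx;
    *attempts += 1;
    return *attempts >= 3 ? 0 : -1;
}
```

With retries = 5 and interval_sec = 60, this matches the INPUT_FILE spec "host_name:5550 [socket] persistent 5 60"; in Figure 5's scenario, demo_try fails twice and the loop succeeds on the third call.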
[Diagram: Server Socket #5550 and a Client Socket in Active Persistent Mode. The client initiates the socket connection on the third attempt, after Failed Socket Connection 1 and Failed Socket Connection 2. Data flows from OTsoc5 on your machine (server) through the Application Integrator Translator to soc5.out on the outside machine (client).]

Figure 5. Sockets Example 5


Performing Example 5

Windows

❖ Note: Samples of the dialog boxes can be found in Example 1

1. If you are a Windows user, open the Notebook Editor. Modify OTsoc5.bat to specify your hostname in the -DINPUT_FILE parameter. Add –cs <Control Server number> where <Control Server number> is “pr” for production, “dv” for development, or “ts” for test. Save the changes and exit the Notebook Editor.
2. Start the translation.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc5.bat. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc5.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.
3. Start the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc5.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc5.exe filename in Windows Explorer and
double-clicking it.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.
4. At the DOS window, at the “Enter the Server port number”
prompt, type
5550
Press the Enter key. This identifies the server socket port
number.


5. At the DOS window, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
6. At the Session Output dialog box, a “Session ended: err: 0” message should appear indicating the example ran successfully. Choose the Quit button to close the Session Output dialog box.

UNIX
1. Open the vi Editor and modify OTsoc5.sh to specify your
hostname in the -DINPUT_FILE parameter. Save the changes
and exit the vi Editor.
2. Compile the OTsoc5.c program to create the executable for this
example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2.
4. Start the Application Integrator Control Server; type
otstart
5. At the command prompt in Window 1, type
OTsoc5
Press the Enter key. This starts the example, opens the client
socket, and establishes the client/server relationship.
6. In Window 1, at the “Enter the Server port number” prompt,
type
5550
Press the Enter key. This identifies the server socket port
number.
7. In Window 1, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
8. In Window 2, at the command prompt, type
OTsoc5.sh


Press the Enter key. This starts the shell script to process the
example.
In Window 1, the Opened input file message should appear to
indicate you have successful communication and the example is
executing.
When both windows return to the command prompt without error
messages, the example executed successfully.

(window 1)
$ OTsoc5
Enter the Server port number: 5550
Enter the input filename: OTsoc.in
Opened input file
$

(window 2)
$ OTsoc5.sh
Session# 000106 started
Session# 000106 completed successfully.
$


Example 6: Input and output through a bidirectional persistent server socket

Example 6 is a two-phase exercise that demonstrates the bidirectional persistent server socket. This type of socket is used to receive data and send data through the same socket; it is opened by a client and remains open until a client closes it. It is used in cases where the server does not store the input data or the output data. The server in Example 6 is used only as a place to translate data. See Figure 6.

List of Additional Files Needed

File Description
OTsoc6.c C source code.
OTsoc6 UNIX program created by the compile that creates
a bidirectional client socket. It connects to the
bidirectional persistent server socket created by the
translator and sends and receives data.
OTsoc6.exe Windows program that creates a bidirectional
client socket. It connects to the bidirectional
persistent server socket created by the translator
and sends and receives data.
OTsoc6.sh UNIX shell script that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE and OUTPUT_FILE
variables.
INPUT_FILE = “host_name:5560 [socket] passive
persistent”
OUTPUT_FILE = “host_name:5560 [socket]&”
OTsoc6.bat Windows batch file that executes the example. You
must specify your machine name at the host_name
argument of the INPUT_FILE and OUTPUT_FILE
variables.
INPUT_FILE = “host_name:5560 [socket] passive
persistent”
OUTPUT_FILE = “host_name:5560 [socket]&”


In Example 6, your machine is the server and you start the translation session by running OTsoc6.bat or OTsoc6.sh. In this script, you have identified your hostname, the port number that will act as the server socket, and defined the attributes of the socket.
The client machine connects to the server by running OTsoc6. This
program connects to the server socket at port 5560 and creates the
persistent bidirectional socket and establishes the client/server
relationship. It also identifies the input and output filenames. The
input is sent through the socket and processed in the server’s
translator. The server sends the output back to the client through
the same socket and the socket remains open after processing is
through. The server will wait until another client connects for
processing or until a close signal is received from a client.
The client then closes the socket. The client machine connects to the
server by running OTsoc6 again. This time, the same hostname and
socket port number are identified but the input file, OTclose.in, will
contain the code to close the socket. The output file, soc.out, will
return as a 0-byte file if the socket closure is successful. The closed
session message will appear on the server window indicating the
socket is closed and the translation session is ended successfully.
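The flow of Example 6, a passive persistent server socket that serves data connections until a client sends a close signal, can be sketched with ordinary TCP sockets. The following Python sketch is illustrative only; it is not Application Integrator code. The CLOSE_SIGNAL payload stands in for the contents of OTclose.in, the "input data" payload for OTsoc.in, and an ephemeral port for the manual's port 5560.

```python
import socket
import threading

CLOSE_SIGNAL = b"CLOSE"          # stand-in for the contents of OTclose.in

def read_all(sock):
    # Read until the peer closes (or half-closes) its sending side.
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            return buf
        buf += chunk

def server(srv):
    while True:                  # persistent: stay up across connections
        conn, _ = srv.accept()
        data = read_all(conn)
        if data == CLOSE_SIGNAL:
            conn.close()         # reply with nothing: the "0-byte file" case
            break
        conn.sendall(b"translated:" + data)   # play the translator's role
        conn.close()
    srv.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # ephemeral port; the manual's example uses 5560
srv.listen(1)                    # passive: wait for clients to connect
port = srv.getsockname()[1]
t = threading.Thread(target=server, args=(srv,))
t.start()

def client(payload):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        c.shutdown(socket.SHUT_WR)   # done sending; now read the reply
        return read_all(c)

out1 = client(b"input data")     # phase 1: send input, receive output
out2 = client(CLOSE_SIGNAL)      # phase 2: send the close signal
t.join()
print(out1)                      # b'translated:input data'
print(out2)                      # b'' (the successful-closure result)
```

The two client calls mirror the two runs of OTsoc6 in the procedures below: the first exchanges data over the open socket, the second delivers the close signal and receives an empty reply.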

[Figure 6. Sockets Example 6: the Application Integrator translator on
your machine (the server) holds server socket #5560 in passive
persistent mode; OTsoc6 on the outside machine (client1) initiates the
socket connection, and data flows in both directions over it.]



Application Integrator Sockets Examples

Performing Example 6

Windows

❖ Note: Samples of the dialog boxes can be found in Example 1.

1. Open the Notebook Editor. Modify OTsoc6.bat to specify your
hostname in the -DINPUT_FILE parameter. Add -cs <Control Server
number>, where <Control Server number> is "pr" for production, "dv"
for development, or "ts" for test. Save the changes and exit the
Notebook Editor.
2. Start the translation.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc6.bat. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc6.bat filename in Windows Explorer and
double-clicking it.
This starts the program to process the translation. A Session
Output dialog box will appear.
3. Start phase 1 of the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc6.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc6.exe filename in Windows Explorer and
double-clicking it.
This starts the example, opens the client socket, and establishes
the client/server relationship. A DOS window will appear.
4. At the DOS window, at the “Enter the remote hostname”
prompt, type
your host_name for the server machine.
Press the Enter key.
5. At the DOS window, at the “Enter the remote port number”
prompt, type


5560
Press the Enter key. This identifies the server socket port
number.
6. At the DOS window, at the “Enter the input filename” prompt,
type
OTsoc.in
Press the Enter key. This identifies the input filename.
7. At the DOS window, at the “Enter the output filename”
prompt, type
soc6.out
Press the Enter key. This identifies the output filename.
8. Start phase 2 of the example.
Display the Windows Run dialog box according to procedures
for your version of Windows. In the value entry box, type the
path and OTsoc6.exe. Choose OK.
- or -
For systems using Windows ’95 or NT 4.0, run the example by
locating the OTsoc6.exe filename in Windows Explorer and
double-clicking it.
9. At the DOS window, at the “Enter remote hostname” prompt,
type
your host_name for the server machine.
Press the Enter key.
10. At the DOS window, at the “Enter the remote port number”
prompt, type
5560
Press the Enter key. This identifies the server port number.
11. At the DOS window, at the “Enter input filename” prompt,
type
OTclose.in
Press the Enter key. This identifies the input filename.
12. At the DOS window, at the “Enter output filename” prompt,
type
soc.out


Press the Enter key. This identifies the output filename.


13. At the Session Output dialog box, a “Session ended: err: 0”
message should appear indicating the example ran successfully.
Choose the Quit button to close the Session Output dialog box.

UNIX
1. Open the vi Editor. Modify OTsoc6.sh to specify the hostname
in the -DINPUT_FILE and -DOUTPUT_FILE parameters. Save
the changes and exit the vi Editor.
2. Compile the OTsoc6.c program for this example.
3. Open two UNIX windows and arrange them so they are both in
view. In the following figure, the windows are titled Window 1
and Window 2. Window 1 will be the server window and
Window 2 will be the client window.
4. Start the Application Integrator Control Server. At the command
prompt, type
otstart
5. In Window 1, at the command prompt, type
OTsoc6.sh
Press the Enter key. This starts the shell script that executes the
example.
6. In Window 2, at the command prompt, type
OTsoc6
Press the Enter key. This program creates the client socket.
7. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the server machine.
Press the Enter key.
8. In Window 2, at the “Enter the remote port number” prompt,
type
5560
Press the Enter key. This identifies the server socket port
number.
9. In Window 2, at the “Enter the input filename” prompt, type
OTsoc.in
Press the Enter key. This identifies the input filename.
10. In Window 2, at the “Enter the output filename” prompt, type


soc6.out
Press the Enter key. This identifies the output filename.
11. In Window 2, at the command prompt, type
OTsoc6
Press the Enter key. This creates the bidirectional client socket.
12. In Window 2, at the “Enter the remote hostname” prompt, type
your host_name for the server machine.
Press the Enter key.
13. In Window 2, at the “Enter the remote port number” prompt,
type
5560
Press the Enter key. This identifies the server socket port
number.
14. In Window 2, at the “Enter the input filename” prompt, type
OTclose.in
Press the Enter key. This identifies the input filename.
15. In Window 2, at the “Enter the output filename” prompt, type
soc.out
Press the Enter key. This identifies the output filename.
In Window 1 and Window 2, processing messages like those
shown below should appear indicating the example is
processing successfully.
When both windows return to the command prompt without
error messages, the example executed successfully.


Window 1 (server):

$ OTsoc6.sh
Session# 000107 started
Session# 000107 completed successfully.
$

Window 2 (client):

$ OTsoc6
Enter the remote host name: host_name
Enter the remote port number: 5560
Enter the input filename: OTsoc.in
Enter the output filename: soc6.out
Identified remote host
Connected to remote host
Opened input file
Opened output file

$ OTsoc6
Enter the remote host name: host_name
Enter the remote port number: 5560
Enter the input filename: OTclose.in
Enter the output filename: soc.out
Identified remote host
Connected to remote host
Opened input file
Opened output file
$


Configuring a Socket

To determine how to configure a socket, you will need the following
information:

Will you be sending or receiving data using a unidirectional socket?
    Sending data requires defining an OUTPUT_FILE. Receiving data
    requires defining an INPUT_FILE.

Who will listen and who will connect using a unidirectional socket?
    Listening requires configuring the socket as passive. Initiating
    the connect requires configuring the socket as active (the
    default).

Will you be sending and receiving data? Do you have limited socket
resources (the number of sockets in use is high, or the system is
heavily loaded)?
    Use a bidirectional socket, where messages can travel to and from
    either end. Define both INPUT_FILE and OUTPUT_FILE with the same
    socket number.

Who will listen and who will connect using a bidirectional socket?
    Listening requires configuring the input socket as passive and the
    output socket as bidirectional. Initiating the contact requires
    configuring the input socket as active and the output socket as
    bidirectional.

What is the hostname?
    For an OUTPUT_FILE, use the name of the host you will be sending
    to. For an INPUT_FILE, use the name of the host running
    Application Integrator.

Should the socket remain open after the first connect?
    Use persistent to keep the socket open and retry the connect. Use
    the default to disconnect after one connection attempt.

How many retries?
    For continuous retries, use the default. For a specific number of
    retries, specify retry and a retry time period.

Are application level protocols for control defined?
    If application-level headers are to be used, they must be modeled
    within Application Integrator to allow for their processing.
    Application-level protocols are headers, trailers, or other
    strings of data that the parties to the socket agree to use to
    tell each other to perform specific activities. For example, a
    header and trailer surrounding the data may be included by the
    sender so the receiver can validate that all the records sent were
    received and that some routing request or other element of service
    is indicated. They may also be used to synchronize a bidirectional
    socket. Typical uses of header information include sender and
    receiver IDs, control numbers, record counts, dates, version
    indicators, and identifying strings.
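Putting these answers together, a few representative configurations follow. The option spellings mirror the examples earlier in this section; the hostnames are placeholders, and you should confirm exact option syntax against the socket specifications reference.

```
# Unidirectional, receive, this machine listens (passive):
INPUT_FILE = "myhost:5560 [socket] passive"

# Unidirectional, send, this machine initiates the connect (active is the default):
OUTPUT_FILE = "otherhost:5560 [socket]"

# Bidirectional, persistent, this machine listens (as in Example 6):
INPUT_FILE = "myhost:5560 [socket] passive persistent"
OUTPUT_FILE = "myhost:5560 [socket]&"
```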


Error Messages

The following table contains information about the types of errors
that might occur when working with sockets. For additional information
regarding errors, please refer to the Trade Guide for System
Administration User's Guide, Appendix B, "Application Integrator
Runtime Errors."

Code -1, Connection closed
    Possible cause: One of the programs (client or server) exited,
    thereby closing the connection.
    Resolution: Verify that both programs are running.

    Possible cause: The connection between the client and server was
    broken due to network problems.
    Resolution: Contact your network or system administrator.

Code 160, Input Error
    Possible cause: The server that creates the input socket was not
    running when the client tried to connect to it.
    Resolution: Run the server.

    Possible cause: The input socket could not be created by the
    server due to lack of system resources.
    Resolution: Contact your network or system administrator.

    Possible cause: The input socket could not be created by the
    client because it could not connect to the server.
    Resolution: Verify that the server is running. Check whether
    another client is connected to the server. If necessary, contact
    your network or system administrator.

Code 161, Output Error
    Possible cause: The server that creates the output socket was not
    running when the client tried to connect to it.
    Resolution: Run the server.

    Possible cause: The output socket could not be created due to lack
    of system resources.
    Resolution: Contact your network or system administrator.

    Possible cause: The output socket could not be created by the
    client because it could not connect to the server.
    Resolution: Verify that the server is running. Check whether
    another client is connected to the server. If necessary, contact
    your network or system administrator.


UNIX FIFO Specifications

Specify the FIFO device type with the following syntax:
    "<fifo_dev_name> [fifo]"
Example:
    INPUT_FILE = "/tmp/fifo1 [fifo]"
The example above opens the FIFO file "/tmp/fifo1" for use as the
translator's input byte stream. FIFOs can be used to connect the
output of a program to the input of the translator. Even when you do
not have the source of a program and cannot change its output
destination, FIFOs can "pipe" its output to the translator, because a
FIFO appears to applications as an ordinary file.
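The "appears to be a file" property is the whole trick: the producing program opens and writes the FIFO exactly as it would a regular file, while a consumer (here standing in for the translator) reads it. A minimal UNIX-only Python sketch, with a temporary path in place of the manual's /tmp/fifo1:

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "fifo1")
os.mkfifo(fifo)                      # create the FIFO special file

def producer():
    # The producer treats the FIFO like an ordinary output file.
    with open(fifo, "w") as f:
        f.write("record 1\nrecord 2\n")

t = threading.Thread(target=producer)
t.start()

# The consumer (the "translator" role) reads the FIFO as its input stream.
with open(fifo) as f:
    data = f.read()
t.join()
os.remove(fifo)
print(data)
```

Note that opening a FIFO blocks until both a reader and a writer are present, which is why the producer runs in a separate thread here.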

UNIX Pipes Specifications

Specify the pipe device type with the following syntax:
    "<piped_program_name> [pipe] <program_parameters>"
Example:
    INPUT_FILE = "/usr/home/arrpt1 [pipe] -Prmslp"
This example invokes the program "/usr/home/arrpt1" with the parameter
"-Prmslp"; its standard output is piped to the translator's input byte
stream. The program can be any program that writes to standard output,
such as a command-line SQL query.
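The [pipe] idea, running a program and consuming its standard output as the input byte stream, can be sketched as follows. The "program" here is a trivial stand-in for a report generator like the manual's /usr/home/arrpt1:

```python
import subprocess
import sys

# Launch any command that writes to standard output; capture its stdout
# as a stream, the way the translator consumes a [pipe] device.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('piped record')"],
    stdout=subprocess.PIPE,
    text=True,
)
stream = proc.stdout.read()   # the translator-side read of the pipe
proc.wait()
print(stream)                 # piped record
```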


UNIX Message Queues Specifications

Specify the message queue device type with the following syntax:
    "<msg_key>:<msgq_subkey> [msgq] buffer <max_msg_size> create <open_mode>"
Examples:
    1. INPUT_FILE = "0x3874252A [msgq]"
    2. INPUT_FILE = "0x389F2340:03 [msgq] buffer 4096 create 0644"
The first example connects to a message queue with the key value
"0x3874252A", using default values for the subkey ID and buffer size.
Messages passed on the queue supply the translator's input byte
stream. Because the create option is not specified, the translator
reports an input file error if the message queue does not already
exist.
The second example shows the use of the subkey ID option (03 in this
case), the buffer size option, and the create option with the
read/write mode specified. If the message queue already exists, the
translator will report an input file error. Subkey IDs are used as the
"mtype" field in the system message structure "msgbuf" and allow a
message queue to be shared among different I/O streams, reducing the
number of system message queue IDs in use.




Section 5
The Data Modeling Process

This section provides a detailed walk-through of the steps to data
modeling. It is based on the Workbench features and interface
described in Sections 1 through 4.
The final parts of this section provide Application Integrator
recommendations for file naming and other data modeling conventions,
and examples of the data modeling worksheets mentioned in the text.


List of Steps to Data Modeling

The steps to data modeling, as described in this section, are:
Step 1: Obtain the Translation Definition Requirements
Step 2: Analyze the Definition Requirements
Step 3: Obtain the Test Input File(s)
Step 4: Lay Out the Environment Flow
Step 5: Complete the Environment Definition
Step 6: Create Source and Target Data Model Declarations
Step 7: Create a Map Component File for Each Environment
Step 8: List Source to Target Mapping Variables
Step 9: Create Data Model Rules
Step 10: Enter the Profile Database Values
Step 11: Run Test Translations and Debug
Step 12: Make Backup Files
Step 13: Migrate the Data


Step 1: Obtain the Translation Definition Requirements

Obtain all available documentation that explains the syntax,
structure, and mapping rules that apply to the translation you are
going to model. The syntax defines the characteristics of the
components, such as the character sets, fixed or delimited data, and
identification and tag definitions. The structure defines the
relationships between the components and the occurrence constraints.
The mapping rules define the semantic meaning of the components, to
accurately associate the source with the target.
If documentation is unavailable, find the person in your organization
or at your trading partner's organization who can provide an
understanding of the content and structure of the data for electronic
commerce. From this person, obtain the syntax, structure, and data
mapping requirements.
If neither documentation nor a contact person is available to relay
the data and translation requirements, you must obtain the information
by examining the data files. This method, of course, makes translation
definition a process of trial and error. The more complicated the
translation requirement, the less assured you can be of an accurate
modeling definition.

Step 1 should generate:
• An accumulation of source data documentation: syntax, structure,
  and semantic definition of the data.
• An accumulation of target data documentation: syntax, structure,
  and semantic definition of the data.
• Contact(s) for source/target questions.


Step 2: Analyze the Definition Requirements

Using the resources available to you (documentation, contacts, and/or
examination of the input data), analyze the syntax, structure, and
mapping requirements of the source and target until you have a
complete understanding of each. Review the source syntax and structure
first, the target syntax and structure next, and then the mapping
rules between the source and target.

Syntax

For a complete understanding of syntax, you must first understand each
type of item (field/element, record/segment) contained in the data.
Common types of items are: Alphanumeric, Alpha, Text, Implied Decimal
Numeric, Explicit Decimal Numeric, Date, Time, Records, Segments, and
so on. To prepare for your translation, make a list of each type of
item that is used. A different item should appear on the list whenever
the character set and/or the pre- and post-conditions differ.

Character Set

When an item has a unique character set, which differs from other
items, a different type of item is used. For example, five different
types of items would be used for the following:
• Numeric - allows 0-9, -, +, .
• Date - allows 0-9, with valid month and day values
• Text - allows all printable characters
• Alpha - allows the character set range A-Z
• Segment - contains a tag at the beginning and a delimiter at the end
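One informal way to picture these item types is as character-set patterns. The following sketch is illustrative only; it is not the access model syntax, and the two-digit date layout is an assumed example:

```python
import re

# Character sets for two of the item types listed above.
item_types = {
    "Numeric": r"[0-9+.\-]+",   # 0-9, -, +, .
    "Alpha":   r"[A-Z]+",       # the character set range A-Z
}

def is_date(s):
    # A date item needs more than a character set: the month and day
    # values must also be valid (YYMMDD layout assumed here).
    return (re.fullmatch(r"[0-9]{6}", s) is not None
            and 1 <= int(s[2:4]) <= 12
            and 1 <= int(s[4:6]) <= 31)

print(bool(re.fullmatch(item_types["Numeric"], "-12.50")))  # True
print(is_date("990831"))                                    # True
print(is_date("991331"))                                    # False: month 13
```

The Date case shows why "Special Date Function" items exist: a plain character set cannot express value constraints like valid months.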


Pre- and Post-Conditions

Delimited data typically allows for the use of several delimiters.
These delimiters can be used either as the pre-conditions or the
post-conditions of items. You need to understand the delimiters used
in your input file to know when another type of item must be defined.
For example, consider fixed-length data within variable-length
records. The data fields have no pre- or post-condition. The record,
however, has a post-condition of a delimiter (possibly a line feed, or
carriage return/line feed).
Another example is the UN/EDIFACT standard, which specifies delimited
data using three different delimiter characters. The rules specifying
which delimiters can precede or follow which item types are very
specific in this standard's syntax.
Defining many item types, to accommodate specific character sets and
pre- or post-conditions, yields a more accurate translation
definition. Using only a few item types offers less assurance that the
proper item has been recognized (source) or constructed (target)
during translation. Imprecise item type definitions may also cause
invalid item recognition, causing the translation to fail on a later
item or the wrong data to be mapped.
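The UN/EDIFACT case can be sketched to show delimiter-driven parsing in miniature. This Python fragment is illustrative only: the sample segments are hypothetical, and release-character (escape) handling, which real EDIFACT parsing requires, is omitted:

```python
# UN/EDIFACT-style delimiters: ' terminates a segment, + separates
# elements, : separates components within an element.
raw = "UNH+1+ORDERS:D:96A:UN'BGM+220+128576'"

segments = [s for s in raw.split("'") if s]          # post-condition: '
parsed = {
    seg.split("+")[0]:                               # segment tag
    [elem.split(":") for elem in seg.split("+")[1:]]  # elements/components
    for seg in segments
}
print(parsed["UNH"])   # [['1'], ['ORDERS', 'D', '96A', 'UN']]
```

Each delimiter acts as the post-condition of one item type and the pre-condition of the next, which is exactly why each combination calls for its own item type definition.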

Structure

For a complete understanding of structure, you must become familiar
with how items are assembled in the message. Three attributes are used
to define structure: sequence, occurrence, and relationship.

Sequence

Sequence is the order in which items can appear. It can be rigid or
random. A rigid example is when a standard requires records to appear
in a certain order (record type A cannot appear after record type B).
A random example is when records can appear in any order (record type
A can appear before or after record type B).

Occurrence

Occurrence is the number of times an item can repeat in succession.
The minimum and maximum occurrence of an item specify the number of
times the item must (minimum) and can (maximum) repeat. A zero minimum
occurrence indicates that the item is optional. A minimum occurrence
of one or greater indicates that the item is mandatory for the
specified number of occurrences.


Relationship

Relationship defines an item's association with other items, and is
represented with three terms: parent, child, and sibling. Parent
represents a higher-level relationship to a child item. Child
represents a lower-level relationship to a parent item. Sibling
represents a same-level relationship to another item.
The following table lists examples of the use of these terms.

Diagram        Description
Record_A       Record_A is the parent of Field_A1 and Field_A2
  Field_A1     Record_A is the sibling of Record_B
  Field_A2     Field_A1 is the child of Record_A
Record_B       Field_A1 is the sibling of Field_A2
  Field_B1
  Field_B2
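The relationships in the table above can be captured in a simple nested structure. This is an illustration of the terminology only, not data model syntax:

```python
# Parents map to their children; top-level records have no parent.
structure = {
    "Record_A": ["Field_A1", "Field_A2"],
    "Record_B": ["Field_B1", "Field_B2"],
}

parent_of = {child: parent
             for parent, children in structure.items()
             for child in children}

def are_siblings(a, b):
    # Siblings share the same parent (or are both at the top level).
    return parent_of.get(a) == parent_of.get(b)

print(parent_of["Field_A1"])                   # Record_A
print(are_siblings("Field_A1", "Field_A2"))    # True
print(are_siblings("Record_A", "Record_B"))    # True: both top-level
```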

Mapping

For a complete understanding of mapping, you must understand what is
required to semantically identify and manipulate the data. The
specific meaning of a data model item is usually determined by one or
more of the following:
• The item's location in the structure
• The value of another item that qualifies its meaning
• Its occurrence (for example, the fourth instance has a specific
  meaning)

Once the semantic meaning of the item's value is known, it can be
properly mapped.
Sometimes data has to be manipulated or changed from how it appears in
the source to how it is output in the target. Conversion between field
sizes and item types occurs automatically within Application
Integrator. Fields are either padded or truncated to adjust the size.
For item type differences, the field is converted; for example, a
source alphanumeric type might be converted to a target implied
decimal type.
Other types of manipulation have to be performed manually through the
use of rules (mapping rules). Functions are provided which perform the
following:


Function            Description
Case convert        Change to all uppercase or all lowercase letters
String manipulate   Trim, concatenate, replace characters, substring
Code verification   Verify the value is contained in a list of
                    acceptable codes
Cross-reference     Replace a value with another value
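The four functions above can be sketched as plain Python helpers. This is an illustration of what each function does, not Application Integrator rule syntax, and the sample codes and cross-reference table are hypothetical:

```python
def case_convert(value, upper=True):
    # Change to all uppercase or all lowercase letters.
    return value.upper() if upper else value.lower()

def string_manipulate(value):
    # Trim, replace characters, substring (concatenation works the same way).
    return value.strip().replace("-", "")[:10]

def code_verify(value, valid_codes):
    # Verify the value is contained in a list of acceptable codes.
    if value not in valid_codes:
        raise ValueError("code %r not in acceptable code list" % value)
    return value

def cross_reference(value, xref):
    # Replace a value with another value.
    return xref.get(value, value)

code = code_verify("ea", {"ea", "cs", "dz"})          # hypothetical code list
unit = cross_reference(case_convert(code), {"EA": "EACH"})
print(unit)   # EACH
```

Chaining the helpers, as the last two lines do, mirrors how several mapping rules can be applied to one source value before it is assigned to the target.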

This manipulation allows the data to be properly prepared for output
in the required format.
Make sure that you understand the structure on both the source and
target sides. Inaccurate occurrence constraints between the source and
target will cause errors. The areas to watch for are:
• Target minimum is greater than the source minimum occurrence.
  Structure compliance can be met on the source side with an error
  occurring on the target. You can model around this error using
  "absent" rules, which default a value. For example, defaulting a
  literal or a database substitution would allow the minimum target
  occurrence to be met.
• Target maximum is less than the source maximum occurrence.
  Structure compliance can be met on the source side with an error
  occurring on the target. You can model around this error using
  source rules, which limit the number of occurrences mapped to the
  MetaLink or Array variables whose values will be assigned to the
  target.
  You can also model around this error using target rules, which
  reference the variables (pull values off the variable list) but do
  not assign them to the data model items for occurrences greater than
  the maximum.
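The two occurrence mismatches just described, and the workarounds for them, amount to padding up to the target minimum and truncating at the target maximum. A hedged sketch (illustrative only, not rule syntax):

```python
def map_occurrences(source_values, t_min, t_max, default=""):
    # Truncate: drop source occurrences beyond the target maximum
    # (the "limit the occurrences mapped" workaround).
    out = list(source_values)[:t_max]
    # Pad: default a value until the target minimum is met
    # (the "absent rule" workaround).
    while len(out) < t_min:
        out.append(default)
    return out

# Source min (1) below target min (2): one value is defaulted.
print(map_occurrences(["A"], t_min=2, t_max=3, default="N/A"))  # ['A', 'N/A']
# Source max (4) above target max (3): the extra occurrence is dropped.
print(map_occurrences(["A", "B", "C", "D"], t_min=1, t_max=3))  # ['A', 'B', 'C']
```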


Step 2 should generate:
• A list containing the types of items needed for the source.
• A list containing the types of items needed for the target.

The following are example lists of item types for the source and
target.

Source (fixed-length fields in delimited records)

Item          Character Set                      Pre-Condition  Post-Condition
Alphanumeric  any character between ' ' and '~'  none           none
Alpha         A-Z, a-z, space                    none           none
Numeric       Special Numeric Function           none           none
Date          Special Date Function              none           none
Time          Special Time Function              none           none
Record        tag: Alpha                         none           line feed character

Target (variable-length fields in delimited records)

Item          Character Set                      Pre-Condition   Post-Condition
Alphanumeric  any character between ' ' and '~'  elem-delimiter  elem-delimiter or sgmt delimiter
Alpha         A-Z, a-z, space                    elem-delimiter  elem-delimiter or sgmt delimiter
Numeric       Special Numeric Function           elem-delimiter  elem-delimiter or sgmt delimiter
Date          Special Date Function              elem-delimiter  elem-delimiter or sgmt delimiter
Time          Special Time Function              elem-delimiter  elem-delimiter or sgmt delimiter
Record        tag: Alphanumeric                  sgmt delimiter  sgmt delimiter


Step 3: Obtain the Test Input File(s)

When obtaining an input file for testing the data models, the volume
or size of the input file is not as important as having an input file
that contains all acceptable variations of the input structure. This
includes not just expected variations, but all possible variations as
defined by the structure definition. The goal is to test all possible
structure and content combinations, to ensure that the translation
definition will not fail once placed into production mode.
If you are unable to obtain a fair representation of input data, you
will have to use a text editor to create the input file: either take
an existing file and alter its structure and content, or create the
file from scratch. You must complete Step 2, Analyze the Definition
Requirements, before this file can be created or modified.

Step 3 should generate:
• Test input file(s) containing all possible data variations.


Step 4: Lay Out the Environment Flow

The layout of the environment flow is a pictorial representation of
the various elements that are brought together to configure the
translator to process in a certain way (for example, it shows the
input files, output files, and other components) and the order in
which they are used. Each environment provides the ability to alter
the configuration of the translator and allows for the modular
creation of data models. Refer to Section 4 for further discussion and
illustrations of environments.
Changing environments during translation can affect the following
configuration components:
• Access models
  Changing the access models allows you to add, change, or remove item
  type definitions. This includes adding, changing, or removing access
  delimiter characters and changing the use of access model COUNTERs.
• Input and output files
  Changing the input file allows you to bring different data into the
  translation. By changing the output file, the output data can be
  filtered to different files.
• Profile Database key prefixes
  Changing the database key prefixes provides different views within
  the Profile Database. Different views may be required at various
  points in the translation.
• Find match limit
  Changing the scan-forward limit (FINDMATCH_LIMIT) in the generic
  model OTNxtStd.mdl allows you to reduce or increase the scope of
  searching for a specific character sequence. Refer to the section on
  #FINDMATCH in Appendix B for more details.


Advantages of modular modeling in the environment flow are:
• Reduction in the modeling effort
  Breaking a translation down into data models facilitates reuse. By
  creating a base of existing data models to choose from, the modeling
  effort for future transactions is reduced, because the same
  processing logic does not have to be re-created and tested.
• Reduction in data model maintenance and testing
  When data models are written as modules for use in multiple
  translations, changes spanning several translations can be
  implemented and tested once within the generic data models.

Step 4 should generate:
• An environment processing flow depicting the relationship of the
  environments used during translation.


Step 5: Complete the Environment Definition

The Environment Definition Worksheet is provided as a means to
consolidate the information needed to define each environment. Some of
this information, such as the access models to be used, will require
research. The output of this step will be the input to the next step.
For each new environment defined in Step 4, complete the following
sections of the Environment Definition Worksheet, an example of which
is found at the end of this section.

Map Component Filename
    Assign a unique label for the name of the environment/map
    component file. Do not begin the label with the two letters "OT";
    these letters are reserved for standard Application Integrator
    names. The length of the label is operating-system dependent. When
    assigning a name, take into consideration all platforms on which
    the map component file will be used. We recommend limiting the
    length of these filenames to 8 characters.

Production, Development, or Functional area
    Check this section to indicate whether this environment definition
    will be used strictly for development and testing, or whether it
    will also be used as the production functional area.

Environment Description
    Write a short description of the intended purpose of this
    environment. The description will be useful to other modelers who
    may have to follow your work, or for your own later reference.

Input/Output file(s)
    Identify whether any new input or output files are to be used in
    this environment. The names can be literally defined, or can be
    dynamically defined at translation time using translation data
    (data from within the message being processed, or the
    translation-assigned session control number). Check the
    appropriate box next to the input/output file on the worksheet to
    indicate where this name will be defined or obtained from.

Access Models
    From Step 2's analysis of the syntax, determine whether an
    existing access model will be used or a new one will be created.
    Any existing model may be used, provided it contains all required
    types. Otherwise, an existing model should be copied and modified.
    A new access model may also be created if desired. Usually,
    copying an existing access model to a new name and modifying it as
    necessary provides the most effective results.
    The decision to add or create should be based on the uniqueness of
    the new access item types. If the items are totally unique to this
    translation (most likely not used for other translations), then a
    new access model is recommended. If, however, the items will be
    regularly encountered in other translations, adding to the
    existing access model is recommended.

Data Models
    Assign unique labels for the names of the data models. Do not
    begin the labels with the two letters "OT"; these letters are
    reserved for standard Application Integrator names. The length of
    the name is operating-system dependent. When assigning a name,
    take into consideration all platforms on which the data models
    will be used. We recommend limiting the length of these filenames
    to 8 characters.
    A data model only has to be defined if its particular mode of
    processing will be needed in the environment. Look at the
    environment flow to determine whether the source, target, or both
    modes of processing will be needed.
    If the names of the data models are to be dynamically determined
    at translation time, make sure you check the box below them on the
    worksheet. The names can be obtained from the Profile Database
    through substitutions. Checking the box will help you remember to
    set up the necessary substitutions in the database.

Profile Database Key If values are to be obtained from the
Prefixes, Profile Database, you will have to
X/Ref Values establish the appropriate database
key.
The database key is assigned when
database values are entered or
loaded (Trading Partner data or ID
code lists).
The x/ref values are the values
extracted out of the input stream (for
example, sender and receiver IDs)
which will be cross-referenced to the
database keys. The extracted values
are trimmed of trailing spaces and
concatenated together, delimited by
the tilde (~) character. Cross-
referencing the extracted input
values minimizes the impact when
these values change. When the
values change, the cross-referenced
values can be changed, with no
changes to the database keys or
alteration to the Profile Database.
Refer to the “Profile Database
Lookups” section, found later in this
section, for details.
User-Defined Environment Keyword Variables
If special environment keyword variables will be needed, they
should be recorded where they will be easy to locate and define.
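The x/ref value derivation described earlier in this table (trailing spaces trimmed from each extracted value, the results joined with the tilde character) can be sketched in Python. This is only an illustration of the described behavior; the function name is not part of the product:

```python
def build_xref_value(*extracted: str) -> str:
    """Join values extracted from the input stream (for example,
    sender and receiver IDs) into a single x/ref lookup value:
    trailing spaces are trimmed and the pieces are concatenated
    with the tilde (~) delimiter."""
    return "~".join(value.rstrip(" ") for value in extracted)
```

For example, a padded sender ID "SENDER01  " and receiver ID "RECEIVER9" would yield the lookup value SENDER01~RECEIVER9.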

Source and Target Special Access Characters
If delimited data will be parsed or generated, or a special decimal
and/or release character will be needed, the characters should be
identified on the worksheet. If the characters are dynamically
determined during data parsing, note “As Parsed” under the Source
column. If the characters are dynamically determined based on
substitutions, note “$Subs” under the Target column.

Step 5 should generate:
• Functional Area Definition Worksheets completed for every
  new environment defined in Step 4.
• New access models, or the addition of new item types to
  existing access models.


Step 6: Create Source and Target Data Model Declarations

The syntax and structure of the translation will be modeled in this
step. First, define the data models as per the Application Integrator
Model worksheet. Refer to a copy of the worksheet at the end of
this section for assistance. Then enter the definitions into
Workbench. The rules for mapping will be created in Steps 8 and 9.
You can work on the source and target data models independently
of each other. One modeler can work on the source data models
while another modeler defines the target data models. The power
of Application Integrator allows the two sides to be brought
together at runtime for binding. The relationship between the two
sides is established in the mapping process, through the use of
mapping variables.
Create an Application Integrator Model Worksheet for each data
model defined on each Environment Definition Worksheet:

Section Name Instructions


Mode Specify whether the data model will be used on
the source or target side.
Model Name Complete the Data Model name as indicated on
the Environment Definition Worksheet.
Environment Name   Complete the Map Component Filename for the
data model being created, as indicated on the
Functional Area Definition Worksheet.
Translation Reference   Complete any summary reference notes for use
in later reviews of this data model.


For each item contained within the data structure, create a line item
entry:
Section Name Instructions
Data Model Item Name   Assign a label name, unique within this data
model, by which this item will be identified.
The name must begin with a letter, and
should not begin with the two letters “OT.”
Use the various columns under Item Name to
represent the various hierarchical levels in the
structure definition. (Used for all types of
items: group, tag, container, and defining
item.)

For example:
Message_Loop
Heading_Record
Heading_Rec_Field_1
Heading_Rec_Field_2
Heading_Rec_Field_3
Detail_Line_Item_Loop
Detail_Record_1
Detail_Rec_1_Field_1
Detail_Rec_1_Field_2
Detail_Rec_1_Field_3
Detail_Record_2
Detail_Rec_2_Field_1
Detail_Rec_2_Field_2
Detail_Rec_2_Field_3

Item Type Specify the item type used to identify this data
model item.
Occurrence Min/Max   Specify the number of times the item is required
(minimum) and allowed (maximum) to repeat.
A minimum occurrence of zero indicates the
item’s presence is optional. A minimum
occurrence of one or greater indicates the
item’s presence is mandatory. (Used for all
types of items — group, tag, container, and
defining.)

Size Min/Max   Specify the number of characters, minimum and
maximum, that are allowed for this item. When
the two lengths are the same value, the item is
considered to be fixed in length. (Used for
defining class of items only.)
Format Specify the format of numeric values that use
the # NUMERIC, # DATE, or # TIME access
model functions. (Used for defining type of
items only.)
Match Value Specify any character string used to identify a
tag item. Commonly known as a record code or
segment tag. (Used for tag class of items only.)
Verify List ID Specify the ID to be used with the Verification
Profile Database key for code list verification. It
is used together with the Verification key prefix
defined in map component file/environment file.
This ID must begin with a letter. (Used for
defining type of items only.) This ID can be
used for automatic code list verification by
using the # LOOKUP access model function, or
can be used manually by using the LKUP( ) data
model function in rules.
Sort Specify whether the defining items associated
with the group item should be sorted in a
special order.
File Specify whether this group item should be read
from or output to a file other than the one
specified in the map component file.
Increment   For group items that are also MetaLinks,
specify whether to increment each instance
of the use of the MetaLink variable (Yes) or
to ignore incrementing (No).
Once worksheets are completed, they are ready to be entered using
Transaction Modeler Workbench. Refer to Section 2 for procedures
on entering and modifying source and target data model items.

Step 6 should generate:
• Application Integrator Model Worksheets completed for
  every new data model.
• The creation of all data model files using Workbench.


Step 7: Create a Map Component File for Each Environment

Create a map component file for each Environment Definition
Worksheet created in Step 5. Create a new map component file by
completing the Map component file dialog box opened from the
New Map Component option of the Workbench File menu. Refer
to Section 4 for instructions.

Step 7 should generate:
• All environment map component files.

Step 8: List Source to Target Mapping Variables

Using the Application Integrator Variable Worksheet (an example
of which is found at the end of this section), complete a line item for
each piece of data that will be mapped from the source to the
target. Once this worksheet is completed, the source data modeler
will be able to begin creating the rules described in Step 9,
independent of the target data modeler. The type of variable and
its ID (label) is all that the source and target data modelers need to
know.
Create the Application Integrator Variable Worksheet as follows:

Section Name   Instructions
Type Identify the type of variable to be used. Each
variable type has different mapping attributes.
Label Assign a label for the variable that will be unique
throughout the total translation session. The label
must begin with a letter and should not begin with
the letters “OT.”
Description Enter a description of what the variable type and
label represent.

Step 8 should generate:
• Application Integrator Variable Worksheet completed for
  every piece of data to be mapped from the source to the
  target.


Step 9: Create Data Model Rules

Using Workbench, you can now apply rules to the data models.
Since the source and target are independent of each other and
runtime-bound via the variables, either side can be done first or
independently.
The source assigns its data model item values to specific variables,
for example, VAR->PONumber = HeadingRec_PONumber. The
target assigns the variable values to its data model items, for
example, Rec1-PONo = VAR->PONumber.
The primary use of the rules is to create the movement of data from
the source to the target, establishing the desired format in the
process. However, rules are also used for the following purposes:
• Logging of information for audit, message tracking, and for
  reporting, using the functions LOG_REC( ) and
  ERR_LOG( ), for example.
• Capturing of information for later acknowledgment
  creation.
• Performing error recovery, such as defaulting when a value
  is absent.
• Verifying relational conditional compliance — a relational
  condition may exist among two or more siblings within the
  same parent based on the presence or absence of one of
  those siblings.
• Obtaining or changing values in the Profile Database.
  (Make sure the key prefixes are set in the map component
  file or through data model rules.)
• Using keywords to alter the natural processing flow —
  ATTACH, EXIT, BREAK, RELEASE, CONTINUE, REJECT,
  RETURN.
• Performing string manipulation, for example, sub-string,
  trim, concatenate, replace characters, case conversion.
• Performing computations: +, -, *, /.
• Obtaining or changing the active character sets, decimal
  notation, and release characters.
• Obtaining or changing values associated with access
  counters.
• Obtaining or changing the system’s date or time.
• Obtaining or changing the current error status.


Guidelines for Rule Creation

a. If a rule action fails, the balance of the actions contained in the
rule is not executed. For example, if a variable or data model
item is referenced for its value, and a value has not yet been
assigned to it, the action will fail. (This can occur in a tag item’s
rule which references an optional child defining item that was
not present in the tag item.) Whenever this possibility exists,
immediately start a new rule for the balance of the actions.

Example:

Incorrect way to model:
Tag_A
    Defining_A1 (optional)
    Defining_A2 (optional)
[]
VAR->Fld1 = Defining_A1
VAR->Fld2 = Defining_A2

Correct way to model:
Tag_A
    Defining_A1 (optional)
    Defining_A2 (optional)
[]
VAR->Fld1 = Defining_A1
[]
VAR->Fld2 = Defining_A2
[]
SET_ERR( )
In the incorrect way, if the first action (VAR->Fld1 =
Defining_A1) fails because no value is available for the
reference to Defining_A1, the balance of the rule is not
performed, and Defining_A2 is not assigned to VAR->Fld2.
In the correct way, the third rule is added to set the error
status to zero, so that if the second rule fails, the error of the
failure is not carried forward to the occurrence validation of
Tag_A.
b. Remember to always follow an ATTACH keyword action with
a new rule to capture the map component file’s returned status.
If you immediately follow the ATTACH keyword action with
another action, the second action would only occur if the map
component file returned a status of zero.
For example:
[ ]
ATTACH “OTX12Env.att”
[ ]
VAR->OTAttachRtnStatus = ERRCODE( )


c. Some rules can easily become complex. Take the time to lay out
on paper all complex rules, since the content of all rules cannot
be viewed within Workbench at one time. Once on paper, the
rules can easily be entered into Workbench, with minimal
editing.
d. On the source side, rules are usually placed on the tag item
rather than on each defining item. This is because often one or
more items qualify one or more other items to provide their
semantic meanings. On the target side, the rules must be
placed on the defining items for them to obtain their values.
All conditional rules should be entered before the null
conditional rules. If the last rule on an item is a conditional
rule, and the condition fails, the status the item will take on for
occurrence validation will be a failure status. To reset the
status, add a null condition rule with the function SET_ERR( ).
For example:
[ ]
SET_ERR(0)


Profile Database Lookups

The following rules should be included in your data models to set
up the views into the database so that information can be accessed.
When accessing the Profile Database, your model must contain
rules that tell the translator what type of information you are going
to access and from where to access it. The type of information you
might access could be cross-references, code list verifications, or
substitutions. You could access this information from any
hierarchy level of the trading partner profile or from any of the
standard version levels. Refer to Section 4 of the Trade Guide for
System Administration User’s Guide for details on setting up or
modifying the trading partner profile and standard version code
lists.
The SET_EVAR data model function allows you to set the
environment variables for these types of lookups. Refer to the
description of the SET_EVAR function in Appendix B for more
details.
For information on the generic model used in trading partner
recognition, refer to the “Map Component Files for Enveloping/De-
enveloping” section in Section 4.

Database Key and Inheritance

Each level in the trading partner hierarchy is represented by a
database key. Each database key is delimited by the pipe ( | )
character, and is a maximum of 12 characters long.
The key is derived from the information keyed into the trading
partner’s profile, as follows:

Level                                  Value Entry Box            Example Data

Interchange (for example ISA/IEA)      Trading Partner            ABCIC
Functional Group (for example GS-GE)   Trading Partner Division   ABCFG
Message (for example ST/SE)            Name                       ABC850

The complete concatenated key, using this example, would be
ABCIC|ABCFG|ABC850. You will see this database key under the
title bar (UNIX) or to the right of the tabs (Windows) of many of the
trading partner profile dialog boxes.
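The key assembly described above can be sketched in Python. This is only an illustration of the documented format; the function name and the length check are not part of the product, and the 12-character limit per level value is taken from the text above:

```python
def build_database_key(*levels: str) -> str:
    """Assemble a Profile Database key from trading partner
    hierarchy level values, delimited by the pipe (|) character.
    Each level value is at most 12 characters long."""
    for value in levels:
        if len(value) > 12:
            raise ValueError(f"level value exceeds 12 characters: {value!r}")
    return "|".join(levels)
```

Using the example data above, build_database_key("ABCIC", "ABCFG", "ABC850") produces ABCIC|ABCFG|ABC850.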


Inheritance

When looking up a value, the lookup is appended to the key prefix,
before the read. A cross-reference lookup of a part number, for
example, might be:
ABCIC|ABCFG|ABC850|part_a
By using this approach, the property of “inheritance” can be easily
applied, where inheritance denotes the use of values from higher
levels (ancestors) in the hierarchy when a specific value is not
found at the current level.
To continue with the example, if the cross-reference value of part_a
is not found at the message level ABCIC|ABCFG|ABC850, the
system automatically removes levels until the value is found or all
levels are exhausted.
ABCIC|ABCFG|ABC850|part_a
ABCIC|ABCFG|part_a
ABCIC|part_a
part_a
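The removal sequence above can be sketched in Python. A plain dict stands in for the Profile Database here, and the function name is illustrative; the real product performs this search internally:

```python
def lookup_with_inheritance(db, key_prefix, name):
    """Look up `name` under the full pipe-delimited key prefix,
    then strip one hierarchy level at a time until the value is
    found or all levels are exhausted."""
    levels = key_prefix.split("|")
    while levels:
        key = "|".join(levels + [name])
        if key in db:
            return db[key]
        levels.pop()          # drop the most specific remaining level
    return db.get(name)       # finally, try the bare name
```

With a cross-reference stored only at the interchange level, for example {"ABCIC|part_a": "12345"}, a lookup with the prefix ABCIC|ABCFG|ABC850 still finds the value after two levels are removed.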
The property of inheritance exists for all types of values that can be
stored in the Profile Database:
• Substitutions
• Cross-references
• Verification code lists
Inheritance can lessen the redundancy in the Profile Database,
since all levels of a trading partner hierarchy (for example, all
divisions and/or all messages of the trading partner) may use the
same cross-references and codes.
Inheritance can be turned on/off as a parameter to database
functions. Whether or not the inheritance feature should be used is
determined by the developer during data modeling.
For details on setting up the trading partner hierarchy, refer to the
Trade Guide for System Administration User’s Guide.


Substitutions

The environment keyword HIERARCHY_KEY is used to set the
view into the database to perform substitution lookups, where each
field in the Profile Database is associated with a substitution label.
Once you have identified the type of information and where to
access it, the next step is to identify the substitution variable to
obtain. The $ data model function allows you to do this.
In addition to retrieving information, you have the ability to
manipulate the profile information stored. The SET_SUBS data
model function allows you to update values into the substitution
portion of the Profile Database. The DEL_SUBS data model
function allows you to delete a Profile Database substitution value.
For additional information regarding the data model functions,
refer to Appendix B of this manual.
As you use Profile Database lookups in rules, enter them on the
Profile Database Interface Worksheet. An example is found at the
end of this section. Complete the following columns as described:
Column Heading            Instructions

Description of Lookup     Enter a description of what lookup is
                          occurring.
Side S/T                  Enter S when used in the source data model,
                          and enter T when used in the target data
                          model.
Type S/X/V                Enter S for substitution type database lookup;
                          enter X for cross-reference type database
                          lookup; enter V for code verification type
                          database lookup.
Label/Category/           When Type is ‘S’ for substitution, enter the
Verify List ID            label used for the substitution, for example,
                          $X12AckLevel. When Type is ‘X’ for
                          cross-reference, enter the category from which
                          the cross-reference is to occur. When Type is
                          ‘V’ for verification, enter the verify list ID
                          under which the lookup is to occur.
Hierarchy Level           Enter the key prefix of the Profile Lookup for
Key Prefix                the trading partner. The Key Prefix is
                          obtained from the dialog box in Step 10.


Cross-references

The environment keyword XREF_KEY is used to set the view into
the database to perform cross-reference lookups. Once you have
identified the type of information and where to access it, the next
step is to identify the category that the data is stored underneath
along with the value in the input stream to be cross-referenced.
The XREF data model function allows you to do this.
In addition to cross-referencing information, you have the ability to
manipulate the cross-reference information stored in the database.
The SET_XREF data model function allows you to update values
into the cross-reference portion of the Profile Database. The
DEL_XREF data model function allows you to delete a Profile
Database cross-reference.

Verification Code Lists

The environment keyword LOOKUP_KEY is used to set the view
into the database to perform verification lookups. Once you have
identified the type of information and where to access it, the next
step is to identify the verify list ID that the data is stored
underneath. The verify list ID is keyed into the Verify field of the
data model item. The verify list ID is also known as the Category in
the Xrefs/Codes dialog box. The next step is to construct a rule
identifying the data model for which the verification is to occur and
the value to be looked up. The LKUP or DEF_LKUP data model
functions allow you to do this.
In addition to verifying code list information, you have the ability
to manipulate the code list information stored. The SET_LKUP
data model function allows you to update values into the
verification portion of the Profile Database. The DEL_LKUP data
model function allows you to delete a Profile Database code list
value.

Step 9 should generate:
• All data model rules using Workbench, for both the source
  and target.
• A completed Profile Database Interface Worksheet.


Step 10: Enter the Profile Database Values

All lookups into the Profile Database should have been listed on the
Profile Database Interface Worksheet, as part of Step 9 when
creating rules within Workbench.
Using Trade Guide for System Administration, enter all of the
Profile Database lookups, including values to be used in
substitutions, cross-references (x/refs) and verification code lists.
Procedures for entering this information are found in Section 4 of
the Trade Guide for System Administration User’s Guide.

❖ Note: Keep in mind the inheritance capability available in
database lookups. This feature helps to eliminate redundancy
of data, thereby reducing database maintenance and disk
storage requirements. Refer to the Trade Guide for System
Administration User’s Guide for detailed steps on these Trade
Guide features.

Substitutions

• Validate that the various trading partner profile records
  from which substitution values will be accessed exist.

Cross-references

1. From the Xrefs/Codes dialog box, add the category under
which the cross-reference will occur to the category file, if not
already present.
2. Select the category from within the list box, and enter the
extracted values from the input stream in the Value field. If a
string of values is used, these values should be trimmed of
trailing spaces and concatenated together using the pipe ‘|’
character as a delimiter between the fields.
3. Enter the value to be used for the cross-reference in the
Description field.


Verification Code Lists

From the Xrefs/Codes dialog box:
1. Add the category under which the code list verification will
occur to the category file, if not already present.
2. Select the category from within the list box and enter the values
from the input stream in the Value field. If a string of values is
used, these values should be trimmed of trailing spaces and
concatenated together using the pipe ‘|’ character as the
delimiter between the fields.
3. If you would like to key in a description of the code list
verification, do so. This field is optional.

Step 10 should generate:
• All trading partner Profile Database lookups entered for
  substitutions, cross-references, and verification code lists.

Step 11: Run Test Translations and Debug

Using the input file from Step 3, translate the file(s). The files can
be translated through the Run dialog box of Workbench, or at the
command line, by invoking the inittrans command in UNIX or the
otrun.exe command in Windows. Refer to Section 6 of this manual
for detailed instructions.
During test translation, it may be beneficial to set a high trace level.
The trace facility set at a high level generates a step-by-step log of
the translation. Various levels of the trace can be turned on and off
as needed. Continue testing and debugging until your data model
is free of error and ready to migrate to an official test or production
area.

Step 11 should generate:
• Production-ready data models and map component files.


Step 12: Make Backup Files

Once the translation has been modeled, the following modified or
created files should be backed up:
• Map component files — .att
• Data model files — .mdl
• Test and input files
• Access model files — .acc
• Profile Database — sdb.dat & sdb.idx
All worksheets and flowcharts should also be packaged together for
later reference. Refer to Section 7 of the Trade Guide for System
Administration User’s Guide for details on scheduling backups.

Step 12 should generate:
• A backup copy of all disk files changed or created.
• A consolidated packet of all worksheets and flowcharts.

Step 13: Migrate the Data

Once you are through with the complete data mapping process,
including testing, you are ready to migrate your application to an
official test or production functional area. Refer to Section 7 of this
manual for suggestions on migrating to a different functional area.



Notes on Data Model Development

This section discusses a series of general modeling notes and tips
that should be followed to minimize problems with translation and
the portability of data models.

Assigning Names

Consider the following when assigning names to models and files.

Operating System Naming Conventions

When assigning filenames, consider naming conventions for all
operating systems under which you expect to operate. Use the least
common denominator. That is, if you are expecting to use
Windows, limit the base portion of the filename to eight characters
and the extension to three characters. Note also that the Windows
operating system is not case-sensitive, but UNIX is. If you intend to
develop applications for these platforms, consider using all
lowercase or uppercase characters to avoid any problems following
migration. FTP will migrate files, maintaining the case it
encounters.

❖ Hint: To avoid problems when migrating between UNIX and
Windows (or vice versa), GE Information Services
recommends using all lowercase character names and limiting
filenames to eight characters with three character extensions.

Application Integrator Reserved Prefixes

When assigning names to models, variables, etc., use a prefix other
than the two-character “OT” or “ot.” All provided Application
Integrator models (.mdl, .acc), generic variables, and utility shell
scripts begin with the two letters “OT” or “ot.”

Recommended Prefixes

For Administration Database files, the prefix “DM_” is
recommended.


Recommended Extensions

The following extensions are required or recommended:

Extension Description
.att Map component filename (required)
.env Environment filename (required)
.acc Access model filename (required)
.mdl Data model filename (required)
.sh Shell script file
.std Standard data models, such as ASC X12,
UN/EDIFACT or TRADACOMS
.dat Data portion of an ISAM file (Profile or
Administration Database)
.idx Index portion of an ISAM file (Profile or
Administration Database)
.tmp Temporary/work files
.in Test input file to Application Integrator
.out Test output file to Application Integrator

Labels

Labels are used in Application Integrator for referencing item
values. All variable labels must begin with an alpha character: A-Z
or a-z. The following is a list of variables with their maximum
length:

Variable Length
Access model variable 40 characters
Data model variable 40 characters
Variable (temporary) variable 40 characters
MetaLink variable 40 characters
Array variable 40 characters
User-defined environment variable 40 characters
Substitution variable 254 characters
Verification list ID 254 characters

❖ Hint: Since translations use these labels, better throughput
will be obtained by using shorter vs. longer label names, and
by keeping names unique.
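The labeling rules above (begin with an alpha character, avoid the reserved “OT” prefix, respect the 40-character limit for model and variable labels) can be sketched in Python. The function name is illustrative, and treating mixed-case prefixes such as “Ot” as reserved is an assumption, since the text only names “OT” and “ot”:

```python
def is_valid_label(label: str, max_len: int = 40) -> bool:
    """Check a label against the rules above: begins with A-Z or
    a-z, does not start with the reserved "OT"/"ot" prefix, and
    fits the 40-character limit for model and variable labels.
    (Mixed case "Ot"/"oT" is rejected here as a precaution.)"""
    if not label or len(label) > max_len:
        return False
    if not ("A" <= label[0] <= "Z" or "a" <= label[0] <= "z"):
        return False
    return label[:2].upper() != "OT"
```

For example, PONumber passes, while OTAttachRtnStatus and 9Field are rejected.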


Comparing Numerics or Strings

When performing a comparison in the rules (for example,
[VAR->A==DMI_01] or [DMI_02>1230]), either a string comparison or a
numeric comparison will be performed. Both sides of the
comparison will be checked to determine if each contains only
numeric characters ([0–9, ., –, +] with the period, minus sign, and
plus sign characters appearing only once in the string) and each is
less than 16 characters long. If both of these tests are true, then a
numeric comparison will take place. If either of these tests is
false, then a string comparison will take place. For example:
• Numeric comparison of 123 to 123.000000 would result in
  both sides being equal.
• String comparison of 123 to 123.000000 would result in both
  sides being unequal.
• 1==000000000000001 would be TRUE (a numeric
  comparison would be performed).
• 1==0000000000000001 would be FALSE (the second side
  contains 16 characters, therefore, a string comparison would
  be performed).
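The test described above can be sketched in Python (the product is not implemented in Python, and these function names are illustrative only):

```python
def is_numeric_operand(s: str) -> bool:
    """An operand qualifies for numeric comparison when it is
    shorter than 16 characters and contains only 0-9, '.', '-',
    and '+', with '.', '-', and '+' each appearing at most once."""
    if not s or len(s) >= 16:
        return False
    if any(ch not in "0123456789.-+" for ch in s):
        return False
    return all(s.count(ch) <= 1 for ch in ".-+")

def rule_equal(a: str, b: str) -> bool:
    """Numeric comparison when both operands qualify; otherwise
    a plain string comparison."""
    if is_numeric_operand(a) and is_numeric_operand(b):
        try:
            return float(a) == float(b)
        except ValueError:  # e.g. "+-1" passes the scan but is not a number
            return a == b
    return a == b
```

This reproduces the examples above: 123 equals 123.000000 numerically, 1 equals 000000000000001 (15 characters, numeric), but 1 does not equal 0000000000000001 (16 characters force a string comparison).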

Using Application Integrator Models and Files

The models and files supplied by Application Integrator and
Application Integrator standards implementation packages have
names beginning with “OT” and “ot.” These files are read-only, in
most cases.
If you desire to modify an Application Integrator file, such as a
sample data model, copy the file to a file with a new name and then
modify the copy.

References to Files

References to files and models within the map component files and
data models can be either relative or explicit. Relative referencing
means every file and model being used is located in the same
directory. These include source, target, and access models and map
component files. Using relative referencing, they can be moved to
another file system without changes. This allows the models to be
easily distributed to other users, as they are to Application
Integrator Customer Support, if necessary.


Explicit referencing means the files and models being used
are located in different directories; therefore, a path is necessary to
locate these files. Using explicit referencing, moving to another file
system always requires either the same directory structure (path) or
modifications to the map component files and models.
We recommend relative referencing because it is more structured
and easier to track. The following examples illustrate both types of
referencing. Notice that the explicit filenames begin with either a
forward slash (/) or a backslash (\), whereas the relative filenames
do not.

Relative Reference               Explicit Reference

Within a map component file:
S_ACCESS=“OTFixed.acc”           S_ACCESS=“/u/OT/OTFixed.acc”
S_MODEL=“edi/OTRecogn.mdl”       S_MODEL=“/u/OT/edi/OTRecogn.mdl”

Within a data model:
ATTACH “edi/OTBypass.att”        ATTACH “/u/OT/edi/OTBypass.att”

The base directory is a directory in the file system where relative
file references are located. The references are not the files
themselves but rather the pointers to the files.
In UNIX, the Control Server’s start up directory is the base
directory. In Windows, the base directory is the directory you are
currently working in, not the directory that contains the program
that is initiated.



Functional Area Definition Worksheet
Map Component Filename: _______________________________________________.att
Production Functional Area: q Development/Testing Functional Area: q
Functional Area Description: ____________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
Input File: ___________________________________ q dynamically determined at runtime
q continued from previous environment
q defined in the data model, on a group
Output File: _________________________________ q dynamically determined at runtime
q continued from previous environment
q defined in the data model, on a group
Source:
access model: _______________________.acc data model: ________________________.mdl
q dynamic at runtime q dynamic at runtime
Target:
access model: _______________________.acc data model: _______________________.mdl
q dynamic at runtime q dynamic at runtime
Source: Key Prefixes X-ref Values
Substitution: _______________________________ __________________________________
_______________________________ __________________________________
_______________________________ __________________________________
Target:
Substitution: _______________________________ __________________________________
_______________________________ __________________________________
_______________________________ __________________________________

User-Defined Environment Keyword Variables:


Name Value Assignment From Purpose
___________________ ___________________________ ___________________________
___________________ ___________________________ ___________________________
___________________ ___________________________ ___________________________
___________________ ___________________________ ___________________________
___________________ ___________________________ ___________________________
___________________ ___________________________ ___________________________

Special Access Characters:


Source Target
First Delimiter: _____________________ _________________________
Second Delimiter: _____________________ _________________________
Third Delimiter: _____________________ _________________________
Decimal Notation: _____________________ _________________________
Release Character: _____________________ _________________________
Application Integrator Model Worksheet

Mode: ❑ Source ❑ Target   Model Name: _____________________   Functional Area Name: ___________________

Translation Reference: ____________________________________________________________________________________

Defining Group, Tag, Container, or Data (G/T/C/D) Items

Data Model Item Name | Item Type | Occurrence Min/Max | Size Min/Max | Format | Match Value List | Verify ID | Sort File | Increment Yes/No
(Applicable item types per column: T/C/D, C/D, D, T, C/D, G, G, G/T/D)

Application Integrator Variable Worksheet

Type (VAR->, ARRAY->, M_L->, $SUBS, ENV) | Label | Description

Profile Database Interface Worksheet

Description of Lookup | Side (S/T) | Type (S/X/V) | Category (X) | Label (S) | Verify List ID (V) | Hierarchy Level (Key Prefix)

S/T = Source/Target
S/X/V = Substitution, Cross-reference (x-ref), Verification
Section 6
Translating and Debugging

Once you are finished describing the structure and mapping requirements (between the input and output) and have defined your environment, you need to test and debug your data models. A common problem in most programming languages is that you may enter data that is not quite what you intended. For example, while creating your data model structure, you may have inadvertently used an incorrect masking character for a particular format.

Only by testing and debugging can you determine whether the data model is accurately translating the data, based on the structures, rules, and input data being processed. You can perform translation testing and debugging from within Workbench, or from the command line.

This section describes both of these methods and provides information on viewing trace logs and debugging.

Workbench User’s Guide 307
Overview of Translating and Debugging

Running a translation consists of specifying various parameters to a translation interface program, called inittrans in UNIX or otrun.exe in Windows. These parameters can include the input filename, the output filename, the map component filenames, user-defined and/or generic environment variables, and the required trace levels.
The Control Server program, called cservr in UNIX and cservr.exe in
Windows, performs the translation process. The Control Server
must be invoked before running a translation.
You can also perform a translation by using the command line interface program, inittrans on a UNIX system or otrun.exe on a Windows system. In many cases, you might consider creating a script to run inittrans (UNIX) or a batch file to execute otrun.exe (Windows) with the appropriate parameters for your production functional area. In other cases, you might set up the Scheduler to perform a translation periodically.

You may run Trade Guide, Workbench, and one or more translations at the same time when UNIX or Windows is set up as a multiple-user system. Should the same trading partner or other record be called from these programs, Application Integrator automatically performs the appropriate record locking, avoiding any conflicts. However, when the Windows product is set up as a single-user system, you cannot work in Trade Guide or perform multiple translations at the same time you are in Workbench.
To ease the process of development testing and debugging, you can
save your desired parameters for each type of translation. This
makes it easier for you to rerun repetitive translations. The
translator also produces a trace log that is useful in debugging. You
can determine the amount of information to trace.


Before Translating

❖ Note to UNIX users: The environment variables must be set before starting the Control Server. Some of the variables are required; if they are not preset, the Control Server will not execute. Other variables are optional and are provided to improve translation performance or session number tracking. The environment variables are set in each user’s profile found in the user’s home directory. Refer to Section 1 of the Application Integrator Installation Guide for specific information about setting the environment variables.

❖ Note to Windows users: The environment variables must be set before starting the Control Server. Some of the variables are required; if they are not preset, the Control Server will not execute. Other variables are optional and are provided to improve translation performance. The environment variables are set in each user’s AUTOEXEC.BAT file or in the Environment System Properties dialog box. Refer to Section 2 of the Application Integrator Installation Guide for specific information about setting the environment variables.

Before translating, verify that the following information is present and that the following conditions are true.
1. On UNIX systems, the Control Server must be running for the appropriate queue ID before the translator can execute. To bring up the Control Server, type otstart at the UNIX command line.
On Windows systems, the Control Server must be running at the appropriate TCP/IP listen port before the translator can execute. To bring up the Control Server in Windows, at the Run dialog box, type:
cservr.exe -cs %OT_QUEUEID%


2. Before running a translation, you should have set up or considered the following:
• The source and target data models are defined and saved.
• The map component file (or files) is defined and saved.
• The generic map component files for enveloping and de-enveloping are present, if you are using them. Refer to the “Map Component Files for Enveloping/De-Enveloping” section in Section 4 for details.
• You have created or added trading partner information to the Profile Database.
3. You should have an input file to perform the translation.



Translating Using Workbench

Running a translation consists of specifying a series of parameters to define the map component file, the related files, and the trace level you desire for a translation session.

➤ To run a translation from within Workbench
1. From the Layout Editor’s main menu, choose Debug.
2. From the Debug menu, choose Run. The Run dialog box will
appear.

3. In the Process type value entry box, type a name to define a new translation process, then press the Enter key;
- or -
Click the arrow to select a saved translation process session from the list box.

❖ Caution: When entering a new process definition, you must enter a Process type and then press Enter before entering the Parameters. The Parameters will not be saved unless you first enter the Process type name and then press the Enter key.

If you have previously defined the parameters for a translation, skip to Step 9.


4. In the Map Component File value entry box, type the name of
the map component file to be used in this translation.
5. In the Copy File value entry box, type the name of the data file
from which you are going to copy. This file will not be
removed after the translation is complete. It will remain in its
current location.
6. In the Trace Level value entry box, type the numeric value for
the trace level desired;
- or -
Select the level using the dialog box options associated with this
box. The trace level will default to zero if a trace level is not
entered.
Refer to the “Setting a Trace Level” section later in this section
for a complete description of this option.
7. In the Additional Parameters group box, enter any additional
parameters needed for this translation. These parameters could
include input filename, output filename, and user-defined
environment variables.

❖ Note: If environment variables are specified in the map component file, they will automatically be entered in the Additional Parameters area when the map component file is specified in the Map Component File value entry box.

Name    Type one of the valid environment variable or user-defined environment variable resources.

Value   Type the value the variable is set to when the translation is executed.

Using the Name and Value box entries, the system creates an additional parameter statement, such as “INPUT_FILE=OTX12I.txt”.


8. To save these parameters, choose the Save button. This will save all parameters under the Process type name so that this process can be rerun without having to reenter the values.

❖ Note: Saving a Process type does not update your map component file.

9. To start the translation, choose the Run button.

❖ Note: If the data model has been modified since you last
ran a translation, you will be prompted to save these
changes.

10. To close the Run dialog box, choose the Close button.


Setting a Trace Level

The trace level controls the content of the translation trace log file. The trace level is set using a numeric value; this value represents which options of the trace will be set. You can manually enter the trace level options, or use a dialog box for making selections. The following table describes the trace levels.

Trace Level  Description

0     No Trace Setting or a Trace Setting of Zero (0)
      • otrans or otrun.exe version, compile date and time
      • date/time translation began and ended
      • loading of libraries, with their compiled date/time
      • translation ending status

1     Data Model Item Values Listing
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • Names of source access and data models
      • “SOURCE VALUES”
      • The last source item values parsed
      • “M_L VALUES” - declared within this source data model
      • “VAR VALUES” - declared within this source data model
      • “ARRAY VALUES” - declared within this source data model
      • Names of target access and data models
      • “TARGET VALUES”
      • “M_L VALUES” - declared within the source and this target data model
      • “VAR VALUES” - declared within the source and this target data model
      • “ARRAY VALUES” - declared within the source and this target data model

2     Value Table Listing
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • “VALUE STACK” - target item labels with the values assigned to them
      • “VSTK” - values being referenced off the value stack in target Phase 2 processing

4     Source Data Model Items
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • Two lines for each source item, as processed - “DM: ItemLabel”, “FINISHED ItemLabel”

8     Target Data Model Items
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • One line for each target item, as processed - “DM: ItemLabel”
      • One line each time processing returns to a parent level - for example, “FINISHED ItemLabel”

16    Rule Execution
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • One line when entering rules on an item - only if the item has rules defined
      • One line reporting rule execution status - all items

32    Rule Functions - this level does not output on its own; refer to 48.

48    Rule Functions (48, which is 16+32)
      • Includes the #NUMERIC/#DATE/#TIME access functions, for example:
        Function NUMERIC_in: dm PhoneNumber pic “No format” dm left 10 .. 10 right 0 .. 0 radix
        Function NUMERIC_in returns value: “3255550961”
      • Execution of rules - assignment, functions, etc.

64    Source Access Items
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • Source TAG item matching - “pre condition Rec_Code met”
      • Source parsed values being returned back to the source data model

128   Error Details
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • The clearing of the error stack - “err_dump( )” - and the capturing of an error - “err_push( )”

256   IO Detail - pertains to source only
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • Shows the file position entering each item
      • Shows each character read and checks whether it is within the defined character set, for example:
        Function NUMERIC_in: dm PhoneNumber pic “No format” dm left 10 .. 10 right 0 .. 0 radix
        Function NUMERIC_in returns value: “3255550961”

512   Write Output Detail
      • Values reported by “No Trace Setting or a Trace Setting of Zero (0)”
      • The order the items are written out and the access items used

1023  Complete Trace


➤ To enter the trace level options using a dialog box

1. Open the Trace Settings dialog box by clicking the ellipsis button next to the Trace Level box on the Run dialog box. The Trace Settings dialog box appears.
2. Select each type of trace level. To select all options, choose the Set All button. To deselect all options, choose the Clear All button.
3. Choose the Apply button to return the selected trace level;
- or -
Choose the Cancel button to exit the trace settings without making changes.

❖ Note: The trace log file is generated once you choose the Run button on the Run dialog box. Use Debug/View Trace to display the trace log.


➤ To enter the trace level options manually

You can also calculate the desired trace level and enter it in the Run dialog box or as a parameter to an inittrans (UNIX) or otrun.exe (Windows) statement. Refer to the “Ways to Set the Trace Log” section later in this section for more details.

Add the values of the options you want to activate in the trace using the table below; then type that total in the Trace Level box or as the appropriate parameter value when setting the trace level elsewhere in the system.

❖ Hint: You can also subtract the number of the items you do
not want to show on the trace from the total trace value of
1023. For example, to show a total trace minus access
items, you would subtract 64.

Trace Flag                                Trace Level Value
Deactivate the trace                      0
DM item values listing                    1
Value table listing                       2
Source data model items                   4
Target data model items                   8
Rule execution                            16
Rule functions                            32
Access items                              64
Error details                             128
IO detail                                 256
Write output detail                       512
Sum Total (activate all trace options)    1023
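Because the trace flags are additive powers of two, a trace level can be worked out with ordinary shell arithmetic. A minimal sketch (the variable names are illustrative; the values come from the table above):

```shell
# Flag values from the table above (variable names are illustrative only).
RULE_EXEC=16
RULE_FUNCS=32
ACCESS_ITEMS=64

# Rule execution plus rule functions -- the "48" level described earlier.
TL=$((RULE_EXEC + RULE_FUNCS))
echo "$TL"                 # prints 48

# Full trace minus access items, per the hint above.
TL=$((1023 - ACCESS_ITEMS))
echo "$TL"                 # prints 959
```

The computed value can then be supplied as the -tl parameter or typed into the Trace Level box.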



Translating at the Command Line

For UNIX systems, the program inittrans invokes a translation from the command line. For Windows systems, the program otrun.exe issues a translation session. Each program is invoked with arguments that specify the configuration of the translation session (input/output files, initial environment, environment variables, and so on).

❖ Note: Before invoking a translation, the Control Server must be running.

The request for processing is passed to the Control Server (cservr), together with all of the supplied arguments. The Control Server then dispatches the request to an available translator (otrans) in a UNIX environment or to the single translator in a Windows environment. The Control Server and queue ID can be set at the command line.

Invoking the Translation Process - UNIX

The translation program inittrans has the following common syntax in a UNIX development area. Refer to the “Available Arguments to inittrans/otrun.exe” table for the complete list of arguments.

inittrans -at <initial map component file> -cs <Control Server queue id> -tl <trace level> -I

❖ Note: The display of the translation process will terminate immediately if it is not executed interactively (set by using the parameter -I). This parameter is recommended.
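For a production functional area, the syntax above is commonly wrapped in a small script. The following is a minimal sketch, not a supplied utility: the map component filename, queue ID, and trace level are placeholders, and the command string is only echoed here so it can be reviewed before being run.

```shell
#!/bin/sh
# Hypothetical wrapper for inittrans; all names shown are examples.
MAP_FILE=${1:-OTRecogn.att}       # initial map component file (-at)
QUEUE_ID=${OT_QUEUEID:-01}        # Control Server queue ID (-cs)
TRACE_LEVEL=${TRACE_LEVEL:-0}     # trace level (-tl)

# Build the command line; -I keeps the session interactive, as recommended.
CMD="inittrans -at $MAP_FILE -cs $QUEUE_ID -tl $TRACE_LEVEL -I"
echo "$CMD"
# Uncomment the next line to actually run the translation:
# $CMD
```

A script like this keeps repetitive development translations consistent and is also suitable for invocation by the Scheduler.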


Invoking the Translation Process - Windows

The translation program otrun.exe has the following command syntax in a Windows development area. Refer to the “Available Arguments to inittrans/otrun.exe” table for the complete list of arguments.

❖ Note to Windows users: When Application Integrator is properly installed, its software is loaded into the same directory and drive as the working directory (the directory from which you run Application Integrator). However, in rare cases, the working directory and software directory are on different drives or in different directories. In these cases, all references to the directories, models, environment files, map component files, etc., must be fully qualified with the entire path to the entity. Therefore, in a map component file that contains references to input files, environment files, etc., the map component file and everything in it would have to be fully qualified.

<path>otrun.exe -at <initial map component file> -cs <listen port> -tl <trace level> -I

where:

<path>         Represents the route the operating system follows through the directory structure to locate the appropriate directory or file.

<listen port>  Represents the TCP/IP port of your Control Server. This is specified by the environment variable OT_QUEUEID and can be referenced using the syntax “-cs %OT_QUEUEID%”.

❖ Note: The translation must be run interactively on a Windows system; therefore, the parameter -I must be included in the otrun.exe command line statement.

When this command is executed, a dialog box appears showing when the program ran and the error code returned for this translation.


❖ Hint: Once you have established your translation “command line,” you can add the program to your desktop by creating an icon shortcut to translate easily.

Available Arguments to inittrans/otrun.exe

Code            Description of Item                             Defined Environment Variable Resources
-a              source access file                              S_ACCESS
-A              target access file                              T_ACCESS
-at             initial map component file                      none
-cs             specify Control Server queue ID                 none
-D              declare user-definable environment variable     none
-hk             hierarchy key prefix                            HIERARCHY_KEY
-i              input file                                      INPUT_FILE
-I              interactive, foreground versus background       none
                (default) processing
-lg <filename>  specifies a file for the translation session    none
                output, allowing you to translate in the
                background instead of monitoring feedback in
                a Session Output message box

                ❖ Caution: When this argument is used, the
                destination file is not overwritten. Instead,
                new entries are appended; therefore, the file
                always increases in size.

-lk             lookup key prefix                               LOOKUP_KEY
-o              output file                                     OUTPUT_FILE
-P              priority (0-low, 99-high)                       none

                ❖ Note for UNIX users: This value is used to
                set both the priority and the nice value based
                on the equation: (default nice value + [9 minus
                the ten’s-digit priority value]). For example, a
                nice value of 20 and a priority of 87 would
                reset the nice value to 21, that is, ([9-8]+20).
                Refer to Section 2 of the Trade Guide for System
                Administration User’s Guide for more details.

-s              source data model                               S_MODEL
-t              target data model                               T_MODEL
-tl             trace level (0-minimal, 1023-full)              TRACE_LEVEL
-u              user name                                       none
-xk             xref key prefix                                 XREF_KEY
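The -P nice-value equation can be checked with a line of shell arithmetic. A sketch using the example's numbers (default nice value 20, priority 87, whose ten's digit is 8):

```shell
NICE=20        # default nice value from the example
PRIORITY=87    # -P priority from the example

# Ten's digit of the priority, then the equation:
# new nice = default nice + (9 - ten's digit)
TENS=$((PRIORITY / 10))
NEW_NICE=$((NICE + (9 - TENS)))
echo "$NEW_NICE"    # prints 21, matching ([9-8]+20) in the note above
```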


Parameter Explanations

The code for an initial map component file, -at, is mutually exclusive with -i, -o, -a, -A, -s, -t, -hk, -xk, and -lk. If -at is used, these others cannot be used; if the other codes are given with -at, they have no effect.

The following arguments are paired: -a with -s (source access and data model), and -A with -t (target access and data model).

The parameter -lg <filename> writes the translation session feedback, usually displayed in the Session Output dialog box, to the specified file. If the file cannot be opened or created, the Session Output message box is displayed. It is good practice to keep this filename consistent for all translations, making any cleanup easier by handling only one file.
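Because -lg appends rather than overwrites, a translation script can truncate the session log before each run so it does not grow without bound. A minimal sketch; the log filename is just the example used elsewhere in this section:

```shell
LOG=runfile.log

# Truncate (or create) the session log before each run so feedback
# from previous translations is not retained across runs.
: > "$LOG"

# Hypothetical run that appends its session feedback to the log:
# inittrans -at OTRecogn.att -cs "$OT_QUEUEID" -lg "$LOG" -I
```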
The –D code is used to declare Application Integrator environment
variables (or user-defined variables) from the command line.
Defined environment variables will override values defined in the
map component file.

❖ Note: Assigning a value to an environment variable that contains parentheses will cause the string within the parentheses to be ignored if it is not an environment variable. For example:
SET_EVAR ("DOGS", "A(B)C")
VAR_TMP=GET_EVAR ("DOGS")
This will return "AC" because the (B) is not an environment variable and is, therefore, ignored.
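The stripping described in the note can be mimicked outside the translator to see the documented result. This sed one-liner is only a rough illustration, not the translator's actual logic: it drops any parenthesized substring, as the translator does when the content is not an environment variable reference.

```shell
VALUE='A(B)C'

# Remove every parenthesized substring; "(B)" is not an environment
# variable, so it is discarded, leaving "AC".
STRIPPED=$(printf '%s' "$VALUE" | sed 's/([^)]*)//g')
echo "$STRIPPED"    # prints AC
```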

For examples of user-defined variables, refer to the section on administration reports in Section 6 of the Trade Guide for System Administration User’s Guide.


With the Application Integrator enveloping models written generically, specific arguments are used to tailor the enveloping session to the application file being processed. These arguments are converted into environment variables, which are either used directly (for example, the keyword environment variable INPUT_FILE) or are referenced for their values within models or environments (such as the user-defined environment variables MESSAGE, BYPASS, and ACTIVITY_TYPE). Environment variables are referenced for values in models through the use of the function GET_EVAR; values are obtained in environments by referencing the environment variable.

The following table provides some common examples of these variables:
Example                      Environment Variable

-DMESSAGE=OTX12SO.att        MESSAGE
                             Must declare the name of the map component file that will process (read in) the specific application’s messages. In the example provided, its value is “OTX12SO.att.”

-DBYPASS=OTX12Byp.att        BYPASS
                             Must declare the name of the map component file that will handle the bypass/reject logic. In the example provided, its value is “OTX12Byp.att.”

-DACTIVITY_TYPE=Invoicing    ACTIVITY_TYPE
                             Should declare a description of the application for the activity tracking system. It defines the type of messaging activity for reporting purposes. In the example provided, its value is “Invoicing.”

-DINPUT_FILE=OTX12O.txt      INPUT_FILE
                             Must declare the name of the input file to be translated. In the example provided, its value is “OTX12O.txt.”

324 Workbench User’s Guide


Translating at the Command Line

Setting a parameter to two or more words requires double quotes around the parameter value statement, for example:

-DACTIVITY_TYPE="Shipping Notices"

This requirement is true for two or more words separated by spaces or by the pipe (|) symbol.

❖ Note: On either operating system, the hyphen (-) symbol cannot be used in parameters.

In UNIX, up to 512 characters can be supplied in the command line interface. In Windows Concurrent, up to 236 characters can be supplied; in Windows, up to 235 characters.
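Given these limits, a script can check the length of the assembled command before submitting it. A sketch; the 512-character figure is the UNIX limit stated above, and the command string is just an example:

```shell
MAX_LEN=512    # UNIX command line limit from above (236/235 on Windows)
CMD='inittrans -at OTEnvelp.att -cs 01 -DINPUT_FILE=OTX12O.txt'

# ${#CMD} is the length of the assembled command string.
if [ "${#CMD}" -gt "$MAX_LEN" ]; then
    echo "command is ${#CMD} characters; over the $MAX_LEN-character limit" >&2
else
    echo "command length ${#CMD} is within the limit"
fi
```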

Translation Examples

The following are examples of invoking translation processes:

1. This example uses the supplied map component file for de-enveloping, “OTRecogn.att,” and specifies an input file called “IBM0001.flt.” Note that the -I parameter must be issued for a Windows translation. The -lg parameter will send the session output to the file named runfile.log instead of a Session Output dialog box.

UNIX:
inittrans -at OTRecogn.att -cs 01 -DINPUT_FILE=IBM0001.flt

Windows:
<path>otrun.exe -at OTRecogn.att -cs %OT_QUEUEID% -DINPUT_FILE=IBM0001.flt -lg runfile.log -I


2. This example calls the initial map component file “OTRpt.att,” then specifies a report map component file, “OTActR01.att,” with the -D argument, sets the trace to the highest level (-tl 1023), and sets the program to run interactively (-I).

UNIX:
inittrans -at OTRpt.att -cs 01 -DREPORT=OTActR01.att -tl 1023 -I

Windows:
<path>otrun.exe -at OTRpt.att -cs %OT_QUEUEID% -DREPORT=OTActR01.att -tl 1023 -I
3. This example uses the supplied map component file for enveloping (“OTEnvelp.att”) and specifies four of the environment variables defined in the generic map component file using -D arguments. As noted earlier, the -D arguments declare Application Integrator environment variables that override the values defined in the initial map component file. In this example, the first -D specifies the input file to use, the second calls a second map component file (OTX12SO.att), the third calls a map component file to handle error processing (OTX12Byp.att), and the fourth provides a description of the activity for the output and trace log. The Control Server ID is specified for the UNIX environment (-cs $OT_QUEUEID), and the program is run interactively in both cases (-I).

UNIX:
inittrans -at OTEnvelp.att -cs $OT_QUEUEID -DINPUT_FILE=OTX12O.txt -DMESSAGE=OTX12SO.att -DBYPASS=OTX12Byp.att -DACTIVITY_TYPE="Invoice Processing" -I

Windows:
otrun.exe -at OTEnvelp.att -cs %OT_QUEUEID% -DINPUT_FILE=OTX12O.txt -DMESSAGE=OTX12SO.att -DBYPASS=OTX12Byp.att -DACTIVITY_TYPE="Invoice Processing" -I


Terminating Translation from the Command Line

In a UNIX environment, the program inittrans will terminate immediately if not executed interactively (-I), or will terminate automatically once the processing session is completed. If the inittrans program does not terminate, kill the process without specifying a signal so that the Control Server is notified and makes the user slot available.
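Killing without a signal argument sends the default SIGTERM, which the process can catch to clean up and notify the Control Server (unlike kill -9, which cannot be caught). A generic sketch of that difference; a background sleep stands in for the hung translation, and the commented pgrep line is only an illustrative way to find the real process ID:

```shell
# Locate the hung translation's PID (illustrative; name is an example):
# pid=$(pgrep inittrans)

sleep 60 &            # stand-in for the hung process
pid=$!

# Send the default SIGTERM -- no "-9" -- so the process can clean up
# and the Control Server can free the user slot.
kill "$pid"
wait "$pid" 2>/dev/null || true
```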
In a Windows environment, use the Task Manager (Windows NT) or Close Program (Windows 95) to end the task. If the program is hung, you will receive a message indicating that the program is not responding. Click End Task in these dialog boxes to end the process. It is always good practice to reboot the system after a program hangs.

Refer to the Trade Guide for System Administration User’s Guide for more information on starting and ending the system, user slots (UNIX only), and the Control Server in general.


If the Translator Does Not Execute Successfully

Consider the following pointers if you have trouble translating.

UNIX Troubleshooting

• Make sure the Control Server is running.
• Make sure the program otrans is defined within your PATH and is executable. To check the path of otrans, enter the UNIX command:
  type otrans
  This command reports the path where the otrans program resides.
  To manually run otrans from the command line and check for errors, enter:
  otrans -cs $OT_QUEUEID &
  then re-execute the translation. This displays the status of otrans and records any runtime errors.
• Make sure you have write permission on the following files:
  <queue id>.s<session no>.log
  <queue id>.e<session no>.log
  <queue id>.tr<translator no>.log
  where:
  <queue id> is the ID of the Control Server that controls the translator (the environment variable OT_QUEUEID).
  <session no> is the translation session number maintained in the Control Server’s home directory within the file “tsid.”
  <translator no> is the translator sequence number, which is incremented for each translator invoked by the Control Server and reset back to zero upon restart of the Control Server.
Upon starting the Control Server, a translator is automatically started, waiting for the first processing request from the Control Server.
If a record is locked by a translator process, or if a record is in use by another user through Trade Guide, other translators will wait and retry at 1-second intervals for up to 125 seconds to acquire the lock. After the 125 seconds have passed, the other translators terminate the translation process. For outbound standard data, error #301, Envelope Substitution Error, will be returned and logged in the process tracking database.


Windows Troubleshooting

• A “tmp” directory must be created after the Application Integrator software installation is complete. You will receive an error message if a “tmp” directory was not created after Application Integrator was installed. Refer to the “Configuring the Operating System” section of the Application Integrator Installation Guide for information about setting up the “tmp” directory.
• Make sure that no other Trade Guide activity or translations are occurring. This includes all functional areas. If you have more than one Application Integrator operation running at the same time, you will receive an error message.
• When Application Integrator is properly installed, its software is loaded into the same directory and drive as the working directory (the directory from which you run Application Integrator). However, in rare cases, the working directory and software directory are on different drives or in different directories. In these cases, all references to directories, models, environment files, map component files, etc., must be fully qualified with the entire path to the entity. Therefore, in a map component file that contains references to input files, output files, environment files, etc., the map component file and everything within it would have to be fully qualified.


Using the Translation Trace Log

This section describes the following information:
• Ways to set the trace log
• Viewing the trace log
• Understanding the trace output
• Debugging hints using the trace log
• An example of trace output

Ways to Set the Trace Log

You can alter the amount of detail contained in the trace in four places:
• The Workbench Run dialog box
• The environment (map component file)
• Within the data model rules
• At the command line

Through the Run Dialog Box

You can set the trace level through the Workbench Run dialog box by accessing the Trace Settings dialog box (from the ellipsis button next to the Trace Level box). This method sets the trace throughout the complete translation session. Refer to the procedures earlier in this section for instructions on completing this dialog box.


Through the Environment

You can enter a trace level in the Other Environment Variables area of the Map Component Editor dialog box. This level is set throughout the complete translation session. It must be specified in the initial environment; specifying a trace log level in any child environment has no effect.

Refer to “Defining a New Map Component File” in Section 4 for information on completing the Map Component Editor dialog box.


Using Data Model Rules

For any item in the translation session, you can define the trace reporting detail. Use the function SET_EVAR( ) and the keyword environment variable TRACE_LEVEL to do this:

[ ]
SET_EVAR(“TRACE_LEVEL”, 1023)

❖ Note: The function must be performed for the trace level to be altered. The action within a false conditional expression is not performed; therefore, the trace level is not changed.

At the Command Line

You can specify a trace level when invoking a translation by using the -tl parameter and passing a numeric trace level value. For example, a complete trace (1023) is specified with the following lines.

In UNIX, type:
inittrans -at train1.att -cs $OT_QUEUEID -tl 1023 -I

In Windows, type:
otrun.exe -at train1.att -cs %OT_QUEUEID% -tl 1023 -I

Refer to the earlier section, “Translating at the Command Line,” for more details on command line parameters.


Viewing the Trace Log

You can view the trace log through Workbench, from the UNIX command line, or by opening the trace log in an editor.

➤ To view the trace log through Workbench

1. From the Layout Editor’s main menu, choose Debug.
2. From the Debug menu, choose Run. The Run dialog box will appear.
3. Choose View Trace and then select the trace log from the list box. A window displaying the trace log opens.

❖ Note: The following screen illustration reflects the Windows 95 interface. If you are running Workbench on a UNIX system, your view window may differ slightly.


4. You can search for text within the trace log by using the Find
option. To do this, choose the Find button to open a Find
dialog box. Type the text in the Find What value entry box and
choose Find Next.

❖ Note: In most cases, the term you are searching for is
highlighted and the system scrolls to the first reference.
To see every reference to a term, choose Find Next
again.
However, if the file you are viewing has long line lengths
requiring horizontal scrolling, the system highlights the
term, but does not automatically scroll to it. Your
indicator that the term has been found is the lack of a
return message of “Not found” or “Search wrapped
around file.” Usually, stretching the display window as
large as possible reveals the highlighted terms; in other
cases, manually scrolling through the file will isolate the
term.

You can narrow your search by determining whether to:


r Match case: Select the Match case box to do this.


r Use a regular expression: Select the Regular
expression box to do this.
r Choose the Up or Down radio button to indicate the
direction of the search.
r Choose the Cancel button to exit the Find dialog box.
5. Choose the Minimize button to minimize the text display for
later review. Choose the Cancel button to exit the text display
dialog box.

❖ Note: You cannot modify the access model through this
viewing option.

6. After rerunning a translation, choose Refresh to redisplay the
trace dialog box.

❖ Note: These operating systems increment the names of
the trace log based on the session number. Refreshing
the dialog box will have no effect.

Ø To view the trace log from the UNIX command line


Under certain conditions during development, it is valuable to
output the translation trace log to the screen.
To output the translator's trace log to standard output (the screen),
manually invoke the translator at the command line, with the
following line:
otrans -cs <port number> <trans.log&>
where
<port number> is the socket port number at the top of the
cservr.out file.
<trans.log&> defines the destination of the trace log file. The
following example routes the output to the display:
otrans -cs <port number> /dev/tty&
Then make the request to the Control Server for the translation
process (inittrans) or run a translation session from within
Workbench. The trace is then output to the screen. This helps to
identify where, and in which model, memory faults occur. When
outputting the log to disk, the buffer is not flushed, and therefore,
the last lines of the trace are not written out to the trace log file.
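Invoking the translator manually requires reading the socket port number from the top of the cservr.out file first. A small sketch of that step follows; the exact layout of cservr.out is not documented here, so this assumes the port is simply the first integer that appears in the text:

```python
import re

def find_port(cservr_out_text: str) -> int:
    """Return the first integer found in cservr.out text.

    ASSUMPTION: the socket port number is the first number appearing
    near the top of cservr.out; the file's exact layout is not
    documented in this guide, so this parser is illustrative only.
    """
    match = re.search(r"\d+", cservr_out_text)
    if match is None:
        raise ValueError("no port number found in cservr.out")
    return int(match.group())

# Hypothetical cservr.out header, for illustration only.
sample = "Control Server listening on port 6050\n"
print(find_port(sample))   # 6050
```

The returned number would be supplied to the -cs parameter of the otrans command shown above.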


A trace log file is generated for each translation session.


Understanding the Trace Output

UNIX Trace Output
The name of the trace file is “$OT_QUEUEID.tr00000.log”, where:
r $OT_QUEUEID is a UNIX environment variable of the range
[“pr”, “pp”, “pt”, “ts”, “01” ... “50”]
r “00000” is a sequential number, beginning with zero, for the
sequence of translations executed since the Control Server
was started. Restarting the Control Server resets this
sequence number back to zero.
The trace files are placed into the log directory specified as part of
the “otstart” or “ottg” shell scripts, which started the Control Server.
cservr $OT_QUEUEID -ld $LOGDIR > $LOGDIR/cservr.$OT_QUEUEID 2>&1&
The argument “-ld $LOGDIR” stands for log directory, which is
previously set in the script to the current directory
(LOGDIR=`pwd`) — the Control Server’s home directory. If the
“-ld” argument is not supplied, the log directory defaults to “/tmp.”

❖ Caution: If the directory defaults to /tmp, the root file


system can potentially run out of space.

The other translation logs, the Session Log and the Error Log, are
also placed in the log directory (-ld):
Session Log - $OT_QUEUEID.s00000.log; see LOG_REC( )
Error Log - $OT_QUEUEID.e00000.log; see ERR_LOG( ) and
TRUNCATION_FLG
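Given this naming scheme, the most recent trace log for a queue can be located by its sequence number. The following sketch assumes the five-digit, zero-padded sequence described above:

```python
import re

def latest_trace_log(filenames, queue_id):
    """Pick the trace log with the highest sequence number.

    Filenames follow "<queue_id>.tr<5-digit sequence>.log",
    e.g. "01.tr00003.log", as described above. Returns None if no
    filename matches the queue.
    """
    pattern = re.compile(re.escape(queue_id) + r"\.tr(\d{5})\.log$")
    best, best_seq = None, -1
    for name in filenames:
        m = pattern.match(name)
        if m and int(m.group(1)) > best_seq:
            best, best_seq = name, int(m.group(1))
    return best

logs = ["01.tr00000.log", "01.tr00002.log", "01.tr00001.log", "ts.tr00005.log"]
print(latest_trace_log(logs, "01"))   # 01.tr00002.log
```

In practice the filename list would come from a directory listing of the -ld log directory.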


Using Windows Trace Log Output
A trace log file is generated for each translation session. The name
of the trace file is “%OT_QUEUEID%.tr00000.log”, where
%OT_QUEUEID% is a Windows environment variable.
“00000” is a sequential number for the sequence of translations
executed since the Control Server was started. Restarting the
Control Server resets this sequence number to zero.
The trace log files are placed into the log directory specified as part
of the cservr.exe command that started the Control Server. The trace
log file is named according to the -cs argument stated in the
command. For example:

Command                Log File Output
cservr.exe             .tr0000#.log
cservr.exe -cs 01      01.tr0000#.log
cservr.exe -cs AB      AB.tr0000#.log
cservr.exe -cs 6666    6666.tr0000#.log

The explicit port number is passed when the translator is invoked
from the Control Server. To start the translator manually, use the
specific port address, not the two-character code. For example, to
connect to the development server ‘dv’, use:
otrans.exe -cs 6050 <log filename>
The translator’s output is directed to the filename indicated in <log
filename>. Application Integrator will interpret the last argument
as the trace log file of the translator’s output.
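The naming rule in the table above can be expressed directly: the trace log name is the -cs value (possibly empty), followed by “.tr”, the zero-padded sequence number, and “.log”. A sketch:

```python
def windows_trace_name(cs_value: str, sequence: int) -> str:
    """Build the Windows trace log filename from the Control Server's
    -cs argument and the translation sequence number, following the
    table above. An empty -cs value yields a leading ".tr...".
    """
    return f"{cs_value}.tr{sequence:05d}.log"

print(windows_trace_name("", 0))      # .tr00000.log
print(windows_trace_name("01", 0))    # 01.tr00000.log
print(windows_trace_name("AB", 3))    # AB.tr00003.log
print(windows_trace_name("6666", 0))  # 6666.tr00000.log
```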


Windows Trace Output
Windows uses the file named trace.log as the output file for any
translation. The file resides in the same directory where the
Control Server was started. This file is produced when executing:
1. otrun.exe
2. the Workbench Run process
3. Trade Guide
To display the trace log, choose the View Trace button on the Run
dialog box. View Trace looks in the working directory for the trace
files. When the log directory (-ld) is set to something other than
the working directory, you must manually locate the trace files to
display them. They will be found in the directory specified by the
“-ld” Control Server parameter. If this parameter is not set, the log
directory will default to the working directory.
Each time you run a translation or run Trade Guide, the file trace.log
is overwritten. For example, if you run a translation in Workbench
that produces a complete trace.log and then run Trade Guide to view
the Activity Tracking report, your translation trace.log will be
overwritten.

❖ Caution: Each time you translate or execute Trade Guide, the
trace log is overwritten. To avoid losing it, immediately
back up any trace log that requires review.

The “-lg <filename>” option specifies an output file for the
translation session, which allows you to translate in the background
instead of monitoring feedback in a Session Output message box.
Additional information about this option can be found in Section 5
of the Trade Guide for System Administration User’s Guide.
The other translation system logs are the Session log (which has the
format s<session no>.log) and the Error log (e<session no>.log). Refer
to Section 1 for more details on these files.
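The caution above (trace.log is overwritten on every run) can be automated with a small backup step run before each translation. This is a convenience sketch, not part of the product:

```python
import shutil
import time
from pathlib import Path

def backup_trace_log(working_dir=".", name="trace.log"):
    """Copy trace.log to a timestamped name so the next translation
    or Trade Guide run cannot overwrite it. Returns the backup path,
    or None if there is no trace.log to save."""
    source = Path(working_dir) / name
    if not source.exists():
        return None
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backup = source.with_name(f"trace-{stamp}.log")
    shutil.copy2(source, backup)
    return backup
```

Running this before choosing Run in Workbench (or before starting Trade Guide) preserves the previous session's trace for later review.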


Organization of a Trace Log
The typical organization of a trace is:
Translation Initialization:
r otrans version, compile date/time
r Shared library initialization - version, compile date/time

Map Component File Initialization:


r Input file and output file definition and device type - “std
file device”

Source Processing:
Source data model processing - only if source data model declared
in this environment
r Source data and access model filenames —
“get_source acc OTFixed.acc model Example1s.mdl”
r Source item processing — repeat for each item
Data model item label
Access parsing / character reading - tags, containers,
defining
Rules performed, modes (Present/Absent/Error), conditions
and actions
r Source values (values are not seen if the data model is exited,
i.e., “EXIT 503”)
Data model item values
MetaLink values (M_L->) declared within this source data
model, in sequence declared
Array values (ARRAY->) declared within this source data
model, in sequence declared
Temporary variable values (VAR->) declared within this
source data model, in sequence declared


Target Processing, Phase 1:


(Only if target data model declared in this environment)
r Target data and access model filenames -
“put_target acc OTFixed.acc model Example1t.mdl”
r Target item processing - repeat for each item
Data model item label
Rules performed - modes (Present/Absent/Error) -
conditions and actions
r Target values (values are not seen if the data model is exited,
i.e., “EXIT 503”)
MetaLink values (M_L->) declared within the source and
target data models, in sequence declared
Array values (ARRAY->) declared within the source and
target data models, in sequence declared
Temporary variable values (VAR->) declared within the
source and target data models, in sequence declared
Value stack (in the order values were assigned to the target
data model items)

Target Processing, Phase 2:


(There are no rules executed in this phase.)
r Value stack writing off (can encounter truncation (185) or out
of character set (184) errors); be sure the parent environment
properly handles this.
r Closing file(s) -input/output streams
r Unlocking administration file records - closes each file, if last
translator to reference
r Return status to parent map component file/environment

❖ Note: Processing during Phase 1 can be controlled in


a data model using the TRUNCATION_FLG keyword
environment variable and the LAST_TRUNC function.
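The organization above can be used to split a trace mechanically into its major stages by scanning for the marker strings the translator emits (get_source, put_target, VALUE STACK). A sketch, assuming those markers appear once per environment as described:

```python
def split_trace_phases(trace_text: str) -> dict:
    """Index the major stages of a trace log by the line number where
    each marker first appears. Markers are taken from the trace
    organization described above; absent stages map to None."""
    markers = {
        "source_processing": "get_source",
        "target_phase_1": "put_target",
        "target_phase_2": "VALUE STACK",
    }
    found = {name: None for name in markers}
    for lineno, line in enumerate(trace_text.splitlines()):
        for name, marker in markers.items():
            if found[name] is None and marker in line:
                found[name] = lineno
    return found

sample = (
    "SOURCE_TARGET get_source acc OTFixed.acc model Example1s.mdl\n"
    "| DM: InputRecord instance 0 level 1\n"
    "SOURCE_TARGET put_target acc OTFixed.acc model Example1t.mdl\n"
    "VALUE STACK\n"
)
print(split_trace_phases(sample))
# {'source_processing': 0, 'target_phase_1': 2, 'target_phase_2': 3}
```

Jumping straight to the phase of interest is useful because a full (1023) trace can run to many thousands of lines.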


Debugging Hints Using the Trace Log
The following are hints on using the trace log for debugging.

To See Full Trace
Consider setting a full trace (1023) to see all details. To see values
as they are formatted and written off the value stack, you must
use the full trace level setting.

❖ Note: To avoid possible problems should your trace log
become too large, make sure that your system
administrator has set the UNIX ulimit to a reasonable size for
your operating system environment. Most operating systems
allow a default maximum file size value of 4 gigabytes.
Check with your system administrator to verify or set this
limit.
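On UNIX systems where Python is available, the effective file size limit can be inspected without involving the administrator. A sketch using the standard resource module (POSIX-only; not applicable on Windows):

```python
import resource

def file_size_limit():
    """Return the (soft, hard) file size limits in bytes.
    A value of resource.RLIM_INFINITY means no limit is imposed."""
    return resource.getrlimit(resource.RLIMIT_FSIZE)

soft, hard = file_size_limit()
print("soft limit:", "unlimited" if soft == resource.RLIM_INFINITY else soft)
```

This reads the same limit that the shell's ulimit -f reports (the shell expresses it in 512-byte blocks rather than bytes).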

Excessive Looping Concern
Use a trace level value of “12”; this shows only the item labels
with their occurrences.

Pinpointing Rule Execution Errors
Set the trace level value to “16”; this shows the items with rule
errors.
LastName - PRESENT rules
<3> returning eval not instantiated
LastName: ERROR->> status after PRESENT rules 139
vs.
LastName - PRESENT rules
LastName: status OK after PRESENT rules
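At trace level 16, the failing items can also be pulled out of the log mechanically by looking for the ERROR->> marker shown above. A sketch against that line format:

```python
def items_with_rule_errors(trace_text: str):
    """Return (item, status) pairs for every 'ERROR->>' line in the
    trace, using the line format shown in the example above, e.g.
    'LastName: ERROR->> status after PRESENT rules 139'."""
    errors = []
    for line in trace_text.splitlines():
        if "ERROR->>" in line:
            item = line.split(":", 1)[0].strip()
            status = line.rstrip().rsplit(" ", 1)[-1]
            errors.append((item, status))
    return errors

sample = (
    "LastName - PRESENT rules\n"
    "<3> returning eval not instantiated\n"
    "LastName: ERROR->> status after PRESENT rules 139\n"
)
print(items_with_rule_errors(sample))   # [('LastName', '139')]
```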


Keywords to Look for in the Trace Log
Keyword Meaning
ABSENT Indicates that the rules that follow will
be performed if the item is found to be
absent. The translator determines which
mode it is in by evaluating the current
error code value after parsing a data
model item (if it is a defining item).
ARRAY VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all array variables on the
source and target data models. Array
values are listed separately for the
source and target data models.
assignment to Indicates that an action is being taken.
XXXXXXX Assigns a value to variable XXXXXXX.
ATTACH Where a map component file attach
occurs, i.e., ATTACH “OTX12Env.att”
map component i.e., “OTCmt1.att”
filename
Calling function Indicates that data model function
XXXXXXX XXXXXXX is being called.
constpush n.nnnnnn Identifies the value (n.nnnnnn) of a
numeric constant being supplied,
usually as a parameter to a function such
as STRSUBS.
data model item’s i.e., “LastName”
label
data model name i.e., “OTCmt1.mdl”
dm left nn .. nn For a data model item defined as a
right nn .. nn radix numeric, this key phrase indicates the
minimum to maximum number of digits
which may appear to the left of the
decimal point.
dm XXXXXXX pic Identifies the masking type for data
nnn model item XXXXXXX.

DM: XXXXXXX Identifies the initial reference to an
instance of a data model item.
ENTER fill_sdm( ): Identifies the instance of parent group
XXXXXXX parent- item XXXXXXX. The parent instance is
>instance n incremented when a new hierarchical
level is encountered.
Equal returns Indicates the result of an evaluation or
True/False condition statement.
err_dump( ) Indicates that the content of the error
dump stack is being discarded or reset to
a non-error state.
err_push( ) status nnn Assigns an internal error value setting
msg “xxxxxxxx” to alter the state of processing
depending on the keyword used, data
encountered, or data not encountered.
ERROR Indicates that the rules that follow will
be performed if the item is found to be in
error. The translator determines which
mode it is in by evaluating the current
error code value after parsing a data
model item (if it is a defining item).
error number For example, “184”
Evaluated value Displays the contents of the item just
before performing rules associated with
the item.
exec_ops status Identifies the status returned by a
condition or actions.
FINISHED Identifies the end of processing of an
XXXXXXX: occurrence of a data model item.
fp_save set to nnn in Identifies the last position read from the
xxxxxxx input stream.

FUNC XXXXXXX set Identifies where an access model item is
to value xxxx <nnnn> being set to a value of xxxx, where the
decimal value of xxxx is nnnn. The data
model “SET” functions (SET_DECIMAL,
SET_RELEASE, SET_FIRST_DELIM,
SET_SECOND_DELIM,
SET_THIRD_DELIM,
SET_FOURTH_DELIM, and
SET_FIFTH_DELIM) set the access
model items.
Function xxxxxx_in Displays the source access model
returns value: function (xxxxxx_in) used to return a
“yyyyyy” value of yyyyyy from the data input
stream.
Function xxxxxx_out: Displays the target data model output
dm yyyyyy pic function called (xxxxxx_out) to define
FFFFFFF the item type for data model item
yyyyyy. The output format for the item
is defined with the mask format
FFFFFFF.
get_source Where source data model processing
occurs.
infile “xxxxxxx” Specifies the name of the file used to
supply input data for processing.
initial pre condition Indicates that an access model function
XXXXXXX met call has returned a valid value for an
item pre-condition and assigned its
value to data model item XXXXXXX.
instance nnn Identifies which instance of the item was
acted upon. The instance is incremented
by a group. Instances are not
incremented by defining items or tag
items. Instances are reset when control
passes back to the parent item.
IO: char “x”, out of Indicates that data found in the input
char set stream is outside the range of the valid
characters defined for the item.

IO: func XXXXXXX Indicates that the access model function
called XXXXXXX is being called to return a
value to the data model.
level nnn Starting from the top of the data model
and working downward, the level is a
reference to the number of unique data
model group items encountered. This
keyword allows you to more easily
determine where in the data model the
item occurs. The level is also referenced
by the pipe ‘|’ symbols along the left
side of the trace file as each data model
item is encountered and when
processing of the item is finished. One
‘|’ is displayed for each level, i.e., “|||”
would be displayed at level 3.
litpush “X” Defines the value of a literal constant
usually assigned to a variable.
M_L VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all Meta_link variables on
the source and target data models.
Meta_link values are listed separately
for the source and target data models.
Matching XXXXXXX Shows the comparison performed for a
to YYYYYYY tag item. The data taken from the input
stream (XXXXXXX) is compared against
the tag (YYYYYYY) for a match.
max occurrence nnnn Identifies the maximum number of times
that an item has been defined to occur
successively in a looping sequence.
occurrence nnnn Identifies the number of times the item
was encountered within a looping
sequence.
outfile “xxxxxxx” Specifies the name of the file used to
write processed data out to the disk.

pre_cond xxxxxxx An item is identified if the data found
(not) found meets the requirements of pre_condition
- item - post_condition. This statement
indicates whether the pre_condition has
been found.
PRESENT Indicates that the rules that follow will
be performed if the item is found to be
present. The translator determines
which mode it is in by evaluating the
current error code value after parsing a
data model item (if it is a defining item).
An error code value of “0” defines
present mode - the data model item is
present.
put_target Where target data model processing
occurs.
radix Defines the character to be used to
signify the decimal place within a
numeric item definition.
read_set returning Displays the returned value of a
xxxx character string read in from the data
stream.
resetting fp to nnn Indicates that the data file pointer is
being reset to value “nnn,” usually the
last value referenced, for functions such
as SET_FIRST_DELIM,
SET_SECOND_DELIM,
SET_THIRD_DELIM,
SET_FOURTH_DELIM, and
SET_FIFTH_DELIM, or when an item is
in error. The file pointer may also be
reset to the beginning of the file when
the keywords REJECT or RELEASE are
encountered in the data model.
right nn .. nn For a data model item defined as a
numeric, this indicates the minimum to
maximum number of digits which may
appear to the right of the decimal point.

SENTINEL sequence Indicates that sentinels occur between
each grouping of occurrences.
SOURCE VALUES Indicates that keywords are marking the
start of a summarization of source
values. This summarization comes at
the end of the trace file listing for the
source data model.
SOURCE_TARGET Indicates keywords specifying the
put_target acc names of the target access and data
xxxxxxx model models being loaded at this point in the
yyyyyyy.mdl trace/translation.
status Identifies the status returned by
operations performed on the parent
item.
statusc Identifies the status returned by
operations performed on an item’s
children.
TARGET VALUES Indicates that keywords are marking the
start of a summarization of target values.
This summarization comes at the end of
the trace file listing for a target data
model.
TIME_in The “_in” (for source data model
TIME_out processing) or “_out” (for target data
model processing) suffix is added to
function names such as TIME and DATE
to indicate access model functions.
VALUE STACK Values parsed or constructed and placed
on the value stack, beginning of Phase 2
target processing.
VAR VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all temporary variables on
the source and target data models.
Temporary variable values are listed
separately for the source and target
data models.
VSTK The writing of each value off the value
stack - Phase 2 of target processing

VSTK->dm xxxxxxx Shows a walkthrough of data model
dm xxxxxxx value value assignments and matching
nnnn attempts.
XXXXXX read_set err Indicates that a value read in from the
input stream is outside of the character
set XXXXXX.
XXXXXXX: status OK Indicates that rules on data model item
after PRESENT rules XXXXXXX were performed successfully.
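Many of these keywords lend themselves to quick mechanical summaries. For example, the FINISHED lines can be tallied to see how often each item completed and whether any finished with a non-zero status. A sketch against the line format shown in the example trace at the end of this section:

```python
import re

FINISHED = re.compile(r"FINISHED (\w+): status (\d+) statusc (\d+) occurrence (\d+)")

def summarize_finished(trace_text: str) -> dict:
    """Count completions per data model item and flag non-zero
    statuses, using the FINISHED keyword described above."""
    summary = {}
    for line in trace_text.splitlines():
        m = FINISHED.search(line)
        if not m:
            continue
        item, status = m.group(1), int(m.group(2))
        entry = summary.setdefault(item, {"count": 0, "errors": 0})
        entry["count"] += 1
        if status != 0:
            entry["errors"] += 1
    return summary

sample = (
    "|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1\n"
    "| FINISHED InputRecord: status 0 statusc 0 occurrence 1 max occurrence 100\n"
    "|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1\n"
)
print(summarize_finished(sample))
# {'FirstName': {'count': 2, 'errors': 0}, 'InputRecord': {'count': 1, 'errors': 0}}
```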

Debugging when Processing a Large Volume of Data
1. Insert messages in the model using the SEND_SMSG function
to mark progress during the translation at a high level, for
example:
SEND_SMSG(2, “Field Content:”, Field1)
2. Turn the trace on and off from within the model to get more
detail:
SET_EVAR("TRACE_LEVEL", 1023)
Refer to Appendix B of this manual for a complete description
of these functions.


Example Trace Log
The following is an example using trace level 1023 — all options
selected.
(Output begins here)
Application Integrator(tm) translator [ Version 1.41 - compiled:    Translation
01/15/96 16:28:28 ]                                                 Initialization
(c) Copyright 1992-95 by GE Information Services &
TCS Enterprises, Inc.
All rights reserved.
Start of translation at Fri Jun 7 11:06:49 1996                     Start Time

Total base memory allocated: 47616


Initializing Device-Independent Library Module
[Version 1001 - compiled: 05/13/94 at: 13:12:01] ... Completed
Initializing Access Function Library Module
[Version 1001 - compiled: 05/13/94 at: 13:12:13] ... Completed
Initializing Data Model Function Library Module
[Version 1001 - compiled: 11/02/94 at: 13:57:39] ... Completed
30, nice decrement: 6 new value 38
infile "data.in" outfile "data.out"
defaulting to std file device on "data.in"
defaulting to std file device on "data.out"
SOURCE_TARGET get_source acc OTFixed.acc model Example1s.mdl
| ----------------                                                  Map
| ENTER fill_sdm(): InputRecord parent->instance 0                  Component File
| ----------------                                                  Initialization
| DM: InputRecord instance 0 level 1
fp_save set to 0 in InputRecord
InputRecord Matching to dm->dh 1074430864
fill_sdm initial pre condition Rec_Code met
err_dump()
|| ----------------                                                 Source
|| ENTER fill_sdm(): FirstName parent->instance 0                   Processing
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 0 in FirstName
IO: func CHARSET called
IO: c = 77 <M> Set start_char 32 end_char 126
IO: c = 97 <a> Set start_char 32 end_char 126
IO: c = 114 <r> Set start_char 32 end_char 126
IO: c = 121 <y> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Mary "
err_dump()
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: LastName instance 0 level 2


fp_save set to 5 in LastName


IO: func CHARSET called
IO: c = 83 <S> Set start_char 32 end_char 126
IO: c = 109 <m> Set start_char 32 end_char 126
IO: c = 105 <i> Set start_char 32 end_char 126
IO: c = 116 <t> Set start_char 32 end_char 126
IO: c = 104 <h> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Smith "
err_dump()
LastName: status OK after PRESENT rules
err_dump()
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 11 in PhoneNumber
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic “No format"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 51 <3> Set start_char 32 end_char 126
IO: c = 50 <2> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 48 <0> Set start_char 32 end_char 126
IO: c = 57 <9> Set start_char 32 end_char 126
IO: c = 54 <6> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: read_set returning “3255550961"
Function NUMERIC_in returns value: “3255550961"
err_dump()
PhoneNumber: status OK after PRESENT rules
err_dump()
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max occurrence 1
IO: integer 10 read.
err_dump()
InputRecord - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ FirstName
Evaluated value “Mary "
assignment to ARRAY->first
+++ EVAL +++ LastName
Evaluated value “Smith "
assignment to ARRAY->last
+++ EVAL +++ PhoneNumber
Evaluated value “3255550961"
assignment to ARRAY->phone
+++ EVAL +++ FirstName


Evaluated value “Mary "


assignment to VAR->first
+++ EVAL +++ LastName
Evaluated value “Smith "
assignment to VAR->last
+++ EVAL +++ PhoneNumber
Evaluated value “3255550961"
assignment to VAR->phone
+++ EVAL +++ FirstName
Evaluated value “Mary "
assignment to M_L->first
+++ EVAL +++ LastName
Evaluated value “Smith "
assignment to M_L->last
+++ EVAL +++ PhoneNumber
Evaluated value “3255550961"
assignment to M_L->phone
exec_ops status 0
| END RULE |
InputRecord: status OK after PRESENT rules
err_dump()
| FINISHED InputRecord: status 0 statusc 0 occurrence 1 max occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 22 in InputRecord
InputRecord Matching to dm->dh 1074430864
fill_sdm initial pre condition Rec_Code met
err_dump()
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 22 in FirstName
IO: func CHARSET called
IO: c = 74 <J> Set start_char 32 end_char 126
IO: c = 111 <o> Set start_char 32 end_char 126
IO: c = 104 <h> Set start_char 32 end_char 126
IO: c = 110 <n> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “John "
err_dump()
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 27 in LastName
IO: func CHARSET called
IO: c = 71 <G> Set start_char 32 end_char 126
IO: c = 114 <r> Set start_char 32 end_char 126
IO: c = 101 <e> Set start_char 32 end_char 126


IO: c = 101 <e> Set start_char 32 end_char 126


IO: c = 110 <n> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Green "
err_dump()
LastName: status OK after PRESENT rules
err_dump()
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 33 in PhoneNumber
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic “No format"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 50 <2> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: c = 50 <2> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: c = 50 <2> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: c = 50 <2> Set start_char 32 end_char 126
IO: read_set returning “2125551212"
Function NUMERIC_in returns value: “2125551212"
err_dump()
PhoneNumber: status OK after PRESENT rules
err_dump()
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max occurrence 1
IO: integer 10 read.
err_dump()
InputRecord - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ FirstName
Evaluated value “John "
assignment to ARRAY->first
+++ EVAL +++ LastName
Evaluated value “Green "
assignment to ARRAY->last
+++ EVAL +++ PhoneNumber
Evaluated value “2125551212"
assignment to ARRAY->phone
+++ EVAL +++ FirstName
Evaluated value “John "
assignment to VAR->first
+++ EVAL +++ LastName
Evaluated value “Green "
assignment to VAR->last


+++ EVAL +++ PhoneNumber


Evaluated value “2125551212"
assignment to VAR->phone
+++ EVAL +++ FirstName
Evaluated value “John "
assignment to M_L->first
+++ EVAL +++ LastName
Evaluated value “Green "
assignment to M_L->last
+++ EVAL +++ PhoneNumber
Evaluated value “2125551212"
assignment to M_L->phone
exec_ops status 0
| END RULE |
InputRecord: status OK after PRESENT rules
err_dump()
| FINISHED InputRecord: status 0 statusc 0 occurrence 2 max occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 44 in InputRecord
InputRecord Matching to dm->dh 1074430864
fill_sdm initial pre condition Rec_Code met
err_dump()
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 44 in FirstName
IO: func CHARSET called
IO: c = 66 <B> Set start_char 32 end_char 126
IO: c = 111 <o> Set start_char 32 end_char 126
IO: c = 98 <b> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Bob "
err_dump()
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 49 in LastName
IO: func CHARSET called
IO: c = 74 <J> Set start_char 32 end_char 126
IO: c = 111 <o> Set start_char 32 end_char 126
IO: c = 110 <n> Set start_char 32 end_char 126
IO: c = 101 <e> Set start_char 32 end_char 126
IO: c = 115 <s> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Jones "
err_dump()


LastName: status OK after PRESENT rules


err_dump()
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 55 in PhoneNumber
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic “No format"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 51 <3> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: c = 51 <3> Set start_char 32 end_char 126
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 52 <4> Set start_char 32 end_char 126
IO: c = 48 <0> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: c = 54 <6> Set start_char 32 end_char 126
IO: c = 48 <0> Set start_char 32 end_char 126
IO: c = 48 <0> Set start_char 32 end_char 126
IO: read_set returning “3135401600"
Function NUMERIC_in returns value: “3135401600"
err_dump()
PhoneNumber: status OK after PRESENT rules
err_dump()
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max occurrence 1
IO: integer 10 read.
err_dump()
InputRecord - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ FirstName
Evaluated value “Bob "
assignment to ARRAY->first
+++ EVAL +++ LastName
Evaluated value “Jones "
assignment to ARRAY->last
+++ EVAL +++ PhoneNumber
Evaluated value “3135401600"
assignment to ARRAY->phone
+++ EVAL +++ FirstName
Evaluated value “Bob "
assignment to VAR->first
+++ EVAL +++ LastName
Evaluated value “Jones "
assignment to VAR->last
+++ EVAL +++ PhoneNumber
Evaluated value “3135401600"
assignment to VAR->phone
+++ EVAL +++ FirstName
Evaluated value “Bob "


assignment to M_L->first
+++ EVAL +++ LastName
Evaluated value “Jones "
assignment to M_L->last
+++ EVAL +++ PhoneNumber
Evaluated value “3135401600"
assignment to M_L->phone
exec_ops status 0
| END RULE |
InputRecord: status OK after PRESENT rules
err_dump()
| FINISHED InputRecord: status 0 statusc 0 occurrence 3 max occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 66 in InputRecord
InputRecord Matching to dm->dh 1074430864
fill_sdm initial pre condition Rec_Code met
err_dump()
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 66 in FirstName
IO: func CHARSET called
IO: c = 83 <S> Set start_char 32 end_char 126
IO: c = 117 <u> Set start_char 32 end_char 126
IO: c = 101 <e> Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: c = 32 < > Set start_char 32 end_char 126
IO: read_set returning “Sue "
err_dump()
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 71 in LastName
IO: func CHARSET called
IO: c = 87 <W> Set start_char 32 end_char 126
IO: c = 105 <i> Set start_char 32 end_char 126
IO: c = 108 <l> Set start_char 32 end_char 126
IO: c = 108 <l> Set start_char 32 end_char 126
IO: c = 105 <i> Set start_char 32 end_char 126
IO: c = 115 <s> Set start_char 32 end_char 126
IO: read_set returning “Willis"
err_dump()
LastName: status OK after PRESENT rules
err_dump()
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 77 in PhoneNumber

Section 6. Translating and Debugging

IO: func NUMERIC called


Function NUMERIC_in: dm PhoneNumber pic “No format"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 53 <5> Set start_char 32 end_char 126
IO: c = 49 <1> Set start_char 32 end_char 126
IO: c = 55 <7> Set start_char 32 end_char 126
IO: c = 56 <8> Set start_char 32 end_char 126
IO: c = 51 <3> Set start_char 32 end_char 126
IO: c = 57 <9> Set start_char 32 end_char 126
IO: c = 56 <8> Set start_char 32 end_char 126
IO: c = 48 <0> Set start_char 32 end_char 126
IO: c = 48 <0> Set start_char 32 end_char 126
IO: c = 56 <8> Set start_char 32 end_char 126
IO: read_set returning “5178398008"
Function NUMERIC_in returns value: “5178398008"
err_dump()
PhoneNumber: status OK after PRESENT rules
err_dump()
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max occurrence 1
IO: integer 10 read.
err_dump()
InputRecord - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ FirstName
Evaluated value “Sue "
assignment to ARRAY->first
+++ EVAL +++ LastName
Evaluated value “Willis"
assignment to ARRAY->last
+++ EVAL +++ PhoneNumber
Evaluated value “5178398008"
assignment to ARRAY->phone
+++ EVAL +++ FirstName
Evaluated value “Sue "
assignment to VAR->first
+++ EVAL +++ LastName
Evaluated value “Willis"
assignment to VAR->last
+++ EVAL +++ PhoneNumber
Evaluated value “5178398008"
assignment to VAR->phone
+++ EVAL +++ FirstName
Evaluated value “Sue "
assignment to M_L->first
+++ EVAL +++ LastName
Evaluated value “Willis"
assignment to M_L->last
+++ EVAL +++ PhoneNumber

Evaluated value “5178398008"
assignment to M_L->phone
exec_ops status 0
| END RULE |
InputRecord: status OK after PRESENT rules
err_dump()
| FINISHED InputRecord: status 0 statusc 0 occurrence 4 max occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 88 in InputRecord
InputRecord Matching to dm->dh 1074430864
fill_sdm initial pre condition Rec_Code met
err_dump()
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 88 in FirstName
IO: func CHARSET called
CHARSET read_set err -1
err_push() status -1 msg “Returning error assert FirstName status -1"
type 4 errstkp 0
err_dump()
--------------
*** SOURCE VALUES ***
--------------
DM: LastName val “Willis" instance 0 level 2
end LastName values
DM: PhoneNumber val “5178398008" instance 0 level 2
end PhoneNumber values
SENTINEL sequence 0
end InputRecord values
----------
M_L VALUES
----------
DM: first val “Mary " instance 0 level 2
DM: first val “John " instance 0 level 2
DM: first val “Bob " instance 0 level 2
DM: first val “Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val “Smith " instance 0 level 2
DM: last val “Green " instance 0 level 2
DM: last val “Jones " instance 0 level 2
DM: last val “Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val “3255550961" instance 0 level 2
DM: phone val “2125551212" instance 0 level 2
DM: phone val “3135401600" instance 0 level 2

DM: phone val “5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
----------
VAR VALUES
----------
DM: first val “Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val “Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val “5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
----------
ARRAY VALUES
----------
DM: first val “Mary " instance 0 level 2
DM: first val “John " instance 0 level 2
DM: first val “Bob " instance 0 level 2
DM: first val “Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val “Smith " instance 0 level 2
DM: last val “Green " instance 0 level 2
DM: last val “Jones " instance 0 level 2
DM: last val “Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val “3255550961" instance 0 level 2
DM: phone val “2125551212" instance 0 level 2
DM: phone val “3135401600" instance 0 level 2
DM: phone val “5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
Target Processing Phase 1
SOURCE_TARGET put_target acc OTFixed.acc model Example1t.mdl
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->phone
Evaluated value “3255550961"
assignment to PhoneNumber
exec_ops status 0
| END RULE |
PhoneNumber: status OK after PRESENT rules
err_dump()

|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->last
Evaluated value “Smith "
assignment to LastName
exec_ops status 0
| END RULE |
LastName: status OK after PRESENT rules
err_dump()
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->first
Evaluated value “Mary "
assignment to FirstName
exec_ops status 0
| END RULE |
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->phone
Evaluated value “2125551212"
assignment to PhoneNumber
exec_ops status 0
| END RULE |
PhoneNumber: status OK after PRESENT rules
err_dump()
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->last
Evaluated value “Green "
assignment to LastName
exec_ops status 0
| END RULE |
LastName: status OK after PRESENT rules
err_dump()
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules


[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->first
Evaluated value “John "
assignment to FirstName
exec_ops status 0
| END RULE |
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->phone
Evaluated value “3135401600"
assignment to PhoneNumber
exec_ops status 0
| END RULE |
PhoneNumber: status OK after PRESENT rules
err_dump()
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->last
Evaluated value “Jones "
assignment to LastName
exec_ops status 0
| END RULE |
LastName: status OK after PRESENT rules
err_dump()
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->first
Evaluated value “Bob "
assignment to FirstName
exec_ops status 0
| END RULE |
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0

PhoneNumber - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->phone
Evaluated value “5178398008"
assignment to PhoneNumber
exec_ops status 0
| END RULE |
PhoneNumber: status OK after PRESENT rules
err_dump()
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->last
Evaluated value “Willis"
assignment to LastName
exec_ops status 0
| END RULE |
LastName: status OK after PRESENT rules
err_dump()
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->first
Evaluated value “Sue "
assignment to FirstName
exec_ops status 0
| END RULE |
FirstName: status OK after PRESENT rules
err_dump()
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
| ACTIONS |
+++ EVAL +++ ARRAY->phone
eval: no match inst
exec_ops status 140
| END RULE |
PhoneNumber: ERROR-> status after PRESENT rules 140
|| FINISHED PhoneNumber: returning status 140
err_push() status 140 msg “Returning no data/instance PhoneNumber
status 140" type 4 errstkp 0
OutputRecord: status after children 140
err_dump()


| FINISHED OutputRecord: returning status 0


Target Processing Phase 2
---------------------------------------
*** TARGET VALUES *********************
---------------------------------------
----------
M_L values
----------
DM: first val “Mary " instance 0 level 2
DM: first val “John " instance 0 level 2
DM: first val “Bob " instance 0 level 2
DM: first val “Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val “Smith " instance 0 level 2
DM: last val “Green " instance 0 level 2
DM: last val “Jones " instance 0 level 2
DM: last val “Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val “3255550961" instance 0 level 2
DM: phone val “2125551212" instance 0 level 2
DM: phone val “3135401600" instance 0 level 2
DM: phone val “5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
----------
VAR values
----------
DM: first val “Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val “Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val “5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
----------
ARRAY values
----------
DM: first val “Mary " instance 0 level 2
DM: first val “John " instance 0 level 2
DM: first val “Bob " instance 0 level 2
DM: first val “Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val “Smith " instance 0 level 2
DM: last val “Green " instance 0 level 2

DM: last val “Jones " instance 0 level 2
DM: last val “Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val “3255550961" instance 0 level 2
DM: phone val “2125551212" instance 0 level 2
DM: phone val “3135401600" instance 0 level 2
DM: phone val “5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
-----------
VALUE STACK
-----------
DM: PhoneNumber [3255550961] inst 0 vstk 1074524520
DM: LastName [Smith ] inst 0 vstk 1074524532
DM: FirstName [Mary ] inst 0 vstk 1074524544
DM: PhoneNumber [2125551212] inst 0 vstk 1074524556
DM: LastName [Green ] inst 0 vstk 1074524568
DM: FirstName [John ] inst 0 vstk 1074524580
DM: PhoneNumber [3135401600] inst 0 vstk 1074524592
DM: LastName [Jones ] inst 0 vstk 1074524604
DM: FirstName [Bob ] inst 0 vstk 1074524616
DM: PhoneNumber [5178398008] inst 0 vstk 1074524628
DM: LastName [Willis] inst 0 vstk 1074524640
DM: FirstName [Sue ] inst 0 vstk 1074524652
vstk 1074524520
vstk dm PhoneNumber val 3255550961
VSTK-> dm PhoneNumber dm OutputRecord value 3255550961
vstk 1074524520
vstk dm PhoneNumber val 3255550961
VSTK-> dm PhoneNumber dm PhoneNumber value 3255550961
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: (999) 999@-9999
dm value “3255550961"
NUMERIC_out: return value “(325) 555-0961"
vstk 1074524532
vstk dm LastName val Smith
VSTK-> dm LastName dm LastName value Smith
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 1074524544
vstk dm FirstName val Mary
VSTK-> dm FirstName dm FirstName value Mary
Writing dm FirstName acc AlphaNumericFld
Writing dm FirstName acc CHARSET
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc INTEGER OBJ


vstk 1074524556
vstk dm PhoneNumber val 2125551212
VSTK-> dm PhoneNumber dm OutputRecord value 2125551212
vstk 1074524556
vstk dm PhoneNumber val 2125551212
VSTK-> dm PhoneNumber dm PhoneNumber value 2125551212
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: (999) 999@-9999
dm value “2125551212"
NUMERIC_out: return value “(212) 555-1212"
vstk 1074524568
vstk dm LastName val Green
VSTK-> dm LastName dm LastName value Green
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 1074524580
vstk dm FirstName val John
VSTK-> dm FirstName dm FirstName value John
Writing dm FirstName acc AlphaNumericFld
Writing dm FirstName acc CHARSET
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc INTEGER OBJ
vstk 1074524592
vstk dm PhoneNumber val 3135401600
VSTK-> dm PhoneNumber dm OutputRecord value 3135401600
vstk 1074524592
vstk dm PhoneNumber val 3135401600
VSTK-> dm PhoneNumber dm PhoneNumber value 3135401600
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: (999) 999@-9999
dm value “3135401600"
NUMERIC_out: return value “(313) 540-1600”
vstk 1074524604
vstk dm LastName val Jones
VSTK-> dm LastName dm LastName value Jones
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 1074524616
vstk dm FirstName val Bob
VSTK-> dm FirstName dm FirstName value Bob
Writing dm FirstName acc AlphaNumericFld
Writing dm FirstName acc CHARSET
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc INTEGER OBJ
vstk 1074524628

vstk dm PhoneNumber val 5178398008
VSTK-> dm PhoneNumber dm OutputRecord value 5178398008
vstk 1074524628
vstk dm PhoneNumber val 5178398008
VSTK-> dm PhoneNumber dm PhoneNumber value 5178398008
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: (999) 999@-9999
dm value “5178398008"
NUMERIC_out: return value "(517) 839-8008"
vstk 1074524640
vstk dm LastName val Willis
VSTK-> dm LastName dm LastName value Willis
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 1074524652
vstk dm FirstName val Sue
VSTK-> dm FirstName dm FirstName value Sue
Writing dm FirstName acc AlphaNumericFld
Writing dm FirstName acc CHARSET
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc INTEGER OBJ
Translation successful
Closing file: “data.in"
Closing file: “data.out"
End Time
send STATUS 0 to job 48
Translation complete at Fri Jun 7 11:06:50 1996

(Output ends here)
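A trace this long is easier to scan when filtered for the error indicators shown above: err_push() entries, ERROR markers, and nonzero exec_ops statuses. The following is a minimal sketch; the trace file name and its contents are illustrative, modeled on the excerpts above (the translator's actual trace destination depends on how you invoked it).

```shell
# Filter a captured trace for error indicators. The file name and
# sample lines here are illustrative, taken from the excerpts above.
trace=trace.out
printf '%s\n' \
  'PhoneNumber: status OK after PRESENT rules' \
  'exec_ops status 140' \
  'PhoneNumber: ERROR-> status after PRESENT rules 140' \
  'err_push() status 140 msg "Returning no data/instance PhoneNumber' \
  > "$trace"
# -n prints line numbers; the pattern catches ERROR markers, err_push
# entries, and any nonzero exec_ops status.
grep -nE 'ERROR|err_push|exec_ops status [1-9]' "$trace"
```

Only the first sample line (a normal status report) is filtered out; the three error-related lines remain, each prefixed with its line number so you can jump to that point in the full trace.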


Viewing Input and Output Files
You can view the contents of the input and output files from within the Layout Editor of Workbench. Reviewing the actual data parsed or constructed can be a helpful tool in debugging.

❖ Note: Data is displayed without alteration. If records/segments do not end with a line feed, the data may appear as one long record in the display.

Ø To view either the input or output files


1. From the View menu of the Layout Editor window, select either
Input File or Output File.
A text display window appears with either the input file or the
output file for the current map component file. The following
example shows such a window for a TRADACOMS output file.

2. Use the scroll bars to view the entire file.


3. You can search for text within the data file by using the Find
option.
To do this, choose the Find button to open a Search-Type dialog
box. Type the text in the Find What value entry box and choose
the Find Next button.


❖ Note: In most cases, the term you are searching for is highlighted and the system scrolls to the first reference. To see every reference to a term, choose Find Next again.
However, if the file you are viewing has long line lengths requiring horizontal scrolling, the system highlights the term but does not automatically scroll to it. Your indicator that the term has been found is the absence of a return message of “Not found” or “Search wrapped around file.” Usually, stretching the display window as large as possible reveals the highlighted terms; in other cases, manually scrolling through the file will isolate the term.

You can narrow your search in the following ways:
r Match case: Select the Match case box to do this.
r Use a regular expression: Select the Regular expression box to do this.
r Choose the Up or Down radio button to indicate the direction of the search.
Choose the Cancel button to exit the Find dialog box.
4. Choose the Minimize button to minimize the text display so that you can review it again later.

❖ Note: You cannot modify the input/output file through this viewing option.


Source to Target Mappings Report
Two new options added to the Layout Editor’s Debug drop-down menu are Source to Target Map Listing and Data Model Listing. Each option opens its own new dialog box.
The Source to Target Maps dialog box is used to create a report that shows the source data model item labels, the associated variable labels, and the target data model item labels. The report can be displayed on the screen, printed, or sent to a file.
The Data Model Listing dialog box is used to create a report that shows the data model and offers the option of printing the data model with or without rules. The report can be displayed on the screen, printed, or sent to a file.

Ø To view the Source to Target Mappings report


1. From the Layout Editor’s main menu, choose Debug.
2. Choose Source to Target Mappings. The Source To Target
Mappings dialog box appears.


3. At the Source Data Model value entry box, type the filename of
the source data model to be used in the report;
– or –
Select the down arrow button to display a drop down menu.
Highlight the desired source data model.
4. In the Target Data Model value entry box, type the filename of the target data model to be used in the report;
– or –
Select the down arrow button to display a drop down menu. Highlight the desired target data model.
5. In the Label Sequence group box, choose the sequence in which
the report should appear. The option chosen will appear in the
first column of the report.
6. In the Report Format group box, choose the format of the
report. You may choose as many check boxes as necessary to
format the listing. The group box will default to “Source to
Target Direct.”

Source Not Mapped: The report will list the data model items that appear in the source model that do not get mapped to a variable or to a target data model item.
Source Indirect: The report will list the data model items that get mapped to a variable in the source model but do not get carried over to the target model.
Source To Target Direct: The report will list the data model items that get mapped to a variable and then mapped to a target data model item.
Target Indirect: The report will list the data model items in the target model that get assignments from variables, but the variables do not appear in the source model.
Target Not Mapped: The report will list data model items that appear in the target model that have not been mapped from a variable or source data model item.
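The five formats are essentially set operations over the two halves of a mapping: source item to variable, and variable to target item. The sketch below illustrates the idea on hypothetical two-column text lists in the spirit of the sample reports that follow; the real report is derived from the .mdl files themselves, not from text files like these.

```shell
# Hypothetical mapping halves (labels borrowed from the sample reports).
cat > src2var.txt <<'EOF'
AddressType ARRAY->NAD_3035
ArrivalDate ARRAY->DTM_2380
EOF
cat > var2tgt.txt <<'EOF'
ARRAY->NAD_3035 NAD_01_3035
ARRAY->CNT_6069 CNT_01_01_6069
EOF
# Source To Target Direct: the variable appears in both halves.
awk 'NR==FNR {tv[$1]; next} $2 in tv' var2tgt.txt src2var.txt
# Source Indirect: mapped to a variable that never reaches the target.
awk 'NR==FNR {tv[$1]; next} !($2 in tv)' var2tgt.txt src2var.txt
# Target Indirect: target assignment from a variable absent in the source.
awk 'NR==FNR {sv[$2]; next} !($1 in sv)' src2var.txt var2tgt.txt
```

With this data, AddressType is Direct, ArrivalDate is Source Indirect, and CNT_01_01_6069 is Target Indirect; Source Not Mapped and Target Not Mapped are the items that appear in neither list for their side.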


7. In the Output Method area, choose the output method and enter any required information:
r Display Report: Outputs the report to the screen.
r Print Report: Sends the report to the default printer that is set up under User Preferences or Print Setup, depending on your operating system.
r File Report: Saves the report to a file. You must enter the filename to which the file will be sent; otherwise, the filename will default to your user ID followed by the “.rpt” extension.
8. Once all the information is entered, choose the OK button to
generate the report;
– or –
Choose the Cancel button to return to the Layout Editor.

Source to Target Mappings Examples
Shown here are examples of the five report formats available from the Source to Target Mappings dialog box.

Source Not Mapped Format
The Source Not Mapped format lists the data model items that appear in the source model that do not get mapped to a variable or to a target data model item.
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:02 Source to Target Mappings Page: 1

SOURCE NOT MAPPED


Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl

AmendmentSeqNo
ConsolidatedShipment
HSHRecordTypeID
MasterSeqNo
MessageType
ReceiverID
SenderID
TDECSeqNo


Source Indirect
The Source Indirect format lists the data model items that get
mapped to a variable in the source model but do not get carried
over to the target model.
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:01 Source to Target Mappings Page: 1

SOURCE INDIRECT
Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl

ArrivalDate ARRAY->DTM_2380
BusinessRegistrationNo ARRAY->RFF_1154
Countryof Dispatch ARRAY->LOC_C157_3225
CountryofUltDestination ARRAY->LOC_C157_3225
CountryofOrigin ARRAY->FTX_4441_12
CustomsControlPoint ARRAY->LOC_C517_3225
DepartureDate ARRAY->DTM_2380
FOBValue ARRAY->MOA_5004
FlightNo VAR->TDT_8028
GoodDescriptionNo1 ARRAY->FTX_4440_1d
GoodDescriptionNo2 ARRAY->FTX_4440_2d
Good DescriptionNo3 ARRAY->FTX_4440_3d

Source To Target Direct


The Source to Target Direct report lists the data model items that
get mapped to a variable then mapped to a target data model item.
The title on the report indicates ‘DIRECT.’
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:01 Source to Target Mappings Page: 1

DIRECT
Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl

AddressType ARRAY->NAD_3035 NAD_01_3035


AddressType ARRAY->NAD_3035 NAD_02_03_3055
CIFValue ARRAY->MOA_5004 MOA_01_02_5004
CityName ARRAY->NAD_3164 NAD_06_3164
ContainerNo VAR->SGP_C237_8260 SGP_01_01_8260
DeclarationDate ARRAY->DTM_2380 DTM_01_02_2380
DeclareCd ARRAY->FTX_4441 FTX_03_01_4441
Designation VAR->EXP_3494 EMP_05_3494
DocumentFunction VAR->BGM_1225 BGM_03_1225
FaxNo ARRAY->COM_CO76_3148 COM_01_01_3148
GoodDescriptionNo5 ARRAY->FTX_4440_5d FTX_04_05_4440_001
HSCd ARRAY->CST_C246_7361 CST_02_01_7361


Target Indirect
This report lists the data model items in the target model that get
assignments from variables, but the variables do not appear in the
source model.
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:00 Source to Target Mappings Page: 1

TARGET INDIRECT
Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl

ARRAY->CNT_6069 CNT_01_01_6069
ARRAY->COM_3155_1 COM_01_02_3155
ARRAY->DTM_2005 DTM_01_01_2005
ARRAY->FTX_1131 FTX_03_02_1131
ARRAY->FTX_4440_5 FTX_04_05_4440
ARRAY->FTX_4451d FTX_01_4451_001
ARRAY->LOC_3227 LOC_01_3227
ARRAY->MOA_5025 MOA_01_01_5025
ARRAY->QTY_6063 QTY_01_01_6063
ARRAY->RFF_1153 RFF_01_01_1153
ARRAY->RFF_2_1153 RFF_2_01_01_1153
VAR_CNI_1490 CNI_1490

Target Not Mapped


This report lists the data model items that appear in the target
model that have not been mapped from a variable or source data
model item.
MINGO MANUFACTURING
Date: 01/18/1999 Time: 16:00 Source to Target Mappings Page: 1

TARGET NOT MAPPED


Sorted In 'Source DMI Label' Sequence
Source DMI Labels - KBTDECS.mdl Variable Labels___________ Target DMI Labels - KBTDECT.mdl

BGM-01_04_1000
BGM_04_4343
CNI_02_02_1373
CNI_02_03_1366
CNI_02_04_3453
CNI_03_1312
CNT_01_03_6411
CST_03_01_7361
CST_03_02_1131
CST_03_03_3055
CST_04_01_7361
CST_04-02_1131


Ø To view the Data Model Listing report


1. From the Layout Editor’s main menu, choose Debug.
2. Choose Data Model Listing. The Data Model Listing dialog box
appears.

3. In the Report Selection Criteria area, enter the data model to be used in the report.
At the Data Model value entry box, type the filename of the
data model to be used in the report;
– or –
Select the down arrow button to display a drop down menu.
Highlight the desired data model.
4. In the Rules area, choose whether or not you want the rules associated with the data model item to appear on the report.
5. In the Output Method area, choose the output method and enter any required information:
r Display Report: Outputs the report to the screen.
r Print Report: Sends the report to the default printer that is set up under User Preferences or Print Setup, depending on your operating system.
r File Report: Saves the report to a file. You must enter the filename to which the file will be sent; otherwise, the filename will default to your user ID followed by the “.rpt” extension.


6. Once all the information is entered, choose the OK button to generate the report;
– or –
Choose the Cancel button to return to the Layout Editor.

Data Model Listing Example
Shown here is an example of the Data Model Listing report as it is generated to Display.

GE Information Services, Inc.
Data Model Listing

Using Trade Guide Reporting Features to Debug

Trade Guide provides reports on:
r Exception data
r Message status
r Process activity
r Archive activity
Run the Process Activity Tracking report by translation session number to see an overview of the results during translating. A “Translation Successful” message returned at the end of a translation session reports that the overall session was a success; however, the Process Activity Tracking report may show missing Trading Partner records, compliance errors, and other issues. The Exception Activity report allows you to track bypassed translation data and is also helpful in debugging.
Refer to Section 6 of the Trade Guide for System Administration User’s Guide for more information on these reports, including how to run them.

Generating User-Defined Reports
Beyond using the reports defined for you, Application Integrator provides the groundwork for application-specific report generation. All Application Integrator reports are developed using the same structures as data mapping (map component files and models). To ease your generation of application-specific reports, Application Integrator includes a set of models, map component files, and other files that serve as templates both for reports where the reporting data is known (for example, you are reporting on an output file of X12 invoices (810s)) and for reports where the content is not known in advance. These files contain the logic to deal with the common report characteristics:
r Pages are set to 66 lines in length
r Each report prints 57 lines per page
r Report headings include:
− Company name, centered
− Date and time, report title (centered), and page number
− Up to six lines of heading information
r Cleanup of temporary report-generation files
r Automatic calculation of report width: 80 column, 132 column, etc.
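Those page characteristics imply some simple arithmetic: each 66-line page carries 57 body lines, leaving 9 lines for the heading block and margins, and the page count is the body length divided by 57, rounded up. A sketch of that arithmetic (the report body length below is a made-up figure):

```shell
# Page arithmetic implied by the characteristics above:
# 66-line pages, 57 printed body lines per page.
PAGE_LINES=66
BODY_LINES=57
HEADING_LINES=$((PAGE_LINES - BODY_LINES))   # lines left for headings/margins
REPORT_BODY=172                              # hypothetical body line count
PAGES=$(( (REPORT_BODY + BODY_LINES - 1) / BODY_LINES ))  # ceiling division
echo "$HEADING_LINES heading lines, $PAGES pages"
```

For a 172-line body this yields 9 heading lines per page and 4 pages (the last page only partially filled).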


The specific report generation models are added to these generic models. Specific report models usually consist of a source data model to extract information to be reported, and a target data model to construct the heading lines and body of the report.

Reporting Where the Data Content is Known
The following diagram shows the flow of the generic report system for reports where data is pre-identified.

Printing and Generating Generic Reports
A report is printed and generated through the shell script OTReport.sh (OTReport.bat in Windows), which can be invoked from the command line using the following syntax:
In UNIX, type:
OTReport.sh <D/P> <specific_report.att> <columns>
In Windows, type:
OTReport.bat <D/P> <specific_report.att> <columns>
For the arguments:
<D/P> — Enter either ‘D’ for display or ‘P’ for printing of the report.
<specific_report.att> — Enter the name of the specific report map component file.
<columns> — Enter the number of columns the report is to be printed in. This argument is optional and defaults to 132.
Examples:
For UNIX, type:
OTReport.sh P OTActR1.att 80
For Windows, type:
OTReport.bat P OTActR1.att 80


UNIX
The shell script OTReport.sh invokes a translation, such as the
following:
inittrans -at OTRpt.att -cs $OT_QUEUEID
-DREPORT=$2 -DSESSION_NO=$$ -P 1 -DCOLUMNS=$3 -I
where
inittrans — Program that passes a request for translation to the
Control Server.
-at OTRpt.att — Specifies the first map component file with which
to begin the translation.
-DREPORT=$2 — Passes the second OTReport.sh argument (for
example, OTActR1.att) into the translation session, using the
environment variable “REPORT.”
-P 1 — Defines the translation queue priority to be 1.
(1=low priority, 99=high priority).
-cs $OT_QUEUEID — Identifies the Control Server with which to
communicate.
-DCOLUMNS=$3 — Passes the third OTReport.sh argument into
the translation session, using the environment variable
“COLUMNS.”
-I — Invokes the translation to run interactively (in the foreground).
The OTRpt.att map component file then attaches to the
environment specified in the environment variable “REPORT”
(OTActR1.att). Both source and target data models are defined
within this map component file. The source data model is used to
extract/gather information that will be used to construct the report.
The target data model then uses this information to generate the
body of the report. The report is output into a temporary file called
“<SESSION_NO>.tmp,” where <SESSION_NO> is the process ID
number. Once the body of the report is generated, processing
returns to the OTRpt.att.


Next, the common report characteristics (paging, heading, etc.) are added through “OTRptP.att." OTRptP.att contains both source and target data models. The source data model reads in the “<SESSION_NO>.tmp" file. The target data model outputs the source read data, adding the common report characteristics, into a report file called “<SESSION_NO>.rpt." Once all data is output, processing returns to OTRpt.att. The temporary file “<SESSION_NO>.tmp" is removed, and processing returns to the original shell script “OTReport.sh."
The shell script then either prints the report by invoking OTPrint.sh, which in turn prints the report (lp <SESSION_NO>.rpt), or displays the report (cat <SESSION_NO>.rpt). Once printed/displayed, the report file is removed from the disk.
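The file lifecycle described above can be sketched as follows. This is a simplified stand-in for OTReport.sh, not its actual contents: where the printf and heading lines appear, the real script runs inittrans translations (OTRpt.att, then OTRptP.att) to produce the body and the paged report.

```shell
# Simplified sketch of the OTReport.sh file lifecycle (UNIX).
SESSION_NO=$$                      # the real script also uses its process ID
MODE=D                             # first argument: D = display, P = print
printf 'report body line\n' > "$SESSION_NO.tmp"   # body pass (OTRpt.att)
{ printf 'MINGO MANUFACTURING\n'                  # headings added by OTRptP.att
  cat "$SESSION_NO.tmp"; } > "$SESSION_NO.rpt"
rm "$SESSION_NO.tmp"               # temporary body file is removed
if [ "$MODE" = P ]
then lp "$SESSION_NO.rpt"          # print path (OTPrint.sh runs lp)
else cat "$SESSION_NO.rpt"         # display path
fi
rm "$SESSION_NO.rpt"               # report file removed after output
```

Both intermediate files are gone once the script finishes, which is why a failed session is easier to diagnose from the trace than from leftover files.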
In the creation of the specific report, the following must be
identified in the map component file:
OUTPUT_FILE = (SESSION_NO).tmp
S_ACCESS = OTFixed.acc
S_MODEL = <developer defined name.mdl>
T_ACCESS = OTFixed.acc
T_MODEL = <developer defined name.mdl>
In the creation of the specific target data model, the following must
be identified:
• The records output into the <SESSION_NO>.tmp file are
expected to be delimited with the line feed character. For tag
items, use the item type “LineFeedDelimRecord.”
• Assign a report title to the variable “VAR->OTRptTitle.” Its
maximum length is sixty characters. This maximum length is
reduced when the <COLUMNS> argument is used; if
<COLUMNS> was specified as 80, then the maximum report
title is 46 characters (34 of the 80 characters are used for date,
time, and page number).
• Assign column headings to the variable
“ARRAY->OTHeading.” The maximum number of page
heading lines is currently set to six.



Using Trade Guide Reporting Features to Debug

Windows
The batch file OTReport.bat invokes a translation, such as the
following:
otrun.exe -at OTRpt.att -cs %OT_QUEUEID%
-DREPORT=%2 -DSESSION_NO=$$ -P 1 -DCOLUMNS=%3 -I
where
otrun.exe — Program that passes a request for translation to the
Control Server.
-at OTRpt.att — Specifies the first map component file with which
to begin the translation.
-cs %OT_QUEUEID% — Identifies the Control Server with which
to communicate.
-DREPORT=%2 — Passes the second OTReport.bat argument (for
example, OTActR1.att) into the translation session, using the
environment variable “REPORT.”
-P 1 — Defines the translation queue priority to be 1.
(1=low priority, 99=high priority).
-DCOLUMNS=%3 — Passes the third OTReport.bat argument into
the translation session, using the environment variable
“COLUMNS.”
-I — Invokes the translation to run interactively (in the foreground).
OTRpt.att then attaches to the environment specified in the
environment variable “REPORT” (OTActR1.att). Both source and
target data models are defined within this map component file. The
source data model is used to extract/gather information that will be
used to construct the report. The target data model then uses this
information to generate the body of the report. The report is output
into a temporary file called “<SESSION_NO>.tmp,” where
<SESSION_NO> is the process ID number. Once the body of the
report is generated, processing returns to OTRpt.att.


Next, the common report characteristics (paging, heading, etc.) are
added through “OTRptP.att.” OTRptP.att contains both
source and target data models. The source data model reads in the
“<SESSION_NO>.tmp” file. The target data model outputs the data
read from the source, adding the common report characteristics, into a
report file called “<SESSION_NO>.rpt.” Once all data is output,
processing returns to OTRpt.att. The temporary file
“<SESSION_NO>.tmp” is removed, and processing returns to the
original batch file “OTReport.bat.”
The batch file then invokes the Windows program “Write” so that
all reports (“.rpt” files) can be viewed or printed. Once printed or
viewed, all “.rpt” files are removed from disk.
In the creation of the specific report, the following must be
identified in the map component file:
OUTPUT_FILE = (SESSION_NO).tmp
S_ACCESS = OTFixed.acc
S_MODEL = <developer defined name.mdl>
T_ACCESS = OTFixed.acc
T_MODEL = <developer defined name.mdl>
In the creation of the specific target data model, the following must
be identified:
• The records output into the <SESSION_NO>.tmp file are
expected to be delimited with the line feed character. For tag
items, use the item type “LineFeedDelimRecord.”
• Assign a report title to the variable “VAR->OTRptTitle.” Its
maximum length is sixty characters. This maximum length is
reduced when the <COLUMNS> argument is used; if
<COLUMNS> was specified as 80, then the maximum report
title is 46 characters (34 of the 80 characters are used for date,
time, and page number).
• Assign column headings to the variable
“ARRAY->OTHeading.” The maximum number of page
heading lines is currently set to six.


Reporting Where the Data Content is Unknown

The following diagram shows the flow of the generic report system
for reports where data has not been pre-identified.

OTRecogn.att recognizes the type of data contained in the input
stream. As the type of standard is identified, the appropriate
environment is attached for proper processing; for ASC X12, the
OTX12Env.att environment would be attached. These
environments then break the data up into message or document
units and generate application interface files or report data,
depending on how the data is modeled. As a report is output, it is
accumulated into a session temporary file (<SESSION_NO>.tmp).
Once the input stream has been read through, the report data is
printed before output files are committed to their appropriate
directories and data is archived. The data is printed by the
OTRecogn.att model attaching to the OTRpt.att environment to
add the common report characteristics, for example, paging, report
title, and column headings. The temporary report file
(<SESSION_NO>.tmp) is then converted into a print-ready report
file (<SESSION_NO>.rpt). When control returns to OTRecogn.att
(from OTRpt.att), the shell script “OTPrint.sh” is invoked to “lp”
the print-ready report file.
In the creation of the <specific_report.att>, the following must be
defined:
In the map component file:
OUTPUT_FILE = (OUTPUTFILENAME)
S_ACCESS = <depends on the source data> (for example, OTX12S.acc)
S_MODEL = <developer defined name.mdl>
T_ACCESS = OTFixed.acc
T_MODEL = <developer defined name.mdl>


In the target data model:


The records output into the (OUTPUTFILENAME) file are expected
to be delimited with the line feed character. For tag items, use the
item type “LineFeedDelimRecord.”
The report title, column headings, report width and paging controls
are output amongst the data. This method allows these values to
differ from message to message. These values are specified by
outputting records as shown in the following example:
;;; NEXT REPORT
;;; COLUMNS=132
;;; TITLE=Activity Tracking Report
;;; HEADING=Session-No Date Time Type Sub-Type...
;;; HEADING= __________ ____ ____ ____ ___________
;;; PAGE
Syntax:
;;; NEXT REPORT Defines the start of a new message, clearing
the last message’s title, column headings,
and width settings, and resetting the page
number to zero.
;;; COLUMNS = Sets the report’s width.
;;; TITLE = Sets the report’s title.
;;; HEADING = Sets one of six report column headings.
;;; PAGE Inserts a form feed.

In the Trading Partner Profile:

At the Message Level, on the Inbound tab, in the Production value
entry box, you must type “REPORT.” This signals the translator
to call the print script to print your report.

❖ Note: All printing using the “lp” command uses the shell
script “OTPrint.sh.” If necessary, the command in the shell
script can be modified to add the “-d” option to control the
printer or class of printer to which the report is to be sent.



Section 7
Migrating to Test and Production
Functional Areas

Once you have completed the source and target data models, the
map component files, and the development testing, you are ready
to migrate your Application Integrator electronic commerce
application to a test area or production area.
This section provides background and instruction on migrating
from development to test or from development or test areas to
production areas. It includes procedures for importing and
exporting Profile Databases.

❖ Note: This section describes migration between functional
areas. For instructions on updating Application Integrator
programs, refer to the Application Integrator Installation
Guide. For instructions on migrating between Application
Integrator versions or between operating systems, contact
your Application Sales Engineer or Application Integrator
Support Specialist for the appropriate document and/or
assistance.


Planning Development Migration

Migration is the process of moving files or information from one
location to another where similar files may exist. The files or
information come from a “source” location and are moved to a
“target” location.
To successfully migrate files or information from source to target
locations, some important questions must be considered:
• Will the data to be migrated overwrite any existing data?
• Is the data to be migrated dependent on data from another
file?
• If the migration must be reversed or “undone,” what needs
to be done?
This section identifies files usually associated with Application
Integrator development-to-production migration. Typical migration
files are:
• Map component files (*.att)
• Data model files (*.mdl)
• Access model files (*.acc)
• Environment files (*.env)
• Profile Database files (sdb.dat and sdb.idx)
This section is only a guideline for migration. A full analysis of all
items covered is required to ensure a successful migration or
recovery.


Recommended Migration Approach

There are four major considerations for each migration.
1. Create a detailed migration plan.
After reading this section, plan your application migration,
making sure you account for every user-definable type of file.
a. Document what is to be migrated.
b. Analyze the differences between existing and new data.
c. Determine any dependencies to other data.
d. Save a copy (back up) of any files that might be affected by
the migration or initial operations and testing, before
beginning the migration.
2. Verify Application Integrator code versions and stop the
Control Server.
Verify that the destination functional area is currently running
on the same or later version of the Application Integrator
programs. If development used some features or capabilities
that were only available in the latest version, then the process,
once migrated, may not execute properly.
Both the Control Server and Trade Guide should be completely
stopped before beginning any migration process. Refer to the
Trade Guide for System Administration User’s Guide for
instructions on exiting the system.
3. Migrate files in the following order:
a. The Profile Database files—sdb.dat and sdb.idx.
b. Access models, source and target data models, and map
component files — *.acc, *.mdl, *.att.
c. Adjust any affected shell scripts (UNIX) or batch files
(Windows).
Details on migrating each of the types of files are provided later
in this section.
4. Check the migration.
a. Adjust the processing environment (UNIX), for example,
set permissions, user and group IDs, shell or profile
variables.
b. Begin production testing.


c. Capture any data during testing that must be undone if the
migration needs to be reversed.
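Steps 3a and 3b above can be sketched as a simple copy sequence. The directory and file names here are illustrative only, not part of a shipped script:

```shell
# Illustrative migration copy order: Profile Database first, then models and maps.
SRC=dev_seat
TGT=prod
mkdir -p "$SRC" "$TGT"
touch "$SRC/sdb.dat" "$SRC/sdb.idx" \
      "$SRC/OTFixed.acc" "$SRC/OT810T.mdl" "$SRC/OTX12810.att"
cp "$SRC/sdb.dat" "$SRC/sdb.idx" "$TGT"             # a. Profile Database files
cp "$SRC"/*.acc "$SRC"/*.mdl "$SRC"/*.att "$TGT"    # b. access models, data models, maps
ls "$TGT"
```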

❖ Note: It is always advisable to test before and after each
migration, and to back up the target location before the
migration.

Permission Guidelines

The UNIX operating system assigns permissions to each file. For
new and replacement files, the required read, write, and execute
permissions must be specified for owner, group, and other.
For replacement files, it is customary to match the existing target
permissions unless other permissions are specifically indicated.
To delete an existing file, the user performing the migration will
need sufficient authority.
For Windows, the operating system controls the access permissions
and passwords for each file. These are set up by the system
administrator.
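For example, a replacement copy that preserves the target’s existing mode might look like the following sketch (all file and directory names are illustrative):

```shell
# Illustrative: replace a production file while keeping its permissions.
mkdir -p devseat proddir
echo "new model" > devseat/OT810T.mdl
echo "old model" > proddir/OT810T.mdl
chmod 664 proddir/OT810T.mdl             # existing production mode: rw-rw-r--
cp devseat/OT810T.mdl proddir/OT810T.mdl
chmod 664 proddir/OT810T.mdl             # match the mode the old file had
ls -l proddir/OT810T.mdl
```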



Migrating Applications

The following sections provide instructions on preparing and
migrating each type of file used in data modeling.

❖ Warning: Make a backup copy of every file that will be
overwritten by the migration or initial operations and testing,
before beginning the migration.

Profile Database Guidelines

The Profile Database (sdb.dat and sdb.idx files) can be updated via
one of three methods: replacement, manual maintenance, and
export/import.

Replacement

To replace the Profile Database, two files must be replaced:
• sdb.dat — data file of the Profile Database
• sdb.idx — index file of the Profile Database
These files must be replaced together as a set; replacing only one
without the other will create a serious problem.

❖ Note: The Profile Databases cannot be moved between
Intel™ and RISC™ systems, due to the byte order.

❖ Work-around: Export the Profile Database. You can use the
export feature of Trade Guide or run the “otstdump” program.
To import the Profile Database, you can use the import
feature of Trade Guide.


Manual Maintenance of Xrefs/Codes

Substitutions, cross-references, and verification lists, all part of the
Profile Database, can be maintained through Trade Guide menu
options by an operator or system administrator. In cases where
changes are not minor, using the Trade Guide export and import
features is the recommended method of migration.

❖ Note: Use the export and import features for trading partner
profiles and standards to migrate minor changes to the trading
partner profiles and cross-reference lists.

Refer to “Exporting Selected Portions of the Profile Database,”
later in this section, for instructions.

Export/Import

You can update the Profile Database by exporting the entire
database or portions of it, and then importing the exported data.
Refer to “Importing and Exporting Profile Databases,” later in this
section, for complete instructions.


Development Migration of the Profile Database

During normal operations of the Application Integrator system,
new development work will occur that often requires additions,
changes, and deletions to the production Profile Database in order
to migrate newly developed models and map component files to
the production system.
The following list of steps outlines the general flow or life cycle of
the Profile Database from development to production.
1. The developer creates a development Profile Database in the
developer’s directory (a development seat). The developer’s
Profile Database could be a copy of the production Profile
Database. By using a copy of the production database as a
starting point, the developer has the best chance that all new
entries will successfully coexist with production entries.
2. The developer adds, changes, and/or deletes entries in the
development Profile Database as required to complete
development activities.
3. When the developer is ready to move the project to production,
the migration process begins. On UNIX, stopping the Control
Server and waiting for all translator (otrans and inittrans)
processes to stop will ensure that no further access to the Profile
Database occurs and “freeze” the Profile Database to avoid
corruption.
4. A file containing only the entries needed for migration is
provided by the developer.
5. A copy of the production Profile Database is saved for
recovery.
6. The Control Server is started, and migration of the
development Profile Database entries to the production Profile
Database occurs. The file containing the development Profile
Database entries is moved to the production directory. Using
the import function, the development Profile Database entries
are loaded into the production Profile Database.
7. The system is closely monitored until a high level of confidence
is achieved.
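Step 3’s “freeze” can be verified with a check like the following. The process names (otrans, inittrans) come from the text; the loop itself is a sketch, and the exact ps options vary by UNIX flavor:

```shell
# Wait until no otrans or inittrans translator processes remain.
while ps -ef | grep -v grep | grep -E 'otrans|inittrans' >/dev/null; do
    sleep 5
done
echo "no translator processes running; the Profile Database can be copied"
```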


Access Model File (*.acc) Guidelines

To Migrate *.acc Files

1. Obtain a list of access model files that are to be migrated.
Access model files will have a suffix of “.acc.”
2. Determine the migration mode. Which of these access models
are new, replacements, or are to be deleted from the target
directory?
3. Make a backup copy of any access files in the target directory
that will be affected (overwritten) by the migration.
4. Perform the migration. New access models are copied from the
source directory into the target directory.
Changed access models can be either replaced or edited.
When removing old access models, be sure they are not still in use
by any map component files or in use through an environment
variable.
In UNIX, to determine where access models are being used, use the
UNIX grep command to search for the name of the access model
throughout the entire directory. For example, the following
command would return a list of all files that referenced
OTFixed.acc:
grep OTFixed.acc *
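If map component files are organized into subdirectories as well, the same check can be run recursively with find. The sample file below is created only so the command has something to report:

```shell
# Illustrative recursive search for every file that references OTFixed.acc.
mkdir -p maps
echo "S_ACCESS = OTFixed.acc" > maps/OTX12810.att
find . -type f -exec grep -l OTFixed.acc {} \;
```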

Considerations

In UNIX, if an item type defined in an access model is changed,
then any map component file using the access model is in question.
To determine the specific effect of a changed item type within an
access model, the entire directory can be scanned with the UNIX
grep command for any data model item that uses the item type. For
example:
grep NumericFldNA *

❖ Note: Windows does not have a “grep”-type utility. If you
make changes to access models, you must manually review
each map component file (*.att), looking for models that might
be affected, then view the models for the item type in
question.


Data Model File (*.mdl) Guidelines

To Migrate *.mdl Files


1. Obtain a list of source and target data model files that are to be
migrated. Data model files created by Application Integrator
from within Workbench will have a suffix of “.mdl.”
2. Determine the migration mode. Which of these data models
are new, replacements, or are to be deleted from the target
directory?
3. Make a backup copy of any models in the target directory that
will be affected (overwritten or changed) by the migration.
4. Perform the migration. New data models are copied from the
source directory into the target directory. Changed data
models can be either replaced or edited.
When removing old data models, be sure they are not in use by any
map component files or in use through a referenced environment
variable.
In UNIX, to determine where data models are being used, use the
grep command to search for the name of the data model throughout
the entire directory. It is also recommended that the Profile
Database be dumped and checked to ensure that the model is not
being referenced by a cross-reference or substitution. For
example, the following command will return a list of all files using
OT810T.mdl:
grep OT810T.mdl *


Considerations
• Data models can attach to other map component files. It is
important to identify all map component files that will be
affected by the use of any data model to be migrated. To
determine whether a data model performs an ATTACH,
use the UNIX grep command or an editor such as vi and
search the data model for the keyword ATTACH. If the
data model contains the keyword ATTACH, then the model
must be examined to determine what other processing will
occur and how it will affect production processing.
• Data models have the ability to execute external programs
or shells. When migrating a data model, it is important to
consider what effect it may have on existing processing in
the target area. To determine what external executions are
being performed by a data model, use the UNIX grep
command or an editor such as vi, and search for the string
EXEC.
• When Application Integrator generic data models are
modified, or general-purpose models are written, these
models should be updated into each development seat and
the models’ (release version) directory (/u/aidev/OT), so
that all current and future development seats will be using
the latest data models.
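The ATTACH and EXEC checks described above follow the same grep convention used elsewhere in this section. A sample model is created here only so the commands have something to match:

```shell
# Illustrative scan of data models for ATTACH and EXEC keywords.
echo "ATTACH OTX12810.att" > OTDemo.mdl
grep -l ATTACH *.mdl            # models that attach to other map component files
grep -l EXEC *.mdl || true      # models that execute external programs (none here)
```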


Map Component File (*.att) Guidelines

Map component files identify all resources (other files) used during
a processing session. When migrating map component files, it is
important to look at the resources referenced by the map
component file to determine what else will be affected.

To Migrate *.att Files


1. Obtain a list of map component files that are to be migrated.
Map component files created by Application Integrator from
within Workbench will have a suffix of “.att.”
2. Determine the migration mode. Which of these are new,
replacements, or are to be deleted from the target directory?
3. Make a backup copy of the target directory that will be affected
(overwritten and changed) by the migration.
4. Perform the migration. New map component files are copied
from the source directory into the target directory. Changed
map component files can either be replaced or edited.
When removing old map component files, be sure to remove any
resources referenced that are no longer needed. Take care when
removing other resources that are no longer used since they may
still be referenced by other map component files. In UNIX, it is
possible to easily determine what is being referenced by using the
UNIX grep command utility. For example, the following command
will return a list of all the files using OTX12810.att.
grep OTX12810.att *

❖ Note: Windows does not have a “grep”-type utility. If you
make changes, you must manually review files, looking for
ones that might be affected.


Considerations
• If the map component file being migrated contains
references to data models, access models, or other resources
already in production, then any dependency between them
must be identified.
• Map component files are usually small and can be viewed
or printed with little effort, which eases the process of
analysis.
• Map component files may contain key prefixes for
substitutions and cross-references (xrefs). If key prefixes or
other environment variables are used, the effect on existing
Profile Database entries must be considered.


Environment File
(*.env) Guidelines

To Migrate (*.env) Files


1. Obtain a list of environment files that will be migrated.
Environment files created by Application Integrator from
within Workbench will have a suffix of “.env.”
2. Determine the migration mode. Compare the new environment
files to any existing old environment files. If a new
environment file differs from an existing one, identify which
entries are to be moved from the new one to the old one. If no
changes exist, no migration is required.
3. Back up any environment files in the target directory that will
be affected (overwritten) by the migration.
4. Perform the migration. New environment files are copied from
the source directory into the target directory. Changed
environment files can be either replaced or edited.
When removing old environment files, be sure to remove any
resources referenced that are no longer needed. In UNIX, it is
possible to easily determine what is being referenced by using the
UNIX grep command utility. For example, the following command
will return a list of all the files using the environment file
ENVIRON1.env.
grep ENVIRON1.env *

❖ Note: Windows does not have a “grep”-type utility. If you
make changes to environment files, you must manually
review files, looking for ones that might be affected.

Considerations

If existing environment files change, then any model that uses those
files must be considered. Use the UNIX grep command to check for
these references.
In general, if you are unsure of whether or not a file has been
changed, you can use the UNIX diff command or the Windows fc
command to compare the files:
diff <first file> <second file> (UNIX)
fc <first file> <second file> (Windows)
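For example (sample files are created here for illustration; in practice you would compare real environment files in place):

```shell
# Illustrative comparison of an old and a new environment file.
echo "VERIFY_KEY_PREFIX=TP1" > old.env
echo "VERIFY_KEY_PREFIX=TP2" > new.env
if diff old.env new.env >/dev/null; then
    echo "files match; no migration needed"
else
    echo "files differ; identify the entries to migrate"
fi
```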


Importing and Exporting Profile Databases

The Import and Export options allow you to import into, or export
from, all or a portion of a Profile Database. The values stored in a
Profile Database consist of your partner profile, communication
profile, configuration profile, code list, and cross-reference data.
These values can be transferred between development seats, testing
areas, production areas, or platforms. For example, you could use
the Export option to capture an entire database, selected code lists,
or an individual trading partner from a development seat on an
RS6000 system, and use the Import option to bring that data into a
production area on an SCO Intel-based system.

❖ Note: If you are exporting the entire database, use the Export
option in the File menu; if you are moving portions of the
database, such as a trading partner or code list, use the
Export option within the Trading Partner, Standards,
Application Recognition, or Post Processing dialog boxes.
These individual files may be concatenated, copied, and then
imported into the production system.
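The concatenation mentioned in the note is an ordinary cat of the individual export files (the filenames below are illustrative):

```shell
# Illustrative: combine individual export files into one file for import.
echo "trading partner record" > tp1.exp
echo "code list record" > codes.exp
cat tp1.exp codes.exp > all.exp
wc -l all.exp        # the combined file now holds both sets of records
```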

Refer to “Profile Database Guidelines,” earlier in this section, for
considerations on migrating the entire Profile Database or portions
of it.


Exporting Complete
Profile Databases

To export data

1. From the Trade Guide File menu, choose Export Values. The
dialog box shown below appears.

The File Name value entry box appears with a default filename
of the user’s ID followed by the “.exp” extension. You can
specify any filename.
2. Select Append To Output File to append to the existing file;
- or -
Select Overwrite Output File to overwrite the existing file.
3. Choose the OK button to export the Profile Database;
- or -
Choose Cancel to return to the Trade Guide main menu.
If you are overwriting a file when exporting data, the dialog
box shown below appears to verify that you want to overwrite
the existing file.

4. Choose the OK button to overwrite the existing file;
- or -
Choose the Cancel button to return to the Export Values dialog
box to enter a different filename or change mode.


To import complete or selected portions of the Profile Database

1. From the Trade Guide File menu, choose Import Values. The
dialog box shown below appears.

The File Name value entry box appears with a default filename
of the user's ID followed by the “.exp” extension. You can
specify any filename from which to import the data.
2. Select Replace Existing & Add New Records to overwrite the
existing Profile Database with the new values;
- or -
Select Add New Records Only to add the new values to the
Profile Database.
3. Choose the OK button to import the file into the Profile
Database;
- or -
Choose Cancel to return to the Trade Guide main menu.
When you are importing data, a Question dialog box appears to
verify that you want to load the entries.

4. Choose the Yes button to load the entries. If you choose Yes, a
working dialog box appears showing entries loading. (You
may terminate the load process early by choosing the Stop
button);
- or -
Choose the No button to return to the Import Values dialog
box.
5. Once all records have been loaded, the Stop button changes to
Continue. Choose the Continue button to return to the Import
Values dialog box.


Exporting Selected Portions of the Profile Database

You can migrate the following selected portions of the Profile
Database:
• Trading Partner Profiles
• Cross-references (Xrefs)/Codes Standards (a complete
standard)
• Cross-references (Xrefs)/Codes Standards (a selected
version)
• Cross-references (Xrefs)/Codes Standards (a selected
category)
• Cross-references (Xrefs)/Codes Application Recognition
(for example, 840-Catering/DAL840s.att)
• Cross-references (Xrefs)/Codes Standards Post Processing
(for example, GMC/gmupd)
Export the selections following the instructions below. To import
this information into a second database, follow the procedures
provided earlier in this section.

To export a trading partner profile

1. From the Profiles menu of Trade Guide, choose Trading
Partners.
2. Select the trading partner and choose the Export button. The
Export Values dialog box appears:
The File Name value entry box appears with a default filename
of the user's ID followed by the “.exp” extension. You can
specify any filename.


3. Select Append To Output File to append to the existing file;
- or -
Select Overwrite Output File to overwrite the file.
4. Choose the OK button to export the Profile Database specified;
- or -
Choose Cancel to return to the Trade Guide main menu.
If you are overwriting a file when exporting data, a dialog box
appears to verify that you want to overwrite the existing file.

5. Choose the OK button to overwrite the existing file;
- or -
Choose the Cancel button to return to the Export Values dialog
box to enter a different filename or change mode.


To export cross-references/codes

1. From the Xrefs/Codes menu of the Trade Guide, choose one of
the following:
• Standards
• Application Recognition
• Post Processing
If you select Application Recognition or Post Processing, skip to
Step 3.
2. From the Standards dialog box, do one of the following:
• Select a standard to export.
• Double-click a standard to see a list of versions and then
select a version to export.
• Double-click a version to see a list of categories and then
select a category to export.
3. With the information to export highlighted, choose the Export
button. A dialog box similar to the following appears.

The File Name value entry box appears with a default filename
of the user's ID followed by the “.exp” extension. You can
specify any filename.
4. Select Append To Output File to append to the existing file;
- or -
Select Overwrite Output File to overwrite the file.


5. Choose the OK button to export the cross-references and codes;
- or -
Choose Cancel to return to the Trade Guide main menu.
If you are overwriting a file when exporting data, a dialog box
appears to verify that you want to overwrite the existing file.

6. Choose the OK button to overwrite the existing file;
- or -
Choose the Cancel button to return to the Export Values dialog
box to enter a different filename or change mode.



Glossary

Access model A collection of item type definitions and declarations (access model
items) used during a translation in conjunction with an
environment, as specified in a map component file (.att).

Access model item The definition of an item type in order to recognize data in an input
stream, and construct data for the output stream. The definition of
an item type may include the information that precedes or follows
the character set and any special function processing.

Action One of two parts of a rule. Once a condition is met, the operations
(actions) defined with rules, such as assigning a value to a variable,
are to be performed.
See also condition.

Administration Database The Application Integrator-supplied database system that captures
information from translation sessions for archiving, and for
exception (error), message, and process tracking. The Application
Integrator system includes features for reporting and viewing
information logged in the Administration Database.
See also Profile Database.

AODL Access Object Definition Language. The language that gives the
access model the ability to describe components or types of items
that are both fixed and variable in length (delimited by specific
characters).

Array A type of variable that is a list of values. User-defined control
handling is recommended when this variable type is used to ensure
that the proper data stays together; for example, detail records are
maintained with header or sub-detail records.
See also MetaLinks and Temporary Variable.

Attachment See map component file.

Base condition Part of the access model definition of an item type that describes
the allowable value set, for example, all alphabetic characters of the
set A-Z and a-z.
See also pre-condition and post-condition.

Category A standards implementation or user-defined grouping that identifies a
list for cross-referencing or code verification purposes.

Child A “child” item is one hierarchical level below (within) the item
selected.
Example:
Parent_A
Child_A — child to Parent_A, Sibling to Child_B
Child_B — child to Parent_A, Sibling to Child_A
Parent_B — sibling to Parent_A
See also parent and sibling.

Code verification list A value parsed from the input file is verified for existence against a
list of valid codes using a code list label (lookup ID) specified in the
data model item's declaration (verify list ID).
The lookup has a prefix key associated with it, that is referenced in
the environment (Verify Key Prefix). The Profile Database is
hierarchically structured, with the levels of the hierarchy developer
defined. This prefix key defines where in the structure of the
database, the lookup is to occur. Because the prefix is always
appended with the lookup value, the same “Verify List ID” can be
used throughout the database hierarchy.
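The prefix-plus-list-ID mechanism can be sketched as follows. This is a rough Python analogy, not the translator's implementation; the flat key layout, partner names, and code values are all invented for illustration:

```python
# Hypothetical flat store standing in for the Profile Database; the key layout
# (prefix key + "/" + verify list ID + "/" + code) is an assumption.
profile_db = {
    "ACME/PO/UOM/EA": True,
    "ACME/PO/UOM/CS": True,
}

def verify_code(verify_key_prefix, verify_list_id, value):
    """Append the parsed value to the prefix and list ID, then test existence."""
    return profile_db.get(f"{verify_key_prefix}/{verify_list_id}/{value}", False)
```

Because the prefix is prepended to every lookup, the same verify list ID ("UOM" here) can be reused under different branches of the database hierarchy.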

404 Workbench User’s Guide


Glossary
Condition One of two parts of a rule. It defines the state when the actions
associated with it are to be performed. The condition can be
declared as a “null condition,” which means that its state is always
true; its actions are always performed. Or the conditions can be
complex (a string of multiple items and symbols). If the result of
the condition processing is true, its actions are performed.

Construct To build meaningful units of information from elements of the data
stream, for example, to construct interchange, functional group, and message
level identifiers. The opposite of parse.

Container item A data model item assigned a composite/component item type.
Containers are used to declare an association among defining items.
See also defining item, group item, and tag item.

Composite/component Composite or component item types, like tag and defining item
item type types, are declared in the access model by the developer. These
item types are assigned a base value of “Container” in the access
models. Composite items in the X12 and UN/EDIFACT
environments would be defined by these “container” item types in
Application Integrator.
See also defining item type and tag item type.

Control Server The hub of the Application Integrator system, this process server
controls, resolves conflicts for, and schedules the many resources
used throughout the system, including translators, databases, input
and output files, etc. All requests (invoking a translator, database
lookups, status inquiries, etc.) go through the Control Server's
messaging system, which consists of an input (request) queue and
multiple output (response) queues; a response queue is established
for every requestor.

Cross-reference A value parsed from the input file is looked up in the Profile
Database to be replaced with another value.
The lookup is accomplished by using the # XREF data model
function.
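The replace-on-lookup behavior can be sketched in Python. This is an analogy only; the real # XREF lookup runs against the Profile Database, and the key layout and values here are invented:

```python
# Hypothetical cross-reference table; the real #XREF lookup runs against the
# Profile Database, and the key layout here is invented.
xref = {"ACME/PARTNO/WIDGET-12": "4011"}

def do_xref(xref_key_prefix, label, value):
    """Replace a parsed value with its cross-referenced value, if one exists."""
    return xref.get(f"{xref_key_prefix}/{label}/{value}", value)
```

A trading partner's part number ("WIDGET-12") is swapped for the internal part number ("4011"); an unmatched value passes through unchanged.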

Data model A collection of data structure definitions and the rules for
processing the input and output files.
See also source data model and target data model. The act of creating a
data model is referred to as data modeling.

Data model item A representation of one component of a message. Each data model
item consists of two parts –declaration and rules. The declaration
part, which is required, defines the item’s label, reference to the
access model item type, size, occurrence, and other attributes. The
rules, which are optional, define actions to be performed that are
associated with the item.

De-reference See reference.

Declare The first use of, or reference to, an item or variable, which brings
it into existence. Item definitions in the access and data models declare the
item so that it can be referenced later in the translation definition process.
Variables (environment, temporary, Array, and VAR–>) are declared the first
time they are used; at that moment they come into existence.

Defining item A data model item assigned a defining item type. Defining items
are the lowest level item in a data model. No other type of data
model item can exist below defining items hierarchically.
See also container item, group item, and tag item.

Defining item type An item type that identifies a data string's characteristics (name,
type, length, occurrence and format/mask).
Defining item types are developer declared in an access model. The
developer has control to create any and all types of defining items
that are necessary to properly identify/construct items within a
data stream. Once the environment associates an access model with
a data model, the defining item types can then be used in the data
model declarations to model the data structure.
Some common defining item types declared in the access model are: numeric,
alphanumeric, alpha, date, and time.
See also composite/component item type and tag item type.

Development functional area Also referred to as a development seat,
development system, or development area. The name given to an Application
Integrator development and runtime system that has been set aside for data
modeling, defining one or more environments (map component files), and the
initial testing and debugging of data models. GE Information Services highly
recommends that the results of development be migrated to a formal testing
functional area (UNIX), or be fully tested within development (Windows),
before migration to and use in a production functional area.
See also production functional area and testing functional area.

Entity The name given to the concatenated string that identifies a trading
partner in the Profile Database. Each level of the trading partner
profile (interchange, functional group, or message) includes an
ENTITY statement in the Profile Database. Also referred to as the
entity lookup ID, database lookup, or simply lookup.

Environment An environment consists of a collection of the resources that
control the translation (the input/output files, the data and access models to
be used, and database lookup key prefixes). In Application Integrator, an
environment is referred to as a “map component file” (the environment
definition is “attached” to the translator; environment definitions are stored
in “map component” files with the extension “.att”). During the execution of a
translator, the environment can change through the calling of different
environments (map component files).

Environment file (.env file) An optional file that can be used to enhance the
configuration of the translator. Environment files declare user-defined
environment variables, for example:
ACTIVITY_TRACK_SUM=“DM_ActS”
ACTIVITY_TRACK_DET=“DM_ActD”
An environment file is loaded by using the function ENVIRON_LD( ) in the data
model.

Environment variable Variables that contain a single value that can be referenced from
either the source or target data model. Environment variables can
be defined (declared) and referenced in either a map component
file (.att) or environment (.env) file, or through the use of a function
call in a data model (see # SET_EVAR and # GET_EVAR). Most
environment variables only exist or are accessible in the current
environment. Some environment variables, referred to as keyword
environment variables, can propagate their values into a child
environment.
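The scoping behavior can be sketched as follows. This is a Python analogy, not the translator's implementation; the class name, variable names, and the particular set of propagating keywords are assumptions for illustration:

```python
class Environment:
    """Sketch of environment-variable scoping. Which keywords propagate to a
    child environment is an assumption here, chosen for illustration."""
    KEYWORD_VARS = {"INPUT_FILE", "HIERARCHY_KEY"}  # illustrative subset

    def __init__(self, parent=None):
        # Only keyword environment variables propagate into a child environment.
        self.vars = ({k: v for k, v in parent.vars.items()
                      if k in self.KEYWORD_VARS} if parent else {})

    def set_evar(self, name, value):
        self.vars[name] = value

    def get_evar(self, name):
        return self.vars.get(name)

parent = Environment()
parent.set_evar("INPUT_FILE", "in.dat")   # keyword variable: propagates
parent.set_evar("TRACE_LEVEL", "9")       # ordinary variable: current env only
child = Environment(parent)
```

The child environment sees INPUT_FILE but not TRACE_LEVEL, mirroring the distinction between keyword and ordinary environment variables.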

Functions Special processing routines that bring non-input-stream data into
the translation process, manipulate data, or output log messages. Functions
are used in both the access and data models. Two types of functions can be
invoked: standard and user-written.
The standard Application Integrator functions fall into many categories,
including the following:
Category      Functions performed
Database      Cross-referencing, code verification, database updates
Date/time     Get the system date and time
String        Concatenate, determine a string’s length, extract a substring
Environment   Get and set an environment variable’s value
Error         Check the current error code value, set the value
Logging       Output various log records (archive, error, general messages)
Other         Reset a MetaLink list pointer, declare a character set

User-written functions, when included in shared libraries, are dynamically
linked for use at run time. Functions that return a value can be used within
other functions in condition and action statements. Functions can be nested (a
function within a function), for example:
DMI05 = #STRLEN, ( #STRCAT, DMI01, ( #STRCAT, DMI02, DMI03 ) )
In this example, the string length is determined on the values
contained in DMI01, DMI02 and DMI03 once they are concatenated
together. The string length is assigned to DMI05.
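The nested call behaves like ordinary function composition. A rough Python equivalent (the stand-in function names and sample item values are invented for this sketch):

```python
def strcat(a, b):          # stands in for #STRCAT
    return a + b

def strlen(s):             # stands in for #STRLEN
    return len(s)

DMI01, DMI02, DMI03 = "AB", "CDE", "F"   # sample item values (invented)
# #STRLEN applied to the concatenation of DMI01, DMI02, and DMI03:
DMI05 = strlen(strcat(DMI01, strcat(DMI02, DMI03)))   # 6
```

The inner calls are evaluated first, so the length is measured on the fully concatenated string, just as in the rule above.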

Group item A data model item assigned the item type group. Used within a data
model to define a loop/set of group, tag, or defining items. Group item types
are not declared in the access model, since they neither parse nor construct
data.
The following example uses an invoice type message with records
to depict where group items might be declared in a data model:
Invoice_Document ;Group Item1
Heading_Rec_1
Heading_Rec_2
Heading_Rec_3
Detail_Loop ;Group Item 2
Detail_Rec_1
Detail_Rec_2
Sub_Detail_Rec_1 ;Group Item 3
Summary_Rec_1
Summary_Rec_2
Group Item 1:
Optionally, the invoice document can be defined to repeat. Since
translation follows the data model structure, if a group item is
declared, once an invoice is parsed, control automatically returns to
the top of the group for the parsing of the next invoice. Without the
group item declared, once an invoice is parsed, the data model
would then be exited.
Group Item 2:
This group is necessary to represent multiple occurrences of detail
information existing within the Invoice_Document. For each
occurrence, three different types of records can exist in the specified
sequence, including Detail_Rec_1, Detail_Rec_2, Sub_Detail_Rec_1.
Group Item 3:
Since sub-detail information consists of only one record type, the
repeat of this record can be defined in the Sub_Detail_Rec_1's
declaration without the need for a preceding Group Item. The use
of a Group Item would only achieve the same result.
See also defining item, tag item, and container item.

Hierarchy Refers to the tiered structure of data model items in the data
structure. Data model items are defined in relationship to other items
(parent, child, sibling); in this way, relationships such as header records to
detail records are established.

Inheritance Refers to the Profile Database’s ability to automatically
propagate values defined at higher hierarchy levels down to lower levels when
they are not found at the current level. This feature eliminates the need to
enter redundant data where many or all levels of the hierarchy use the same
information, for example, the same code lists.
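The fall-back-to-parent behavior can be sketched in a few lines. This is a Python analogy only; the path-tuple layout and the partner/message names are invented:

```python
# Hypothetical hierarchical profile: keys are path tuples, values are the
# NAME=VALUE pairs defined at that level.
profile = {
    ("ACME",): {"CODE_LIST": "UOM_STD"},  # defined at the partner level
    ("ACME", "PO"): {},                   # nothing defined at the message level
}

def lookup(path, name):
    """Walk from the current level up the hierarchy until the name is found."""
    for i in range(len(path), -1, -1):
        level = profile.get(tuple(path[:i]), {})
        if name in level:
            return level[name]
    return None
```

A lookup at the message level finds the code list defined once at the partner level, so the value does not need to be re-entered for every message type.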

Instance An occurrence of an item.

MapBuilder The Transaction Modeler Workbench program that allows you to
automate data mapping from source to target. For a source data model,
MapBuilder creates a rule that assigns a data model item’s value to a
variable. For a target data model, MapBuilder creates a rule that references
the variable for its value and assigns it to the data model item.

Map component file (.att file) A file that defines the resources that are
pulled together for part or all of a translation process (an environment).
Multiple environments are typically brought together to form a complete
translation session.
Example of a map component file’s content:
Input File =
Output File =
Source Access Model =
Source Data Model =
Target Access Model =
Target Data Model =
Substitution Key Prefix =
Xref Key Prefix =
Verify Key Prefix =
Trace Level =
Additional Parameters =
(Additional Parameters represent an unlimited number of
developer-defined environment variables.)
See also environment.

Mask The ability to control the output of numeric values. For example,
masking accomplishes the formatting of a string of numbers to
represent a date (12-09-92), a phone number (313-462-1200), or a
social security number (375-85-3477).
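A mask of this kind can be sketched as a copy-through-template routine. This is an illustrative Python sketch, not the product's masking syntax; the use of "9" as the digit placeholder is an assumption borrowed from common masking conventions:

```python
def apply_mask(digits, mask, placeholder="9"):
    """Copy digits through the mask, emitting literal mask characters as-is.
    The "9"-as-placeholder convention is an assumption for this sketch."""
    out, src = [], iter(digits)
    for ch in mask:
        out.append(next(src) if ch == placeholder else ch)
    return "".join(out)
```

For example, apply_mask("375853477", "999-99-9999") yields the social security format shown above, and apply_mask("120992", "99-99-99") yields a date.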

MetaLink A type of Application Integrator variable used to store and reference
a list of values, much like an array. MetaLink variables, however, maintain a
data model item’s instance and its parent’s instance with each value placed on
the variable. Only the source data model can assign values to the MetaLink. A
MetaLink is declared by assigning it a label and using the label in a rule.
Source data model action statement to store a value:
M_L->Part_No = DMI12
A target data model action statement is used to reference a value:
DMI03 = M_L->Part_No
See also Array and Temporary Variable.

❖ Note: MetaLinks are only intended to be used when the looping of source and
target data is the same or closely related.
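The instance-tracking idea can be sketched as follows. This is a Python analogy only; the class shape, method names, and sample values are invented, and the real MetaLink lives inside the translator:

```python
class MetaLink:
    """Sketch only: a MetaLink behaves like an ordered list whose entries also
    remember the item instance and the parent's instance."""
    def __init__(self):
        self.values, self.pos = [], 0

    def store(self, value, instance, parent_instance):   # source side assigns
        self.values.append((value, instance, parent_instance))

    def next(self):                                      # target side references
        value = self.values[self.pos]
        self.pos += 1
        return value

part_no = MetaLink()                     # plays the role of M_L->Part_No
part_no.store("WIDGET-12", instance=1, parent_instance=1)
part_no.store("WIDGET-99", instance=2, parent_instance=1)
```

Each stored value carries its own occurrence number and its parent's, which is what lets the target side keep detail values with the correct parent occurrence.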

Model A definition of data. Two types of models are used in Application
Integrator: access models and data models. Access models define types of data
by character sets and delimiter characters. Data models define the
organization of data by representing the data in structure (the relationship
of items to each other). On the input side, data models are referred to as
source data models; on the output side, they are referred to as target data
models.

Modeling The creation of models that describe the content of input and
output data streams and define and control the processing of each.

Parent An item that is on a hierarchical level immediately above items that
are considered to be subordinate to that item.
Example:
Parent_A
Child_A — child to Parent_A, Sibling to Child_B
Child_B — child to Parent_A, Sibling to Child_A
Parent_B — sibling to Parent_A
See also child and sibling.

Parse To break down information, such as a data stream, into its individual
parts, such as segments, segment delimiters, and data model items.

Pre-condition Part of the access model definition of an item type that describes
any rules about the data that precedes the item, for example, a
leading delimiter.

Production functional area Also referred to as a production system or
production area. The name given to an Application Integrator runtime system
that has been set aside for real-time electronic commerce activity. GE
Information Services highly recommends that data models be fully tested before
migration to a production functional area.
See also development functional area and testing functional area.

Profile Database An Application Integrator-provided database that stores
communication and trading partner profiles, including substitution values,
cross-references, and verification list codes.

Post-condition Part of the access model definition of an item type that describes
any rules about the data that follows this item, for example, a
trailing delimiter.

Radix The decimal notation character, often a period or comma.

Reference Application Integrator stores values in the database as a
NAME=VALUE pair, much like the operating system. Database
referencing is used to substitute a referenced label with its
associated value from the Profile Database. Before being
referenced, the Hierarchy Prefix Key must be set by declaration in
the map component file, or through the function # SET_EVAR. An
example of a rule referencing a label could be:
Condition use: DMI02 > $CREDIT_LIMIT (If the value associated
with DMI02 is greater than the value from the Profile Database
associated with the label CREDIT_LIMIT, the condition is true.)
Action use: DMI04 = $STDS_VER (Assigns the value associated
with the label STDS_VER to the data model item DMI04.)
Environment referencing is used to substitute a label with its
associated value from an environment variable.
Map component file declaration
OUTPUT_FILE=“(OUTPUTFILENAME)”
The environment keyword variable OUTPUTFILENAME is queried
for its value.
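Both forms of referencing amount to label substitution. A minimal Python sketch, assuming invented labels and values (the real substitution is done by the translator against the Profile Database and the environment):

```python
import re

env = {"OUTPUTFILENAME": "/tmp/out.dat"}   # environment variables (invented)
profile = {"CREDIT_LIMIT": "5000",         # Profile Database values (invented)
           "STDS_VER": "003040"}

def resolve(text):
    """Substitute $LABEL from the profile and (LABEL) from the environment."""
    text = re.sub(r"\$(\w+)", lambda m: profile[m.group(1)], text)
    text = re.sub(r"\((\w+)\)", lambda m: env[m.group(1)], text)
    return text
```

Resolving 'OUTPUT_FILE="(OUTPUTFILENAME)"' substitutes the environment value, while "$STDS_VER" in a rule is substituted from the profile.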

RuleBuilder The Transaction Modeler Workbench program that allows you to
create customized mapping rules via a graphical user interface. Using
RuleBuilder, you have access to the full functionality of the Workbench rules
system.

Rules Actions to be performed on a data model item. Rules consist of two
parts: a condition statement and one or more action statements. The condition
defines the state in which the actions associated with it are to be performed.
The condition can be declared as a “null condition,” which means that its
state is always true; its actions are always performed. Or the conditions can
be complex (a string of multiple or nested items and symbols). If the result
of the condition processing is true, its actions are performed.

Scope Refers to the range or period when data is available during the
translation process — from the point the data is declared (comes
into existence) to the point when the data ceases to exist (the data is
not associated with a label any more).

Semantic The association of items on the input side of a data stream with
MetaLink variables that are then associated with items on the
output side of a data stream. Refers to item mapping through the
use of a label to associate the two sides together, versus a positional
or index association.

Sibling A “sibling” is an item on the same hierarchical level in the data
model as another item.
Example:
Parent_A
Child_A — child to Parent_A, Sibling to Child_B
Child_B — child to Parent_A, Sibling to Child_A
Parent_B — sibling to Parent_A
See also parent and child.

Sort The ability to reorganize the occurrence of data. A sort is specified on
a group type item. Only defining items (items that have a value associated
with them) can be sorted.
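Sorting a group's occurrences by one defining item's value can be sketched as follows. This is an illustrative Python analogy; the record layout and item names are invented:

```python
# Each dict stands in for one occurrence of a group; Part_No and Qty stand in
# for defining items within it (names invented for this sketch).
detail_loop = [
    {"Part_No": "B-20", "Qty": 5},
    {"Part_No": "A-10", "Qty": 2},
]
# Sort the group occurrences by the value of one defining item.
sorted_loop = sorted(detail_loop, key=lambda occ: occ["Part_No"])
```

The whole occurrence moves together, so each Qty stays with its own Part_No after the sort.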

Source data model Data model that applies to the input (incoming) data to be
processed.

Substitution values Values associated with reference labels. The label becomes
a database lookup key, allowing substitution of trading partner-specific
information into a generic data model or environment definition.

Tag item A data model item assigned a tag item type. A tag item in a data
model identifies different records or segments in the data stream.
An application record typically contains a record type or code.
Records in the public standards X12 and UN/EDIFACT contain a
segment ID that is the “tag.”
See also container item, defining item, and group item.

Tag item type Tag item types, like defining item types, are declared in the access
model by the developer. A tag item type declares that the item will
be identified in the data stream by a matching value defined by a
tag item in the data model.
See also composite/component item type and defining item type.

Target data model Data model that applies to the output (outgoing) data to be
processed.

Temporary variable A temporary variable, referred to as a Variable in
RuleBuilder (as opposed to an Array or MetaLink), is used to store and
reference a single value. If more than one assignment is made to the same
variable name, the last assigned value is the value that is referenced. That
value is accessible from either the source or target side. A temporary
variable is declared by assigning it a label and using the label in a rule. A
common use of a temporary variable is a counter:
VAR->Sgmt_Cnt = VAR->Sgmt_Cnt + 1
A temporary variable can span multiple environments.
See also Array and MetaLink.

Testing functional area Also referred to as a testing system or testing area. The name given
to an Application Integrator runtime system that has been set aside
for the formal testing of data models before migration to a
production area. This area exists only in UNIX environments.
See also development functional area and production functional area.

Trace log The log of the translation process. A trace log (also called a trace)
shows the process flow through the data model(s), including the
assignment of variables and their associated values, conditions and
actions, and map component files. The trace log can be set to
various levels from minimal to full details (for example comparing
a character to the current character set). The trace log provides an
immediate and detailed debugging tool.

Translation session ID The ID of the last session number used. Stored in the Translation
Session ID file (tsid). The session number is used to create unique
administration records and filenames.

Translator “Translates” or converts a formatted data input stream to any other
formatted data output stream.

Triad The separator character between groups of three digits in a number (for
example, between the hundreds and thousands places), usually a comma.

User slot The name given to each connection slot on the Application Integrator
Control Server. A user slot is opened each time the Trade Guide is invoked, a
translation session is started, or a Control API (C object model) is linked to
the Control Server.

Value stack The value stack is a list of values in memory where constructed
data is stored. In Phase 1, values assigned to a defining item are stored on
the value stack in the order assigned. Phase 2 references the value stack to
write the values out while following the flow of the data model structure. The
entries in the value stack consist of the actual data and the label associated
with the defining item.
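The two-phase behavior can be sketched as follows. This is a simplified Python analogy of the idea, not the translator's implementation; function names and sample labels are invented:

```python
value_stack = []   # (label, value) pairs, kept in assignment order

def phase1_assign(label, value):
    value_stack.append((label, value))

def phase2_write(target_structure):
    """Walk the target structure; pull each label's next value off the stack."""
    out = []
    for label in target_structure:
        for i, (lbl, val) in enumerate(value_stack):
            if lbl == label:
                out.append(val)
                del value_stack[i]
                break
    return out

phase1_assign("Qty", "5")          # Phase 1: values arrive in assignment order
phase1_assign("Part_No", "A-10")
written = phase2_write(["Part_No", "Qty"])   # Phase 2: data model order wins
```

Even though Qty was assigned first, Phase 2 writes Part_No first because output follows the data model structure, not the assignment order.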

Verify list ID A key value that can be looked up in the Profile Database for
verification purposes. A value parsed from the input file is verified
for existence against a list of valid codes using a code list label
(Verify List ID) specified in the data model item's declaration.



Index

#

#CHARSET function
  defining, 27
#DATE function, 7, 81, 287, 316
#DATE_NA function, 27, 81
#FIFTH_DELIM function
  defining, 27
#FINDMATCH function
  defining limit, 278
#FIRST_DELIM function
  base values for, 27
  post-condition values, 29
#FOURTH_DELIM function, 27
#LOOKUP function, 27, 287
#NUMERIC function, 28, 73, 287, 316
#NUMERIC_NA function, 28, 73
#SECOND_DELIM function, 26, 27
#SET_FIFTH_DELIM function, 346
#SET_FIRST_DELIM function, 346
#SET_FOURTH_DELIM function, 346
#SET_SECOND_DELIM function, 346
#SET_THIRD_DELIM function, 346
#THIRD_DELIM function, 26, 27
#TIME function, 28, 82, 287, 316
#TIME_NA function, 28, 82, 83

$

$ (dollar symbol), 74, 80
$ (substitution) function, 28, 133, 142, 224, 284
$$ (session number) keyword environment variable, 171, 179

A

About
  Application Integrator help option, xv
About Workbench information box, 21
Absent mode rules, 104
Access icon
  description of, 34
  toggling display of, 51
Access model
  base, 24
  definition of, 403
  list of standard models, 24
  overview, 7, 24
  post-condition, 24, 29
  pre-condition, 24
  viewing contents of, 31
Access Model dialog box, 49
Access model item
  definition of, 403
Action
  definition of, 403
Administration database, 9
  definition of, 403
  overview, 11
Aliases, 222
AODL (Access Object Definition Language)
  definition of, 403

Application Integrator
  Customer Support, xvi
  de-enveloping files provided, 187
  enveloping files provided, 187
  file suffixes, 300
  program not responding, 327
  reserved names, 299
Array
  definition of, 404
  overview of support for, 106
Arrays tab, 131
Assignment
  inserting into rules, 120
ATTACH keyword
  using, 184
Attachment Definition dialog box, 177
Attachment dialog box, 17, 171, 182, 288, 331
  troubleshooting, 178
Attachment file, 170, 171, 176
  defining, 176, 177
  definition of, 410
  dialog box for defining, 17
  error codes, 184
  for de-enveloping, 187
  for enveloping, 189
  for generic report writing, 375
  modifying, 181
  naming conventions, 176
  overview, 170
  referencing, 301
  setting trace level in, 331
  troubleshooting, 178, 182
AUTOEXEC.BAT, 309

B

Backups
  Windows files, xix
Base
  (condition) definition of, 404
  values for, 27
Bidirectional sockets. See Sockets
BREAK keyword, 109, 135, 289
Button
  for collapsing data models, 34
  for expanding data models, 34

C

Calculations
  limitations, 70
Case sensitivity, 299
Category
  definition of, 404
Changing environments, 184
Character sets
  analyzing for data modeling, 272
Check Syntax command
  using, 140
Child. See Parent
  overview, 5
CLEAR_VAL keyword, 135, 136
Clipboard
  support for, 43
  support in RuleBuilder, 139
Closing
  Layout Editor, 99
Code (Verification List)
  definition of, 404
Code list
  inheritance between trading partner levels, 293
  setting, 85
  understanding for data modeling, 295
Collapse button, 34
Colors
  Windows display, viii
Comments
  illegal characters, 125, 126
  inserting into data models, 125
Composite item type

  definition of, 405
Condition, 289
  definition of, 405
  inserting into rules, 122, 127
Conditional expression, 9, 111, 130, 185, 332
  description of, 106
  icon, 116
  inserting into rules, 128
  types, 127
Conditions tab, 127
Connection Mode. See Sockets
  active, 220
  passive, 220
Construct
  definition of, 405
Container item
  definition of, 405
  icon, 66
  overview, 3, 5
CONTINUE keyword, 135, 289
Control Server
  definition of, 405
  directory, 24, 59, 302, 336
  environment variables, 309
  number, 229, 336
  queue ID, 309, 321, 326
  running in UNIX, 319
  starting in Windows Concurrent, 309
  trace log, 13, 336, 338
  user slot, 416
  version, xvi
Copy, 40
Copy command, 43
  using when data modeling, 43
Copying
  rules, 139
COUNTER keyword, 278
Cross-reference
  definition of, 405
  inheritance between trading partner levels, 293
  understanding before data modeling, 295
Customer Support
  calling, xvi
  contract, xvi
Cut command, 40, 43
Cutting
  data model item, 43
  rules, 139

D

-D parameter
  description of, 323
Data mapping. See also Data modeling
  overview, 1, 6
  steps to mapping using Application Integrator, 271, 272
Data model
  adding items, 64
  applying logic to, 8
  components, 6, 11
  defining new, 59, 93, 94, 95, 100, 101, 162, 368, 373
  definition of, 406
  establishing hierarchy, 91
  inserting items into rules, 129
  Items tab, 129
  overview, 6
  saving, 97
  saving under a new name, 98
Data model item
  adding, 64
  assigning item type, 66
  assigning min/max occurrence, 68
  attributes of, 56
  changing data hierarchy, 91
  changing the name of, 65
  copying to Clipboard, 43
  counters for, 90
  cutting to Clipboard, 43

  defining format for, 70
  definition of, 406
  duplicating, 45
  entering, 58
  establishing data hierarchy, 91
  format, 56
  increment counter, 57
  match value, 56
  pasting from Clipboard, 43
  relationships between, 5
  requiring value, 56
  setting min/max size, 69
  setting verification code list for, 85
  size, 56
  sorting, 57, 88
  specifying a second input/output file, 86
  verifying, 56
Data Model menu, 48
Data modeling
  analyzing the data, 272
  applying rules using MapBuilder, 152
  considering Profile database, 296
  creating attachment files, 288
  creating data models, 285
  creating rules/logic, 289
  cross-referencing, 295
  data logic, 274
  data occurrence, 273
  data relationships, 274
  data sequence, 273
  data structures, 273
  debugging, 297
  defining environments, 280
  laying out environments, 278
  naming conventions, 299
  needed character sets, 272
  obtaining requirements, 271
  overview of rules, 104
  relative references, 302
  running test translations, 297
  source to target mapping, 288
  steps to, 270
  syntax of data items, 272
  templates, 6
  test files for, 277
  understanding database lookups, 292
  understanding pre- and post-conditions, 273
Database
  description of key, 292
  inheritance, 292
  key, 292
DATE_CALC function, 9
Debug menu, 311
  Layout Editor, 55
Debug/View Trace, 317
Debugging, 2, 19, 169, 174, 270, 297, 307, 308, 341, 375, 407, 415. See also Debug menu
  hints for, 341
  large data volumes, 348
  overview, 308
Declarations
  inserting into rules, 138
Declarations tab, 138
Declare
  definition of, 406
DEF_LKUP function, 85, 295
Defining item, 66, 85, 88, 104, 339, 342, 344
  definition of, 406
  drag and drop, 160
  formats, 3
  icon, 66
  in process flow, 173
  output, 88
  overview, 3
  rules, 6, 110, 153, 155
Defining item type, 51, 56, 107, 286, 290
  definition of, 25, 406
  icon, 34

  in rules, 290
DEL_LKUP function, 295
DEL_SUBS function, 294
DEL_XREF function, 295
Delimiters, 4
Development functional area
  definition of, 407
Development tools, 2
DM Items tab
  inserting data model items, 129
DM_READ function, 142
Documentation, iv
  conventions, ix
Duplicate, 40
Duplicating
  data model items, 45

E

Edit menu
  Layout Editor, 40
  RuleBuilder, 115
Entity, 187, 188
  definition of, 407
  path to, 329
  Xref, 188, 189, 191
ENVIRON_LD function, 12, 407
Environment file (.env)
  definition of, 407
  overview, 12
Environment System Properties dialog box, 309
Environment variable
  assigning values to, 323
  defining in attachment files, 179
  definition of, 408
Environments
  changing during translation processing, 184
  common errors while attaching, 186
  defining for data modeling, 280
  definition of, 407
  multiple, 174
  overview, 10, 169, 170
  parsing sequence, 171
  single, 173
  single vs. multiple, 174
ERR_LOG function, 289, 336
ERRCODE function, 184, 290
Error Code Reference dialog box, xi
Error handling, 184
  checking syntax while modeling, 141
Error log
  overview, 13
Error mode rules, 105
Errors
  error code ranges, xii
Errors in Parse dialog box, 140
EXIT keyword, 136, 189, 289
Expand button, 34
Extended access device types. See also Sockets
  using, 211

F

FIFO
  devices, 211
  specifications, 266
File menu
  Layout Editor, 37
  RuleBuilder, 114
  Workbench main menu, 19
Find, 40
Find dialog box, 46, 334
Find Next Parameter command
  instructions for, 140
FINDMATCH_LIMIT environment keyword, 171, 278
Fixed length data, 4
Formatting
  data model items, 70
  dates, 81, 82

  test model for, 99
Function
  syntax checking, 150
Functions
  definition, 408
  GET_EVAR, 323
  inserting into rules, 134
  overview, 109
  SET_EVAR, 323
Functions tab, 134

G

GET_EVAR function, 171, 324
GET_GCOUNT function, 142
Grid lines
  toggling display of, 53
Group item
  definition of, 409
  icon, 66
  overview, 5
  sorting option for, 88
  specifying file, 56
  variable counters, 90

H

Help
  using, xi, xii
Help menu
  RuleBuilder, 117
  Workbench main menu, 21
Hierarchy
  definition of, 409
HIERARCHY_KEY keyword environment variable, 171, 187, 189, 294, 321
Horizontal scroll bars, 34
hostname
  fully qualified, 213
  locating for your computer, 214

I

Increment option
  explanation of, 90
Inheritance
  definition of, 410
  description of, 293
inittrans, 297, 318, 319, 335
  arguments for, 321
  Control Server, 377
  overview, 308
  troubleshooting, 328
Input data
  requirements for translating, 271
Input file
  defining in attachment file, 178
  specifying a secondary, 86
  viewing in Workbench, 366
Input File dialog box, 50
INPUT_FILE keyword environment variable, 12, 171, 211, 213, 222, 228, 233, 238, 243, 250, 262, 321, 324
Insert Files into Project dialog box, 226, 227
Instance
  definition of, 410
Internet, 213
IP Address dialog box, 218

J

Justifying data
  masking characters for, 78

K

Keywords
  inserting into rules, 135, 137
  overview, 109
Keywords tab, 135

L M
Labels Map component file
data modeling limits, 300 naming conventions, 280
Layout Editor MapBuilder, 20, 42, 110, 161
closing, 99 default values, 151
Data Model menu, 48 definition of, 410
Debug menu, 55 icon, 15
Edit menu, 40 loop control, 162
File menu, 37 mapping data, 157
Help menu, 55 modeling session helps, 157
menus, 37 overview, 2, 110, 151
overview, 34 Preferences dialog box, 151, 153
overview of window, 33 processing messages, 161
restoring, 39 symbol, 158
toggling access icons, 51 using, 152, 158
toggling grid lines of, 53 Mask
View menu, 49 definition of, 411
window, 17 Masking characters
Layout Editor dialog box, 17 for dates, 81
Listing disk/tape contents, xx for justifying, 78
Literals for literals, 78
inserting into rules, 124 for numbers, 70
masking for, 80 for positive/negative sign, 75
LKUP function, 85, 287, 295 for time, 82
LOG_REC function, 289, 336 for triads, 78
LOOKUP_KEY keyword environment variable, Math operators
85, 171, 295, 322 inserting into rules, 130
Loop control Message queues
enabling, 156 specifications, 267
Loop Control Needed dialog box, 164 using, 211
overview, 162 MetaLink
source loop rules, 165 as a variable, 406, 415
target loop rules, 167 clearing values, 135
troubleshooting, 163 comparing to Array, 107
undoing, 162, 165 counters for, 90
using, 162 declaring references, 172
Loop Control Needed definition of, 8, 107, 411, 414
dialog box, 164 EXPORT, 136
using, 164 increment, 287


length limit, 300 Open dialog box, 62


resetting, 408 Operators tab, 130
resetting pointer, 134 otcsvr, xvi, 308
source rules, 275 OTEnvelp.att, 10, 170
values, 339, 340 using, 189
variables, 57, 90 OTmdl, 98, 141
MetaLinks otrans, xvi, 308, 314, 319, 328, 335, 339
tab, 132
Microsoft Developer Studio, 225 OTRecogn.att, 10, 170, 171, 381
Model. See also Data model using, 187
definition of, 411 OTReport.sh, 377
Modeling. See also Data modeling description of, 376
definition of, 411 otrun.exe, 99, 297, 314, 318, 319, 338
N overview, 308
Outbound X12 Values dialog box, 189
Naming conventions, 176, 299 Output
Negative sign sorting, 88
masking for, 75 Output data
Network dialog box, 214, 217 requirements for translating, 271
New dialog box, 225 Output file
New Model Definition dialog box, 59 defining in attachment file, 178
New Project Workspace dialog box, 225 specifying a secondary, 86
Notebook viewing in Workbench, 366
inserting options from, 123 Output File dialog box, 50
Null condition
description of, 106
P
inserting into rules, 121
Numbers Parameter
handling, 70 prompting for, 140
Numeric data Parent
formatting, 70 definition of, 412
masking characters for, 70 overview, 5
Parse
O command line syntax checking, 142
definition of, 412
Occurrence translator and Workbench syntax checking,
understanding for data model item, 56 143
On-line Help Parse on Errors dialog box, 141
using, xi Paste, 40


Paste command, 43 referencing, 413


using when data modeling, 43 trading partner lookup, 187, 189
Pipes values, 9, 270, 293, 296
specifications, 266 verification lookup, 416
using, 211 Profile Database Interface Worksheet, 294, 295
Positive sign Project Settings dialog box, 225, 226
masking for, 75 Properties for OTCSVR.EXE dialog box, xvi
Post-condition
defined in access model, 24, 29 R
definition of, 412
understanding, 273 Radix
values for, 29 definition of, 346, 412
Pre-condition IO detail, 316
defined in access model, 24 key phrase, 342
definition of, 412 rule functions, 316
understanding, 273 using in scripts, 350, 352, 354, 356
values for, 26
Preferences dialog box Record
accessing, 154 data mapping for, 4
default values, 151 Record lock
option settings, 154 acquiring. See also Profile database
overview, 153
Present mode rules, 104 Redo, 40, 41
Processing flow, 173 actions, 41
Production functional area clearing actions, 41
definition of, 412 data model action, 42
Profile database, 56, 85, 310 Reference
changing values, 289 definition of, 413
cross-references, 188, 295, 405 References
data model names, 282 explicit, 301, 302
defining, 187 relative, 301, 302
definition of, 412 Reports
entity, 407 generating user-defined, 375
extensions, 298 on translations, 375
functions of, 188, 292 Restoring
hierarchy codes, 293, 294 Layout Editor, 39
inheritance, 410 Rule
key prefix, 278, 283, 287, 404 modes, 104
lookups, 283, 292, 294, 295, 296, 297 notebook, 119
overview, 11 Rule Edit Workspace


description of, 113 Run dialog box, 55, 229, 235, 240, 246, 247, 252,
Rule Notebook 257, 258, 297, 311, 317, 330
description of, 112 Runtime
RuleBuilder, 110 syntax checking, 146
accessing, 117
Arrays tab, 131 S
Conditions tab, 127
Data model items tab, 129 Save As dialog box, 62, 98
Declarations tab, 138 Saving
definition of, 413 data model, 97
dialog box, 17, 48, 111, 118 standard data models under new names, 62
Edit menu, 115 Scope
File menu, 114 definition of, 413
Functions tab, 134 Scroll bars, 34
Help menu, 117 Search-Type dialog box, 366
icon, 34 Segment
Keywords tab, 135 data mapping for, 4
MetaLinks tab, 132 Semantic
Operators tab, 130 definition of, 414
overview, 2, 104, 110 Session log
Rule Notebook, 111 overview, 13
Substitutions tab, 133 Session number, 13, 55, 171, 309, 328, 375, 309,
tabs, 127 328, 375
toolbar, 113 translation session ID, 415
Variables tab, 131 Session Output dialog box, 231, 235, 236, 240,
window, 17 241, 246, 247, 252, 253, 257, 259, 323, 325
Rules
adding, 119 SET_CHARSET function, 27
applying changes, 120 SET_DECIMAL function, 73, 344
checking the syntax of, 140 SET_ERR keyword, 137, 290, 291
completing via prompts, 140 SET_EVAR function
copying to Clipboard, 139 enveloping, 189
cutting to Clipboard, 139 explanation of, 292
definition of, 413 rules, 332
methods of creating, 110 using with trace, 332, 348
modifying, 119 SET_FIFTH_DELIM function, 344
overview, 9, 104 SET_FIRST_DELIM function, 29, 344
reasons for using, 9 description, 27
types of conditions, 106 SET_FOURTH_DELIM function, 344
using to set trace level, 332 SET_LKUP function, 295


SET_RELEASE function, 344 using MS Visual C++, 225


SET_SECOND_DELIM function, 26, 344 Sort
SET_SUBS function, 294 data model items, 88
SET_THIRD_DELIM function, 26, 344 definition of, 414
SET_XREF function, 295 Sort dialog box, 88, 89
Sibling Sort option
definition of, 414 changing, 129
overview, 5 Source data model
Sign creating, 23
formatting characters for, 75 definition of, 414
Sockets overview, 6
aliases, 222 Source model. See Source data model
attributes, 213, 220 Standard data models
bidirectional, 214, 221 saving under new names, 62
client/server relationship, 212, 214, 220 Standards dialog box, 85
closing, 220 Steps to data modeling, 270
compiling C programs, 224 STRCAT function
configuring, 262 concatenating, 187
connection mode, 212, 220 logic, 189
data transfer mode, 221 STRSUBS function, 342
defining, 211, 213, 221 STRTRIM function
editing UNIX shell scripts, 224 concatenating, 187
editing Windows batch files, 225 logic, 189
editing Windows Concurrent batch files, 225 Substitutions
error messages, 264 definition of, 414
examples, 223, 228–61 inheritance between trading partner levels, 293
hostname, 213, 214, 222 inserting into rules, 133
overview, 212, 222 understanding for data modeling, 294
passive, 214, 220 Substitutions tab, 133
persistent, 214, 220 Syntax
port number, 213, 222 checking error conditions, 141
retry attempts, 214 checking rules, 140
retry attribute, 220 items not checked, 148
senders and receivers, 213
specifications, 213 T
syntax, 211, 213
unidirectional, 221 T_ACCESS keyword environment variable, 12,
UNIX, Windows Concurrent and Windows 171, 172, 321, 378, 380, 381
examples, 212 T_MODEL keyword environment variable, 12,
using, 211, 221 171, 172, 322, 378, 380, 381


Tabs understanding output, 336


inserting options from, 123 using for debugging, 341
Tag item viewing via command line, 335
definition of, 414 viewing via Workbench, 333
icon, 66 ways to set, 330
overview, 4 Trace Settings dialog box, 317, 330
setting match value for, 84 TRACE_LEVEL keyword environment variable,
Tag item type 171, 322, 332, 348
definition of, 415 Trade Guide
Target data model Customer Support, xvi
creating, 23 prerequisites, viii
definition of, 415 using to report on translations, 375
overview, 6 Trading partner
Target model. See Target data model database key, 292
TCP/IP inheritance between levels, 293
network. See Sockets recognizing during processing, 187
Properties dialog box, 215 Trading Partner Profile dialog box, 133, 189, 292
using sockets, 211 Transaction Modeler Workbench. See Workbench
Templates Translating
saving under new names, 62 arguments for, 321
Temporary variable at the command line, 319
definition of, 415 example command lines, 325
overview of support for, 106 overview, 308
Testing functional area preparation for, 309
definition of, 415 reporting on, 375
Thousands separator character. See Triad terminating, 327
Toggling trace log for, 330
access icons, 51 troubleshooting, 328
grid lines, 53 UNIX, 319
Tools menu, 20 using Workbench interface, 311
Trace level Translation requirements
setting, 314 obtaining, 271
setting in Run dialog box, 318 Translation Session ID file
table of values, 314 definition of, 415
Trace log overview, 13
definition of, 415 Translator
example output, 349 definition of, 416
generating, 337 syntax checking, 146
organization of, 339 Triad
overview, 13, 330 definition of, 416


masking for, 80 supported by Application Integrator, 106


using, 9, 71, 80 Variables tab, 131
Troubleshooting Verification code list. See Code list
error messages, 163 Verify list ID
loop control, 163 definition of, 416
Loop Control Needed dialog box, 164 Vertical scroll bars, 34
MapBuilder, 163 View menu
UNIX, 328 Layout Editor, 49
Windows, 329
Windows Concurrent, 329 W
TRUNCATION_FLG keyword environment
variable, 337 Windows
editing batch files, 225
hostname, 214, 217
U
maximum command line characters, 325
ulimit command, 341 running Workbench from, 14
using, 341 starting sockets examples, 229
Undo, 40, 41 starting sockets translations, 230
actions, 41 translator program, 308
clearing actions, 41 viewing trace output, 338
data model action, 42 Windows Concurrent
Unidirectional sockets. See Sockets as multiple user system, 308
UNIX Control Server, 14, 309
compiling sockets examples, 223, 232 editing batch files, 225
hostname, 214, 232 ending tasks, 327
maximum command line characters, 325 hostname, 217
starting sockets examples, 232 invoking translations, 320
starting sockets translations, 232 maximum command line characters, 325
translating at the command line, 319 running offline, 14
troubleshooting translations, 328 running translations, 308
viewing trace output, 336 sockets theory, 222
User slot starting sockets examples, 229
definition of, 416 starting sockets translations, 230
terminating, 327 trace log, 337
troubleshooting, 329
V unsupported setups, 14
Winsock, 222
Variable length data, 4 Workbench
Variables development tools, 2
overview, 8 File menu, 19


Help menu, 21 X
Layout Editor window, 34
main window, 15 XREF function, 187, 188, 189, 295
overview, 1, 2 XREF_KEY keyword environment variable, 171,
saving changes in, 37 187, 189, 295, 322
Tools menu, 20 Xrefs/Codes dialog box, 295, 296, 297
window components of, 15

