OASyS DNA®

RealTime Reference

www.schneider-electric.com
Legal Information
The Schneider Electric brand is the sole property of Schneider Electric Industries SAS. Any
trademarks of Telvent Canada Ltd. or Telvent USA, LLC referred to in this guide are the sole
property of Telvent Canada Ltd. or Telvent USA, LLC. The Schneider Electric brand and
trademarks may not be used for any purpose without the owner's permission, given in writing. This
guide and its content are protected, within the meaning of the applicable copyright and trademark
laws and international conventions. Unless otherwise agreed in writing, you may not reproduce all
or part of this guide on any medium whatsoever without the permission of Telvent Canada Ltd. You
also agree not to establish any hypertext links to this guide or its content. All other rights are
reserved.
© 2015 Telvent Canada Ltd. All rights reserved.
Table of Contents
1 Introduction.................................................................................................................................4
1.1 General Functionality........................................................................................................5
1.2 Database Concepts...........................................................................................................6
1.3 Communications................................................................................................................7
2 Omnicomm Process...................................................................................................................8
2.1 Omnicomm Connection Basics.........................................................................................9
2.2 Configuring Tables..........................................................................................................10
2.3 Communication Scenarios...............................................................................................11
3 Alarms and Events...................................................................................................................14
3.1 Events.............................................................................................................................14
3.2 Alarms.............................................................................................................................15
3.3 Alarm Suppression..........................................................................................................24
3.4 Interaction Between Suppression Types.........................................................................30
4 SWANA....................................................................................................................................31
4.1 Starting the SWANA Program.........................................................................................31
4.2 SWANA window..............................................................................................................32
5 RealTime Database Utilities.....................................................................................................42
5.1 Reading and Writing Data...............................................................................................42
5.2 Deleting Records (dbdel)...............................................................................................47
5.3 Database Lister/Loader...................................................................................................47
5.4 Creating a Template from RealTime................................................................................49
5.5 Listing RealTime Tables..................................................................................................50
5.6 Loading Data into RealTime from a text file....................................................................53
5.7 Creating Remote Records...............................................................................................54
5.8 Loading Records Using Calculation and Control Routines.............................................54
5.9 Other Utilities for Loading and Saving.............................................................................56
5.10 Obtaining Information from RealTime Tables................................................................59
5.11 minSendTimeMs............................................................................................................63
6 OPC Data Access Server.........................................................................................................65
6.1 OPC Server.....................................................................................................................65
6.2 OPC DA Client and Server Connection..........................................................................66
6.3 Data Sources and Tag Names........................................................................................68
6.4 Writing Data.....................................................................................................................70
6.5 Security...........................................................................................................................73
6.6 OPC Timestamps............................................................................................................74
6.7 OPC Data Quality............................................................................................................75
6.8 Redundancy and RealTime Failovers.............................................................................78
6.9 Client Browsing...............................................................................................................79
6.10 Performance and Optimization......................................................................................80
7 Protocols..................................................................................................................................82
7.1 Typical Configuration Procedure.....................................................................................82
7.2 Modbus (Generic)............................................................................................................89
7.3 OPC...............................................................................................................................100
8 Accessing XOS Elements (Sound, Icon, Bitmap Files) from ADE..........................................115


1 Introduction
This manual describes the system for real-time data gathering, alarm annunciation and
response, and interactive device control. Originally developed for oil and gas pipelines, it is also
used in the electrical and water transmission, distribution, and control industries. This document
discusses the configuration and administration information and procedures involved with
RealTime Services and databases.

• RealTime Service collects and scales data, checks for alarm conditions, stores real-time
information, communicates with remote terminal units (RTUs) or programmable logic
controllers (PLCs), and enables the user to send out control commands to field devices.

• ezXOS enables users to interact with the system; it includes data summaries, dynamic
maps, device control dialog boxes, and a mouse-controlled command interface.

There are three user groups: technical, administrative, and operational staff.
• Technical staff (including engineers and programmers) are responsible for RealTimeDB
configuration, interface design and applications development in languages such as C.

• Administrative staff, including system administrators, managers, and supervisors, are
responsible for system maintenance, troubleshooting, and future system development. They
have access to the operating system and are able to start and stop programs and run utilities
and other specialized software.

• Operations staff are responsible for the daily operations of the system, which includes
monitoring of facilities, response to alarms, and control of field devices. Some configuration
changes, such as alarm-limit settings, are also under operator control.

Figure 1 - System Users

The administration of the system involves the configuration and maintenance of the RealTime
database, as well as the design and generation of custom reports and displays. This manual
provides reference information about these databases. This information is necessary for
performing the tasks that are involved.

1.1 General Functionality


The OASyS DNA system consists of three main subsystems: RealTime Service, ezXOS, and
Historical Service.

RealTime Service

This service provides a real-time database and program package that collects data, checks for
alarm conditions, scales values, drives devices, provides storage space for holding current
information, and provides processes to enable the user to send out commands to field devices.

The RealTime Service communicates with the remote terminal units (RTUs) or programmable
logic controllers (PLCs), transmitting commands and gathering current system information.

ezXOS

The ezXOS subsystem provides the system displays and windows that appear to the users on
the monitor screen. ezXOS (eXtended Operator Station) lets operators and other authorized
users interact with the other components of the system using a Graphical User Interface (GUI).
The ezXOS system includes workstations that typically have a mouse-controlled command
interface.

Historical Service

The Historical subsystem provides the storage space, HistoricalDB (a relational database), for
historical data, as well as the capacity for creating reports from this information. The Historical
Service is connected to the RealTime Service so that information can be transferred from the
real-time system into the historical system. For more information, refer to the Historical Reference.

Other OASyS DNA Components

The following table describes additional components within the OASyS DNA system.

Table 1 - Other OASyS DNA System Components


Engineering Station or System
  The Engineering Station or System is used for ezXOS display development, offline RealTime configuration, and system backups. It is also used as the source for file distribution and in maintaining and enhancing the system's functionality. When system startup is performed, this service acts as an independent system containing its own RealTime and Historical services. The service, in this mode, is used for offline database configuration and offline ezXOS display development. If the service startup is not selected, then this host can be used as an ezXOS station for online display viewing or development, and for database viewing or configuration.

DES
  The DES (Display Edit Station) is used for ezXOS display development. The Engineering Station is also used for display development, but the DES subsystem provides additional stations from which display development is possible. The system always includes an Engineering Station; however, it may or may not include the DES. Except for the ability to develop ezXOS displays, a DES does not possess the other functions of the Engineering Station.

Redundant Hosts
  The system is managed and operated from a master control station, which functions as the central data storage and control center. Many installations also include one or more secondary hosts that provide standby services at the site, or off-site services that serve as backup in the event of a disaster or maintenance at the primary site. The standby and off-site services take control of the system when the controlling station requires off-line maintenance or fails to operate.

Other Hardware Devices
  In addition to multiple computers, several hardware devices support the system, including workstations, printers, remote terminal units (RTUs), programmable logic controllers (PLCs), field sensors, and controllers. Operator workstations and printers can be located at the main control site as well as at remote locations. Each workstation has its own central processing unit (CPU), one or more high resolution color monitors for graphics display, a keyboard, and a “pointing device,” typically a mouse or trackball. The system interface features high resolution operator workstations and mouse-selected operations for greater efficiency. Printers are typically connected to the workstation. These are used to produce reports and to output hard copies of console windows. Most systems are typically configured with two printers: one for producing scheduled or demand reports, and another for printing the colored graphics of windows or screens. RTUs or PLCs are typically used for data acquisition and control at remote sites. Schneider Electric manufactures its own line of data acquisition and control equipment.

1.2 Database Concepts


The database management system, which is the central structure of your system, is organized
into two major components, one optimized for real-time data and the other for managing long-
term historical data. The real-time functionality is managed using the RealTime database
system. Long-term system history is managed using the Historical database.

The database management system provides all of the interfaces required to move data between
the components. It is structured to gather data, store acquired data, and provide a framework for
data processing, device control, and internal monitoring of system processes.

The RealTime database is specially designed for real-time data processing.

Typically, both configuration data and real-time data from the field instruments are contained in
the same RealTime database table. For example, in the Analog and Rate tables, each record
(i.e. each row) represents the current configuration and state of one device. RealTime
databases remain fixed in size because the real-time data is constantly flowing through them.
The number of records only grows when new devices are configured.

The system continuously scans, or polls, the data sources for new data. (This polling takes from
several hundred microseconds to several seconds.) Every time the system gathers data from
the data sources, it populates the RealTime database fields. This data only remains in the
RealTime database until the next update, at which time it is overwritten with fresh data. Real-
time data from the RealTime database is displayed on the ezXOS user interface and updated
on every scan.


1.3 Communications
Omnicomm is the communication process that controls and manages data transfer. The
communication between hosts and remotes occurs over connections that are managed by one or
more Omnicomm processes.

Data acquisition describes the process of reading data into the system from field devices. The
process of writing commands from the system down to field devices represents the operational
control of these devices.

Acquired data refers to the telemetered data values that measure conditions such as pressure
and temperature in the field. It also includes data sent from the remote site to the system in the
form of text messages that indicate alarm and event conditions. These text messages help to
monitor the process conditions and alert the user to problem situations.

Data and commands are transferred along the connection between the host computer and the
remote device (e.g. RTU, PLC), and between that remote and its attached field devices.

A variety of devices serve as remote processing units (remotes) to process and transfer the field
data back to the host. These include different remote terminal units (RTUs), gas
chromatographs (GCs), programmable logic controllers (PLCs), and flow computers (FCs). The
remote scans the field devices, which are mapped into the RealTime database by unique data
input/output coordinates.

Data is passed between the remotes and the host computer through a connection.

For more information about Omnicomm, its associated RealTime tables and communications
scenarios, refer to the Communications chapter.


2 Omnicomm Process
Omnicomm is the communication process responsible for controlling and managing the transfer
of data between the RealTime database and remote devices such as RTUs. An Omnicomm
process manages all connections between remotes and the host computer. The process also
initiates all queries and moderates communications.

An Omnicomm process runs on the host computer to control a group of remotes and their
connections. Omnicomm interprets packets of data according to predefined protocols. A
protocol defines the interpretation of data during polling and the commanding of field devices.
Each remote uses a specific protocol to communicate with the host computer.

Figure 2 - Basic Data Acquisition Process

The protocol-specific configuration of a Remote record establishes the aspects of communication
that are distinct to a given protocol/remote combination. Several standard protocols can be
supplied with all systems, including Modbus and OPC.

Other records in the RealTime database, such as Analog records, reference logical data input/
output coordinates. The protocol driver maps protocol-specific entities (e.g., Modbus registers)
to the appropriate coordinates when it processes the data into the RealTime database.

• The Omnicomm process requests data from each remote, and transfers it to the appropriate
RealTimeDB table (analog, rate, or status).

• An Omnicomm process runs against a grouping of remotes and their connections.

• Omnicomm treats all connections generically, independent of the process using the
connection. Thus, connections with different attributes (dial, network) and needs (e.g.,
modems for dial connections) are all handled by the same Omnicomm process. A dial
connection is unique in that it can lock a modem if a relevant modem is available; hence, it
requires a combined connection and modem record, whereas other connections (e.g.,
network) do not.

• Modems are dynamically allocated from the modem bank.

2.1 Omnicomm Connection Basics


All telemetry systems follow a similar pattern of connections when using Omnicomm.

Figure 3 - General Connection Diagram for any Telemetry System

• Remotes can be reached via many connections. There is no limit to the number that may be
configured.

• Circuits group physical connections together. Two physical connections are in the same
circuit if they either directly or associatively conflict with each other. Circuits are used to
prevent Omnicomm from sending multiple messages over different connections where these
messages would collide or interfere with each other. Omnicomm will never attempt to
perform more than one operation at a time over a single circuit, regardless of the number of
connections in that circuit.

• Each physical connection used by each remote is assigned a Cost Factor. Cost Factors are
configured in the Remote Connection Join (remconnjoin) row edit dialog box. Refer to the
Advanced Database Editor for more information.

• Omnicomm uses this factor to determine which connection to use when the current one fails.
The Cost Factor is an integer that indicates how expensive a connection is; higher values
indicate more expensive connections.

NOTE: A Cost Factor is a mechanism to indicate the relative priority for connections. It is not
necessarily associated with monetary cost.

• Omnicomm never fails back. If a remote has multiple connections, Omnicomm will
continuously use the connection with the lowest Cost Factor until it breaks. Omnicomm will
then fail over to an alternate connection with a higher Cost Factor. If the connection with the
lower Cost Factor becomes available again, Omnicomm will never switch back (fail back) to
using it. However, once the connection controller has sent test polls down the repaired
connection, it can determine whether or not it is appropriate for Omnicomm to switch back to
the connection with the lower Cost Factor.

• External scripts (or user commands) are used by the connection controller to force
Omnicomm to fall back to previously used connections. Omnicomm does not automatically
test connections. Instead, user-written scripts or manual commands may be used to test any
connection-remote combination. This way, you may implement any desired control or timing
logic.

• All connections that are used for a single remote must use the same communication mode
(e.g., Host Poll).
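
The connection-selection behavior described above can be illustrated with a short sketch. The following C fragment is a minimal, hypothetical model (the structure and field names are assumptions, not the actual Omnicomm data structures): it always picks the lowest-Cost-Factor connection that is currently usable, and it does not fail back on its own.

#include <stdio.h>

/* Hypothetical model of a remote's connections; not the real Omnicomm tables. */
struct conn {
    const char *name;
    int         cost_factor;   /* lower = preferred */
    int         failed;        /* set when the connection breaks */
};

/* Pick the usable connection with the lowest Cost Factor.
 * Omnicomm itself never "fails back": once it has switched away from a broken
 * connection it keeps using the alternate until an external script or the
 * connection controller clears the failure state after test polls. */
static struct conn *select_connection(struct conn *conns, int n)
{
    struct conn *best = NULL;
    for (int i = 0; i < n; i++) {
        if (conns[i].failed)
            continue;
        if (best == NULL || conns[i].cost_factor < best->cost_factor)
            best = &conns[i];
    }
    return best;   /* NULL means no usable connection */
}

int main(void)
{
    struct conn conns[] = {
        { "leased-line", 1, 0 },
        { "dial-backup", 5, 0 },
    };

    struct conn *c = select_connection(conns, 2);
    printf("using %s\n", c->name);            /* leased-line */

    conns[0].failed = 1;                      /* primary breaks */
    c = select_connection(conns, 2);
    printf("failed over to %s\n", c->name);   /* dial-backup */

    /* Even after the primary is repaired, Omnicomm stays on the backup until
     * an external test poll or script clears the failure state. */
    return 0;
}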

2.2 Configuring Tables


There are several RealTime tables that contain records associated with Omnicomm. There are
certain considerations that should be taken into account when working in these tables to ensure
the connections are configured properly.

The following must be observed when configuring records in tables associated with Omnicomm:

• All connections for a remote must use the same Omnicomm process.

• Connections usually have datasets owned by one system. If the network routing allows,
connections may also be switched from one system to another.

• A remote’s dataset can usually be switched from one system to another.

• Remotes may have datasets distinct from those of the connections.

Example:

− Dataset DS1 for connections C1 and C2; always owned by system A

− Dataset DS2 for connections C3 and C4; always owned by system B

− Dataset DS3 for remote R1; control of this remote may be switched between system A
and system B.

On System A, R1 uses connections C1 and C2. On System B, R1 uses connections C3 and C4.


The following table lists the different tables involved in configuring Omnicomm.

Table 2 - RealTime Tables Associated with Omnicomm


omnicomm
  This table lists all Omnicomm processes on the system.

mbank
  This table is used to group modems into virtual banks. It also posts reservations for modems when no modems are currently available.

modem
  This table provides dynamic allocation of physical modems for connection use.

connection
  This table contains all unique paths of communication to the remotes.

remconnjoin
  This table contains all remote-connection assignments.

remote
  This table contains one row for each defined remote.

circuit
  This table is a placeholder that indicates the existence of a circuit within a connection.

sigconfig
  The signal configuration (sigconfig) table contains port configuration information (e.g. baud rate, parity).

2.3 Communication Scenarios


Leased-line, VSAT, or temporary dial connections are used for communication between the
remotes and the RealTime server. Each type of communication media is different and can be
used with a multi-drop or point-to-point configuration.

2.3.1 Multi-Drop and Point-to-Point Configuration


Multi-drop and point-to-point configurations can be used with any of the communications media
that are described in this section; however, the examples shown for leased-line and dial
connections are typical. Leased-line connections are generally multi-drop, and VSAT and
temporary dial connections are point-to-point.

2.3.2 Communication Diagrams


The Leased-line connection (multi-drop), VSAT connection (point-to-point), and
Temporary dial connection (point-to-point) figures illustrate the main differences between
leased-line, VSAT, and temporary dial connections.

Leased-Line Connection

The figure below shows a leased-line connection with multi-dropped remotes of varying speeds.
A leased-line connection is a fast connection. A relatively small amount of time is required to
establish the connection, and the communication delay time, or latency, is low.

Note that although the same connection is used for both remotes, communication between the
host and each remote is different because of the delays and timeouts configured in the remote
records.


Figure 4 - Leased-line connection (multi-drop)

VSAT Connection

The figure below shows a VSAT connection with a slow remote. A VSAT connection is a slow
connection. The time required to establish the connection is similar to that of a leased-line,
since the connection is usually to a local VSAT router. However, a VSAT connection has a large
communication delay time because of the propagation delay inherent in satellite
communication.

Figure 5 - VSAT connection (point-to-point)

Dial Connection

The figure below shows an example of a temporary dial connection with several remotes.
Although it takes some time to establish a dial connection, once it is connected it can be as fast
as a leased-line connection.


Figure 6 - Temporary dial connection (point-to-point)


3 Alarms and Events


Alarms and events are used to announce and record important activities within the system.
There are several configuration options that allow you to set conditions for the generation and
logging of alarms and events.

An event is a record of conditions and activity within the system. The event history provides a
chronological record of changes in the system’s condition, as well as actions taken by system
users over time. An alarm is used to announce a significant event that requires an operator’s
immediate attention. The generation of an alarm also creates a corresponding event record.
However, the generation of an event does not necessarily create a corresponding alarm.

You can configure conditions for the following:

• Generation of alarms

• Annunciation and display of alarms


• Suppression of alarms by conditions in other related records

• Printing of event logs

3.1 Events
The Event Summary window provides a detailed summary of the operational activity on the
SCADA system. Events are recorded both for operator-initiated actions and for application-
generated activities.

A record is generated in the event summary when any of the following occur:

• The system detects a condition which will generate an alarm

• A significant event occurs in an application

• The user issues commands to field devices

• The user modifies system configuration parameters

• The user acknowledges an alarm

Event Logging

When an event is generated, it is recorded in two locations. First, a copy is stored in the
HistoricalDB Event table (refer to the Historical Reference). You can view the event through the
Event Summary window in ezXOS, as discussed in the Operation Reference.

Second, the event message is formatted and placed in the queue for the appropriate spooler.
The spooler process records the event on the appropriate log printer or in a log file. The event
message takes the form of a single line of text stating the nature of the occurrence. For more
information, refer to the Historical Reference. The group to which the field device is assigned
determines which spooler is used. The designated “system” spooler is used for events that do
not have an appropriate group. (Groups are discussed in detail in the group Table (Module 19)
in the RealTime Tables Reference).

The Event database will quickly fill with data if you do not empty or purge it periodically. The
archive/cleanup process performs the necessary database purging. If you store events in a log
file, you should periodically delete old entries from the log file. Spooling events to a file is also
discussed in the Historical Reference.


3.2 Alarms
Alarms are used to notify the operator of a significant event or state. The alarming subsystem
generates database and system alarms, and the alarm conditions are highly configurable.

The system generates three kinds of alarms: database alarms, system alarms, and application
alarms. A database alarm is always associated with a specific record in the RealTime database.

A condition that generates a system or application alarm may or may not have a specific
RealTime record associated with it. System and application alarms are similar in that they do
not need to be tied to a specific point in the database; however, you should elect to configure an
application alarm in situations where you want to use one or multiple fields in the alarm
database that are not available for system alarms. Another benefit of application alarms is the
ability to have them replicated based on the dataset field. Most alarms are database alarms.

Alarms refer to either a state or a significant event. If the alarm refers to a state, it persists until
the alarm is acknowledged and the condition that caused the alarm is cleared. For example, a
value that moved an analog record into a high alarm state would generate a state alarm, which
persists for the entire time that the record remains in that state. Even if the operator
acknowledges the alarm, the record remains in the Alarm Summary until the record’s value
moves out of that high alarm state.

If the alarm is caused by a transient condition or an event, the alarm is not persistent: it
vanishes from the Alarm Summary after the operator acknowledges it. For example, a rate-of-
change (ROC) alarm for an analog record is a non-persistent alarm. Such an alarm serves to
notify the operator of a condition that has occurred, even though the record may still be well
within its normal operational range.

InstAlarm automatically suppresses alarms that may occur when a record has recently been
commanded to change state, or when its alarm state has just changed. This component helps
reduce the number of nuisance alarms. For example, starting a pump could create a pressure
wave that causes several sensors downstream from the pump to go into an alarm state
temporarily. You can configure InstAlarm to suppress these alarms in the downstream devices.

The alarm/event inhibit features available through ezXOS provide you with the flexibility to
specify whether or not a given point generates event messages or alarms.

The following baseline windows notify the operator of alarms:

• The Alarm Summary window.

• The Newest Priority Alarms window.

• The Station Alarm Summary window.

• Alarm summaries for individual tables, such as analog, rate, and status.

For more information on these windows, refer to the Operation Reference.


NOTE: Within ezXOS, the system identifies an alarm condition by replacing the color of the
affected device or monitored value with a different solid or flashing color.


3.2.1 Instrument Fail Check


Instrument Fail Check can be configured on rate and analog records to generate alarms when
RTUs receive values outside their measurable range.

Sometimes, due to instrument or sensor malfunction, analog and rate instruments and/or their
associated transducers try to send a value to the RTU that is outside of the RTUs’ measurable
range. If the Instrument Fail Check check box is selected for RTUs capable of sensing
instrument failures, an analog or rate point alarm is generated when an instrument failure
occurs.

For an RTU that is not capable of sensing “out-of-range” failures, selecting Instrument Fail
Check causes the generation of alarms whenever the RTU encounters a raw value that is not
within the configured raw value range.
NOTE: Instrument failure handling is protocol-specific.

For analog points, instrument failure alarms “clamp” a value that exceeds the minimum or
maximum alarm threshold to the value of the threshold it exceeded. For example, if the
maximum pressure value allowed is 50, and this value increases from 45 to 55, the value will be
locked in at 50.

For rate points, instrument failure alarms “clamp” a value that exceeds the minimum or
maximum alarm threshold to the last known good value. For example, if the maximum flow rate
value allowed is 50, and this value increases from 45 to 55, it will be locked in at 45.
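
The clamping behavior can be sketched as follows. This is an illustrative example only, working in simple engineering-unit values; the function and parameter names are hypothetical and do not correspond to the actual analog or rate table columns.

#include <stdio.h>

/* Clamp an analog value on instrument failure: the value is held at the
 * threshold that was exceeded (e.g. 55 with a maximum of 50 becomes 50). */
static double clamp_analog(double value, double min_eu, double max_eu)
{
    if (value > max_eu) return max_eu;
    if (value < min_eu) return min_eu;
    return value;
}

/* Clamp a rate value on instrument failure: the value is held at the
 * last known good value (e.g. 45 is kept when the input jumps to 55). */
static double clamp_rate(double value, double min_eu, double max_eu,
                         double last_good)
{
    if (value > max_eu || value < min_eu)
        return last_good;
    return value;
}

int main(void)
{
    printf("analog: %.1f\n", clamp_analog(55.0, 0.0, 50.0));      /* 50.0 */
    printf("rate:   %.1f\n", clamp_rate(55.0, 0.0, 50.0, 45.0));  /* 45.0 */
    return 0;
}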

3.2.2 Deadbands
Alarm deadbands control the sensitivity of the high and low alarms. These alarms are always
triggered when the value being monitored crosses the high or low limit. The value that causes
the alarm state to end depends on the configured deadband.

When the record is in the high alarm state, it remains in that state until it drops below the high
limit minus the deadband, as shown in the figure below. These conditions prevent minor
fluctuations from repeatedly putting the value into and out of the alarm state. Similarly, when a
record is in the low alarm state, it remains in that state until it rises above the low limit plus the
deadband. If the deadband is set to zero, this feature is disabled. These deadband rules apply
in the same manner to all of the limits.
ADE validation rules ensure that bad configurations cannot be entered. The rules are:

• High must be greater than Low by at least the deadband value for that level

• High-High must be greater than High by at least the High-High deadband value

• Low-Low must be less than Low by at least the Low-Low deadband value.


Figure 7 - Deadband Example
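
The deadband behavior described above amounts to a small piece of hysteresis logic. The following C sketch is illustrative only (a single high limit and its deadband; the real data-krunching code applies the same rule to all limit levels):

#include <stdio.h>

/* Returns the new "in high alarm" flag for a scanned value, applying the
 * deadband: the alarm is raised when the value crosses the high limit and
 * only clears once the value drops below (high limit - deadband). */
static int high_alarm_state(double value, double high_limit,
                            double deadband, int in_alarm)
{
    if (!in_alarm)
        return value > high_limit;           /* enter the alarm state */
    return value > (high_limit - deadband);  /* stay until below limit - deadband */
}

int main(void)
{
    double high = 100.0, dead = 5.0;
    int alarm = 0;
    double scans[] = { 98.0, 101.0, 97.0, 94.0 };

    for (int i = 0; i < 4; i++) {
        alarm = high_alarm_state(scans[i], high, dead, alarm);
        printf("value %.1f -> %s\n", scans[i], alarm ? "HIGH alarm" : "normal");
    }
    /* 98 normal, 101 enters alarm, 97 still in alarm (above 95), 94 clears. */
    return 0;
}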

3.2.3 Alarm Limits


Standard normal operating values for the analog and rate records lie within certain High and
Low limits. When a measured value exceeds the high limit, or goes below the low limit, a state
alarm is generated to indicate that a condition requires attention.

A second set of limits, known as High-High and Low-Low, are above and below the high and
low limits. When a measured value exceeds the High-High limit, or goes below the Low-Low limit, a
state alarm is generated to indicate that a critical condition has occurred.
NOTE: Typically, a higher severity is assigned to High-High/Low-Low conditions than to the
High/Low conditions.

In ESCADA, there are expanded alarm limit levels. It is possible to configure four High Limit
levels and four Low Limit levels:

• L4-High

• L3-High

• High-High

• High

• Normal

• Low

• Low-Low

• L3-Low

• L4-Low
It is possible to configure the levels for all of these limits either individually, in sets or by linking
them to levels set for other records.
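
As an illustration of how a value maps onto the expanded limit levels, the following sketch classifies a value against a configured limit set. The structure and field names are assumptions, and deadband handling is omitted for brevity (see the deadband example above).

#include <stdio.h>

/* Expanded alarm limit levels (ESCADA), listed from lowest to highest. */
enum alarm_level { L4_LOW, L3_LOW, LOW_LOW, LOW, NORMAL,
                   HIGH, HIGH_HIGH, L3_HIGH, L4_HIGH };

/* Hypothetical limit set for one analog record, in engineering units. */
struct limits {
    double l4_low, l3_low, lowlow, low, high, highhigh, l3_high, l4_high;
};

/* Classify a value against the limits (no deadband handling here). */
static enum alarm_level classify(double v, const struct limits *lim)
{
    if (v < lim->l4_low)    return L4_LOW;
    if (v < lim->l3_low)    return L3_LOW;
    if (v < lim->lowlow)    return LOW_LOW;
    if (v < lim->low)       return LOW;
    if (v <= lim->high)     return NORMAL;
    if (v <= lim->highhigh) return HIGH;
    if (v <= lim->l3_high)  return HIGH_HIGH;
    if (v <= lim->l4_high)  return L3_HIGH;
    return L4_HIGH;
}

int main(void)
{
    struct limits lim = { 5, 10, 20, 30, 70, 80, 90, 95 };
    printf("%d\n", classify(75.0, &lim));   /* 5 = HIGH     */
    printf("%d\n", classify(15.0, &lim));   /* 2 = LOW_LOW  */
    return 0;
}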


3.2.4 Rate of Change Checks


Analog and rate instruments register a value that increases or decreases depending on process
conditions. InstAlarm can monitor the rate-of-change (ROC) in the value. Rate-of-change is
determined by normalizing the difference between the last scan value and the current value to
unit time (in seconds).

Some instruments have a manufacturer’s specification indicating that errors can occur if a
certain ROC value is exceeded. At times, a monitored process variable can require a ROC limit
to prevent errors based on rapid adjustments to the system. In these cases, select the Rate of
Change Alarm check box on the Alarming tab in the Analog Row Edit dialog box (described in
the RealTime Tables Reference), and enter the maximum allowable rate-of-change (in engineering
units per second) in the Rate of Change Limit: field. An alarm is generated if the calculated

The correct limit is determined by the instrument specifications and the process limitations.
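
A minimal sketch of the rate-of-change check, assuming the values are already in engineering units; the function and variable names are illustrative, not the actual InstAlarm interface.

#include <math.h>
#include <stdio.h>

/* Rate of change in engineering units per second, normalized to unit time. */
static double rate_of_change(double last_value, double current_value,
                             double seconds_between_scans)
{
    return fabs(current_value - last_value) / seconds_between_scans;
}

int main(void)
{
    double roc_limit = 2.0;                          /* EU per second */
    double roc = rate_of_change(100.0, 112.0, 5.0);  /* 2.4 EU/s */

    if (roc > roc_limit)
        printf("ROC alarm: %.2f EU/s exceeds limit %.2f EU/s\n", roc, roc_limit);
    return 0;
}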

3.2.5 Creep Detection


Creep Detection can detect calibration deterioration in analog and rate instruments. Since
analog and rate instruments can creep out of their calibrated state over time, Creep Detection
can be configured to measure the difference between the current value and the creep setpoint
and trigger an alarm if the difference exceeds the maximum deviation value specified for that
instrument.

You may need to properly calibrate analog and rate instruments to ensure that their values are
accurate and do not creep out of the calibrated state. Analog and rate records have an option
that allows you to store an initial value or creep setpoint, updated on startup and whenever a
creep alarm is generated. This initial value can then be compared to all subsequent values. This
comparison measures any creep deviation of the input value. The amount of creep is the
absolute difference between the current scan value and the creep setpoint that was set when
the last creep alarm occurred:

creep = | current scan value - creep setpoint |

If the analog record is supposed to test for creeping, select the Creep Detection check box on
the Alarming tab in the Analog Row Edit dialog box. You should do this when the instrument
specifications indicate a maximum raw deviation value that is acceptable before calibration
deterioration occurs. Convert this raw value to the applicable engineering units for the record
and enter it in the Deviation Alarm Limit: field. An alarm is generated if the calculated creep
exceeds this limit.
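
The creep check itself reduces to the absolute-difference comparison above, as in the short sketch below. The names are illustrative only; the creep setpoint stands for the value captured at startup or at the last creep alarm.

#include <math.h>
#include <stdio.h>

/* Returns nonzero if the creep deviation exceeds the configured limit.
 * Both values are in engineering units. */
static int creep_alarm(double current_value, double creep_setpoint,
                       double deviation_limit)
{
    return fabs(current_value - creep_setpoint) > deviation_limit;
}

int main(void)
{
    double setpoint = 10.0;   /* captured at startup / last creep alarm */
    double limit    = 0.5;    /* Deviation Alarm Limit, in EU */

    if (creep_alarm(10.7, setpoint, limit))
        printf("creep alarm: deviation exceeds %.2f EU\n", limit);
    /* On a creep alarm, the creep setpoint would be reset to the current value. */
    return 0;
}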

3.2.6 Flatline Alarming


Analog instruments and rate instruments register a value that increases or decreases
depending on process conditions. OASyS DNA can monitor the change in this value. The
change is determined by subtracting the last scanned value from the current value. A flatline
alarm is generated if the analog or rate value does not change within a specified time period
(timeout).

NOTE: The flatline monitoring process runs once a minute; therefore, the generation of a
flatline alarm can be delayed by up to one minute.


Flatline Alarm Message


The message text that indicates that an analog, a rate, a status, or a reservoir record is in a
flatline alarm:

Analog: (or Rate: or Reservoir: or Status:) <name> in Flatline state for N minutes. Value = M

Where:

<name> = the name of the point

N = the configured timeout

M = the point value.

Alarm Severity: High

The message text that indicates that an analog, a rate, a status, or a reservoir record returned
from a flatline alarm:

Analog: (or Rate: or Reservoir: or Status:) <name>: RTN from Flatline state. Value = M

Alarm Severity: Low


Flatline Alarm Conditions
A flatline alarm is declared when an analog or rate record value does not change within a
configured amount of time (timeout).

A flatline alarm is not declared if flatline alarming is disabled or if the record is:

• manual-entry or calculated

• in instrument failure

• offscan

• manual

• configured to have a timeout of zero

• stale
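
Putting the conditions above together, a once-a-minute flatline check might look roughly like the following. This is a sketch with hypothetical record fields, not the actual monitoring process.

#include <stdio.h>
#include <time.h>

/* Hypothetical snapshot of the fields the flatline check needs. */
struct point {
    double value;
    double last_value;
    time_t last_change;       /* when the value last changed */
    int    timeout_minutes;   /* 0 disables the check */
    int    flatline_enabled;
    int    manual_or_calculated;
    int    instrument_failure;
    int    offscan;
    int    stale;
};

/* Returns nonzero if a flatline alarm should be declared. The check runs
 * once a minute, so alarms may be delayed by up to a minute. */
static int flatline_check(struct point *p, time_t now)
{
    if (!p->flatline_enabled || p->timeout_minutes == 0)
        return 0;
    if (p->manual_or_calculated || p->instrument_failure || p->offscan || p->stale)
        return 0;

    if (p->value != p->last_value) {          /* value changed: restart the clock */
        p->last_value  = p->value;
        p->last_change = now;
        return 0;
    }
    return (now - p->last_change) >= (time_t)p->timeout_minutes * 60;
}

int main(void)
{
    struct point p = { 42.0, 42.0, 0, 10, 1, 0, 0, 0, 0 };
    p.last_change = time(NULL) - 11 * 60;     /* unchanged for 11 minutes */
    printf("flatline alarm: %s\n", flatline_check(&p, time(NULL)) ? "yes" : "no");
    return 0;
}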

3.2.7 State-Based Alarming


State-based alarming is implemented for analog, rate, and status records. State-based alarming
helps prevent nuisance alarms, which occur when the actions of devices affect the readings of
other devices. High/low/creep/rate-of-change alarm checking is implemented for tables that hold
floating record values, such as analog and rate records.

Status records can be configured to have normal and abnormal states. Analog records have
lowlow/low/high/highhigh abnormal states and a normal state.
Abnormal state alarming
Abnormal state alarming is used to represent a status or analog record that is in an abnormal
state.

Unless alarming of the state for a given status record is inhibited, when the point shifts to an
abnormal state the following occurs:


• The record goes into alarm.

• The appropriate workstation displays an alarm message.

• The alarm message is spooled to the event log.

If the alarm represents an abnormal state, it remains in the alarm summary as a non-flashing
alarm after it is acknowledged; it is then cleared from the Newest Priority Alarms window. The
name of the record, its associated remote, and its description field appear in both the alarm
message and the event log.
Return-to-normal alarming
When a record that is in alarm returns to a normal state, a return-to-normal alarm is generated
to communicate the change in state.

When an alarm condition clears, the system:

• Generates a return-to-normal alarm message

• Clears the alarm from the alarm summaries after the operator acknowledges it

• Spools the return-to-normal alarm message to the event summary


When the alarm message is generated, it flashes until the user acknowledges it. After the user
acknowledges it, it stops flashing and is deleted from the alarm summary unless the status
record has been configured to sustain off-normal alarms. When the record returns to normal,
the return-to-normal alarm message appears on the user’s workstation and the message begins
flashing again. When the user acknowledges the alarm, the return-to-normal alarm disappears
from the alarm summary.

If the record returns to normal before the user acknowledges the alarm, the return-to-normal
alarm message is submitted to the user and remains flashing until it is acknowledged. Even if
the alarm condition clears before the alarm is acknowledged, the user must still acknowledge
the alarm.

NOTE: An alarm is not generated if the user commands a status record to an abnormal state.

3.2.8 Commanded Status Record Alarming


Two types of alarm processing are associated with commanded status records: uncommanded
change-of-state (COS) and command failure.
NOTE: State-based alarming is triggered whenever an uncommanded status change occurs. A
status record with a configured output can generate a state-based alarm if it changes
state without being commanded.
Uncommanded Change-of-State
An uncommanded change of state occurs when a status record changes state without being
commanded. The alarm message and event log will display the name of the status record, its
associated remote, and its description field.

When the user acknowledges an uncommanded change-of-state alarm, the alarm normally
disappears immediately from the alarm summary. However, if the status record has been
configured to sustain off-normal alarms, the system consults the abnormal state table. If the
state changed to an abnormal state, the alarm remains in the alarm summary even if the user
has acknowledged it.

As with state-based alarming, it is possible to independently disable alarming of transitions to
normal or abnormal states, as well as the logging of an uncommanded change-of-state.


NOTE: An alarm is not generated if the user commands a status record to an abnormal state.
Command Failure Alarming
There are two alarms associated with command failures: change-of-state failure alarm and
command failure timeout alarm.

The system generates a change-of-state alarm when a device, which has been directed or
commanded to change state, takes a long time to change from its present state. Some devices,
such as large block valves, can take several minutes to attain the commanded state. Rather
than waiting for several minutes to generate a command failure alarm, you can specify a
maximum amount of time to wait for the device to change state. The state that the device first
changes to is likely not the final state, but the fact that a change has occurred indicates that
the commanded action is taking place.

NOTE: A change-of-state alarm applies only to status records configured as outputs.

The system generates the command failure timeout alarm when the commanded device does not
reach the final commanded state within the maximum time allowed.

The only limitation on the time specifications is that the command failure timeout for the final
commanded state must be larger than the change-of-state failure timeout.
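
The two command-failure timers described above can be sketched as a simple two-stage check. The names below are illustrative; the only real constraint carried over from the text is that the command failure timeout must be larger than the change-of-state failure timeout.

#include <stdio.h>

/* Evaluate a commanded status point at 'elapsed' seconds after the command.
 * Two-stage check: first that the device has left its original state within
 * the change-of-state (COS) failure timeout, then that it has reached the
 * final commanded state within the (larger) command failure timeout. */
static const char *command_check(int left_original_state,
                                 int reached_final_state,
                                 int elapsed,
                                 int cos_timeout,
                                 int cmd_timeout)  /* must be > cos_timeout */
{
    if (!left_original_state && elapsed >= cos_timeout)
        return "change-of-state failure alarm";
    if (!reached_final_state && elapsed >= cmd_timeout)
        return "command failure timeout alarm";
    return "no alarm";
}

int main(void)
{
    /* Large block valve: must start moving within 5 s, finish within 180 s. */
    printf("%s\n", command_check(0, 0,  10, 5, 180)); /* change-of-state failure alarm */
    printf("%s\n", command_check(1, 0,  60, 5, 180)); /* no alarm (still travelling)   */
    printf("%s\n", command_check(1, 0, 200, 5, 180)); /* command failure timeout alarm */
    return 0;
}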

3.2.9 Timeout Alarms


When commands are sent to an analog or status record, a timeout period is required to give the
device time to perform the operation. If the operation is not performed within this period, a
timeout alarm is generated.
NOTE: Rate records are typically flow measuring devices with no control capabilities; therefore,
rate records do not have this functionality.
Command Failure Timeout Alarming
For status records, the timeout period for command failure (in seconds) is configured through
the Cmd Failure Timeout: field on the Output tab for Status Row Edit dialog box. This is the
maximum period of time for a command to succeed. After the time period has passed, the
system issues a failure alarm. The state that the device changes to is likely not the final state,
but the fact that a change has occurred indicates that the commanded action is taking place. A
typical value would be three to five seconds, but this is device dependent. If the device does not
change state within the specified period, the system generates an alarm.

If a command cannot be sent to the status record due to communication problems, the system
generates an alarm and logs the event.
Command Timeout Alarming
For analog records, the timeout period for setpoint commands (in seconds) is configured
through the Command Timeout: field on the Output tab in the Analog Row Edit dialog box.
Because analog values do not immediately stop at the setpoint, there is also a Setpoint
Tolerance value. A setpoint is reached when the value is within the tolerance boundary (the
setpoint value plus or minus the Setpoint Tolerance value).

The operation of the instrument plays an important role in determining the correct timeout
period. Generally, the correct timeout period will be arrived at by trial and error in the testing of a
commanded output.


If a command cannot be sent to the analog record due to communication problems, an alarm is
generated and the event is logged. All generated alarms are cleared from the alarm summary
upon acknowledgement.
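
The setpoint check for analog commands can be illustrated as follows; this is a sketch, and the variable names are assumptions rather than actual field names.

#include <math.h>
#include <stdio.h>

/* A setpoint is considered reached when the measured value is within
 * setpoint +/- tolerance. */
static int setpoint_reached(double value, double setpoint, double tolerance)
{
    return fabs(value - setpoint) <= tolerance;
}

int main(void)
{
    double setpoint = 75.0, tolerance = 0.5;   /* Setpoint Tolerance */
    int elapsed = 12, command_timeout = 10;    /* seconds */
    double value = 74.2;

    if (!setpoint_reached(value, setpoint, tolerance) && elapsed >= command_timeout)
        printf("setpoint command timeout alarm\n");
    return 0;
}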

3.2.10 Communication Alarms


The communication line between the RTU and the host computer may encounter minor errors
and problems. The details from all communication alarms and events are recorded in the
Historical database.
The statistics for communication errors are recorded in the Remote table, and then transferred
to the Historical database’s CommStats database, where they are stored in the RemPeriodStats
table. The connection statistics are recorded in the Connection table, and then transferred to the
Historical database’s CommStats database, where they are stored in the ConnPeriodStats
table.

To view historical results, click any field of a record’s information line on the Remote Summary,
Remote Primary Statistics Summary, or Remote Alternate Statistics Summary windows in
ezXOS. When the action menu appears, click Historical Statistics to open the
Communications Statistics Edit dialog box that displays historical results.

3.2.11 Communication Timeouts


A no-reply timeout period is configured for all RTUs. If an RTU does not communicate within
that period, an alarm is generated.

The system reports most types of communication failures as soon as they occur (e.g. security
error, illegal message, short message). However, if the remote fails to communicate, the system
generates a no-reply alarm if the failure lasts for longer than the no-reply timeout period.
Network Alarms
InstAlarm and InstEvent can also generate alarms related to critical network components and
the network communication between the host computers and the terminal servers. For example,
if the primary LAN fails over to the secondary LAN, an alarm message is generated to indicate
that a failover has occurred.

The following table shows the messages for the internal network failover flag alarms defined in
the CPU message set.

Table 3 - Network communication alarm text


Fail
  This message indicates the workstation CPU is failing over to a backup unit (or there is a failing LAN).

Pending
  This message indicates the workstation CPU is attempting to start up.

Init
  This message indicates the workstation CPU is in the initialization stage of recovery.

Standby
  This message indicates the workstation CPU is in standby mode and ready to go live if all conditions are right.

Hot
  This message indicates the workstation CPU is “hot” or in a live operational state.

Switch
  This message indicates the host CPU and its LAN are failing, requiring a LAN switch. (This is also used for device failover to another unit.)

Doswitch
  This message indicates the process of switching LANs during a host CPU failover. (This is also used for device failover to another unit.)

3.2.12 Non-Covered Alarms


Non-covered alarms refer to alarms related to an area that is not currently selected for control
by a user. This may occur during times when fewer users are on duty, such as during the night
shift. Any or all workstations can be configured to receive non-covered alarms.

“Non-covered” does not apply to system alarms. This is controlled by a separate configuration
setting, the Receive System Alarms check box, which can be found in the NMC. If a database
record has no value in its group field, it is considered as not belonging to any area and is
treated the same as a system alarm. These types of database records will be received, or not
received, by an ezXOS station depending on that station's Receive System Alarms setting.
They will never be considered non-covered alarms.

If a database record goes into alarm and no operator is currently controlling the area(s) that
record exists in, it is considered a non-covered alarm, and all ezXOS stations that have
Receive System Alarms configured will see it. As soon as an operator selects that control
area, the alarm is no longer non-covered, and it will be removed from all ezXOS stations except
the station of the operator who selected that control area.

The following should be considered with non-covered records:

• Database records can belong to groups, and a group can belong to any number of areas. Be
wary that if the group belongs to Area 1, and no operator is controlling Area 1, the group
may also belong to Area 2, and if there is an operator controlling Area 2, the alarm is not
considered non-covered.

• In the Area Row Edit dialog box, you can Enable Alarm Cover Checking. If the area is not
configured to perform cover checking, any non-covered alarms from that area will not be
reported as non-covered. A record will only be reported as non-covered if the record is in a
group that is in an area that has alarm cover checking enabled and no operator is controlling
the areas enabled for alarm cover checking. It is highly recommended that you check the
area records’ settings and see if alarm cover checking is enabled.

• Toggling control areas on or off may take up to 1 minute to take effect. Despite the fact that
you can configure a Check-in Timeout (sec) for area records that defines how often cover
checking occurs, the check for covered or non-covered alarms has been hard-coded to
occur about once a minute. Therefore, when control areas are updated, it may take up to 1
minute before an alarm shows up as non-covered or has its non-covered status removed.

When alarms are being generated for an area that is controlled by an operator, non-covered
alarms from another area occur only if alarm cover checking is enabled. If alarm cover checking
is disabled, the system will not generate non-covered alarms.
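
The cover-checking rules above can be condensed into a small predicate. The following sketch is purely illustrative (the real check is performed by the system roughly once a minute, and the structures and field names here are hypothetical):

#include <stdio.h>

/* Hypothetical view of the areas a record's group belongs to. */
struct area {
    int cover_checking_enabled;   /* Enable Alarm Cover Checking */
    int operator_controlling;     /* someone has this area selected for control */
};

/* A record's alarm is reported as non-covered only if none of its areas is
 * currently controlled by an operator and at least one of them has alarm
 * cover checking enabled. */
static int is_non_covered(const struct area *areas, int n)
{
    int any_cover_checking = 0;

    if (n == 0)
        return 0;   /* no group/area: treated like a system alarm instead */

    for (int i = 0; i < n; i++) {
        if (areas[i].operator_controlling)
            return 0;                     /* someone covers this record */
        if (areas[i].cover_checking_enabled)
            any_cover_checking = 1;
    }
    return any_cover_checking;            /* uncovered only if checking is enabled */
}

int main(void)
{
    struct area areas[] = {
        { 1, 0 },   /* Area 1: cover checking on, nobody controlling   */
        { 1, 1 },   /* Area 2: cover checking on, operator controlling */
    };
    printf("non-covered: %s\n", is_non_covered(areas, 2) ? "yes" : "no"); /* no */
    return 0;
}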

NOTE: If an operator receives a non-covered alarm, she does not automatically have the
authority to acknowledge it or to control the necessary devices in that area. First, she
must be able to select the area with the non-covered alarm for control, which can only
occur if the workstation rights and/or user rights allow it. At any given time, therefore, at
least one user should have rights to control each area.

3.2.13 Logging Commanded COS and Setpoints


User-commanded status change-of-state (COS) and analog setpoints are both logged as
events. Successful commands are only logged with a success statement if Log Command
Success is selected in either the Analog Output dialog box or Output tab for Status Row Edit
dialog box.
The system will record any permitted command issued by the operator (i.e., if the command is
aborted due to a command tag, the system will not record the event). Unsuccessful commands
are logged with the command and either a communication failure statement, if communication
failure prevented the command from reaching the remote, or a command failure statement, as
explained in Command Failure Timeout Alarming and Communication Timeouts.

Most protocols process an output command at the RTU when the RTU is successful in receiving
the command.
Related Information
Commanded Status Record Alarming on page 20
Timeout Alarms on page 21

3.3 Alarm Suppression


Alarm suppression can be used to manage alarming and suppress the generation of predictable
alarms.

The following alarm suppression can be configured for ADE records:

• Parent control alarm suppression

• Parent alarm suppression

• Transient alarm suppression

• Test mode alarm suppression

• Alarm disturbance mode


NOTE: Alarm suppression applies to current value alarms (such as data krunching alarms) and
does not block other types of alarms (such as command failure alarms).

3.3.1 Parent Control Alarm Suppression


Parent control alarm suppression handles alarms by inhibiting alarm events that are a direct and
predictable result of an operator’s command to a field device.

Starting a pump, for instance, results in a pressure wave in the pipe. A pressure sensor further
down the pipeline from the pump would likely go into alarm when the pressure wave hits. By
configuring the sensor as a child of the pump device, it is possible to inhibit the alarm.

Relationships between parent and child records are configured in the RealTime database’s
Alarm Suppression (almsuppression) table. Extensions that are added to alarm suppression,
such as alarm suppression based on state, are also available for parent control alarm
suppression. For more information, refer to Parent Alarm Suppression.


When an operator issues a control command to an RTU, for example setpoint, and the
command arrives successfully, the alarm suppression (almsuppression) records that have the
controlled record as a parent are marked as suppressed. The alarm suppression timeout is set
up, and at expiration, the children’s alarm conditions are reevaluated. If, at reevaluation, the
child is in a different alarm state from where it was when the parent was commanded, the
system generates an alarm. The parent control alarm suppression is then cleared. If the control
command failed to execute at the remote, then the alarm suppression timeout is cancelled, and
any suppressed child alarm is immediately evaluated.

Figure 8 - Parent Control Alarm Suppression Example

In the figure above, the state of the child goes into alarm shortly after the command is sent to the
parent. The child returns to normal at the 100-second mark. If the control suppression timeout is
set to a value greater than 100 seconds, then the child will not alarm: it has returned to normal
before the alarm suppression timeout expired. If the timeout value were shorter, for instance 90
seconds, then the child alarm would generate after 90 seconds.
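
The reevaluation rule illustrated in the figure can be expressed compactly: when the control suppression timeout expires, the child alarms only if its alarm state then differs from the state recorded when the parent was commanded. The following is a rough sketch with hypothetical types, not the actual almsuppression implementation.

#include <stdio.h>

/* Hypothetical child entry from the alarm suppression (almsuppression) table. */
struct suppressed_child {
    int state_at_command;   /* child alarm state captured when parent was commanded */
    int suppressed;
};

/* Called when the control suppression timeout expires: the child generates an
 * alarm only if its current alarm state differs from the recorded state. */
static void reevaluate_child(struct suppressed_child *c, int current_state)
{
    c->suppressed = 0;                      /* suppression is cleared either way */
    if (current_state != c->state_at_command)
        printf("child alarm generated (state %d)\n", current_state);
    else
        printf("child returned to its original state; no alarm\n");
}

int main(void)
{
    struct suppressed_child c = { /* state_at_command = */ 0, /* suppressed = */ 1 };

    /* Timeout > 100 s in the example: the child is back to normal (0) -> no alarm. */
    reevaluate_child(&c, 0);

    /* Timeout of 90 s: the child is still in alarm (1) at expiration -> alarm. */
    c.suppressed = 1;
    reevaluate_child(&c, 1);
    return 0;
}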

NOTE: If a parent is commanded while a control suppression timeout is already under way, the
control suppression timeout will be reset. However, when the updated alarm
suppression timeout value expires, control suppression does not record a new child
state value for evaluation.

NOTE: Only child state alarms that result from data krunching are suppressed by parent control
alarm suppression. Other types of alarms (e.g. command failure) execute as normal.
Parent control alarm suppression is supported for analog, multistate and status records.
Related Information
Parent Alarm Suppression on page 25

3.3.2 Parent Alarm Suppression


With parent alarm suppression, child alarms are suppressed on the basis of the parent’s alarm
state. Unlike parent control alarm suppression, parent alarm suppression is not concerned with
suppressing alarms that are directly related to an operator’s command to a field device.

When a parent with alarm suppression records goes into alarm, the children’s alarms are
suppressed for a configured amount of time (i.e. a timeout value). Child alarm suppression is
cancelled after the timeout value has expired. When the timeout value expires, the system
reevaluates the alarm state of the children.

NOTE: Timeout values are not reset if the parent toggles between alarm states.
In the case of alarm suppression based on the child alarm state, child alarm suppression is
cancelled when a non-suppressed state is reached. In this event, the parent alarm timers for
alarm, return-to-normal, and alarm hold-off are either cancelled or prevented from being
triggered. For example, assume that a child is configured for parent alarm suppression in the
normal, hi, and low alarm states. If the record’s parent changes alarm state and triggers alarm
suppression, the child’s alarm remains suppressed while the child remains in the normal, hi, or
low alarm state. If the child reaches the high-high or low-low alarm state, its alarm suppression
is cancelled immediately and the system generates an alarm.

Parent alarm suppression only suppresses child state alarms that result from data krunching.
Other types of alarms (e.g. command failure) execute as normal. Parent alarm suppression is
supported for analog, multistate and status records.
Related Information
Parent Control Alarm Suppression on page 24

Parent Alarm Timeout


The parent alarm timeout allows you to suppress a child alarm for the configured timeout
value after its parent enters an alarm state.

When a parent record goes into alarm or enters a different alarm state, the children (i.e. the
alarm suppression records that have the alarmed record as their parent) are marked as
suppressed. The alarm suppression timeout is set up, and, at expiration, the children’s alarm
conditions are reevaluated. If at the time of reevaluation the child is in a different alarm state
than where it was when suppression was triggered, the system generates an alarm. The alarm
suppression is cleared once the alarm suppression timeout expires.

If configured for state-specific suppression, the child alarms that come into the system are
checked to ensure that they match the configured state(s) to be suppressed.

Figure 9 - Parent Alarm Suppression Example

In Figure 9, the parent alarm occurs and the child alarms are suppressed. The parent returns to
normal before the alarm suppression timeout value expires.

Child alarms are reevaluated when the parent alarm timeout value expires. This value can be
set to a relatively short value or to a value that is beyond the parent’s alarm duration.

NOTE: The alarm suppression timeout value is not reset if the parent transitions to normal and
then back to alarm state before the timeout value expires.

Parent Return-to-Normal Timeout

The parent return-to-normal (RTN) timeout value is used to keep child alarms suppressed until
after the parent has returned to normal. This timeout value can be used either with or without a
configured parent alarm timeout value.

If the child stays in alarm for a period of time after the parent returns to normal, the parent
RTN timeout is used rather than the parent alarm timeout. In this case, the parent alarm timeout
is set to zero, which suppresses the children's alarms indefinitely while the parent is in the
alarm state. The RTN timeout value is set so that child alarm reevaluation occurs after the
parent has been in the normal state for the timeout duration (for example, 80 seconds).

Figure 10 - Parent Return-to-Normal Suppression Example

The parent RTN alarm suppression timeout can be used in combination with the parent alarm
suppression timeout. Figure 10 shows both timeout values configured. In this case, the parent
returns to the normal state and the RTN alarm suppression timeout extends the suppression
interval. Because the child returns to the same state it was in at the time of suppression, the
system does not generate any child alarms. If the parent stays in the alarm state for more than
40 seconds, the RTN timer is not triggered; the system reevaluates the child and, consequently,
generates an alarm.
NOTE: If only the RTN suppression is configured (i.e. alarms are indefinitely suppressed while
the parent is in alarm), then the suppression is cleared if the remote containing the
parent goes stale due to communication failure or the remote being placed offscan.

NOTE: Regardless of how the timeout values are configured, the child's state is compared, after
all timers have expired, to its state at the time the parent went into alarm. The child's
alarm state is not reevaluated as each individual timeout value (or timer) expires.

The timeout values are not reset during transitions between alarm states or during transitions
between normal states. The alarm suppression timeout value is cleared when the parent shifts
into a normal state and after the successful setup of the RTN alarm suppression timeout.
However, if the parent shifts back into an alarm state while the RTN timer is active, then the
system will neither clear the RTN alarm suppression timeout value, nor set up an alarm
suppression timeout value. This avoids the indefinite suppression of alarms when the parent
toggles continuously between abnormal and normal states.

3.3.3 Transient Alarm Suppression


Transient alarm suppression is invoked when a status or multistate record changes state or an
analog record changes between the high/low alarm states. Transient alarm suppression can
filter out alarms caused by expected short-term value spikes that should not require any
operator action.

For example, transient alarm suppression is configured for an interface where two pipelines
meet and the upstream and downstream sections are controlled by different companies. The
interface has metering and other telemetry sensors, such as pressure sensors. A valve is closed
far downstream on the portion of the pipeline that does not use OASyS DNA telemetry. As the
pressure in the pipeline changes, the child goes into alarm and the hold-off timer is triggered.
Since the alarm was triggered by a valve closure from another company not using OASyS DNA,
the child in this example does not have a parent.

When triggered, the alarm hold-off timer records the child’s current state and marks the child as
alarm-suppressed. When the alarm hold-off timer expires, the system determines if it should
generate an alarm by comparing the current state of the child to the recorded state at the time
the hold-off timer was triggered.

Figure 11 - Transient Alarm Suppression Example

The alarm hold-off timer is not triggered anew if a new telemetry value is received for the child
while the alarm hold-off timer is in effect.

3.3.4 Test Mode Alarm Filtering


Test mode alarm filtering is used to filter nuisance alarms, generated from testing or
maintenance activities, out of the Alarm Summary window.

Since many alarms are generated when the system is being tested or undergoing maintenance,
this type of filtering can significantly reduce the number of unnecessary alarms that appear in
the operator's Alarm Summary window. Test mode sets can be configured for groups of
records that are predictably affected when a specific part of the system is being maintained or
tested. When testing or maintenance is performed, “test mode” can be activated for the related
test mode set(s). This filters the alarms for all the records in the given test mode set out of the
Alarm Summary window. Once test mode is deactivated, the Alarm Summary window's
behavior returns to normal and displays all generated alarms.

3.3.5 Alarm Hold-Off


Alarm hold-off is used to temporarily suppress child state alarms that result from data krunching.
Other types of alarms, such as command failure, execute as normal. Alarm hold-off is supported
for analog, multistate and status records.
Communication Order and Alarm Suppression
Communication order can cause real-time events to be reported in an order that differs from the
actual order in which they occurred. A value for a dependent device (child) can even be read
before the recently changed value of its parent. This can result in false alarms that would
normally be suppressed using alarm suppression. The alarm hold-off timer helps delay alarm
processing to counter the effects of communication order.

For example, a breaker (parent) and a voltage sensor (child) exist on different RTUs. If the
breaker trips, a low voltage alarm from the child is expected. Since the data for the voltage
sensor could be processed before the data for the breaker, an alarm hold-off for the voltage
sensor is configured. An alarm hold-off can suppress the child’s alarm until the breaker data has
been updated and processed, preventing a nuisance alarm.

If the telemetry protocol does not provide a method to force the processing of the parent before
its dependent child, the alarm hold-off can be used to suppress the child alarm until it is
reevaluated at a later time. Both alarm and RTN alarm events are generated for the child
regardless of alarm suppression. The alarm hold-off timer postpones the alarm long enough to
receive the updated parent value.

Figure 12 - Alarm Hold-off Example

Figure 12 shows a parent (real parent) going into alarm in real time. However, the telemetered
parent value is not processed until after the child's value is read.

NOTE: In Figure 12, the telemetered child value matches the real child value (i.e., there is no
significant delay between when the child value changed and when the data was
received in the SCADA system).
The alarm hold-off timer is triggered when the child’s alarm state changes and neither its parent
alarm nor its return-to-normal alarm suppresses it. Alarm hold-off saves the child’s previous
value so it can compare it with the child’s value when the timer expires.

When the parent value is processed, the telemetered parent goes into alarm and triggers the
alarm suppression timeout that cancels the alarm hold-off timer. Since the child returns to a
normal state before the alarm suppression timeout expires, no alarm is generated.

3.4 Interaction Between Suppression Types


Combining alarm suppression types does not change the behavior of each individual type. An
alarm is suppressed when it satisfies any one of the suppression criteria.

The following illustrates the interaction of the different suppression types:

• The alarm hold-off timer prevents a record from going into alarm. The alarms for any child
records of that record that are specified for parent alarming or transient alarm suppression
are also suppressed.

• The child of a record that is specified for parent alarming goes into alarm. If the alarm is
suppressed, the child suppresses alarming for any of its children that are configured for
parent alarm suppression.

• A record’s alarm is reevaluated when a suppression timer (e.g., parent alarm timer) expires.
At this point, the system evaluates any suppression criteria before generating an alarm.

3.4.1 Alarm suppression behavior


Alarm suppression behaviors are additive, and to determine if an alarm should be generated,
the system compares the alarm state when the first suppression is triggered to the state when
the last suppression timeout ends.

The comparison of alarm states is important since a child record can be configured for parent
control, parent alarm, and transient alarm suppression thereby activating the timers for all the
configured suppression types.

If, for instance, (a) a control suppression event is followed by a parent alarm suppression event,
and (b) the child record's state changes, then the system does not generate an alarm when the
control suppression timeout expires, because the child is still suppressed by parent alarm
suppression. When the parent alarm suppression timeout then expires and no other suppression
timeout remains active for the child, the child point's current state is compared to its state
when the control suppression was triggered. If it is different, the system generates an alarm.

In addition, ezXOS summaries and annotated displays show the alarm icon next to the child
data whenever the child is in the alarm state, regardless of alarm suppression. The suppression
only applies to the alarm summary displays.

The alarm event is also recorded within the event summary. If the child record’s alarm state
changes during the suppression timeout, the summary shows when the system triggers and
cancels alarm suppression.

The analog and status summaries can show filtered lists of records that are currently under
alarm suppression. The summaries show child records as actively suppressed if they have
undergone an alarm state change since alarm suppression was triggered.

4 SWANA
The Software Analyzer (SWANA), also known as Software Protocol Analyzer, monitors
communications between Omnicomm and the remotes.

An interface, consisting of menus and dialog boxes, is provided for configuring SWANA. With
the interface, you can configure the following:

• Omnicomm connections to display

• message filters

• display properties

• files for saving the currently displayed messages

• files for recording incoming messages


The SWANA interface also allows you to control the display update and to search for message
strings in the displayed messages.

SWANA runs independently of the RealTime service. It can be started on any host that is
currently running the Common service (for example, ezXOS, RealTime host). SWANA
configuration is not affected by RealTime service shutdowns, startups, or failovers. Neither is it
affected by changes in a connection’s DistribuSyS ownership. It can display messages from
different Omnicomm processes running on several systems.

4.1 Starting the SWANA Program


The SWANA program is started from the command prompt. Command line parameters can be
used to configure SWANA usage and options.

Procedure

1. Open the command prompt.


2. Type dnaSwana.
3. Press ENTER.

Result
The SWANA Main window appears.

Table 4 - SWANA Command Line Parameters


Parameters Description
-c ConnectionList Example: dnaSwana -c connection1,connection2,connection3
-d DisplayFilterRegularExpression Refer to SWANA Filters.
-s StartTriggerRegularExpression
-e StopTriggerRegularExpression
-l LogFilename A discussion of log files is provided in the File
menu.
-h This option displays the SWANA Usage and Options window shown in Figure 13.
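
For example, the following command line (the connection names and log file name are
illustrative) starts SWANA with two connections preconfigured and a log file specified for
incoming data:

dnaSwana -c connection1,connection2 -l swana.log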

Figure 13 - SWANA Usage and Options

NOTE: All SWANA data is published with the “DNA Permission - Control_SCADA” permission.
You must have this permission before you can view SWANA data.
Related Information
SWANA window on page 32

4.2 SWANA window


The SWANA Main Window is a scrollable, resizable window that displays communication
messages.

The View Menu commands and their toolbar button equivalents allow you to stop, start, pause,
and clear the window.

Figure 14 - SWANA Main Window

Messages are displayed as they arrive. The timestamp typically represents the time that
Omnicomm sent or received the message. However, if the protocol supports message
timestamps, the timestamp may represent the timestamp set in the message by the remote.
Related Information
Starting the SWANA Program on page 31

4.2.1 SWANA message


The SWANA main window displays SWANA messages that provide communication information.

Figure 15 - Basic Structure of a SWANA Message

Table 5 - Parts of a SWANA message


Message Part Description
Message Type Refer to “Message prefixes” below for more information.
Timestamp Timestamp in the following format: hours:minutes:seconds.milliseconds
[AM|PM]
Base X This is the base used for Non-ASCII data.
Connection This is the name of the connection.
System This is the name of the system that generated the connection message.
Remote This is the name of the remote that generated the connection message
(if available from the protocol in use).
Non-ASCII Data This is the connection message displayed in the configured base (for
example, hexadecimal). If the base is 0 or null, or the message is of
type M: or S:, then no Non-ASCII data is written.
ASCII Data This is the connection message displayed in ASCII format. For
message types Q:, R:, X:, the message is converted to ASCII where
unprintable characters are displayed as dots.

Message prefixes

The table below lists the characters that precede a string of bytes to indicate the type of
message.

Table 6 - Message Prefixes


Prefix Description
M This indicates that what follows is a message from the Omnicomm.
S This indicates that what follows is a message from SWANA.
Q This indicates that the message was a query from the Omnicomm.
R This indicates that the message was a reply from the RTU.
X This indicates that the message contains errors and was discarded by
Omnicomm (a spurious message). Spurious messages commonly result from
the host timing out before the remote is able to respond. Since the remainder
of the message arrives before the host transmits its next query, these bytes
are unexpected and are labelled spurious. Spurious bytes are not displayed
until after the next poll.

NOTE: When SWANA is configured to receive messages from several Omnicomm connections,
the messages from the connections may be interleaved (for example, a query from one
connection may be followed by a response from another connection). The connection
name, which appears in the message header, should be noted whenever multiple
connections are configured.

4.2.2 File menu


Use the File menu to save the connection data, begin a log of incoming data and open or close
the SWANA window.

Table 7 - File menu commands


Command Purpose
Save Click Save to save the contents of the connection data text box to the
Save file; this is not available until the Save file has been assigned. This
command corresponds to clicking the Save Display Buffer to File
toolbar button.
Save As... Click Save As... to open a Save File dialog box with the default directory
set to %CMN_ERRLOG% and no default filename. It saves the contents
of the connection data text box to the file. The chosen Save file is saved
for reuse.
Start Log of Click Start Log of Incoming Data to open a Save File dialog box with
Incoming Data the default directory set to %CMN_ERRLOG% and no default filename.
This command enables logging every connection message until the Stop
Log of Incoming Data is selected. Messages are logged even when
SWANA is in a paused state. SWANA filters determine when and which
messages are logged. After the Start Log of Incoming Data has been
selected and logging has started, a log indicator text box appears to
indicate that logging has started. The name of the log file is also
indicated. After logging has started, this command changes to Stop Log
of Incoming Data. Clicking this stops logging, closes the log indicator,
and resets the command text to Start Log of Incoming Data.
New Window Click New Window to open a new instance of SWANA Window. This
new instance runs independently of the current SWANA instance. For
example, you may choose to analyze different connections with different
SWANA filters; this will require you to use one instance per
connection. Closing an instance of SWANA does not affect other
instances.
Exit Click Exit to exit SWANA. If logging is active, then a dialog box appears
with the message "Exiting will stop logging. Do you wish to Exit?".

Related Information
SWANA filters on page 40

4.2.3 Edit menu


Use the Edit menu to search for and copy text strings in the SWANA window.

Table 8 - Edit menu commands


Command Description
Find... Click Find... to open a dialog box that allows you to type a text string
to find. It saves the text string for reuse and finds the text string in
the connection data text box. It also enables the Find Next button
on the toolbar and the Find Next command.
Find Next Click Find Next to find the next instance of the text you entered in
the Find dialog box.
Copy Click Copy to copy the selected text into the paste buffer.

4.2.4 View menu


Use the View menu to play, pause, stop and clear the SWANA window.

Table 9 - View menu commands


Command Description
Play Click Play to put the SWANA window in the play state. When
selected, this command enables the Pause and Stop commands.
The Play command also corresponds to the Start Recording
toolbar button. If connections have not been configured, selecting
this command opens the SWANA Connection Configuration dialog
box.
Pause Click Pause to put the SWANA window in the paused state. This
command starts in the disabled state until SWANA has been placed
in the play state. When selected, the Pause command enables the
Play command and also the Start Recording toolbar button.
NOTE: Pausing only affects the connection data text box; it does not affect logging.
Stop Click Stop to put the SWANA window in the stopped state. It
enables the Play command, which is equivalent to the Start
Recording toolbar button, and the Pause command, which is
equivalent to the Pause Recording toolbar button.
NOTE: Stopping prevents data from being recorded into the
connection data text box and the log file.
Clear Display Click Clear Display to open a confirmation popup. Clicking Yes on
the popup clears the connection data text box.

Related Information
SWANA connection configuration on page 38

4.2.5 Tools menu


Use the Tools menu to configure connections, display properties and apply a filter to the
SWANA window.

Command Description
Connections... Click Connections... to open the SWANA Connection
Configuration dialog box.
Display Properties... Click Display Properties... to open the SWANA Display Properties
dialog box.
Filters... Click Filters... to open the SWANA Filters dialog box.

Related Information
SWANA connection configuration on page 38
SWANA display properties on page 39
SWANA filters on page 40

4.2.6 SWANA toolbar buttons


SWANA toolbar buttons are provided for the most frequently used menu options. Hover over a
toolbar button to see the tool’s function.

Figure 16 - SWANA Toolbar Buttons

The following table describes the buttons on the SWANA toolbar.

Table 10 - Buttons on the SWANA toolbar


Save Display Buffer to File
  Click Save Display Buffer to File to save the file. If a save file has been assigned, clicking
  this button performs the same operation that the Save command under the File menu performs. If a
  file has not been assigned, then it behaves like the Save As command.

Find
  Click Find... to open a dialog box that allows you to type a text string to find. It saves the
  text string for reuse and finds the text string in the connection data text box. It also enables
  the Find Next button on the toolbar and the Find Next command.

Find Next
  Click Find Next to find the next instance of the text you entered in the Find dialog box. If a
  find string has not been assigned, clicking this button opens a dialog box that allows you to
  assign a find string.

Copy
  Click Copy to copy the selected text into the paste buffer.

Stop Recording
  Click Stop Recording to put the SWANA window in the stopped state. This button executes the same
  operation as the Stop command.
  NOTE: Stopping prevents data from being recorded into the connection data text box and the log file.

Start Recording
  Click Start Recording to put the SWANA window in the play state. This button executes the same
  operation as the Play command. If connections have not been configured, selecting this command
  opens the SWANA Connection Configuration dialog box.

Pause Recording
  Click Pause to put the SWANA window in the paused state. This button executes the same operation
  as the Pause command.
  NOTE: Pausing only affects the connection data text box; it does not affect logging.

Clear Display
  Click Clear Display to open a confirmation popup. Clicking Yes on the popup clears the connection
  data text box. This button executes the same operation as the Clear Display command.

Assign Connections to Record
  Click Assign Connections to Record to open the SWANA Connection Configuration dialog box. This
  button executes the same operation as the Connections... command.

Change Display Properties
  Click Change Display Properties to open the SWANA Display Properties dialog box. This button
  executes the same operation as the Display Properties... command.

Filter Display
  Click Filter Display to open the SWANA Filters dialog box. This button executes the same
  operation as the Filters... command.

Related Information
SWANA connection configuration on page 38
SWANA display properties on page 39
SWANA filters on page 40

4.2.7 SWANA connection configuration


The SWANA Connection Configuration dialog box is used to select connections for analysis.
The Available Connections are loaded from the RealTime database.

To open the SWANA Connection Configuration dialog box:

• Click Tools > Connections... on the SWANA main window.

Figure 17 - SWANA Connection Configuration dialog box

The SWANA Connection Configuration dialog box lists the following information for each
connection:

• the name of the connection

• the description that was provided during connection configuration

• the name of the owning system

The connections are sorted by name in alphabetical order with connections currently owned by
the local system appearing at the top of the list.
Adding a connection
Use the SWANA Connection Configuration dialog box to add a connection.

Procedure

1. Select a connection(s) from the Available Connections list in the SWANA Connection
Configuration dialog box.
2. Click Add Connection.

Step Result: The selected connection(s) appears in the Active Connections list.
3. Click OK.
Click Cancel to abandon any changes made to the SWANA Connection Configuration
dialog box.

Removing a connection
Use the SWANA Connection Configuration dialog box to remove a connection.

Procedure

1. Select a connection(s) from the Active Connections list in the SWANA Connection
Configuration dialog box.
2. Click Remove Connection.

Step Result: The selected connection(s) are removed from the Active Connections list and
appear in the Available Connections list.
3. Click OK.
Click Cancel to abandon any changes made to the SWANA Connection Configuration
dialog box.
Removing all connections
Use the SWANA Connection Configuration dialog box to remove all connections.

Procedure
1. Click Remove All Connections.

Step Result: All connections are removed from the Active Connections list and appear in
the Available Connections list.
2. Click OK.
Click Cancel to abandon any changes made to the SWANA Connection Configuration
dialog box.

4.2.8 SWANA display properties


The SWANA Display Properties dialog box is used to configure the connection data text box.

To open the SWANA Display Properties dialog box:

• In the SWANA main window, click Tools > Display Properties....

Figure 18 - SWANA Display Properties dialog box

Table 11 - Fields on the SWANA Display Properties dialog box


Non-ASCII Display Base
  Select one of the options to ensure only one base is valid at a time. Select none when only the
  ASCII translation is to be shown (for the OPC protocol, for instance, the data messages are
  natively in ASCII form, so a non-ASCII display does not contain useful information).

No. of bytes of non-ASCII data per line
  Type a value or use the arrow buttons to adjust the number of non-ASCII bytes (and,
  correspondingly, ASCII characters) per line. For example, if 20 hexadecimal bytes is selected
  for this field, then 20 ASCII characters per line will be written; the same is true if 20 octal
  bytes is selected. Any change to this field affects only incoming connection data, not data that
  already appears in the SWANA connection data text box.

Display Buffer Size (bytes)
  Type a value or use the arrow buttons to adjust the size of the connection data text box. The
  maximum buffer size is 2 MB.
  NOTE: If you plan to leave SWANA running for long periods, reduce the buffer size to reduce the
  cost of maintaining the connection data text box; doing this will reduce SWANA resource usage.
  When the display buffer size is decremented, the oldest data in the connection data text box will
  be dropped. This may occur in the middle of a connection data message.

Display Font Size
  Type a value or use the arrow buttons to change the font size of the data shown in the connection
  data text box.

OK and Cancel
  Click OK to accept the changes that you have made or Cancel to abandon the changes before closing
  the dialog box.

4.2.9 SWANA filters


The SWANA Filters dialog box is used to configure three display filters.

To open the SWANA Filters dialog box:

• In the SWANA main window, click Tools > Filters....

Figure 19 - SWANA Filters dialog box

The SWANA Filters dialog box contains three tabs that allow a filter to be specified. It also
provides help on regular expression syntax.

Table 12 - SWANA Filters tabs


Tab Description
Display Filter If the Regular Expression field is not left blank, only the data that
matches the regular expression that is provided will be copied into the
display and log file.
Start Trigger The Start Trigger is used to disable copying to the connection data text
box or log file until incoming data that matches the regular
expression is found. The matched line is included in the connection data
text box and, when logging is enabled, is logged. SWANA generates
"Waiting for Start Trigger" and "Found Start Trigger" messages so you
know why data is not being displayed or why data starts to be displayed
again.
Stop Trigger The Stop Trigger is used to disable copying to the connection data text
box and log file after it finds incoming data that matches the regular
expression provided. The matched line is included in the connection
data text box and log file. SWANA generates "Found Stop Trigger"
messages so you know why data is no longer displayed. If a stop trigger
exists and a start trigger does not, then SWANA will automatically place
the display into the stopped state.

NOTE: Except for the case where a Stop trigger exists without a Start trigger, the filters and
triggers do not affect the recording state (as determined by the Play, Pause and Stop
commands).

5 RealTime Database Utilities


There are several utilities that can be used with RealTime to perform common operations such
as reading and writing data, deleting records and printing database content. The commands
and syntax that apply to the RealTime database are defined along with any related examples.

NOTE: Some of the utilities are case-sensitive.

5.1 Reading and Writing Data


A function put (fnput) is used to perform complex operations on table records. The dbget
command reads data from the database, and the dbput command writes data to the database.

NOTE: The dbput, dbget, and fnput utilities should only be run on the hot RealTime service.

5.1.1 Fnput
Function puts (fnputs) can be called from the Microsoft DOS prompt.

From the Microsoft DOS prompt, the syntax is:

fnput <table.record> <command_string>

If the command string contains more than one argument separated by spaces, it must be
enclosed in double quotation marks.

For example:

fnput analog.analog84 “inhibit commands”

fnput analog.analog84 “enable commands”

fnput analog.pressure58 “inhibit alarms”

fnput analog.NK-AISUC “jogup %3”

The command syntax for status records is:

fnput <status.record> “command <command_string>”

where <command_string> is one of the states listed in the Message table.

For example:

fnput status.valve24 “command open”

fnput status.valve32 “command close”

fnput status.valve1 “command acknowledge”

The following table lists commands that apply to the RealTime database tables.

Table 13 - Function puts and other commands


onscan or realtime
  Sets the point to real-time mode. (Internal field: manl, set to false. Applies to: analog, rate,
  status.)

offscan or manual
  Sets the point to manual mode. (Internal field: manl, set to true.)

"inhibit alarms"
  Inhibits alarming for the specified point. (Internal field: alminh, set to true.)

"enable alarms"
  Enables alarming for the specified point. (Internal field: alminh, set to false.)

"inhibit log"
  Inhibits logging for the point. (Internal field: evtinh, set to true.)

"enable log"
  Enables logging for the point. (Internal field: evtinh, set to false.)

"inhibit rtn" or "inhibit rtn alarms"
  Inhibits alarming on return to normal for the point. (Internal field: clrinh, set to true.)

"enable rtn" or "enable rtn alarms"
  Enables alarming on return to normal for the point. (Internal field: clrinh, set to false.)

"inhibit rtn log"
  Inhibits logging of the return to normal condition for the point. (Internal field: cevinh, set to
  true.)

ADDGROUP yes/no <groupname>
  Adds a group to the area, where <groupname> is the name of the group. Sets the group as
  controllable if "yes" is specified for group[x].control, or not controllable if "no" is
  specified. (Internal fields: group[x].slot and group[x].control; group[x].control is set to NO by
  default. Applies to: area.)

SETCONTROL yes/no <groupname>
  Sets a given group to be controllable (where <groupname> is the name of the group) if "yes" is
  specified, or not controllable if "no" is specified. If the group is not a member of the area, it
  will be added to it. (Internal field: group[x].control, set to NO by default. Applies to: area.)

REMOVEGROUP <groupname>
  Removes the specified group from the area. (Internal field: group[x].control. Applies to: area.)

"tag add"
  Creates command inhibit tags. Syntax: tag add func=[no commands | no open-type | no close-type |
  no operator cmd | no program cmds | warning] oper=<operatorName> [wo=<workOrder>] [desc=<string
  describing why commands should be inhibited>]
  NOTE: The tag "func=" supports the status and multistate output command message text. Thus,
  "func=no <status/multistate command>" can be used. For example, for a status point using a base
  message set of 3DCompressor, you could add tags with "func=no run" and "func=no stop".

"tag rem"
  Removes one or more command inhibit tags. Syntax: tag rem func=[no commands | no open-type |
  no close-type | no operator cmd | no program cmds | warning] oper=<operatorName> [wo=<workOrder>]
  [desc=<string describing why commands should be inhibited>]
  NOTE: You need only specify the minimum fields that describe the tag(s) to be removed. For
  instance, to remove all tags created for a work order wo3254, you would use "tag rem wo=wo3254";
  to remove all warning tags created by operator Fred, you would use "tag rem func=warning
  oper=Fred".
  NOTE: The tag "func=" supports the status and multistate output command message text. Thus,
  "func=no <status/multistate command>" can be used. For example, for a status point using a base
  message set of 3DCompressor, you could add tags with "func=no run" and "func=no stop".

acknowledge
  Acknowledges the alarm for this point in the alarm summary. (Applies to: application, analog,
  rate, status.)

"basic stop"
  Stops a specified DataBASIC routine attached to a database point.

"basic activate"
  Activates a specified DataBASIC routine attached to a database point.

"basic deactivate"
  Deactivates a specified DataBASIC routine attached to a database point.

manval.value
  Puts the point into manual state and sets its value to value. (Applies to: analog.)

setpoint sptvalue
  Performs a setpoint to the specified value (sptvalue).

jogup {%percent_change | increment_value_in_EU}
  Increases the current setpoint by a percentage (for example, jogup %5.2) or by a value in
  engineering units (for example, jogup 3).

jogdown {%percent_change | decrement_value_in_EU}
  Decreases the current setpoint by a percentage (for example, jogdown %5.2) or by a value in
  engineering units (for example, jogdown 3).

realtime
  Puts the rate point into real-time mode and updates the currate and accur fields with actual
  field data. (Applies to: rate.)

total.value
  Puts a manual value into the accur (current accumulation) field and places the point in manual
  mode (for example, total.3800).

fastscan interval
  Sets the remote to fast scan for the specified number of seconds (for example, fastscan 45).
  (Applies to: remote.)

interrogate on
  Places the remote in interrogate mode.

interrogate off
  Cancels interrogate mode on the remote.

connect
  Finds an available dial line and connects this remote to it.

disconnect
  Disconnects from the connected dial line and reconnects to the configured line.

startopenon
  Executes the status on command for the associated status point. (Applies to: status.)

stopcloseoff
  Executes the status off command for the associated status point.

invstate
  Inverts the current operating state of the device (for example, changes "on" to "off," and "off"
  to "on").

NOTE: Commands with spaces require double quotation marks to delineate the command.
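
For example, a command inhibit tag might be added and later removed with fnput as follows (the
record name, operator, and work order are illustrative):

fnput status.valve24 "tag add func=no commands oper=jsmith wo=wo3254 desc=maintenance"

fnput status.valve24 "tag rem wo=wo3254"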

5.1.2 dbget
To obtain a field of RealTime data, query the RealTime database using dbget.

Syntax:
dbget [-t] <table.ptname.fldname>

Table 14 - dbget options and parameters


Parameter Description
table This parameter indicates the name of the table.
ptname This parameter indicates the name (or record number) of the record.
fldname This parameter indicates the name of the internal field.
-t This parameter forces terse output (optional).

Example

dbget analog.ana1.curval

dbget analog.33.curval

dbget status.status1.cursta

Normally dbget gives verbose output. For example, dbget analog.1.curval returns:

DBGET: analog.1.curval = 120.000000

When the optional -t argument is used, dbget gives a terse output and only prints the value.
For example, dbget -t analog.1.curval returns:

120.000000

5.1.3 dbput
To populate a RealTime database field with data, use dbput.

Syntax

dbput table.ptname.fldname = “newval”

Parameter Description
table This parameter indicates the name of the table.
ptname This parameter indicates the name (or record number) of the record.
fldname This parameter indicates the name of the internal field.
newval This parameter indicates the new value of the database field. Double
quotation marks are not necessary for numeric data.

Example

dbput analog.ana1.curval = 2700

dbput analog.33.curval = 2700

dbput notepad.note1.txtline = “Pump P43 needs repair.”

5.2 Deleting Records (dbdel)


The dbdel command is used to delete records. It can be used to delete an individual RealTime
record or all of the records within a RealTime table.

5.2.1 Deleting Individual Records


Use dbdel to delete a RealTime record and all data associated with it.

Syntax:

dbdel [-F] <table> <record>

Table 15 - dbdel options and parameters


Option/Parameter Description
-F This option performs a fast delete by avoiding table delete rules.
table This parameter indicates the name of the table.
record This parameter indicates the name (or record number) of the record
you want to delete. If “*” is entered, all records in the table are
deleted.

Example:

dbdel analog ana1

dbdel analog 44

dbdel notepad note1

WARNING: Exercise caution when using the -F option. It should only be used on tables that do
not have record deletion rules. Do not use this option for the analog, status, rate,
application, sysusers, tag, xosdisplay, remote, almsum, cmx_async_trans,
cmx_sync_trans, xis_async_trans, or xis_sync_trans tables.

5.2.2 Deleting Entire Tables


Use dbdel to delete all of the records contained in a RealTime table.

Syntax:

dbdel <table> "*"


where <table> is the name of the table that contains the records you want to delete.

5.3 Database Lister/Loader


The dbll command allows you to print a list of the contents of a RealTime table to an ASCII
text file or load a RealTime table with the contents of an ASCII input file. You can create an
ASCII file of RealTime table configurations using any editor, word processor, spreadsheet, or
database.

Prior to upgrading your system, you can create a list of the contents of all RealTime database
tables and then immediately populate new RealTime tables. Listed RealTime data can also be
passed to other 4GL packages.

Lister/Loader Options

With dbll, all inputs must be specified on the command line. Typing dbll at the command
prompt displays the options described below. The Lister/Loader on the hot RealTime Service
provides the following options:

Table 16 - Lister/Loader Options


Option Description
template This option is used to create a default template file.
flat This option is used to create a flat file containing comma-separated
data.
tabular This option is used to generate a report-style output. The generated
output can be used as an input load file.
list This option is used to generate a vertical list output. The generated list
can be used as an input load file.
load This option is used to quickly load a table without data validation. This
should be used only on a non-hot system.
safeload This option is used to load a table with the contents of an input file.
bulkfastload Bulk load a database without performing validation. Same as load
processing, except the bulk list of input (INIT, L) files is passed in.
bulklist Lists databases that are specified in the bulk list file.
bulktemplate Bulk dump a template for all specified databases. It is similar to the
template option, except that the list of tables that the template file is
generated from is specified through the bulk list file.

NOTE: Unless some sort of a remote login tool is available, dbll should be run from a
RealTime server.
NOTE: The remote, connection and remconnjoin entries must be consistently updated when
dbll is used. If you use dbll to save database contents, then modify the values in the
files before using dbll to reload the values, you must ensure that:

1. Any remote name change must be applied to the remconnjoin rname field for all
remconnjoin records with the remote name.

2. Any connection name change must be applied to the remconnjoin cname field for all
remconnjoin records with the connection name.

3. Any remote dataset change must be applied to all remconnjoin records with the
remote name.

NOTE: ADE is the recommended tool to use for changing remconnjoin data.

5.4 Creating a Template from RealTime


The first step in creating a text file that describes the contents of a RealTime table is the
production of a template file. The dbll template option creates an ASCII text file containing all of
the defined fields in a RealTime table.

The template file is initially created with all fields in a table, but you can use an editor to
eliminate unnecessary fields. Such selective processing ensures that only specified fields are
loaded or listed. The order of these fields is important only if the resulting text file is loaded into
RealTime using SAFELOAD.

For example, you can use a template of the analog table to obtain a list of all current values.
This file can then be edited to delete all fields (such as alarm limits, setpoints, etcetera) that are
not relevant to current point values. To deal with unwanted lines, either place a semicolon (;) at
the beginning of each of the lines, or simply delete them.

Syntax:

(To create a template):

dbll template <table_name> <output_file>

where:

<table_name> is the name of a RealTime table (for example, alarm, group, rate, remote,
etcetera).

<output_file> is the name of the output template file.

The output template file (output_file) is placed in the current working directory from which
dbll was executed, unless another directory is specified in conjunction with the filename. This
holds true for all files created by dbll.

NOTE: The directory specification should not contain environment variables.
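
For example, the following command creates a template file for the analog table; the file name
analog.t matches the template file used in the listing examples later in this chapter:

dbll template analog analog.t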

RealTime tables can be listed in text files in three different formats: list, tabular, or flat. The
syntax used for these formats is:

dbll [list|tabular|flat] <template_file> <list_file> <output_file>

where:

<template_file> is the name of the template file containing the table name and desired
fields.

<list_file> is the point list file. Enter the “wild card” asterisk (*) to list all of the records
in the table. If a record list file has already been created, enter the name of that file. If
there is no record list file, you can create one using a text editor such as Notepad. The list
file lists the names of all the records to be included, each on a separate line. Use semicolons to
denote comments. For example:

;RTU DV02

DV02PTDV0602

DV02PTDV0611

<output_file> is the name of the output file. This file is stored in the current working
directory, unless another directory is specified in conjunction with the file name.
Related Information
Listing RealTime Tables on page 50

5.4.1 Bulktemplate
Bulktemplate allows you to bulk dump a template for all specified databases. It is similar to
template, except that the list of tables that the template file is generated from is specified
through the bulk list file.

Syntax
dbll bulktemplate <inbulkfile>

Example
<inbulkfile> : dbll_bulk_template.txt
The following is an example of how each line of the file is formatted: analog|\path\analog.t
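
For example, a bulk list file passed to the bulktemplate option might contain one line per table
(the paths are placeholders):

analog|\path\analog.t
status|\path\status.t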

5.5 Listing RealTime Tables


RealTime tables can be listed in three different formats: list, tabular, or flat.

5.5.1 List
The list format creates a text file that displays one field per line with the record numbers
separating the records. Use dbll list to list the contents of a RealTime table in a text file in
list format.

Syntax:
dbll list <template_file> <list_file> <output_file>

Example:
dbll list analog.t * analog.l
The output file is similar to the text file shown below.

Figure 20 - List Format Output
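
The exact fields depend on the template, but a list-format file generally resembles the following
sketch (the record names and values are illustrative):

POINT# 1
name ana1
curval 120.000000
POINT# 2
name pressure58
curval 74.500000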

5.5.2 Tabular
In the tabular format, output is listed in columns, one row per record, with the field names as
headers. Use dbll tabular to list the contents of a RealTime table in tabular format within a
file.

Syntax:
dbll tabular <template_file> <list_file> <output_file>

Example:
dbll tabular analog.t * analog.tab
To specify the record number to be used, the first column must have a header of POINT.
Whether modifying or creating records, omitting the POINT column results in the unique key
(typically the name field) being used as a locator. Ensure that the key for the record is one of the
columns in the table. The remaining headers must all be single words denoting the field names
of the table. Structure items may be included; for example, flag.alminh is a valid format for a
column header. Column titles must be followed by a line of dashes to indicate the width of the
field. The fields must be separated by one or more spaces. The width of each field is either the
field width as defined in the table, or the number of characters needed to print the field name,
whichever is greater. An example is shown below.

Figure 21 - Tabular Format
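
In text form, a tabular file might look like the following sketch (the record names and values are
illustrative):

POINT name       curval
----- ---------- ----------
1     ana1       120.000000
2     pressure58 74.500000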

5.5.3 FLAT
The flat format lists each record on a single line and separates the fields by commas. Flat files
are used to allow the transfer of data from a RealTime table to a third-party spreadsheet or
DBMS.

Syntax:
dbll flat <template_file> <list_file> <output_file>

Example:
dbll flat analog.t * analog.flat
Text fields in the output are surrounded by double quotation marks. An example is shown below.

Figure 22 - Flat Format
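
In text form, a flat file might look like the following sketch (the field order follows the
template; values are illustrative):

1,"ana1",120.000000
2,"pressure58",74.500000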

5.5.4 Bulklist
Bulklist lists databases that are specified in the bulk list file.

Syntax
dbll bulklist <inbulkfile>

Example
<inbulkfile> : dbll_bulk_list.txt

The following is an example of how each line of the file is formatted:
analog|\path\analog.t|*|\path\analog.l

5.6 Loading Data into RealTime from a text file


There are two ways to load data into RealTime from a text file when using dbll. The format of
the text file is the same as that created using the dbll list command.

The following options are used with dbll:

• load

• safeload

• bulkfastload

5.6.1 Load
This option is used to quickly load data into RealTime with minimal data validation. You can only
use the load option on a RealTime database server that is currently in the FAIL (or shutdown)
state. Since no field level checks are performed, the load option is typically used only to restore
data from files that are known to be valid.

A record in RealTime that has the same record number as a record in the text file that is used in
loading the database will have all its field information overwritten even if it has a different unique
key (normally the Name field).

If a record in the text file has a record number of zero (i.e. its POINT# field is set to zero), a new
RealTime record is created if this record’s unique key is different from the unique keys of any
records that exist in the RealTime database. Loading a text file with a record that has its
POINT# field set to zero, but a unique key identical to the unique key of a record in the pre-
existing RealTime database, generates an error. As a result of the error, the system will not
modify the existing database record or load the text file record.

Syntax:

dbll load <data_file> <error_file>

The load option instructs the dbll program to read from the data_file containing the table
record specification, field names and field values. The data_file can be in list or tabular format
(created using the list or tabular options, or through a text editor). General tabular formats
can also be used. Any errors that occur during the load are written to the file error_file.
NOTE: The dbll load utility can only be run on a RealTime machine that is not currently running
(i.e. RealTime service is neither hot nor standby). If run on a hot or standby RealTime
service, dbll load prints a warning and exits; however, the dbll safeload utility can be
used on a hot RealTime service.

5.6.2 Safeload
This option is similar to load except that field checks are performed during the loading process.
Although you can run a safeload option on a failed system, you will receive a warning that not
all database checks will work. This option can be used to load data on a live system.

Loading a text file containing a record that has a unique key identical to the unique key of any of
the records that exist in the RealTime database generates an error. As a result of the error, the
system will not modify the existing database record or load the text file record.

NOTE: Exercise extreme caution when performing dbll safeload on a live system. “Runtime”
fields, such as analog’s curspt field or status’ lastKrunchTime field, should never be
included in the ASCII data file. The file should include only configuration fields, such as
analog’s units field or status’ run.doit field. Use of this utility should be limited to those
with a thorough understanding of the system.

Syntax:

dbll safeload <data_file> <error_file>
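
For example, the following command (the file names are illustrative) loads records from a
list-format file and writes any errors to a separate file:

dbll safeload analog.l analog.err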

NOTE: The dbll safeload option should be used when creating new records. The dbll load
option does not check for duplicate keys when loading a database.

5.6.3 Bulkfastload
Bulkfastload allows you to bulk load a database without performing validation. The bulk list
of input (init,l) files is passed in. This option is similar to load.

Syntax
dbll bulkfastload <inbulkfile>

Example
<inbulkfile> : dbll_bulk_load.txt
The following is an example of how each line of the file is formatted:
analog|\path\analog.l|\path\analog.l.err

5.7 Creating Remote Records


Remote records can be loaded using dbll. To load remote records using dbll, both the
remote table and the required protocol table must be loaded.

A remote record always has one partner record in one of the protocol tables. Records are
associated by their point numbers and point names, so the same point number and name must
be assigned to both the remote record and its corresponding protocol record. For example, if
the remote record number 23 is called RTU-XXX, there must also be a protocol record number
23 with the name RTU-XXX (for example, in the modbus table).
NOTE: The protocol tables only appear as options in the Database Management Tool when
you edit the remote record. However, they must be treated as independent tables when
using dbll.

5.8 Loading Records Using Calculation and Control Routines

Calculation and control routines can be used to load records. Each record that contains a
configured calculation and control routine also contains a reference to a record in the task
control block or tcb table in the TCBnum field.

The tcb record contains:

• The BASIC execution environment and the read/write portion of the BASIC code that runs
from the calculated/control record

• A reference to the record it serves through the field name defn.execpoint

• A reference to the databasic record that defines the DataBASIC routine to execute through
the field name defn.routine

If the calculated/control record is configured to run periodically, the tcb record contains a
reference to a job scheduler (jsh) record through the field name defn.JSHpntnum.

If the calculated/control record is configured to run on a trigger, the tcb record contains
references to the trigger records through the field names defn.excpRec[0], defn.excpRec[1],
and defn.excpRec[2].

If the Database Management Tool is used to create, delete, and/or rename calculated/control
records, all of these table references are maintained. However, if a table is saved, manually
modified, and reloaded using dbll, the user is responsible for maintaining the references. The
ADE is the preferred method of performing changes, since dbll does not provide full error
detection, and errors can be introduced into the RealTime database if the load file is incorrectly
modified.
An example of table and basic program relationships is illustrated in the figure below. In this
example, an analog record, A1, is configured to periodically execute the DataBASIC routine
level_check. Upon configuration of the BASIC Execution Block in the analog record:

• A new, unique tcb record (TCB3) is created, and its defn.execpoint field is set to the path of
the analog record.

• The TCBnum field in the analog record is automatically assigned to the tcb record just
created.

• The defn.routine field in the tcb record is assigned to the DataBASIC routine
level_check.

• A new, unique job scheduler record is created and assigned an integer argument that
indicates which tcb record is involved.

Upon subsequent dbll listing, editing and reloading of any of these records, the relationships
shown in the following figure must be maintained.

Figure 23 - Table and basic program relationships for an analog record
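
In outline, the relationships for this example are as follows (record numbers and field formats
are illustrative):

analog record A1: its TCBnum field references tcb record TCB3
tcb record TCB3: defn.execpoint = analog.A1, defn.routine = level_check, and defn.JSHpntnum
references the associated jsh record
jsh record: its argument holds the record number of TCB3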

Whenever analog, status, rate, application, and/or jsh table list files are manually modified and
loaded using dbll, the tcb table must not only be loaded, but must also be the last table loaded.
Reloading the tcb table corrects any references if table records were reloaded at different
locations in the table.

If the name of a calculated/control record is modified, then the name in the defn.execpoint field
of the tcb record referenced by the TCBnum field in the calculated/control record must also be
modified. The same applies to modifying the name of records that provide the table triggers
defn.excpRec[0], defn.excpRec[1], and defn.excpRec[2] used by a calculated/control record.
The tcb table is loaded after the table containing the calculated/control record is loaded.

If a duplicate of a calculated/control record (i.e. an analog, status, rate, or application record that
has a non-zero TCBnum value) is created in the list file, then a copy of the tcb record
referenced by the TCBnum field in the original calculated/control record must also be created.
This is not the recommended method of creating copies of calculated/control records – the ADE
is both easier and safer to use. The tcb field defn.execpoint must contain the new calculated/
control record name, while the new record's TCBnum field must contain the record number of
the new tcb record.

If the new record is executed periodically, then a copy of the job scheduler record referenced by
the tcb field defn.JSHpntnum in the original tcb record must be created. The new tcb record’s
defn.JSHpntnum is then updated to reference the new jsh record. The new jsh record is
updated to contain a name indicating the referenced tcb record and an argument that equals the
record number of the new tcb record. The calculated/control table and the jsh table are then
loaded, followed by the tcb table.

5.9 Other Utilities for Loading and Saving


The savedata, text_save, loadblankdb, loaddata, and text_load utilities can all be
used to load or save records and tables from the RealTime database.

5.9.1 savedata
The savedata command saves a copy of the current RealTime database from memory to disk
regardless of the current state of the machine.

Syntax:

savedata

or

savedata [full][-d][-w][-h][n]

Table 17 - savedata options and arguments


Option/Argument Description
full An optional parameter indicating that root.txt and types.txt are
save in addition to save.dat
-d Optional parameters indicating that the database is saved as
save.dat.day.n. If n is not specified, 5 is used.

Table continued…

56
RealTime Database Utilities

Table 17 - savedata options and arguments (continued)


Option/Argument Description
-w An optional parameter indicating that the database is saved as
save.dat.week.n. If n is not specified, 4 is used.
-h An optional parameter that displays options and/or parameters of the
command
n An optional parameter, which is an integer, that indicates the number of
backup files to keep
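
For instance, the following hypothetical invocations illustrate the options (the backup count shown
is illustrative and assumes n follows the -d or -w option, as in the syntax line above):

savedata full

savedata -d 7

savedata -w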

5.9.2 text_save
The text_save command generates templates and list files of all the RealTime tables.

The command is used to save the RealTime tables in text format, which can be edited and
individually loaded using other utilities such as dbll. This command is executed from a
command prompt on a machine where the RealTime database is installed.

NOTE: The RealTime service must be shut down and in the FAILED state in order for
text_save to run safely.

Syntax:

text_save

or

text_save -t <directory>

Where -t <directory> specifies the directory where the files are saved
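
Example (the directory shown is illustrative only):

text_save -t D:\dbsave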

5.9.3 loadblankdb
The loadblankdb utility loads a blank RealTime database from the *.init text files into a
RealTime shared memory.

Syntax:

loadblankdb [-x][-h][-o]

Table 18 - loadblankdb options


Option/Argument Description
-x This parameter dictates that the program exits upon encountering an
error.
-h This optional parameter displays options and/or parameters of the
command.
-o This parameter is an override; used by startup.pl.
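
Example (for use only during a controlled startup or restore, as the warning below explains):

loadblankdb -x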

WARNING: The loadblankdb command is used during RealTime service startup. It should
never be used on a live system. Only those with a thorough understanding of the
system should use this utility. The indiscriminate use of loadblankdb will
destabilize a live and hot system.


5.9.4 loaddata
The loaddata command loads the RealTime database from the saved binary (data) files.

The command reads RealTime from the files created by the most recent run of savedata. It
will initialize and then load the entire database. This command is executed from a command
prompt on a machine where RealTime is installed.
NOTE: The RealTime service must be shut down and in a FAILED state in order for loaddata
to run safely.

Syntax:

loaddata [-h][-f][filename]

Where:

-h prints the line argument options on the screen.

-f forces the load of the new data even if RealTime is still running and the shared memory has
already been loaded.

filename is the name of the datafile to be loaded. If no filename is supplied, loaddata uses
save.dat. The file is decompressed automatically if compressed.
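
Example (the filename shown is illustrative; it follows the save.dat.day.n pattern produced by
savedata -d):

loaddata save.dat.day.3
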
NOTE: The loaddata utility is used during RealTime service startup. Only those with a
thorough understanding of the system should use this utility. The indiscriminate use of
loaddata will destabilize a live and hot system, resulting in, among other things, the
sending of unintended commands to field devices.

5.9.5 text_load
The text_load command is used to restore the RealTime database.

The command loads RealTime from the files created from a previous text_save command.
The list files (*.1) contain text versions of the tables in RealTime. text_load is executed from
a command prompt on a machine where RealTime is installed.
NOTE: RealTime service must be shut down and in a FAILED state in order for text_load to
run safely.

Syntax:

text_load

or

text_load [-t <directory>]

Where -t <directory> specifies the location of the files to be loaded.
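
Example (the directory shown is illustrative only):

text_load -t D:\dbsave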

NOTE: The text_load command is normally executed when restoring RealTime. Only those
with a thorough understanding of the system should use this utility. The indiscriminate
use of text_load can have serious consequences.


5.10 Obtaining Information from RealTime Tables


Schneider Electric has written many stored procedures that allow the RealTime database to be
viewed and modified.

The following stored procedures will return the output “0 rows affected” if there are errors in
processing or executing the stored procedure. No other error is returned to the user. The output
“one row affected” appears if the stored procedure executes successfully.

Errors are returned during processing if there are errors in the syntax of the command.
NOTE: These stored procedures are not case sensitive.

For more information on the use of stored procedures, refer to the Business Object Reference
manual.

5.10.1 sp_databasesXML
This stored procedure lists the available databases in the SQLEngine. It lists all the databases
from all data stores that are attached.

Syntax:

sp_databasesXML()

Example:

call sp_databasesXML();

Output:

<?xml version="1.0" standalone="no" ?>

- <OASyS DNASQLEngineSchema>

<database name="RTDB" />

</OASyS DNASQLEngineSchema>

5.10.2 sp_tablesXML
The sp_tablesXML stored procedure lists all the tables within a database or all databases.
You can use the SQL wildcard character “%” as a database name to specify all databases.

Syntax:

sp_tablesXML ('<dbname>')

Example:

call sp_tablesXML ('%');


Output:

The output will be a list of all the tables in all the databases.

Example:

call sp_tablesXML ('RTDB')

Output:

The output will list all the tables in the RTDB database.

<?xml version="1.0" standalone="no" ?>

- <OASyS DNASQLEngineSchema>

- <database name="RTDB">

<table name="message" description="Message text and color"
  allowUpdate="yes" allowSelect="yes" allowDelete="yes" allowInsert="yes" />

<table name="webxosdisplay" description="Table of WEB clients currently connected"
  allowUpdate="yes" allowSelect="yes" allowDelete="yes" allowInsert="yes" />

<table name="almdisturbance" description="Alarm Disturbance"
  allowUpdate="yes" allowSelect="yes" allowDelete="yes" allowInsert="yes" />

</database>

</OASyS DNASQLEngineSchema>

5.10.3 sp_columnsXML
The sp_columnsXML stored procedure lists all the columns (fields) within a table and
database. You can use the SQL wildcard character “%” for the table and column parameters,
but not for the database parameter.

Syntax:

sp_columnsXML('<database>','<table>','<column>')

Example 1:

call sp_columnsXML('RTDB','%','%');

Output:

Lists all the columns of all the tables in the RealTime database


Example 2:

call sp_columnsXML ('RTDB','analog','%');

Output:

Lists all the columns of the analog table

Example 3:

call sp_columnsXML('RTDB','message','name');

Output:

Lists the name column of the message table

<?xml version="1.0" standalone="no" ?>

- <OASyS DNASQLEngineSchema>

- <database name="RTDB">

- <table name="message" description="Message text and color"
  allowUpdate="yes" allowSelect="yes" allowDelete="yes" allowInsert="yes">

<column name="name" datatype="string" description="Message key name"
  length="15" nullable="no" casesensitive="no" defaultvalue="yes" />

</table>

</database>

</OASyS DNASQLEngineSchema>

5.10.4 sp_PKeysXML
The sp_PKeysXML procedure lists the primary keys of the table within a database. You can use
the SQL wildcard character “%” in place of the table and/or key name.

Syntax:

sp_PKeysXML('<database>','<table>','<key>')

Example:

call sp_PKeysXML('db','message','%');


Output:

Lists the primary keys of the message table

<?xml version="1.0" standalone="no" ?>

- <OASyS DNASQLEngineSchema>

- <database name="RTDB">

- <table name="message" description="Message text and color"
  allowUpdate="yes" allowSelect="yes" allowDelete="yes" allowInsert="yes">

<index name="name" column="name" isunique="yes" type="other"
  sequence="1" collation="ascending" cardinality="" />

<index name="setName" column="setName" isunique="no" type="other"
  sequence="1" collation="ascending" cardinality="" />

</table>

</database>

</OASyS DNASQLEngineSchema>

5.10.5 sp_FKeysXML
The sp_FKeysXML stored procedure lists the foreign keys of a table within a database. You can
use the SQL wildcard character “%” in place of the table and/or key name.

Syntax:

sp_FKeysXML('<database>','<table>','<key>')

Example:

call sp_FKeysXML('db','message','%');

Output:

<?xml version="1.0" standalone="no" ?>

- <OASyS DNASQLEngineSchema>

- <database name="RTDB">

- <table name="message" description="Message text and color"
  allowUpdate="yes" allowSelect="yes" allowDelete="yes" allowInsert="yes">

<foreignkey name="name" primarycolumn="name" foreigndatabase=""
  foreigntable="" foreigncolumn="" sequence="1" />

</table>

</database>

</OASyS DNASQLEngineSchema>

5.10.6 use
The use stored procedure is used to override the default database given by the connection.
This allows you to shorten the database specification string from database.dbo.table to
table when composing SQL.

Syntax:

use ('database')

Example:

Without using use, a statement would look like:

select currentTemp from weather.dbo.Calgary;

Using use, a statement would look like:

call use ('weather');

select currentTemp from Calgary;
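
As a further illustration, the same shorthand can be applied to RealTime queries (the analog table
and its curval field are used here purely as an example):

call use ('RTDB');

select curval from analog;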

5.11 minSendTimeMs
You can add the minSendTimeMs variable to the config.sys.template.sql file for
cmxrepclient. It is a performance tuning variable that enables you to control the frequency at
which the CMXrepsvr's transmit buffer sends data to the destination.

Recommended Usage

The CMXrepsvr's transmit buffer holds data before it gets sent to the destination. Normally, this
buffer gets filled before being sent; however, this behavior is not ideal for a heavily loaded
system. The transmission of a large amount of data can have a negative impact on CMXrepsvr
performance because the destination cannot process all of the data it receives at once.

Use minSendTimeMs to configure the frequency at which CMXrepsvr flushes the transmit
buffer. This allows CMXrepsvr to flush the data according to your system's needs and speed up
processing at the destination.


Setting minSendTimeMs
Follow these steps to configure minSendTimeMs:
1. Locate the config.sys.template.sql file.

2. Add the following:


configure server set minSendTimeMs = <amount of time that should
elapse before the buffer flushes>

3. Save the file.

This registry entry changes the default behavior of the system.

After this variable is added, you can execute a command to change the behaviour at runtime.

Command Syntax
configure server set minSendTimeMs = <amount of time that should elapse
before the buffer flushes>

Default Behaviour
The default behaviour of the CMXrepsvr is to flush the transmit buffer every 10 milliseconds.
NOTE: If the buffer is full before the time set by minSendTimeMs, the buffer will be flushed.

Example
configure server set minSendTimeMs = 500


6 OPC Data Access Server


The Schneider Electric DNA OPC server allows OPC clients to access DNA RealTime data, as
well as data that has been published through the DNA Publish/Subscribe middleware. The
server has been certified as OPC compliant for both Data Access V2.05 and Data Access V3
standards.

Since the OPC server provides access to RealTime data, as well as the ability to write data to
the RealTime database, you must install this server on the DNA RealTime machines (both hot
and standby). It will run on the hot RealTime machine.

6.1 OPC Server


An OPC server acquires data, stores it in an internal cache, and makes that cached data
available to one or more OPC clients.

Since the OPC protocol used between OPC clients and OPC servers is standardized, the only
real distinguishing feature for any OPC server is from where the data is acquired.

The Telvent OPC server acquires its data from the RealTime SCADA database, via Publish/
Subscribe middleware. The Schneider Electric OPC server does not poll the SCADA PLCs
directly; that operation is performed by Omnicomm and its field protocols. When a client writes
data back to the OPC server, it writes the data back to the RealTime database using either the
Business Logic Tier (BLT) middleware or directly through HPDB (High Performance Database).
This preference is set in the OPC Server instance’s configuration file, and applies to all clients
of that OPC Server.

The OPC server in an OASyS DNA system is nothing more than a gateway process. It allows
clients to read PubSub-accessible data and write RealTime data by using the OPC protocol. The
OPC server can be bypassed entirely by using PubSub and BLT middleware to perform the
identical read and write operations from a client application.

The diagram below graphically represents how the OPC server interfaces to our system.


Figure 24 - OPC Server Interface to DNA Product

An OPC client is any application that acquires data from one or more OPC servers using the
OPC standard protocol. Although OPC clients can also write data back to the server, this is not
recommended. Using BLT, it is a slower operation than reading data, and it is not a good design
structure if you plan on using an OPC client to push large amounts of data into the RealTime
database. Using HPDB for writing data is faster, but at the cost of security and AOR checking.

There are many resources on the internet that describe the OPC protocol in detail. A good
starting point would be the links found on the following page:

http://www.opcconnect.com/opcintro.php
Related Information
OPC on page 100

6.2 OPC DA Client and Server Connection


In order to connect to an OPC server, your client application must specify both a computer
hostname and an OPC server name. The default OASySDNA OPC server name is
OASySDNA.OPCDAserver.1 or OASySDNA.OPCDAserver (the version independent name).

For hostname selection, you must specify the virtual RealTime hostname. For example, you
have redundant RealTime machines named masterRTS1 and masterRTS2, and the virtual
RealTime hostname is masterRealTime. Your client should always connect to the host named
masterRealTime. This ensures that your connection is always made to the hot and operational
RealTime OPC server.

WARNING: While many clients allow you to browse for all servers on a specified machine, the
virtual RealTime hostname will not show up this way. You must type it in explicitly.
If you are having connectivity problems, first verify that the OPC server is running on the hot
RealTime machine. Any subsequent problems can probably be traced to DCOM issues.


WARNING: If a client is connected during a RealTime failover, the client will be required to
reconnect. This is because the OPC Server they were connected to went on
standby, so the client must establish a connection to the newly hot machine.
The following web pages cover DCOM security settings and issues in detail:

http://www.opcconnect.com/dcomcnfg.php

http://www.opcfoundation.org/WebUI/DownloadFile.aspx?CM=1&RI=23

http://www.gefanucautomation.com/opchub/opcdcom.asp

http://www.opcactivex.com/Support/DCOM_Config/dcom_config.html

http://www.kepware.com/Support_Center/FAQ_DCOM.html

6.2.1 Additional instances of the OPC Server


In order to address concerns over the Area of Responsibility for RealTime data, you can create
multiple instances of the OPC Server. This will enable you to limit the read and write access per
instance, so that you do not inadvertently provide full read (and potentially write) access to all
OPC clients.
Multiple instances of the OPC Server can be run on the same RealTime machine. With them,
you can employ whitelists that will expose different subsections of the RealTime database per
instance of the OPC Server. Each instance can have its own access levels, and clients using
the OPC Server will be confined to the views of the RealTime database that you have
configured in the whitelists. For example, the record analog.blat can be configured to be
readable and writable for one OPC Server, and read-only to a second OPC server. You can also
make any instance of the OPC Server oblivious to a record’s existence by leaving it out of the
instance’s whitelist.

Multiple instances are created by soft-linking the files in the OPC directory from bin to child. The
configuration file is modified for each instance, making each unique. Multiple instances are set
up by Schneider Electric personnel upon request.
OPC Directories
OPC Server directories contain shortcuts to all of the necessary files that connect the OPCDA
Server instance to the OASyS DNA SCADA system, including the DANSrvNet4.exe and
DANSrvNet4.exe.config files.

OASyS DNA SCADA can use one or more instances of the OPC DA Server. Each instance has
a directory where the instance can be configured. The figure below displays an example of an
OPC Server directory and the files it contains.


Figure 25 - Contents of an OPC Server Instance

NOTE: For all systems prior to ElkSP3, DANSrv.exe and DANSrv.exe.config are used instead
of DANSrvNet4.exe.
DANSrv Configuration file
The DANSrvNet4.exe.config file (DANSrv.exe.config for ElkSP2 and earlier) defines the OPC
DA server using unique application IDs and names. This file must be modified to be unique for
each instance of the OPC DA server.

The default values in the original file need to be modified to create a new instance of the server;
new GUIDs must be generated for the values in the configuration file to allow the new instance
of the OPC Server to be registered.

The ClsidServer and ClsidApp values both need a new GUID to replace the existing one.

The ServerProgID, CurrentServerProgID, and ServerName values must all be changed for the
new instance of the OPC Server. These names must not have been used by previous instances,
as they are the unique identifiers of the new OPC Server instance.

6.3 Data Sources and Tag Names


The OPC server provides potential access to any data that is (or can be) published via the
PubSub middleware. Included in this data is any field of any record within the RealTime
database, so an OPC client can access any RealTime data of its choice. Whitelists can be used
to restrict this access.

When specifying OPC item names, the following conventions hold for all data contained within
the RealTime database:
<system name>.realtime.db.<table name>.<key name or number>.<field
specifier>
where the names mean the following:
<system name>
All DNA systems have a system name, such as: master. The first field in an item name must
contain the system name from where the data is acquired. The system name is either supplied
by the OPC Client user, or by instructing the OASySDNA OPC Server to supply it implicitly. To
make the OPC Server supply the system name, you need to enter option=NOSYSTEMNAME
into the OPCDAserverConfig.txt file for the OPC Server instance you want modified. In
almost all instances, this is the name of the system on which the OPC server is running;
exceptions to this rule should be kept to a minimum because of the performance degradation
that could result.
<table name>
This refers to the name of the table within the RealTime database, such as: analog or status.
<key name or number>
This specifies which record or record number within the table, such as: Fred or 1.
<field specifier>
This indicates the field within the record that contains the data of interest, such as
flag.fresh. This field must be specific down to an elemental data value – you cannot attach
to an item that is a structure or an array.

For example, attaching to master.realtime.db.analog.fred.flag is incorrect, since the
flag field of an analog is itself a structure.
NOTE: Some OPC Clients can use a configuration file that contains paths, which are used to
subscribe to points. By making use of the NOSYSTEMNAME option, the paths can be
made relative or independent of the system that owns the OPC Server to which the
client connects. For example, the line es.realtime.db.analog.blat.curval
would be appropriate in the configuration file of a client running on the ES system, but
not appropriate in the configuration file for main. The NOSYSTEMNAME option turns
the line into realtime.db.analog.blat.curval which will work regardless of the
system from which the OPC Client is running. With this option specified in the
OPCDAserverConfig.txt file, the OPC client can use the same list of points to
subscribe to data on multiple machines/systems.

Examples of legal RealTime data item names:

• master.realtime.db.analog.fred.curval

• master.realtime.db.status.pump1.cursta@raw

• master.realtime.db.analog.fred.flag.fresh@raw

• master.realtime.db.remote.sched[0].timerblk.day

PubSub Topic Space

In addition to items contained within the RealTime database, you may also attach to any item
that is available within the PubSub topic space, provided it has been published using the
standard PSbuffer format.

For example, if you wish to monitor the arbitration state of the primary RealTime server, then you
would add an item as follows: master.realtime.arb.primary.states.state. Similarly,
the hostname of the primary RealTime machine can be acquired via:
master.realtime.arb.primary.states.hostname.

If a PubSub message contains multiple data items within the PSbuffer message, only the first
data item is available to the OPC Client.

One other restriction is applicable to topics within the PubSub topic space. You may not specify
any item that contains fewer than five segments. For example, master.b.c.d.e is legal, while
master.b.c.d is illegal. This restriction prevents the OPC Server from being overloaded by a
client that attempts to make a subscription that is too broad. As an example, any attempt to
attach to an entire RealTime table, such as master.realtime.db.analog, would be not
only erroneous, but very deleterious to system performance.

Invalid OPC item names are logged in the <InstallDirectory>\log\OPCServer.log files and the
OPC Server returns an invalid argument code. Invalid item names are those that do not parse
correctly, for example, names that end with a dot or contain two consecutive dots, as well as
names with invalid RealTime table, point, or field specifier values. However, it is possible to
subscribe to any correctly named RealTime field, including one that does not exist. This allows
access to data that a process will publish to that field in the future.
Related Information
Client Browsing on page 79

6.4 Writing Data


The OPC Server can be configured to perform writes in two ways: via BLT and via HPDB. BLT
is the process that is used by default. Its use is described in the examples below. HPDB (High
Performance Database) is the process that can be used when your OPC Server instance
requires high-throughput writing. Should you wish to use HPDB, you will need to set this
writing behavior in the OPCDAserverConfig.txt file.

Figure 26 - Methods used by the OPC Server to write to the RealTime Database

HPDB allows the server to write to RealTime Database points as quickly as is possible. HPDB
performs faster writes because it does not use security or AOR checks. HPDB should only be
used when security is not a significant concern for the OPC Server instance. All OPC clients of
an instance using HPDB as its write method are authorized to perform writes, so long as
option=READWRITE appears in the configuration file.

With a default OPC Server configuration, all RealTime items are marked as writable, and OPC
clients may attempt to write to any of them. Although writes are possible, they may not always
succeed since fields within the RealTime database have complex rules as to what can be
modified.

For example, attempting to modify the flag.fresh field of an actively telemetered analog record
always fails, even though the OPC server attempts this operation when requested. The OPC
server takes any write request and attempts to modify the database via a virtual fieldput
operation. The result of the v_fldput call will be returned to the client.


Even when the write succeeds, the resulting change depends on the virtual layer routine that
services the write request. For example, writing a value to a telemetered analog’s setpoint field
would result in a setpoint command being sent out to the PLC. Another example would be
writing a value to the curval of an analog that is in manual mode; the write succeeds, but alarm
limits are not checked, nor are alarms created as a side effect of this change.

To change the way your OPC Server performs writes from BLT to HPDB, you need to open the
OPCDAServerConfig.txt file and insert the line: option=HPDB. The OPC Server needs to be
restarted to apply the change. The best way to switch write methods is to change the
OPCServer on standby first, and then failover so the change can be made on the new standby. If
the change is successfully applied, a message will appear in the OPCServer.log that states
HPDB enabled for non-override writes.
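
For example, a minimal OPCDAserverConfig.txt for an instance that must perform high-throughput
writes might contain the following two lines (a sketch only; both options are described in this
chapter, and other options can be added as needed):

option=READWRITE

option=HPDB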

Items that are not contained within the RealTime database cannot be modified. These items
exist in the PubSub ether and cannot be modified by anyone except the original publisher of the
message. All items outside of the .realtime.db. name space are marked as read only.

All of the RealTime database fields that end in @raw are not writable fields. For example, you
may read main.realtime.db.status.cursta@raw, but you may only write to
main.realtime.db.status.cursta. This is due to the vdb layer processing.

CAUTION: Writing data back to the RealTime database is a much slower operation than
reading. Use the HPDB option for OPC Server instances that need to perform a
high amount of RealTime database writes.

6.4.1 OPC Server whitelists


The use of an OPC Server whitelist modifies the way distinct instances of the OPC Server can
access the RealTime database. Permissions to read, write, and browse points are all
affected when a whitelist is used.

Whitelists contain a list of RealTime database points that the OPC Server and its clients are
allowed to access. Ultimately, whitelists control whether or not a point can be written to by
limiting the OPC Server's access; if the whitelist does not contain a point, access to it is blocked
for reading, writing, and browsing.

The whitelist is read and applied to the OPC Server instance at startup or restart of the OPC
Server. Therefore changes to the whitelist must be entered before startup, or the OPC Server
must be restarted if changes are made. The OPC Server reads and processes the whitelist into
an internal hash lookup table, so that it can check all client actions against this table and
disallow ones that are not whitelisted.

An OPC Server Whitelist can be made for any OPC Server instance on any project. All
whitelists must be named OPCDAWhiteList.txt. They are specifically written per OPC Server
instance, and are saved and maintained in the corresponding directory. Whitelists contain three
entries: TableName, PointName, and the writable status.

To employ whitelisting, a project must create a whitelist that matches its specifications for the
associated OPC Server instance, and check it prior to implementation. To be implemented, the
whitelist must be saved in the instance folder for the OPC Server to which it pertains. If there is
no whitelist in the folder, the OPC Server instance defaults to normal behavior as specified by
the OPCDAServerConfig.txt file.

NOTE: The OPCDAServerConfig.txt file appears in every OPC Server instance directory. This
file configures the default access to RealTime database points. The whitelist, which is
the first to be processed upon instantiation, works in conjunction with the configuration
file, but it does not override it. In other words, access must be granted by both files, or
attempts to write to the point will be blocked.
Related Information
Additional instances of the OPC Server on page 67

Writable status
Whitelist entries control whether a point is writable, in addition to granting access to database
points, thereby allowing projects broad control over the way distinct OPC Server instances
interact with the RealTime database.

Points in the RealTime database that do not appear in the whitelist can be neither read nor
written to. Each whitelist entry contains: the table in which the point is located, the point name,
and the writable status. A point will either be set as READWRITE or READONLY. READWRITE
grants full access to the point; it can be browsed and written to. The READONLY status is used
to block writes to a point that is whitelisted. Any time a client tries to access an unlisted point, or
write to one that is listed as READONLY, the action will be prevented and an error message will
be sent to the OPC Server log.
Creating a whitelist
Whitelists can contain a select number of points, from a select number of records, or you can
use them to configure the writable status of all RealTime records. Because you create it, a
whitelist can be tailored to the needs of any instance of the OPC Server.
Before you write the whitelist, you will need to consider the objectives for its use with the OPC
Server instance to which it will be applied. This might mean analysis of the points in the
RealTime database and the groups to which these points belong. All of the records in the
RealTime database are candidates for whitelisting. Planning the objectives of the whitelist is the
first step, and should include verification of the names and spelling of the tables and points the
whitelist will include. The whitelist is written so that the TableName, PointName, and writable
status appear on the same line for each entry. The values in each entry can be separated by
any number of commas, spaces, or tabs depending on your formatting preferences.
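
For example, a minimal OPCDAWhiteList.txt might look like the following (analog.blat is taken
from the earlier example; the other point names are hypothetical):

analog   blat     READWRITE
analog   flow01   READONLY
status   pump1    READONLY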

Procedure

1. Open a .txt file and save it as OPCDAWhiteList.txt in the OPC Server directory for the
instance you are modifying.
2. Type the first entry on the first line.

a) Type the name of the Table followed by any number of commas, spaces, or tabs.
b) Type the name of the Point followed by any number of commas, spaces, or tabs.

c) Type the writable status.

The writable status will be set to READWRITE or READONLY. This part of the entry is case-
sensitive.
3. Enter a new line to add another entry.
4. Save the whitelist when it is complete.
Related Information
Data Sources and Tag Names on page 68


Debugging Whitelists
The entries in a whitelist are processed individually, so the whitelist will still be applied to the
OPC Server instance even if it contains an invalid entry.

If an invalid entry appears in a whitelist, it will be skipped during processing. The failure to
register this entry will be logged as an error message in the OPC Server log. The whitelist for
any OPC Server instance should be edited when errors related to the whitelist appear in the
OPC Server log.

The log contains information about the line number for all erroneous entries, in addition to the
reason processing failed. Processing of an entry can fail for several reasons: the point does not
exist, the table does not exist, the entry is improperly formatted, or the writable status is invalid.

The failure of all client attempts to read or write to a point will also be logged. If it is decided that
the OPC Client should be able to access or write to any point for which errors are frequently
being logged, the point can be added or modified in the whitelist for the OPC Server instance at
any time.
NOTE: Any time the whitelist is changed, the OPC Server must be restarted for changes to
take effect.

6.5 Security
Due to the design of the OPC standard, AOR (Area of Responsibility) security for reading data
is not possible with the OPC server, unless you employ a whitelist to achieve similar results.
Any OPC client that has permission to connect to the OPC server will be able to read all data
items within the RealTime database.

Care must be taken when choosing applications that are or will be connected to the OPC
server. Refer to the following links for DCOM security settings:

http://www.opcconnect.com/dcomcnfg.php

http://www.opcfoundation.org/WebUI/DownloadFile.aspx?CM=1&RI=23

http://www.gefanucautomation.com/opchub/opcdcom.asp

http://www.opcactivex.com/Support/DCOM_Config/dcom_config.html

http://www.kepware.com/Support_Center/FAQ_DCOM.html
WARNING: For security, all items that can be attached to by an OPC Client are, by default,
read-only. If OPC is used to write to any items, the following line must be in the
configuration file (<InstallDirectory>\bin\OPCDAserverConfig.txt):
option=READWRITE

NOTE: If you change OPCDAserverConfig.txt, you must restart RealTime for the changes to
take effect.

NOTE: If you create an OPCDAserverConfig.txt file, you should consider whether the file
should be distributed to other RealTime servers. If so, then edit the <InstallDirectory>
\config\RealTime.txt file and add an entry for OPCDAserverConfig.txt.

NOTE: This is an all-or-nothing setting for individual OPC servers. There is no way to set the
OPC read/write attribute on a per-client basis.

Although all data may be marked as writable, this setting does not necessarily reflect the actual
capability of a client. Unless you use HPDB as your writing method, writing data to the RealTime
database is done with full AOR and Windows security; the client application’s credentials and
authorization are checked before any write capability is allowed. Thus, an OPC datum may be
marked as writable, but the actual write operation can fail because of insufficient client security,
or because the database item does not allow writes.

Some OPC clients override the default DCOM security settings, and appear to the OPC server
as being partially or entirely anonymous. This behavior is completely determined by the client
application, and there is no method by which the OPC server can override this anonymity.
Normally, the OPC server will not allow these anonymous clients to write data into the RealTime
database, since their security permissions cannot be fully ascertained. If you wish to override all
security and allow all clients write permission to the RealTime database, then place the
following entry into the config file:

option=SECURITYOVERRIDE

NOTE: This entry only works if the READWRITE option is also set. It also causes writes to
occur as the dnaApp* user, not the user running the client. The implications are 1) the
client has the authority of the dnaApp* account and 2) logs record the dnaApp*
user as the one making the database changes.
NOTE: The Advosol DA2.05 client was written to be one of these partially anonymous clients,
and will therefore encounter security problems when writing data to RealTime. The
Advosol DA3.0 client does not override the security settings, so it is recommended that
you use the DA3.0 client.

6.6 OPC Timestamps


The timestamps of all OPC data default to the time(s) when the PubSub message containing
the data arrives. In the case of RealTime data, this may not be the timestamp of interest.

For example, if you attach to master.realtime.db.analog.fred.curval, you may be more interested
in the timestamp of the last time the analog curval internal field was changed. Telemetered data
records in the RealTime database contain a timestamp field called lastKrunchTime indicating
the time that the data was last processed. This allows the OPC client to read the
lastKrunchTime value separately; however, it is not guaranteed to be synchronized with the
current value (e.g., if the current value is quickly updated more than once, you could associate
the lastKrunchTime with the wrong current value).

For the current value fields (e.g., curval) and data quality fields (e.g. flag fields: alminh, alarm,
error, fg (color) flash, manl, msgtxt, offscan, severity, tag), if you wish the OPC Server to return
the time that the data was last processed rather than the time that PubSub received the data
update, then add the following line to the configuration <InstallDirectory>\bin
\OPCDAserverConfig.txt file: option=MAPTIMESTAMP.

6.6.1 Enabling OPC Timestamps


You can edit the OPCDAserverConfig.txt file to enable OPC timestamps.

Procedure

1. On the server running the OPC server, navigate to the <InstallDirectory>.


2. Open the OPCDAserverConfig.txt file.
3. Add the following line: option=MAPTIMESTAMP


4. Save and Close the file.


5. Restart the RealTime service.

6.7 OPC Data Quality


The Schneider Electric OPC server allows OPC clients to access RealTime data as well as data
that has been published through PubSub middleware. Data quality of the points is primarily
determined by the OPC server connectivity.

The OPC server accesses the PubSub middleware and not the SCADA PLCs directly.
Therefore, the data quality of the points is based on the OPC server connectivity via PubSub to
the RealTime database. This means that if the OPC server acquired the point value from the
RealTime database without error, the OPC data is of good quality.

Optionally, there are several telemetered points that can have their data quality determined
based on the actual data quality of the point. This is done by enabling the MAPDATAQUALITY
flag.

6.7.1 Enabling Data Quality


RealTime record fields can be added as items to the OPC client program connected to the OPC
server. You can configure the data quality to be returned as the quality of the connection or the
quality of the data value.

For any analog, status, rate, or tank telemetered field added as an item to the OPC client, you
can configure the OPC server to return the data quality as:

• The quality of the connection between the OASyS DNA OPC server and the PubSub service
for that point or

• The quality of the data value itself, as determined from the record's flag structure.

Once the OPC client receives the data and quality, it must display and/or process the quality.
CAUTION: If the connection between the OPC server and the PubSub service for a given point
failed, the data quality for that point is NOT-CONNECTED, regardless of the data
quality of the point. For example, if you delete or remove a point added as an item
to an OPC client, the system marks the quality as NOT-CONNECTED for that point.

If enabled, the system provides the actual data quality of the following telemetered records:

Table            Point
Analog           curval
Status           cursta
                 cursta@raw
Rate             curate
                 acccur
Tank             field.volume
                 field.num_inc
                 currentGO.volume
                 currentGO.num_inc
                 currentGS.volume
                 currentGS.num_inc
                 flevel
                 GOflow_rate
                 GSflow_rate
Multistate       inputCurRaw

OPC Data Quality Calculation


The OPC DA server calculates the data quality of each of the supported points, based on the
OPC Data Quality Truth table.

For all tables (Analog, Rate, Status, Tank, and Multistate):

• Flag.severity

• Flag.alarm
• Flag.alminh

• Flag.error

• Flag.offscan

• Flag.tag

• Flag.fresh

• Flag.manl

• Flag.msgtxt

• lastKrunchTime

Analog specific fields:

• Curval

Status specific fields:

• Cursta

Rate specific fields:

• Curate

• Accur

Tank specific fields:

• field.volume

• field.num_inc

• currentGO.volume

• currentGO.num_inc

• currentGS.volume

• currentGS.num_inc

• flevel


• GOflow_rate

• GSflow_rate

Multistate specific fields:

• inputCurRaw

Table 19 - OPC Data Quality Truth Table


SCADA Quality Data                       OPC Quality
flag.fresh       flag.manl               OASyS DNA Quality Enumeration
NO               NO                      LAST_USABLE
NO               YES                     LOCAL_OVERRIDE
YES              YES                     LOCAL_OVERRIDE
YES              NO                      GOOD

OPC Data Quality Configuration


The OPC server provides an optional data quality that reflects the quality of the individual data
points for specific fields in the RealTime database.

This mapping only supports data quality for the telemetered fields in Analog, Rate, Status and
Tank tables in the RealTime database. The supported fields are as follows:

Table 20 - Data Quality Mapping


Table                              Fields
Analog, Rate, Status, Tank,        Flag.fg@raw
and Multistate:                    Flag.severity@raw
                                   Flag.alarm@raw
                                   Flag.alminh@raw
                                   Flag.error@raw
                                   Flag.offscan@raw
                                   Flag.tag
                                   Flag.fresh@raw
                                   Flag.manl@raw
                                   Flag.msgtxt
                                   lastKrunchTime
Analog specific:                   Curval
Status specific:                   Cursta
                                   Cursta@raw
Rate specific:                     Curate
                                   Accur
Tank specific:                     field.volume
                                   field.num_inc
                                   currentGO.volume
                                   currentGO.num_inc
                                   currentGS.volume
                                   currentGS.num_inc
                                   flevel
                                   GOflow_rate
                                   GSflow_rate
Multistate specific:               inputCurRaw

Table 21 - OPC Data Quality Configuration


Code Value
00 LAST_USABLE
01 LOCAL_OVERRIDE
11 LOCAL_OVERRIDE
10 GOOD

Enabling Data Quality Mapping

You can edit the OPCDAserverConfig.txt file to enable data quality mapping.

Procedure

1. On the server running the OPC server, navigate to the appropriate <InstallDirectory>.
2. Open the OPCDAserverConfig.txt file.
3. Add the following line: option=MAPDATAQUALITY
4. Save and Close the file.
5. Restart the RealTime service.
NOTE: In the absence of the Data Quality flag, the OPC server defaults to providing the OPC/
PubSub connection quality.

6.8 Redundancy and RealTime Failovers


OPC does not have intrinsic support for redundant servers and failovers between them.
However, Schneider Electric’s OPC server is configured to run as a redundant pair if the
RealTime service is itself redundant.

OPC clients connect to the virtual RealTime hostname, so they always obtain a connection to
the hot RealTime OPC server. If the OPC client attempts to connect to the standby OPC server,
the standby server will refuse the connection. It is advisable for an OPC client to always connect
to the virtual RealTime IP address, as this automatically and correctly routes them to the OPC
server on the hot RealTime box. In the case of a RealTime failover, the OPC server on the failed
machine shuts down in order to ensure that OPC clients cannot keep a connection to a server
that is no longer active.


The OPC client is responsible for detecting any OPC connection failures and must then
reconnect and resubscribe to the desired points.

OASyS DNA supports change in point ownership (DistribuSyS). If the OPC client has a
connection to an OASyS system that does not own the point that the OPC client is reading, the
OPC client will receive the data that was replicated from the owning system to the non-owning
system.

If the OPC client writes a data value to a non-owned point and the OASyS DNA system has
configuration and control privilege for the non-owned point, one of the following will occur:

• The OASyS DNA BLT software will replicate the written value to the owned system and other
non-owned systems.

• The change will be made directly on the owning system and replicate the change out to all
non-owning systems (the decision on where to make the change and how to replicate it is
configured for each RealTime database field).

6.9 Client Browsing


The client browser sees all RealTime database records that can be accessed by the client.
Since many OPC browsing clients do not handle large amounts of data well, be careful when
browsing the RealTime database.

A default configuration of the OPC server supports browsing of the entire RealTime database,
not just records that have clients with current attachments or records that have been loaded into
the OPC server’s cache. The client browser sees all RealTime database records that are
capable of being accessed by the client.

When an instance of the OPC Server uses a whitelist, it will affect the client's ability to browse the
RealTime database. As opposed to the default configuration, an OPC client will only be able to
see the points that are whitelisted in the browser. The figure below demonstrates the effect that
whitelisting has on browsing.

Figure 27 - Browsing with a whitelist

The whitelist that was used by the OPC Server in the above figure contained only three points:
0analog, 0rate, and 1 from the remconnjoin table. There were other rate, analog, and
remconnjoin records in the tables, but they do not show up in the browser because the whitelist
has been applied. The RealTime database that was accessed by this example’s whitelisted
OPC Server instance also contained status records. The whitelist prevented the status records
from appearing in the browser. Any whitelist is applied to the OPC Server at startup or upon
restart, and will affect the client's ability to browse the RealTime database.

Although client browsing is useful for the configuration of small applications, you should be
careful when attempting to browse the RealTime database. Many OPC browsing clients do not
handle large amounts of data very well. Some projects have telemetered tables containing
thousands of records. Expanding a table with this many records in the browser can cause a
timeout or delay for the browser. It is best to type the paths for records in these situations, rather
than browsing to the point.

CAUTION: If your client has a browse all option, you must ensure that this option is turned off
prior to connecting to the Schneider Electric OPC server.
Related Information
Data Sources and Tag Names on page 68
OPC Server whitelists on page 71

6.10 Performance and Optimization


Although the OPC server is fully capable out-of-the-box, an OPC client can cause performance
problems when attaching to a large number of items from the RealTime database. Using entries
in the OPC server’s configuration file can eliminate this issue.

The OPC server establishes a separate PubSub subscription for every item a client adds to
their attach list. Once the number of items gets large (greater than 5000) the process
responsible for tracking and publishing RealTime data begins to use too much memory and
CPU in order to track all the subscriptions.

This problem can be eliminated by using some entries in the OPC server’s configuration file. If
you know that your application is attaching to the same field in a large number of RealTime
records, you can inform the OPC server when it first starts. The OPC server can then use star
subscriptions to subscribe to an entire collection of data with a small number of subscriptions.
This results in a largely reduced load on the RealTime server with no cost to the client.

As an example, let us assume that your client wishes to acquire data from the following:

master.realtime.db.analog.*.curval --> curval field for all analog records

master.realtime.db.analog.*.timestamp --> timestamp field for all analog records

master.realtime.db.analog.*.flag.fresh --> flag.fresh field for all analog records

master.realtime.db.status.*.cursta@raw --> numeric cursta value for all status records

master.realtime.db.status.*.flag.msgtxt --> string status value (open, close, etc)


To reduce the load on the RealTime server, edit the OPCDAserverConfig.txt file and add
the following lines:

subscribe=realtime.db.analog.|*|.curval

subscribe=realtime.db.analog.|*|.timestamp

subscribe=realtime.db.analog.|*|.flag.fresh

subscribe=realtime.db.status.|*|.cursta@raw

subscribe=realtime.db.status.|*|.flag.msgtxt

subscribe=realtime.db.status.|*|.flag.fresh

The OPC server establishes optimized subscriptions to this data when it starts, and the data is
pre-loaded into the server’s cache when the client connects to the server.
You can also optimize the performance of the OPC server by careful selection of the RealTime
fields to which the client attaches. For example, realtime.db.status.fred.cursta is
returned to the client as a string value, such as, open, close, on, off. This requires a substantial
amount of effort by PubSub to read the raw enumeration value and then look up the translated
string. A faster alternative is to attach to the cursta@raw field if you want the raw numeric value,
and if you desire the current value as a string, attach to status.flag.msgtxt.

Always use the @raw variations of any field where possible, such as flag.fresh@raw instead of
flag.fresh. It is always faster to obtain the boolean value (1 or 0) for the flag.fresh field as
opposed to obtaining the string yes or no.


7 Protocols
Schneider Electric supports various communication protocols such as Modbus and OPC. Each
protocol contains unique fields that define the communication session, including data location
within the remote memory and data access.

Selection of the correct protocol is determined by the type of data communication required for
each remote device. This module provides an overall perspective for protocol-specific
configuration of remote devices and details on implementing the Modbus and OPC protocols.

7.1 Typical Configuration Procedure


The process of adding a new remote or changing the location of an existing remote is a six-step
process.

The following steps should be followed to add or change the location of a remote:

Procedure

1. Create an Omnicomm process (if necessary).


2. Configure the connection.
3. Configure the remote.
4. Configure the protocol.
5. Configure RealTime records (as needed).
6. Configure special functions.
Refer to the links below for instructions on completing these steps.

7.1.1 Create an Omnicomm Process


The Omnicomm process is created and configured through the Omnicomm Row Edit dialog
box.

Procedure

1. Open the Omnicomm Row Edit or Omnicomm Row Details dialog box.
2. Configure the applicable fields and save the new record.
For details on how to configure the different fields, refer to the Omnicomm Records
documentation.
Omnicomm Records
Each Omnicomm record represents an Omnicomm process. Omnicomm records are located in
the Omnicomm table and are created and edited using the Omnicomm Row Edit dialog box.

A single Omnicomm process runs for each Omnicomm record. Multiple Omnicomm processes are
possible, but this is typically used for development purposes where isolated environments are
required to facilitate parallel testing among many staff members. One Omnicomm process is
usually enough.

Configure the fields on the Omnicomm Row Edit dialog box to edit or create a new Omnicomm
record. Refer to the table below for descriptions of how to configure each field.


Figure 28 - Omnicomm Row Edit dialog box

Table 22 - Items on the Omnicomm Row Edit dialog box


Item Description
Name Type the name of the omnicomm record, or use the
ellipsis button (...) to select the name of an existing
omnicomm record.
All table records must have a unique name to distinguish
them from other records in the Omnicomm table. A record
name can be up to 32 characters in length and can
contain any alphanumeric or punctuation character with
the exception of the following: , . ; : / ‘ “ [ ] ( ) @ %

Names cannot be purely numeric (e.g., 12345) since numeric names
would make it impossible to differentiate between record names and
record slots.

If you enter a purely numeric name or a name containing an
unacceptable character, an error message is generated.
Description Type a brief description of the record. The description can
be up to 47 characters in length.
The Name and Description will appear in the result
tables for the record on the ezXOS interface.
Group Click the ellipsis button (...) to open the Group Select
dialog box, and select the group that corresponds to this
record.
Message Click the ellipsis button (...) to open the Message Select
dialog box, and select the message record that contains
the appropriate messages for this record.
PI Historical Click the PI Historical button to open the PI table and
edit the record for historical purposes.


7.1.2 Configure the Connection


The Connection table defines communication characteristics such as communication line type
and data rate. Connection records are configured through the Main tab in the Connection Row
Edit dialog box.

Procedure

1. Open the Connection Row Edit or Connection Row Details dialog box.
2. Configure the applicable fields in the Main tab, and save the new record.
For details on how to configure the different fields, refer to the Main Tab Connection Row
Edit documentation.
Main Tab Connection Row Edit
Each communication record defines a connection that is used to communicate with a remote
device. Basic configurations of a connection record can be set and edited in the Main tab on the
Connection Row Edit dialog box.

Configure the fields on the Main tab in the Connection Row Edit dialog box to edit or create a
new connection record. Refer to the table below for descriptions of how to configure each field.

Figure 29 - Main tab in the Connection Row Edit dialog box


Table 23 - Items on the Main tab in the Connection Row Edit dialog box
Item Description
Name Type the name of the connection record, or use the
ellipsis button (...) to select the name of an existing
connection record.
All table records must have a unique name to distinguish
them from other records in the Connection table. A record
name can be up to 32 characters in length and can
contain any alphanumeric or punctuation character with
the exception of the following: , . ; : / ‘ “ [ ] ( ) @ %

Names cannot be purely numeric (e.g., 12345) since numeric names
would make it impossible to differentiate between record names and
record slots.

If you enter a purely numeric name or a name containing an
unacceptable character, an error message is generated.
Description Type a brief description of the record. The description can
be up to 47 characters in length.
The Name and Description will appear in the result
tables for the record on the ezXOS interface.
Group Click the ellipsis button (...) to open the Group select
dialog box, and select the group that corresponds to this
record.
Dataset Click the ellipsis button (...) to open the Dataset select
dialog box, and select the privileges associated with the
record. Privileges are assigned for each system and
mode.
Once a record is assigned to a dataset value, the dataset
can only be changed to a value for which the system has
a privilege record. A record’s dataset cannot be changed
in such a way that the system can no longer access the record.
Message Click the ellipsis button (...) to open the Message select
dialog box, and select the message record that contains
the appropriate messages for this record.
Omnicomm Process Click the ellipsis button (...) to open the Omnicomm
select dialog box, and select the name of the Omnicomm
process associated with the connection.
Connection Protocol Click the drop-down arrow, and select the name of the
protocol driver used with this connection.
Circuit Click the ellipsis button (...) to open the Circuit select
dialog box, and select the name of the circuit to which this
connection belongs. If no circuit is selected, Omnicomm
assumes the connection is independent of all other
circuits (e.g., point-to-point IP connection).
Issue Integrity Update Automatically Select the check box to request an integrity update
whenever a connection is established.

Enable Connection FEP Select the check box to enable front-end processing.
This only applies if a front-end processing driver has been
developed for the project.
Protocol Specific Configuration Type additional parameters that may be required for the
protocol driver selected in the Connection Protocol field.
Historical... Click the Historical... button to access the Collect Table
Edit dialog box, and configure records for historical data
collection.
PI Historical... Click the PI... button to open the PI Template Table Edit
dialog box, and configure PI records.

7.1.3 Configure the Remote


The Remote table defines which remotes are involved in the communication process. A
remote’s protocol, polling address, and internal software configuration are configured through
the Main tab in the Remote Row Edit dialog box.

One remote record must be configured for each remote device.

Procedure

1. Open the Remote Row Edit or Remote Row Details dialog box.
2. Configure the applicable fields in the Main tab, and save the new record.
For details on how to configure the different fields, refer to the Main Tab Remote Row Edit
documentation.
Automatic and manual events are scheduled through the Remote Schedule tab in the Remote
Row Edit dialog box. The remote scheduler handles various commands to/from the remotes and
ensures the remotes are connected and processing requests efficiently. The commands
available for each remote depend on the protocol and are limited to commands that do not
transfer parameters (e.g., digital command) to/from the remote. For more information, refer to
the Remote Scheduler tab and Remconnjoin Table documentation.


Main tab Remote Row Edit


The Main tab on the Remote Row Edit dialog box is used to configure the basic properties of
the record such as group, dataset, and communications details.

Figure 30 - Main Tab in the Remote Row Edit Dialog Box

The Main tab on the Remote Row Edit dialog box also gives the user access to the historical
database via the Historical and PI Historical buttons. Refer to the table below for field
descriptions and instructions on how to configure the editable fields.

Table 24 - Items on the Main tab in the Remote Row Edit dialog box
Item Description
Name Click the ellipsis button (...) and select the name of the
Remote record from the Remote Select dialog box.
Description Type a description of the record. The Description can be
up to 47 characters in length. This field is for information
purposes only and can be used to describe the record, its
association with other points, or any other textual
information. The Name and Description configured for a
record appear in the summary tables in ezXOS.
Protocol Record Edit... Click the button to edit the remote protocol for the record.
For details on editing the protocol, refer to “Using the
Remote to Protocol Form”.
Group Click the ellipsis button (...) and select the name of the
group this record belongs to from the Group Select
dialog box.
Dataset Click the ellipsis button (...) and select the name of the
dataset you want to associate with this record from the
Dataset Select dialog box.
A dataset contains privileges assigned for each system
and mode. Once a record is assigned to a data set value,
the dataset can only be changed to a value for which the
system has a privilege record. A record’s dataset can not
be changed in such a way that the system can no longer
access the record.
Message Click the ellipsis button (...) and select the message
associated with the record from the Message Select
dialog box.
The default message for a remote is RTU.
Priority Display Specify the graphic you want to be associated with the
priority display button on an ezXOS control panel.
Protocol Click the drop-down arrow, and select the name of the
protocol used for communication between this remote and
the host. The drop-down menu lists all of the available
protocols assigned for your project.
NOTE: The same protocol must be defined at both ends
of the communication process (the host and the
remote) to complete the communication cycle.
Address Type the numerical address of the remote. The address is
an integer (e.g., 1, 2, 3, etc.) assigned to the remote
through its field unit configuration.
NOTE: A setup referred to as multi-dropping allows a
number of remotes to share the same
connection. Remotes with the same protocol
must have unique addresses to be multi-
dropped. Remotes with different protocols can
also be multi-dropped.
Enable Communications Failure Select the check box to enable Omnicomm to attempt
communication over a different connection when it fails to
communicate with a remote (failover). Omnicomm will use
the cheapest available connection for this remote as
determined by the Cost Factor.

Historical... Click the button to open the Collect Table Edit dialog box
that can be used to configure historical data collection.
PI Historical... Click the button to open the PI Table Edit dialog box and
begin configuring PI records.

7.1.4 Configure the Protocol


The remote protocol for any remote record can be edited and configured by using the Protocol
Record Edit button.

Procedure

1. On the Main tab in the Remote Row Edit dialog box, click the Protocol Record Edit...
button.
Step Result: The Remote to Protocol Form appears briefly followed by the Row Edit
dialog box for the protocol related to the record (e.g., if the Protocol field in the Remote
Row Edit dialog box is set to Modbus, the Modbus Row Edit dialog box will appear).
2. Edit the protocol-specific Row Edit dialog box as desired, and save the changes.

NOTE: The Remote to Protocol Form displays the name and protocol associated with the
record. It cannot be edited.

7.1.5 Configure Special Functions


Depending on the selected protocol, your system may support special uploads, downloads or
special commands (e.g., timesync). These functions usually require additional configuration of
the remote record or other tables.

7.1.6 View and Control the Remote


Once the remote is configured, it can be controlled and viewed through the Remote Control
dialog box in ezXOS. This dialog box contains several commands (e.g., poll now, integrity scan,
time synchronization) that can be manually executed.
NOTE: The controls allowable in the Remote Control dialog box are specific to the protocols
installed on your system.

7.2 Modbus (Generic)


The Modbus communication protocol polls for data from several types of remotes and sends digital and
setpoint commands. The Modbus protocol is based on master/slave relationships, where your
system is the master station, and the remotes are the slaves.

Tables at the host are configured to match the register layout of each remote. This allows
communication with a variety of remotes: Remote Terminal Units (RTUs), Programmable Logic
Controllers (PLCs) or Flow Computers (FCs). The Modbus protocol is a Query/Response
(normal operation) protocol that controls the query and response cycle between the master
station and the remotes.

The master station:


• initiates all communication with the remotes

• polls the remotes on the communication line for data on a round robin basis

NOTE: A remote may have several blocks of registers defined. Each block or range of
registers is polled with its own query. To allow communication with a variety of
configured remote registers, database tables at the master station can be configured
to match the register layout of each remote.

• sends setpoint commands, analog and digital data to the remotes on an as needed basis

NOTE: Commands take precedence over polling; therefore, once the command is issued, it is
sent before any other remotes on the communication line are polled.

The system supports a variety of data formats for the raw values received and sent to the
remotes. The remote that is being polled determines the location and type of data within the
remote registers. This varies between the brands and models of equipment found in your
system, as described in later sections of the documentation.

7.2.1 Modbus Table


The Modbus records define the protocol that controls the query and response cycle between the
master station and the remotes. Modbus records are located in the Modbus table and are
created and edited using the Modbus Row Edit dialog box.

Figure 31 - Modbus Table Edit dialog box

NOTE: The remote is configured through the Main tab in the Remote Row Edit dialog box.
Protocol and Address are protocol-specific fields.


Modbus Row Edit: Main Tab


The Main tab on the Modbus Row Edit dialog box is used to set up the maximum transmission
values of the record. Configure the fields on the Main tab to edit or create a new Modbus
record.

Refer to the table below for field descriptions and instructions on how to configure the editable
fields.

NOTE: Maximum transmission size refers to the maximum size of data packets transmitted by
the remote.

Figure 32 - Main tab in the Modbus Row Edit dialog box

Table 25 - Items on the Main tab in the Modbus Row Edit dialog box
Item Description
Name This field displays the name of the Modbus record.
Group This field indicates the group that corresponds to this
record.
Dataset This field indicates the dataset to which the record is
assigned. Datasets are used to associate the record with
the privileges assigned for each system mode. Once a
record is assigned to a dataset value, the dataset can
only be changed to a value for which the system has a
privilege record.

PI Historical Click the PI Historical button to open the PI table and
edit the record for historical purposes.
Maximum register value Type the maximum transmission size of the registers (16-
bit and 32-bit points). The default value for registers
containing 16-bit words is 125.
Maximum coil value Type the maximum transmission size of the coils (1-bit
points). The default value for coils is 2000.
NOTE: While most remotes operate near the default
values, you can optimize the transmission size
according to the communication conditions. For
weak lines, reduce the transmission size to
improve data throughput. Smaller packets
reduce the risk of data loss due to
intermittent line impairments such as impulse
noise or a poor signal-to-noise ratio.

Modbus Row Edit: Poll Table Tab


The Poll Table tab in the Modbus Row Details dialog box defines function codes, register
ranges, and poll information.

The Poll Table allows for a maximum of 10 different poll ranges to be defined for a Modbus
remote. Each range is polled with its own query.

Figure 33 - Poll Table Tab in the Modbus Row Edit Dialog Box


Table 26 - Items on the Poll Table tab in the Modbus Row Edit dialog box
Item Description
Name This field displays the name of the Modbus record.
Function Code Click the drop-down arrow and select the desired polling
function from the list. For more information, refer to the
Function Codes documentation.
NOTE: You can only select polling function codes (fc 1
to 4).
Freq Type the frequency that the master station will poll this
range. The frequency refers to the number of poll cycles
that elapse before the range is polled.
0 - The range is not polled (disabled).

1 - The range is polled every poll cycle.

3 - The range is polled every 3rd poll cycle.


Integ Select the check box to poll the poll range during an
integrity update.
Start and End Type the start address and end address of the poll range.
You can define non-overlapping or overlapping poll
ranges. For more information refer to the Modbus
Register Ranges documentation.
NOTE: Bit numbers are not specified in the poll range
for status values contained within holding
registers as the entire register is polled.
Bits Click the drop-down arrow and select the number of bits.
The options are:
• Unknown

• 1 bit

• 16 bit
When a larger value is required (e.g., 32 bit), the value is
returned in two sequential 16 bit words (long words).

Related Information
Function Codes on page 96
Modbus Register Ranges on page 97

Modbus Table Details


The Modbus table contains records on the possible transmission size and polling details. It
contains several internal fields.

Table 27 - Fields in the RealTime Modbus table
Internal Field Name Data Type Description
accessFlag pntname This field is used by ezXOS to
determine whether or not the table
is being modified by someone

through ADE. If so, no one else will
be given access to modify it.
dataset string This field displays the dataset
associated with the Modbus
record.
group string This field displays the group of
responsibility the Modbus record
belongs to.
maxcoil unsigned_short This field indicates the maximum
number of coils in one transfer.
maxreg unsigned_short This field indicates the maximum
number of registers in one transfer.
name string This field displays the name of the
Modbus remote.
ptnum integer This field displays the slot number
in the database.
range.[0-9].bpreg string This field indicates the number of
bits per register.
range.[0-9].ileave unsigned_char This field displays the polling
interleaving value.
range.[0-9].intg_updte string This field displays part of integrity
update.
range.[0-9].mfc string This field displays the Modbus
function code.
range.[0-9].r_max unsigned_short This field indicates the range
maximum.
range.[0-9].r_min unsigned_short This field indicates the range
minimum.
strTimeReg unsigned_short This field displays the starting
register for time.
x_type string This field indicates the
transmission type.

7.2.2 Commands
The Modbus protocol supports the Poll Command, Digital Command, and Set Point Command.

Commands take precedence over regular polling. If a command fails, the operator is notified
and given the reason for the failure.
Poll Command
The master station initiates all “polls for data” commands to the remotes.

The Modbus protocol:


• supports both high and low instrument failure detection

• uses a poll cycle to interrogate the remote on a communications line for the contents of the
desired registers

• allows multiple blocks of contiguous remote register ranges to be configured for polling from
each remote

• allows the frequency with which each block of contiguous remote registers is queried (polled)
to be configured

The configured number determines the number of poll cycles that must expire between
subsequent polls for this range (see the sketch below)

• allows the register layout of each remote to be mapped out

This mapping identifies registers that are polled and/or commanded.

NOTE: When the remote sends numeric data to the master station, the data is converted from
the remote native format to the host native format and stored in the master station.
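
The per-range poll frequency can be read as a simple modulo rule on the poll-cycle counter. The following is a minimal illustrative sketch in Python (not product code; the function name and the modulo interpretation are assumptions for the example), showing how a Freq value of 0, 1, or 3 behaves:

# Illustrative sketch only: one way a per-range Freq value could gate polling.
# freq == 0 -> the range is disabled; freq == N -> the range is polled every Nth poll cycle.
def should_poll(freq, cycle_number):
    if freq == 0:
        return False
    return cycle_number % freq == 0

# Example: with freq = 3, the range is polled on cycles 3, 6, 9, ...
print(should_poll(3, 6))   # True
print(should_poll(3, 7))   # False
print(should_poll(0, 7))   # False (disabled)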
Digital Command
The master station sends a digital command (e.g., on/off) to the remote. The operator initiates
the digital command through the Remote Control dialog box.
NOTE: A digital command must be initiated by the operator and cannot be scheduled.
Set Point Command
The master station sends a set point command to an analog record. The operator initiates a set
point command through the Remote Control dialog box.
WARNING: A set point command must be initiated by the operator and cannot be scheduled.
Commands and Data Processing
The Modbus protocol supports a variety of commands and records.

The Modbus protocol supports:

• setpoint commands to 16 and 32 bit registers and returned values from 16 and 32 bit
registers

• latched/unlatched digital commands

• reading/updating status records via holding registers and/or coils


• 1 bit and 2 bit status input records

When performing a digital output command on a specific bit or bits within a holding register, the
remaining bits within the register are set to the current status of applicable status records at the
same absolute address. If all the bits within the register do not have a corresponding status
record defined, the value of these undefined bits defaults to 0.
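
The following minimal sketch in Python (illustrative only; the function and variable names are hypothetical, not part of the product) shows the merge behaviour described above: the commanded bit is combined with the current values of status records defined in the same holding register, and bits with no corresponding status record default to 0.

# Illustrative sketch: build the 16-bit value written by a digital command
# to one bit of a holding register. current_status maps bit number -> current
# status value (0 or 1) for bits that have status records; all other bits are 0.
def build_register_value(command_bit, command_value, current_status):
    value = 0
    for bit, state in current_status.items():
        if state:
            value |= 1 << bit
    if command_value:
        value |= 1 << command_bit      # commanded bit ON
    else:
        value &= ~(1 << command_bit)   # commanded bit OFF
    return value

# Example: command bit 3 ON while status records show bits 0 and 5 currently ON.
print(hex(build_register_value(3, 1, {0: 1, 5: 1, 7: 0})))  # 0x29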

NOTE: When you issue a command for a Modbus record in any telemetered table, the record
must be within the poll range defined in the Poll Table tab in the Modbus Row Details
dialog box. If it is outside the poll range, the command will not succeed.
Related Information
Modbus Row Edit: Poll Table Tab on page 92


7.2.3 Data Types


There are four Modbus data types.

Table 28 - Modbus Data Types


Data Type Description Access

Discrete Input single bit read only

Coil single bit read/write

Input Register 16-bit word read only

Holding Register 16-bit word read/write

7.2.4 Function Codes


There are seven different Modbus function codes.

Function Code (decimal)  Action  Description  Command
01 - Read coils: This function code reads the status of discrete output coils. Command: Poll
02 - Read discrete inputs: This function code reads the status of discrete contact inputs.
03 - Read holding registers: This function code reads the analog output holding registers.
04 - Read input registers: This function code reads the analog input registers.
05 - Write single coil: This function code forces a single output coil. Command: Digital
06 - Write single register: This function code presets a single holding register. Command: Set Point / Digital
07 - Read exception status

NOTE: The ranges defined for a particular remote include ranges that are polled and ranges for
setpoint and digital commands.

7.2.5 Point Types


There are five Modbus point types.

Table 30 - Modbus Point Type Formats


Name                Type           Bit  Description
short word          unsign 16 int  16   16 bits for data
signed short word   signed 16 int  16   15 bits for data, 1 bit for sign
long word           unsign 32 int  32   32 bits for data
signed long word    signed 32 int  32   31 bits for data, 1 bit for sign
float               float          32   32-bit IEEE Single-Precision floating point format

NOTE: There are 16 bits per register. For all point types, the bytes are transported with the
most significant byte first.
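
Because each register carries 16 bits with the most significant byte first, and 32-bit point types are returned in two sequential 16-bit words, a 32-bit value can be reassembled from the two registers. The sketch below is Python and illustrative only; the assumption that the high-order word is received first depends on the remote.

import struct

# Illustrative sketch: reassemble a 32-bit point from two 16-bit registers.
# Assumes the high-order word arrives first; actual word order is remote-specific.
def registers_to_long(high_word, low_word, signed=False):
    raw = (high_word << 16) | low_word
    if signed and raw & 0x80000000:
        raw -= 1 << 32
    return raw

def registers_to_float(high_word, low_word):
    # IEEE single-precision float packed into two big-endian 16-bit words.
    return struct.unpack(">f", struct.pack(">HH", high_word, low_word))[0]

print(registers_to_long(0x0001, 0x0000))         # 65536
print(registers_to_long(0xFFFF, 0xFFFF, True))   # -1
print(registers_to_float(0x41C8, 0x0000))        # 25.0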

7.2.6 Modbus Register Ranges


There are several ways of organizing data within the remotes. Two of the most common ways of
organizing data are to use non-overlapping register ranges or use overlapping register ranges.
Non-Overlapping Register Ranges
In non-overlapping register ranges, each register type (holding registers, input registers, coils,
or input status) has a unique address range, so there is no ambiguity about the register type
based on the register address.

The register address alone is enough to determine the correct function code used to read the
register type, which is then defined as a poll range in the Modbus configuration.

For example, coils may use addresses 0 to 1000, and holding registers may use addresses
4000 and up. An IOspec value of 20 would refer to a coil, and an IOspec value of 4020 would
refer to a holding register. This will be true as long as 20 falls into a defined polling range of
coils using function code 1, and 4020 falls into a defined polling range of holding registers using
function code 3. The poll ranges are configured through the Poll Table tab in the Modbus Row
Details dialog box.
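
A minimal sketch in Python (illustrative only; the helper name is hypothetical and the range values simply mirror the example above, with an arbitrary upper bound for the holding registers) of how a bare register address resolves to a function code when the configured poll ranges do not overlap:

# Illustrative sketch: resolve the read function code from a bare register
# address when poll ranges do not overlap. Ranges are (start, end, function_code).
POLL_RANGES = [
    (0, 1000, 1),      # coils, read with function code 1
    (4000, 4999, 3),   # holding registers (upper bound arbitrary for the example)
]

def function_code_for(address):
    for start, end, fc in POLL_RANGES:
        if start <= address <= end:
            return fc
    raise ValueError("address %d is outside every configured poll range" % address)

print(function_code_for(20))    # 1 -> coil
print(function_code_for(4020))  # 3 -> holding register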
Related Information
Modbus Row Edit: Poll Table Tab on page 92

Overlapping Register Ranges


In overlapping register ranges, each register type uses the same range of values. For example,
coils may use addresses 0 and up, and registers could also use addresses 0 and up.

The IOspec itself does not disambiguate the register type, so the type is resolved by adding a prefix
to the register address. The prefix is in the form of "FCx:", where x is the function
code used to read the given register type (1 for reading coils, 2 for reading input status, 3 for
reading holding registers, etc). Refer to the Function Codes documentation for descriptions of
each function code.

For example, FC3:4020 refers to holding register address 4020. FC1:4020 refers to coil address
4020. Note that the prefix is only a symbolic means of identifying the register type, and the read
function code was chosen for this purpose. One would still use the read function code even if a
register is intended for output (i.e., writing) only.
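
A minimal sketch in Python (illustrative only; the helper name is hypothetical) of splitting an IOspec of the form FCx:address into its read function code and register address, as used with overlapping ranges:

# Illustrative sketch: parse an "FCx:address" IOspec used with overlapping
# register ranges. Returns (function_code, address).
def parse_iospec(iospec):
    prefix, _, address = iospec.partition(":")
    if not address or not prefix.upper().startswith("FC"):
        raise ValueError("not an FCx:address IOspec: %r" % iospec)
    return int(prefix[2:]), int(address)

print(parse_iospec("FC3:4020"))  # (3, 4020) -> holding register 4020
print(parse_iospec("FC1:4020"))  # (1, 4020) -> coil 4020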
Related Information
Function Codes on page 96


7.2.7 RealTime Data Record Configuration


The RealTime analog, status, and rate records are configured through the Advanced Database
Editor (ADE). Each of these records contain protocol-specific fields that should be configured
according to the protocol you are using.
Modbus Analog Record Configuration
The analog input/output records are configured through the Input and Output tabs on the
Analog Row Edit dialog box.
The protocol-specific fields on the Input tab are Point Type and Input Coordinate. The
protocol-specific fields on the Output tab are Output Type and Output Coordinates.

Table 31 - Protocol-specific fields on the Analog Row Edit dialog box


Field Description
Point Type/Output Type Use the drop-down to select the appropriate data type.
Modbus supports the following types:

• unsign 16 int

• signed 16 int

• unsign 32 int

• signed 32 int

• float
Input Coordinates/Output Coordinates  Type the address that indicates where the data is mapped
within the remote. The address format is fcn:xxx, where xxx is the
register address and fcn is the desired function code.
NOTE: For non-overlapping registers, the poll range is unique
to each function code/data type. Therefore, you do not
need to specify the function code in the input/output
coordinates. The coordinate format is xxx, where xxx is
the register address.

Modbus Analog Record Configuration Example

Table 32 - Modbus Analog Input/Output Configuration Example


Example 1 (non-overlapping registers): Point Type (input) = unsign 16 bit; Input Coordinate = 400
Example 2 (overlapping registers): Point Type (input) = float; Input Coordinate = fc4:63

Example 1: The input coordinate points to a 16-bit register, polled with function code 4 against register 400.
The function code is 4 since the address 400 falls within the configured function code 4 poll range (384 to
511). The data in that register is interpreted as a 16-bit binary value.

Example 2: The input coordinate points to a floating-point value, function code 4, configured to poll register
63.


Related Information
Function Codes on page 96
Point Types on page 96

Modbus Status Record Configuration


The status input/output records are configured through the Input and Output tabs on the
Status Row Edit dialog box.

The protocol-specific fields on the Status Row Edit dialog box’s Input tab are described in the
following table:

Table 33 - Protocol-specific fields in the Input tab on the Status Row Edit dialog box
Field Description
Number of Input Bits Type 1 or 2.
Bit in RTU status Check Bit in RTU Status to obtain the status bit (in the Bit
Number field) from the RTU status word.
NOTE: The Bit in RTU Status / Bit Number are only
applicable if fc7 (read exception status) was used to
poll this data.
Coordinates Type the addresses that indicate where the data is mapped
within the remote.
NOTE: The coordinate depends on whether the data is read
from a coil (fc1) or from a holding register (fc3). If data
is read from a coil, the data is 1 bit. If data is read from
a holding register, the data is 16 bits and you need to
specify the Bit Number within the holding register.
Bit Number Type the bit number within the holding register or within the
remote status word.
NOTE: Coil reads always use bit number 0.
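
The coordinate rules above can be summarised in a minimal sketch in Python (illustrative only; the helper name is hypothetical): a coil read (fc 1) already delivers the bit, while a holding-register read (fc 3) delivers 16 bits and the configured Bit Number selects one of them.

# Illustrative sketch: extract a status value from raw polled Modbus data.
def status_bit(raw_value, function_code, bit_number=0):
    if function_code == 1:          # coil read: the value itself is the bit
        return raw_value & 0x1
    return (raw_value >> bit_number) & 0x1   # holding register: pick one bit

print(status_bit(1, 1))            # coil -> 1
print(status_bit(0x0040, 3, 6))    # bit 6 of a 16-bit holding register -> 1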

Modbus Status Record Output Configuration

The status output records are configured through the Output tab in the Status Row Edit dialog
box. The protocol-specific fields are described in the following table

Table 34 - Protocol-specific fields in the Output tab on the Status Row Edit dialog box
Field Description
Output Type Select the type of output command. The Modbus protocol
supports Latched or Momentary.
Coordinates Type the addresses that indicate where the data is mapped
within the remote.

Related Information
Function Codes on page 96
Point Types on page 96


Modbus Rate Record Configuration


Rate records can be pulse input and/or analog input. The rate/pulse input records are
configured through the Input tab in the Rate Row Edit dialog box.

The fields that are protocol-specific are Point Type and Input Coordinates. These are identical
to the fields in the Input tab on the Analog Row Edit dialog box.

NOTE: If rate is both analog and pulse, both values must be retrieved in a single poll message
in order to prevent inconsistent behavior of the rate record.
Related Information
Modbus Analog Record Configuration on page 98

7.3 OPC
To use the OPC server, you must implement and configure communication between OASyS
DNA and the OPC server.

The OPC interface uses the standard Omnicomm protocol infrastructure to leverage the existing
baseline functionality for krunching data, alarming, etc. Since a typical Omnicomm protocol is a
one-thread process and uses native protocol rather than COM or DCOM, a front-end processor
(OPC client process) is required to facilitate communication between the SCADA host and the
OPC server. A visual representation of this is shown in the model below.

Figure 34 - OPC Communication Model

• The OPC Client process communicates with the OPC server via COM or DCOM.

• Omnicomm communicates with the OPC Client process using OPC Client process protocol
via TCP/IP.
Related Information
OPC Data Access Server on page 65


7.3.1 OPC table


OPC records are stored in the OPC table and are created and edited using the OPC Row Edit
dialog box.

Figure 35 - OPC Table Edit dialog box

The OPC remote information is configured through the Main tab in the Remote Row Edit dialog box.
Protocol and Address are the protocol-specific fields in the Remote Row Edit dialog box.
Main tab OPC Row Edit
The Main tab on the OPC Row Edit dialog box is used to configure the OPC client frequency
time and enter the OPC server’s name and host name.

Refer to the table below for descriptions and instructions on how to configure the tab.


Figure 36 - Main tab in the OPC Row Edit dialog box

Table 35 - Items on the Main tab in the OPC Row Edit dialog box
Item Description
Name This field displays the name of the OPC record.
Group This field indicates the group to which this record belongs.
Dataset This field indicates the dataset to which the record is assigned.
Datasets are used to associate the record with privileges assigned
to each system mode. Once a record is assigned to a dataset
value, the dataset can only be changed to a value for which the
system has a privilege record.
Host Type the host name of the machine where the OPC server is
running. If the OPC Client is on the same host as the OPC server,
the host value should not be assigned.
Server Type the OPC server name. This should be the name that is
registered in COM+ on the host machine.
Heartbeat Type the minimum frequency time for the OPC Client to send the
heartbeat message to the SCADA host. The heartbeat message is
only required if there is no other data to be sent. If the SCADA host
does not receive any valid message within 2.5 heartbeat times, the
connection is disconnected and an alarm is generated.
NOTE: The heartbeat message is used to determine the health
state of the OPC Client application.
NOTE: If Heartbeat is set to 0, the OPC Client does not send a
heartbeat message.
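
The 2.5-heartbeat rule above can be expressed as a small check. The sketch below is Python and illustrative only (the function name is hypothetical, and treating a Heartbeat of 0 as "no heartbeat-based timeout" is an assumption):

import time

# Illustrative sketch: decide whether an OPC Client connection is stale
# using the 2.5 x heartbeat rule described above.
def connection_timed_out(last_message_time, heartbeat_seconds, now=None):
    if heartbeat_seconds == 0:
        return False                  # assumed: no heartbeat, no heartbeat timeout
    if now is None:
        now = time.time()
    return (now - last_message_time) > 2.5 * heartbeat_seconds

# Example: a 10 s heartbeat and 26 s of silence exceeds 2.5 x 10 s.
print(connection_timed_out(1000.0, 10.0, now=1026.0))  # True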

NOTE: Each enabled group is defined on an OPC Server, but only groups that are set up as
active will send data back to the SCADA host.

OPC Group Definition tab OPC Row Edit


The OPC Group Definition tab on the OPC Row Edit dialog box is used to set up OPC.

Refer to the table below for descriptions and instructions on how to configure the tab.

Figure 37 - OPC Group Definition tab in the OPC Row Edit dialog box

Table 36 - Items on the OPC Group Definitions tab in the OPC Row Edit dialog box
Field/Check box Description
Enable Select the check box to enable the group.
Active Select the check box to activate the group on the OPC server.

Update Rate This field displays the update rate for a group on the OPC server.
The update rate time is expressed in milliseconds.
Integ This check box indicates whether or not the OPC server is asked
for fresh group data when a new connection is established.

NOTE: Each enabled group is defined on an OPC server, but only groups that are set as
active send data back to the SCADA host.

7.3.2 Commands
The OPC client protocol supports the poll, digital and set point commands.

The following table lists the commands the OPC client protocol supports:

Table 37 - Commands supported by the OPC client protocol


Command Description
Poll Command This command triggers the integrity update command.
Digital Command For details on this command, refer to the Digital Command
documentation.
Set Point Command For details on this command, refer to the Set Point Command
documentation.
NOTE: The operator is notified if a command has failed. The
reason for the failure is also indicated.

Related Information
Digital Command on page 95
Set Point Command on page 95

7.3.3 OPC special commands


OPC special commands are used as service commands to establish and control connection
between Omnicomm and the OPC server via the OPC client process.

The special commands are triggered automatically when a new connection is established
between Omnicomm and the OPC client process. The following table lists the OPC special
commands:

Table 38 - OPC special commands


INITIALIZE
This command initializes a connection between the OPC Client and the OPC Server.
Example: fnput remote.OPCremote “INITIALIZE”

INITGROUP
This command initializes a group definition of the OPC server via the OPC Client process. Each
group is defined with a group name, an active flag and an update rate.
Example: fnput remote.OPCremote “INITGROUP”

INITITEMS
This command initializes items in a group on the OPC server via the OPC Client process. The list
of item names is obtained from the analog, status and rate configuration. Each of the analog,
status, and rate records has a coordinate, which includes a group number and an item name.
Example: fnput remote.OPCremote “INITITEMS 1” initializes the item list for group 1.
NOTE: “INITITEMS” or “INITITEMS 0” triggers initialization of item lists for all groups that are
defined under the remote.

INTEGRITY
This command requests an integrity update for items that are defined in a group on the OPC
server via the OPC Client process.
Example: fnput remote.OPCremote “INTEGRITY 1” requests an integrity update for group 1.
NOTE: “INTEGRITY” or “INTEGRITY 0” triggers a request for an integrity update for all groups
that are defined under the remote.

7.3.4 Data types


The OPC client protocol supports seven different data types and supports conversion between
different data types.

The following table lists the data types supported by the OPC client protocol:

Table 39 - OPC client protocol-supported data types


OPC Client protocol data type    Database access (analog / status / rate)
Boolean N/A Write N/A
Unsigned 16 bit Read/Write N/A Read
Signed 16 bit Read/Write N/A Read
Unsigned 32 bit Read/Write Read Read
Signed 32 bit Read/Write N/A Read
Float Read/Write N/A Read
Double Read/Write N/A Read

Data type conversion

The OPC server allows conversion between the following data types:

Table 40 - OPC server conversion table


From/To BOOL I1 UI1 I2 UI2 I4 UI4 R4 R8
BOOL OK ! ! ! ! ! ! ! !
I1 OK OK ! ! ! ! ! ! !
UI1 OK OK OK ! ! ! ! ! !

I2 OK OK OK OK ! ! ! ! !
UI2 OK OK OK OK OK ! ! ! !
I4 OK OK OK OK OK OK ! ! !
UI4 OK OK OK OK OK OK OK ! !
R4 OK OK OK OK OK OK OK OK !
R8 OK OK OK OK OK OK OK OK OK
Legend: OK - conversion is valid; ! - indicates possible overflow. In case of overflow, the target
value is not changed (OPC Data Access Custom Interfaces specification 2.05).

7.3.5 Input/Output coordinates for OPC


The Input/Output coordinates for the OPC protocol represent the existing group and item names
in the OPC server.

The Input and Output tabs in the Analog, Status, and Rate records each contain a
Coordinates field. These coordinates are used by the OPC protocol, and represent the existing
group and item names in the OPC Server. All coordinates reference the OPC Group definitions
set in the opc Row Edit dialog box. Their format is:
groupnumber:itemname
groupnumber:itemname is a zero-terminated string with a total length of 128 characters
(127+null).

Groups are collections of data points that share and adhere to a specified update rate. Group
definitions range from 1 to 20; the group number corresponds to the Row number of the OPC
Group Definition.

itemname is a zero-terminated string that represents point coordinates in the OPC server.

The image below shows an example of an analog configuration. You can see how the Output
Coordinates field uses the format above. The Output coordinates, which are entered as
2:Random.Int1, reference the group definition in the second row of the opc Row Edit dialog
box. The coordinates represent the location in the remote to which this analog point
corresponds. Input and output coordinates need to be set for all Analog, Status, Multistate, and
Rate Records using the same format.
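
A minimal sketch in Python (illustrative only; the helper name is hypothetical) of validating a coordinate in the groupnumber:itemname format described above, assuming group numbers 1 to 20 and the 127-character limit:

# Illustrative sketch: split and validate an OPC coordinate such as "2:Random.Int1".
def parse_opc_coordinate(coordinate):
    if len(coordinate) > 127:            # 128 bytes including the terminating null
        raise ValueError("coordinate longer than 127 characters")
    group_text, _, item_name = coordinate.partition(":")
    group = int(group_text)
    if not 1 <= group <= 20:
        raise ValueError("group number must be between 1 and 20")
    if not item_name:
        raise ValueError("missing item name")
    return group, item_name

print(parse_opc_coordinate("2:Random.Int1"))  # (2, 'Random.Int1')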


Figure 38 - Use of an OPC Group Definition in Input/Output Coordinates

Related Information
OPC Group Definition tab OPC Row Edit on page 103

7.3.6 OPC protocol configuration


OPC protocol-specific fields can be configured in the Connection, Analog, Status, Rate and
Remote Row Edit dialog boxes.

The following tables list the protocol-specific fields in different row edit dialog boxes. The
appropriate entries are also described. For more information on these fields, refer to the related
RealTime table documentation.

Table 41 - OPC protocol-specific fields on the Scan Edit tab in the Connection Row Edit
dialog box
Field Input Description
Comm. Mode Select Remote Poll.
No communication timeout (sec) Set this to a small value (e.g., 5 seconds).
Maximum Time to Connect (sec) Set this to a small value (e.g., 5 seconds).

Table 42 - OPC protocol-specific fields on the Physical Connection Edit tab in the
Connection Row Edit dialog box
Field Input Description
Term. Server/Host Name Type the name of the machine host where the
OPC client process runs.
Port Number Type the OPC client’s listen port number. Refer
to “OPC client process” for more information.

Table 43 - OPC protocol-specific fields on the Input tab in the Analog Row Edit dialog box
Field Input Description
Point Type Select an OPC-supported data type. Refer to “Data Types” for
more information.
Input Coordinates Type the group number and item name in the following format:
groupnumber:itemname. Refer to “Input/Output coordinates for
OPC” for more information.


Table 44 - OPC protocol-specific fields on the Output tab in the Analog Row Edit dialog
box
Field Input Description
Output Type Select an OPC-supported data type. Refer to “Data Types” for
more information.
Output Coordinates Type the group number and item name in the following format:
groupnumber:itemname. Refer to “Input/Output coordinates for
OPC” for more information.

Table 45 - OPC protocol-specific fields on the Input tab in the Status Row Edit dialog box
Field Input Description
Number of Input Bits Type 1 or 2.
Coordinates Type the group number and item name in the following
format: groupnumber:itemname. Refer to “Input/Output
coordinates for OPC” for more information.
Bit Number Type the bit number or bit position of the return value.
Refer to the examples below.
NOTE: To support bit-pack, return values for status
points are 32-bit unsigned integers.

Example 1:
Item RTU.1 is configured on the OPC server as a Boolean value, but is returned as a 32-bit
unsigned integer (0x00000000 for False or 0x00000001 for True). The bit number is 0.
Example 2:
Item RTU.1 is configured on the OPC server as a 16-bit register, but is returned as a 32-bit
unsigned integer (0x0000XXXX, where X is a used bit). The bit number is a value from 0 to 15 that
selects the specific bit.

Table 46 - OPC protocol-specific fields on the Output tab in the Status Row Edit dialog
box
Field Input Description
Output Type Select Latched. OPC Client protocol only supports
Latched.
Coordinates Type the group number and item name in the following
format: groupnumber:itemname. Refer to “Input/Output
coordinates for OPC” for more information.

Table 47 - OPC protocol-specific fields on the Input tab in the Rate Row Edit dialog box
(Pulse Input section)
Field Input Description
Input Coordinates Type the group number and item name in the following
format: groupnumber:itemname. Refer to “Input/Output
coordinates for OPC” for more information.
Point Type Select an OPC-supported data type. Refer to “Data
Types” for more information.


Table 48 - OPC protocol-specific fields on the Input tab in the Rate Row Edit dialog box
(Analog Input section)
Field Input Description
Input Coordinates Type the group number and item name in the following
format: groupnumber:itemname. Refer to “Input/Output
coordinates for OPC” for more information.
Point Type Select an OPC-supported data type. Refer to “Data
Types” for more information.

Table 49 - OPC protocol-specific fields on the Main tab in the Remote Row Edit dialog
box
Field Input Description
Protocol Select OPC Client.

NOTE: Many connections to the OPC client may be configured, but only one remote record
may be associated with a connection.

Table 50 - OPC protocol-specific fields on the Scan Parameters tab in the Remote Row
Edit dialog box
Field Input Description
RTU Turnaround Time (ms)  Set the time (in milliseconds).
Overhead Processing Time (ms)  This field is set depending on the maximum number of items in a
group. This applies to INITITEMS commands. Refer to “OPC
Special Commands” for more information.

Related Information
OPC special commands on page 104
Data types on page 105
Input/Output coordinates for OPC on page 106
OPC client process on page 109

7.3.7 OPC client process


The OPC client process facilitates communication between the SCADA host and the OPC
server.
The OPC Client process allows Omnicomm to create a connection between itself and the OPC
Client process by using a listen port. Listen ports on the OPC Client process are configurable.
Each connection is created through the listen port, which acts as a stand-alone OPC Client
process.

There are two types of OPC Clients: a 64-bit client and a 32-bit client. Some OPC Servers, such as
RSLinX Classic, require a 32-bit client in order to work properly. Both types of clients are installed
on 64-bit platforms using the OPC Client media. On 32-bit platforms, only the 32-bit client is installed.

NOTE: The OPC Client process should run as a Windows service, on the same machine as the
OPC server.


Starting/Stopping the OPC client


It is recommended that you use the Services application or Services Control Manager (SCM) to
control or configure the OPC Client service. Services is a Windows application that appears
under Administrative Tools.

When you start the OPC Client, it can either obtain its settings from the registry, or you can specify
Start parameters. If no parameters are specified, configuration parameters are retrieved from the
opcClientWS.xml file in <DataDirectory>\config\Registry when the OPC Client starts. Both
options involve opening the Services Control Panel. The 32-bit client is called "OASyS DNA
OPC Client Service Win32", and the 64-bit client is called "OASyS DNA OPC Client Service x64".

When the OPC client is running, it outputs the required information (debug, performance or
monitor) to a log file. The log file is in a “log” folder within the OPC client installation folder. The
maximum size of the log file is 5 MB. There may be up to 10 archived log files in the log folder.

Stopping the OPC client

Use the Administrative Tools menu in the Control panel to stop the OPC client.

Procedure

1. Click Start > Settings > Control Panel > Administrative Tools > Services.

Step Result: The Services window appears.


2. Right-click the OPC client.

Step Result: An action menu appears.


3. Click Stop.

Starting the OPC client with default registry settings

You can start the OPC client with settings obtained from the registry.

Procedure

1. Click Start > Settings > Control Panel > Administrative Tools > Services.

Step Result: The Services window appears.


Figure 39 - The Services Control Manager

2. Right-click the OPC client.

Step Result: An action menu appears.


3. Click Start.
Starting the OPC client with specified parameters

You can start the OPC client and specify its parameters.

Procedure

1. Click Start > Settings > Control Panel > Administrative Tools > Services.

Step Result: The Services window appears.


2. Right-click the OPC client.

Step Result: An action menu appears.


3. Click Properties.

Step Result: The OASyS DNA OPC Client Properties dialog box appears.


Figure 40 - OASyS DNA OPC Client Properties

4. Click Stop. This activates the Start parameters: field. Type the parameters in this field as
demonstrated above.
5. Click Start.

Result
Once the OPC client starts, it checks for any start parameters. If no parameters are specified,
the registry settings are used.
NOTE: Do not click OK after entering Start parameters: as this action deletes the information
you have entered.
Account type for the OPC client Windows service
You can run the OPC client Windows service either under the LocalSystem account or as a network user account.
In general, if both the OPC client service and the OPC server are installed on the same
machine, they will run without any problem. However, if the OPC server is configured by DCOM
configuration tools, ensure that the account type of the OPC client can access and launch the
OPC Server. The following table describes the account types:

Table 51 - OPC client account types


Account Type Description
LocalSystem This account type acts as a non-privileged user on the local computer
and presents anonymous credentials to any remote server.
User This account type is defined by a specific user on the network. Consult
your system administrators or IT specialists for user privileges and
security settings.


Advanced options for the OPC client process


There are some advanced options available for debugging the OPC client process.

Normally, the advanced options are disabled for better performance. However, you may enable
some of these options to debug or monitor the OPC Client process.
Debugging an OPC client process
When the options for debugging an OPC client process are enabled, you can obtain log
information about the communication between Omnicomm and the specific OPC client process.
The debugging options are shown in the following table:

Table 52 - OPC client debug options


Option Description
-d xxxxxxxxxxxxxxx   Debug flag; logs command/message details.
-p xxxxxxxxxxxxxxx   Performance flag; logs receiving or sending commands/messages with a time stamp.

xxxxxxxxxxxxxxx is a hexadecimal number of up to 16 digits. It is used to turn on logging of specific
commands or messages between Omnicomm and the OPC client process.

The first 4 digits from the right are used for commands or messages that are sent by
Omnicomm to the OPC Client process; the next 4 digits are reserved for those sent by the OPC
client process to Omnicomm.

The following table provides a summary of the commands or messages that are sent between
Omnicomm and the OPC client process.

Table 53 - Commands/messages between Omnicomm and the OPC client


Omnicomm to OPC Client process:
bit 1 (hex 1) - Initialize
bit 2 (hex 2) - Group definition
bit 3 (hex 4) - Item definition
bit 4 (hex 8) - Integrity update
bit 5 (hex 10) - Write data

OPC Client process to Omnicomm:
bit 1 (hex 10000) - Success
bit 2 (hex 20000) - Error
bit 3 (hex 40000) - Warning
bit 4 (hex 80000) - Data
bit 5 (hex 100000) - Heartbeat
bit 6 (hex 200000) - Shutdown
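
The hexadecimal value passed to -d or -p is simply the OR of the bit values listed above. The sketch below is Python and illustrative only; the flag labels are just names for the table rows, not part of the product.

# Illustrative sketch: build the hexadecimal logging mask from the bit values above.
# Host-to-client bits occupy the low 4 hex digits; client-to-host bits the next 4.
TO_CLIENT = {"initialize": 0x1, "group": 0x2, "items": 0x4,
             "integrity": 0x8, "write": 0x10}
FROM_CLIENT = {"success": 0x10000, "error": 0x20000, "warning": 0x40000,
               "data": 0x80000, "heartbeat": 0x100000, "shutdown": 0x200000}

def debug_mask(*flags):
    value = 0
    for flag in flags:
        value |= TO_CLIENT.get(flag, 0) | FROM_CLIENT.get(flag, 0)
    return format(value, "x")

# Log write-data commands plus error and data messages: use -d a0010
print(debug_mask("write", "error", "data"))  # a0010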

7.3.8 OPC table details


The OPC table contains several internal fields.

The following table lists all of the OPC table’s internal fields:


Table 54 - Fields in the RealTime OPC table


Internal Field Name Data Type Description
accessFlag string Access flag
dataset string Associated dataset
debug unsigned_integer Debug
group string Group of responsibility
heartbeat unsigned_integer OPC heartbeat
host string OPC host name
name string OPC remote name
opcGroup[0-9].active string Active group
opcGroup[0-9].enable string Enable group
opcGroup[0-9].integrityUpdate string Integrity update
opcGroup[0-9].updateRate unsigned_integer Update rate in milliseconds
ptnum integer Slot number in database
server string OPC server name


8 Accessing XOS Elements (Sound, Icon, Bitmap Files) from ADE
XOS_Elements files come from ezXOS. In order for ADE to access the files, they need to be
distributed from ezXOS.

NOTE: This workflow is only applicable for the 7.7 version of ADE and up.

NOTE: This workflow outlines how to access the sound files from ADE. The steps are similar
for accessing the icon and bitmap files.
In ADE, open the xosalmattr Row Edit dialog box for an alarm attribute record. On the row edit
dialog box, click the ellipsis button (...) for the Audio File field to open the SoundFilePopUp
dialog box. If the XOS_Elements file has not been distributed, you will receive an error
message saying there is no XOS_Elements folder. To distribute the XOS_Elements file, follow
the steps below.

Figure 41 - Example - Missing XOS_Elements Folder Error Message

Procedure

1. From the display repository ezXOS workstation, open the command shell for ezXOS, and
type the following:
distribute -S XOSELEMENTS -H \\[target machine]\ADE_[version]_xos_elements

Figure 42 - ezXOS Command Shell

2. Once the distribute finishes (and is successful), go back to the target machine and restart
ADE.
3. Open the xosalmattr Row Edit dialog box for an alarm attribute record.


Figure 43 - xosalmattr Row Edit Dialog Box

4. On the row edit dialog box, click the ellipsis button (...) for the Audio File field to open the
SoundFilePopUp dialog box.

Step Result: If the distribute was successful, the dialog box is populated with the appropriate
sound files. You will also find a new folder called ezXOS_[version]_XOS_Elements in the
following location: [target machine]\ADE_[version]_xos_elements.

Figure 44 - ezXOS_7.7_XOS_Elements Folder

NOTE: ADE does not allow two versions of the XOS Elements folder (for example, for both
versions 7.7 and 7.8). If you receive a duplicate version error message, you must
choose which version you want to keep and delete the other one.


Figure 45 - SoundFilePopUp Dialog Box

Schneider Electric
49 Quarry Park Blvd SE
Calgary, AB T2C 5H9 - Canada
Phone: 1 (403) 253-8848
www.schneider-electric.com

As standards, specifications, and designs change from time to time, please ask for confirmation of the information given in this publication.

© Schneider Electric. All Rights Reserved.
