
6 Node Domain
Network models in OPNET Modeler are constructed from two broad classes of
components: communication nodes and communication links. The internal
structure of these objects, for the most part, is not visible at the network level.
This section presents the methods used to specify the internal structure and
therefore much of the capabilities of communication nodes. For more
information about communication links, see Chapter 9 Communication
Mechanisms on page MC-9-1.

Both the structural complexity of network nodes and their scope of activity can
vary greatly depending on the system being modeled. At the low end of
complexity and function, simple terminals (connected to a host computer via
serial connections) can be considered nodes of a star network. At the other end
of the complexity spectrum, the network management center of a nationwide
public packet network can also be considered a single hub node. The table below
presents some typical examples of node models that can be built with
OPNET Modeler.

Table 6-1 Typical Node Models

Node Type            General Functions
Workstation          Can generate and receive transfers of files or sparse packets and
                     manage several concurrent network connections.
Packet Switch        Supports large numbers of incoming and outgoing data links and
                     performs packet routing at high speeds.
Satellite Terminal   Generates and receives packets according to a channel access
                     protocol. Acts as a multiplexor for many incoming links.
Remote Data Sensor   Acts as a simple source of packets, usually transmitting them in
                     bursts.

Because the goal of using OPNET Modeler is to construct executable
simulation models of communication networks, internal node functions must be
modeled sufficiently well that the overall simulation of network behavior is
accurate. The system provided by OPNET Modeler is powerful enough to
specify virtually any form of communication node. Moreover, it supports
reusability of node substructures, and sharing of effort between various
members of a design or development team.

OPNET Modeler’s node modeling system is based on a block-structured
approach, which is familiar to most engineers and used extensively in modeling
system architectures. Some examples of other applications which use this
approach are signal processing, process control, and three-dimensional
computer graphics. In these applications, the blocks operate respectively on
analog or digital signals, fluids or other media, and polygons or vectors.
Block-structuring is a natural technique for describing hardware systems, and
can also be an effective tool when describing the relationships between

OPNET Modeler/Release 16.1 MC-6-1

high-level software objects with well-defined interfaces. In fact, the International
Organization for Standardization’s Open Systems Interconnection (OSI)
Reference Model for data communications networks is based on a block-structured
approach, where each block corresponds to a different protocol layer. Like the
OSI architecture, the node models in OPNET Modeler contain blocks which
operate on packets, so it is well suited to modeling protocol layers. This aspect
of OPNET Modeler is described in Modeling Layered Protocols on
page MC-6-30.

A node model is composed of a series of connected blocks called modules.
Each module contains a set of inputs and outputs, some state memory, and a
method for computing the module’s outputs from its inputs and its state memory.
The manner in which this computation takes place depends upon the type of
module. Some modules have predefined behavior to accomplish a specific
purpose, while the behavior of other types of modules may be specified by the
model designer.

Node models specify the manner in which the inputs and outputs of various
modules are connected using objects called connections. There are two types
of connections, one to carry data packets, and one to transmit individual values.
A module can send data packets over its output packet streams, and receive
them from its input packet streams. It may also send individual values over
output statistics, and receive them from input statistics. The connection from a
source module’s output stream to a destination module’s input stream is called
a packet stream object. Likewise, the connection between a source module’s
output statistic and a destination module’s input statistic is called a statistic wire
object. (The connection objects are usually referred to as packet streams and
statistic wires, or simply streams and statwires, respectively.) Other more
abstract intermodule communication mechanisms are available. For more
details, see Chapter 9 Communication Mechanisms on page MC-9-1.

Modules represent the various functional areas of a node, whether implemented
in hardware or software. In cases where portions of a node model correspond
to hardware structures, separate modules may correspond to specific hardware
subsystems that physically exchange data via parallel buses or serial data links.
For software structures, modules may correspond to software layers (or
sublayers) that pass packets via function calls or shared memory. In some
cases, modules might have no physical counterparts in the real node, but may
be used to set up specific test cases or be dedicated to computing specialized
statistics.

The next two sections define the capabilities of the various types of modules and
connections, and illustrate their specific uses within node models. The
remaining sections describe the techniques used in creating specific types of
models, including layered protocols, queues, and shared resources.

For more information see:

• Module Definitions on page MC-6-3


• Connection Definitions on page MC-6-11

• Node Model Interfaces on page MC-6-17

• Modeling Queues on page MC-6-26

• Modeling with External Systems on page MC-6-30

• Modeling Layered Protocols on page MC-6-30

• Modeling Shared Resources on page MC-6-33

Module Definitions
Modules represent those parts of a communication node within which data are
generated, consumed, or processed. There are several different types of
modules, distinguished by their function within a node: processors, queues,
generators, receivers, and transmitters. Some types of modules (processors,
queues, and generators) have functions that are strictly internal to a node, while
others (transmitters and receivers) have external connections to data
transmission links. The behavior of processors and queues can be specified in
precise detail by means of user-specified process models; other modules have
predefined behavior which can be modified only by setting the values of their
attributes. Some module types are limited in the way they can be connected to
other modules.

A module’s internal algorithm is invoked when an external event that will affect
the state of the module occurs. This is a consequence of the event-driven
simulation methodology used in OPNET Modeler. For example, a module’s
process model is invoked when a packet arrives on one of its packet input
streams, or a statistic to which it is connected changes. Invocations can also be
scheduled at regular intervals, or at specific times by the module itself (to model
a timer expiring, for instance).
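The event-driven invocation pattern described above can be sketched in plain C. This is an illustrative model only, not the OPNET Proto-C API; all type and function names here are invented:

```c
#include <assert.h>

/* Hypothetical sketch of event-driven module invocation. A module's
   algorithm runs only in response to an interrupt; its state memory
   persists between invocations. */
typedef enum { INTRPT_STREAM, INTRPT_STAT, INTRPT_SELF } intrpt_type;

typedef struct {
    int packets_handled;   /* state memory updated across invocations */
    int timers_expired;
} module_state;

/* Invoked once per interrupt, like a process model responding to an event. */
void module_invoke(module_state *st, intrpt_type type)
{
    switch (type) {
    case INTRPT_STREAM: st->packets_handled++; break; /* packet arrival   */
    case INTRPT_SELF:   st->timers_expired++;  break; /* timer expiration */
    case INTRPT_STAT:   /* statistic change: read input statistic here */
        break;
    }
}
```

The point of the sketch is that nothing happens between interrupts; each invocation updates state and returns, mirroring the event-driven methodology described above.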

The following sections describe each module type, giving an overview of its
capabilities and applications.

• Processor Modules on page MC-6-4

• Queue Modules on page MC-6-5

• Esys Modules on page MC-6-7

• Transmitter Modules on page MC-6-8

• Receiver Modules on page MC-6-9

For additional details about specific types of modules, including attributes and
output statistics, see Node Reference on page MR-2-1.


Processor Modules
Figure 6-1 Standard Icon Used for a Processor Module

Processor modules are the primary general-purpose building blocks of node
models. Their behavior can be completely specified by setting their “process
model” attribute, which specifies the process model to be executed by the
module. This process model can respond to external events or interrupts as
desired to model a specific function. Processors can be connected to other
modules to send and receive packets via any number of packet streams. For
example, a module designed to multiplex a number of data channels over a
communications link might have several input packet streams and one output
packet stream.

Processor modules are used to do general processing of data packets. A typical
processor might receive a packet on an input stream, do some processing, and
send the packet out again on an output stream. The output packet might be
delayed for a short time, or it might be modified with respect to the input packet.
For example, a common use of processors is in modeling a layered protocol
stack, where each layer provides a well-defined service to the adjacent layer
through a specific interface. In this case, a separate processor module is
typically used to model each layer of the protocol, exchanging packets of data
with the modules that represent the next higher and lower layers.

Processor modules can act as traffic generators when you specify the correct
process model for the module’s “model” attribute. When you specify such a
model, attributes that configure traffic (interarrival times, packet size, probability
density functions) are promoted and appear in the processor module’s attribute
list. The standard model set includes process models implemented as traffic
generators for several traffic types, including bursty traffic and self-similar traffic.

While processor modules are often connected to other modules via pairs of
streams (input streams and output streams), this is not always the case. For
instance, it is possible to use a processor module as a “controller” within a node
model. In such cases, the processor often communicates directly with other
modules through statistic wires or remote interrupts. In fact, it might not use any
packet streams at all. Processor modules can also be used as traffic sources
(generators) or sinks, in which case they might have only output streams or only
input streams.


Node models may employ both processor modules and queue modules
(described in the next section) to implement general processing of packets.
Normally, a processor module would be used in cases where a packet can be
completely processed in response to the interrupt associated with its arrival or
generation. If this is not the case, and it is necessary to buffer the packet while
awaiting a later event to complete processing, then a queue module, with its
additional buffering resources, is likely to be more correct. This is particularly
true if multiple packets must be buffered simultaneously.

Like all objects, processors have a number of built-in attributes that configure
their behavior. These attributes can be specified in the node domain, typically
using the graphical user interface of the Node Editor. However, processors are
one of the special objects that support a “model” attribute, in this case to select
the processor’s root process model. When selecting a process model, a
processor’s attributes may change: new attributes may appear, existing
attributes may change values, and certain attributes may disappear. This
depends on the process model’s declared model attributes and on its attribute
interfaces. Process model attribute interfaces allow the process model
developer to specify values in advance for processor built-in attributes. At the
same time, a processor attribute can be given a status of hidden which means
that it will exist and have a specified value, but will not appear on the processor’s
attribute list in the user interface. The process model’s model attributes allow
new attributes to be defined and automatically inherited by the processor
object.

For additional information, see:

• Chapter 5 Process Domain on page MC-5-1 about process model attributes
  and process attribute interfaces.

• Chapter 4 Modeling Framework on page MC-4-1 about model attributes and
  attribute interfaces in general.

Queue Modules
Figure 6-2 Standard Icon Used for a Queue Module

Queue modules provide a superset of the functionality of processor modules.
Like processors, they can execute an arbitrary process model that describes the
behavior of a particular process or protocol, and can be connected via packet
streams to other modules, allowing them to send and receive data packets. The
process model can also affect the queue object’s list of attributes. The previous
section on the processor module provides information concerning the
relationship between process models and processors that is also applicable to
queues.


The primary difference between processors and queue modules is that queues
contain additional internal resources called subqueues, as illustrated in the
figure below. Subqueues facilitate buffering and managing a collection of data
packets. While it is possible to implement this functionality with ordinary
processor modules, the use of subqueues, together with the Kernel Procedures
that manipulate them, provides greater flexibility and ease of implementation of
a variety of queueing disciplines. Moreover, subqueues automatically compute
a number of statistics about their operation, which may be collected in a
straightforward manner by using probes. For more information, see Probe
Reference on page MR-4-1.

Figure 6-3 Internal Structure of Queue with k Subqueues

(The figure shows subqueues 0 through k − 1; each subqueue i holds packets 0
through n_i − 1, ordered from the head of the subqueue to its tail.)

Each queue module contains a definable number of subqueues. A subqueue is
an object which is subordinate to the queue object and which has its own
attributes used to configure it. It is contained within the queue by means of the
queue’s “subqueue” compound attribute. This attribute holds a
compound-attribute object which is the parent of one or more subqueues. See
Chapter 4 Modeling Framework on page MC-4-1 for more information about
how subordinate objects are stored using compound attributes.

The capacity of each subqueue to hold data is unlimited by default, but a limit
may be set on the number of packets or the total size of all packets (or both)
within a subqueue. It is up to the processes in the queue to determine what
action to take when subqueues become full: packets may be removed to create
space for new arrivals, or the new arrivals may be discarded. The Simulation
Kernel merely provides notification that a subqueue can no longer accept
additional packets when an insertion is attempted.
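This division of responsibility can be sketched as follows. The sketch is illustrative C, not the OPNET Kernel Procedure interface; the `subqueue` type and `subq_insert` function are invented names:

```c
#include <assert.h>

/* Illustrative sketch: a subqueue with an optional packet-count limit.
   The insert function merely reports failure when the limit is reached;
   deciding whether to discard the new arrival or evict an older packet
   is left to the process model, as described in the text. */
typedef struct {
    int count;   /* packets currently held */
    int limit;   /* max packets; a negative value means unlimited */
} subqueue;

/* Returns 1 on success, 0 if the subqueue cannot accept the packet. */
int subq_insert(subqueue *sq)
{
    if (sq->limit >= 0 && sq->count >= sq->limit)
        return 0;     /* notification only: the caller chooses the policy */
    sq->count++;
    return 1;
}
```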


Most of the Kernel Procedures related to queue modules operate directly on a
specified subqueue (for example, packet insertion and deletion, accessing
statistics, or flushing a subqueue). Some Kernel Procedures, however, access
properties of the queue module as a whole (for example, obtaining aggregate
statistics or flushing the contents of an entire queue module), or of a packet
within the queue module without regard to the particular subqueue which holds
it (for example, queue insertion time or waiting time). For more information about
these Kernel Procedures, see Queue Package on page DES-15-1 and
Subqueue Package on page DES-21-1.

Because the user controls the process model executed by a queue, it is possible
to model any queueing protocol by defining the manner in which the subqueues
are accessed and managed. Within a module, subqueues may be selected
directly by a physical index, or via abstract indices, which allow the process
model to choose a subqueue based on its state relative to other subqueues. For
example, the shortest subqueue may be chosen, or the one with the largest
queueing delay. Likewise, packets within a subqueue may be accessed directly
by index number, or they may be retrieved abstractly by requesting the packet
at the head or the tail of the subqueue, or the packet with the highest priority.
For more details on how these features can be used to implement a particular
queueing discipline, see Modeling Queues on page MC-6-26.
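Abstract selection can be sketched with a simple helper. This is not an OPNET Kernel Procedure; the function name and representation of subqueue state are assumptions for illustration:

```c
#include <assert.h>

/* Hypothetical helper: choose a subqueue by an abstract criterion,
   here the one currently holding the fewest packets. A process model
   could use a selection like this to implement a load-balancing
   queueing discipline. */
int shortest_subqueue(const int *pk_counts, int k)
{
    int best = 0;
    for (int i = 1; i < k; i++)
        if (pk_counts[i] < pk_counts[best])
            best = i;
    return best;   /* physical index of the shortest subqueue */
}
```

An analogous helper could select on largest queueing delay or highest-priority head packet; only the comparison changes.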

Esys Modules
Figure 6-4 Standard Icon Used for an esys Module

The external system module, or esys module, provides a superset of the
queue’s functionality. In addition to all of the attributes that a queue has, an esys
module also has an “esd model” attribute, which specifies an external system
definition (ESD) model to use during a cosimulation. If “esd model” is set to
NONE (the default), the esys module behaves exactly like a queue.

Double-clicking on an esys module opens the Process Editor, in which you can
edit the esys module’s process model. To create or edit the esys module’s ESD,
click the Edit ESD Model button in the esys module’s Attributes dialog box.
When an esys module uses an ESD model, the module’s process model acts
as the interface between the external system and the rest of OPNET Modeler.
Therefore, the process model is OPNET Modeler’s interface to the external
system.


Transmitter Modules
Figure 6-5 Icons Used for Transmitter Modules

Point-to-point transmitter Bus transmitter

Transmitter modules serve as the outbound interface between packet streams
inside a node and communication links outside the node. There are two types
of transmitter modules, corresponding to the different types of communication
links: point-to-point and bus (in addition, radio transmitters are available in the
Wireless functionality). Each of the transmitter module types has essentially the
same basic behavior, although there are some differences related to the
different communication mechanisms.

Transmitter modules can collect packets from one or more input packet streams
and relay them over corresponding channels within the communications link. A
packet received on a given input stream is transmitted over the channel with the
same index number. Each channel can have its own data rate, which can be
used with the size of the packet to determine the transmission time. Packets
arriving on an input stream while the corresponding channel is busy with a
previous packet are automatically queued in a buffer. This buffer has a default
capacity of 1000 packets of any size.
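The transmission-time relationship described above (channel data rate combined with packet size) can be written out directly; this is a sketch of the arithmetic only, with invented names:

```c
#include <assert.h>

/* Time a channel stays busy transmitting one packet: packet size in
   bits divided by the channel's data rate in bits per second. While
   the channel is busy, subsequent arrivals on the corresponding input
   stream wait in the transmitter's buffer. */
double transmission_time(double pk_size_bits, double data_rate_bps)
{
    return pk_size_bits / data_rate_bps;
}
```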

Channels are subordinate objects of transmitter modules. The transmitter
contains them within its “channel” attribute, which is a compound attribute. The
“channel” attribute’s value is a compound attribute object, which is the parent of
all of the contained channels (see Chapter 4 Modeling Framework on
page MC-4-1 for more information on compound attributes and subordinate
objects). Each channel maintains a set of attributes that control various aspects
of packet transmission. Some of these attributes are also used to determine
compatibility between transmitters, receivers, and link objects to form viable
links in the Project Editor. Specifically the “data rate” and “packet formats”
attributes of channels are used to establish consistent links for point-to-point
and bus transceiver channels. For more information about link consistency, see
Network Domain on page MC-7-1.

Within a node model, a transmitter module is considered to be a data sink.
Therefore, although they may have many input packet streams, transmitter
modules do not have output packet streams. From the point of view of the
network model, a transmitter module acts as the node’s output port, to which a
communication link of the corresponding type may be connected: simplex and
duplex links to point-to-point transmitters, and bus links to bus transmitters.
These relationships are illustrated in the following figure.


Figure 6-6 Relationship of Transmitter Modules to Network and Node Models

(The figure shows packet streams connecting other modules in the node model
to the transmitter, which connects in turn to a communication link leading to
other nodes in the network model.)

Several of the parameters controlling transmission of packets from
point-to-point and bus transmitter modules are actually specified as attributes of
the link (such as error rate and propagation delay).

Transmitter channel objects respond to an “abort” command, which causes
them to abort the transmission of the current packet. The
op_ima_obj_command() Kernel Procedure is used for this purpose; for more
details, see the description of this procedure in Internal Model Access on
page DES-10-1.

Transmitter channel objects calculate a number of statistics which can be
usefully monitored by other modules via statistic wires, or collected through use
of a statistic probe. For example, the “bit_thruput” and “pk_thruput” statistics
may be used to monitor the throughput of a specific transmitter channel, based
on the total size or number of data packets, respectively. For more information
about the statistics supported by a particular transmitter channel object, see
Node Reference on page MR-2-1.

Receiver Modules
Figure 6-7 Icons Used for Receiver Modules

Point-to-point receiver Bus receiver


Receiver modules serve as the inbound interface between communication links
outside a node and packet streams inside the node. As with transmitters, there
are two types of receiver modules, corresponding to the different models of
communication links: point-to-point and bus (in addition, radio receivers are
available in the Wireless functionality). Each of the receiver module types has
essentially the same basic behavior, although there are some differences
related to the different communication mechanisms.

Receiver modules can distribute packets to one or more output packet streams
upon receiving them over corresponding channels within the communications
link. A packet received on a given input channel is relayed to the output stream
with the same index number. Channels are subordinate objects of receiver
modules. The receiver contains them within its “channel” attribute, which is a
compound attribute. The “channel” attribute’s value is a compound attribute
object, which is the parent of all of the contained channels (refer to
Chapter 4 Modeling Framework on page MC-4-1 for more information about
compound attributes and subordinate objects). Each channel maintains a set of
attributes that control various aspects of packet reception. Some of these
attributes are also used to determine compatibility between receivers,
transmitters, and link objects to form viable links in the Project Editor.
Specifically the “data rate” and “packet formats” attributes of channels are used
for point-to-point and bus transceiver channels. For more information, see Link
Consistency on page MC-7-12.

Within a node model, a receiver module is considered to be a data source.
Therefore, although they may have many output packet streams, receiver
modules do not have input packet streams. From the point of view of the
network model, a receiver module acts as an input port, to which a
communication link of the corresponding type may be connected: simplex and
duplex links to point-to-point receivers, and bus links to bus receivers. These
relationships are illustrated in the following figure.


Figure 6-8 Relationship of Receiver Modules to Network and Node Models

(The figure shows a communication link from other nodes in the network model
arriving at the receiver, which connects via packet streams to other modules in
the node model.)

Many of the parameters controlling reception of packets by point-to-point and
bus receiver modules are actually specified as attributes of the link, such as
error rate and propagation delay.

Receiver channel objects calculate a number of statistics which can be usefully
monitored by other modules via statistic wires, or collected through use of a
statistic probe. For example, the “busy” statistic may be used to do carrier
sensing by indicating the receipt of a packet over the link associated with the
receiver. Likewise, the “collision_status” statistic of bus receiver channel objects
may be used to detect the collision of multiple packets on a particular bus
channel. For more details on the statistics supported by a particular receiver
channel object, see Node Reference on page MR-2-1.

Connection Definitions
Connections represent the communication paths and associations between the
various modules in a node. There are three types of connections in node
models: packet streams, which support the flow of data packets between
modules, statistic wires, which support the transmission of numerical state
information between modules, and logical associations, which indicate a binding
between two modules, allowing certain pairs of modules to perform a function
together. Note that while connections are the most commonly used means of
communication between models, it is possible to transfer data between modules
via other means. For more information, see Chapter 9 Communication
Mechanisms on page MC-9-1.

The following sections describe each connection type, giving an overview of
their capabilities and applications. For a detailed description of these
connections and their attributes, see Node Reference on page MR-2-1.


• Packet Streams on page MC-6-12

• Statistic Wires on page MC-6-13

• Logical Associations on page MC-6-15

Packet Streams
Figure 6-9 Packet Stream representation

Packet Stream

Packet streams are connections that carry data packets from a source module
to a destination module. They represent the flow of data across the hardware
and software interfaces within a communication node. They are also regarded
as completely reliable (that is, no packets are lost and no errors are introduced)
and of potentially infinite bandwidth (because an arbitrarily large packet can
traverse a stream with no delay). Furthermore, streams have an unlimited
capacity to buffer data packets at the destination module in the order of their
arrival.

OPNET Modeler provides three different methods for transferring a packet over
a stream and notifying the destination module of its arrival: scheduled, forced,
or quiet. In a scheduled arrival, a “stream interrupt” is scheduled for the
destination module, occurring after all other events of lower priority previously
scheduled for that time. In a forced arrival, the “stream interrupt” occurs
immediately, before any other scheduled events. Finally, with a quiet arrival, no
“stream interrupt” occurs at all, and the packet remains buffered within the
stream. The destination module can obtain the packet only if it explicitly checks
for its presence when it responds to some other event. (For a detailed
description of various interrupt methods, see Interrupt Methods on
page MC-4-91 and Packet Streams on page MC-9-3.)

Output streams of generators and receivers (that is, modules which have no
process models) use the type of arrival specified by the “intrpt method” attribute
of the stream. For output streams of processors or queues, the module’s
process model overrides the value of this attribute by sending the packet to the
output stream using one of the Kernel Procedures op_pk_send(),
op_pk_send_delayed(), op_pk_send_quiet(), or op_pk_send_forced(). See
Packet Package on page DES-12-1 for more information.
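The behavioral difference between the three arrival methods can be sketched as follows. This is an illustrative model of the semantics, not the OPNET API; the `stream` type and counters are invented for the example:

```c
#include <assert.h>

/* Sketch of the three arrival methods: "forced" invokes the destination
   before any pending events, "scheduled" delivers the interrupt in event
   order, and "quiet" only buffers the packet, which the destination must
   later poll for on its own. */
typedef enum { ARRIVAL_SCHEDULED, ARRIVAL_FORCED, ARRIVAL_QUIET } arrival_method;

typedef struct {
    int buffered;     /* packets currently waiting in the stream  */
    int interrupts;   /* stream interrupts delivered so far       */
    int immediate;    /* of those, how many preempted other events */
} stream;

void stream_send(stream *s, arrival_method m)
{
    s->buffered++;
    if (m == ARRIVAL_QUIET)
        return;                 /* no interrupt: destination must poll */
    s->interrupts++;
    if (m == ARRIVAL_FORCED)
        s->immediate++;         /* delivered before scheduled events */
}
```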


Packet streams support the introduction of a delay between the time a packet is
placed on a stream by the source module and the time it arrives at the
destination module. By default, this value is zero, and in many cases this is
appropriate. However, a nonzero value may be used to model a constant
processing time, propagation delay, or transfer rate. Furthermore, a process
model executing in a processor or queue module may specify its own delay
when it places a packet on an output stream. In this way the delay may be
computed as a function of the size or contents of the transferred packet, for
example.

Packet streams are normally used to carry the actual data packets that the
modeled communication node is intended to process, rather than control data
which represents communication between adjacent modules. While it is
possible to send such control data between processor modules over packet
streams, it is often more appropriate to use other mechanisms, such as statistic
wires or ICIs (Interface Control Information structures). The use of ICIs to
communicate control data between layers of a protocol stack is shown in the
Modeling Layered Protocols on page MC-6-30. For more information about
communication between modules, see Chapter 9 Communication Mechanisms
on page MC-9-1.

Statistic Wires
Figure 6-10 Statistic Wire Representation

Statistic Wire

Statistic wires carry data from a source module to a destination module. Unlike
packet streams, which convey packets, statistic wires convey individual values.
They are generally used as an interface by which the source module can share
certain values with the destination module, and thereby provide information
regarding its state. They are also used as a signalling mechanism, allowing the
source module to notify the destination module that a particular condition of
interest has been reached.

Each module within a node has a set of local output statistics whose values are
updated at correct times during the simulation. It is this set of statistics that can
act as the sources of statistic wires. Processor and queue modules, which can
execute user-defined process models, have a set of statistics whose
computation and interpretation are defined by their process model. These
statistics are updated when the process model calls the Kernel Procedure
op_stat_write() (refer to Chapter 5 Process Domain on page MC-5-1 for more
information on user-defined local output statistics). Other modules have a


predetermined set of statistics that are updated automatically by the Simulation
Kernel at appropriate times. Each statistic wire can convey only one output
statistic, which is specified in the “src stat” attribute of the statistic wire, as shown
in the following figure.

Figure 6-11 Statistic Wire Source Statistic Selection

(Any of the local output statistics supported by the “Relay” module can be
selected as sources for the statistic wire.)

Processor and queue modules also have a set of input statistics, which act as
destinations for statistic wires. Process models within these modules can
access the current value of a particular input statistic by calling the Kernel
Procedure op_stat_local_read(). The statistic wire binds together an output
statistic and an input statistic, so that their values change in a linked manner. In
this way the value sharing described above is accomplished. Note that an
output statistic can be the source of many statistic wires, and similarly an input
statistic can be the destination of more than one statistic wire.

When the value of a statistic changes, the destination module can be notified by
means of a “statistic interrupt”. The conditions under which this interrupt occurs
can be controlled by means of attributes of the statistic wire, called statistic
triggers. For instance, a wire can be set to cause an interrupt when its statistic
becomes larger than a particular value, or when it becomes smaller than its
previous value. If no triggers are enabled, then no “statistic interrupts” will occur,
and the destination module must poll the input statistic on its own to determine
if its value has changed.
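Trigger evaluation of this kind can be sketched in C. The trigger names and structure below are illustrative assumptions, not the actual statistic-wire attributes:

```c
#include <assert.h>

/* Sketch: decide whether a change in a statistic's value should cause a
   "statistic interrupt" at the destination module. Two example triggers
   are modeled: crossing above a threshold, and falling below the
   previous value. */
typedef struct {
    int    rising_enabled;    /* trigger when value crosses above threshold */
    double threshold;
    int    falling_enabled;   /* trigger when value drops below previous    */
} stat_triggers;

int stat_should_interrupt(const stat_triggers *t, double prev, double now)
{
    if (t->rising_enabled && prev <= t->threshold && now > t->threshold)
        return 1;
    if (t->falling_enabled && now < prev)
        return 1;
    return 0;   /* no trigger fired: the destination must poll instead */
}
```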

Statistic wires are typically used in one of two ways within node models. First,
they may be employed to dynamically monitor the state of another component
of a communication node. Often this is an “unintelligent” physical device like a
transmitter or receiver, but it might also be a queue module, for example, if a
processor needed to alter its behavior based on the size of a queue. In these
cases, the statistics being monitored are computed automatically by the
Simulation Kernel. Second, statistic wires may be used to allow one module to
signal another of a change in its state. Such uses are often called semaphores.
In contrast to the first case, these values are computed by a process within a
processor or queue module.

An example of both uses can be seen in the ethcoax_station_adv node model
shown in the following figure. The defer module computes a variable called
“deference” and supplies it to the mac module via a statistic wire, to inform it
when it can transmit the next packet. The defer module computes this value
from two input statistics: one is a value supplied to defer by mac, indicating
whether it has packets awaiting transmission, and the other is the value of the
“busy” statistic from the bus receiver module. Here the “deference” and “packets
waiting” statistic wires are used as semaphores, whereas the “busy” wire is used
as a monitor.
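The semaphore logic of the defer module can be sketched as follows. This is illustrative only: the real defer process also models interframe gaps, backoff, and collisions, all omitted here, and the function names are invented for the example.

```c
#include <stdbool.h>

/* "deference" output statistic, recomputed whenever the "busy" input
 * statistic supplied by the bus receiver changes (the monitoring use). */
int compute_deference (bool channel_busy)
{
    return channel_busy ? 1 : 0;
}

/* The mac module may start its next transmission only when deference is
 * clear and its own "packets waiting" semaphore is set. */
bool mac_may_transmit (int deference, bool packets_waiting)
{
    return deference == 0 && packets_waiting;
}
```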

Figure 6-12 ethcoax_station_adv Node Model

Labels: “busy” wire, “deference” wire, “packets waiting” wire

Logical Associations
Logical associations are special connections that do not actually carry data
between modules. In fact, logical associations do not exist during simulation, but
are used purely as specification devices. The purpose of a logical association is
to indicate that a relationship exists between two modules in a node model. The
existence of this relationship is used to interpret the node model’s structure.

While logical associations rely on the general concept of establishing a binding
between two objects, they are currently only supported for creating associations
between particular types of transmitter and receiver modules. Specifically, a
point-to-point transmitter and a point-to-point receiver may be logically
associated, and so may a bus transmitter and bus receiver. No other modules
support this type of connection. For this reason, logical associations are also
called logical transceiver associations.


The role of the logical transceiver association is to indicate that a transmitter and
a receiver function as a pair. While the objects exist independently,
OPNET Modeler must sometimes choose which transceivers to use when
attaching nodes to links in the Project Editor. Each duplex point-to-point link, for
example, makes use of one point-to-point transmitter and one point-to-point
receiver in each of the nodes to which it is connected. Each node may contain
multiple modules that can act as these “termination points” for the link. If it is
important to the node modeler that a particular transmitter and receiver be used
for the same attached link, then this requirement can be expressed by creating
a logical association between them. This same capability can be used for bus
transmitters and receivers, though it is only applicable on the end of the tap that
attaches the node to the bus.

The following figure shows the effect of logical associations when creating
point-to-point links in the Project Editor.

Figure 6-13 Selection of Transceivers Using Logical Association


Logical associations keep (Rx_0 and Tx_0) and (Rx_1 and Tx_1) together in
duplex point-to-point link attachments to remote nodes; the (Rx_0, Tx_1)
combination is prevented.

The transceiver selections made by OPNET Modeler can be manually
overridden by a user. However, because most users have no visibility into the
structure of node models, the concept of transceivers is foreign to them.
Therefore, the user interface of other OPNET analysis software products does
not permit control of the transceiver selections, and all selections are
automatically performed by the tool. For this reason, it is particularly important
to use transceiver logical associations when developing models to be delivered
to end-users of other OPNET analysis software products.


Node Model Interfaces


Two different types of OPNET Modeler users work with node models: node
model developers and node model users. Node model users have an external
perspective of the node model, and are not necessarily cognizant of the model’s
internal structure. They are primarily concerned with two questions:

1) What is the model’s intended functionality?

2) What is the model’s interface?

The model’s interface dictates how one can use the model and control it
externally, without modifying any of its internal specifications.

Node model developers have a perspective that is the mirror-image of the node
model users. They are concerned with two questions as well:

1) What is the desired functionality, and how should it be achieved?

2) What aspects of this functionality should be presented to, and controllable
by, the user?

In other words, the developer must determine what the model’s interface should
offer to the model user. This section describes the components that may
constitute a node model’s interface. Many of the concepts rely on the more
general modeling approach of OPNET Modeler, which is described in
Chapter 4 Modeling Framework on page MC-4-1.

Node Model Attributes


A developer can create any number of new attributes and associate them with
a node model. These node model attributes belong to no particular object in the
Node Domain, but to the model as a whole. When the node model is instantiated
as a node object in the Project Editor, these attributes are inherited by the node
object. Each separate instance of the model inherits its own private “copies” of
the attributes. Node model attributes therefore provide a way of augmenting the
set of attributes supported by a class of node objects.

Node model attributes are generally used to provide additional means of control
to node model users. By exposing a characteristic of a node model and allowing
it to be modified externally (i.e., on a per-instance basis), the node model gains
generality and can be more broadly re-used in different modeling situations, a
principle that is described in Chapter 4 Modeling Framework on page MC-4-1.
The same approach is supported by process model attributes, which allow new
attributes to be introduced for modules in the Node Domain. This raises the
question of when to employ node model attributes versus process model
attributes. In general, the answer is dependent on the “scope” of the attribute.
In other words, the question is: does the attribute rightfully belong to a particular
object in the node or does it belong to many objects in the node, or even to the
node as a whole? For example, an attribute specifying the velocity of the node
object is truly an attribute of the node rather than any particular object in the
node. In contrast, the processing speed of a packet-switch’s switching element
is probably a characteristic of the processor or queue that represents the
switching element, rather than a characteristic of the node as a whole. Of
course, the processing speed is still part of the description of the node as a
whole because the node encompasses the switching element. However, it is
more closely associated with the processor and belongs to the node only by
incorporation.

Another useful question which is more concrete and easier to answer is: which
objects actually use the attribute during simulation? If the answer includes only
one object, say a processor module, then in general it makes more sense to
declare that attribute in the process model that is associated with that
processor. If on the other hand, many objects in the node model require access
to the attribute, then the model developer may consider defining the attribute as
part of the node model. Because model attributes are user-defined, they must
in all cases be interpreted by user-defined logic embedded within process
models, or in some rarer cases transceiver pipeline stages. Therefore, there is
usually at least one process model that must have prior knowledge of the
existence of a model attribute, regardless of which level it is declared in. For this
reason, there is a natural bias toward declaring model attributes at the process
level rather than the node level. Another way to understand this bias is to realize
that, if a model attribute is declared at the node level and a process model
expects to refer to that attribute, then problems can occur if the process model
is used within a different node model which does not declare the required
attribute. This usually means that process models need to verify the existence
of a node model attribute by using the KP op_ima_obj_attr_exists() prior to
accessing the attribute on the node object. For these reasons, model
developers generally tend to use process model attributes for most applications.
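The defensive pattern reads roughly as follows. Because the Kernel Procedures cannot run outside a simulation, this sketch stands in for op_ima_obj_attr_exists() and the subsequent attribute read with a toy attribute table; only the shape of the existence check is the point.

```c
#include <string.h>

/* Toy stand-ins for the attribute-lookup Kernel Procedures; a real
 * process model would call op_ima_obj_attr_exists() and the attribute
 * read KPs on the node object instead. */
typedef struct { const char *name; double value; } Attr;

static int attr_exists (const Attr *attrs, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (strcmp (attrs[i].name, name) == 0)
            return 1;
    return 0;
}

static double attr_get (const Attr *attrs, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (strcmp (attrs[i].name, name) == 0)
            return attrs[i].value;
    return 0.0;
}

/* The defensive pattern: verify that the enclosing node model declares
 * the attribute before reading it, and fall back to a default when the
 * process model is reused in a node model that does not declare it. */
double read_velocity (const Attr *node_attrs, int n)
{
    if (attr_exists (node_attrs, n, "velocity"))
        return attr_get (node_attrs, n, "velocity");
    return 0.0;   /* attribute not declared: assume a stationary node */
}
```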

Figure 6-14 Node Model Attribute Declaration


Node model attributes are declared using the Node Editor, which supports
attributes of several different data types, including compound attributes. The
properties of each attribute can also be defined to control the attribute’s
behavior when it appears in the Project Editor. See Chapter 4 Modeling
Framework on page MC-4-1 for information about attribute data types and
properties. See Model Attributes on page ER-5-6 for details on how to declare
and edit model attributes.

Node Attribute Interfaces and Derived Node Models


The previous section described how model attributes provide the capability to
create new attributes for a node model. This section describes how node model
developers can use a separate mechanism called model attribute interfaces to
exercise control over existing attributes, as well as to specify a variety of aspects
of a node model’s interface presented to the model user.

For more conceptual information, see Chapter 4 Modeling Framework on
page MC-4-1.

Attribute Interface Descriptions


The primary capability of attribute interfaces is to support modification of
attributes that are promoted from objects within the node model. Attributes
declared as node model attributes are not handled through the attribute
interfaces, because the model developer already has full control over them at
the definition stage. Object-promoted attributes can either be built-in attributes of
modules and connections, or process model attributes that have been
appended to the attribute lists of processors and queues. Using the attribute
interface descriptions component of the attribute interfaces, the node model
developer can apply the following types of changes to the attributes that the
node model offers:

Table 6-2 Control Provided by Node Attribute Interface Descriptions

Mechanism (1)             Description

Attribute Renaming        Provide a new name for an attribute as it appears from the
                          node model user’s point of view. This is the name of the
                          attribute as it is seen on a node object in the Project Editor.

Attribute Merging         Replace a set of attributes with one new attribute. The
                          underlying attributes are tied together in the sense that
                          changes to the new attribute are propagated to each of them.

Attribute Properties      Certain attribute properties, including range, symbol map,
Changes                   comments, and default value, can be altered.

Attribute Status          An attribute can be given one of three status values:
Changes                   promoted, set, or hidden. Promoted attributes appear on a
                          node object without a value. Set attributes appear on the
                          node object with a value chosen by the node model
                          developer. Hidden attributes have a specified value, but are
                          not visible on the node object.


1. See Chapter 4 Modeling Framework on page MC-4-1 for an in-depth description of the mechanisms in this table.

The mechanisms listed in Table 6-2 apply to built-in attributes of node objects
as well as to the object-promoted attributes originating in the node model. For
example, the “condition” attribute of a node object can be renamed or hidden
using a node model’s attribute interfaces. The following figure shows such a
specification in the Node Editor’s Node Interfaces dialog box.

Figure 6-15 Attribute Interface Descriptions for a Node Model

Built-in attributes of the node object have been set to appropriate values and
hidden; the user can no longer change these. Attributes promoted from modules
have been renamed and set to reasonable initial values.


Additional Node Model Interfaces


The Node Model Attribute Interfaces also group together several other parts of a
node’s interface, though these are unrelated to attributes. These are listed in the
following table and described in detail in Chapter 4 Modeling Framework on
page MC-4-1.

Table 6-3 Additional Node Model Interfaces


Interface Description

Keywords Each node model can carry a set of keywords that are used to
identify its relevance to particular network modeling efforts. Users
of the Project Editor can specify keywords of interest to display
only those models that are useful to them in the editor’s Object
Palette.

Comments A textual description of the node model allowing the node model
developer to provide any desired information to the model user.
Typically information in the comments includes: node model
functionality (i.e., what sort of real-world device it represents);
physical link interfaces; significance of major attributes;
compatibility with other models; available statistics, etc.

Supported Node Type Specifies whether the model is capable of being used by fixed,
mobile, or satellite nodes, or any combination of these.

Node Model Derivation


In certain cases, node model developers, or even model users, will find that an
existing node model has all of the capabilities that they require, but in addition
provides too much flexibility to the model user. While providing extra capability
may not seem like a problem, it may in fact be undesirable for a variety of
reasons, which are listed in the following table.

Table 6-4 Reasons to Specialize a Node Model


Problem Details

Complexity can The more flexible and parameterized a model is, the more
overwhelm end-user complex it is for an end-user to configure. In many cases, the
parameters of a general model will not appear familiar to the
end-user and the work required to configure it for each instance
will exceed the end-user’s knowledge. If attribute values can be
specified in advance and match the user’s requirements, then the
node model will be significantly easier to deploy at the network
level.

Flexibility can permit The availability of many attributes may provide the opportunity for
invalid configurations the user to create an invalid or unintended node configuration.

End-user expects The end-user of the node model typically expects a particular
particular presentation interface that matches his experience with similar real-world
devices or other models. If this interface can be realized, the
model will be much more “user-friendly”.


In these problem situations, users are seeking what we can refer to as a
specialization of the node model. A specialized version of the node model is less
general and flexible, but is tailored to the needs of a particular application, or at
least a more specific application than the original model. To create a
specialization of a node model, OPNET Modeler provides a mechanism called
model derivation. Model derivation allows the mechanisms that are available in
the Node Model Attribute Interfaces to be applied to an existing node model.
The result of this operation is a new node model, called the derived node model,
which depends on the original model, called the parent node model. The
parent node model provides all information that is structural, such as the node
model diagram, including modules and connections.

For example, suppose we have a complex and general model R of a router
capable of handling several different protocols P0, P1, and P2, which processes
packets at a selectable speed S, and which has a selectable buffer size B for
holding packets awaiting processing. Now consider that we have an application
that requires a router that can handle only protocol type P0, that has S = 5000
packets/sec., and B = 200 packets. By creating a model R' derived from R, we
can quickly achieve our goal. In the derivation of R' we can set appropriate
attributes of R to their required values (assuming an attribute exists to control
the supported protocols) and we can set the status of these attributes to hidden
to prevent network-level users from changing these settings.
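The derivation of R' amounts to a set of attribute overrides layered over the parent model. The following sketch is illustrative only: the data structures are invented for the example, and the status values mirror those described in Table 6-2.

```c
/* Illustrative sketch of the R -> R' derivation described above. */
typedef enum { PROMOTED, SET, HIDDEN } AttrStatus;

typedef struct {
    const char *name;
    double      value;      /* meaningful only when status is SET or HIDDEN */
    AttrStatus  status;
} AttrSpec;

/* Derived model R': the parent R leaves these attributes promoted; the
 * derivation pins them to the example's values and hides them. */
static const AttrSpec r_prime[] = {
    { "Protocols", 0.0,    HIDDEN },   /* restricted to P0           */
    { "S",         5000.0, HIDDEN },   /* packets/sec                */
    { "B",         200.0,  HIDDEN },   /* packets of buffer capacity */
};

/* A network-level user sees only attributes left with promoted status. */
int user_visible (const AttrSpec *a)
{
    return a->status == PROMOTED;
}
```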


The following figure shows the derivation that would be performed to obtain
R' from R. See Chapter 4 Modeling Framework on page MC-4-1 for an
in-depth description of model derivation.

Figure 6-16 Specialization of Model R to Generate Derived Model R’

Model R' has attributes “B”, “S”, and “Protocols” set to specific values and
hidden.

Statistic Promotion
The issue of customization and control described in the previous section exists
not only for attributes of a node model, but for statistics as well. This section
describes the mechanisms that allow the node model developer to control which
statistics are available to node model users.

Most modules at the node level offer a set of built-in statistics. These are
statistics that are computed automatically and made available by the Simulation
Kernel. In addition, process models can declare and compute
application-specific statistics that appear as local output statistics of their
encompassing queue and processor modules. These sources of statistics
combine into one list of statistics that a node model is capable of reporting.


In general, not all of the statistics of a node model are of interest to its users.
Many built-in statistics of modules are not relevant or interesting for particular
applications. In addition, process models may compute specialized statistics
that are useful only in very particular situations. To avoid overwhelming the
node model user with the complexity of a large statistic set, the node model
developer requires the ability to eliminate those statistics that he considers
unimportant. In fact, because the majority of a node’s statistics are typically not
wanted, OPNET Modeler by default assumes that each statistic should be
prevented from appearing in the node model’s interface. Explicit designation of
the desired statistics is then required and OPNET Modeler provides this
capability with a mechanism called statistic promotion. Statistic promotion is
simply an indication to OPNET Modeler that a model should make a particular
statistic available to its users. Promoted statistics then become available to be
assigned to node statistic probes for data collection during simulation. For more
information about node statistic probes, see Chapter 10 Simulation Design on
page MC-10-1.

For each statistic that is promoted, the node model developer has the
opportunity to make three changes to its presentation. First, the name of the
statistic can be changed to be more meaningful for particular applications. In
particular, by default, each statistic name is prefixed with the name of the
module that generates it. This is necessary to distinguish between identically
named statistics associated with different modules. However, the module
names can confuse node model users, especially users of other
OPNET analysis software products who are unaware of the internal
architecture of the node. For this reason, in almost all cases, some adjustments
to the statistic name are specified when the statistic is promoted.

The second change that can be made is to the statistic’s group. Statistic groups
allow the node model developer to organize statistics into logical groups of the
developer’s own definition. This makes it easier for users to find
and view statistics when they select Choose Results in the Project Editor.
Arranging statistics in groups also enables the model developer to reuse
descriptive names (such as “Throughput (bits/sec)”) for similar statistics in
different contexts.

The third change that can be made is to replace the descriptive comments
associated with the statistic. These comments serve as documentation for the
statistic and can be customized to be appropriate for the anticipated end user.


Node statistic promotion is performed using the Promote Node Statistics
operation of the Node Editor. The following figure shows the dialog box that
supports statistic promotion.

Figure 6-17 Statistic Promotion Dialog Box

The new name of each statistic appears in the first column; units may be
included if desired. New descriptions for each statistic are provided in the
second column; these are the descriptions that can be seen in the Probe Editor.
The new group names of promoted statistics appear in the third column;
statistic names appear within these groups in the Statistic Browser.


Modeling Queues
OPNET Modeler’s queue modules are similar to ordinary processor modules, in
that they can execute arbitrary process models. However, they have the
additional capability to manage multiple subqueues of packets, as previously
described. This arrangement allows greater flexibility than if queues were only
available in special-purpose modules with predefined behavior. On the one
hand, a general-purpose process model that also requires the ability to manage
groups of packets can simply be placed within a queue module, where it can
take advantage of the additional resources of subqueues which are not
available in a processor module. On the other hand, a queue module can be
made to execute any queueing discipline simply by changing its process model
to one that implements the desired behavior.

Certain common queueing disciplines are needed in a wide variety of systems
and are provided as predefined base models. These models are divided into two
broad classes: active queues may forward packets to their output stream of their
own accord, whereas passive queues only forward packets in response to an
access interrupt delivered by another module. The predefined queue process
models are listed in the following table.

Table 6-5 Predefined Queue Process Models


Model Name Description

acb_fifo active, concentrating, bit-oriented, first-in first-out

acb_fifo_ms active, concentrating, bit-oriented, first-in first-out, multi-server

acp_fifo active, concentrating, packet-oriented, first-in first-out

pc_fifo passive, concentrating, first-in first-out

pc_lifo passive, concentrating, last-in first-out

pc_prio passive, concentrating, priority

pf_fifo passive, flowthrough, first-in first-out

prq_fifo passive, requeueing, first-in first-out

For more information, see:

• Queue Object on page MR-2-32

• Process Model/Queuing on page GM-1-1


Abstract Subqueue and Packet Indices


When using a Kernel Procedure from the Subqueue Package to operate on a
subqueue, an index is used to specify the desired subqueue. Alternatively,
special constants may be used for the index parameter to select a subqueue
according to some property other than its index number. For instance, a
constant may be used to select the smallest subqueue, or the one with the
highest mean packet waiting time. These constants are listed in the following
table.

Table 6-6 Subqueue Selection Index Symbolic Constants

Constant                         Description

OPC_QSEL_MAX_PKSIZE              subqueue with the maximum/minimum current
OPC_QSEL_MIN_PKSIZE              number of packets

OPC_QSEL_MAX_IN_PKSIZE           subqueue with the maximum/minimum number
OPC_QSEL_MIN_IN_PKSIZE           of packets, as seen by incoming packets (that is,
                                 before the last packet was inserted)

OPC_QSEL_MAX_DELAY               subqueue with the maximum/minimum
OPC_QSEL_MIN_DELAY               queueing delay of most recent packet

OPC_QSEL_MAX_BITSIZE             subqueue with the maximum/minimum current
OPC_QSEL_MIN_BITSIZE             number of bits

OPC_QSEL_MAX_IN_BITSIZE          subqueue with the maximum/minimum number
OPC_QSEL_MIN_IN_BITSIZE          of bits, as seen by incoming packets (that is,
                                 before the last packet was inserted)

OPC_QSEL_MAX_OVERFLOWS           subqueue with the maximum/minimum number
OPC_QSEL_MIN_OVERFLOWS           of space overflows

OPC_QSEL_MAX_FREE_PKSIZE         subqueue with the maximum/minimum current
OPC_QSEL_MIN_FREE_PKSIZE         number of free packet slots

OPC_QSEL_MAX_FREE_BITSIZE        subqueue with the maximum/minimum current
OPC_QSEL_MIN_FREE_BITSIZE        number of free bits
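The selection that a constant such as OPC_QSEL_MIN_PKSIZE abstracts can be written out by hand. In the following illustrative sketch, a plain array stands in for the per-subqueue packet counts that the Simulation Kernel maintains.

```c
/* Hand-rolled version of the OPC_QSEL_MIN_PKSIZE selection: return the
 * index of the subqueue currently holding the fewest packets.
 * Ties resolve to the lowest-numbered subqueue. */
int min_pksize_subq (const int *pk_counts, int num_subqs)
{
    int best = 0;
    for (int i = 1; i < num_subqs; i++)
        if (pk_counts[i] < pk_counts[best])
            best = i;
    return best;
}
```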


Similarly, a packet position for extraction or insertion may be specified to a
Simulation Kernel procedure either with a physical index (that is, 0 for the first
packet in the queue, 1 for the second, etc.), or with a special constant that
selects the packet at the head or tail of the subqueue, or the packet with the
highest priority as indicated by its “priority” attribute. These constants are listed
in the following table.

Table 6-7 Packet Position Index Symbolic Constants


Constant Description

OPC_QPOS_HEAD head position of subqueue

OPC_QPOS_TAIL tail position of subqueue

OPC_QPOS_PRIO priority-derived position of subqueue

Implementing Priority Queues


Frequently models will require the implementation of priority queueing systems,
in which different types of packets are accorded precedence over others while
waiting to be forwarded to another module. In some cases, packets may be
classified into a discrete set of priority levels. For example, a system might
support just two priorities, high and low. In other cases, a continuous range of
priorities may be needed. This might occur, for example, in a system which
gives higher priority to queued packets the longer they have been in existence.

In either case, priority schemes create an ordering of packets in the queue. The
continuous priority range model is used within subqueues, because a discrete
priority classification can be regarded as a special case of the continuous
system. However, the collection of subqueues within the queue can also be
used as a mechanism to represent discrete priorities. This is done by creating
one subqueue per priority level, and inserting packets into the correct subqueue
based upon their priorities. When a packet is removed from the queue, the
subqueues are scanned in priority order until a non-empty subqueue is found;
this guarantees that the highest priority packet available is always removed first.
This technique is illustrated by the following code fragments.

Figure 6-18 Using Subqueues to Implement Discrete Priorities

/* Insert an arriving packet into the appropriate subqueue.     */
/* The "get_prio()" function returns the priority, which        */
/* may be derived from a packet field, or from the packet's     */
/* "priority" attribute, and is an integer between 0 and        */
/* NUM_PRIOS - 1, where NUM_PRIOS is the number of priority     */
/* levels (and also of subqueues). 0 is the highest priority.   */

prio = get_prio (pk_ptr);
op_subq_pk_insert (prio, pk_ptr, OPC_QPOS_TAIL);

/* Extract the packet with the highest priority, that is,       */
/* the packet at the head of the lowest-numbered subqueue       */
/* that contains packets. As before, NUM_PRIOS is the           */
/* number of priority levels and subqueues.                     */

pk_ptr = OPC_NIL;
for (i = 0; i < NUM_PRIOS; i++)
	{
	if (op_subq_empty (i) == OPC_FALSE)
		{
		pk_ptr = op_subq_pk_remove (i, OPC_QPOS_HEAD);
		break;
		}
	}

if (pk_ptr != OPC_NIL)
	op_pk_send (pk_ptr, OUT_STRM);

Within individual subqueues, the abstract packet position index constant
OPC_QPOS_PRIO provides a convenient method for inserting or extracting
packets while respecting priority ordering. For this purpose, the priority of a
packet is specified by its “priority” attribute, which can be set with the Kernel
Procedure op_pk_priority_set(). If the OPC_QPOS_PRIO constant is passed as
the packet position to the Kernel Procedure op_subq_pk_insert(), the packet of
interest will be inserted ahead of all packets of lower priority and behind all
packets of higher or equal priority. Similarly, if this constant is passed to
op_subq_pk_access() or op_subq_pk_remove(), these procedures will operate
on the highest priority packet present in the subqueue. See
Chapter 21 Subqueue Package on page DES-21-1 for more information.
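The insertion rule, ahead of all lower-priority packets and behind all packets of higher or equal priority, amounts to a stable priority insert. The following illustrative sketch uses an integer array of priority values standing in for a subqueue (head at index 0, higher number meaning higher priority); it is not the Kernel's implementation.

```c
/* Insert a packet of priority pk_prio into prios[0..*len-1], ordered
 * from head, the way OPC_QPOS_PRIO does: behind every packet of higher
 * or equal priority, ahead of every packet of lower priority. Equal
 * priorities therefore remain in FIFO order. */
void prio_insert (int *prios, int *len, int pk_prio)
{
    int pos = 0;
    while (pos < *len && prios[pos] >= pk_prio)   /* equal: stay behind */
        pos++;
    for (int i = *len; i > pos; i--)              /* shift tail down    */
        prios[i] = prios[i - 1];
    prios[pos] = pk_prio;
    (*len)++;
}
```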

A simple priority queueing process model called pc_prio (passive concentrating
priority queue) is included with OPNET Modeler as part of the standard models
library. This process model operates only on one subqueue, using the “priority”
attribute of packets to order them. It can be employed in its default form for many
priority queueing applications, or modified as desired, and serves as a basic
example of a priority queue implementation.

Implementing Finite Queues


Limitations in buffer space can be modeled by specifying the maximum number
of packets or bits that can be held by each subqueue of a queue module.
Usually, either a packet limit or a bit limit is set. However, if both are used, they
are applied as separate criteria, so that if either limit is reached, further
insertions into the subqueue will be prevented. These limits are set with the pk
capacity and bit capacity attributes of subqueues. The default value for these
attributes is “infinite”.

Limitations imposed on subqueue sizes have no effect on queueing behavior
until a packet insertion is attempted that would cause the limits to be exceeded.
For example, if a subqueue with a capacity of fifty packets already contains fifty
packets, then calling the Kernel Procedure op_subq_pk_insert() to insert an
additional packet results in a failure: instead of returning the status flag
OPC_QINS_OK as usual, the procedure returns OPC_QINS_FAIL, indicating
that the queue insertion was not successful. The Simulation Kernel does not
resolve the contention for buffer space. Instead, the process model must take


some explicit action. Depending on the nature of the process being modeled, it
might choose to insert the new packet at the expense of a packet already in the
queue, which would first have to be removed from the queue to permit a
successful insertion, or it might choose to discard the newly arriving packet
instead.
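The failure-and-recovery pattern can be sketched with a toy bounded buffer. This is illustrative only: it mirrors the OPC_QINS_OK / OPC_QINS_FAIL outcome of op_subq_pk_insert() on a capacity-limited subqueue, but the types and the drop policy shown are invented for the example.

```c
#include <stdbool.h>

#define PK_CAPACITY 3

typedef struct {
    int pks[PK_CAPACITY];
    int count;
    int overflows;      /* counterpart of the "overflows" statistic */
} BoundedQ;

/* Returns true on success (like OPC_QINS_OK), false on failure. */
bool q_insert (BoundedQ *q, int pk)
{
    if (q->count == PK_CAPACITY) {
        q->overflows++; /* count the failed insertion               */
        return false;   /* caller must drop or make room explicitly */
    }
    q->pks[q->count++] = pk;
    return true;
}

/* One possible explicit policy: drop the oldest packet to admit the new. */
void q_insert_drop_oldest (BoundedQ *q, int pk)
{
    if (!q_insert (q, pk)) {
        for (int i = 1; i < q->count; i++)   /* remove head packet */
            q->pks[i - 1] = q->pks[i];
        q->count--;
        q_insert (q, pk);
    }
}
```

The dual policy, discarding the newly arriving packet instead, is simply to destroy the packet whenever q_insert() fails.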

Each time an insertion fails due to lack of capacity in a subqueue, a queue
statistic called overflows is increased by one. This statistic can either be written
to the simulation output file and viewed in the Results Browser or used
dynamically by process models as the simulation progresses, to monitor and/or
modify the behavior of the queue.

Modeling with External Systems


The esys module is a superset of the queue module, containing all of the
attributes of a queue. If the possibility exists that a model will be used in a
cosimulation at some point, even if not immediately planned, use the esys
module instead of the queue or processor object. It is much more difficult to
upgrade a queue or processor to an external system than it is to start out with
the external system in the first place. When upgrading to an external system
from an existing processor or queue, you must delete the previous module and
then create an esys module, set its attributes, and establish streams and other
links.

Because OPNET Modeler treats an external system as a black box, it has no
way to know what happens to the data it passes through the external system.
The model developer must decide on the meaning and format of the data that
is to be passed through the ESD in both directions. On the OPNET Modeler
side, the model developer is responsible for building the ESD and constructing
the process model that serves as an interface between the external system and
OPNET Modeler. The external code determines the behavior of the external
system and then sends data through the ESD’s esys interfaces for the module’s
process model to handle.

Modeling Layered Protocols


Layered protocols are used in many communications networks to decompose
an otherwise complex system into more manageable parts. Each of these parts,
or layers, has a well-defined interface to the adjacent layers, offering a specific
set of services to the higher layer while acting as a client of the services
provided by the lower layer. OPNET Modeler’s block-structured approach to
modeling is well-suited to modeling layered protocols. Each layer in a protocol
can be modeled by a processor or queue module. In some cases, it might be
appropriate to further decompose each layer into multiple modules.

The following figure shows the concept of a layered protocol as described by the
ISO OSI Model, and an analogous layered node model. Each layer of the OSI
Model has a corresponding processor module in the node model, and these
modules exchange data over packet streams only with the modules that
represent the adjacent layers. Just as each layer of the OSI Model is typically
specified individually, so each layer of the node model can be implemented by
a processor module executing a process developed separately from the others.

Figure 6-19 OSI / OPNET Modeler Layering Analogy

[Figure: the seven OSI layers (Application, Presentation, Session, Transport,
Network, Data Link, and Physical) shown beside an OPNET Modeler process
stack, with one processor module corresponding to each layer.]

Many of the example models supplied with OPNET Modeler are designed this
way. For example, the X.25 node model, which is in the models/std/x25
directory, incorporates an application processor, the X.25 DCE or DTE module
(which implements the network layer), and the LAPB module (which implements
the data-link layer). The following figure shows the X.25 DTE node model.

Figure 6-20 Example of a Layered Protocol Model

[Figure: the x25_dte_adv node model.]
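The one-module-per-layer structure used by such models can be sketched in plain C as a chain of handler functions sharing a common signature. The layer functions and the integer arithmetic below are purely illustrative stand-ins for real packet processing; the point is that each layer can be developed and replaced independently, exactly as each processor module executes its own process model.

```c
#include <assert.h>
#include <stddef.h>

/* Each protocol layer is modeled as an independent handler with a common
 * signature, analogous to one processor module per layer in a node model. */
typedef int (*layer_fn)(int payload);

static int network_layer(int p)  { return p + 10;  }   /* e.g., add routing info */
static int datalink_layer(int p) { return p + 100; }   /* e.g., add framing */
static int physical_layer(int p) { return p; }         /* e.g., transmit as-is */

/* A packet travels down the stack one adjacent layer at a time; no layer
 * bypasses its neighbor, mirroring the packet-stream wiring between modules. */
static const layer_fn stack[] = { network_layer, datalink_layer, physical_layer };

int send_down_stack(int payload)
{
    for (size_t i = 0; i < sizeof stack / sizeof stack[0]; i++)
        payload = stack[i](payload);
    return payload;
}
```

Swapping one layer's implementation, such as replacing LAPB with a different data-link protocol, then amounts to substituting a single entry in the chain without touching its neighbors.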

Interfaces between adjacent layers normally transfer control information, such
as a command value or an address, in addition to packet data. OPNET Modeler
provides the ICI (Interface Control Information block) to pass such parameters
between layers. This encourages modularity between different protocols, as it
allows the control information to remain separate from the data packet. Most
protocol layers do not need to examine a data packet passed by a higher layer
at all, but can use the information in the ICI to control their processing. In
particular, the format of the data packet remains local to a particular
interface, and does not depend on the packet format at higher or lower layers
of the protocol stack. Instead, a layer combines information from an ICI with
the higher-layer data packet into a packet format suitable for the lower layer
(or unpacks the lower layer’s packet, packaging information from the packet
header into an ICI for the higher layer). The process of combining data into
lower-level packets is illustrated in the diagram below. For more examples of
this strategy, examine the x25 model and the process models in the standard
model directories.


Figure 6-21 Interfacing Adjacent Layers of Protocol Stack

[Figure: a Level 3 processor passes a Level 3 packet, together with a Level 3
ICI carrying control information for Level 2, to the Level 2 processor; the
Level 2 processor builds a Level 2 packet that encapsulates the Level 3 data
(using the Level 3 ICI) and passes it, with a Level 2 ICI carrying control
information for Level 1, to the Level 1 processor.]

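The hand-off shown in the figure can be sketched in plain C. The structures and field names below are illustrative assumptions, not the real packet or ICI formats (an actual model would use Kernel Procedures such as op_ici_create() and packet field operations); the sketch shows why layer 2 can treat the layer-3 packet as opaque while the ICI carries the control information.

```c
#include <assert.h>

/* Illustrative layer-3 data unit and the ICI that accompanies it. The ICI
 * carries control information (here, a destination address) separately from
 * the packet, so layer 2 never needs to parse the layer-3 payload. */
typedef struct { int payload; }   l3_packet;
typedef struct { int dest_addr; } l3_ici;

/* Layer-2 packet: a header built from the ICI plus the encapsulated
 * layer-3 packet, whose internal format layer 2 treats as opaque. */
typedef struct {
    int       dest_addr;   /* copied from the layer-3 ICI */
    l3_packet data;        /* encapsulated higher-layer packet */
} l2_packet;

/* Going down: combine the higher-layer packet and its ICI into a packet
 * format suitable for the lower layer. */
l2_packet l2_encapsulate(l3_packet pk, l3_ici ici)
{
    l2_packet frame;
    frame.dest_addr = ici.dest_addr;
    frame.data = pk;
    return frame;
}

/* Coming up: header fields are repackaged into an ICI for the higher layer,
 * and the encapsulated packet is returned unchanged. */
l3_packet l2_decapsulate(l2_packet frame, l3_ici *ici_out)
{
    ici_out->dest_addr = frame.dest_addr;
    return frame.data;
}
```

Because only the ICI is shared across the interface, the layer-3 packet format can change without any modification to the layer-2 process model.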
Modeling Shared Resources


In general terms, a shared resource is an entity of limited availability that
provides a service to a number of competing clients. Typically, a client
requests access to the resource and, depending on the resource’s availability,
access may be granted immediately, or the client may have to wait. When
access is granted, the client retains the resource for a period of time while
the task that required it executes. When the task is completed, the resource
is released so that other competing clients may access it. Real-world
examples of shared resources include the CPU of a computer system (or one of
the CPUs in a multiprocessor system), a printer or other output device, a
communications channel, a bus for peripheral devices or expansion cards, and
a page of shared random-access memory.

Because it is not always possible for a request to be serviced immediately upon
its arrival, there must be some provision for buffering requests until the resource
becomes available. A variety of schemes for managing these requests is
possible, depending upon the desired performance goals, such as maximizing
throughput, minimizing delay, minimizing the variation in delay (a property
called fairness), or providing prioritized access to the resource.

A queue module is well suited to address these issues, as its built-in subqueues
provide the necessary facilities for buffering requests (implemented as packets),
and the ability to execute arbitrary process models allows different access
disciplines to be implemented and compared. The process model managing a
shared resource controls the manner in which requests are queued when the
resource is not available, as well as the method by which the client actually
controls the resource after access has been granted.

When a packet representing a request is received, the process model must
determine whether the corresponding resource is available. If not, the request
must be queued according to the chosen discipline. The queue models
described in Modeling Queues on page MC-6-26 can serve as a basis for
implementation. When the resource becomes available, the requesting process
can be notified in one of two ways, according to the requirements of the
simulation:

• The resource manager can return the request packet to the requesting client,
setting a field to indicate that the request was granted and the client now has
access to the resource. The client then begins the process for which the
resource was needed. When it is done with the resource, it notifies the
resource manager (which can be done either by a remote interrupt or by
sending another packet), so that other requests can be serviced.
This approach is called an allocation-oriented scheme, because it models
the case where a resource is allocated by a process for its exclusive use. An
example of this might be a tape drive allocated by a user of a computing
facility to read data from a tape, as no other users can access the tape drive
until the first user releases it (after finishing with the tape).

• Alternatively, the resource manager can retain the request for a specific
length of time (the service time), which may be computed according to
parameters specified in the request packet. In this case, the processing is
modeled within the resource manager, instead of within the client. At the end
of the service period, the manager returns the request packet to the client,
thereby notifying it that the requested processing has been completed.
This approach is called a job-oriented scheme, because it models the case
where clients submit jobs or tasks to be performed using a resource, and are
notified when their job is completed. To continue the previous example, users
of the computer facility may submit print jobs containing files to be printed.
These jobs may be buffered in a queue and are serviced when the printer
becomes available, at which point the users are notified that their documents
are available. Another example is the facility’s time-sharing computer
system, in which users’ computation tasks are serviced in time slices. This
way, many jobs can use the shared resource simultaneously (or at least
apparently so).
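The allocation-oriented scheme can be sketched in plain C with a FIFO list of pending requests. All names here are illustrative assumptions; a real model would buffer request packets in subqueues and signal releases with remote interrupts or notification packets.

```c
#include <assert.h>

#define MAX_PENDING 8

/* One resource with a FIFO of waiting client ids. A holder of -1 means the
 * resource is currently free. */
typedef struct {
    int holder;                 /* client currently granted the resource */
    int pending[MAX_PENDING];   /* waiting clients, FIFO order */
    int n_pending;
} resource;

/* Request access: grant immediately if the resource is free, otherwise
 * buffer the request. Returns 1 if granted now, 0 if queued. */
int res_request(resource *r, int client)
{
    if (r->holder < 0) {
        r->holder = client;
        return 1;
    }
    r->pending[r->n_pending++] = client;  /* assumes MAX_PENDING not exceeded */
    return 0;
}

/* Release: the holder notifies the manager, which grants the resource to
 * the longest-waiting client, if any. Returns the new holder, or -1. */
int res_release(resource *r)
{
    if (r->n_pending == 0) {
        r->holder = -1;
        return -1;
    }
    r->holder = r->pending[0];
    for (int i = 1; i < r->n_pending; i++)  /* shift the FIFO forward */
        r->pending[i - 1] = r->pending[i];
    r->n_pending--;
    return r->holder;
}
```

A job-oriented manager would differ only in who models the work: instead of granting and waiting for an explicit release, it would hold each request for a computed service time (for example, via a self-interrupt) and return the request packet when that time expires.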

Each approach has its advantages and disadvantages. The allocation-oriented
scheme requires the cooperation of the client processes, which must notify the
resource manager when they release the resource so that other clients may gain
access; the job-oriented scheme avoids this requirement. Moreover, the
allocation-oriented scheme gives the client total control of the resource.
While this is occasionally desirable, it precludes “round-robin” and other
scheduling strategies that provide shared access to the resource, which the
job-oriented scheme supports. On the other hand, the allocation-oriented
scheme can model cases in which a client must reserve a number of separately
managed resources to perform a given task.
