6 Node Domain
Network models in OPNET Modeler are constructed from two broad classes of
components: communication nodes and communication links. The internal
structure of these objects, for the most part, is not visible at the network level.
This section presents the methods used to specify the internal structure and
therefore much of the capabilities of communication nodes. For more
information about communication links, see Chapter 9 Communication
Mechanisms on page MC-9-1.
Both the structural complexity of network nodes and their scope of activity can
vary greatly depending on the system which is modeled. On the low end of
complexity and function, simple terminals (connected to a host computer via
serial connections) can be considered nodes of a star network. On the other end
of the complexity spectrum, the network management center of a nationwide
public packet network can also be considered a single hub node. The table below
presents some typical examples of node models that can be built with
OPNET Modeler.
Workstation: Can generate and receive transfers of files or sparse packets and manage several concurrent network connections.

Packet Switch: Supports large numbers of incoming and outgoing data links and performs packet routing at high speeds.

Remote Data Sensor: Acts as a simple source of packets, usually transmitting them in bursts.
Node models are naturally expressed as collections of high-level software
objects with well-defined interfaces. In fact, the International
Organization for Standardization's Open Systems Interconnection (OSI)
reference architecture for data communications networks is based on a
block-structured approach, where each block corresponds to a different
protocol layer. Like the OSI architecture, the node models in
OPNET Modeler contain blocks which operate on packets, making them well
suited to modeling protocol layers. This aspect
of OPNET Modeler is described in Modeling Layered Protocols on
page MC-6-30.
Node models specify the manner in which the inputs and outputs of various
modules are connected using objects called connections. Two types of
connections carry information: one carries data packets, and one transmits individual values.
A module can send data packets over its output packet streams, and receive
them from its input packet streams. It may also send individual values over
output statistics, and receive them from input statistics. The connection from a
source module’s output stream to a destination module’s input stream is called
a packet stream object. Likewise, the connection between a source module’s
output statistic and a destination module’s input statistic is called a statistic wire
object. (The connection objects are usually referred to as packet streams and
statistic wires, or simply streams and statwires, respectively). Other more
abstract intermodule communication mechanisms are available. For more
details, see Chapter 9 Communication Mechanisms on page MC-9-1.
The next two sections define the capabilities of the various types of modules and
connections, and illustrate their specific uses within node models. The
remaining sections describe the techniques used in creating specific types of
models, including layered protocols, queues, and shared resources.
Module Definitions
Modules represent those parts of a communication node within which data are
generated, consumed, or processed. There are several different types of
modules, distinguished by their function within a node: processors, queues,
generators, receivers, and transmitters. Some types of modules (processors,
queues, and generators) have functions that are strictly internal to a node, while
others (transmitters and receivers) have external connections to data
transmission links. The behavior of processors and queues can be specified in
precise detail by means of user-specified process models; other modules have
predefined behavior which can be modified only by setting the values of their
attributes. Some module types are limited in the way they can be connected to
other modules.
A module’s internal algorithm is invoked when an external event that will affect
the state of the module occurs. This is a consequence of the event-driven
simulation methodology used in OPNET Modeler. For example, a module’s
process model is invoked when a packet arrives on one of its packet input
streams, or a statistic to which it is connected changes. Invocations can also be
scheduled at regular intervals, or at specific times by the module itself (to model
a timer expiring, for instance).
The following sections describe each module type, giving an overview of its
capabilities and applications.
For additional details about specific types of modules, including attributes and
output statistics, see Node Reference on page MR-2-1.
Processor Modules
Figure 6-1 Standard Icon Used for a Processor Module
Processor modules can act as traffic generators when you specify the correct
process model for the module’s “model” attribute. When you specify such a
model, attributes that configure traffic (interarrival times, packet size, probability
density functions) are promoted and appear in the processor module’s attribute
list. The standard model set includes process models implemented as traffic
generators for several traffic types, including bursty traffic and self-similar traffic.
While processor modules are often connected to other modules via pairs of
streams (input streams and output streams), this is not always the case. For
instance, it is possible to use a processor module as a “controller” within a node
model. In such cases, the processor often communicates directly with other
modules through statistic wires or remote interrupts. In fact, it might not use any
packet streams at all. Processor modules can also be used as traffic sources
(generators) or sinks, in which case they might have only output streams or only
input streams.
Node models may employ both processor modules and queue modules
(described in the next section) to implement general processing of packets.
Normally, a processor module would be used in cases where a packet can be
completely processed in response to the interrupt associated with its arrival or
generation. If this is not the case, and it is necessary to buffer the packet while
awaiting a later event to complete processing, then a queue module, with its
additional buffering resources, is likely to be the better choice. This is particularly
true if multiple packets must be buffered simultaneously.
Like all objects, processors have a number of built-in attributes that configure
their behavior. These attributes can be specified in the node domain, typically
using the graphical user interface of the Node Editor. However, processors are
one of the special objects that support a “model” attribute, in this case to select
the processor’s root process model. When selecting a process model, a
processor’s attributes may change: new attributes may appear, existing
attributes may change values, and certain attributes may disappear. This
depends on the process model’s declared model attributes and on its attribute
interfaces. Process model attribute interfaces allow the process model
developer to specify values in advance for processor built-in attributes. At the
same time, a processor attribute can be given a status of hidden which means
that it will exist and have a specified value, but will not appear on the processor’s
attribute list in the user interface. The process model's model attributes allow
new attributes to be defined and automatically inherited by the processor
object.
Queue Modules
Figure 6-2 Standard Icon Used for a Queue Module
The primary difference between processors and queue modules is that queues
contain additional internal resources called subqueues, as illustrated in the
figure below. Subqueues facilitate buffering and managing a collection of data
packets. While it is possible to implement this functionality with ordinary
processor modules, the use of subqueues, together with the Kernel Procedures
that manipulate them, provides greater flexibility and eases the implementation
of a variety of queueing disciplines. Moreover, subqueues automatically compute
a number of statistics about their operation, which may be collected in a
straightforward manner by using probes. For more information, see Probe
Reference on page MR-4-1.
[Figure 6-3: Subqueues within a queue module; each subqueue, such as subqueue (k - 3), buffers an ordered collection of packets from packet 0 at the head to the most recent arrival at the tail.]
The capacity of each subqueue to hold data is unlimited by default, but a limit
may be set on the number of packets or the total size of all packets (or both)
within a subqueue. It is up to the processes in the queue to determine what
action to take when subqueues become full: packets may be removed to create
space for new arrivals, or the new arrivals may be discarded. The Simulation
Kernel merely provides notification that a subqueue can no longer accept
additional packets when an insertion is attempted.
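The division of labor described above, in which the Simulation Kernel merely refuses a full insertion and the process model decides the drop policy, can be sketched in plain C. All names below are hypothetical stand-ins, not OPNET Kernel Procedures.

```c
/* Illustrative sketch (not the OPNET API): a subqueue with a
 * packet-count limit. Insertion fails when the subqueue is full; the
 * caller, like an OPNET process model, decides whether to drop the
 * new packet or make room by removing an old one. */
#define SUBQ_CAPACITY 4

typedef struct {
    int packets[SUBQ_CAPACITY]; /* stand-in for packet pointers */
    int count;
} Subqueue;

/* Returns 1 on success, 0 if the subqueue cannot accept the packet. */
int subq_insert(Subqueue *sq, int pk)
{
    if (sq->count >= SUBQ_CAPACITY)
        return 0; /* kernel-style notification: insertion refused */
    sq->packets[sq->count++] = pk;
    return 1;
}

/* Policy A: drop the newly arriving packet when the subqueue is full. */
int insert_drop_new(Subqueue *sq, int pk)
{
    return subq_insert(sq, pk);
}

/* Policy B: drop the oldest packet (head) to make room, then insert. */
int insert_drop_head(Subqueue *sq, int pk)
{
    if (sq->count >= SUBQ_CAPACITY) {
        int i;
        for (i = 1; i < sq->count; i++) /* remove head, shift the rest */
            sq->packets[i - 1] = sq->packets[i];
        sq->count--;
    }
    return subq_insert(sq, pk);
}
```

Either policy is legitimate; which one a real process model uses depends entirely on the protocol being modeled.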
Because the user controls the process model executed by a queue, it is possible
to model any queueing protocol by defining the manner in which the subqueues
are accessed and managed. Within a module, subqueues may be selected
directly by a physical index, or via abstract indices, which allow the process
model to choose a subqueue based on its state relative to other subqueues. For
example, the shortest subqueue may be chosen, or the one with the largest
queueing delay. Likewise, packets within a subqueue may be accessed directly
by index number, or they may be retrieved abstractly by requesting the packet
at the head or the tail of the subqueue, or the packet with the highest priority.
For more details on how these features can be used to implement a particular
queueing discipline, see Modeling Queues on page MC-6-26.
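As a self-contained illustration of abstract subqueue selection (with hypothetical names, not the OPNET API), choosing the shortest subqueue reduces to scanning the per-subqueue packet counts:

```c
/* Illustrative sketch: selecting a subqueue abstractly by its state
 * rather than by a fixed physical index. Here the "shortest" subqueue
 * (fewest buffered packets) is chosen, mirroring abstract-index access
 * within a queue module. */
#define NUM_SUBQS 4

/* Returns the index of the subqueue holding the fewest packets. */
int shortest_subq(const int pk_counts[NUM_SUBQS])
{
    int i, best = 0;
    for (i = 1; i < NUM_SUBQS; i++)
        if (pk_counts[i] < pk_counts[best])
            best = i;
    return best;
}
```

A selection by largest queueing delay would follow the same pattern, comparing per-subqueue delay estimates instead of packet counts.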
Esys Modules
Figure 6-4 Standard Icon Used for an esys Module
Double-clicking on an esys module opens the Process Editor, in which you can
edit the esys module’s process model. To create or edit the esys module’s ESD,
click the Edit ESD Model button in the esys module’s Attributes dialog box.
When an esys module uses an ESD model, the module's process model acts
as the interface between the external system and the rest of
OPNET Modeler.
Transmitter Modules
Figure 6-5 Icons Used for Transmitter Modules
Transmitter modules can collect packets from one or more input packet streams
and relay them over corresponding channels within the communications link. A
packet received on a given input stream is transmitted over the channel with the
same index number. Each channel can have its own data rate, which can be
used with the size of the packet to determine the transmission time. Packets
arriving on an input stream while the corresponding channel is busy with a
previous packet are automatically queued in a buffer. This buffer has a default
capacity of 1000 packets of any size.
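The relationship between a channel's data rate, a packet's size, and its transmission time can be sketched as follows. The function names are illustrative, not OPNET Kernel Procedures.

```c
/* Sketch of how a transmitter channel's data rate combines with a
 * packet's size to yield its transmission time. A 1024-bit packet on
 * a 9600 bit/sec channel, for example, occupies the channel for
 * 1024/9600 seconds. */
double transmission_time(double pk_size_bits, double data_rate_bps)
{
    return pk_size_bits / data_rate_bps;
}

/* Completion time of a packet that arrives while the channel is busy:
 * it waits in the transmitter's buffer until the channel frees up. */
double completion_time(double arrival_time, double channel_free_time,
                       double pk_size_bits, double data_rate_bps)
{
    double start = (arrival_time > channel_free_time)
                       ? arrival_time : channel_free_time;
    return start + transmission_time(pk_size_bits, data_rate_bps);
}
```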
[Figure 6-6: A transmitter module carries packets from other modules in the node model out to other node(s) in the network model.]
Receiver Modules
Figure 6-7 Icons Used for Receiver Modules
Receiver modules can distribute packets to one or more output packet streams
upon receiving them over corresponding channels within the communications
link. A packet received on a given input channel is relayed to the output stream
with the same index number. Channels are subordinate objects of receiver
modules. The receiver contains them within its “channel” attribute, which is a
compound attribute. The “channel” attribute’s value is a compound attribute
object, which is the parent of all of the contained channels (refer to
Chapter 4 Modeling Framework on page MC-4-1 for more information about
compound attributes and subordinate objects). Each channel maintains a set of
attributes that control various aspects of packet reception. Some of these
attributes are also used to determine compatibility between receivers,
transmitters, and link objects to form viable links in the Project Editor.
Specifically the “data rate” and “packet formats” attributes of channels are used
for point-to-point and bus transceiver channels. For more information, see Link
Consistency on page MC-7-12.
[Figure 6-8: A receiver module delivers packets arriving from other node(s) in the network model to other modules in the node model.]
Connection Definitions
Connections represent the communication paths and associations between the
various modules in a node. There are three types of connections in node
models: packet streams, which support the flow of data packets between
modules, statistic wires, which support the transmission of numerical state
information between modules, and logical associations, which indicate a binding
between two modules that allows certain pairs of modules to perform a function
together. Note that while connections are the most commonly used means of
communication between models, it is possible to transfer data between modules
via other means. For more information, see Chapter 9 Communication
Mechanisms on page MC-9-1.
Packet Streams
Figure 6-9 Packet Stream representation
Packet Stream
Packet streams are connections that carry data packets from a source module
to a destination module. They represent the flow of data across the hardware
and software interfaces within a communication node. They are also regarded
as completely reliable (that is, no packets are lost and no errors are introduced)
and of potentially infinite bandwidth (because an arbitrarily large packet can
traverse a stream with no delay). Furthermore, streams have an unlimited
capacity to buffer data packets at the destination module in the order of their
arrival.
OPNET Modeler provides three different methods for transferring a packet over
a stream and notifying the destination module of its arrival: scheduled, forced,
or quiet. In a scheduled arrival, a “stream interrupt” is scheduled for the
destination module, occurring after all other events of lower priority previously
scheduled for that time. In a forced arrival, the “stream interrupt” occurs
immediately, before any other scheduled events. Finally, with a quiet arrival, no
“stream interrupt” occurs at all, and the packet remains buffered within the
stream. The destination module can obtain the packet only if it explicitly checks
for its presence when it responds to some other event. (For a detailed
description of various interrupt methods, see Interrupt Methods on
page MC-4-91 and Packet Streams on page MC-9-3.)
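The ordering difference between scheduled and forced arrivals can be pictured with a simple event list. This is a conceptual sketch with invented names, not the Simulation Kernel's actual event list.

```c
#include <string.h>

/* Conceptual sketch of the delivery orderings: a "forced" arrival is
 * processed immediately, ahead of anything already pending, while a
 * "scheduled" arrival is appended after events already scheduled for
 * that time. */
#define MAX_EVENTS 8

typedef struct {
    int events[MAX_EVENTS];
    int count;
} EventList;

/* Scheduled arrival: runs after previously scheduled same-time events. */
void schedule_arrival(EventList *el, int ev)
{
    el->events[el->count++] = ev;
}

/* Forced arrival: preempts everything already in the list. */
void force_arrival(EventList *el, int ev)
{
    memmove(&el->events[1], &el->events[0],
            (size_t)el->count * sizeof el->events[0]);
    el->events[0] = ev;
    el->count++;
}
```

A quiet arrival would add no event at all: the packet simply waits in the stream until the destination module checks for it while handling some other event.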
Output streams of generators and receivers (that is, modules which have no
process models) use the type of arrival specified by the “intrpt method” attribute
of the stream. For output streams of processors or queues, the module’s
process model overrides the value of this attribute by sending the packet to the
output stream using one of the Kernel Procedures op_pk_send(),
op_pk_send_delayed(), op_pk_send_quiet(), or op_pk_send_forced(). See
Packet Package on page DES-12-1 for more information.
Packet streams support the introduction of a delay between the time a packet is
placed on a stream by the source module and the time it arrives at the
destination module. By default, this value is zero, and in many cases this is
appropriate. However, a nonzero value may be used to model a constant
processing time, propagation delay, or transfer rate. Furthermore, a process
model executing in a processor or queue module may specify its own delay
when it places a packet on an output stream. In this way the delay may be
computed as a function of the size or contents of the transferred packet, for
example.
Packet streams are normally used to carry the actual data packets that the
modeled communication node is intended to process, rather than control data
which represents communication between adjacent modules. While it is
possible to send such control data between processor modules over packet
streams, it is often more appropriate to use other mechanisms, such as statistic
wires or ICIs (Interface Control Information structures). The use of ICIs to
communicate control data between layers of a protocol stack is shown in
Modeling Layered Protocols on page MC-6-30. For more information about
communication between modules, see Chapter 9 Communication Mechanisms
on page MC-9-1.
Statistic Wires
Figure 6-10 Statistic Wire Representation
Statistic Wire
Statistic wires carry data from a source module to a destination module. Unlike
packet streams, which convey packets, statistic wires convey individual values.
They are generally used as an interface by which the source module can share
certain values with the destination module, and thereby provide information
regarding its state. They are also used as a signalling mechanism, allowing the
source module to notify the destination module that a particular condition of
interest has been reached.
Each module within a node has a set of local output statistics whose values are
updated at appropriate times during the simulation. It is this set of statistics that can
act as the sources of statistic wires. Processor and queue modules, which can
execute user-defined process models, have a set of statistics whose
computation and interpretation are defined by their process model. These
statistics are updated when the process model calls the Kernel Procedure
op_stat_write() (refer to Chapter 5 Process Domain on page MC-5-1 for more
information on user-defined local output statistics). Other modules have a
predefined set of output statistics whose values are maintained automatically
by the Simulation Kernel.
Processor and queue modules also have a set of input statistics, which act as
destinations for statistic wires. Process models within these modules can
access the current value of a particular input statistic by calling the Kernel
Procedure op_stat_local_read(). The statistic wire binds together an output
statistic and an input statistic, so that their values change in a linked manner. In
this way the value sharing described above is accomplished. Note that an
output statistic can be the source of many statistic wires, and similarly an input
statistic can be the destination of more than one statistic wire.
When the value of a statistic changes, the destination module can be notified by
means of a “statistic interrupt”. The conditions under which this interrupt occurs
can be controlled by means of attributes of the statistic wire, called statistic
triggers. For instance, a wire can be set to cause an interrupt when its statistic
becomes larger than a particular value, or when it becomes smaller than its
previous value. If no triggers are enabled, then no “statistic interrupts” will occur,
and the destination module must poll the input statistic on its own to determine
if its value has changed.
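A minimal sketch of trigger evaluation, assuming just the two trigger conditions mentioned above; the structure and names here are hypothetical, not the statistic wire's actual attribute set.

```c
/* Sketch of statistic-trigger evaluation: given a statistic's previous
 * and new values, decide whether the wire should deliver a "statistic
 * interrupt" to its destination module. Two triggers from the text are
 * modeled: crossing above a threshold, and falling below the previous
 * value. */
typedef struct {
    int    rising_enabled;   /* interrupt when value crosses threshold */
    double threshold;
    int    falling_enabled;  /* interrupt when value decreases */
} StatTriggers;

/* Returns 1 if any enabled trigger fires for this value change. */
int stat_interrupt_fires(const StatTriggers *t, double prev, double next)
{
    if (t->rising_enabled && prev <= t->threshold && next > t->threshold)
        return 1;
    if (t->falling_enabled && next < prev)
        return 1;
    return 0; /* no trigger: destination must poll on its own */
}
```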
Statistic wires are typically used in one of two ways within node models. First,
they may be employed to dynamically monitor the state of another component
of a communication node. Often this is an “unintelligent” physical device like a
transmitter or receiver, but it might also be a queue module, for example, if a
processor needed to alter its behavior based on the size of a queue. In these
cases, the wire gives one module direct visibility into another module's state.

[Figure 6-11: Statistic wires used as a "busy" wire and a "deference" wire between modules]
Logical Associations
Logical associations are special connections that do not actually carry data
between modules. In fact, logical associations do not exist during simulation, but
are used purely as specification devices. The purpose of a logical association is
to indicate that a relationship exists between two modules in a node model. The
existence of this relationship is used to interpret the node model’s structure.
The role of the logical transceiver association is to indicate that a transmitter and
a receiver function as a pair. While the objects exist independently,
OPNET Modeler must sometimes choose which transceivers to use when
attaching nodes to links in the Project Editor. Each duplex point-to-point link, for
example, makes use of one point-to-point transmitter and one point-to-point
receiver in each of the nodes to which it is connected. Each node may contain
multiple modules that can act as these “termination points” for the link. If it is
important to the node modeler that a particular transmitter and receiver be used
for the same attached link, then this requirement can be expressed by creating
a logical association between them. This same capability can be used for bus
transmitters and receivers, though it is only applicable on the end of the tap that
attaches the node to the bus.
The following figure shows the effect of logical associations when creating
point-to-point links in the Project Editor.

[Figure 6-12: A logical transceiver association selects the transmitter-receiver pair in each node that terminates a duplex point-to-point link to a remote node.]
The model’s interface dictates how one can use the model and control it
externally, without modifying any of its internal specifications.
Node model developers have a perspective that is the mirror image of the node
model user's. The developer must determine what the model's interface should
offer to the model user. This section describes the components that may
constitute a node model’s interface. Many of the concepts rely on the more
general modeling approach of OPNET Modeler, which is described in
Chapter 4 Modeling Framework on page MC-4-1.
Node model attributes are generally used to provide additional means of control
to node model users. By exposing a characteristic of a node model and allowing
it to be modified externally (i.e., on a per-instance basis), the node model gains
generality and can be more broadly re-used in different modeling situations, a
principle that is described in Chapter 4 Modeling Framework on page MC-4-1.
The same approach is supported by process model attributes, which allow new
attributes to be introduced for modules in the Node Domain. This raises the
question of when to employ node model attributes versus process model
attributes. In general, the answer is dependent on the “scope” of the attribute.
In other words, the question is: does the attribute rightfully belong to a particular
object in the node or does it belong to many objects in the node, or even to the
node as a whole? For example, an attribute specifying the velocity of the node
object is truly an attribute of the node rather than any particular object in the
node. In contrast, the processing speed of a packet-switch’s switching element
is probably a characteristic of the processor or queue that represents the
switching element, rather than a characteristic of the node as a whole. Of
course, the processing speed is still part of the description of the node as a
whole because the node encompasses the switching element. However, it is
more closely associated with the processor and belongs to the node only by
incorporation.
Another useful question which is more concrete and easier to answer is: which
objects actually use the attribute during simulation? If the answer includes only
one object, say a processor module, then in general it makes more sense to
declare that attribute in the process model that is associated with that
processor. If on the other hand, many objects in the node model require access
to the attribute, then the model developer may consider defining the attribute as
part of the node model. Because model attributes are user-defined, they must
in all cases be interpreted by user-defined logic embedded within process
models, or in some rarer cases transceiver pipeline stages. Therefore, there is
usually at least one process model that must have prior knowledge of the
existence of a model attribute, regardless of which level it is declared in. For this
reason, there is a natural bias toward declaring model attributes at the process
level rather than the node level. Another way to understand this bias is to realize
that, if a model attribute is declared at the node level and a process model
expects to refer to that attribute, then problems can occur if the process model
is used within a different node model which does not declare the required
attribute. This usually means that process models need to verify the existence
of a node model attribute by using the KP op_ima_obj_attr_exists() prior to
accessing the attribute on the node object. For these reasons, model
developers generally tend to use process model attributes for most applications.
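The defensive pattern described here can be illustrated with a stand-in attribute table in place of the KPs op_ima_obj_attr_exists() and op_ima_obj_attr_get(). All names below are invented for illustration.

```c
#include <string.h>

/* Sketch of defensive attribute access: a process model checks that a
 * node-level attribute exists before reading it, and falls back to a
 * default when the enclosing node model does not declare it. */
typedef struct {
    const char *name;
    double value;
} Attr;

/* Analogous to checking attribute existence before access. */
int attr_exists(const Attr *attrs, int n, const char *name)
{
    int i;
    for (i = 0; i < n; i++)
        if (strcmp(attrs[i].name, name) == 0)
            return 1;
    return 0;
}

/* Read the attribute if declared; otherwise use a safe default. */
double attr_get_or_default(const Attr *attrs, int n, const char *name,
                           double fallback)
{
    int i;
    for (i = 0; i < n; i++)
        if (strcmp(attrs[i].name, name) == 0)
            return attrs[i].value;
    return fallback; /* node model does not declare the attribute */
}
```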
Node model attributes are declared using the Node Editor, which supports
attributes of several different data types, including compound attributes. The
properties of each attribute can also be defined to control the attribute’s
behavior when it appears in the Project Editor. See Chapter 4 Modeling
Framework on page MC-4-1 for information about attribute data types and
properties. See Model Attributes on page ER-5-6 for details on how to declare
and edit model attributes.
Attribute Renaming: Provide a new name for an attribute as it appears from the node model user's point of view. This is the name of the attribute as it is seen on a node object in the Project Editor.

Attribute Merging: Replace a set of attributes with one new attribute. The underlying attributes are tied together in the sense that changes to the new attribute are propagated to each of them.

Attribute Property Changes: Certain attribute properties, including range, symbol map, comments, and default value, can be altered.

Attribute Status Changes: An attribute can be given one of three status values: promoted, set, or hidden. Promoted attributes appear on a node object without a value. Set attributes appear on the node object with a value chosen by the node model developer. Hidden attributes have a specified value, but are not visible on the node object.
1. See Chapter 4 Modeling Framework on page MC-4-1 for an in-depth description of the mechanisms in this table.
The mechanisms listed in Table 6-2 apply to built-in attributes of node objects
as well as to the object-promoted attributes originating in the node model. For
example, the “condition” attribute of a node object can be renamed or hidden
using a node model’s attribute interfaces. The following figure shows such a
specification in the Node Editor’s Node Interfaces dialog box.
Built-in attributes of the node object have been set to appropriate values and
hidden; the user can no longer change these. Attributes promoted from modules
have been renamed and set to reasonable initial values.
Keywords: Each node model can carry a set of keywords that are used to identify its relevance to particular network modeling efforts. Users of the Project Editor can specify keywords of interest to display only those models that are useful to them in the editor's Object Palette.

Comments: A textual description of the node model allowing the node model developer to provide any desired information to the model user. Typical comments include: node model functionality (i.e., what sort of real-world device it represents); physical link interfaces; significance of major attributes; compatibility with other models; available statistics, etc.

Supported Node Type: Specifies whether the model is capable of being used by fixed, mobile, or satellite nodes, or any combination of these.
Complexity can overwhelm end-user: The more flexible and parameterized a model is, the more complex it is for an end-user to configure. In many cases, the parameters of a general model will not appear familiar to the end-user and the work required to configure it for each instance will exceed the end-user's knowledge. If attribute values can be specified in advance and match the user's requirements, then the node model will be significantly easier to deploy at the network level.

Flexibility can permit invalid configurations: The availability of many attributes may provide the opportunity for the user to create an invalid or unintended node configuration.

End-user expects particular presentation: The end-user of the node model typically expects a particular interface that matches his experience with similar real-world devices or other models. If this interface can be realized, the model will be much more "user-friendly".
The following figure shows the derivation that would be performed to obtain
<R'> from <R>. See Chapter 4 Modeling Framework on page MC-4-1 for an
in-depth description of model derivation.
Statistic Promotion
The issue of customization and control described in the previous section exists
not only for attributes of a node model, but for statistics as well. This section
describes the node developer’s mechanisms that allow control of the statistics
that are available to node model users.
Most modules at the node level offer a set of built-in statistics. These are
statistics that are computed automatically and made available by the Simulation
Kernel. In addition, process models can declare and compute
application-specific statistics that appear as local output statistics of their
encompassing queue and processor modules. These sources of statistics
combine into one list of statistics that a node model is capable of reporting.
In general, not all of the statistics of a node model are of interest to its users.
Many built-in statistics of modules are not relevant or interesting for particular
applications. In addition, process models may compute specialized statistics
that are useful only in very particular situations. To avoid overwhelming the
node model user with the complexity of a large statistic set, the node model
developer requires the ability to eliminate those statistics that he considers
unimportant. In fact, because the majority of a node’s statistics are typically not
wanted, OPNET Modeler by default assumes that each statistic should be
prevented from appearing in the node model’s interface. Explicit designation of
the desired statistics is then required and OPNET Modeler provides this
capability with a mechanism called statistic promotion. Statistic promotion is
simply an indication to OPNET Modeler that a model should make a particular
statistic available to its users. Promoted statistics then become available to be
assigned to node statistic probes for data collection during simulation. For more
information about node statistic probes, see Chapter 10 Simulation Design on
page MC-10-1.
For each statistic that is promoted, the node model developer has the
opportunity to make three changes to its presentation. First, the name of the
statistic can be changed to be more meaningful for particular applications. In
particular, by default, each statistic name is prefixed with the name of the
module that generates it. This is necessary to distinguish between identically
named statistics associated with different modules. However, the module
names can confuse node model users, especially users of other
OPNET analysis software products who are unaware of the internal
architecture of the node. For this reason, in almost all cases, some adjustments
to the statistic name are specified when the statistic is promoted.
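The default prefixing scheme can be pictured as simple string composition. This is only an illustration of the naming convention described above; the separator and function are assumptions, not an OPNET API.

```c
#include <stdio.h>

/* Illustrative only: compose a default promoted-statistic name by
 * prefixing the statistic with the name of the module that generates
 * it, distinguishing identically named statistics from different
 * modules. The "." separator is an assumption for this sketch. */
void default_stat_name(char *out, size_t out_len,
                       const char *module, const char *stat)
{
    snprintf(out, out_len, "%s.%s", module, stat);
}
```

Renaming at promotion time then amounts to replacing this composed name with one chosen by the node model developer.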
The second change that can be made is to the statistic’s group. Promoting
statistic groups allows the node model developer to organize statistics into
logical groups, defined by the developer. This makes it easier for users to find
and view statistics when they select Choose Results in the Project Editor.
Arranging statistics in groups also enables the model developer to reuse
descriptive names (such as “Throughput (bits/sec)”) for similar statistics in
different contexts.
The third change that can be made is to replace the descriptive comments
associated with the statistic. These comments serve as documentation for the
statistic and can be customized to be appropriate for the anticipated end user.
The new name of each statistic appears in the first column; units may be
included if desired. New descriptions for each statistic are provided in the
second column; these are the descriptions that can be seen in the Probe Editor.
The new group names of promoted statistics appear in the third column;
statistic names appear within these groups in the Statistic Browser.
Modeling Queues
OPNET Modeler’s queue modules are similar to ordinary processor modules, in
that they can execute arbitrary process models. However, they have the
additional capability to manage multiple subqueues of packets, as previously
described. This arrangement allows greater flexibility than if queues were only
available in special-purpose modules with predefined behavior. On the one
hand, a general-purpose process model that also requires the ability to manage
groups of packets can simply be placed within a queue module, where it can
take advantage of the additional resources of subqueues which are not
available in a processor module. On the other hand, a queue module can be
made to execute any queueing discipline simply by changing its process model
to one that implements the desired behavior.
In either case, priority schemes create an ordering of packets in the queue. The
continuous priority range model is used within subqueues, because a discrete
priority classification can be regarded as a special case of the continuous
system. However, the collection of subqueues within the queue can also be
used as a mechanism to represent discrete priorities. This is done by creating
one subqueue per priority level, and inserting packets into the correct subqueue
based upon their priorities. When a packet is removed from the queue, the
subqueues are scanned in priority order until a non-empty subqueue is found;
this guarantees that the highest priority packet available is always removed first.
This technique is illustrated by the following code fragments.
/* Scan the subqueues in increasing priority-level order until a
   non-empty subqueue is found. */
pk_ptr = OPC_NIL;
for (i = 0; i < NUM_PRIOS; i++)
    {
    if (op_subq_empty (i) == OPC_FALSE)
        {
        /* Remove the packet at the head of this subqueue. */
        pk_ptr = op_subq_pk_remove (i, OPC_QPOS_HEAD);
        break;
        }
    }

/* If a packet was obtained, forward it on the output stream. */
if (pk_ptr != OPC_NIL)
    op_pk_send (pk_ptr, OUT_STRM);
If an insertion fails because the queue is full, the process must take
some explicit action. Depending on the nature of the process being modeled, it
might choose to insert the new packet at the expense of a packet already in the
queue, which would first have to be removed from the queue to permit a
successful insertion, or it might choose to discard the newly arriving packet
instead.
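These two overflow policies can be sketched in plain C. The bounded FIFO below is illustrative only and does not use the OPNET subqueue API.

```c
/* Illustrative sketch of the two overflow policies described above,
 * using a plain bounded FIFO rather than OPNET subqueues. */
#define CAPACITY 3

typedef struct {
    int ids[CAPACITY];   /* queued packet identifiers */
    int len;             /* current occupancy         */
} Queue;

/* Policy 1: discard the newly arriving packet when the queue is full.
 * Returns 1 if the packet was inserted, 0 if it was discarded. */
static int insert_drop_arrival(Queue *q, int id)
{
    if (q->len == CAPACITY)
        return 0;                     /* arrival discarded */
    q->ids[q->len++] = id;
    return 1;
}

/* Policy 2: remove the packet at the head of the queue to make room,
 * then insert the arrival; the insertion always succeeds. */
static int insert_drop_head(Queue *q, int id)
{
    int i;

    if (q->len == CAPACITY) {
        for (i = 1; i < q->len; i++)  /* shift remaining packets forward */
            q->ids[i - 1] = q->ids[i];
        q->len--;
    }
    q->ids[q->len++] = id;
    return 1;
}
```

With CAPACITY set to 3, a fourth arrival is rejected under the first policy, while the second policy evicts the oldest queued packet in order to accept it.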
The following figure shows the concept of a layered protocol as described by the
ISO OSI Model, and an analogous layered node model. Each layer of the OSI
Model has a corresponding processor module in the node model, and these
modules exchange data over packet streams only with the modules that
represent the adjacent layers. Just as each layer of the OSI Model is typically
specified individually, so each layer of the node model can be implemented by
a processor module executing a process developed separately from the others.
(Figure: the seven OSI layers, Application, Presentation, Session, Transport,
Network, Data Link, and Physical, each paired with a corresponding processor
module in the node model.)
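The strict neighbor-to-neighbor exchange between layer modules amounts to successive encapsulation on the way down and decapsulation on the way up. The plain-C fragment below is illustrative only; the three-layer stack and header tags are invented for the example.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative three-layer stack: each layer prepends its own header
 * on the way down and removes it on the way up, exchanging data only
 * with its immediate neighbors. The tags are invented for the example. */
#define NUM_LAYERS 3
static const char *headers[NUM_LAYERS] = { "TPT|", "NET|", "DLC|" };

/* Pass a payload down the stack, layer 0 (highest) first; each layer
 * wraps the data it received from the layer above. */
static void send_down(char *pkt, size_t cap)
{
    char tmp[256];
    int layer;

    for (layer = 0; layer < NUM_LAYERS; layer++) {
        snprintf(tmp, sizeof tmp, "%s%s", headers[layer], pkt);
        snprintf(pkt, cap, "%s", tmp);
    }
}

/* Pass a received packet up the stack; each layer strips the header
 * added by its peer before handing the remainder upward. */
static void deliver_up(char *pkt)
{
    int layer;

    for (layer = NUM_LAYERS - 1; layer >= 0; layer--) {
        size_t h = strlen(headers[layer]);
        memmove(pkt, pkt + h, strlen(pkt + h) + 1);
    }
}
```

Sending the payload "data" down this stack produces "DLC|NET|TPT|data" on the wire, and delivering it back up restores the original payload.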
Many of the example models supplied with OPNET Modeler are designed this
way. For example, the X.25 node model, which is in the models/std/x25
directory, incorporates an application processor, the X.25 DCE or DTE module
(which implements the network layer), and the LAPB module (which implements
the data-link layer). The following figure shows the X.25 DTE node model.
(Figure callouts: the Level 3, Level 2, and Level 1 processors; a Level 3 ICI,
carrying control information for Level 2, accompanies each Level 3 packet
passed to the Level 2 processor, and a Level 2 ICI, carrying control
information for Level 1, accompanies data passed to the Level 1 processor.)
A queue module is well suited to address these issues, as its built-in subqueues
provide the necessary facilities for buffering requests (implemented as packets),
and the ability to execute arbitrary process models allows different access
disciplines to be implemented and compared. The process model managing a
shared resource controls the manner in which requests are queued when the
resource is not available, as well as the method by which the client actually
controls the resource after access has been granted.
• The resource manager can return the request packet to the requesting client,
setting a field to indicate that the request was granted and the client now has
access to the resource. The client then begins the process for which the
resource was needed. When it is done with the resource, it notifies the
resource manager (which can be done either by a remote interrupt or by
sending another packet), so that other requests can be serviced.
This approach is called an allocation-oriented scheme, because it models
the case where a resource is allocated by a process for its exclusive use. An
example of this might be a tape drive allocated by a user of a computing
facility to read data from a tape, as no other users can access the tape drive
until the first user releases it (after finishing with the tape).
• Alternatively, the resource manager can retain the request for a specific
length of time (the service time), which may be computed according to
parameters specified in the request packet. In this case, the processing is
modeled within the resource manager, instead of within the client. At the end
of the service period, the manager returns the request packet to the client,
thereby notifying it that the requested processing has been completed.
This approach is called a job-oriented scheme, because it models the case
where clients submit jobs or tasks to be performed using a resource, and are
notified when their job is completed. To continue the previous example, users
of the computer facility may submit print jobs containing files to be printed.
These jobs may be buffered in a queue and are serviced when the printer
becomes available, at which point the users are notified that their documents
are available. Another example is the facility’s time-sharing computer
system, in which users’ computation tasks are serviced in time slices. This
way, many jobs can use the shared resource simultaneously (or at least
apparently so).
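A job-oriented manager of a single shared resource can be sketched as follows. The serve_fifo function and its parameters are invented for illustration: each request carries its own service demand, and the manager computes the time at which each client is notified that its job is complete.

```c
/* Illustrative sketch (not an OPNET process model): a job-oriented
 * resource manager serving buffered requests one at a time, in FIFO
 * order. done[i] is the time at which client i is notified. */
static void serve_fifo(const double arrive[], const double service[],
                       double done[], int n)
{
    double free_at = 0.0;                  /* when the resource frees up */
    int i;

    for (i = 0; i < n; i++) {
        /* A request waits in the queue if the resource is busy. */
        double start = arrive[i] > free_at ? arrive[i] : free_at;
        done[i] = start + service[i];      /* client notified here */
        free_at = done[i];
    }
}
```

For example, three jobs arriving at times 0, 1, and 2 with service demands of 4, 2, and 1 time units complete at times 4, 6, and 7: the second and third jobs wait in the queue until the single resource becomes available.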