Technical Reference Guide - IDX 3.3 Rev E
Copyright 2015, VT iDirect Inc. All rights reserved. Reproduction in whole or in part without permission is
prohibited. Information contained herein is subject to change without notice. The specifications and information
regarding the products in this document are subject to change without notice. All statements, information and
recommendations in this document are believed to be accurate, but are presented without warranty of any kind,
express or implied. Users must take full responsibility for their application of any products. Trademarks, brand
names and products mentioned in this document are the property of their respective owners. All such references
are used strictly in an editorial fashion with no intent to convey any affiliation with the name or the product's
rightful owner.
VT iDirect is a global leader in IP-based satellite communications providing technology and solutions that enable
our partners worldwide to optimize their networks, differentiate their services and profitably expand their
businesses. Our product portfolio, branded under the name iDirect, sets standards in performance and efficiency to
deliver voice, video and data connectivity anywhere in the world. VT iDirect is the world's largest
VSAT manufacturer and is the leader in key industries including mobility, military/government and cellular
backhaul.
iDirect Government, created in 2007, is a wholly owned subsidiary of iDirect and was formed to better serve the
U.S. government and defense communities.
Revision History
The following table shows all revisions for this document. To determine if this is the latest
revision, check the TAC Web site at http://tac.idirect.net.
Contents
About . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxi
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxi
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxi
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxi
Document Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Getting Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxiii
Document Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
QoS Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
iDirect QoS Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Classification and Scheduling of Packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Service Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Packet Scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Priority Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Class-Based Weighted Fair Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Best Effort Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Application Throughput. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Minimum Information Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Committed Information Rate (CIR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Maximum Information Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Free Slot Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Compressed Real-Time Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Sticky CIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Application Jitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
TDMA Slot Feathering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Packet Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Application Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Maximum Channel Efficiency vs. Minimum Latency . . . . . . . . . . . . . . . . . . . . . . . . . 75
Group QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Group QoS Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Bandwidth Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Bandwidth Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Service Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Service Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Remote Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Group QoS Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Physical Segregation Scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
CIR Per Application Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Tiered Service Scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
List of Figures
Figure 12-3. Upstream Service Level with Trigger State Change Selected . . . . . . . . . . . . . . 110
Figure 14-1. Global NMS Database Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Figure 14-2. Sample Global NMS Network Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Figure 16-1. Protocol Processor Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Figure 17-1. Sample Distributed NMS Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Figure 18-1. DVB-S2 TRANSEC Frame Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Figure 18-2. Disguising Which Key is Used for a Burst . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Figure 18-3. Code Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Figure 18-4. Generating the Upstream Initialization Vector. . . . . . . . . . . . . . . . . . . . . . . . 133
Figure 18-5. Upstream ACQ Burst Obfuscation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Figure 18-6. Key Distribution Protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Figure 18-7. Key Roll Data Structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Figure 18-8. Host Keying Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 21-1. iMonitor Probe: Remote Power Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Figure 21-2. Remote VSAT Tab: Entering the Initial Transmit Power Offset . . . . . . . . . . . . . 161
Figure 21-3. Absolute vs. Generated G/T Contours for Two Beams . . . . . . . . . . . . . . . . . . . 162
Figure 23-1. Spectral Mask Illustrating 20% and 5% Roll-Off Factors . . . . . . . . . . . . . . . . . . 173
Figure 23-2. Adjacent Carrier Interference Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Figure 26-1. NMS Database Replication Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Figure 26-2. NMS Database Replication from a Single Primary NMS Server . . . . . . . . . . . . . . 186
Figure 26-3. NMS Database Replication on a Distributed NMS. . . . . . . . . . . . . . . . . . . . . . . 188
Figure 26-4. Enabling NMS Database Replication to a Backup Server . . . . . . . . . . . . . . . . . . 191
Figure 26-5. Replication Conditions Viewed in iMonitor . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Figure 26-6. Replication Error Resulting in Active Condition in iMonitor . . . . . . . . . . . . . . . 194
Figure 28-1. Line Card Failover Sequence of Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Figure 30-1. Example of a Virtual Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Figure 30-2. Example VRRP Configuration in iBuilder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Figure 30-3. Changing the Frequency of Router Priority Messages . . . . . . . . . . . . . . . . . . . 215
Figure 30-4. Changing a Remote's Router Priority Message Timeout . . . . . . . . . . . . . . . . . . 215
Figure 30-5. Example of LAN Port Monitoring Configuration for Multiple Ports in iBuilder . . . . 217
Figure 30-6. Example LAN Port Monitoring Configuration for Single Port in iBuilder . . . . . . . . 217
Figure 30-7. VRRP and Remote LAN Status Events in iMonitor . . . . . . . . . . . . . . . . . . . . . . 218
Figure 31-1. Transparent Layer 2 Emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Figure 31-2. L2oS Reference Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Figure 31-3. L2oS Reference Model Applied to iDirect . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Figure 31-4. SDT Mode = VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Figure 31-5. SDT Mode = QinQ (Double Tagged) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Figure 31-6. SDT Mode = Access (Remote Side Only) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Figure 31-7. L2oS Forwarding Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Figure 31-8. Enabling L2oS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
List of Tables
About
Purpose
The Technical Reference Guide provides detailed technical information on iDirect technology
and major features as implemented in iDX Release 3.3.
Audience
The Technical Reference Guide is intended for iDirect Network Operators, network
architects, and anyone upgrading to iDX Release 3.3.
Contents
The Technical Reference Guide contains the following major sections:
iDirect System Overview
DVB-S2 in iDirect Networks
Modulation Modes and FEC Rates
iDirect Spread Spectrum Networks
Multichannel Line Cards
SCPC Return Channels
Adaptive TDMA
Multicast Fast Path
QoS Implementation Principles
TDMA Initial Transmit Power
Uplink Control Process
Remote Idle and Dormant States
Verifying Error Thresholds Using IP Packets
Global NMS Architecture
Security Best Practices
Global Protocol Processor Architecture
Document Conventions
This section illustrates and describes the conventions used throughout this document.
Getting Help
The iDirect Technical Assistance Center (TAC) and the iDirect Government Technical
Assistance Center (TAC) are available to provide assistance 24 hours a day, 365 days a year.
Software user guides, installation procedures, FAQs, and other documents that support iDirect
and iDirect Government products are available on the respective TAC Web site:
Access the iDirect TAC Web site at http://tac.idirect.net
Access the iDirect Government TAC Web site at http://tac.idirectgov.com
The iDirect TAC may be contacted by telephone or email:
Telephone: 703.648.8151
E-mail: tac@idirect.net
The iDirect Government TAC may be contacted by telephone or email:
Telephone: 703.648.8111
Email: tac@idirectgov.com
iDirect and iDirect Government strive to produce documentation that is technically accurate, easy
to use, and helpful to our customers. Please assist us in improving this document by providing
feedback. Send comments to:
iDirect: techpubs@idirect.net
iDirect Government: techpubs@idirectgov.com
For sales or product purchasing information contact iDirect Corporate Sales at the following
telephone number or e-mail address:
Telephone: 703.648.8000
E-mail: sales@idirect.net
Document Set
The following iDirect documents are available at http://tac.idirect.net and contain
information relevant to installing and using iDirect satellite network software and equipment.
Release Notes
Software Installation Guide or Network Upgrade Procedure Guide
iBuilder User Guide
iMonitor User Guide
Installation and Commissioning Guide for Remote Satellite Routers
Features and Chassis Licensing Guide
Software Installation Checklist/Software Upgrade Survey
Link Budget Analysis Guide
This chapter presents a high-level overview of iDirect Networks. It provides a sample iDirect
network and describes the network architectures supported by iDirect.
System Overview
An iDirect network is a satellite network with a Star topology in which a Time Division
Multiplexed (TDM) broadcast downstream channel from a central hub location is shared by a
number of remote sites. Each remote transmits to the hub either on a shared Deterministic-
TDMA (D-TDMA) upstream channel with dynamic timeplan slot assignments or on a dedicated
SCPC return channel.
iDirect supports both traditional OSI Layer 3 TCP/IP networks and Layer 2 over Satellite (L2oS)
networks that transport Ethernet frames over the satellite link. The L2oS feature, which was
added in iDX Release 3.3.1, is described in Layer 2 over Satellite on page 221.
The iDirect Hub equipment consists of one or more iDirect Hub Chassis with Universal Line
Cards, one or more Protocol Processors (PP), a Network Management System (NMS) and the
appropriate RF equipment. Each remote site consists of an iDirect broadband satellite router
and the appropriate external VSAT equipment.
TDMA upstream carriers are configured in groups called Inroute Groups. Multiple Inroute
Groups can be associated with one downstream carrier. Any remote configured to transmit to
the hub on a TDMA upstream carrier is part of an Inroute Group. The specific TDMA upstream
carrier on which the remote transmits at any given time is determined dynamically during
operation. A remote that transmits on a dedicated SCPC return channel is not associated with
an Inroute Group. Instead, the dedicated SCPC upstream carrier is directly assigned to the
remote and to the hub line card that receives the carrier.
Prior to iDX Release 3.2, all TDMA upstream carriers in an Inroute Group were required to
have the same symbol rate, modulation and error coding. With the introduction of Adaptive
TDMA in iDX Release 3.2, the symbol rate and MODCOD of the carriers in an Inroute Group may
vary from carrier to carrier. Remotes in an Inroute Group move from carrier to carrier in real
time based on network conditions. Furthermore, with Adaptive TDMA, the individual carrier
MODCODs can adjust over time to optimize network performance for changing network
conditions. Adaptive TDMA allows for significantly less fade margin in the network design and
optimal use of upstream bandwidth during operation.
Figure 1-1 on page 2 shows an example of an iDirect network. The network consists of one
downstream carrier; two Inroute Groups providing the TDMA return channels for a total of
1200 remotes; and three remotes transmitting dedicated SCPC return channels to the hub.
iDirect software has flexible controls for configuring Quality of Service (QoS) and other
traffic-engineered solutions based on end-user requirements and operator service plans.
Network configuration, control, and monitoring functions are provided by the integrated NMS.
The iDirect software provides numerous features, including:
Packet-based and network-based QoS, including Layer 2 and Layer 3 packet
classification.
TCP acceleration (IPv4)
AES link encryption
Multicast support
End-to-end VLAN tagging
Layer 3 TCP/IP networks also support a long list of IP features, including DNS, DHCP, RIPv2,
IGMP, GRE tunneling, and cRTP. Layer 2 Ethernet-based networks provide Layer 3 protocol
transparency and simplified operation for applications such as Virtual Private Networks.
For a complete list of available features in all iDirect releases, see the iDirect Software
Feature Matrix available on the TAC Web site.
IP Network Architecture
The examples in this section apply to traditional iDirect TCP/IP networks, which transport IPv4
traffic over the satellite link. Layer 2 networks, which transport Ethernet frames over
satellite, are discussed in Layer 2 over Satellite on page 221. Since Layer 3 protocols are
transparent to a Layer 2 network, a Layer 2 network can support Layer 3 protocols (such as
IPv6 and BGP) that are not supported in an iDirect TCP/IP network.
An iDirect network interfaces to the external world over Ethernet ports on the Remote
Satellite Routers and the Protocol Processor servers at the hub. The examples in Figure 1-2 on
page 4, Figure 1-3 on page 4, and Figure 1-4 on page 5 illustrate the IP level configurations
available to a Network Operator for Layer 3 networks.
The iDirect system allows a mix of networks that use traditional IP routing and VLAN based
configurations. This provides support for customers with conflicting IP address ranges. It also
allows multiple independent customers at individual remote sites by configuring multiple
VLANs on the same remote.
2 DVB-S2 in iDirect
Networks
Digital Video Broadcasting (DVB) represents a set of open standards for satellite digital
broadcasting. DVB-S2 is an extension to the widely-used DVB-S standard and was introduced in
March 2005. It provides for:
Improved inner coding: Low-Density Parity Coding
Greater variety of modulations: QPSK, 8PSK, 16APSK
Dynamic variation of the encoding on the broadcast channel: Adaptive Coding and
Modulation
These improvements lead to greater efficiencies and flexibility in the use of available
bandwidth. iDirect supports DVB-S2 in both TRANSEC and non-TRANSEC networks.
DVB-S2 defines three methods of applying modulation and coding to a data stream:
ACM (Adaptive Coding and Modulation) specifies that every BBFRAME can be transmitted
on a different MODCOD. Remotes receiving an ACM carrier cannot anticipate the
MODCOD of the next BBFRAME. A DVB-S2 demodulator such as those used by iDirect
remotes must be designed to handle dynamic MODCOD variation.
CCM (Constant Coding and Modulation) specifies that every BBFRAME is transmitted at
the same MODCOD using long frames. Long BBFRAMEs are not used in iDirect. Instead, a
constant MODCOD can be achieved by setting the Maximum and Minimum MODCODs of
the outbound carrier to the same value. (See the iBuilder User Guide for details on
configuring DVB-S2 carriers.)
VCM (Variable Coding and Modulation) specifies that MODCODs are assigned according to
service type. As in ACM mode, the resulting downstream contains BBFRAMEs transmitted
at different MODCODs. Although iDirect does not support VCM, it does allow
configuration of specific MODCODs for multicast streams.
DVB-S2 in iDirect
Beginning with iDX Release 3.2, iDirect only supports DVB-S2 downstream carriers. Networks
with iNFINITI downstream carriers are only supported in earlier releases. All iDirect hardware
supported in this release can operate in a DVB-S2 network. The iBuilder User Guide lists all
available line card and remote model types.
iDirect DVB-S2 networks support ACM on the downstream carrier with all modulations up to
16APSK. An iDirect DVB-S2 network always uses short DVB-S2 BBFRAMES. iDirect also allows
the Network Operator to configure multiple multicast streams and specify the multicast
MODCOD of each stream.
DVB-S2 Downstream
A DVB-S2 downstream can only be configured as ACM. An ACM downstream is not constrained
to operate at a fixed modulation and coding. Instead, the modulation and coding of the
downstream varies within a configurable range of MODCODs. In iDirect, CCM is configured by
limiting the MODCOD range to a single MODCOD.
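The relationship described above can be pictured as follows. This is an illustrative sketch only: iBuilder configures this via the Maximum and Minimum MODCOD settings on the carrier, and the class below merely mirrors the idea that CCM is the degenerate case in which the ACM range collapses to a single MODCOD.

```python
# Illustrative only: CCM as the special case of an ACM MODCOD range
# whose minimum and maximum are the same value, as described above.
from dataclasses import dataclass

@dataclass
class DownstreamCarrier:
    min_modcod: str
    max_modcod: str

    @property
    def is_ccm(self) -> bool:
        # A single-MODCOD range behaves as constant coding and modulation.
        return self.min_modcod == self.max_modcod

acm = DownstreamCarrier("QPSK 1/2", "16APSK 8/9")
ccm = DownstreamCarrier("8PSK 3/4", "8PSK 3/4")
print(acm.is_ccm, ccm.is_ccm)  # -> False True
```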
An iDirect DVB-S2 downstream contains a continuous stream of Physical Layer Frames
(PLFRAMEs). The PLHEADER indicates the type of modulation and error correction coding used
on the subsequent data. It also indicates the data format and frame length. Refer to
Figure 2-1.
The PLHEADER always uses π/2 BPSK modulation. Like most DVB-S2 systems, iDirect injects
pilot symbols within the data stream. The overhead of the DVB-S2 downstream varies
between 2.65% and 3.85%.
The symbol rate remains fixed on the DVB-S2 downstream. Variation in throughput is realized
through the variation of MODCODs in ACM mode. The maximum possible
throughput of the DVB-S2 carrier (calculated at 45 Msym/s and highest MODCOD 16APSK 8/9)
is approximately 155 Mbps. Multiple protocol processors may be required to support high
traffic to multiple remotes.
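The quoted figure can be sanity-checked with rough arithmetic. The exact accounting here is an assumption: 16APSK is taken as 4 raw bits per symbol at code rate 8/9, with the 2.65% to 3.85% physical-layer overhead range stated above.

```python
# Rough sanity check of the ~155 Mbps maximum throughput quoted above.
# Assumptions: 16APSK = 4 raw bits/symbol, code rate 8/9, and DVB-S2
# physical-layer overhead between 2.65% and 3.85% as stated in this chapter.
def dvbs2_throughput_mbps(msym_per_s, bits_per_symbol, code_rate, overhead):
    """Approximate information rate in Mbps for a DVB-S2 carrier."""
    raw_mbps = msym_per_s * bits_per_symbol * code_rate
    return raw_mbps * (1.0 - overhead)

low = dvbs2_throughput_mbps(45, 4, 8 / 9, 0.0385)   # worst-case overhead
high = dvbs2_throughput_mbps(45, 4, 8 / 9, 0.0265)  # best-case overhead
print(f"{low:.1f} - {high:.1f} Mbps")  # brackets the ~155 Mbps figure
```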
iDirect uses DVB-S2 Generic Streams with a proprietary variation of the LEGS (Lightweight
Encapsulation for Generic Streams) protocol for encapsulation of downstream data between
the DVB-S2 line cards and remotes. LEGS maximizes the efficiency of data packing into
BBFRAMES on the downstream. For example, if a time plan only takes up 80% of a BBFRAME,
the LEGS protocol allows the line card to include a portion of another packet that is ready for
transmission in the same frame. This results in maximum use of the downstream bandwidth.
ACM Operation
Adaptive Coding and Modulation (ACM) allows remotes operating in better signal conditions to
receive data on higher MODCODs by varying the MODCOD of data targeted to each remote to
match its current receive capabilities.
Not all data is sent to each remote at its most efficient MODCOD. Important system
information (such as time plan messages), as well as broadcast traffic, is transmitted at the
minimum MODCOD configured for the outbound carrier. This allows all remotes in the
network, even those operating at the worst MODCOD, to receive this information reliably.
The protocol processor determines the maximum MODCOD for all data sent to the DVB-S2 line
card for transmission over the outbound carrier. However, the line card does not necessarily
respect these MODCOD assignments. In the interest of downstream efficiency, some data
scheduled for a high MODCOD may be transmitted at a lower one as an alternative to inserting
padding bytes into a BBFRAME. When assembling a BBFRAME for transmission, the line card
first packs all available data for the chosen MODCOD into the frame. If there is space left in
the BBFRAME, and no data left for transmission at that MODCOD, the line card attempts to
pack the remainder of the frame with data for higher MODCODs. This takes advantage of the
fact that a remote can demodulate any MODCOD in the range between the carrier's minimum
MODCOD and the remote's current maximum MODCOD.
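The packing behavior described above can be sketched as follows. The queue layout, packet sizes, and function names are illustrative assumptions, not the actual line card implementation.

```python
# Sketch of BBFRAME packing as described above: fill the frame with data
# queued for the chosen MODCOD first; then, if space remains and that queue
# is empty, borrow data queued for HIGHER MODCODs (which the targeted
# remotes can still demodulate) rather than inserting padding bytes.
# Higher index = higher (more efficient) MODCOD. All names/sizes hypothetical.
def pack_bbframe(queues, chosen, frame_size):
    """queues: dict MODCOD-index -> list of packet sizes in bytes."""
    frame, free = [], frame_size
    order = [chosen] + sorted(m for m in queues if m > chosen)
    for modcod in order:
        q = queues[modcod]
        while q and q[0] <= free:
            frame.append((modcod, q.pop(0)))
            free -= frame[-1][1]
        if modcod == chosen and q:
            break  # data still waiting at the chosen MODCOD: do not borrow
    return frame, free  # free bytes left would become padding

queues = {2: [900], 3: [600, 600], 4: [300]}
frame, padding = pack_bbframe(queues, chosen=2, frame_size=2000)
print(frame, padding)  # higher-MODCOD data fills the remainder of the frame
```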
The maximum MODCOD of a remote is based on the latest Signal-to-Noise Ratio (SNR)
reported by the remote to the protocol processor. The SNR thresholds per MODCOD are
determined during hardware qualification for each remote model type. The Spectral
Efficiency of iDirect remotes at the threshold SNR for each MODCOD is documented in the
iDirect Link Budget Analysis Guide.
The hub adjusts the MODCODs of the transmissions to the remotes by means of the feedback
loop shown in Figure 2-2 on page 11. Each remote continually measures its downstream SNR
and reports the current value to the protocol processor. When the protocol processor assigns
data to an individual remote, it uses the last reported SNR value to determine the highest
MODCOD on which that remote can receive data without exceeding a specified BER. The
protocol processor includes this information when sending outbound data to the line card.
The line card then adjusts the MODCOD of the BBFRAMES to the targeted remotes accordingly.
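The protocol processor's choice can be pictured as a simple threshold lookup over the last reported SNR. The threshold values below are invented placeholders; the real per-model thresholds come from hardware qualification and are documented in the Link Budget Analysis Guide.

```python
# Sketch of selecting the highest MODCOD a remote can receive, given the
# downstream SNR it last reported. Thresholds here are PLACEHOLDERS, not
# the qualified values from the Link Budget Analysis Guide.
THRESHOLDS = [  # (MODCOD, minimum SNR in dB), ordered worst to best
    ("QPSK 1/2", 1.0),
    ("QPSK 3/4", 4.0),
    ("8PSK 3/4", 7.9),
    ("16APSK 8/9", 12.9),
]

def max_modcod(reported_snr_db):
    """Highest MODCOD whose SNR threshold the reported value meets."""
    best = THRESHOLDS[0][0]  # floor: the carrier's minimum MODCOD
    for modcod, threshold in THRESHOLDS:
        if reported_snr_db >= threshold:
            best = modcod
    return best

print(max_modcod(8.5))  # with these placeholder thresholds: "8PSK 3/4"
```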
NOTE: The line card may adjust the MODCOD of the BBFRAMEs downward for
downstream packing efficiency.
Figure 2-2 and Figure 2-3 show the operation of the SNR feedback loop and the behavior of
the line card and remote during fast fade conditions. Figure 2-2 shows the basic SNR reporting
loop described above.
Figure 2-3 on page 11 shows the back-off mechanism that exists between the line card and
protocol processor to prevent data loss. The protocol processor decreases the maximum data
sent to the line card for transmission based on a measure of the number of remaining
untransmitted bytes on the line card. These bytes are scaled according to the MODCOD on
which they are to be transmitted, since bytes destined for lower MODCODs take longer to
transmit than bytes destined for higher MODCODs.
Figure 2-3. Feedback Loop with Back-Off from Line Card to Protocol Processor
Figure 2-4. Total Bandwidth vs. Information Rate in Fixed Bandwidth Operation
EIR is only enabled in the range of MODCODs from the remote's Nominal MODCOD down to the
configured EIR Minimum MODCOD. Within this range, the system always attempts to allocate
requested bandwidth in accordance with the CIR and MIR settings, regardless of the current
MODCOD at which the remote is operating. Since higher MODCODs contain more information
bits per second, as the remote's MODCOD increases, so does the capacity of the outbound
channel to carry additional information.
As signal conditions worsen, and the MODCOD assigned to the remote drops, the system
attempts to maintain CIR and MIR only down to the configured EIR Minimum MODCOD. If the
remote drops below this EIR Minimum MODCOD, it is allocated bandwidth based on the remote's
Nominal MODCOD with the rate scaled to the MODCOD actually assigned to the remote. The
net result is that the remote receives the CIR or MIR as long as the current MODCOD of the
remote does not fall below the EIR Minimum MODCOD. Below the EIR minimum MODCOD, the
information rate achieved by the remote falls below the configured settings.
The system behavior in EIR mode is shown in Figure 2-5. The remote's Nominal MODCOD is
labeled Nominal in the figure. The system maintains the CIR and MIR down to the EIR
Minimum MODCOD. Notice in the figure that when the remote is operating below EIR Minimum
MODCOD, it is granted the same amount of satellite bandwidth as at the remote's Nominal
MODCOD.
Figure 2-5. EIR: Total Bandwidth vs. Information Rate as MODCOD Varies
MODCOD        Scaling Factor   Comments
16APSK 8/9    1.2382           Best MODCOD
16APSK 5/6    1.3415
16APSK 4/5    1.4206
16APSK 3/4    1.5096
16APSK 2/3    1.6661
8PSK 8/9      1.6456
8PSK 5/6      1.7830
8PSK 3/4      2.0063
8PSK 2/3      2.2143
8PSK 3/5      2.4705
QPSK 8/9      2.4605
QPSK 5/6      2.6659
QPSK 4/5      2.8230
QPSK 3/4      2.9998
QPSK 2/3      3.3109
QPSK 3/5      3.6939
QPSK 1/2      5.0596
QPSK 2/5      5.6572
QPSK 1/3      6.8752
QPSK 1/4      12.0749          Worst MODCOD
The following formula can be used to determine the information rate at which data is sent
when that data is scaled to the remote's Nominal MODCOD:
IRa = IRn x Sb / Sa
where:
IRa is the actual information rate at which the data is sent
IRn is the nominal information rate (for example, the configured CIR)
Sb is the scaling factor for the remote's Nominal MODCOD
Sa is the scaling factor for the MODCOD at which the data is sent
For example, assume that a remote is configured with a CIR of 1024 kbps and a Nominal
MODCOD of 16APSK 8/9. If EIR is not in effect, and data is being sent to the remote at
MODCOD QPSK 8/9, then the resulting information rate is:
IRa = IRn x Sb / Sa
IRa = 1024 kbps x 1.2382 / 2.4605 = 515 kbps
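The formula and worked example above can be expressed directly in code, using the scaling factors from the table in this chapter (only the factors needed for the example are included here):

```python
# Information rate scaling as described above: IRa = IRn * Sb / Sa,
# with scaling factors taken from the MODCOD table in this chapter.
SCALING = {"16APSK 8/9": 1.2382, "QPSK 8/9": 2.4605, "QPSK 1/4": 12.0749}

def actual_info_rate(nominal_kbps, nominal_modcod, actual_modcod):
    """Rate achieved when data sized for nominal_modcod (e.g. the
    configured CIR) is instead sent at actual_modcod."""
    return nominal_kbps * SCALING[nominal_modcod] / SCALING[actual_modcod]

# Worked example from the text: CIR 1024 kbps, Nominal MODCOD 16APSK 8/9,
# data sent at QPSK 8/9:
ira = actual_info_rate(1024, "16APSK 8/9", "QPSK 8/9")
print(round(ira))  # -> 515 (kbps), matching the example above
```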
For two scenarios showing how CIR and MIR are allocated for a DVB-S2 network in ACM mode,
see page 87 and page 89.
NOTE: When bandwidth is allocated for a remote, the CIR and MIR are scaled to
the remote's Nominal MODCOD. At higher levels of the Group QoS tree (Bandwidth
Group, Service Group, etc.), CIR and MIR are scaled to the network's best
MODCOD.
DVB-S2 Configuration
Various iBuilder settings affect the operation of DVB-S2 networks. For details on configuring
DVB-S2, see the iBuilder User Guide. The following areas are affected:
Downstream Carrier Definition: When adding an ACM DVB-S2 downstream carrier, the
Network Operator must specify a range of MODCODs over which the carrier will operate.
Error correction for the carrier is fixed to LDPC and BCH. In addition, iBuilder does not
allow selection of an information rate or transmission rate for a DVB-S2 carrier as an
alternative to the symbol rate, since these rates vary dynamically with changing
MODCODs.
However, iBuilder provides a MODCOD Distribution Calculator that estimates the overall
Information Rate for the carrier based on the distribution of the Nominal MODCODs of
the remotes in the network. Access this calculator by clicking the MODCOD Distribution
button on the DVB-S2 Downstream Carrier dialog box. A similar calculator allows
estimation of CIR and MIR bandwidth requirements at various levels of the Group QoS
tree.
This chapter describes the carrier types, Modulation Modes and Forward Error Correction
(FEC) rates that are supported in iDX Release 3.3.
similar to the 4k TPC block and provides low TDMA overhead. The 170-byte payload size
provides an intermediate option when considering the trade-off between bandwidth
granularity and minimizing TDMA overhead.
2D 16-State Coding has a number of benefits when compared to TPC and LDPC coding:
More granular FEC and payload size choices than turbo codes or LDPC
Efficiency gains on average of 1 dB
Cost savings from the use of smaller antenna and BUC sizes
Easy implementation since no new network design is required
2D 16-State Coding supports easy mapping of TPC to 2D 16-State configurations. For example,
QPSK 2D16S-100B-3/4 offers performance similar to, and better spectral efficiency than, the
TPC QPSK 1k block with 0.66 FEC.
The Link Budget Analysis Guide defines all upstream Modulation and Coding rates available
per payload size when using 2D 16-State Inbound Coding over TDMA and SCPC upstream
carriers. The LBA Guide also specifies EbN0 values and C/N thresholds for all upstream
MODCOD/block size combinations. See the LBA Guide sections Upstream TDMA Carrier
Performance Specifications and Upstream SCPC Carrier Performance Specifications for
details.
Beginning in iDX Release 3.2 the waveform formats for BPSK, QPSK and 8PSK employ the
Distributed Pilot TDMA (DP-TDMA) scheme to improve demodulator synchronization accuracy.
This permits more coding rates to be supported for each block size; better LBA C/N
performance thresholds; and increased frequency offset tolerance across all modulation
types. Spread Spectrum still employs the same waveform formats as in pre-3.2 releases,
except that BPSK with a spreading factor of 1 is no longer required or supported. The regular
BPSK waveforms with distributed pilots perform better than BPSK with spreading factor of 1
used in earlier releases.
The overhead symbols used for synchronization in DP-TDMA non-spread modes are distributed
throughout the burst, rather than concentrated in one block or a small number of large blocks
as in prior releases. This arrangement, sometimes referred to as preamble-less distributed
pilots, is shown in Figure 3-1.
Highlights of performance improvements that were introduced in iDX Release 3.2 with this
new waveform structure include:
Support for several 2D 16-State coding rates for each Modulation and Block size. This
provides more granularity in C/N dynamic range for both homogeneous inroute groups of
static carriers and inroute groups using Adaptive TDMA.
C/N Performance improvement up to 1.5 dB or equivalent spectral efficiency in certain
combinations of modulation, coding rates and block sizes. Refer to the Link Budget
Analysis Guide for performance specifications.
Frequency tolerance of up to 1.5% of the symbol rate across all modulations
Improved TDMA burst detection performance
This chapter provides information about Spread Spectrum technology in iDirect networks. It
contains the following major sections:
Overview of Spread Spectrum on page 23
Spread Spectrum Hardware Components on page 24
Supported Forward Error Correction (FEC) Rates on page 25
TDMA Upstream Specifications on page 25
SCPC Upstream Specifications on page 25
Spreading takes place when the input data (dt) is multiplied with the PN code (pnt) which
results in the transmit baseband signal (txb). The baseband signal is then modulated and
transmitted to the receiving station. Despreading takes place at the receiving station when
the baseband signal is demodulated (rxb) and correlated with the replica PN (pnr) which
results in the data output (dr).
Each symbol in the spreading code is called a chip. The spread rate is the rate at which
chips are transmitted. For example, selecting No Spreading means that the spread rate is one
chip per symbol (which is equivalent to regular BPSK). Selecting a Spreading Factor of 4
means that the spread rate is four chips per symbol.
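The spreading and despreading operations described above can be sketched as follows; the PN sequence and BPSK data values here are illustrative stand-ins (the actual iDirect PN code is not published in this guide):

```python
SPREADING_FACTOR = 4

def spread(data, pn):
    """Multiply each data symbol (dt) by the PN chips (pnt), producing the
    transmit baseband signal (txb): SPREADING_FACTOR chips per symbol."""
    return [d * c for d in data for c in pn]

def despread(rxb, pn):
    """Correlate the received baseband signal (rxb) with the replica PN
    (pnr), integrating over each symbol to recover the data output (dr)."""
    out = []
    for i in range(0, len(rxb), SPREADING_FACTOR):
        acc = sum(rxb[i + j] * pn[j] for j in range(SPREADING_FACTOR))
        out.append(1 if acc > 0 else -1)
    return out

pn_code = [1, -1, 1, 1]        # one arbitrary +/-1 chip sequence per symbol
data = [1, -1, -1, 1]          # BPSK data symbols
txb = spread(data, pn_code)    # four chips transmitted per data symbol
assert despread(txb, pn_code) == data
```

With a spreading factor of 4, each symbol occupies four chips, so the transmitted chip rate is four times the symbol rate, matching the definition above.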
NOTE: The following model types require an iDirect license to use the upstream
Spread Spectrum feature: Evolution XLC-11 line cards; Evolution X5 and X7
remotes.
Multichannel Line Cards are receive-only Evolution line cards capable of receiving up to eight
upstream carriers simultaneously. A Multichannel Line Card can receive either TDMA upstream
carriers or SCPC return channels with appropriate licensing. Evolution XLC-M line cards are
capable of simultaneously receiving up to 16 narrowband TDMA upstream carriers of up to 128
ksym per carrier.
Beginning with iDX Release 3.2, TRANSEC is supported on eM0DM line cards in multichannel
mode.
NOTE: Allow for 60 Watts of power for each Multichannel Line Card in a 20 slot
chassis. Total available power for each 20 slot chassis model type is specified in
the Series 15100 Universal Satellite Hub (5IF/20-Slot) Installation and Safety
Manual.
NOTE: Single Channel SCPC Mode is only selectable for Evolution eM1D1 line
cards. It cannot be selected on multichannel line cards.
NOTE: Prior to iDX Release 3.0, XLC-M and eM0DM line cards were available with
single channel TDMA support only. When upgrading from a pre-iDX 3.0 release,
these line cards are automatically set to Single Channel TDMA receive mode.
Table 5-1 shows various parameters associated with the Multichannel Line Cards.
Composite Limits
Max Composite Symbol Rate (ksym): N/A / N/A / 7500
Channels per Card: 1 (default); up to 4 (license); up to 8 (license required); up to 16 (XLC-M Narrowband)
* Additional information for Symbol Rate Limits of both Single Channel and Multichannel TDMAs can be found in the iDX 3.3 Link
Budget Analysis document.
NOTE: For Upstream TDMA and SCPC Modulation Modes and FEC Rates, see
Modulation Modes and FEC Rates on page 19.
The iBuilder User Guide contains procedures for configuring Multichannel Line Cards.
Beginning with iDX Release 3.0, a remote in a DVB-S2 network can be configured to transmit
to the hub either on a TDMA upstream carrier or on an SCPC upstream carrier. An SCPC
upstream carrier provides a dedicated, high-bandwidth, return channel from a remote to the
hub without the additional overhead of TDMA.
Remotes that transmit SCPC return channels (called SCPC remotes) receive the same
outbound carrier as the TDMA remotes in the network. However, unlike TDMA remotes, SCPC
remotes are not associated with Inroute Groups. Instead, a dedicated SCPC upstream carrier
is assigned directly to the hub line card that receives the carrier.
NOTE: SCPC upstream carriers received by a multichannel line card can have
different symbol rates, modulation modes and FEC rates. For more information on
multichannel SCPC return, see Multichannel Line Cards on page 27.
NOTE: For supported 2D 16-State Inbound Coding Modulation Modes, FEC Rates
and Block sizes see the Link Budget Analysis Guide.
The Move operation in the iBuilder tree allows the Network Operator to switch a remote
between TDMA and SCPC. During the move, select the SCPC line card and channel or the
TDMA Inroute Group to which the remote is moving as well as the appropriate QoS profile for
the new configuration. For details, see Moving Remotes Between Networks, Inroute Groups,
and Line Cards in the iBuilder User Guide.
The initial transmit power and maximum transmit power configured for a remote depend on
the characteristics of the upstream carrier. Therefore, the values configured for these
parameters differ between TDMA and SCPC, and between dissimilar SCPC carriers.
TDMA maximum and initial power and SCPC maximum and initial power (per carrier) are
defined separately in iBuilder. This allows easy switching of remotes between TDMA and SCPC
and between different SCPC carriers by preconfiguring these power parameters for any SCPC
carriers on which the remote will transmit.
NOTE: When a remote is moved to an SCPC carrier for which the initial and
maximum power values have not been preconfigured, the remote becomes
incomplete in iBuilder. If this happens, configure the remote's power settings
for that carrier to make the remote operational.
See Adding Remotes in the iBuilder User Guide for details on configuring TDMA and SCPC
initial and maximum power for a remote in iBuilder. See the Installation and Commissioning
Guide for iDirect Satellite Routers for details on determining a remote's initial and maximum
powers.
7 Adaptive TDMA
Beginning with iDX Release 3.2, iDirect supports Adaptive TDMA. Adaptive TDMA (ATDMA)
allows remotes to dynamically adapt their transmissions to the hub to use the optimal symbol
rate and Modulation and Coding (MODCOD) for current conditions. This can reduce the amount
of rain margin that must be designed into the upstream link, significantly improving clear sky
throughput of the remotes when compared to non-adaptive TDMA networks. Reducing the
extent to which system resources must be reserved for worst-case conditions allows
additional resources to be used for data transmission. This is the same principle that underlies
the use of Adaptive Coding and Modulation (ACM) for DVB-S2 outbound carriers. (See DVB-S2
in iDirect Networks on page 7.)
Theory of Operation
The core element of iDirect's Adaptive TDMA system is an inroute group of heterogeneous
TDMA carriers supporting different transmission rates and providing different levels of
protection against adverse channel effects such as rain fade. Individual remotes are assigned
time slots on upstream carriers based on their current demand and capability, as determined
by the channel state. The configuration of the inroute group automatically adapts over time
to maximize the system efficiency.
An inroute group can be regarded as a fixed portion of space segment bandwidth and power
partitioned into TDMA carriers with different properties. A collection of carrier definitions
that can be assigned to an inroute group is called an Inroute Group Composition (IGC). The
IGC currently assigned to an inroute group determines the MODCODs of each carrier in the
inroute group at the present time. Up to three IGCs can be configured for a single inroute
group but only one IGC is assigned to the inroute group at any time.
An inroute group can include carriers with different symbol rates and MODCODs. The IGC
currently assigned to the inroute group defines the MODCODs of the various carriers in the
group. An adaptive carrier can change MODCODs from one IGC to another. When a new IGC is
assigned to the Inroute Group, the MODCODs of the individual adaptive carriers change
according to the newly-applied IGC definition. Note however that the symbol rates and center
frequencies of the individual carriers remain the same in all IGCs.
The MODCODs of the upstream carriers do not vary frame by frame. The protocol processor
assigns TDMA slots on the upstream carriers based on the upstream link conditions of the
individual remotes. In the short term, remotes adapt to changing conditions by frequency
hopping among carriers with different properties. Over time, the protocol processor may also
adjust the configuration of the inroute group by selecting different IGCs as network conditions
change. Whenever a different IGC is assigned to the inroute group, the MODCODs of the
upstream carriers change to match the configuration of the newly-selected IGC. Thus the
system automatically adapts in two ways: frame-by-frame through remote frequency hopping,
and longer term by selecting the optimal IGC for the prevailing conditions.
Figure 7-1 shows how Adaptive TDMA operates in an iDirect network for a single inroute
group.
Figure 7-1. Adaptive TDMA Operation for a Single Inroute Group

The figure shows remote terminals reporting power and signal quality to the hub, with
bandwidth requests, statistics, and contention levels driving four adaptive loops:
Real-time resource management: assign slots in the current composition to remotes,
respecting constraints on the carriers they can use.
Short-term adaptivity: determine which remotes can use which carriers in the current
Inroute Group Composition.
Medium-term adaptivity: select the best among the defined Inroute Group Compositions.
Long-term adaptivity: design the most suitable set of Inroute Group Compositions.
NOTE: Beginning with iDX Release 3.2, iDirect networks support Inroute Groups
with different sized upstream carriers. Because of this, C/N0 has replaced C/N as
the measurement of signal quality used to monitor and control remote
transmissions on the upstream carriers. See C/N0 and C/N on page 38 for details.
Each remote is assigned slots on carriers with symbol rates and MODCODs that are estimated
to be within the remote's capabilities for the current link conditions. The slot assignment
algorithm attempts to allocate slots on the remote's nominal carrier. However, this is not
always possible due to the overall demand for upstream traffic slots in the inroute group.
Therefore, during periods of contention for upstream bandwidth, once all slots matching the
nominal carrier parameters have been assigned, a remote may be assigned slots on carriers
that are less efficient or have lower peak throughput than the remote's current nominal
carrier.
This does not affect the overall bandwidth efficiency, which is determined by the IGC
currently being used.
As in earlier releases, the bandwidth allocation algorithm schedules bursts in TDMA frames for
each remote in accordance with the rules and priorities defined by the Group QoS settings for
the inroute group. However, for Adaptive TDMA, the algorithm must also account for the
dynamic nature of the total capacity available for each remote since the subset of carriers on
which a remote is able to transmit can change according to the current link conditions.
As an example of short-term adaptivity, consider a remote experiencing a rain fade. When the
remote enters the fade, the C/N of the remote's bursts as measured at the hub drops, limiting
the set of upstream carriers on which the remote can successfully transmit. This causes the
Uplink Control Process to change the remote's nominal carrier to a more protected carrier once
all power headroom available on the current nominal carrier has been exhausted.
When allocating new bandwidth to the remote during the fade, the bandwidth allocation
algorithm only considers available slots on the more limited subset of carriers. Since the
remote's nominal carrier is set to a more protected carrier with lower throughput, the remote
is able to stay in the network at the expense of bandwidth efficiency and/or peak rate. When
the fade passes, the remote's nominal carrier is changed back to the more efficient carrier.
For more information on how the hub selects a remote's nominal carrier, see Uplink Control
Process on page 97.
Fixed IGC: If the Network Operator selects Enable Fixed IGC, then a specific IGC must be
selected as the Fixed IGC. In that case, the IGC selection algorithm is not executed.
Allowed Dropout Fraction: During the trial scheduling of an IGC executed as part of the
IGC selection process, the algorithm calculates the number of remotes that theoretically
would drop out of the network if that IGC is selected for the inroute group. If the
algorithm estimates that the percentage of remotes unable to sustain transmissions on
the most protected carrier of the IGC would exceed the configured Allowed Dropout
Fraction, then that IGC is not selected. If the Allowed Dropout Fraction is exceeded for
all IGCs, the default IGC is selected for the inroute group. Note that remotes are never
dropped explicitly as a result of this assessment. Remotes are only dropped if they fail to
maintain the link during operation.
Default IGC: The IGC that is selected if the Allowed Dropout Fraction is exceeded for all
IGCs defined for the inroute group.
NOTE: A severe downlink fade is observed at the hub as simultaneous fading of all
remotes. iDirect recommends including a very robust IGC among those defined for
the inroute group to allow the system to operate during a severe downlink fade.
and the measurement bandwidth B. In iDirect documentation that uses C/N, such as the Link
Budget Analysis Guide, B is assumed to be equal to the symbol rate of the carrier.
An adaptive system must constantly evaluate and compare the received C/N on one carrier
with what can be achieved on another carrier. For example, if a line card is receiving a
remote at a C/N of 11 dB on a carrier with a symbol rate of 1 Msps, what can be achieved on
a 2 Msps carrier? In this example, the corresponding C/N on the 2 Msps carrier is 8 dB.
Although the carrier power C is the same for both carriers, doubling the bandwidth doubles
the noise power. Therefore, the C/N is halved (lowered by 3 dB). Rather than making
comparisons with arbitrary references (and risking mistakes and confusion), the iDirect
system uses C/N0 as the common measure.
C/N0 can be thought of as the C/N achievable in a bandwidth of 1 Hz. The fact that this is a
stand-alone measure which does not need to be qualified by any signal properties is the
primary reason that the iDirect system now uses C/N0 extensively to measure and
characterize upstream performance.
C/N0 is equal to C/N + 10log10(Symbol Rate) and is expressed in dBHz. For this example,
C/N0 = 11 + 10log10(1×10^6) = 8 + 10log10(2×10^6) = 71 dBHz
Demodulation thresholds can also be expressed in terms of C/N0. The carrier MODCOD
(modulation and coding combination) corresponds to a certain threshold C/N. The C/N
threshold combined with the symbol rate is used during operation to compute the carrier
threshold C/N0.
A remote is allowed to transmit on a given upstream carrier when the C/N0 it can achieve is
greater than or equal to the carrier's threshold C/N0. C/N0 is therefore a direct and unified
measure of a remote's ability to use any of the available carriers in the inroute group. This is
another useful property of C/N0.
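The C/N to C/N0 conversion described above can be checked with a short calculation; this is an illustrative sketch (the function names are my own, not iDirect APIs) reproducing the 1 Msps versus 2 Msps example:

```python
import math

# C/N0 (dBHz) = C/N (dB) + 10*log10(symbol rate), with the measurement
# bandwidth B taken as the symbol rate, as stated in the text.
def cn0_dbhz(cn_db_val, symbol_rate_hz):
    return cn_db_val + 10 * math.log10(symbol_rate_hz)

def cn_db(cn0_dbhz_val, symbol_rate_hz):
    return cn0_dbhz_val - 10 * math.log10(symbol_rate_hz)

# The example from the text: 11 dB C/N on a 1 Msps carrier...
cn0 = cn0_dbhz(11.0, 1e6)              # 71 dBHz
# ...corresponds to 8 dB C/N on a 2 Msps carrier: doubling the bandwidth
# doubles the noise power, lowering C/N by about 3 dB.
assert round(cn0, 1) == 71.0
assert round(cn_db(cn0, 2e6), 1) == 8.0
```

The same C/N0 value thus describes the signal independently of which carrier's bandwidth it is measured in, which is why the system uses it as the common measure.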
System designers who are familiar with link budget calculations will also already be familiar
with C/N0. This is usually the first signal quality metric computed in a receiver. It is typically
defined as:
C/N0 = EIRP - Path Loss + Receiver G/T - Boltzmann's Constant (all terms in dB)
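As a sketch, this link budget form of C/N0 can be evaluated directly in dB terms; the EIRP, path loss, and G/T values below are illustrative assumptions, not figures from this guide:

```python
import math

# Boltzmann's constant k = 1.380649e-23 J/K, i.e. about -228.6 dBW/K/Hz.
BOLTZMANN_DB = 10 * math.log10(1.380649e-23)

# C/N0 = EIRP - PathLoss + G/T - 10*log10(k), all terms in dB.
def cn0_link_budget(eirp_dbw, path_loss_db, gt_dbk):
    return eirp_dbw - path_loss_db + gt_dbk - BOLTZMANN_DB

# Illustrative Ku-band numbers (assumptions for demonstration only):
cn0 = cn0_link_budget(eirp_dbw=45.0, path_loss_db=207.0, gt_dbk=30.0)
print(f"C/N0 = {cn0:.1f} dBHz")    # ~96.6 dBHz
```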
Figure 7-2. Fade Slope Distribution at Various Fade Levels (ITU-R Rec. P.1623-1)
aggressiveness indicates the accepted probability that the fade will exceed the margin
before the system can react.
Scheme            Probability of Exceeding Slope    Fade Slope S (dB/s)
Very Aggressive   10^-2                             0.03
Aggressive        10^-3                             0.07
Medium            10^-4                             0.17
Conservative      10^-5                             0.35
The M1 margin is intended to cover measurement uncertainties as well as allowing for the
system reaction time for fade detection. In the presence of time-varying fade, this parameter
is closely related to the spacing between measurements. With the above assumptions, the
relationship between the M1 margin and the measurement interval is
M1 = (3e + 0.5)S
Hence, an additional trade-off can be assessed between margin and overhead (larger e
means smaller overhead for idle terminals). However, remember that the worst-case reaction
time is 10e, so e should probably not be pushed too far.
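The M1 relationship can be sketched numerically; the values of e below are illustrative choices consistent with the "few seconds" guidance, not configured system defaults:

```python
# M1 = (3e + 0.5) * S, where e is the measurement interval (seconds) and
# S is the fade slope (dB/s) for each scheme from the table above.
FADE_SLOPES = {              # scheme -> S (dB/s)
    "Very Aggressive": 0.03,
    "Aggressive": 0.07,
    "Medium": 0.17,
    "Conservative": 0.35,
}

def m1_margin(e_seconds, slope_db_per_s):
    return (3 * e_seconds + 0.5) * slope_db_per_s

# "Low margin": e of a few seconds; "Low overhead": e three times larger.
for scheme, s in FADE_SLOPES.items():
    print(f"{scheme}: M1 = {m1_margin(3.0, s):.2f} dB (low margin), "
          f"{m1_margin(9.0, s):.2f} dB (low overhead)")
```

Larger e raises M1 in proportion to the assumed fade slope, which is the margin-versus-overhead trade-off discussed above.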
As an example, two situations are chosen: a Low margin case, where e is a few seconds, and
a Low-overhead case, where e is three times that of the Low margin case. In both situations,
the chosen values also scale with the aggressiveness of the assumed fade slope risk.
Dedicated measurement probe bursts are only used by terminals that are typically idle.
Actual traffic bursts are also used for the measurements, so there is no measurement
overhead for an active terminal. Therefore, for systems with active terminals, the Low
margin setting is preferred. The Low-overhead setting is intended for systems that serve as a
back-up to other communications means, or other situations where a large population of
terminals is normally idle and it is desirable to keep the overhead low. The example choices
are summarized in Table 7-2.
The configurable parameters address mainly the clear-weather and low-fade situations;
automatic adaptation is performed to speed up reaction time during fades.
The M2 system hysteresis margin is intended to prevent spurious switching of the nominal
carrier. It should be set higher than the power control step size supported by the terminals.
Initial experience suggests that this margin also caters effectively for a number of additional
minor uncertainties; a value of 1.0 dB has been found to be a suitable default for a variety of
situations. If M2 is set too low, extra processing will be incurred. In extreme cases, terminals
may be forced to log out because the system cannot keep up with the rapid carrier changes. If
the M2 is set too high, capacity will be wasted because the carrier usage (time slot
assignments) will be more conservative than necessary.
The M3 margin is applied to acquisition bursts for the purpose of selecting an initial nominal
carrier; its value is related to the C/N estimation accuracy for this signal. The M3 margin is
also used to accommodate initial uncertainties in EIRP settings and BUC gain variation with
frequency. A fixed value of 2 dB is suggested, based on initial experience, design analyses of
the RF components, and the relevant estimation algorithms.
There are a number of situations where the suggested defaults or the above example may not
be appropriate. One such situation occurs when a single homogeneous (non-adaptive) inroute
group composition is used. This may be the case immediately after upgrading from a previous
release. In this situation, it may be useful to reduce M3, since there is no substantial choice to
be made for the initial nominal carrier. On the other hand, M1 and/or M2 should likely be
increased in this special case, since they will take the place of the fade margin used in
previous releases.
values in the inroute group. This includes the various compositions, since they can change
instantaneously on any frame boundary.
All terminals have a finite dynamic range of transmitter power/EIRP. At one end, this is
limited by the maximum power of the BUC. At the other end, it is limited by the level-control
attenuators and by stability considerations. The total dynamic range of an iDirect modem's IF
output power is large; however, not all of it is available to accommodate signal differences. In
particular, substantial allowances must be made in order to accommodate differences in BUC
gain and IF cable losses.
For all iDirect modems, the dynamic range available for real-time adjustment of burst power
is 15 dB. Hence, the inroute group should be arranged such that the difference between the
lowest and highest C/N0 threshold of the carriers that any terminal needs to access does not
exceed it. This 15 dB constraint, which applies across all compositions, is not very limiting if
taken into consideration when designing the inroute group and its compositions.
A 15 dB fade is a very rare event, even in extreme locations. As an example, for Recife in
Brazil (0.01% rainfall rate of 69 mm/hr, old climatic zone N) with an elevation angle of 20°
to the satellite, 15 dB still corresponds to 99.9% availability at Ku-band and 99.3% at
Ka-band. For a location with slightly more moderate but still substantial rainfall (Montreal,
38 mm/hr, old zone K), the figures are 99.99% at Ku-band and 99.85% at Ka-band.
Power Control
For ATDMA to work well in the network, the maximum transmit power must be set
accurately. To keep the configured value close to the ODU limit, 1 dB compression tests must
be performed from time to time, even though this task can be troublesome. Maintaining an
accurate maximum power poses a further complication for beam-switching remotes. As a
remote switches from one beam to another, it uses different parts of the RF spectrum, and
the gain response of the BUC is not flat across the spectrum. This means that the maximum
transmit power is different for every beam. Even if the maximum transmit power is
calibrated for every beam, there is still a risk that the value is not 100% correct for every
beam.
During power transitions, typically during fast rain fades and changes in inroute
compositions, CRC errors are observed when a remote changes its power level drastically.
For all remote types, accuracy is not always 100% when moving from one power level to
another. As remotes self-correct through uplink power control, CRC errors can occur during
this short transition period.
Frame Filling
Regarding frame filling efficiency, it should first be recognized that tests with real-world
arrangements of carrier rates and numbers of terminals yield filling efficiencies typically
above 98% on all carriers. The purpose of this section is to explain why the number is not a
solid 100%, as well as to provide rationales for the lower figures that can be observed in
some artificial test cases.
Time slots in the iDirect system can have very different durations. The main constraint
currently imposed is that all slots have the same payload size (100, 170, or 438 bytes). There
is no constraint on code (FEC) rates between carriers, so when a payload is encoded by the
turbo-code with one of the supported rates between 1/2 and 6/7, the number of encoded bits
can be different on different carriers. Furthermore, the different modulation schemes pack
different numbers of coded bits in each transmitted symbol (from 3 bits/symbol for 8PSK to 1
for BPSK). This means that the number of symbols in a burst can vary even more between
carriers. Finally, there is no particular constraint on the symbol rate (other than the overall
limits). Taken together, these factors mean that the duration of a time slot can vary by nearly
a factor of 200. For example, for a 170-byte payload, the time slot duration can be between
about 140 µs and 24 ms. Furthermore, the durations are not generally simple integer
multiples of one another.
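The variation in slot duration can be illustrated with a back-of-the-envelope sketch; the carrier symbol rates below are assumptions chosen for illustration, and burst overhead (pilots, guard) is ignored, so the extremes only approximate the roughly 200:1 range cited above:

```python
# Approximate payload duration of a TDMA slot from payload size, FEC rate,
# modulation, and symbol rate, per the factors described in the text.
BITS_PER_SYMBOL = {"BPSK": 1, "QPSK": 2, "8PSK": 3}

def slot_duration_s(payload_bytes, fec_rate, modulation, symbol_rate_sps):
    coded_bits = payload_bytes * 8 / fec_rate          # after turbo coding
    symbols = coded_bits / BITS_PER_SYMBOL[modulation]
    return symbols / symbol_rate_sps

# Extremes for a 170-byte payload (illustrative carrier rates):
fast = slot_duration_s(170, 6/7, "8PSK", 4_000_000)    # high-rate carrier
slow = slot_duration_s(170, 1/2, "BPSK", 128_000)      # low-rate carrier
print(f"{fast*1e6:.0f} us ... {slow*1e3:.1f} ms (ratio ~{slow/fast:.0f}x)")
```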
Time slot boundaries typically do not align between carriers in the inroute group. Since a
terminal can only transmit on one carrier at a time, assigning it a slot on one carrier
generally prevents it from using one or more overlapping slots on every other carrier. This
phenomenon is known as shadowing. The slot assignment process assigns specific slots to
specific terminals in a way that attempts to minimize its impact. The problem is effectively
eliminated when the number of carriers, and especially the number of terminals, is not very
small: a slot that cannot be assigned to one terminal, due to shadowing or for other reasons,
can usually be assigned to another terminal. This, combined with a sophisticated allocation
algorithm, achieves the 98+% figure quoted above.
It is possible to create artificial situations where the task of filling the frame completely
becomes impossible or impractical. Either with a very small number of terminals or a small
number of carriers and, in particular, if the time slot durations vary widely between carriers,
the achievable frame filling may be reduced substantially due to excessive shadowing. As
stated, this is not a major issue in most practical situations.
Generally, frame filling works well for simple inroute groups with only two types of carriers.
However, as inroute groups become more complex with multiple carrier types, frame filling
efficiencies can drop to the 80% to 90% range. Additionally, depending on the GQoS rules
configured for the inroute group, allocations to individual remotes may be distorted. This
issue is more apparent for complex ATDMA use cases that require multiple types of carriers.
For these use cases, additional trade-offs are required, with some over-provisioning of
bandwidth for disadvantaged carriers.
The MODCODs of an Adaptive carrier are configured when the Inroute Group Compositions are
defined for the Inroute group. After adding an Adaptive carrier to an inroute group, the
operator can select a different MODCOD for the carrier in each IGC. (See the iBuilder User
Guide for details.)
Both Static and Adaptive carriers can be included in any inroute group. While the MODCOD of
an Adaptive carrier can change from one IGC to another, the MODCOD of a Static carrier must
be the same in all IGCs. An inroute group containing only static carriers can have one and only
one IGC since the MODCODs of static carriers cannot change.
Both multichannel line cards and single channel line cards can be included in the same inroute
group. However, only multichannel line cards and receive-only eM1D1 line cards can receive
Adaptive carriers. An upstream carrier assigned to any other type of single channel line card
must be static. That is, the MODCOD of the carrier received by the line card cannot change
from one IGC to another.
The following constraints apply to the configuration of inroute groups and TDMA upstream
carriers:
Only upstream carriers assigned to eM0DM, XLC-M, or receive-only eM1D1 line cards can
be configured as Adaptive. All other upstream carriers must be static.
The upstream carrier Payload Block Size configured in iBuilder must be the same for all
carriers defined for any inroute group and in all IGCs.
A maximum of three IGCs can be configured for an inroute group.
All inroute groups have at least one IGC even if no Adaptive carriers are included.
To define multiple IGCs, an inroute group must have at least one Adaptive carrier.
For each carrier, the center frequency and symbol rate must be the same for all IGCs.
A different MODCOD can be selected for each Adaptive carrier in each IGC.
Spread Spectrum carriers must be Static. (The MODCOD must be the same for all IGCs.)
Table 7-3 shows an example of an inroute group containing four upstream carriers and three
Inroute Group Compositions.
Carrier ID  Static/Adaptive  Center Frequency (MHz)  Symbol Rate (ksym/s)  IGC1      IGC2      IGC3
A1          Adaptive         1300.000                1024                  8PSK 2/3  8PSK 2/3  QPSK 3/4
A2          Adaptive         1301.229                1024                  8PSK 2/3  8PSK 2/3  QPSK 2/3
A3          Adaptive         1302.150                512                   8PSK 6/7  8PSK 2/3  8PSK 2/3
S1          Static           1302.611                256                   QPSK 1/2  QPSK 1/2  QPSK 1/2
The first three carriers (A1, A2 and A3) in Table 7-3 are Adaptive; the fourth carrier (S1) is
Static. Notice that the center frequency and symbol rate of each carrier remain constant
across all compositions. The MODCODs of the Adaptive carriers vary from IGC to IGC. The
MODCOD of the Static carrier is the same for all IGCs. The example in Table 7-3 was designed
for Ku band in the Amazon rain forest.
Remote Configuration
A number of parameters affect Adaptive TDMA operations for individual remotes. These
parameters are described in this section.
In an Adaptive system, it is crucial to set each remote's TDMA Initial Power and TDMA Maximum
Power correctly. Incorrect settings that did not adversely affect system operations in earlier
releases may cause problems in an Adaptive system. iDirect strongly advises reviewing the
procedures for setting these parameters to ensure they are correct for all remotes before
updating an inroute group to use Adaptive TDMA. These procedures are contained in the
Installation and Commissioning Guide for iDirect Satellite Routers.
CAUTION: Adaptive TDMA allows remotes to operate much closer to the maximum
allowed Power Spectral Density. Care must be taken to ensure the TDMA Initial
Power and TDMA Maximum Power of each remote are set accurately. In addition,
the effects of downlink fade must be accounted for in the link design.
NOTE: Maximum Link Impairment only affects IGC selection. It does not affect
the amount of bandwidth allocated to the remote.
Beginning with iDX Release 3.0, iDirect supports the Multicast Fast Path feature. Multicast
Fast Path improves multicast throughput, allowing service providers to offer more reliable
and higher performing services for organizations that want to expand their use of HD
broadcast, IPTV, distance learning, digital signage and other video applications. Beginning
with iDX Release 3.2, iDirect has added support for encryption of Multicast Fast Path traffic.
The Multicast Fast Path feature is available on all remote model types. The remote model
types that can receive encrypted Multicast Fast Path traffic are listed on page 50.
Beginning with iDX Release 3.3.3.1, an X7 remote can be configured to receive Multicast Fast
Path traffic on a separate outbound carrier using its second receiver. A license is required to
enable the second receiver on an X7.
Overview
When Multicast Fast Path is enabled for a multicast stream, downstream multicast packets
received by a remote are forwarded directly to the Ethernet interface by the remote
firmware. This bypasses software processing on the remote, resulting in much higher
multicast throughput. Multicast traffic that does not use the Fast Path feature is handled by
the full software stack and processed as regular data traffic, which limits the maximum
aggregate throughput.
Multicast Fast Path has significantly less impact on total throughput than non-fast path
multicast. Therefore, the unicast performance of the remote is much less affected by
multicast traffic when Multicast Fast Path is enabled for the multicast stream.
There is no requirement to use the Multicast Fast Path feature to transmit user multicast
traffic. Multicast implementations from releases that do not support Multicast Fast Path can
still be used for downstream multicast traffic. As in prior releases, NMS multicast traffic
continues to be transmitted as regular multicast traffic.
Multicast Fast Path streams, allowing the operator to prioritize and better manage multicast
traffic.
All downstream multicast packets that match the rules configured for a Multicast Fast Path
Application are forwarded to the Ethernet by the remote firmware. This bypasses IGMP
processing by the remote software, effectively acting like a persistent multicast group on the
eth0 interface of the remote. Therefore, it is not necessary to explicitly configure a
persistent multicast group for the remote LAN interface for Multicast Fast Path streams.
However, iDirect does recommend configuration of persistent multicast groups in iBuilder for
Networks using Multicast Fast Path streams. This ensures that the Protocol Processor forwards
the multicast packets to the transmit line card for transmission on the downstream carrier.
See the section titled Configuring Remotes for Multicast Fast Path in the iBuilder User
Guide for details on configuring Multicast Fast Path.
As mentioned above, a Protocol Processor blade can generate Multicast Fast Path encryption
keys. The samnc process that runs on the blade is responsible for generating the keys. If the
samnc process is restarted or if multicast responsibility moves to a different blade:
The blade generates and distributes new keys
The time to the next key roll is reset
NOTE: Checking Multicast Overlay enables encrypted Multicast Fast Path traffic
to Evolution X7 remotes over a secondary network.
Refer to the section titled Configuring Remotes for Multicast Fast Path in the iBuilder User
Guide for details on configuring Multicast Fast Path.
NOTE: The primary network refers to the iBuilder network in which the X7 remote
is configured. The secondary network refers to the network transmitting the
downstream carrier with the Multicast Fast Path traffic.
In iDX Release 3.3.3, the X7 Multicast Fast Path on a Second Downstream Carrier feature
provides the following functionality:
An X7 remote can use the second receiver to receive Multicast Fast Path streams from an
iDirect network (that is, a secondary network) other than the one in which it is
configured
When the Multicast Fast Path traffic is encrypted, an X7 can receive Multicast Fast Path
traffic from a single downstream only
This may be the downstream carrier of the primary network or the downstream carrier of
a secondary network received by the X7's secondary receiver.
An X7 always receives its Multicast Fast Path decryption keys from the downstream carrier
received by its primary receiver, even when the Multicast Fast Path traffic is received by its
secondary receiver. Therefore, whenever a secondary network is used for sending encrypted
Multicast Fast Path traffic to X7 remotes, the Protocol Processor distributing the keys to the
remotes and the Protocol Processor encrypting the Multicast Fast Path traffic must have the
same encryption key.
A Global Key Distribution (GKD) Server must be configured to synchronize the keys across
multiple networks. A GKD Server is required even if the two networks are controlled by the
same Protocol Processor.
NOTE: When an X7-ER router is deactivated and not in the network, the second
receiver will remain active and receive Multicast Fast Path traffic from the
primary network. This is a known behavior of this product.
The general tasks that must be performed to enable Multicast Fast Path to the secondary
receiver on an X7 remote are summarized here. For more details, see the iBuilder User
Guide.
Use iBuilder to configure the following items for the X7's secondary network:
1. In the Downstream Application folder, configure the Multicast Application Profile and
associated rules for the multicast stream.
2. On the Network Group QoS tab, create a Multicast Application based on the Multicast
Application Profile, and add the Application to the Multicast Service Profile.
NOTE: Record the number of the Multicast Fast Path Stream displayed on the
Multicast QoS Application dialog box. The stream number is required to select the
multicast streams on the Remote VSAT-2 tab.
3. On the Network Information tab, configure a Persistent Multicast Group for each VLAN
(including the default VLAN) configured by the Multicast Fast Path Application Profile to
carry the Multicast Fast Path packets.
NOTE: Without a Persistent Multicast Group, the Protocol Processor will not
forward the Multicast traffic to the transmit line card for transmission on the
downstream carrier.
5. On the Network tabs, there is no action necessary if the second downstream is controlled
by the local NMS for the primary network. If the second downstream is controlled by
another NMS in a secondary network, perform the following at the Network Information
tab:
Check Multicast Encryption
Check Multicast Overlay
6. For the primary network, at the Remote VSAT-2 tab, check Enable and do the following:
Configure the Second Receiver
Select Rx Frequency and Symbol Rate
Select the Authorized Multicast Fast Path Streams
For details on configuring the second receiver on an X7 remote, see the section titled VSAT-2
Tab in the Configuring Remotes chapter of the iBuilder User Guide.
The Multicast Fast Path streams have been configured in the secondary network.
NOTE: If the Multicast Fast Path stream is to be received only by the second
receivers of X7 remotes in the primary network, skip the step for assigning the
Service Profile to remotes when configuring the MCFP streams in the secondary
network.
The Evolution X7 remotes' second receivers have been enabled and configured to receive
the Multicast Fast Path Streams in the primary network.
NOTE: An Evolution X7's primary and secondary networks may be managed by the
same Protocol Processor, by different Protocol Processors, or even by different
Network Management Systems.
Enabling both primary and secondary networks to receive the Multicast Fast Path encryption
keys from the GKD requires configuring the Protocol Processors, Networks, and Remotes for
both networks using the iBuilder client.
Network Configuration
Configure the Network as follows:
1. At the Network Information tab, check Multicast Encryption and Multicast Overlay. See
Figure 8-3.
2. Click OK.
NOTE: There is no action necessary at the Network Information tab if the second
downstream is controlled by the local NMS for the primary network.
X7 Remote Configuration
Configure the Remotes as follows:
1. Right-click the X7 remote in the secondary network and select Modify → Item from the
menu.
2. Click the Custom tab.
3. In the Hub-side Configuration pane, enter the following Custom Key:
[RMT_FASTPATH_RX2]
encryption = 1
base = 0xffe0
bitmap = 0x000X
where X is a hexadecimal bitmap of the channels on which the remote will receive the
Multicast Fast Path traffic (for instance, 0x0001 represents channel 1).
4. In the Remote-side Configuration pane, enter the following Custom Key:
[ENC]
enc_enabled = 1
enc_mode = 6
5. Click OK to save the changes to the remote.
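The bitmap value above can be derived programmatically. The helper below is a hypothetical illustration (not iDirect software) that assumes bit 0 of the bitmap corresponds to channel 1, consistent with the 0x0001 example:

```python
def fastpath_rx2_bitmap(channels):
    """Build the RMT_FASTPATH_RX2 bitmap custom-key value from a list of
    1-based channel numbers (assumes bit 0 corresponds to channel 1)."""
    value = 0
    for ch in channels:
        value |= 1 << (ch - 1)  # set the bit for this channel
    return f"0x{value:04x}"

# Channel 1 only matches the 0x0001 example in the text
print(fastpath_rx2_bitmap([1]))     # 0x0001
print(fastpath_rx2_bitmap([1, 3]))  # 0x0005
```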
Figure 8-4. Protocol Processor Options File with GKD Node Definitions
4. Delete everything from the options file except the GKD_NODE definitions.
5. If creating a GKD options file for generating Multicast Fast Path Encryption keys, add the
following to the beginning of the options file:
[GKD_KEY_0]
gkd_key_name = multicast_pp1_tx_key
[GKD_KEY_1]
gkd_key_name = multicast_pp1_rx_key
[GKD_NODE_10]
priority = 10
connect_addr = INET;{Primary PP tunnel interface IP};45001
where {Primary PP tunnel interface IP} is the tunnel interface IP address of the PP on
the primary network.
NOTE: All instances of the gkd_key_type keys must be removed from the options
file as well.
NOTE: Each GKD options file must be named gkd_opts.opt and must be present in
the /etc/idirect/gkd directory on the GKD Server.
To create the options files for the other two GKDs in the example, make two additional copies
of this options file and change the GKD_LOCAL_CFG definitions to match the priorities of the
other two GKDs.
[Table: GKD configuration scenarios — columns: Link Description; Rx1 Multichannel
Encryption / MCFP Encryption; Rx2 Multichannel Encryption / MCFP Encryption; GKD
Configured for Multicast Keys]
9 QoS Implementation Principles
This chapter describes Quality of Service and its general implementation in iDirect networks.
It also includes several Group QoS operational scenarios. For additional material on this topic,
see the chapter titled Configuring Quality of Service for iDirect Networks in the iBuilder
User Guide.
QoS Measures
When discussing QoS, at least four interrelated measures are considered: Throughput,
Latency, Jitter, and Packet Loss. This section describes these parameters in general terms,
without specific regard to an iDirect network.
Throughput. Throughput measures the amount of user data that is received by the end user
application. For example, a G729 voice call without additional compression (such as cRTP), or
voice suppression, requires a constant 24 Kbps of application-level RTP data to achieve
acceptable voice quality for the duration of the call. Therefore this application requires 24
Kbps of throughput. When adequate throughput cannot be achieved on a continuous basis to
support a particular application, QoS can be adversely affected.
Latency. Latency measures the amount of time between events. Unqualified latency is the
amount of time between the transmission of a packet from its source and the receipt of that
packet at the destination. If explicitly qualified, it may also mean the amount of time
between a request for a network resource and the time when that resource is received. In
general, latency accounts for the total delay between events and it includes transit time,
queuing, and processing delays. Keeping latency to a minimum is very important for Voice
Over IP (VoIP) applications for human factor reasons.
Jitter. Jitter measures the variation of latency on a packet-by-packet basis. Referring to the
G729 example again, if voice packets (containing two 10 ms voice samples) are transmitted
every 20 ms from the source VoIP equipment, ideally those voice packets arrive at the
destination every 20 ms; this is a jitter value of zero. When dealing with a packet-switched
network, zero jitter is particularly difficult to guarantee. To compensate for this, all VoIP
equipment contains a jitter buffer that collects voice packets and forwards them at the
appropriate interval (20 ms in this example).
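As a concrete illustration of this definition (generic example code, not part of any iDirect product), jitter can be estimated from packet arrival timestamps as the average deviation of inter-arrival gaps from the nominal 20 ms spacing:

```python
def interarrival_jitter_ms(arrival_times_ms, nominal_ms=20.0):
    """Mean absolute deviation of inter-arrival gaps from the nominal
    packet spacing -- a simple jitter estimate, in milliseconds."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return sum(abs(g - nominal_ms) for g in gaps) / len(gaps)

# Ideal arrivals every 20 ms give zero jitter
print(interarrival_jitter_ms([0, 20, 40, 60]))  # 0.0
# One packet delayed by 5 ms perturbs two gaps (20, 25, 15 ms)
print(interarrival_jitter_ms([0, 20, 45, 60]))  # ≈ 3.33
```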
Packet Loss. Packet Loss measures the number of packets that are transmitted by a source,
but not received by the destination. The most common cause of packet loss is network
congestion. Congestion occurs whenever the volume of traffic exceeds the available
bandwidth causing packets to fill queues internal to network devices at a rate faster than
those packets can be transmitted from the device. When this condition exists, network
devices drop packets to keep the network in a stable condition. Applications that are built on
a TCP transport interpret the absence of these packets (and the absence of their related
ACKs) as congestion and they invoke standard TCP slow-start and congestion avoidance
techniques. With real-time applications, such as VoIP or streaming video, it is often
impossible to gracefully recover these lost packets because there is not enough time to
retransmit lost packets. Packet loss may affect the application in adverse ways. For example,
parts of words in a voice call may be missing or there may be an echo; video images may break
up or become block-like (pixelation effects).
Service Levels
Each packet that enters the iDirect system is classified into one of the configured Service
Levels. A Service Level may represent a single application (such as VoIP traffic from a single IP
address) or a broad class of applications (such as all TCP based applications). Each Service
Level is defined by one or more packet-matching rules.
In iDirect TCP/IP networks, the set of rules for a Service Level allows logical combinations of
comparisons to be made between the following IP packet fields:
Source IP address
Destination IP address
Source port
Destination port
Protocol (such as TCP, UDP, or ICMP)
DiffServ DSCP
TOS priority
TOS precedence
VLAN ID
The Layer 3 packet classifiers listed above can also be used in L2oS networks to classify
Ethernet frames containing IPv4 packets. In addition, when using L2oS, the following Layer 2
classifiers can also be configured in iBuilder:
Source MAC Address
Destination MAC Address
SVN ID
SVN PCP
Ethernet Type
Effective Ethernet Type
VLAN PCP
An iDirect TCP/IP network is limited to IPv4 traffic. However, since higher-level protocols are
transparent to Layer 2, IPv6 can also be transported when using L2oS. In that case, the
following classifiers can be used for the IPv6 packets embedded in the Ethernet frames:
Source IPv6 Address
Source IPv6 Subnet Mask
Destination IPv6 Address
Destination IPv6 Subnet Mask
The iDirect L2oS feature is described in Layer 2 over Satellite on page 221.
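The classification step can be pictured as a first-match scan over the configured rules. The sketch below is purely illustrative, with hypothetical field names and simplified equality-only matching; the actual iDirect classifier also supports ranges and logical combinations of the fields listed above:

```python
# Illustrative service-level classifier: each Service Level carries a list
# of match rules; a packet is assigned to the first Service Level for which
# any rule matches on all of its fields. Field names are hypothetical.
def matches(rule, packet):
    return all(packet.get(field) == value for field, value in rule.items())

def classify(service_levels, packet):
    for name, rules in service_levels:
        if any(matches(rule, packet) for rule in rules):
            return name
    return "Default"

service_levels = [
    ("VoIP", [{"protocol": "UDP", "dst_port": 5060}]),
    ("All TCP", [{"protocol": "TCP"}]),
]

print(classify(service_levels, {"protocol": "UDP", "dst_port": 5060}))  # VoIP
print(classify(service_levels, {"protocol": "TCP", "dst_port": 80}))    # All TCP
print(classify(service_levels, {"protocol": "ICMP"}))                   # Default
```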
Packet Scheduling
Packet Scheduling determines the order in which packets are transmitted according to
priority and classification.
When a remote always has sufficient bandwidth for all of its applications, packets are
transmitted in the order that they are received without significant delay. Priority makes little
difference since the remote never has to select which packet to transmit next.
However, when a remote does not have sufficient bandwidth to transmit all queued packets
the remote scheduling algorithm must determine which packet to transmit next from the set
of queued packets across a number of service levels.
For each service level defined in iBuilder, the Network Operator can select any one of the
following queue types to determine how packets using that service level are to be selected
for transmission: Priority Queue, Class-Based Weighted Fair Queue (CBWFQ), and Best-Effort
Queue.
Priority Queues are emptied before CBWFQ queues are serviced; CBWFQ queues are in turn
emptied before Best Effort queues are serviced.
The packet scheduling algorithm (Figure 9-2) first services packets from Priority Queues in
order of priority, P1 being the highest priority for non-multicast traffic. It selects CBWFQ
packets only after all Priority Queues are empty. Similarly, packets are taken from Best Effort
Queues only after all CBWFQ packets are serviced.
A Network Operator can define multiple service levels using any combination of the three
queue types. For example, the operator can define a combination of Priority and Best Effort
Queues only.
Priority Queues
There are five levels of Priority Queues:
Multicast: (Highest priority. Only for downstream multicast traffic.)
Level 1: P1
Level 2: P2
Level 3: P3
Level 4: P4 (Lowest priority)
All queues of higher priority must be empty before any lower-priority queues are serviced. If
two or more queues are set to the same priority level, then all queues of equal priority are
emptied using a round-robin selection algorithm prior to selecting any packets from lower-
priority queues.
Best Effort queues are only serviced if there are no packets waiting in Priority Queues and
no packets waiting in CBWFQ queues.
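The queue-servicing order described above can be sketched as follows (an illustrative model only, not the iDirect scheduler; round-robin among equal-priority queues and CBWFQ weighting are simplified away):

```python
from collections import deque

def next_packet(priority_levels, cbwfq_queues, best_effort_queues):
    """Pick the next packet: Priority levels in order (P1 highest),
    then CBWFQ, then Best Effort."""
    for level in priority_levels:           # P1, P2, P3, P4 in order
        for q in level:                     # equal-priority queues scanned in turn
            if q:
                return q.popleft()
    for q in cbwfq_queues:                  # CBWFQ weights omitted in this sketch
        if q:
            return q.popleft()
    for q in best_effort_queues:
        if q:
            return q.popleft()
    return None

p1, p2 = deque(["voip"]), deque(["telnet"])
cbwfq, best_effort = deque(["http"]), deque(["bulk"])

order = []
while (pkt := next_packet([[p1], [p2]], [cbwfq], [best_effort])) is not None:
    order.append(pkt)
print(order)  # ['voip', 'telnet', 'http', 'bulk']
```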
Application Throughput
Application throughput depends on proper classification and prioritization of packets and on
proper management of available bandwidth. For example, if a VoIP application requires 16
Kbps and a remote is only given 10 Kbps the application fails regardless of priority, since there
is not enough available bandwidth.
In iDirect, bandwidth assignment is controlled by the Protocol Processor. As a result of the
various network topologies (for example, a shared TDM downstream with a deterministic
TDMA upstream), the Protocol Processor has different mechanisms for downstream control
versus upstream control. Downstream control of bandwidth is provided by continuously
evaluating network traffic flow and assigning bandwidth to remotes as needed. The Protocol
Processor assigns bandwidth and controls the transmission of packets for each remote
according to the QoS parameters defined for the remote's downstream.
Upstream bandwidth is requested continuously with each TDMA burst from each remote. A
centralized bandwidth manager integrates the information contained in each request and
produces a TDMA burst timeplan which assigns individual bursts to specific remotes. The burst
timeplan is produced once per TDMA frame (typically 125 ms or 8 times per second).
NOTE: There is a 250 ms delay between the time that the remote makes a request
for bandwidth and the time that the Protocol Processor transmits the burst
timeplan to it.
iDirect has developed a number of features to address the challenges of providing adequate
bandwidth for a given application. These features are discussed in the sections that follow.
For information on configuring these features and for a discussion of additional QoS
properties, see the chapter titled, Configuring Quality of Service for iDirect Networks of the
iBuilder User Guide.
latency resulting in a poor user experience if the delay is noticeable. iDirect recommends that
this feature be used with care.
Beginning with iDX Release 3.1, iBuilder allows a Network Operator to configure Minimum
Information Rates for remotes below the limit of one slot every two seconds that was
supported in previous releases. However, if the configured Minimum Information Rate is not
supported by the network or the iDirect hardware, remotes may drop out of the network. See
page 110 for more information and for recommendations on setting the Minimum Information
Rate for remotes.
Also beginning with iDX Release 3.1, iBuilder allows a Network Operator to enable the Idle and
Dormant States feature to dynamically reduce the remote's Minimum Information Rate based
on the length of time that the remote has no user traffic to transmit on the TDMA upstream
carrier. This feature is described in Remote Idle and Dormant States on page 107.
For instructions for configuring Minimum Information Rate and Idle and Dormant States, see
the section titled Upstream and Downstream Rate Shaping in the chapter titled
Configuring Remotes of the iBuilder User Guide.
The QoS bandwidth allocation algorithm does not strictly enforce MIR for upstream traffic.
Therefore, it is possible that a remote may receive more bandwidth than the configured
maximum if free bandwidth is available. However, this does not affect bandwidth allocations
for competing remotes or QoS nodes. MIR is strictly enforced for outbound traffic.
Sticky CIR
Sticky CIR is activated only when CIR is over-subscribed on the downstream or on the
upstream. When enabled, Sticky CIR favors remotes that have already received their CIR over
remotes that are currently asking for it. When disabled (the default setting), the Protocol
Processor reduces assigned bandwidth to all remotes to accommodate a new remote in the
network. Sticky CIR can be configured in the Bandwidth Group and Service Group level
interfaces in iBuilder.
Application Jitter
Jitter is the variation in packet-by-packet latency for a specific application traffic flow. For
an application like Voice over IP (VoIP), the transmitting equipment spaces each packet at a
known fixed interval (every 20 ms, for example). However, in a packet switched network,
there is no guarantee that the packets will arrive at their destination with the same interval
rate. To compensate for this, the receiving equipment stores incoming packets in a jitter
buffer and attempts to play out the arriving packets at the interval at which they were
transmitted. To accomplish this, additional latency is introduced by buffering packets for a
certain amount of time before forwarding them at the fixed interval.
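The buffering trade-off described above can be made concrete with a small calculation (illustrative only; `min_buffer_delay_ms` is a hypothetical helper, not part of any iDirect or VoIP product): the jitter buffer must delay playout by at least the worst-case arrival lateness.

```python
def min_buffer_delay_ms(arrivals_ms, interval_ms=20.0):
    """Smallest initial buffering delay D (in ms) such that packet i, sent
    at i * interval_ms, has arrived by its playout time D + i * interval_ms."""
    return max(t - i * interval_ms for i, t in enumerate(arrivals_ms))

# Packets sent every 20 ms; transit delays are 5, 5, 17, and 5 ms, so the
# buffer must add 17 ms of latency to play out at a smooth 20 ms interval.
print(min_buffer_delay_ms([5, 25, 57, 65]))  # 17.0
```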
While jitter plays a role in both downstream and upstream directions, an iDirect network
tends to introduce more jitter in the upstream direction than in the downstream direction.
This is due to the discrete nature of the TDMA timeplan which only allows a remote to
transmit data in assigned slots. Jitter is introduced when the inter-slot times assigned to a
particular remote do not match the desired play out rate.
Another source of jitter is additional traffic transmitted between (or in front of) successive
packets in the real-time stream. When a large packet needs to be transmitted in front of a
real-time packet, jitter is introduced because the remote must wait longer than desired
before transmitting the next real-time packet.
Packet Segmentation
Beginning with iDS Release 8.2, Segmentation and Reassembly (SAR) and Packet Assembly and
Disassembly (PAD) have been replaced by a more efficient iDirect application. The Network
Operator can configure the downstream segment size in iBuilder. Beginning with iDX Release
3.0, the operator can also configure the SCPC upstream segment size. However, TDMA
upstream packet segmentation is handled internally and is optimized automatically.
Some applications may require changing the downstream or SCPC upstream segment size on
small carriers to reduce jitter in the downstream or SCPC upstream packets. Typically, this is
not required. For details on configuring the downstream segment size, see the chapter on
Configuring Remotes in the iBuilder User Guide.
Application Latency
Application latency is typically a concern for transaction-based applications such as credit
card verification systems that require a rapid completion time per transaction. For
applications like these, it is important that the priority traffic be expedited through the
system at the expense of the less important background traffic. This is especially important in
bandwidth-limited conditions where a remote may have a limited number of TDMA slots on
which to transmit its packets. In that case, it is important to minimize application latency as
much as possible after the distributor's QoS decision. Accomplish this by configuring a higher
priority for transaction-based applications, allowing those packets to be transmitted
immediately.
Group QoS
Group QoS (GQoS), introduced in iDS Release 8.0, enhances the power and flexibility of
iDirect's QoS feature for TDMA networks. It allows advanced Network Operators a high degree
of flexibility in creating subnetworks and groups of remotes with various levels of service
tailored to the characteristics of the user applications being supported.
Group QoS is built on the Group QoS tree: a hierarchical construct within which containership
and inheritance rules allow the iterative application of basic allocation methods across groups
and subgroups. QoS properties configured at each level of the Group QoS tree determine how
bandwidth is distributed when demand exceeds availability.
Group QoS enables the construction of very sophisticated and complex allocation models. It
allows Network Operators to create network subgroups with various levels of service on the
same outbound carrier or Inroute Group. It allows bandwidth to be subdivided among
customers or Service Providers, while also allowing oversubscription of one group's configured
capacity when bandwidth belonging to another group is available.
For details on using the Group QoS feature, see the chapter titled Configuring Quality of
Service for iDirect Networks in the iBuilder User Guide.
Bandwidth Pool
A Bandwidth Pool is the highest node in the Group QoS hierarchy. As such, all sub-nodes of a
Bandwidth Pool represent subdivisions of the bandwidth within that Bandwidth Pool. In the
iDirect network, a Bandwidth Pool consists of an outbound carrier or an Inroute Group.
Bandwidth Group
A Bandwidth Pool can be divided into multiple Bandwidth Groups. Bandwidth Groups allow a
Network Operator to subdivide the bandwidth of an outroute or Inroute Group. Different
Bandwidth Groups can then be assigned to different Service Providers or Virtual Network
Operators (VNO).
Bandwidth Groups can be configured with QoS properties such as:
CIR and MIR: Typically, the sum of the CIR bandwidth of all Bandwidth Groups equals the
total bandwidth. When MIR is larger than CIR, the Bandwidth Group is allowed to exceed
its CIR when bandwidth is available.
Priority: A group with highest priority receives its bandwidth before lower-priority
groups.
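As a sketch of how CIR, MIR, and Priority interact at this level (a toy model of the stated rules, not the iDirect bandwidth manager):

```python
def allocate(groups, capacity):
    """Toy allocation sketch: satisfy CIR in priority order (1 = highest),
    then distribute what remains up to each group's MIR, again in priority
    order. groups is a list of (name, priority, cir, mir) tuples."""
    grants = {name: 0.0 for name, _, _, _ in groups}
    ordered = sorted(groups, key=lambda g: g[1])
    for name, _, cir, _ in ordered:            # CIR pass
        grant = min(cir, capacity)
        grants[name] += grant
        capacity -= grant
    for name, _, _, mir in ordered:            # MIR pass with leftover bandwidth
        extra = min(mir - grants[name], capacity)
        grants[name] += extra
        capacity -= extra
    return grants

# Two Bandwidth Groups on a shared outbound (illustrative numbers, in Mbps)
groups = [("A", 1, 6.0, 10.0), ("B", 2, 4.0, 8.0)]
print(allocate(groups, 10.0))  # {'A': 6.0, 'B': 4.0} -- CIR sum equals capacity
print(allocate(groups, 12.0))  # {'A': 8.0, 'B': 4.0} -- A takes the 2 Mbps surplus
```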
Service Group
A Service Provider or a Virtual Network Operator can further divide a Bandwidth Group into
sub-groups called Service Groups. A Service Group can be used strictly to group remotes into
sub-groups or to differentiate groups by class of service. For example, a platinum, gold, silver
and best effort service could be defined as Service Groups under the same Bandwidth Group.
Like Bandwidth Groups, Service Groups can be configured with CIR, MIR, Priority and Cost.
Service Groups are typically configured with either a CIR and MIR for a physical separation of
the groups, or with a combination of Priority, Cost and CIR/MIR to create tiered service.
Beginning with iDX Release 2.1, there are two types of Service Groups, Application Service
Groups and Remote Service Groups. An Application Service Group contains multiple
Applications. Remotes assigned to an Application Service Group share the bandwidth assigned
to the various Applications in the group. When using Remote Service Groups, a remote
becomes a container node for its Applications. Each remote is configured with its own QoS
properties such as MIR and CIR and independently allocates that bandwidth to its
Applications. Remote Service Groups allow the Network Operator to configure bandwidth for
individual remotes and then assign multiple Applications to the remotes. The bandwidth
allocated to the Applications under a remote is taken from bandwidth granted to the
individual remote; it is not shared with other remotes as it is with Application Service Groups.
Note that this structure allows remotes to retain their QoS configuration when moving
between networks.
The use of and the differences between each type of Service Group are discussed in detail in
the iBuilder User Guide.
Application
An Application defines a specific service available to the end user. Applications are defined by
Application Profiles and associated with any Service Group. The following are examples:
VoIP
VLAN
Multicast
NMS Traffic
Default
NOTE: Beginning with iDX Release 3.0, Multicast Fast Path Applications can be
configured for remotes in iBuilder. Multicast Fast Path bypasses software
processing on the remote resulting in improved throughput. For details, see
Multicast Fast Path on page 49.
Each Application can have one or more Service Levels with matching rules such as:
Protocol: TCP, UDP, and ICMP
Source and/or Destination IP or IP Subnet
Source and/or Destination Port Number
DSCP Value or DSCP Ranges
VLAN
Each Application can be configured with various QoS properties such as:
CIR/MIR
Priority
Cost
Service Profiles
Service Profiles are applicable only to Application Service Groups. A Service Profile is created
by selecting Applications from the associated Application Service Group and configuring the
Group QoS properties (such as CIR and MIR) of the Service Profile. While the Application
Service Group specifies the CIR and/or MIR by Application for the entire Application Service
Group, the Service Profile specifies the per-remote CIR and/or MIR by Application. For
example, the VoIP Application could be configured with a CIR of 1 Mbps for the Service Group
in the Application Service Group and a CIR of 14 Kbps per remote in the Service Profile.
Typically, remotes in an Application Service Group use the Default Profile for that Service
Group. In order to accommodate special cases, however, additional profiles (other than the
Default Profile) can be created by an operator with Group QoS Planning permissions. For
example, profiles can be used by a specific remote to prioritize an Application that is not
used by other remotes; to prioritize a specific VLAN on a remote; or to prioritize traffic to a
specific IP address (such as a file server) connected to a specific remote in the Service Group.
Or a Network Operator may want to configure some remotes for a single VoIP call and others
for two VoIP calls. This can be accomplished by assigning different Service Profiles to each
group of remotes.
Remote Profiles
Remote Profiles are applicable only to Remote Service Groups. Like Service Profiles, Remote
Profiles define the Applications that are used by the remotes. Unlike Service Profiles, the
Applications defined by Remote Profiles are subnodes of the Remotes in the Group QoS tree.
Each Application in a Remote Profile can be configured with its own CIR, MIR, etc. which
determine how bandwidth is shared on individual remotes that have the Remote Profile
assigned. The Applications are themselves built from Application Profiles, which contain the
QoS Service Levels and Rules governing the bandwidth usage of the remote.
NOTE: Another solution would be to create a single Bandwidth Group with two
Service Groups. This solution would limit the flexibility, however, if the satellite
provider decides in the future to further split each group into sub-groups.
VoIP could also be configured as priority 1 traffic. In that case, demand for VoIP must be fully
satisfied before serving lower priority applications. Therefore, it is important to configure an
MIR to avoid having VoIP consume all available bandwidth.
Note that cost could be used instead of priority if the intention were to have a fair allocation
rather than to satisfy the Platinum service before any bandwidth is allocated to Gold; and
then satisfy the Gold service before any bandwidth is allocated to Silver. For example:
Platinum Cost 0.1 - CIR 6 Mbps, MIR 12 Mbps
Gold Cost 0.2 - CIR 6 Mbps, MIR 18 Mbps
Silver Cost 0.3 - No CIR, No MIR Defined
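One simple reading of cost-based fairness is that free bandwidth is shared in proportion to 1/cost, so lower-cost groups receive larger shares without any group being starved outright. This interpretation is an assumption for illustration only; the actual allocation algorithm is more involved:

```python
def cost_shares(services, capacity):
    """Toy cost-based split: bandwidth divided in proportion to 1/cost,
    so Platinum (cost 0.1) gets twice Gold's (cost 0.2) share. services
    is a list of (name, cost) tuples; capacity is in Mbps."""
    weights = {name: 1.0 / cost for name, cost in services}
    total = sum(weights.values())
    return {name: capacity * w / total for name, w in weights.items()}

# With the example costs above and 11 Mbps of demand-limited capacity:
shares = cost_shares([("Platinum", 0.1), ("Gold", 0.2), ("Silver", 0.3)], 11.0)
print(shares)  # Platinum ≈ 6.0, Gold ≈ 3.0, Silver ≈ 2.0 Mbps
```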
Remote 2 is on board a private vessel that requires bandwidth for VoIP as well as for more
general internet traffic such as web browsing. The VoIP application has a CIR of 64 kbps to
ensure sufficient bandwidth for high-quality voice calls. Limiting the CIR for other
applications to 448 kbps ensures that VoIP traffic will be granted the 64 kbps even if the
remote's demand for other bandwidth is greater than 448 kbps. The 512 kbps of MIR for other
applications is shown for clarity. It is not really necessary to configure the 512 kbps of MIR for
these applications since the remote itself is already limited to 512 kbps.
NOTE: When bandwidth is allocated for a remote, the CIR and MIR are scaled to
the remote's Nominal MODCOD. At higher levels of the Group QoS tree (Bandwidth
Group, Service Group, etc.), CIR and MIR are scaled to the network's best
MODCOD.
Remote 3 receives 300 Kbps * 1.2382 / 3.6939 = 101 Kbps of Best Effort for a Total of
1.101 Mbps
Figure 9-11 shows two remotes, Remote 1 and Remote 2, each configured with a CIR of 1
Mbps. Remote 1 is operating at a Nominal MODCOD of 8PSK 3/4. Remote 2 is operating at a
Nominal MODCOD of QPSK 3/4. Both remotes are requesting their full CIR, but only enough
bandwidth to satisfy 1.65 Mbps of CIR at 8PSK 3/4 is available. Note that QPSK 3/4 requires
about 1.5 times the raw satellite bandwidth of 8PSK 3/4 to deliver the same CIR.
The tree on the left-hand side of Figure 9-11 shows the result of disabling Bandwidth
Allocation Fairness Relative to MODCOD for the Service Group. The satellite bandwidth is split
equally between Remote 1 and Remote 2 until the bandwidth is exhausted. This results in
Remote 1 receiving 825 Kbps of CIR and Remote 2 receiving 550 Kbps of CIR.
The tree on the right-hand side of Figure 9-11 shows the result of enabling Bandwidth
Allocation Fairness Relative to MODCOD for the Service Group. Each remote receives enough
bandwidth to carry 660 Kbps CIR. To accomplish this, Remote 2 must be granted 1.5 times the
satellite bandwidth of Remote 1.
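The figures in this example can be checked with a few lines of arithmetic (illustrative calculation only):

```python
# Worked check of the Figure 9-11 example. 8PSK 3/4 carries 3 bits/symbol
# and QPSK 3/4 carries 2, so QPSK needs 1.5x the satellite bandwidth for
# the same information rate.
CAPACITY = 1.65                                  # Mbps of CIR if all at 8PSK 3/4
EXPANSION = {"Remote 1": 1.0, "Remote 2": 1.5}   # bandwidth units per Mbps of CIR

# Fairness disabled: the satellite bandwidth itself is split equally
half = CAPACITY / 2
disabled = {r: round(half / f * 1000) for r, f in EXPANSION.items()}
print(disabled)  # {'Remote 1': 825, 'Remote 2': 550} (Kbps of CIR)

# Fairness enabled: equal CIR r for both, where r*1.0 + r*1.5 = 1.65
enabled_kbps = round(CAPACITY / sum(EXPANSION.values()) * 1000)
print(enabled_kbps)  # 660 Kbps of CIR for each remote
```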
Allocation Fairness Relative to Operating MODCOD operates similarly to Allocation Fairness
Relative to Nominal MODCOD, except that adjustments are made dynamically based on the
MODCODs at which remotes are currently operating rather than the remotes' nominal
MODCODs. This favors remotes currently operating at lower MODCODs, since their satellite
bandwidth allocations must increase to achieve the same information rate as remotes
operating at higher MODCODs. For additional information, see the section titled Allocation
Fairness Relative to MODCOD in the iBuilder User Guide.
A TDMA remote attempts to join an iDirect network by sending an acquisition burst to the hub
in an upstream carrier acquisition slot. The hub assigns upstream TDMA acquisition slots to
remotes in the burst timeplan, which is broadcast to all remotes on the downstream carrier.
To join the network, the remote must transmit the acquisition burst at a power level that
allows the burst to be successfully demodulated by the hub line card receiving the upstream
carrier. After the remote has joined the network, the Uplink Control Process at the hub takes
over to keep the remote in the network at the correct power.
TDMA initial transmit power is the power level at which the remote transmits acquisition
bursts. The initial transmit power is determined during remote commissioning and configured
in iBuilder. After the remote is commissioned, the value configured in iBuilder and contained
in the remote options file is used whenever the remote re-joins the network.
Beginning with iDX Release 3.2, each upstream carrier in an inroute group can have a
different MODCOD, symbol rate, and spreading factor. Specific values for these upstream
carrier parameters (collectively called the remote's Reference Carrier) are entered into
iBuilder along with the TDMA Initial Power. The value configured for TDMA Initial Power is
defined in relation to the configured Reference Carrier parameters.
A remote may be invited to acquire the network on any upstream carrier in the inroute group
that has acquisition enabled. When sending an acquisition burst on an upstream carrier with
characteristics that differ from the configured Reference Carrier, the remote uses its
Reference Carrier parameters to adjust the initial transmit power so that the acquisition burst
is received by the hub line card within the correct C/N range for that carrier.
This chapter describes why it is important to correctly configure the TDMA initial transmit
power for all remotes. Additional information is contained in the following iDirect
documentation:
The Installation and Commissioning Guide for iDirect Satellite Routers contains the
procedure for determining the correct TDMA Initial Power and Reference Carrier
parameters for a remote.
Remote Acquisition on page 147 describes the process by which a remote joins an
iDirect network.
Uplink Control Process on page 97 describes uplink power control in iDirect networks.
As shown in Figure 10-1, under ideal circumstances, the average C/N of all remotes on the
upstream carrier is equal to the center of the UCP adjustment range. Notice that the optimal
detection range extends below the threshold C/N of the upstream carrier.
A remote attempting to acquire using traditional acquisition (rather than Superburst) with a
C/N value of less than 7 dB will not acquire the network.
Figure 10-2. Skewed C/N Detection Range: Initial Transmit Power Too High
Figure 10-3. Skewed Detection Range: Initial Transmit Power Too Low
The iDirect Uplink Control Process executes at the hub to bring remotes into the network and
to maintain the correct frequency, power and timing during operation. This is accomplished
by continuously monitoring each remote's upstream performance at the hub and instructing
the remote to adjust its transmission as required to remain within acceptable limits.
Acquisition
Satellites drift through the station-keeping box, adding approximately 1.7 ms of uncertainty
to the symbol timing. The 1.7 ms timing uncertainty consists of 0.1° of satellite station-
keeping uncertainty and approximately 50 miles of remote position uncertainty.
NOTE: Symbol timing uncertainty is higher than 1.7 ms for inclined orbit
applications. For those applications, the Acquisition Aperture for the upstream
TDMA carriers in iBuilder should be adjusted. Consult the iDirect TAC for advice.
This variation in symbol timing is accounted for during remote acquisition by providing a
larger guard interval in the TDMA frame for acquisition slots than for traffic slots. A larger
guard interval in the acquisition slot prevents the acquiring remote from bursting into the
traffic slots allocated to other remotes. This is illustrated in Figure 11-1.
Frequency offset is the difference between the nominal frequency at which the remote
transmits and the frequency at which the hub demodulator receives the upstream carrier.
Frequency offset occurs even after the remote's clock is synchronized to the hub clock. The
hub downconverter equipment and the Doppler effect due to satellite and remote terminal
motion are major contributors to frequency offset.
iDX Release 3.3 supports two types of remote acquisition: Traditional (or Fast) Acquisition
and Superburst Acquisition. Superburst Acquisition greatly improves the time and bandwidth
required for remotes to join the network and should be used whenever possible. However, in
this release, Superburst Acquisition can only be used on upstream carriers being received by
multichannel line cards.
Once a remote that is ready to join the network has locked to the downstream carrier, it
begins to burst in its assigned acquisition slots at the frequency offsets indicated by the hub.
Once a burst from the remote is detected at the hub, the hub sends the frequency offset
correction to the remote.
When receiving a traditional acquisition burst, the TDMA demodulator at the hub has a narrow
tolerance for frequency offset (approximately 1.5% of the upstream carrier symbol rate).
Therefore, during traditional acquisition, the remote must burst at various frequencies until
the demodulator detects the upstream carrier. The hub sends invitations on the downstream
carrier to all remotes that are not in the network to transmit at different frequency offsets.
When receiving a Superburst, the hub demodulator's tolerance for frequency offset improves
to approximately 7.5% of the symbol rate. A Superburst is also a much more robust waveform
that is independent of the traffic MODCOD. These advantages allow the hub to detect a
Superburst over a much wider frequency range and at a much lower C/N when compared to a
traditional acquisition burst. Therefore, in most cases, a remote must only transmit a single
Superburst to acquire the network. When using Superburst, frequency sweeping is typically
not required.
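The difference in frequency-offset tolerance can be expressed as a simple helper. The 1.5% and 7.5% figures come from the text above; the function name is illustrative only, not an iDirect API.

```python
def freq_tolerance_hz(symbol_rate_sps, superburst=False):
    # Traditional acquisition: ~1.5% of the upstream symbol rate.
    # Superburst: ~7.5% of the symbol rate.
    return symbol_rate_sps * (0.075 if superburst else 0.015)

# For a 2 Msps upstream carrier:
print(freq_tolerance_hz(2e6))        # ~30 kHz (traditional)
print(freq_tolerance_hz(2e6, True))  # ~150 kHz (Superburst)
```

The five-fold wider detection range is one reason a single Superburst is usually sufficient, while traditional acquisition must sweep across frequencies.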
During remote commissioning, an initial transmit power is determined for the remote and
configured in iBuilder. The initial power is chosen so that the remote can enter the network in
rain fade conditions. During acquisition, the remote always transmits at the configured initial
power if acquiring on an upstream carrier with the same parameters as its configured
reference carrier. (See Reference Carrier Parameters on page 46.) If a remote is acquiring on
a carrier that does not match the configured reference carrier, the remote adjusts its initial
power to maintain the spectral density of the transmission. For details on both Traditional
Acquisition and Superburst Acquisition, see Remote Acquisition on page 147.
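A minimal sketch of the initial-power adjustment, assuming it simply holds power spectral density constant: total power then scales with occupied bandwidth, and hence with symbol rate. The helper is hypothetical, not an iDirect API, and ignores spreading-factor effects.

```python
import math

def initial_power_offset_db(ref_symbol_rate_sps, carrier_symbol_rate_sps):
    # Constant spectral density: power (dB) shifts by the ratio of
    # occupied bandwidths, proportional to the symbol-rate ratio.
    return 10 * math.log10(carrier_symbol_rate_sps / ref_symbol_rate_sps)

# Acquiring on a carrier with twice the reference symbol rate:
print(round(initial_power_offset_db(1e6, 2e6), 2))  # ~3.01 dB higher
```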
Once the remote has joined the network, the operational UCP algorithm takes over to
maintain optimal transmit power for the remote. For remotes in Adaptive Inroute Groups,
uplink control selects the initial nominal upstream carrier for the remote based on the C/N of
the acquisition burst. This initial selection is based on the Acquisition Margin (M3) configured
in iBuilder on the Uplink Control tab of the inroute group. The Acquisition Margin is separate
from the normal margins, due to the larger uncertainty of the C/N estimation on the
Superburst and to allow for other system uncertainties such as BUC aging and frequency
response variations. This margin is used both for Superburst and traditional burst acquisition.
For details on configuring this parameter, see the iBuilder User Guide.
Network Operation
Once the remote has acquired the network, the Uplink Control Process continuously corrects
the frequency, symbol timing and power while the remote is in the network. For remotes in
Adaptive Inroute Groups, the UCP process is also responsible for switching remotes to new
nominal upstream carriers as required. A remote's nominal carrier is the upstream carrier
with the highest threshold C/N0 that the remote can sustain and is allowed to use. All power
control and other real-time corrections are related to the nominal carrier. Power control is
described in UCP Power Control and Fade Detection on page 100.
The Guard Interval is no longer entered into iBuilder in symbols. It is now entered in units of
system time called NCR ticks. For clarity, the number of NCR ticks entered is translated to
microseconds and displayed on the screen (Figure 11-2).
Figure 11-2. iBuilder: Maximum Speed and Guard Interval for Inroute Group
A remote may be assigned slots on an upstream carrier that does not match its current
nominal carrier. For example, during upstream bandwidth contention, a remote may be
granted slots on a less efficient carrier if there are no available slots on the nominal carrier. In
that case, the remote automatically adjusts its transmit power such that the power matches
what is required on the assigned carrier.
In earlier releases, the target C/N (or TDMA Nominal C/N) was an operator-entered value
determined by adding the C/N threshold for the inroute from the Link Budget Analysis Guide
to the additional operating margin determined by the Link Budget Analysis for the network.
Beginning in iDX Release 3.2, the target C/N is calculated using the C/N thresholds for the
inroutes from the Link Budget Analysis Guide and the following margins configured in
iBuilder:
The Fade Slope Margin (M1) allows for incremental fade that can occur during the
reaction time of the power control algorithm as well as the uncertainty in the C/N0
estimations.
The Hysteresis Margin (M2) is added to the Fade Slope Margin to prevent unnecessarily
frequent switching between carriers.
NOTE: The Acquisition Margin (M3) is used only to select the initial nominal
carrier when a remote acquires the network. This margin is discussed on page 98.
The system adds the sum of the Fade Slope Margin and the Hysteresis Margin to the C/N
thresholds from the LBA Guide to determine the target C/Ns for each carrier in the inroute
group. iDirect provides a dimensioning tool to assist network designers in determining suitable
values for these parameters.
If a remote's C/N falls below the target C/N, the power control algorithm increases the
remote's transmit power, if possible, to bring the signal back up to the target level. If the
remote's power cannot be increased on the nominal carrier and a more protected carrier is
available, then the remote's nominal carrier is changed to the more robust carrier. This keeps
the remote in the network at the expense of diminished throughput. If the remote is
consistently below the threshold defined by the target C/N minus the Hysteresis Margin on the
most protected carrier in the inroute group, the remote is logged out of the network. A
remote that has been logged out must re-acquire the network before it can continue to
transmit user traffic.
If a remote's C/N is above the target C/N plus the Hysteresis Margin, the power control
algorithm looks for a more efficient carrier on which the remote can maintain the target C/N.
If such a carrier is found and if UCP estimates that the remote is capable of transmitting on
that carrier, the remote's nominal carrier is changed to the new carrier. If the remote cannot
switch to a better carrier, the power control algorithm decreases the power as necessary to
return the remote's signal to the target C/N.
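The switching rules above can be sketched as a decision function. This is a simplified illustration, not the actual UCP implementation: carriers are ordered from most protected (lowest threshold) to most efficient, thresholds and margins are in dB, and the "consistently below" logout condition is collapsed into a single check.

```python
def ucp_decision(cn, thresholds, nominal, m1, m2, headroom_db):
    """cn: the remote's current C/N on its nominal carrier (dB)."""
    target = thresholds[nominal] + m1 + m2
    if cn < target:
        if cn + headroom_db >= target:
            return nominal, "increase power"
        if nominal > 0:
            return nominal - 1, "switch to more protected carrier"
        if cn < target - m2:
            # Below the hysteresis floor on the most protected carrier.
            return nominal, "log out"
        return nominal, "hold"
    if cn > target + m2:
        # Prefer the most efficient carrier whose target is sustainable.
        for i in range(len(thresholds) - 1, nominal, -1):
            if cn >= thresholds[i] + m1 + m2:
                return i, "switch to more efficient carrier"
        return nominal, "decrease power"
    return nominal, "hold"

print(ucp_decision(5.0, [3.0, 6.0, 9.0], 1, 1.0, 1.0, 1.0))
# -> (0, 'switch to more protected carrier')
```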
The power control algorithm reacts faster when the hub determines that a remote has
entered an upstream fade. When monitoring the remote's signal, the algorithm samples the
C/N periodically and determines a fade slope for the remote based on analysis of the C/N
samples. The algorithm adjusts the measurement interval depending on the fade slope,
shortening it when the fade varies rapidly. This tends to keep the margin required to account
for the reaction time small and constant.
If a remote fade is detected, then the update interval is decreased based on the severity of
the fade. Decreasing the update interval may require the system to allocate additional
upstream slots to the remote. For idle remotes, the configured Minimum Information Rate is
ignored and more slots are allocated if additional burst measurements are required by the
Uplink Control Process.
The Measurement Spacing configured in iBuilder on the Uplink Control tab of the inroute
group defines the steady-state update interval. This parameter is set to 2 seconds by default
but can be modified by the Network Operator. The smaller update intervals used during fades
are set by the software and are not configurable.
In pre-iDX 3.2 releases, the power adjustment algorithm applied corrections to the remote's
power using coarse and fine step sizes configured in iBuilder. These parameters are no longer
used and have been removed from iBuilder. Beginning with iDX Release 3.2, the power
adjustment step size is automatically determined by the power control algorithm.
NOTE: Because iDX Release 3.2 supports Inroute Groups with different sized
upstream carriers, C/N0 has replaced C/N as the measurement of signal quality
used to monitor and control remote transmissions on the upstream carriers. See
C/N0 and C/N on page 38 for details.
Figure 11-3 illustrates the application of uplink power control (UPC) to a fading remote in an
Adaptive system.
Figure 11-3. Uplink Power Control During an Upstream Fade (Carrier 1 and Carrier 2 shown
with thresholds C1, C2, C3, Fade Slope Margin M1, Hysteresis Margin M2, and power headroom
H; UPC keeps the C/N0 close to C3, and sufficient headroom is required to switch up)
In Figure 11-3:
C/N0 = C/N + 10log10(Rs) : The C/N adjusted for the carrier symbol rate (Rs).
C1 : The C/N threshold from the Link Budget Analysis Guide.
C2 = C1 + M1 : The C/N threshold plus the Fade Slope Margin.
C3 = C2 + M2 : The C/N threshold plus the Fade Slope Margin plus the Hysteresis Margin.
Δ : The C/N0 difference between Carrier 1 and Carrier 2.
H : Power Headroom; i.e., the amount that the remote can increase its transmit power
before reaching its configured TDMA maximum power.
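The C/N to C/N0 conversion above is simple enough to express directly; the helper name is ours, not an iDirect API.

```python
import math

def cn0_dbhz(cn_db, symbol_rate_sps):
    # C/N0 = C/N + 10*log10(Rs), giving a signal-quality figure that is
    # comparable across carriers of different symbol rates.
    return cn_db + 10 * math.log10(symbol_rate_sps)

# A 5.0 dB C/N on a 1 Msps carrier:
print(cn0_dbhz(5.0, 1e6))  # 65.0 dBHz
```

This is why C/N0 replaced C/N for inroute groups with mixed carrier sizes: the same C/N on a wider carrier represents a proportionally higher C/N0.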
Uplink power control continuously monitors the remote's C/N0 as measured at the hub and
adjusts the remote's transmit power as required to maintain the target C/N0 on the nominal
carrier. Referring to Figure 11-3, the system reacts to a remote fade as follows:
1. Prior to the fade, the remote is operating on Carrier 1. Uplink power control keeps the
C/N0 of the remote at approximately the target C/N0 represented by C3 of Carrier 1.
2. As the remote enters the fade, uplink power control increases the remote's transmit
power as necessary to maintain the target C/N0 on Carrier 1.
3. When the remote's power headroom is exhausted, the remote's C/N0 continues to drop
on Carrier 1 but the transmit power can no longer be increased to return the remote to
C3.
4. When the remote's C/N0 falls below C2 on Carrier 1, the remote is moved to a more
protected carrier (Carrier 2) where the target C/N0 of the remote can be maintained.
5. As the fade decreases, uplink power control decreases the remote's transmit power to
maintain the target C/N0 on Carrier 2.
6. Once the fade has decreased to the point that the remote has sufficient power headroom
to meet the target C/N0 (C3) of Carrier 1, the remote is moved back to Carrier 1.
7. When the fade passes, uplink power control continues to operate to keep the remote's
C/N0 at C3 on Carrier 1 as it did prior to the fade.
NOTE: A remote-side custom key is required to select the correct local frequency
oscillator correction algorithm for high-speed SCPC remotes. See the iBuilder User
Guide appendix COTM Settings and Custom Keys and Settings for details.
The following information for the selected remote is displayed on the iMonitor UCP Info tab:
The remote's Upstream C/N0 as measured at the hub
The remote's UCP Power Adjustment
The remote's UCP Symbol Timing Offset (TDMA only)
The remote's UCP Frequency Offset
The top image in Figure 11-4 shows the UCP Info tab for an SCPC remote while the bottom
image shows the UCP Info Tab for a TDMA remote. Notice that the Symbol Offset is not
applicable to the SCPC remote.
In older releases, UCP only modified the power of a remote if the remote's bursts were
outside of a normal range defined in iBuilder. However, beginning with iDX Release 3.2, the
remote's power is adjusted if the received SNR differs from the target threshold by more than
+/-0.25 dB. Each time a correction is sent, the remote adjusts its power by its minimum
step size (0.5 dB for most model types; 1 dB for Evolution X1 or e150 remotes). Because of
this, when viewed in iMonitor or on a spectrum analyzer, remote power may continuously vary
by +/-0.5 or +/-1 dB. This is not an operational issue and should be ignored.
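The correction behavior can be sketched as follows. The 0.25 dB dead band and the minimum step sizes come from the text above; the function is illustrative only, not the actual UCP code.

```python
def power_correction_db(measured_cn_db, target_cn_db, min_step_db=0.5):
    # No correction is sent while the error stays inside the dead band;
    # otherwise the remote moves one minimum step toward the target.
    error = target_cn_db - measured_cn_db
    if abs(error) <= 0.25:
        return 0.0
    return min_step_db if error > 0 else -min_step_db

print(power_correction_db(10.1, 10.0))       # 0.0 (within dead band)
print(power_correction_db(9.4, 10.0))        # 0.5 (step up)
print(power_correction_db(10.6, 10.0, 1.0))  # -1.0 (X1/e150 step down)
```

Because the step size (0.5 or 1 dB) is larger than the dead band (0.25 dB), the corrected power tends to oscillate around the target, which is the continuous variation visible in iMonitor.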
As discussed in Acquisition on page 97, the Uplink Control Process also controls network
acquisition for TDMA remotes. Beginning with iDX 3.0, the Network Operator can view
upstream acquisition statistics (as well as other upstream performance statistics) per remote
in iMonitor. The Network Operator can view these statistics in both graphical and tabular
form.
Upstream acquisition performance statistics displayed in iMonitor for remotes transmitting on
a TDMA inroute include:
The number of valid acquisition bursts received from the remote
The number of CRC errors received in acquisition slots assigned to the remote
The number of missing acquisition bursts in acquisition slots assigned to the remote
Figure 11-5 shows the upstream acquisition statistics for a remote in the process of acquiring
the network. Notice that there are a high number of missing acquisition bursts and acquisition
CRC errors as the UCP algorithm adjusts the remote's frequency offset. When the hub detects
a burst from the remote, the hub sends the remote the correct frequency offset correction,
and the missing bursts and CRC errors drop to zero.
For details on viewing UCP and Upstream Performance statistics in iMonitor, see the iMonitor
User Guide.
Beginning with iDX Release 3.1, the Idle and Dormant States feature allows a remote to
dynamically reduce its Minimum Information Rate based on the length of time that the
remote has no user traffic to transmit on the TDMA upstream carrier.
Overview
Once it has acquired the network, a TDMA remote is always allocated a minimum number of
upstream TDMA slots even when it has no upstream bandwidth demand. The minimum number
of slots allocated to a remote depends on the Minimum Information Rate defined on the Remote
QoS tab. If a Minimum Information Rate has been configured for the remote, then the number
of slots per frame is displayed on the screen. If no Minimum Information Rate is configured for
the remote, the remote is granted at least one slot per TDMA frame by default. The remote
requires this Minimum Information Rate for a number of reasons, including to request
additional bandwidth from the hub when needed for upstream packets, and to send periodic
bursts to the hub so that the Uplink Control Process (UCP) can keep the remote in the
network.
When the Idle and Dormant State feature is enabled, the Minimum Information Rate granted
to the remote is reduced over time if the remote continues to have no upstream user traffic
to transmit. In networks with large numbers of remotes with periods of no demand, enabling
this feature can save significant upstream bandwidth by minimizing the number of unused
upstream TDMA slots allocated to remotes.
However, the smaller the Minimum Information Rate allocated to a remote, the longer it takes
(on average) for that remote to request and to be granted additional bandwidth when the
remote receives upstream traffic for transmission. Therefore, when the Idle and Dormant
States feature is enabled, this additional ramp-up delay after periods of inactivity may be
noticeable for interactive applications, since it can take several seconds for the remote's
bandwidth allocation to be increased to meet the new demand. In addition, if the Minimum
Information Rate of the remote is set too low, the UCP process may not be able to keep the
remote in the network, and the remote may be forced to reacquire.
Feature Description
A remote with the Idle and Dormant States feature enabled can be in one of three states:
Active, Idle or Dormant. Figure 12-1 shows how the remote's state changes when this feature
is enabled.
Figure 12-2 shows the fields on the iBuilder Remote QoS tab used to configure this feature.
The configuration of the remote's Minimum Information Rate fields determines the system
behavior in the Active State. The configuration of the Idle and Dormant States fields
determines the system behavior in the other two states.
A remote that is in network and actively transmitting upstream user traffic is in the Active
State. If Minimum Information Rate is not enabled, a remote in the Active State is granted a
minimum of 1 slot per frame by default. If Minimum Information Rate is enabled, then the
minimum bandwidth granted to the remote is determined by the value in kbps entered on the
screen (Figure 12-2). Notice in Figure 12-2 that when a Minimum Information Rate is entered
into iBuilder, the equivalent number of slots per frame is automatically displayed.
NOTE: The Active State behavior is identical to the system behavior when the Idle
and Dormant States feature is not enabled.
When you select Enable Idle and Dormant States, then for each of those states you can
configure how frequently slots are allocated to the remote (in units of 1 slot per n frames)
and the Timeout that determines when the remote will change to that state from the
previous state.
If a remote in the Active State has no upstream user traffic to transmit for the time period
defined by the Idle State Timeout, then the remote changes from the Active State to the Idle
State. When the remote enters the Idle State, the remote's Minimum Information Rate
changes based on the Idle State configuration of 1 Slot / n Frames. Notice in Figure 12-2 that
when you configure the Idle State for 1 slot every n frames, the equivalent Idle Minimum
Information Rate is automatically displayed in kbps on the screen.
If the remote has been in the Idle State for the time period defined by the Dormant State
Timeout and the remote still has no upstream user traffic to transmit, then the remote
changes from the Idle State to the Dormant State. In the Dormant State, the remote's
Minimum Information Rate again changes based on the Dormant State configuration of 1 Slot /
n Frames. As with the Idle State, when you configure the Dormant State for 1 slot every n
frames, the equivalent Dormant Minimum Information Rate is automatically displayed in kbps
on the screen. A remote in the Dormant State remains in that state as long as it has no
upstream user traffic to transmit.
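The Active/Idle/Dormant transitions described above can be sketched as a small state machine. This is a hypothetical model, not iDirect software; the default timeout values used here are illustrative only.

```python
class RemoteState:
    """Tracks the Idle/Dormant state transitions of a single remote."""

    def __init__(self, idle_timeout_s=120, dormant_timeout_s=180):
        self.state = "Active"
        self.quiet_s = 0  # time with no upstream user traffic
        self.idle_timeout_s = idle_timeout_s
        self.dormant_timeout_s = dormant_timeout_s

    def tick(self, seconds, has_user_traffic):
        if has_user_traffic:
            # Any upstream user traffic returns the remote to Active
            # and resets the timeout.
            self.state, self.quiet_s = "Active", 0
            return self.state
        self.quiet_s += seconds
        if self.state == "Active" and self.quiet_s >= self.idle_timeout_s:
            self.state, self.quiet_s = "Idle", 0
        elif self.state == "Idle" and self.quiet_s >= self.dormant_timeout_s:
            self.state = "Dormant"
        return self.state

r = RemoteState()
print(r.tick(120, has_user_traffic=False))  # Idle
print(r.tick(180, has_user_traffic=False))  # Dormant
print(r.tick(1, has_user_traffic=True))     # Active
```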
NOTE: Minimum Information Rate must be greater than or equal to the Idle
Minimum Information Rate. Similarly, the Idle Minimum Information Rate must be
greater than or equal to the Dormant Minimum Information Rate. iBuilder
enforces these constraints when the configuration is entered on the Remote QoS
tab.
If at any time a remote in Idle State or Dormant State receives upstream user traffic for
transmission, the remote returns to the Active State and the Idle State Timeout is reset. By
default, only upstream user traffic triggers a state change from Idle or Dormant State to
Active State. Upstream management traffic generated by the remote to the NMS does not
trigger a state change. To select which upstream QoS Service Levels trigger state changes,
select or clear the Trigger State Change check box for the Service Level defined for that
traffic. An example is shown in Figure 12-3. (See the section titled Adding an Application
Profile in the chapter Configuring Quality of Service for iDirect Networks in the iBuilder
User Guide for details.)
Figure 12-3. Upstream Service Level with Trigger State Change Selected
NOTE: When using the Idle and Dormant States feature in conjunction with
Remote Sleep Mode, the Sleep timeout should be longer than the Idle and
Dormant State timeouts. Otherwise, the remote will sleep before the Dormant
State timeout expires and the Idle and Dormant States feature will be
ineffective. See Remote Sleep Mode on page 143.
Using the configuration in Figure 12-2 as an example, if the remote has been recently Active
but has no more user traffic to transmit, it will remain in the Active State for 120 seconds (the
Idle State timeout). During that time, the remote will be granted its configured Minimum
Information Rate of 27.264 kbps. If after 120 seconds the remote still has no user traffic to
transmit, it will enter the Idle State and the Minimum Information Rate granted to the remote
will change to 3.14 kbps (or 1 slot every 8 frames).
If after 180 seconds in the Idle State the remote still has no user traffic to transmit, it will
enter the Dormant State and its Minimum Information Rate will change to 0.85 kbps. The
remote will remain in the Dormant State until it receives upstream user traffic to transmit. If
at any time a remote in the Idle State or Dormant State receives upstream user traffic for
transmission, it will return to the Active State and the Minimum Information Rate will
return to 27.264 kbps.
To remain in the network, a remote should transmit at least 1 burst every 4 seconds. With a
typical frame length of 125 ms, this translates into a minimum allocation of 1 slot every 32
frames.
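The slot-interval arithmetic above can be checked directly. The 125 ms frame length is the typical value stated in the text; the helper name is ours.

```python
FRAME_LEN_S = 0.125  # typical TDMA frame length, 125 ms

def slot_interval_frames(burst_interval_s):
    # Convert a required burst interval into "1 slot every n frames".
    return int(burst_interval_s / FRAME_LEN_S)

print(slot_interval_frames(4.0))  # 32: 1 burst/4 s to stay in network
print(slot_interval_frames(2.0))  # 16: Active State minimum allocation
print(slot_interval_frames(8.0))  # 64: lowest Idle/Dormant allocation
```

The comparison shows why the one-slot-per-64-frames setting is marginal: it allocates bursts only half as often as the 4-second interval needed to reliably stay in the network.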
As in previous releases, the Minimum Information Rate for the Active State cannot be set
lower than one slot every 16 frames (the equivalent of one slot every 2 seconds). Typically,
this minimum allocation is sufficient for any remote to stay in the network. For the Idle State
and the Dormant State, however, the Minimum Information Rate can be set as low as one slot
every 64 frames (one slot every 8 seconds). Note that, based on the discussion in the previous
paragraph, it is unlikely that a remote with such a low Minimum Information Rate will remain
in the network except when all of the following conditions are met:
The remote is transmitting a BPSK or QPSK upstream carrier to an Evolution line card
under clear sky conditions.
The remote is a fixed-site (non-mobile) remote in a Ku Band, C Band or X Band network.
Therefore, configuring a rate below the recommended setting may cause the remote to drop
out of the network and be forced to reacquire if there is insufficient margin to handle rain
fade or if 8PSK upstream modulation is chosen. Because of these constraints, do not configure
a Minimum Information Rate for any state below the recommended settings unless the
network configuration and system design permit a lower rate.
Note also that in DVB-S2 networks with Adaptive Coding and Modulation (ACM) enabled, if a
remote enters a downstream fast fade condition, the remote attempts to report its current
SNR to the hub every second. This allows the hub to quickly adjust the outbound MODCOD of
the remote to compensate for the fade. If the remote is in the Idle State or Dormant State,
the remote may not have sufficient TDMA slots to allow it to increase its inbound transmission
rate to report the fade. As a result, the hub may not adjust the remote's MODCOD quickly
enough to avoid the loss of some downstream data at the remote.
Beginning with iDX Release 3.2, if the hub detects that a remote's upstream C/N is dropping
rapidly due to fade conditions, the Minimum Information Rate of the remote is automatically
increased so that the power control algorithm can react quickly to adjust the remote's
transmit power. If the remote requires additional slots, the currently configured Minimum
Information Rate of the remote is ignored and the slot allocation rate is determined by the
software. For more information on power control, see UCP Power Control and Fade Detection
on page 100.
13 Verifying Error Thresholds Using IP Packets
This chapter provides instructions on testing iDirect equipment functioning in IP networks to
ensure that Layer 1 errors are seen at the specified rate at the C/N thresholds. The C/N
thresholds are published in the Link Budget Analysis Guide.
Introduction
In the early days of VSAT systems, data was input and output from satellite modems using
serial data protocols that allowed users direct access to the bit stream going over the air. This
enabled third-party test equipment to be used to inject test data containing pseudo-random
data patterns into a transmit stream and to synchronize to those patterns in the receive
stream. The Bit Error Rate (BER) was calculated by counting the bit errors in the receive
stream.
When the satellite industry standardized on IP networking protocols, access to the bit stream
was lost since user data is wrapped in various other packet formats (such as Ethernet) at
various points in the network. Although this low level access is no longer available, the
positive result of using IP packets is that there are many off-the-shelf methods for measuring
IP Packet Error Rate (PER). This includes free tools such as Iperf
(http://en.wikipedia.org/wiki/Iperf) and udpblast which is shipped with iDirect hub
software. These tools run on standard PCs and do not require special test hardware.
Many more sophisticated test platforms are available for high-throughput IP testing and for
modeling a mix of packet sizes to mimic user application or internet traffic. However, when
verifying error thresholds, a simple stream of fixed size packets is usually sufficient.
When testing the upstream channel in a network by inserting IP test packets at the remote
and terminating them at the hub, the advertised CLR threshold must be adjusted to account
for the size of the IP packets being used in the test. For example, if it takes 10 TDMA cells to
transmit each IP packet, the IP PER will be 10 times worse than the CLR at the threshold
point; in other words, a PER of 1e-4 will be measured at the advertised threshold.
Use the following equations to find the expected upstream IP Packet Error Rate:
IP_PacketErrorRate = CellLossRate * (1 + IP_PacketSize) / (TDMA_CellSize)
Where,
TDMA_CellSize = TDMA_BlockSize - 12
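These equations can be expressed as runnable helpers; the function names are ours, and the cell loss rate (CLR) of 1e-5 used in the upstream examples is taken from the text.

```python
def tdma_cell_size(tdma_block_size_bytes):
    # TDMA_CellSize = TDMA_BlockSize - 12
    return tdma_block_size_bytes - 12

def upstream_ip_per(cell_loss_rate, ip_packet_bytes, cell_size_bytes):
    # IP_PacketErrorRate = CellLossRate * (1 + IP_PacketSize) / TDMA_CellSize
    return cell_loss_rate * (1 + ip_packet_bytes) / cell_size_bytes

# Example 1 below: 512-byte IP packets, 170-byte blocks, CLR of 1e-5
print(upstream_ip_per(1e-5, 512, tdma_cell_size(170)))  # ~3.25e-5
# Example 2 below: 512-byte IP packets, 438-byte blocks, CLR of 1e-5
print(upstream_ip_per(1e-5, 512, tdma_cell_size(438)))  # ~1.20e-5
```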
Upstream Example 1
Find the IP Packet Error Rate for:
512 byte IP test packets
8PSK modulation
2D 16-State/170 Byte-4/5 FEC
In this example:
TDMA_CellSize = 170 - 12 = 158 bytes
Therefore,
IP_PacketErrorRate = 1e-5 * (1 + 512) / 158 = 3.25e-5
Since this TDMA mode reaches its error threshold at a C/N of 11.7 dB, 3.25 IP Packets are
expected to be lost for every 100,000 transmitted at this C/N.
Upstream Example 2
Find the IP Packet Error Rate for:
512 byte IP test packets
QPSK modulation
2D 16-State/438 Byte-2/3 FEC
In this example:
TDMA_CellSize = 438 - 12 = 426 bytes
Therefore,
IP_PacketErrorRate = 1e-5 * (1 + 512) / 426 = 1.20e-5
Since this TDMA mode reaches its error threshold at a C/N of 5.0 dB, 1.20 IP Packets are
expected to be lost for every 100,000 transmitted at this C/N.
Downstream Example
Find the IP Packet Error Rate for:
1024 byte IP test packets
8PSK Rate 3/4 MODCOD
In this example:
FrameLossRate = (11600 + 112) * 1e-8 = 1.17e-4
DVB-S2_FrameSize = (11600 / 8) - 7 = 1443 bytes
Therefore,
IP_PacketErrorRate = 1.17e-4 * (1 + 1024) / (1443 - 2) = 8.32e-5
Since this DVB-S2 MODCOD reaches its error threshold at a C/N of 8.5 dB, 8.32 IP Packets are
expected to be lost for every 100,000 transmitted at this C/N.
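The downstream example can likewise be reproduced in code. The constants (the 112-bit overhead term, the 7-byte and 2-byte adjustments) are taken directly from the worked example above; the variable and function names are ours.

```python
# Reproduce the downstream worked example for 1024-byte IP packets on
# the 8PSK Rate 3/4 MODCOD (11600-bit frame, per-frame loss rate 1e-8/bit).
frame_loss_rate = (11600 + 112) * 1e-8   # 1.1712e-4
frame_size_bytes = 11600 // 8 - 7        # 1443 bytes

def downstream_ip_per(flr, ip_packet_bytes, frame_bytes):
    return flr * (1 + ip_packet_bytes) / (frame_bytes - 2)

print(downstream_ip_per(frame_loss_rate, 1024, frame_size_bytes))  # ~8.3e-5
```

Using the unrounded frame loss rate gives approximately 8.33e-5, consistent with the 8.32e-5 figure obtained above from the rounded value 1.17e-4.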
This chapter describes how the Global NMS works in a global architecture and presents a
sample Global NMS architecture.
In this example, there are four different networks connected to three different Regional
Network Control Centers (RNCCs). A group of remote terminals has been configured to roam
among the four networks.
NOTE: This diagram shows only one example from the set of possible network
configurations. In practice, there may be any number of RNCCs and any number of
protocol processors at each RNCC.
On the left side of the diagram, a single NMS installed at the Global Network Control Center
(GNCC) manages all the RNCC components and the group of roaming remotes. Network
Operators, both remote and local, can share the NMS server simultaneously with any number
of VNOs. (Only one VNO is shown in Figure 14-2.) All users can run iBuilder, iMonitor, or
both on their PCs.
The connection between the GNCC and each RNCC must be a dedicated high-speed link.
Connections between NOC stations and the NMS server are typically standard Ethernet.
Remote NMS connections are made either over the public Internet, protected by a VPN or
port forwarding, or over a dedicated leased line.
This chapter recommends basic security practices to enhance the security of iDirect
networks. iDirect also recommends implementation of all additional security measures
required by your company security standards.
Thereafter, these passwords should be changed periodically. When selecting new passwords,
iDirect recommends following common guidelines for constructing strong passwords.
Client Access
Access to iBuilder and iMonitor sessions should be strictly controlled. Network Operators
should always log out of any NMS clients when leaving workstations to prevent unauthorized
access.
Remote Access
All remote access by NMS client applications to iDirect networks should be established over
secure private networks.
NOTE: Option 1 is the preferred option because it prevents SAT0 DNS queries
from entering the iDirect system.
2. Configure a Downstream Filter in iBuilder to deny all UDP traffic with destination port
53. See the iBuilder User Guide for details on creating and assigning QoS filters.
16 Global Protocol
Processor Architecture
This chapter describes how the Protocol Processor works in a global architecture. Specifically,
it contains Remote Distribution, which describes how the Protocol Processor balances
remote traffic loading, and De-coupling of NMS and Data Path Components, which describes
how the Protocol Processor Blades continue to function in the event of a Protocol Processor
Controller failure.
Remote Distribution
The actual distribution of remotes and processes across a blade set is determined by the
Protocol Processor Controller dynamically in the following situations:
At system startup, the Protocol Processor Controller determines the distribution of
processes based on the number of remotes in the network(s).
When a new remote is added in iBuilder, the Protocol Processor Controller analyzes the
current system load and adds the new remote to the blade with the least load.
When a blade fails, the Protocol Processor Controller re-distributes the load across the
remaining blades, ensuring that each remaining blade takes a portion of the load.
The Protocol Processor Controller does not perform dynamic load-balancing on remotes. Once
a remote is assigned to a particular blade, it remains there unless it is moved due to one of
the situations described above.
One possible configuration of processes across two blades is shown in Figure 16-1.
This chapter presents an overview of iDirect's Distributed NMS Server feature. This feature
distributes the NMS server processes across multiple server machines. Distributing the NMS
processes can result in improved server performance and better use of disk space. The steps
for implementing a Distributed NMS Server are contained in the iBuilder User Guide.
18 Transmission Security
(TRANSEC)
This chapter describes how TRANSEC is implemented in an iDirect Network. It includes the
following sections:
What is TRANSEC?
iDirect TRANSEC
TRANSEC Key Types
DVB-S2 Downstream TRANSEC
Upstream TRANSEC
ACQ Burst Obfuscation
TRANSEC Dynamic Key Management
TRANSEC Remote Admission Protocol
ACC Key Management
Automatic Beam Selection (ABS) and TRANSEC
What is TRANSEC?
Transmission security prevents an adversary from exploiting information available in a
communications channel without necessarily having defeated the encryption inherent in the
channel. For example, even if an adversary cannot defeat the encryption placed on individual
packets, the adversary may be able to determine answers to questions such as:
What types of applications are currently active on the network?
Who is talking to whom?
Is the network or a particular remote site active now?
Based on traffic analysis, what is the correlation between network activity and real
world activity?
Is a particular remote site moving?
Is there significant acquisition activity?
iDirect supplies FIPS 140-2 certified TRANSEC-capable remote modems. iDirect's TRANSEC
feature, as outlined in this chapter, makes answers to questions like those listed above
theoretically impossible for an adversary to determine.
iDirect TRANSEC
iDirect achieves full TRANSEC compliance by presenting to an adversary eavesdropping on the
RF link a constant wall of fixed-size, strongly encrypted (AES, 256-bit key, CBC mode) traffic
segments, the frequency of which does not vary with network activity. All network messages,
including those that control the admission of a remote terminal into the TRANSEC network,
are encrypted and their original size is hidden. The content and size of all user traffic (Layer
3 and above), as well as all network link layer traffic (Layer 2), is completely indeterminate
from an adversary's perspective. In addition, no higher-layer information can be ascertained
by monitoring the physical layer (Layer 1) signal.
iDirect TRANSEC includes a remote-to-hub and a hub-to-remote authentication protocol based
on standard X.509 certificates designed to prevent man-in-the-middle attacks. This
authentication mechanism prevents an adversary's remote from joining an iDirect TRANSEC
network. In a similar manner, it prevents an adversary from coercing a TRANSEC remote into
joining the adversary's network. While these types of attacks would be extremely difficult
even in a non-TRANSEC iDirect network, the mechanisms in place within a TRANSEC network
render them theoretically impossible.
All encryption in a TRANSEC network is done using the AES algorithm with a 256 bit key and
operates in CBC Mode.
The iDirect TRANSEC system is designed so that compromise of the ACC Key only reveals
which remotes are acquiring the network. Compromise of this key does not allow an
attacker to join the network or to perform traffic analysis on remotes that have already
acquired the TRANSEC network.
NOTE: All downstream multicast traffic is also encrypted with the DCC Key.
Except as noted below, all downstream information is encrypted by either the ACC Key or the
DCC Key. Some of the ACC-encrypted traffic, such as the DVB-S2 NCR timestamps, is
generated by the line card firmware. Other ACC-encrypted traffic, such as authentication
traffic and acquisition invitations, is generated by software running on the hub servers. ACC
traffic generated by server software is specifically tagged as ACC traffic, so that the line card
firmware can store it in the appropriate queue.
The DVB-S2 downstream consists of a series of BBFRAMEs. Each BBFRAME is defined at the
physical layer. For TRANSEC operation, the outbound operates at a fixed MODCOD, meaning
that the modulation and FEC encoding for each DVB-S2 frame is the same.
A DVB-S2 frame can contain both ACC and DCC data. The proportion of each type of data is
hidden from an attacker using the AES encryption. The definition of the DVB-S2 frame
structure and the type of key applied to the various fields within each frame are shown in
Figure 18-1. (The exact positions and lengths of the fields may be rearranged for
implementation purposes; however, that does not change which fields are encrypted by a
given key.)
The first 21 bytes of the DVB-S2 frame are sent in the clear. These initial 21 bytes consist of:
A four-byte fixed header, which never changes
A 16-byte Initialization Vector (IV) used by the encryption / decryption algorithm
A single byte containing the key ring position for the ACC Key. This allows for key rolls of
the ACC Key as described in ACC Key Management on page 139.
The next 16 bytes are always encrypted with the ACC Key. They contain the following
information:
The length of the ACC-encrypted data
The length of the DCC-encrypted data
The key ring position of the DCC Key, required to allow key rolls.
Because these 16 bytes are encrypted, the proportion of the ACC to DCC data is hidden from
an attacker. Compromise of the ACC Key can only jeopardize the acquisition information (i.e.,
which remotes are acquiring). Once acquired, all traffic patterns are protected by the DCC Key. Because
the DCC Key is protected by the RSA public/private keys, possession of the ACC Key does not
allow an attacker to determine the DCC Key.
The total length of encrypted data is always the same. If there is unused space in a BBFRAME,
it is filled with random data and then encrypted. In some cases, the entire BBFRAME may
consist of encrypted random data.
The steps in the decryption process are:
1. The packet is recovered at the physical layer, including CRC checking.
2. The four-byte fixed header is checked. If it is not correct, the packet is discarded.
3. The 16 byte IV is loaded into the AES core.
4. The key ring position is used to select the correct ACC Key.
5. The first 16 bytes are decrypted and ACC and DCC lengths are extracted.
6. The AES core decrypts the ACC data (using CBC); the key is switched to the DCC Key; and
the DCC data is decrypted. (The DCC Key ring position is used to select the appropriate
key). The last 16 bytes of the ACC data serve as the IV of the DCC encrypted data.
7. The decrypted payload is processed in the normal fashion to extract the packets.
At startup, the initial IV is the result of the Known Answer Test. For each subsequent
BBFRAME, the last 16 bytes of encrypted data are used as the IV for the following BBFRAME.
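The decryption steps above can be sketched as follows. Python's standard library has no AES, so a trivial XOR block cipher stands in for AES-256-CBC purely to show the frame layout and IV chaining; the fixed header value and the field widths inside the encrypted 16-byte header are illustrative assumptions, not iDirect's actual format.

```python
FIXED_HEADER = b"\x47\x4f\x4f\x44"   # assumed placeholder, not the real value

def cbc_encrypt(key, iv, data):      # XOR stand-in for AES-CBC (16-byte blocks)
    out, prev = bytearray(), iv
    for i in range(0, len(data), 16):
        blk = bytes(p ^ k ^ v for p, k, v in zip(data[i:i+16], key, prev))
        out += blk
        prev = blk
    return bytes(out)

def cbc_decrypt(key, iv, data):
    out, prev = bytearray(), iv
    for i in range(0, len(data), 16):
        blk = data[i:i+16]
        out += bytes(c ^ k ^ v for c, k, v in zip(blk, key, prev))
        prev = blk
    return bytes(out)

def process_bbframe(frame, acc_keyring, dcc_keyring):
    if frame[:4] != FIXED_HEADER:            # steps 1-2: bad header, discard
        return None
    iv = frame[4:20]                         # step 3: clear-text IV
    acc_key = acc_keyring[frame[20]]         # step 4: ACC key ring position
    hdr_ct = frame[21:37]                    # 16 ACC-encrypted header bytes
    hdr = cbc_decrypt(acc_key, iv, hdr_ct)   # step 5: lengths + DCC position
    acc_len = int.from_bytes(hdr[0:2], "big")      # field widths assumed
    dcc_len = int.from_bytes(hdr[2:4], "big")
    dcc_key = dcc_keyring[hdr[4]]
    acc_ct = frame[37:37 + acc_len]          # step 6: ACC data, CBC-chained
    acc = cbc_decrypt(acc_key, hdr_ct, acc_ct)
    dcc_iv = acc_ct[-16:] if acc_ct else hdr_ct    # last 16 ACC bytes = DCC IV
    dcc = cbc_decrypt(dcc_key, dcc_iv,
                      frame[37 + acc_len:37 + acc_len + dcc_len])
    return acc, dcc                          # step 7: extract packets as usual

# Round-trip demo with 16-byte stand-in keys
acc_ring, dcc_ring = {0: bytes(range(16))}, {3: bytes(range(16, 32))}
acc_pt, dcc_pt, iv = b"A" * 32, b"D" * 32, b"\x00" * 16
hdr = (len(acc_pt).to_bytes(2, "big") + len(dcc_pt).to_bytes(2, "big")
       + bytes([3]) + b"\x00" * 11)
hdr_ct = cbc_encrypt(acc_ring[0], iv, hdr)
acc_ct = cbc_encrypt(acc_ring[0], hdr_ct, acc_pt)
dcc_ct = cbc_encrypt(dcc_ring[3], acc_ct[-16:], dcc_pt)
frame = FIXED_HEADER + iv + bytes([0]) + hdr_ct + acc_ct + dcc_ct
assert process_bbframe(frame, acc_ring, dcc_ring) == (acc_pt, dcc_pt)
```

A real implementation would substitute a vetted AES-256-CBC primitive for `cbc_encrypt`/`cbc_decrypt`; the frame parsing and IV chaining logic would be unchanged.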
Upstream TRANSEC
All data for transmission on the upstream TDMA carrier starts as data packets. These packets
are segmented by software into fragments that fit into the fixed size payloads of the TDMA
bursts. This segmented data is segregated into two queues: one for ACC data and one for DCC
data. A given TDMA burst only contains ACC or DCC data.
All packets regardless of class are encrypted. The key used to encrypt the data varies
depending on the packet's queue. Packets extracted from the DCC queue are encrypted using
the DCC Key. Packets extracted from the ACC queue are encrypted using the ACC Key.
determine whether or not the remote is acquiring the network. To prevent that, the following
scheme (illustrated in Figure 18-2) is used to hide which key is used for any given burst.
Each Code Field shown in Figure 18-2 is an eight-bit (one byte) field with the following
structure:
3. After this first encryption, the output is used as IV #2. This is used with the ACC Key and
Code #2 to encrypt both Code #1 and the first 15 bytes of IV #1. The result is shown in
step (3).
4. The output of this encryption becomes IV #3, which is used with the key specified in
Code #1 to encrypt the remainder of the data payload. This is illustrated in step (4).
Code #2 always indicates the ACC Key (although the key ring position can change), while
Code #1 indicates either the ACC or the DCC Key. However, since Code #1 is encrypted, and
therefore not visible to an attacker, an attacker has no way of knowing whether a burst is ACC
or DCC.
NOTE: Even if the ACC Key were compromised, only acquisition information would
be exposed. After acquisition, link layer information is still protected by the DCC
Key.
Packets exit the encryption process as shown in step (4). At this point the packet is ready for
transmission.
As illustrated in Figure 18-4, the following inputs are provided to the AES Core:
The first 128 bits are provided to the IV Input.
The second 128 bits are provided to the Data Input.
The Key used for burst N+1 is provided to the Key Input.
The 128 bit Output is used as the IV for the next burst.
While no logic is included to ensure that IVs do not repeat, the probability of repeating the
same IV within a two-hour period is extremely small; it is roughly estimated to be 1 in 2^97
for a maximum data rate iDirect TDMA channel.
To address this weakness, the iDirect system uses ACQ Burst Obfuscation. ACQ Burst
Obfuscation works as follows:
1. The hub issues dummy invitations to remotes already in the network. Since remotes
always burst in response to dummy invitations, it always appears that there is acquisition
activity.
2. The hub deliberately does not issue invitations for some inroute slots, so the ACQ
channel never appears full.
3. The hub issues normal invitations, in response to which some remotes will burst and
others will not.
An attacker observing the upstream always sees some acquisition activity, but never full
acquisition activity. This is illustrated in Figure 18-5.
A remote responding to dummy invitations sends filler data encrypted with the Network
Acquisition Key. The remote deliberately offsets the time, frequency, and transmit power of
the burst to mimic a real acquisition attempt.
The proportion of dummy ACQ bursts, deliberately-empty slots, and actual ACQ slots is
controlled by the hub side equipment. A random function provides significant short and long
term variation in activity, while also providing a minimum number of used and unused slots.
After a network restart, all the remotes are out of the network. ACQ burst obfuscation does
not operate until at least one remote per inroute has acquired the network. During this
startup period, all of the ACQ slots contain real ACQ invitations.
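The hub-side slot mix described above can be sketched as follows; the 50/30/20 proportions and the minimum-slot guarantees are illustrative assumptions, since the actual mix and random function are not published:

```python
import random

def acq_slot_plan(n_slots, remotes_in_network, rng=None):
    """Assign each ACQ slot a role: a real invitation, a dummy invitation
    (an in-network remote bursts filler data), or deliberately empty.
    The 50/30/20 weighting below is an assumption, not iDirect's ratio."""
    rng = rng or random.Random()
    if not remotes_in_network:            # startup: obfuscation not yet active
        return ["real"] * n_slots
    roles = ["real", "dummy", "empty"]
    plan = [rng.choices(roles, weights=[5, 3, 2])[0] for _ in range(n_slots)]
    # guarantee a minimum of used and unused slots each frame
    if "empty" not in plan:
        plan[-1] = "empty"
    if all(s == "empty" for s in plan):
        plan[0] = "dummy"
    return plan

print(acq_slot_plan(8, ["remote_a"], rng=random.Random(1)))
```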
NOTE: Beginning with iDX Release 3.2.3, the dead time required to
accommodate dummy ACQ bursts in TRANSEC networks was increased from 600 μs
to 1500 μs. This results in a ~0.72% additional decrease in upstream bandwidth
due to ACQ burst obfuscation.
Figure 18-6 assumes that upon the receipt of a certificate from a peer, the host is able to
validate and establish a chain of trust based on the contents of the certificate. iDirect uses
standard X.509 certificates and methodologies to verify the peer's certificate.
After the completion of the sequence described in Figure 18-6, a peer may provide an
unsolicited key update message as required. The data structure used to complete the key
update (also called a Key Roll) is described in Figure 18-7.
The Host Keying Protocol describes how a host receives an X.509 certificate from a Certificate
Authority (CA). iDirect provides a Certificate Authority (called the CA Foundry) with its
TRANSEC hub.
NOTE: In all cases, Host Key Generation is done on the X.509 host. In the iDirect
system, hosts are NMS servers, protocol processor blades, line cards and remote
modems.
After the host has completed the exchange shown in Figure 18-8, the hub transmits the
Network Acquisition Key to the host. The initial generation of the Network Acquisition Key is
described in ACC Key Management on page 139. The Network Acquisition Key is encrypted
with the host public key before it is transmitted to the host.
NOTE: ACC Key Roll time is configurable. For details see the appendix Managing
TRANSEC Keys in the iBuilder User Guide.
1. When a modem first enters the network, it receives the Current ACC Key and the
Next ACC Key. The Next ACC Key is the one that will be used after the next Key Roll.
2. Remotes switch to the Next ACC Key based on the downstream key pointer.
3. Once a Key Roll occurs, the remote uses the Next ACC Key.
4. After the Key Roll, the Next ACC Key becomes the Current ACC Key, and a new Next ACC
Key is generated.
5. The new pair of ACC Keys is pushed to all the remotes.
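The five-step roll sequence can be modeled as a small key-ring class; the class, method names, and the use of `secrets` for key generation are ours:

```python
import secrets

class AccKeyRing:
    """Models the Current/Next ACC key pair and the Key Roll sequence."""
    def __init__(self):
        self.current = secrets.token_bytes(32)    # 256-bit AES key material
        self.next_key = secrets.token_bytes(32)   # used after the next roll

    def keys_for_new_remote(self):
        """Step 1: a modem entering the network receives both keys."""
        return self.current, self.next_key

    def roll(self):
        """Steps 3-4: Next becomes Current and a new Next is generated.
        Step 5: the returned pair is pushed to all remotes."""
        self.current, self.next_key = self.next_key, secrets.token_bytes(32)
        return self.keys_for_new_remote()

ring = AccKeyRing()
_, upcoming = ring.keys_for_new_remote()
ring.roll()
assert ring.current == upcoming   # after the roll, Next has become Current
```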
Because the Network Acquisition Keys must be distributed by a single Master GKD, secure and
reliable terrestrial connectivity must be established between hubs in an ABS system. In
addition, the terrestrial network routing and security must be configured to allow the hub
equipment from one location to communicate with the hub equipment at the other locations.
iDirect can provide the identifying information for this traffic to allow for proper terrestrial
network configuration.
To configure the system to propagate the same ACC Keys to the remotes from multiple hub
servers, a network of GKDs is required to forward the ACC Keys from the Master GKD to each
hub server that distributes the ACC keys. A GKD can reside on an existing Protocol Processor
blade or NMS server, or it can run on a dedicated GKD Server machine.
For details on setting up GKD Servers, see the appendix Global Key Distribution in the
iBuilder User Guide.
The Remote Sleep Mode feature reduces remote power consumption during periods of
network inactivity. This chapter explains how Remote Sleep Mode is implemented. It includes
the following sections:
Feature Description on page 143
Awakening Methods on page 144
Enabling Remote Sleep Mode on page 144
Power Consumption on page 145
NOTE: Sleep Mode is intended for use by non-roaming remotes with occasional
transmissions. It is not compatible with the Roaming Remote or Alternate
Downstream Carrier features.
Feature Description
The Remote Sleep Mode feature conserves power by automatically powering down the BUC
when the remote has no data to transmit. When Sleep Mode is enabled on the iBuilder GUI for a
remote, the remote enters Remote Sleep Mode after a configurable period elapses with no
data to transmit. By default, the remote exits Remote Sleep Mode whenever packets arrive on
the local LAN for transmission on the inbound carrier.
NOTE: The remote console commands powermgmt mode set sleep and
powermgmt mode set wakeup enable and disable remote sleep mode.
The stimulus for a remote to exit sleep mode is also configurable in iBuilder. The Network
Operator can select which types of traffic automatically trigger wakeup on the remote by
selecting or clearing a check box for any of the QoS service levels used by the remote. If
no service levels are configured to trigger wakeup on the remote, the operator can manually
force the remote to exit sleep mode by disabling sleep mode on the remote configuration
screen.
Until a remote enters sleep mode, the protocol processor continues to allocate traffic slots
(including minimum CIR) to the remote. Before it enters sleep mode, the remote notifies the
NMS and the real time state of the remote is updated in iMonitor. Once the remote enters
sleep mode, as far as the protocol processor is concerned, the remote is out of the network.
Therefore, no traffic slots are allocated to the remote while it is in sleep mode. When the
remote receives traffic that triggers wakeup, the remote returns to the network and traffic
slots are allocated as normal by the protocol processor.
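The behavior described above can be modeled with a toy state machine; the timer granularity, class, and method names are ours, not iDirect's:

```python
class RemoteSleepModel:
    """Toy model of Remote Sleep Mode entry and exit."""
    def __init__(self, idle_timeout_s):
        self.idle_timeout_s = idle_timeout_s   # configurable idle period
        self.idle_s = 0
        self.asleep = False

    def tick(self, seconds, has_tx_data, triggers_wakeup=False):
        if self.asleep:
            if triggers_wakeup:       # wakeup traffic: rejoin the network
                self.asleep = False
                self.idle_s = 0
            return
        if has_tx_data:
            self.idle_s = 0
        else:
            self.idle_s += seconds
            if self.idle_s >= self.idle_timeout_s:
                self.asleep = True    # notify NMS, then power down the BUC

r = RemoteSleepModel(idle_timeout_s=60)
r.tick(60, has_tx_data=False)
print(r.asleep)   # True: idle timer expired, remote is out of the network
r.tick(1, has_tx_data=False, triggers_wakeup=True)
print(r.asleep)   # False: wakeup traffic returned it to the network
```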
Awakening Methods
There are two methods by which a remote is awakened from Sleep Mode. They are
Operator-Commanded Awakening, and Activity-Related Awakening.
Operator-Commanded Awakening
With Operator-Commanded Awakening, a Network Operator can manually force a remote into
Remote Sleep Mode and subsequently awake it from the NMS. This can be done remotely
from the Hub since the remote continues to receive the downstream while in sleep mode.
service level with Trigger Wakeup selected. This is now the default behavior for remotes in
Sleep Mode, so the SAT0 custom key is no longer necessary.
NOTE: When Sleep Mode is enabled, a remote with RIP enabled will always
advertise the satellite route as available on the local LAN, even if the satellite link
is down. Therefore, the Sleep Mode feature is not compatible with configurations
that rely on the ability of the local router to detect loss of the satellite link.
To enable Remote Sleep Mode, see the chapter on configuring remotes in the iBuilder User
Guide. To configure service level based wake up, see the QoS Chapter in the iBuilder User
Guide.
Power Consumption
Examples of the power consumed by typical remote terminals during both normal operation
and sleep mode are shown in Table 19-1.
Table 19-1. Power Consumption: Normal Operations vs. Remote Sleep Mode
20 Remote Acquisition
This chapter describes how remotes join iDirect TDMA networks. The network acquisition
process requires the remote to transmit one or more acquisition bursts at various frequencies
on a dynamically assigned inroute until the hub line card successfully demodulates a burst. At
that point, the Uplink Control Process takes over to keep the remote in the network. The
protocol processor at the hub controls the acquisition process.
Acquisition Process
The protocol processor broadcasts timeplan messages on the downstream carrier that allocate
upstream traffic slots to the remotes in the network and acquisition slots to remotes that are
not in the network. When a remote is ready to join an iDirect network, it first locks to the
downstream carrier and waits for a timeplan message from the hub that invites the remote to
send an acquisition burst to the hub. The timeplan message identifies the upstream carrier,
acquisition slot, and frequency offset that the remote should use when transmitting the
acquisition burst. The remote may be assigned acquisition slots on any upstream carrier in the
inroute group.
Based on the timeplan message and configuration parameters, the remote calculates the
correct Frame Start Delay (FSD) and frequency for the acquisition burst. The remote
calculates the transmit power of the acquisition burst from the TDMA initial power and
reference carrier parameters configured in iBuilder.
The initial power must be set such that the hub receives acquisition bursts from the remote at
an acceptable C/N. Beginning in iDX Release 3.2, the characteristics (MODCOD, symbol rate,
etc.) of the upstream carriers in an inroute group may differ from carrier to carrier. If the
remote were to transmit at the same initial power on all inroutes, the C/N of the acquisition
bursts received at the hub would vary depending on the characteristics of the inroute on
which the remote was acquiring. This variation could cause interference on the upstream
carrier or missed acquisition bursts from the remote.
To prevent this variation in C/N, reference carrier parameters are configured in iBuilder along
with the TDMA initial power. If the remote is assigned an acquisition slot on a carrier that
differs from the configured reference carrier, the remote adjusts its initial transmit power to
compensate for the differences between the reference carrier and the assigned carrier. This
allows the C/N of the acquisition bursts to remain within the acceptable range regardless of
the inroute on which the remote acquires. For a description of the reference carrier
parameters, see Reference Carrier Parameters on page 46. The Installation and
Commissioning Guide for iDirect Satellite Routers contains instructions for setting the TDMA
initial power and reference carrier parameters for a remote.
If the hub fails to detect the acquisition burst from the remote in the assigned acquisition
slot, it allocates another upstream acquisition slot to the remote. The hub changes the
remote's frequency offset for the new burst if the acquisition step size for the carrier is
smaller than the total sweep range. The sweep range is mainly determined by the stability of
the hub downconverter.
This process continues until the hub detects an acquisition burst from the remote. Once the
hub detects an acquisition burst, the hub sends the frequency offset correction to the remote
and the Uplink Control Process takes over to keep the remote in the network at the correct
power, frequency and symbol timing. (For more information, see Uplink Control Process on
page 97.)
The performance of the acquisition process is determined by the speed with which remotes
join the network and the number of acquisition bursts the remote must transmit before a
burst is successfully demodulated. If a remote can acquire the network more quickly by trying
fewer frequency offsets, the number of opportunities that other remotes have to acquire is
increased and the number of remotes that are out of the network at any one time is reduced.
Therefore, optimization of the acquisition process involves reducing the number of acquisition
bursts that remotes must transmit to acquire the network.
iDX Release 3.3 supports two types of remote acquisition: Traditional Acquisition and
Superburst Acquisition. The type of acquisition is configured per upstream carrier in iBuilder.
Superburst Acquisition greatly improves the time and bandwidth required for remotes to join
the network and should be used whenever possible. However, in this release Superburst
Acquisition can be used only on upstream carriers being received by multichannel line cards
or by receive-only eM1D1 line cards.
When receiving a traditional acquisition burst, the TDMA demodulator at the hub has a narrow
tolerance for frequency offset (approximately 1.5% of the upstream carrier symbol rate for all
modulation types). Because of this, the hub may fail to demodulate an acquisition burst at the
assigned frequency offset. Therefore, the hub varies the frequency offset in the timeplan
messages causing the remote to burst at different frequencies within a defined frequency
range (or sweep range) until the demodulator at the hub detects the upstream burst.
When receiving a Superburst, the hub demodulator's tolerance for frequency offset improves
to approximately 7.5% of the symbol rate. A Superburst is also a much more robust waveform
that is independent of the carrier MODCOD. These advantages allow the hub to detect a
Superburst over a much wider frequency range and at a much lower C/N when compared to a
traditional acquisition burst. Therefore, in most cases, a remote must only transmit a single
Superburst to acquire the network. When using Superburst, frequency sweeping is typically
not required since the sweep step size is generally larger than the instability of the hub
downconverter. In the rare cases when sweeping is required, the remote sweeps the
frequency range using the same fast acquisition method as a remote acquiring with
traditional acquisition bursts.
The frequency sweeping algorithm is described in the next section. For more on Superburst,
see Superburst Acquisition on page 150.
In iDX Release 3.3, Traditional Acquisition is still required in the following cases:
Acquisition over Spread Spectrum upstream carriers
Acquisition in TRANSEC networks
Acquisition on upstream carriers received by single channel line cards except receive-
only eM1D1 line cards configured for Single Channel TDMA (Adaptive) Receive Mode.
Acquisition on upstream carriers received by multichannel line cards in Single Channel
TDMA Receive Mode
Acquisition on upstream carriers received by Evolution XLC-M line cards with more than
eight assigned narrow-band carriers
Acquisition Algorithm
The acquisition algorithm determines how the protocol processor selects the various
frequency offsets at which the remote transmits acquisition bursts. In iDS Release 7.1,
changes were made to the acquisition algorithm that greatly improved the network
acquisition process used in prior releases. By using a common hub receive frequency offset,
the improved acquisition algorithm determines an anticipated frequency range smaller than
the complete frequency sweep range configured for each remote. As the common receive
frequency offset is updated and refined, the sweep window is reduced. If an acquisition
attempt fails within the reduced sweep window, the sweep window is widened to include the
entire sweep range.
When a remote first attempts to acquire the network, the hub assigns frequency offsets using
the smaller frequency range based on the common frequency offset. For a given ratio x:y, the
remote sweeps the smaller frequency range x times. After x sweeps over the smaller
frequency range, the remote sweeps the entire range y times before it sweeps the narrower
range again. The default ratio is 100:1. That is, the remote tries 100 frequency offsets within
the reduced (common) range before resorting to one full sweep of the remote's entire
frequency range.
The sweep algorithm is the same whether the remote is transmitting traditional acquisition
bursts or Superbursts. However, when using Superburst the remote is highly likely to acquire
the network on the first acquisition burst. The frequency step size is automatically
determined based on the acquisition types and symbol rates of the carriers in the inroute
group. The step size used for all remotes is the smallest of all step sizes calculated
independently for each carrier.
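The x:y scheduling and the step-size rule described above can be sketched as follows (function names are ours):

```python
from itertools import cycle, islice

def sweep_schedule(x=100, y=1):
    """Yield 'common' or 'full' for each successive sweep, per the x:y ratio
    of reduced-range sweeps to full-range sweeps (default 100:1)."""
    return cycle(["common"] * x + ["full"] * y)

def sweep_step_size(per_carrier_step_sizes_hz):
    """All remotes use the smallest of the step sizes calculated
    independently for each carrier in the inroute group."""
    return min(per_carrier_step_sizes_hz)

# with the default 100:1 ratio, sweep 101 is the single full-range sweep
first_cycle = list(islice(sweep_schedule(), 101))
print(first_cycle.count("common"), first_cycle.count("full"))   # 100 1
print(sweep_step_size([25_000, 12_500, 50_000]))                # 12500
```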
A Network Operator can configure the following custom keys to override the default ratio of
fast sweeps to full sweeps. These custom keys must be applied to the hub side for each
remote in the network.
[REMOTE_DEFINITION]
sweep_freq_fast = <x>
sweep_freq_entire_range = <y>
sweep_method = <0 or 1>
where: <x> is the number of times to sweep the common frequency range (Default: 100)
<y> is the number of times to sweep the full frequency range (Default: 1)
sweep_method = 1 uses the fast sweep algorithm described above
sweep_method = 0 uses pre-iDS 7.1 sweeping over the full frequency range
The NMS does not use the fast sweep algorithm for any remote that is enabled for an iDirect
Music Box; for any remote that is not configured to use the 10 MHz reference clock; or for any
remote with the sweep_method custom key set to 0. In those cases, the remote sweeps the
entire acquisition frequency range each time. In IF networks, such as those used in test
environments, the 10 MHz reference clock is not used.
Superburst Acquisition
Beginning with iDX Release 3.2, remotes can use Superburst Acquisition to acquire the
network on non-spread upstream carriers received by multichannel line cards or by receive-
only eM1D1 line cards. To use Superburst Acquisition, the Receive Mode of a multichannel line
card must be set to Multiple Channel TDMA Mode in iBuilder. The Receive Mode of a receive-
only eM1D1 line card must be set to Single Channel TDMA (Adaptive). Superburst is not
available for carriers received by any other Line Card Types in any other Receive Modes.
A maximum of eight TDMA upstream carriers with Superburst enabled can be assigned to one
multichannel line card. In addition, all carriers received by a multichannel line card must be
configured for the same type of acquisition bursts. Superbursts and traditional acquisition
bursts cannot be received simultaneously by the same line card.
An Evolution XLC-M line card can receive up to 16 narrowband carriers. Since a multichannel
line card cannot receive more than eight carriers with Superburst enabled, an XLC-M line card
receiving more than eight carriers must use iDirect's Traditional Acquisition for all carriers.
Advantages of Superburst
Superburst Acquisition represents a significant improvement over iDirect's Traditional
Acquisition. Superburst allows a remote to acquire the network under clear sky or fade
conditions in much less time than Traditional Acquisition. This improvement
is due to the robust nature of the Superburst waveform and the frequency tolerance of the
Superburst demodulator.
When using iDirect's Traditional Acquisition, the frequency detection of the TDMA
demodulator at the hub is limited to a frequency offset of 1.5% of the symbol rate. Due to
frequency inaccuracies throughout the satellite system (mainly caused by the instability of
the hub downconverter), the remote must sweep in discrete frequency steps until the
demodulator detects a burst. Because of the limited detection range, remotes typically
transmit a number of bursts during the sweep that are not detected. This results in a
significant number of allocated acquisition slots that do not result in remote acquisition.
Additionally, traditional acquisition bursts are identical to traffic bursts. Therefore, there
is no additional frequency tolerance or C/N performance that the demodulator can take
advantage of to detect a remote's burst more quickly.
Superburst Acquisition addresses both the narrow tolerance for frequency offset and the
disadvantages of transmitting the acquisition burst at the same MODCOD as the upstream
traffic bursts. Superbursts are transmitted using a unique, robust waveform that is
independent of the MODCOD of the upstream carrier. Superburst increases the frequency
offset tolerance from 1.5% to 7.5% of the upstream symbol rate. In addition, the hub
can reliably detect a Superburst at a receive C/N between -2.5 dB and +20 dB.
Because of these improvements, the hub usually detects a remote's first Superburst. In that
case the remote acquires the network on the first attempt and no frequency sweeping is
required. However, in rare cases (for example at low symbol rates with large hub
downconverter instability), a remote may need to transmit a small number of additional
Superbursts to acquire the network. Typically, a sweep requires no more than a few steps.
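The effect of the wider detection window can be sketched numerically. The following
fragment is illustrative only; the total frequency uncertainty and symbol rate used below are
assumptions for the example, not values taken from this guide:

```python
import math

def sweep_steps(uncertainty_hz, symbol_rate_sps, tolerance_fraction):
    """Number of discrete sweep steps needed to cover a total frequency
    uncertainty when the demodulator detects bursts within
    +/- tolerance_fraction of the symbol rate."""
    step_hz = 2 * tolerance_fraction * symbol_rate_sps  # detection window per step
    return max(1, math.ceil(uncertainty_hz / step_hz))

# Assumed example: 100 kHz total uncertainty, 512 ksps upstream carrier.
traditional = sweep_steps(100e3, 512e3, 0.015)  # +/-1.5% window -> 7 steps
superburst = sweep_steps(100e3, 512e3, 0.075)   # +/-7.5% window -> 2 steps
```

With these assumed numbers, the wider Superburst window reduces a seven-step sweep to two
steps, consistent with the claim that most remotes acquire on the first burst.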
21 Automatic Beam Selection
This section contains information pertaining to Automatic Beam Selection (ABS) for roaming
remotes.
Theory of Operation
ABS is built on iDirect's existing mobile remote functionality. When a remote is in a particular
beam, it operates as a traditional mobile remote in that beam.
Overview
In an ABS system, a roaming remote terminal consists of an iDirect remote modem and a
controllable, steerable, stabilized antenna. The ABS software in the remote modem can
command the antenna to find and lock to any satellite. Using iBuilder, a Network Operator
defines an instance of the remote in each beam that the remote is permitted to use. The
operator configures and monitors all instances of the remote as a single entity. The
consolidated remote options file (which conveys configuration parameters to the remote
from the NMS) contains the definition of each of the remote's beams. Consolidated options
files are described in the iBuilder User Guide.
As a remote moves from one beam to another, the remote must switch from its current beam
to the new beam. Automatic Beam Selection enables the remote to select a new beam,
decide when to switch to the new beam, and to perform the switch, without human
intervention. ABS logic in the remote reads the current location from the antenna and decides
which beam will provide optimal performance for that location. The remote selects the new
beam based either on a beam map or, if no map is available, using a round-robin selection
algorithm. This selection is made by the remote, rather than by the NMS, because the remote
must be able to select a beam even if it is not in any iDirect network.
its new location. The geographical size of the beam maplets varies in order to keep the file
size approximately constant. A typical beam maplet covers approximately 1000 km square
with 0.1 degree resolution. From the maplet, the remote determines the Beam Quality and Tx
Gain for each beam at its current location.
The Beam Quality depends on the geographic location of the remote. To decide which beam
to join, the remote looks up the Beam Quality for each beam at its current location. When
first acquiring a network, the remote uses the Beam Quality of the various beams as inputs to
a decision algorithm that selects the best beam. Once the remote has joined a beam, it
periodically compares the Beam Quality of other networks to the current Beam Quality. When
the Beam Quality on an alternate beam is higher than the current Beam Quality (plus an
additional hysteresis for stability), the remote automatically switches to the alternate beam.
By default, a remote always attempts to join any beam included in the beam map file if that
beam is determined to be the best choice available. This includes beams with a Beam Quality
value of zero for the remote's current location. However, selecting Inhibit Tx (when Beam
Quality = 0) in the iBuilder Network dialog box configures the network so that remotes never
attempt to join a beam if the quality of the beam at the current location is zero.
See the iBuilder User Guide for instructions on configuring a network in iBuilder. For details
on configuring and running a map server, see the appendix Configuring Networks for
Automatic Beam Selection in the iBuilder User Guide.
Beam Selection
The remote uses the Beam Quality of its current location to determine which beam to use.
The following rules apply:
• If the remote has not joined a network, it attempts to enter the beam with the highest
Beam Quality number. If it fails to do so after a configured timeout period, it cycles
through the lower Beam Quality beams in order.
• If a remote is already in a network, it switches beams if there is a beam with a Beam
Quality that is higher than the current Beam Quality plus an offset known as the quality
hysteresis. The quality hysteresis is defined in the map file header.
• A Beam Quality of 0 is a special value indicating that the beam is not usable at the
current location due to lack of coverage and/or geopolitical constraints. (See Regulatory
Considerations on page 156.)
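The rules above can be sketched as a small decision function. This is a simplified model for
illustration, not iDirect code; the beam names and quality values are invented:

```python
def select_beam(current_beam, qualities, hysteresis):
    """Apply the ABS beam-selection rules.

    qualities: {beam_name: beam_quality}; quality 0 means unusable here.
    Out of network: take the highest-quality usable beam.
    In network: switch only if another beam beats the current quality
    plus the hysteresis from the map file header.
    """
    usable = {b: q for b, q in qualities.items() if q > 0}
    if not usable:
        return None  # no usable beam at this location
    best = max(usable, key=usable.get)
    if current_beam is None or current_beam not in usable:
        return best
    if usable[best] > usable[current_beam] + hysteresis:
        return best
    return current_beam  # hysteresis prevents flapping between beams

qualities = {"A": 6, "B": 7, "C": 0}
best = select_beam(None, qualities, hysteresis=2)  # "B": highest usable quality
stay = select_beam("A", qualities, hysteresis=2)   # "A": 7 < 6 + 2, so no switch
```

The hysteresis term is what keeps a remote from oscillating between two overlapping beams
of similar quality.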
The Tx Gain values read by the remote from the beam map are used in two ways:
• When the remote is attempting to acquire the network, it adjusts the initial transmit
power based on the Tx Gain of the current location. This allows the system to
compensate for G/T variations of the satellite. (See Initial Transmit Power on page 160
for details.)
• A Tx Gain of 0 indicates that the beam is receive-only at this location. The remote will
remain in the beam if it is locked on the downstream carrier. However, the remote will
not attempt to establish a return link. This feature can be used to designate receive-only
geographic areas. (See Regulatory Considerations on page 156.)
NOTE: In order to use the iDirect ABS feature, the Satellite Provider must enter
into an agreement with iDirect to provide the beam map data in a supported
conveyance file format.
iDirect provides a utility that converts the conveyance beam map file from the Satellite
Provider into a beam map file that can be used by the iDirect system. Instructions for
converting a conveyance beam map file into an iDirect beam map are contained in the
iBuilder User Guide appendix Configuring Networks for Automatic Beam Selection.
Beginning in iDX Release 3.2, the system uses the Tx Gain contours when calculating C/N
adjustments used to determine the maximum C/N at which a remote can be received at its
current location. Because of this, the minimum Tx Gain contour for all beams in the
generated beam map should be set to zero. When using the GXT format for conveyance beam
maps, this can be accomplished by setting the gain_offset in the GXT header file such that
the minimum Tx Gain of each beam as defined in the beam's gain file plus the gain_offset
for that beam equals zero. The formats of the GXT header file and gain files are defined in
the iDirect GXT Map Converter User Guide.
Regulatory Considerations
iDirect supports the use of separate GXT data files for reading Beam Quality and Tx Gain
information into the iDirect GXT Map Converter utility used for creating beam maps. The GXT
Map Converter also supports the input of geo-political or regulatory constraints. The
regulatory file input uses the same GXT data format as Beam Quality and Tx Gain data files.
The regulatory input creates regulatory contours in the resulting beam map file. A
regulatory contour restricts the service for all beams in the specified area. This avoids the
necessity of individually editing each conveyance beam map file. The GXT data file format
and its use are described in the iDirect GXT Map Converter User Guide.
The two types of regulatory contours are Do Not Operate and Do Not Transmit. A Tx Gain of 0
for a regulatory contour means Do Not Operate. A Tx Gain of 1 for a regulatory contour means
Do Not Transmit.
A Do Not Transmit contour defines a receive-only area for multicast traffic. This disables
remote transmissions in the specified area. Examples of when Do Not Transmit contours can
be useful include:
• Some Ku band transmissions are prohibited near radio telescope installations.
• Some regulatory restrictions limit transmissions over a specific country, even when the
satellite beam covers more than that country.
In either example, adding a Do Not Transmit regulatory contour prevents the remote from
transmitting in the specified location. When a remote receives GPS coordinates indicating
that it has entered a Do Not Transmit contour, it mutes its transmitter within approximately
100 ms.
A Do Not Operate contour disables remote operation. This includes muting transmissions and
disabling receive-only operation within the specified geographic area. For example, a country
may prohibit any operation within its national air space, including receive-only operation.
Do Not Transmit and Do Not Operate contours are added during the map conversion process
from the conveyance format to the map format. A Do Not Operate contour forces the Beam
Quality values to 0 within the contour for all beams. This sets all beams to an unusable state.
A Do Not Transmit contour forces the Tx Gain values to 0 within the contour for all beams.
This tells the terminal to operate in receive-only mode and to mute all transmissions.
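The conversion step described above can be sketched as follows. This is a simplified model of
the map converter's behavior; the maplet data structure (a per-location dictionary of
`[beam_quality, tx_gain]` pairs) is an assumption made for illustration:

```python
# Conveyance-file regulatory markers: Tx Gain 0 = Do Not Operate,
# Tx Gain 1 = Do Not Transmit (per the guide).
DO_NOT_OPERATE = "do_not_operate"
DO_NOT_TRANSMIT = "do_not_transmit"

def apply_contour(maplet, location, contour_type):
    """Force beam values inside a regulatory contour for ALL beams.

    maplet: {location: {beam_name: [beam_quality, tx_gain]}}
    Do Not Operate zeroes Beam Quality (beam unusable).
    Do Not Transmit zeroes Tx Gain (receive-only, transmitter muted).
    """
    for beam_values in maplet[location].values():
        if contour_type == DO_NOT_OPERATE:
            beam_values[0] = 0
        elif contour_type == DO_NOT_TRANSMIT:
            beam_values[1] = 0
    return maplet

maplet = {"cell": {"beam1": [8, 3], "beam2": [5, 2]}}
apply_contour(maplet, "cell", DO_NOT_TRANSMIT)  # both beams now receive-only
```

Note that a Do Not Operate contour leaves the remote with no usable beam at that location,
while Do Not Transmit only mutes the return link.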
If a remote is within a regulatory contour, then leaves the area covered by the current
maplet, and does not have a maplet that covers its new location, it switches to mapless mode
and no longer observes the regulatory restriction. Once it receives a new maplet covering its
current location, it will observe any regulatory contours defined for that location. Using a
local map server machine is the best way to ensure that the remote always has a maplet
covering the current location.
When using the GXT conveyance file format, a Network Operator can define Do Not Operate
and Do Not Transmit contours in a GXT data file and add these contours to the beam map.
The steps for adding regulatory data to the beam map are:
1. Create a GXT data file as specified in the section Geopolitical Constraint Data Files in
the iDirect GXT Map Converter User Guide.
2. Update the GXT header file to point to the GXT data file as specified in the iDirect GXT
Map Converter User Guide.
3. Copy the files to the /etc/idirect/map/Beams directory of the NMS server.
4. Re-execute the GXT Map Converter Utility to add the regulatory data to the beam map
and restart the map server. See the appendix Configuring Networks for Automatic Beam
Selection in the iBuilder User Guide for details.
Do Not Operate and Do Not Transmit contours can also be implemented in the Intelsat
format. However, this requires the conveyance beam map files from the Satellite Provider to
already have the proper values set to zero for the beams in the affected locations.
A beam is considered usable unless an attempt to use it fails. After a failure, the beam is
considered unusable for a period of one hour, or until all visible beams are unusable.
NOTE: A custom key is required to change the length of time that a remote
considers beams to be unusable. (See the appendix Configuring Networks for
Automatic Beam Selection in the iBuilder User Guide for details.)
If the selected beam is unusable, the remote attempts to use another beam, provided one or
more usable beams are available. A beam can become unusable for many reasons, but each
reason ultimately results in the inability of the remote to communicate with the outside
world using the beam. Therefore the only usability check is based on the layer 3 state of
the satellite link, such as whether or not the remote can exchange IP data with the upstream
router.
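The unusable-beam bookkeeping can be modeled as a small class. This is a sketch using the
default one-hour period from this section; the class and method names are illustrative, not
iDirect software interfaces:

```python
import time

UNUSABLE_SECONDS = 3600  # default; changing it requires a custom key

class BeamUsability:
    """Track beams that recently failed. A failed beam is skipped for
    one hour; if every visible beam has failed, all are treated as
    usable again so the remote keeps trying."""

    def __init__(self):
        self.failed_at = {}  # beam -> time of last failure

    def mark_failed(self, beam, now=None):
        self.failed_at[beam] = now if now is not None else time.time()

    def usable(self, visible_beams, now=None):
        now = now if now is not None else time.time()
        # A beam that never failed defaults to "failed long ago", i.e. usable.
        ok = [b for b in visible_beams
              if now - self.failed_at.get(b, -UNUSABLE_SECONDS) >= UNUSABLE_SECONDS]
        return ok if ok else list(visible_beams)

tracker = BeamUsability()
tracker.mark_failed("A", now=0)
candidates = tracker.usable(["A", "B"], now=10)  # "A" skipped for an hour
```

The final `return` implements the "until all visible beams are unusable" escape hatch: rather
than giving up, the remote cycles through all beams again.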
Examples of events that can cause a beam to become unusable include:
• The NMS operator disables the remote instance in iBuilder.
• A Hub Line Card fails with no available backup.
• The Protocol Processor fails with no backup.
• A component in the upstream or downstream RF chain fails.
• The satellite fails.
• The beam is reconfigured.
• The remote cannot lock to the downstream carrier.
• The receive line card stops receiving the remote.
Unless the remote is in receive-only mode, anything that causes the remote to stop
transmitting and the receive line card to stop receiving the remote eventually causes Layer 3
to fail. The remote stops transmitting if it loses downstream lock. A mobile remote will also
stop transmitting under the following conditions:
• The remote has not acquired the beam and no GPS information is available.
• The remote antenna declares loss-of-lock.
• The antenna declares a blockage.
• Beam map data places the remote in receive-only mode.
• The remote has been configured as Rx Only in iBuilder.
• The remote cannot remain in the network for an extended period due to blockage or
network outage.
• The map server is unreachable.
In all cases, after the remote establishes communications with the map server, it immediately
asks for a new maplet. When a maplet becomes available, the remote uses the maplet to
compute the optimal beam, and switches to that beam if it is not the current beam.
IP Mobility
Communications to the customer intranet (or to the Internet) are automatically re-
established after a beam switch. The process of joining the network after a new beam is
selected uses the same internet routing protocols that are already established in the iDirect
system. When a remote joins a beam, the Protocol Processor for that beam begins advertising
the remote's IP addresses to the upstream router using the RIP protocol. When a remote
leaves a beam, the Protocol Processor for that beam withdraws the advertisement for the
remote's IP addresses. When the upstream routers see these advertisements and withdrawals,
they communicate with each other using the appropriate IP protocols to update their routing
tables. This permits other devices on the Internet to send data to the remote over the new
path with no manual intervention.
3. The installer verifies that the remote has requested and received the maplet for the
current location over the TCP/IP link to the map server by entering the console
command:
map show
4. The installer enters the following console command to determine the Initial Transmit
Power Offset of the remote:
beamselector txpower offset <acquisition power>
Where <acquisition power> is the Transmit Power from the Probe (Figure 21-1) plus
any budgeted margin to ensure that the remote can acquire under all conditions.
5. On the iBuilder VSAT tab, the Network Operator enters the transmit power offset
returned by the console command in Step 4 as the Init Tx Power Offset (Figure 21-2).
Figure 21-2. Remote VSAT Tab: Entering the Initial Transmit Power Offset
6. The Network Operator saves and applies the iBuilder configuration changes.
[Figure 21-3 graphic: Beam A shows absolute G/T contours of -3 dB/K (beam edge) and
+1 dB/K, which map to generated Tx Gain contours of 0 dB and +4 dB; point X is marked in
both diagrams. Beam B shows absolute G/T contours of +2 dB/K (beam edge) and +4 dB/K,
which map to generated Tx Gain contours of 0 dB and +2 dB; point Y is marked in both
diagrams.]
Figure 21-3. Absolute vs. Generated G/T Contours for Two Beams
Figure 21-3 shows two beams: Beam A and Beam B. The two diagrams on the left show the
absolute G/T contours for the beams, as obtained from the satellite operator. The generated
map defines the edge of coverage for each beam as the 0 dB contour. The G/T contours of the
generated map are shown on the right for each beam.
Assume the remote is commissioned in Beam A at point X and that the steady-state power
observed is -10 dBm. (All power levels used in this example are normalized to the reference
carrier.) The beamselector txpower offset command used to determine the offset will
return a value of -6 dBm, corresponding to the power that would be needed if the acquisition
took place at the edge of coverage. During actual acquisition attempts in Beam A, the remote
automatically adjusts for the location so that the power used is appropriate.
If the same power setting were used to attempt to acquire in Beam B, the bursts would likely
arrive at too high a level, potentially causing interference or over-driving the satellite and/or
the demodulator. For example, if the remote attempted to acquire at location Y with the
same power setting as in Beam A, then the transmit power would be -8 dBm, which is only
2 dB lower than the edge-of-beam value.
If the difference in G/T is the only difference between the link budgets for these two beams
(for example, if both beams are dominated by the uplink), then the bursts in Beam B will
arrive at a level corresponding to the difference in the edge-of-beam G/T. In that case, the
network designer can conclude that the Initial Transmit Power Offset value for Beam B should
be 5 dB lower than that used for Beam A, or -11 dBm. At point Y the acquisition power used
would be -13 dBm.
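The arithmetic of this example can be summarized in code. The sign convention used here
(acquisition power equals the offset minus the local map Tx Gain, all values normalized to the
reference carrier) is inferred from the example numbers; treat this as an illustration rather
than a definitive statement of the firmware's behavior:

```python
def initial_offset(steady_state_dbm, tx_gain_db):
    """Edge-of-coverage (0 dB contour) power implied by the steady-state
    power observed at the commissioning location."""
    return steady_state_dbm + tx_gain_db

def acquisition_power(offset_dbm, tx_gain_db):
    """Power used at a location with the given beam-map Tx Gain."""
    return offset_dbm - tx_gain_db

# Worked example from Figure 21-3:
offset_a = initial_offset(-10.0, 4.0)         # -6 dBm at the Beam A edge
used_at_x = acquisition_power(offset_a, 4.0)  # back to -10 dBm at point X
# Reusing Beam A's offset in Beam B at point Y (Tx Gain +2 dB):
used_at_y = acquisition_power(offset_a, 2.0)  # -8 dBm: too hot for Beam B
# Beam B's edge G/T is 5 dB higher than Beam A's, so:
offset_b = offset_a - 5.0                     # -11 dBm
used_b_y = acquisition_power(offset_b, 2.0)   # -13 dBm at point Y
```

The key point is that the offset is an edge-of-coverage value, so the same offset transfers
between locations within one beam but not between beams with different edge G/T.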
If there are other factors influencing the inbound link budget, a more detailed assessment of
the differences between the beams may be appropriate for determining the Initial Transmit
Power Offset value, or the remote should be commissioned separately in each beam.
NOTE: This feature can be disabled on a remote by entering a custom key. See the
appendix Configuring Networks for Automatic Beam Selection in the iBuilder
User Guide for details.
A remote may also be configured as Rx Only on the iBuilder Remote Information tab. This is a
general feature and is not restricted to remotes using the ABS feature. If a remote is
configured as Rx Only in iBuilder, the remote will never transmit.
To run a local version of the map server, override the map server IP address in the remote's
options file by entering a remote-side custom key defining the local map server's IP address
within the private address space of the remote. This custom key must be applied to each
remote with a local map server. (See the appendix Configuring Networks for Automatic
Beam Selection in the iBuilder User Guide for the definition of this custom key.)
Operational Scenarios
This section presents a series of top-level operational scenarios that can be followed when
configuring and managing iDirect networks that contain roaming remotes using Automatic
Beam Selection. Steps for configuring network elements such as iDirect networks (beams) and
roaming remotes are documented in the iBuilder User Guide. Steps specific to configuring ABS
functionality, such as adding an ABS-capable antenna or converting a conveyance beam map
file, are described in the appendix Configuring Networks for Automatic Beam Selection in
the iBuilder User Guide.
Adding a Remote
This scenario outlines the steps required to add a roaming remote using ABS to all available
beams.
1. The NMS operator configures the remote in one beam.
2. The NMS operator adds the remote to the remaining beams.
3. The NMS operator saves the remote's options file and delivers it to the installer.
4. The installer installs the remote.
5. The installer copies the options file to the remote using iSite.
6. The installer manually selects a beam for commissioning.
7. The remote commands the antenna to point to the satellite.
8. The remote receives the current location from the antenna.
9. The installer commissions the remote in the initial beam.
10. The remote enters the network and requests a maplet from the NMS map server.
11. The remote checks the maplet. If the commissioning beam is not the best beam, the
remote switches to the best beam as indicated in the maplet. This beam is then assigned
a high preference rating by the remote to prevent the remote from switching between
overlapping beams of similar quality.
12. Assuming the remote is at beam center under clear sky conditions:
a. The installer determines the Initial Transmit Power Offset as discussed in Setting the
Initial Transmit Power Offset on page 161.
b. The Network Operator sets the Initial Transmit Power Offset in iBuilder on the
Remote VSAT tab.
c. The Network Operator sets the TDMA Maximum Power to 6 dB above the nominal
transmit power in iBuilder on the Remote Information tab.
NOTE: Check the levels the first time the remote enters each new beam and
adjust the transmit power settings if necessary.
Normal Operations
This scenario describes the events that occur during normal operations when a remote is
receiving map information from the NMS.
1. As the remote moves, it periodically receives the current location from the antenna.
2. While in the beam, the antenna automatically tracks the satellite.
3. As the remote approaches the edge of the current maplet, it requests a new maplet from
the map server.
4. When the remote reaches a location where the maplet shows a better beam, the remote
switches by doing the following:
a. Computes best beam.
b. Saves best beam to non-volatile storage.
c. Commands the antenna to move to the correct satellite and beam.
d. Reboots.
e. Reads the new best beam from non-volatile storage.
f. Again commands the antenna to move to the correct satellite and beam.
NOTE: This command is repeated after reboot because the remote is not
aware of the reason for restart. The antenna controller ignores the repeated
command since it has already been commanded to track this beam in Step c.
Mapless Operations
This scenario describes the events that occur during operations when a remote is not
receiving beam mapping information from the NMS.
1. While operational in a beam, the remote periodically asks the map server for a maplet.
The remote does not attempt to switch to a new beam unless one of the following
conditions is true:
a. The remote drops out of the network.
b. The remote receives a maplet indicating that a better beam exists.
c. The satellite drops below the minimum look elevation defined for that beam.
2. If not acquired, the remote selects a visible, usable beam based only on satellite
longitude and attempts to switch to that beam.
3. After five minutes (by default), if the remote is still not acquired, it marks the new beam
as unusable and selects the best beam from the remaining visible, usable beams in the
options file. This step is repeated until the remote is acquired in a beam, or all visible
beams are marked as unusable.
4. If all visible beams are unusable, the remote marks them all as usable, and continues to
attempt to use each beam in a round-robin fashion as described in step 3.
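Steps 2 through 4 can be sketched as a selection function. The exact longitude criterion is
not specified in this section, so the "closest satellite longitude to the remote" rule below is
an assumption made for illustration, as are the beam names:

```python
def pick_mapless_beam(remote_lon, beams, unusable):
    """Mapless selection sketch.

    beams: {beam_name: satellite_longitude_deg}
    unusable: set of beam names marked unusable (mutated in place).
    Chooses among visible, usable beams by satellite longitude
    (assumed here: closest to the remote's longitude). If every beam
    is marked unusable, the marks are cleared and all beams are tried
    again in round-robin fashion.
    """
    candidates = {n: lon for n, lon in beams.items() if n not in unusable}
    if not candidates:
        unusable.clear()  # step 4: all unusable -> mark all usable again
        candidates = dict(beams)
    return min(candidates, key=lambda n: abs(candidates[n] - remote_lon))

beams = {"WEST": -10.0, "EAST": 30.0}  # illustrative names and longitudes
unusable = set()
chosen = pick_mapless_beam(25.0, beams, unusable)  # "EAST" is closest
```

A beam that fails the five-minute acquisition attempt would then be added to `unusable`
before the function is called again.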
Error Recovery
This section describes the actions taken by the remote under certain error conditions.
1. If the remote cannot communicate with the antenna and is not acquired into the
network, it will reboot after five minutes.
If the antenna is initializing, the remote waits for the initialization to complete. It will not
attempt to switch beams during this time.
22 Hub Geographic Redundancy
This chapter describes how to establish a primary and a backup hub that are geographically
diverse. It includes the following sections:
• Feature Description describes how geographic redundancy is accomplished.
• Configuring Wait Time Interval for an Out-of-Network Remote describes how to set the
wait period before switchover.
Feature Description
The Hub Geographic Redundancy feature builds on the Global NMS feature and the existing
Backup and Restore utilities. The Network Operator configures the Hub Geographic
Redundancy feature by defining all the network information for both the Primary and Backup
Teleports in the Primary NMS. All remotes are configured as roaming remotes and they are
defined identically in both the Primary and Backup Teleport network configurations.
During normal (non-failure) operations, carrier transmission is inhibited on the Backup
Teleport. During failover conditions (when roaming network remotes fail to see the
downstream carrier through the Primary Teleport NMS) the operator can manually enable the
downstream transmission on the Backup Teleport, allowing the remotes to automatically
acquire the downstream transmission through the Backup Teleport NMS. Re-acquisition occurs
after a wait period that defaults to five minutes.
iDirect recommends the following for most efficient switchover:
• A separate IP connection (at least 128 Kbps) between the Primary and Backup Teleport
NMS for database backup and restore operations. A higher rate line can be employed to
reduce the database archive time.
• The downstream carrier characteristics for the Primary and Backup Teleports MUST be
different. For example, either the FEC, frequency, frame length, or data rate values
must be different.
• On a periodic basis, back up and restore the NMS configuration database between the
Primary and Backup Teleports. See the NMS Redundancy and Failover Technical Note for
complete NMS redundancy procedures.
23 Carrier Occupied Bandwidth
This chapter discusses occupied bandwidth of iDirect carriers. Occupied bandwidth includes
the actual carrier size plus the guard band required to prevent interference with adjacent
carriers. Minimizing the guard band between adjacent carriers optimizes the use of the
available satellite bandwidth.
In earlier releases, iDirect required a minimum guard band of 20% of the carrier symbol rate
for both upstream and downstream carriers. Beginning with iDX Release 3.2, the minimum
guard band required for DVB-S2 downstream carriers has been reduced from 20% of the carrier
symbol rate to 5% of the carrier symbol rate by reducing the roll-off factor of these carriers.
This is discussed in detail in DVB-S2 Roll-Off Factors on page 173.
This chapter includes the following sections:
• Overview describes the relationships among occupied bandwidth, guard band, and
information rate, and factors to consider when setting the guard band.
• DVB-S2 Roll-Off Factors describes the improvements to the iDirect downstream carrier
waveform that allow reduced guard bands beginning with iDX Release 3.2.
• Improving Throughput by Reducing Guard Band provides an example of how to increase
the efficiency of the occupied bandwidth by reducing the guard band of DVB-S2 carriers.
• DVB-S2 Guard Band Constraints specifies limitations of minimum symbol rates and
MODCODs for DVB-S2 carriers with small guard bands.
• Adjacent Channel Interference discusses the relationship between the DVB-S2 roll-off
factor and interference on adjacent carriers.
Overview
A lower guard band requirement between carriers makes it possible to fit a higher bit rate
carrier into the same satellite bandwidth. Therefore, with a lower guard band requirement, a
Network Operator can increase the bit rate of existing carriers without purchasing additional
bandwidth.
Optimized digital filtering is used in the iDirect transmit firmware to minimize the amount of
satellite bandwidth required for an iDirect carrier. For upstream carriers, iDirect supports a
guard band as low as 20% of the carrier symbol rate. Beginning in iDX Release 3.2, iDirect
supports a guard band for DVB-S2 downstream carriers as low as 5% of the carrier symbol rate.
The amount of required guard band between carriers can also be expressed as the carrier
spacing requirement. For example, if the required guard band is 20%, the channel spacing
requirement is 1.2*Carrier Symbol Rate in Hz. Carrier spacing can be configured in iBuilder for
upstream and downstream carriers. This field can be used to document the total occupied
bandwidth of the carriers. It represents the carrier bandwidth plus the guard band normalized
by the symbol rate.
The Carrier Spacing configured in iBuilder is only enforced on multichannel receive line cards
to prevent overlap when assigning upstream carriers to a specific line card. Otherwise, it is
the responsibility of the Network Operator to ensure that adjacent carriers do not interfere
with each other.
NOTE: Evolution e8000 Series remotes can only be used with downstream carriers
configured for a 20% roll-off factor (carrier spacing 1.2). If an Evolution e8000
Series remote is added to a network with a smaller roll-off factor, its iBuilder
status is set to Incomplete and the remote will not operate.
NOTE: Not all DVB-S2 downstream carriers support a 5% guard band. See DVB-S2
Guard Band Constraints on page 175 for details.
Figure 23-1 illustrates the difference between a 20% and 5% roll-off factor for a DVB-S2
carrier.
With a 5% roll-off factor (shown in blue in Figure 23-1), the transmitted spectrum more
closely resembles the ideal brick-wall spectrum with occupied bandwidth at 1.05 times the
carrier symbol rate. The spectral efficiency gain is approximately 14% for bandwidth-limited
link budgets. Note that for the same transmit power, the 5% roll-off factor has a Power
Spectral Density (PSD) differential of 0.58 dB when compared to a 20% roll-off factor.
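The 0.58 dB PSD differential follows directly from the ratio of occupied bandwidths: the same
power spread over (1 + roll-off) times the symbol rate is denser when the roll-off is smaller.
A minimal check of that arithmetic:

```python
import math

def psd_differential_db(rolloff_wide, rolloff_narrow):
    """PSD gain of the narrower roll-off when total transmit power and
    symbol rate are held constant: the power occupies
    (1 + rolloff) * Rs of bandwidth in each case."""
    return 10 * math.log10((1 + rolloff_wide) / (1 + rolloff_narrow))

delta = psd_differential_db(0.20, 0.05)  # ~0.58 dB
```

This confirms the figure quoted above: 10·log10(1.20/1.05) ≈ 0.58 dB.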
Because of the improved roll-off factor for DVB-S2 carriers introduced in iDX Release 3.2,
iBuilder allows Network Operators to select one of four values for Carrier Spacing when
configuring a DVB-S2 downstream carrier:
• 1.20 (Guard band = 20% of carrier symbol rate)
• 1.15 (Guard band = 15% of carrier symbol rate)
• 1.10 (Guard band = 10% of carrier symbol rate)
• 1.05 (Guard band = 5% of carrier symbol rate)
NOTE: Increasing the information rate of existing carriers may affect the link
budget of the satellite network. This is especially true for upstream carriers since
uplink power control will automatically increase the transmit power of the
remotes. Consult with the link budget provider prior to adjusting any carrier
configurations.
Table 23-1 illustrates the increase in information rate that can be achieved on a DVB-S2
downstream carrier by decreasing the guard band while holding the occupied bandwidth
constant. The calculations were performed using the 8PSK-8/9 MODCOD.
In the example, reducing the guard band from 20% of the symbol rate to 5% of the symbol rate
results in a 14.29% increase in information rate.
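The 14.29% figure can be reproduced from the carrier spacing factors alone, since for a fixed
occupied bandwidth the achievable symbol rate (and hence information rate, at a fixed
MODCOD) scales inversely with the spacing factor:

```python
def info_rate_gain(spacing_old, spacing_new):
    """Relative information-rate gain when occupied bandwidth and MODCOD
    are held constant and only the carrier spacing factor changes.
    Symbol rate = occupied_bw / spacing, so the gain is the spacing ratio."""
    return spacing_old / spacing_new - 1

gain = info_rate_gain(1.20, 1.05)  # ~0.1429, i.e. a 14.29% increase
```

The same ratio explains the "approximately 14%" spectral efficiency gain quoted earlier for
bandwidth-limited link budgets.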
For DVB-S2 roll-off factors of 10%, 15%, and 20%, the BER degradation of the wanted carrier
does not exceed 0.25 dB relative to the link budget analysis (LBA) guide across all supported
MODCODs for ACI levels of up to +7 dBc (each side), independent of the symbol rate, roll-off,
and waveform modulation type of the adjacent carriers. However, for the 5% roll-off factor,
robustness to ACI is slightly degraded.
Depending on the carrier arrangement, robustness to ACI for the 5% roll-off factor varies from
+4 dBc to +7 dBc on either side. The worst case typically occurs at the highest operating
MODCOD (i.e., 16APSK-8/9) under the following conditions:
• The wanted carrier is less than 15 Msps and the adjacent carriers have identical symbol
rates shaped with a 5% roll-off factor
• Strong, narrow-band adjacent carriers occupy 5% to 10% of the symbol rate of the
wanted carrier on either side
In most cases, the DVB-S2 downstream carrier operates at the highest power spectral density
in the transponder, maximizing the contracted equal power equal bandwidth (EPEBW) limit.
This makes ACI levels exceeding +4 dBc unlikely. In addition, adjacent carriers shaped at 5%
(the first condition) or narrow-band carriers stronger than the DVB-S2 downstream carrier
(the second condition) that are likely to violate the EPEBW mask are unusual.
Network designers can realize the benefit of 5% roll-off by planning around the identified
constraints.
24 Alternate Downstream Carrier
This chapter provides information about iDirect's Alternate Downstream Carrier feature. It
contains the following sections:
• Background on page 179
• Feature Description on page 179
Background
The Alternate Downstream Carrier feature is intended to make it easier to move an iDirect
network to a new transmit carrier and to eliminate the danger of stranding remotes that have
not received the new carrier definition when the carriers are switched. If, for example, the
network must move to a larger downstream carrier, the Alternate Downstream Carrier feature
can facilitate the transition. Before this feature was available, if the downstream carrier
changed, a site visit was required to recover any remotes that were not in the network at the
time that the carrier was changed.
The Alternate Downstream Carrier feature is disabled if the NMS server is licensed for the
Global NMS feature. However, the Global NMS feature can accomplish the same goal by
creating an alternate network containing the new downstream carrier and configuring
roaming remote instances in both the existing network and the new network. Like the
Alternate Downstream Carrier feature, this allows the Network Operator to verify that all
remotes have the new downstream carrier definition prior to the actual upgrade.
Feature Description
Beginning in iDX Release 2.0, iBuilder provides the capability of selecting an alternate
downstream carrier on the Line Card dialog box of the transmit line card. (See the chapter
titled Defining Networks, Line Cards, and Inroute Groups in the iBuilder User Guide for
details). The configuration includes all necessary parameters for the remote to acquire the
alternate downstream. Configure the alternate carrier for a network well in advance of the
carrier change to ensure that all remotes have the alternate carrier definition when the
downstream change is implemented.
If a remote is not in the network at the time of the carrier change, it will attempt,
unsuccessfully, to acquire the old primary carrier when it first tries to rejoin the network. Since the old
primary carrier is no longer being transmitted, the remote will then attempt to acquire its
configured alternate downstream carrier which is the new primary carrier. At that point the
remote will acquire the network on the new carrier.
When a remote joins a network with a configured Alternate Downstream Carrier, it first
attempts to acquire the last downstream carrier to which it was locked before it attempts to
acquire the other carrier. Therefore, if the remote was last locked to the primary carrier, it
attempts to lock to the primary carrier again when it tries to rejoin the network. Similarly, if
the remote was last locked to the alternate carrier, it attempts to lock to the alternate
carrier again when it tries to rejoin the network.
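The carrier selection behavior described above can be sketched as follows. This is a simplified conceptual model, not iDirect remote firmware; all function and variable names are invented for illustration:

```python
# Simplified sketch of the downstream acquisition fallback described above.
# Not actual iDirect firmware; all names here are illustrative.

def acquisition_order(last_locked, primary, alternate):
    """Return the order in which the remote tries its downstream carriers.

    A remote first retries the carrier it was last locked to; a remote
    that has never locked to any carrier starts with the primary.
    """
    first = last_locked if last_locked is not None else primary
    second = alternate if first == primary else primary
    return [first, second]

# A remote last locked to the alternate carrier retries it first.
print(acquisition_order("alt", "pri", "alt"))   # ['alt', 'pri']
# A newly commissioned remote always starts with the primary carrier.
print(acquisition_order(None, "pri", "alt"))    # ['pri', 'alt']
```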
By default, a remote tries for five minutes (300 seconds) to find the last carrier before
switching to the other carrier. However, this timeout can be changed by defining the
net_state_timeout remote-side custom key on the Remote Custom tab in iBuilder as
follows:
[BEAMS_LOCAL]
net_state_timeout = <timeout>
where <timeout> is the number of seconds that the remote tries to acquire the
primary carrier before switching to the alternate carrier.
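For example, to shorten the search time from the default 300 seconds to two minutes (120 is an illustrative value, not a recommendation), the custom key would read:

```ini
[BEAMS_LOCAL]
net_state_timeout = 120
```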
NOTE: If a new remote has never locked to any carrier, it always attempts to lock
to the primary downstream carrier first. Therefore, when commissioning a new
remote, it will first look for the primary carrier even if an alternate carrier is
configured.
Primary and alternate downstream carriers cannot co-exist as active carriers in an iDirect
system. In addition, the Alternate Downstream Carrier feature is not intended to be used as a
recovery channel. A carrier selected as the Alternate Downstream Carrier for one Tx line card
cannot also be assigned to another line card, either as the primary or alternate carrier.
The procedure for moving a network to an Alternate Downstream Carrier is documented in the
iBuilder User Guide. See Changing to an Alternate Downstream Carrier in the chapter titled
Defining Networks, Line Cards, and Inroute Groups.
25 Transmit Key Line
This chapter describes the iDirect Transmit Key Line feature. It includes the following
sections:
Introduction
Feature Description
Introduction
The iDirect Transmit Key Line feature is supported on iConnex e850mp and Evolution e150
remotes. The feature uses a pair of differential hardware lines to indicate when the remote is
transmitting on a burst-by-burst basis. iDirect anticipates that this signal will be used by
terminal providers to conserve power on the remote terminal by powering on the Solid State
Power Amplifier (SSPA) on the Block Upconverter (BUC) only when the remote is required to
transmit on the upstream carrier.
NOTE: This feature is only available for remote terminals equipped with an
iConnex e850mp or Evolution e150 satellite router board that has been properly
integrated with a BUC that supports the transmit key line signal.
Feature Description
Maintaining satellite communications is often a critical requirement of deployments in which
the only power source available to the remote terminal consists of batteries or small
generators with limited fuel. This makes power conservation crucial to the success of the
mission. Since the biggest power requirement of a satellite terminal comes from the BUC, the
Transmit Key Line feature is designed to allow the terminal to conserve power by turning off
the BUC Power Amplifier (PA) when the remote is not transmitting. This significantly increases
the amount of time the terminal is on line.
The Transmit Key Line feature uses a differential RS-422-compatible signal provided to the
BUC through a connector on the remote modem. Details of this interface are defined in the
integration guide for each model type. For the iConnex e850mp, see the e800 Series Satellite
Router Integration Guide. For the Evolution e150, see the e150 Satellite Router Integration
Guide.
By default, the Transmit Key Line feature is disabled for e850mp and e150 remotes. A
Network Operator can enable the feature on individual remotes by selecting a check box on
the Remote VSAT Tab in iBuilder. When Transmit Key Line is enabled, the operator must also
enter a BUC PA warm up time between 0 and 1700 microseconds (µs). This represents the
minimum amount of time prior to transmitting that the modem will enable the Transmit Key
Line signal. See the chapter titled Configuring Remotes in the iBuilder User Guide for a
step-by-step procedure to enable the Transmit Key Line feature on remote modems.
26 NMS Database Replication
Beginning with iDX Release 3.1, iDirect supports replication of NMS databases from the
Primary NMS Server to one or more Backup NMS Servers. This chapter describes the NMS
Database Replication feature and explains how to configure and monitor the feature on NMS
server machines.
NOTE: Great care must be taken not to permit external applications that read
the replicated databases to update the databases residing on the backup server.
Doing so will cause replication from the Primary NMS to the Backup NMS to stop.
The failure may only become apparent when the MySQL master attempts to update
the same table and row that was inappropriately updated in the MySQL slave
database.
Allows replication of key server configuration files. By using the -b option when setting
up replication, key NMS server files are automatically replicated to the backup server.
This eliminates the need to manually back up these files to provide server redundancy.
Feature Description
The NMS Database Replication feature provides MySQL database replication from a Primary
NMS Server to one or more Backup NMS Servers. Replication in MySQL provides support for
one-way, asynchronous replication, in which one server acts as the master, while one or more
other servers act as slaves.
Because replication is asynchronous, updates can be made to the master databases even
when the slave servers are not connected to the master server. When a slave re-connects to
the master, any database updates logged on the master while the slave was disconnected are
applied on the slave to re-synchronize the databases.
When NMS Database Replication is enabled, the MySQL master running on the Primary NMS
writes database updates as events to a binary log. Slaves are configured to read the binary
log on the master and to execute the events in the binary log on the slave's local database.
Because it is the responsibility of each slave to keep track of which transactions have been
executed on its local databases, individual slaves can be connected and disconnected from
the server without affecting the master's operation.
The example in Figure 26-1 shows a single NMS server with one backup server and an external
server that reads the replicated database. However, the NMS Database Replication feature
can be set up in a number of different configurations. Note the following:
NMS databases can be replicated from a single MySQL master to a single MySQL slave (as
shown in Figure 26-1) or from a single MySQL master to multiple MySQL slaves.
It is not possible to replicate the NMS databases from multiple MySQL masters to a single
MySQL slave. Each MySQL slave can only receive replicated databases from a single
master.
It is not possible to select which individual NMS databases that reside on an NMS server to
replicate. When NMS Database Replication is enabled, all databases are replicated.
idsBackup can be configured to run on any, all, or none of the MySQL slave servers.
idsBackup can be configured to archive the NMS databases to its own server (as shown in
Figure 26-1) or to a different backup server.
On a distributed NMS, NMS Database Replication can run on any, all, or none of the NMS
servers on which databases are stored.
On a distributed NMS, it is not possible to replicate the NMS databases from multiple NMS
servers on which the databases are stored to a single backup server. Each NMS server
with replication enabled must replicate its database(s) to a different backup machine.
Key configuration files can be replicated to the Backup NMS Server by enabling
replication with the -b option (see page 191). This allows the Backup NMS Server to be
easily brought on line as the Primary NMS Server if the primary server fails. However,
when using this option, the backup server cannot act independently in another role (such
as a GKD server) since the configuration on the backup server will be overwritten by the
configuration from the Primary NMS Server.
Figure 26-2 shows a sample implementation of the NMS Database Replication feature with one
Primary NMS Server and one Backup NMS Server.
Figure 26-2. NMS Database Replication from a Single Primary NMS Server
In the example in Figure 26-2, the databases on the Primary NMS Server are replicated to the
Backup NMS Server. idsBackup runs on the backup server and stores the archived databases to
the local machine.
When you enable NMS Database Replication on a Primary NMS Server, the Primary NMS
becomes the MySQL master and the server(s) that you specify become the MySQL slave(s).
Enabling replication automatically creates a copy of all of the NMS databases on each slave
server. From then on, the master writes all changes made to its local databases to the binary
log (see Figure 26-1 on page 184).
When you enable NMS Database Replication on a Primary NMS Server, idsBackup is
automatically disabled on the Primary NMS and enabled on the backup server(s). Although you
can run idsBackup on a Primary NMS Server with database replication enabled, doing so
negates one of the main advantages of enabling replication as discussed on page 183. (See the
technical note NMS Redundancy and Failover for details on configuring idsBackup.)
Each slave is responsible for reading the MySQL binary log and updating its own copy of the
database. In the case of one master and multiple slaves, at any given time, database updates
that have been replicated on one slave may not have been replicated on another slave.
Therefore, the master must ensure that all slaves have processed an update before deleting it
from the log.
It is possible for replication to stop working on a MySQL slave. For example, if a change is
made directly to a slave database, replication on that slave will stop when the master
database is determined to be different from the slave database. In the case that replication
stops on a slave, the slave will stop processing the updates in the binary log and send a
warning to the NMS. An active condition will appear in iMonitor stating that a replication error
has been detected on the MySQL slave server.
If a replication error is detected, it is important that the Network Operator take action to
recover from the failure. If no action is taken, the log files on the master can no longer be
purged and it is possible to run out of disk space on the Primary NMS Server. To recover from
the failure, the operator should delete and recreate the MySQL slave. This will re-copy the
MySQL master databases to the MySQL slave server and restart replication on the slave.
Figure 26-3 shows a distributed NMS with NMS Database Replication enabled on all three NMS
server machines.
Notice in Figure 26-3 that both the nms and nrd_archive databases are created on all three
Primary NMS Servers. Each database is created on each server even if no process that writes
to that database runs on that server. When replication is enabled on a Primary NMS Server,
each database on that server is copied onto the Backup NMS Server even if the database
contains no data.
As discussed on page 185, you cannot replicate the NMS databases from multiple NMS servers
on which the databases are stored to a single backup server. Each NMS server with replication
enabled must replicate its database(s) to a different backup machine. Therefore, to replicate
the databases on all three NMS server machines, three backup machines are required to act as
MySQL slaves.
However, there is no requirement to replicate the databases on all Primary NMS Servers. For
example, it is possible to configure a DNMS to replicate only the nrd_archive databases on
NMS Server 2 and NMS Server 3, and run idsBackup locally on NMS Server 1 to back up the nms
database nightly. This would eliminate the need for NMS Backup Server 1 in Figure 26-3.
NOTE: See the Technical Note NMS Redundancy and Failover for instructions on
bringing a Backup NMS Server on line if the Primary NMS Server fails.
Examples
This section provides examples showing how to configure NMS Database Replication on NMS
Servers. With the exception of the cr8DbSlave command, which is used to stop replication on a
MySQL slave's local machine to prepare for a failover, all commands should be run from the
root account of the Primary NMS containing the databases to be replicated.
If not previously done, enable remote access from the Primary NMS Server to a backup server
by executing the pushSSHKeys command. This command only needs to be executed one
time.
To enable remote access to a backup server:
1. Log on to the root account of the Primary NMS Server.
2. Enter the command:
pushSSHKeys <backup server IP Address>
where <backup server IP Address> is the IP address of the backup server.
NOTE: Enabling replication restarts iDirect NMS services on the Primary NMS
Server.
NOTE: When the -b option is used, the backup server cannot act independently in
another role (such as a GKD server) since the configuration on the backup server is
periodically overwritten by the configuration on the primary server.
CAUTION: Do not use the -b option when enabling replication for all non-CFG NMS
servers in the DNMS environment.
If the -b option has already been used on these NMS servers, contact TAC
immediately for additional support.
NOTE: If 192.168.77.16 is the only MySQL slave associated with this MySQL
master, this command also disables replication on this Primary NMS Server and this
NMS Server is no longer configured as a MySQL master.
This command can be used to re-configure a Backup NMS Server to no longer act as a MySQL
slave if the Primary NMS Server is not available. For example, if the Primary NMS Server has
failed, this command properly stops replication in preparation for making the slave the new
Primary NMS.
Execute the following command on the Backup NMS Server to stop the server from being a
MySQL slave:
cr8DbSlave -d
repl-file-send copies key NMS configuration files to the slave server. This option is
intended to facilitate the use of the Backup NMS Server as the primary in case the
Primary NMS Server fails. This cronjob is only set up if the -b option is specified when
running cr8DbMaster. By default, this cronjob is set to run at two minutes after the hour
and every five minutes thereafter when replication is configured using the cr8DbMaster
script. When the last slave is deleted, the cr8DbMaster script removes this cronjob.
NOTE: These default settings can be changed by your Linux Systems Administrator
by editing the crontab file on the MySQL master server after replication is
enabled.
NOTE: When the cr8DbMaster script is executed on the Primary NMS Server, the
NMS processes are automatically restarted. Operators logged on to iMonitor when
this script is executed should select Log On from the File menu and log on again or
they may not see new Conditions.
NOTE: Since replication (and therefore the Active Condition shown in Figure 26-6)
is not associated with any elements in the iMonitor tree view, no associated
alarms or warnings appear in the tree. NMS operators should check the Active
Conditions regularly to ensure that there are no active replication errors.
If iMonitor displays a persistent Active Condition indicating a replication error, first ensure
that the Primary NMS Server has IP connectivity to the MySQL slave server. If not, re-establish
the IP connection from the MySQL master server to the MySQL slave server and monitor the
Active Condition in iMonitor to see if it clears.
If the error condition does not clear and the Primary NMS Server can communicate with the
Backup NMS Server, force the Primary NMS Server to re-copy the database(s) to the Backup
NMS Server and restart replication as follows:
1. Log on to the root account of the Primary NMS Server acting as MySQL master.
2. Enter the command:
cr8DbMaster -f -u <User> -p <Password> <Slave IP Address>
where:
<User> is the MySQL User configured when enabling replication
<Password> is the MySQL User password
<Slave IP Address> is the IP Address of the MySQL slave server
3. Check the Active Conditions in iMonitor to make sure the condition clears.
Licenses are required for chassis slots and certain iDirect features before they can be enabled
in iBuilder. Please see the iDirect Features and Chassis Licensing Guide for detailed
information on how to obtain, install and activate iDirect licenses.
NMS Server Feature Licenses are software feature licenses that apply to all iDirect
networks managed by an NMS server. To enable these licenses, a valid license file must
be copied to a specific directory on the NMS server.
NMS Server Feature Licenses include:
Global NMS
VNO User Groups
CNO User Groups
Protocol Processor Blade Feature Licenses are software feature licenses that apply to all
iDirect modems controlled by a Protocol Processor Blade server. To enable these
licenses, a valid license file must be copied to a specific directory on the PP blade server.
PP Blade Server Feature Licenses include:
Encryption License (for Link Encryption and TRANSEC)
High-speed COTM licenses are required for mobile remotes that exceed the speed of 150 mph.
A mobile remote determines its speed by monitoring its geographic location over time. If an
unlicensed remote calculates three consecutive times that it has exceeded 150 mph, the
remote issues the event COTM License Violated and all user traffic to the remote is
stopped. When the remote's speed falls below the 150 mph limit, the violation is reset and
the remote resumes carrying user traffic.
NOTE: For more information on the High-Speed COTM feature, see the appendix
COTM Settings and Custom Keys in the iBuilder User Guide.
For information on importing Chassis Slot Licenses and Hardware-Specific Feature Licenses
into iBuilder and for validating Hub Slot Group Licenses in iBuilder, see the iBuilder User
Guide.
This chapter describes basic hub line card failover concepts, transmit/receive versus receive-
only line card failover, the failover sequence of events, and failover operation from a user's
point of view.
For information about configuring line cards for failover, refer to the Defining Networks, Line
Cards, and Inroute Groups chapter of the iBuilder User Guide.
NOTE: If a Tx line card fails, or there is only one Rx line card and it fails, all
remotes must re-acquire the network after failover is complete.
A warm standby line card is pre-configured with the parameters of the Tx card for that network, and has those
parameters loaded into memory. The only difference between the active Tx(Rx) card and the
warm standby is that the standby mutes its transmitter (and receiver). When the NMS detects
a Tx(Rx) line card failure, it sends a command to the warm standby to un-mute its transmitter
(and receiver), and the standby immediately assumes the role of the Tx(Rx) card.
Cold standby line cards take longer to failover than warm standby line cards because they
need to receive a new options file, flash it, and reset.
This chapter defines the IP port numbers that are used by the various processes that run on
the NMS servers, Protocol Processor servers, and GKD servers. This information is provided to
assist iDirect Network Operators, iDirect network architects, and any other personnel
responsible for configuring and maintaining iDirect networks or for integrating iDirect
networks with other systems.
This chapter contains the following sections:
NMS Server Ports on page 203 defines the port numbers for all ports used by the NMS
server processes that run on NMS server machines.
GKD Server Ports on page 205 defines the port numbers used by the GKD servers to
manage TRANSEC keys. A GKD server can run on an NMS, protocol processor blade, or
standalone machine.
PP Controller Ports on page 205 defines the port numbers used by the PP controller
process that runs on the NMS server machine.
Protocol Processor Ports on page 205 defines the port numbers used by the processes
that run on the protocol processor blade machine.
Port Assignments for NMS/Upstream Router Traffic Flow on page 206 defines the port
numbers and respective protocols for traffic flow between the NMS servers and the
upstream router.
PP Controller Ports
Table 29-2 defines the port numbers used by the pp_controller process that executes on the
NMS server machine. These are internal port numbers used for communication between the
PP Controller and the PP blades. For each Port Type, the pp_controller uses the base port
number + index number to derive the port numbers on the NMS server machine. Each index is
specified by the corresponding protocol processor ID in the NMS configuration database.
Table 29-2. pp_controller Ports
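The base-plus-index derivation described above can be expressed as a one-line calculation. The base port value used below is hypothetical; the actual base port numbers for each Port Type are listed in Table 29-2:

```python
# Sketch of the pp_controller port derivation described above.
# BASE_PORT is a hypothetical example value; the real base port numbers
# for each Port Type are listed in Table 29-2.

def pp_controller_port(base_port, pp_id):
    """Derive the NMS-side port number for a given protocol processor ID."""
    return base_port + pp_id

BASE_PORT = 14000  # hypothetical base port for one Port Type
print(pp_controller_port(BASE_PORT, 0))  # 14000
print(pp_controller_port(BASE_PORT, 3))  # 14003
```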
Table 29-5 defines the port ranges reserved for the remaining protocol processor processes,
such as sada, sarmt, sarouter, and sana. The number of ports used depends on the number of
networks assigned to each protocol processor and the number of processes running on each
blade.
Table 29-5. Protocol Processor Port Ranges
This chapter describes the iDirect Virtual Router Redundancy Protocol (VRRP) and Remote LAN
Port Monitoring features.
This chapter contains the following sections:
Virtual Router Redundancy Protocol (VRRP) on page 207 describes the VRRP feature and
how to configure it in iDirect networks.
Remote LAN Port Monitoring on page 216 describes the Remote LAN Port Monitoring
feature and how to configure it in iDirect networks.
Monitoring VRRP Status and Remote LAN Status on page 217 describes how to monitor
VRRP and remote LAN port status in iDirect networks.
VRRP Overview
VRRP is an IP protocol that allows two or more physical routers on a subnetwork to be grouped
into a single Virtual Router. At any time, only one router in the VRRP group (called the
Master router) is responsible for routing IP traffic. The remaining routers, called Backup
routers, do not forward IP traffic.
VRRP provides router redundancy by dynamically electing a Backup router to serve as the
Master router in the case that the Master router fails. This increases the availability of the
default gateway for hosts on an IP subnetwork and therefore improves the reliability of
routing paths.
Each physical router in a VRRP group has a priority between 1 and 255. (The priority 0 is
reserved to allow the Master router to release responsibility for the Virtual Router.) The
routers use VRRP to elect the router with the highest priority as the Master router for the
group. If two or more routers have the same priority and that priority is the highest, then of
those routers, the one with the highest IP address is considered the highest-priority router.
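The election rule above (highest priority wins; the highest IP address breaks ties) can be sketched as follows. This is an illustration of the selection rule only, not a VRRP implementation:

```python
# Sketch of the VRRP Master election rule described above: the router
# with the highest priority wins, and among routers sharing the highest
# priority, the one with the highest IP address is elected.
import ipaddress

def elect_master(routers):
    """routers: list of (priority, ip_string); returns the winning IP."""
    return max(routers, key=lambda r: (r[0], ipaddress.ip_address(r[1])))[1]

group = [(100, "10.15.1.5"), (100, "10.15.1.7"), (90, "10.15.1.9")]
print(elect_master(group))  # 10.15.1.7 -- tie on priority, higher IP wins
```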
In VRRP, a virtual IP address is assigned to the Virtual Router. One Virtual Router can be
defined per VLAN. If the VRRP virtual IP address is the same as the IP address of the VLAN of
one of the routers in the VRRP group, then the router with the IP address that is the same as
the virtual IP address is automatically given a priority of 255. Therefore, the router that
owns the Virtual IP address of the Virtual Router is the default VRRP Master for that VRRP
group.
The routers in a VRRP group communicate using a predefined multicast address. The Master
router sends periodic VRRP Advertisement messages every Advertisement Interval. The
Advertisement Interval defaults to one second but can be configured. If the Backup routers do
not receive any Advertisements from the Master for three Advertisement Intervals, then the
Backup routers elect a new Master.
Unless an election is in progress, Backup routers do not typically send VRRP messages.
However, if a Backup router is configured with a higher priority than the current Master and
preemption is enabled on the Backup router, then the Backup router uses the VRRP protocol
to preempt the Master.
The priorities of the routers in a VRRP group may change dynamically. Therefore, the current
priority of a router is not always identical to the configured priority. For example, if the
virtual IP address assigned to the Virtual Router is identical to the IP address of the router's
LAN interface, then the actual priority of that router is set to 255 rather than to the
configured value. In addition, if Route Tracking is enabled for a destination address and that
address becomes unreachable, the priority of the Master router is reduced by a pre-defined
value. (Route Tracking is a VRRP extension discussed in VRRP Route Tracking on page 213.)
Any user-defined VLAN interface configured on an Evolution X5 or Evolution X7 remote can be
included in a VRRP group. Please note the following conditions:
Any VLAN can be included in only one VRRP group.
The default VLAN (VLAN 1) cannot be included in a VRRP group.
iDirect supports a maximum of seven VRRP groups per remote on separate VLANs.
Since iDirect does not support multiple remotes on a single subnet, two iDirect remotes
cannot be included in the same VRRP group.
NOTE: If the LAN cable is disconnected from an Evolution X7 remote and LAN
Port Monitoring is not enabled, the remote enters the Master state and could
incorrectly remain in that state.
Figure 30-1 shows an example of a Virtual Router consisting of an iDirect remote and a router.
Figure 30-1. VRRP Group Example. One Virtual Router per VLAN (VRID 1; Virtual IP
10.15.1.1; Virtual MAC 00-00-5E-00-01-{VRID}). The terrestrial router (10.15.1.1) is the
Master with priority 255; the iDirect remote (10.15.1.5) is the Backup with priority 100.
Hosts 1 and 2 use 10.15.1.1 as their default gateway.
In Figure 30-1, an iDirect remote teams with a router (e.g. a Cisco router) to form a VRRP
group on the remote LAN. There is a terrestrial path (via the router) and a satellite path (via
the remote modem) to the remote site. In the figure, the Virtual Router has been assigned a
Virtual IP address of 10.15.1.1. The hosts in the remote routing domain are configured to use
10.15.1.1 as their default gateway.
Since the Virtual IP address of the Virtual Router is identical to the terrestrial router's IP
address, that router automatically becomes the default Master with a priority of 255. Because
the remote has a lower VRRP priority (100 by default), it acts as the Backup router in this
VRRP group as long as the Master router is available. As a Backup router, the remote does not
process IP traffic. Therefore, when both the terrestrial router and the remote are available,
IP traffic is routed over the terrestrial link.
NOTE: The Virtual MAC address shown in Figure 30-1 is automatically set to the
MAC address defined by the VRRP specification. VRRP uses the Virtual Router ID as
the final eight bits of a pre-determined Virtual MAC address. Identical Virtual
Router IDs result in identical Virtual MAC addresses. Do not use identical Virtual
Router IDs in situations that may cause addressing conflicts.
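The Virtual MAC construction described in the note can be computed directly. This is a simple illustration of the 00-00-5E-00-01-{VRID} format, in which the Virtual Router ID supplies the final eight bits:

```python
# Sketch of the VRRP Virtual MAC address format described above:
# the Virtual Router ID supplies the final eight bits of the address.

def virtual_mac(vrid):
    """Build the Virtual MAC for a VRRP group from its Virtual Router ID."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return f"00-00-5E-00-01-{vrid:02X}"

print(virtual_mac(1))   # 00-00-5E-00-01-01
print(virtual_mac(20))  # 00-00-5E-00-01-14
```

As the note warns, two groups configured with the same VRID produce the same Virtual MAC.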
As long as the terrestrial router is on line it acts as the Master router, periodically sending
VRRP Advertisements to the VRRP multicast address. In its role as Backup router, the remote
listens to the VRRP multicast address and therefore receives the VRRP Advertisements sent by
the Master router. If the terrestrial router fails and the remote stops receiving VRRP
Advertisements, the remote elects itself as Master (since there are no other routers in the
VRRP group) and begins processing IP traffic on the remote LAN for transmission over the
satellite link. The remote also informs the protocol processor that it is available to route IP
traffic so that protocol processor can update its routing tables to enable the satellite path to
the remote.
When the terrestrial router comes back on line, it preempts the remote, taking back the role
of Master. The remote returns to the Backup role, stops routing IP traffic, and informs the
protocol processor so that the satellite path can be removed from hub routing tables.
Note that if an iDirect remote is acting as the Master router for a VRRP group and the satellite
link is lost, the remote stops sending VRRP Advertisements. Therefore, a Backup router will
be elected as Master if one is available. When the satellite link is restored, the remote
resumes its role as Master router for the group, provided it has the highest VRRP priority and
preemption is enabled on the remote.
NOTE: Ensure that RIPv2 is enabled in the Protocol Processor VLAN configuration.
NOTE: The remote's actual priority may differ from the configured priority if
Route Tracking is enabled. See VRRP Route Tracking on page 213 for details.
In Figure 30-2, three VLANs are configured in different VRRP groups. Note that:
For VLAN 2 and VLAN 4, the configured VRRP priorities are set to 113 and 200,
respectively. Since no VRRP priority is configured for VLAN 3, the remote's configured
priority on that VLAN is 100. (However, in all cases, if the remote's ETH0 Interface IP
address for a VLAN is the same as the Virtual Router IP address, then the configured VRRP
priority is ignored and the priority is automatically set to 255.)
For VLAN 2 and VLAN 4, the Advertising Interval is set to 2 seconds (2000 ms) and 3
seconds (3000 ms), respectively. Since no Advertising Interval is set for VLAN 3, the
remote's Advertising Interval on that VLAN is one second by default.
For VLAN 4, the default Preempt Mode setting of 1 (True) has been changed to 0 (False).
Therefore, even with a higher priority, the remote, when acting as Backup router, does not
preempt an existing Master router. (However, the remote will preempt if the remote's
ETH0 Interface IP address for VLAN 4 is set to the Virtual Router IP address of
160.100.4.1.)
For VLAN 4, the default Accept Mode setting of 0 (False) has been changed to 1 (True).
Therefore, in the Master state, the remote accepts packets addressed to 160.100.4.1
even if that is not the ETH0 Interface IP address for VLAN 4 configured on the remote.
When configuring another router for the same VRRP group, such as the terrestrial router in
Figure 30-1 on page 209, ensure that the following settings are identical to those configured
for the remote VLAN:
VLAN ID
Virtual Router Identifier
Virtual Router IP address
Advertisement Interval
The vrrp_enabled custom key in this group only enables or disables route
tracking on this VLAN. It does not affect the VRRP configuration.
<Route List> is a comma-separated list of the IDs of the routes to track on this
VLAN.
The following remote hub-side custom keys configure route tracking on VLAN 2 for two
upstream routes. The protocol processor instructs the remote to reduce its VRRP priority on
VLAN 2 by 50 if route 1 is unavailable and by 40 if route 2 is unavailable. If both routes are
unavailable, the VRRP priority for VLAN 2 is reduced by 50 (the maximum value of all
configured deductions).
[ETH0_02]
vrrp_enabled=1
vrrp_objtrack_ids=1,2
[OBJTRACK_1]
priority_deduct=50
obj_type=1
dest_addr=10.160.0.0
subnet_mask=255.224.0.0
[OBJTRACK_2]
priority_deduct=40
obj_type=1
dest_addr=10.192.0.0
subnet_mask=255.224.0.0
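Given the custom keys above, the deduction rule described in the text (the remote's priority is reduced by the maximum priority_deduct value among the unavailable tracked routes) can be sketched as:

```python
# Sketch of the route-tracking priority deduction described above: the
# remote's VRRP priority is reduced by the maximum priority_deduct value
# among the tracked routes that are currently unavailable.

def effective_priority(configured, deductions, unavailable):
    """deductions: {route_id: priority_deduct}; unavailable: set of route IDs."""
    down = [deductions[r] for r in unavailable if r in deductions]
    return configured - (max(down) if down else 0)

deductions = {1: 50, 2: 40}            # from OBJTRACK_1 and OBJTRACK_2 above
print(effective_priority(100, deductions, {1}))     # 50
print(effective_priority(100, deductions, {2}))     # 60
print(effective_priority(100, deductions, {1, 2}))  # 50 (max of 50 and 40)
```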
Whenever the availability of a tracked route changes, the protocol processor sends a Router
Priority message to each remote that is tracking that route. This message contains the
maximum priority deductions for each of the remote's VRRP groups configured to track
routes. If the availability of tracked routes does not change, the protocol processor sends this
message periodically to the remotes to ensure that the remotes have the correct priority
deductions.
By default, the protocol processor sends the Router Priority message every two minutes to
each remote with routes to track. If a remote does not receive this message for five minutes,
the remote restores any decremented VRRP priorities to their configured values. Both the
time between messages and the timeout that the remote waits to receive the message can be
changed using custom keys.
To change the frequency with which the protocol processor sends the Router Priority message
to the remotes, add the following custom key on the Custom tab of the protocol processor in
iBuilder:
[STACK_MGR]
route_tracking_period_sec = <Message Interval>
where Message Interval is the frequency (in seconds) with which the protocol
processor sends the Router Priority message to all remotes.
The protocol processor custom key in Figure 30-3 changes the frequency with which the
protocol processor sends the Router Priority messages to the remotes to one minute.
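For example, the one-minute interval described for Figure 30-3 corresponds to the following custom key value (shown here as an illustration; verify against the figure in the full document):

[STACK_MGR]
route_tracking_period_sec = 60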
To change the length of time a remote waits to receive a Router Priority message before
restoring any reduced VRRP priorities to their configured priorities, add the following hub-
side custom key on the Custom tab of the Remote in iBuilder:
[RMT_ROUTE_TRACKING]
msg_timeout = <Timeout>
where Timeout is the interval the remote waits (in tenths of seconds) to receive its next
Router Priority message before restoring all VRRP priorities to their configured values.
NOTE: If all tracked routes are deleted from the Remote Custom tab and
priorities have been reduced on the remote due to route tracking, the
remote will not restore the configured priorities until this timeout expires.
The remote hub-side custom key in Figure 30-4 changes the timeout that the remote waits for
the next Router Priority message before restoring the configured priorities. In the figure, the
timeout is set to three minutes.
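Because the timeout is expressed in tenths of seconds, a three-minute timeout as described for Figure 30-4 corresponds to the following value (shown here as an illustration; verify against the figure in the full document):

[RMT_ROUTE_TRACKING]
msg_timeout = 1800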
Table 30-1. PP Routing Table Operations: LAN Port Monitoring and VRRP
To add LAN Port Monitoring on a remote VLAN, configure the following hub-side custom key
group on the Remote Custom tab in iBuilder:
[RMT_LAN_PORT_MONITOR_<N>]
vid = <VLAN_ID>
port_mask = <Port Mask>
where: <N> is an integer from 0 to 7 unique to this custom key group for this VLAN.
<VLAN_ID> is the VLAN for which LAN port monitoring is enabled.
<Port Mask> is a bit mask indicating the remote LAN ports to which this VLAN is
assigned.
Since Evolution X5 remotes have a single LAN port, port_mask should always be set to 1 for
an X5 remote.
Evolution X7 remotes have eight LAN ports. For an X7, configure the port_mask such that all
bits representing ports to be monitored for this vid are set to one. Port 1 is the least
significant bit and port 8 is the most significant bit in the eight-bit port mask.
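The mask encoding described above can be sketched as follows. The helper function is hypothetical, added only to illustrate the bit layout.

```python
# Hypothetical helper: build the eight-bit port_mask value for an X7 from a
# list of LAN port numbers. Port 1 is the least significant bit and port 8
# is the most significant bit.
def port_mask(ports):
    mask = 0
    for p in ports:
        if not 1 <= p <= 8:
            raise ValueError("X7 LAN ports are numbered 1 through 8")
        mask |= 1 << (p - 1)
    return mask

print(port_mask([7, 8]))   # 192 (binary 11000000), ports 7 and 8
print(port_mask([1]))      # 1, the single-port X5 case
```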
The example in Figure 30-5 enables LAN port monitoring for VLAN 2 on ports 7 and 8 for an
Evolution X7 remote by configuring a hub-side custom key group on the Remote Custom tab.
Notice that port_mask is set to 128 + 64 = 192 (hexadecimal 0xC0, binary 11000000) to
enable ports 8 and 7, respectively.
Figure 30-5. Example of LAN Port Monitoring Configuration for Multiple Ports in iBuilder
The example in Figure 30-6 enables LAN port monitoring for VLAN 3 and VLAN 17 on port 1 by
configuring two hub-side custom key groups on the Remote Custom tab.
Figure 30-6. Example LAN Port Monitoring Configuration for Single Port in iBuilder
You can enable LAN Port Monitoring for up to eight VLANs per remote using groups
RMT_LAN_PORT_MONITOR_0 through RMT_LAN_PORT_MONITOR_7.
2. In the Select dialog box, select the check boxes of the remotes that you want to monitor.
3. Click OK to view the Remote Events.
Figure 30-7 shows both LAN Status events and VRRP Status events as viewed in iMonitor. LAN
status events are displayed whenever the LAN status changes for a VLAN enabled for Remote
LAN Monitoring. VRRP Status events are displayed when either the VRRP role (Master or
Backup) or the VRRP priority changes for a VLAN enabled for VRRP.
If the State is Initializing, the remote is unable to join the VRRP group.
The vrrp <vrrp_id> params console command displays the current VRRP parameters for
the Virtual Router ID specified in the command. Sample output of the vrrp params
command for Virtual Router ID 2 is shown here:
vrrp 2 params
vrrp_enabled = 7
vrrp_state = Backup
vrrp_mac_addr = 00005E-000102
vrrp_ip_addr = 168.10.10.4
vrrp_configured_priority = 190
vrrp_priority_configuration_override = 190
vrrp_route_tracking_priority_decrement = 0
vrrp_running_priority = 190
vlan_id = 2
vrrp_adver_interval_ms = 1000
vrrp_master_down_interval_ms = 3257
vrrp_premption_enabled = 1
vrrp_accept_enabled = 0
The lan_port_monitor status console command displays the LAN Port Monitoring status
for all remote VLANs enabled for LAN Port Monitoring. Sample output of the
lan_port_monitor status command is shown here:
lan_port_monitor status
VLAN_ID STATUS
------- -------
2 DOWN
7 DOWN
Beginning with iDX Release 3.3.1, iDirect supports Layer 2 over Satellite (L2oS). iDX Release
3.3.3.1 introduced the Layer 2/Layer 3 Hybrid mode feature to allow Layer 2 and Layer 3
functionality in the same network. For more information, see the iBuilder User Guide. This
chapter provides a description of the L2oS feature as implemented in iDirect. It includes the
following major sections:
L2oS Overview on page 221
L2oS Benefits on page 222
L2oS Reference Model on page 223
Satellite Virtual Network and Service Delimiting Tags on page 224
L2oS Service Modes on page 227
Layer 2/Layer 3 Hybrid Mode on page 228
Advanced Header Compression on page 229
MAC Address Learning on page 231
Forwarding Rules on page 233
Bidirectional Forwarding Detection on page 234
L2oS Configuration on page 235
Monitoring L2oS Networks on page 245
L2oS Examples including Layer 2 and Layer 3 Hybrid Mode on page 247
L2oS Overview
Layer 2 over Satellite enables the satellite link to emulate an Ethernet connection. When an
iDirect network is configured for L2oS, Ethernet frames, rather than IP packets, are
transported across the satellite link. In terms of data networking, this is moving down the
stack from the traditional iDirect model of IP network layer connectivity over the satellite
(OSI Layer 3) to Ethernet link layer connectivity (OSI Layer 2).
Figure 31-1 shows iDirect Layer 2 connectivity between a Gateway Router at the hub and a
router attached to an iDirect remote modem.
L2oS Benefits
In many VSAT applications, Layer 2 connectivity between the hub and remotes is preferred
over the Layer 3 IP connections traditionally offered by iDirect. The benefits of using L2oS
include:
Layer 3 Transparency: Because an iDirect L2oS VSAT network behaves as a switched
Ethernet network, the satellite network is invisible to higher-layer protocols. Therefore,
any Layer 3 protocol, such as IPv6, OSPF, and BGP, is transported transparently across the
satellite network.
Simplified Operation: In a traditional iDirect Layer 3 network, services such as Virtual
Private Networks typically require allocation and configuration of a large number of IP
addresses on the iDirect protocol processor and remote satellite routers. With Layer 3
transparency, this IP address allocation is no longer required. The only VLAN IP addresses
required on an L2oS network are for the default VLAN used for iDirect network
management.
Efficient Physical Layer: As in traditional iDirect IP networks, L2oS networks carry traffic
over iDirect's highly optimized DVB-S2 outbound and TDMA inbound carriers. All the
advantages of features such as DVB-S2 ACM and TDMA 2D 16-State coding are preserved
when using L2oS.
Group QoS Support: L2oS supports all of the Group QoS features that exist for iDirect IP
networks. In addition, for L2oS networks, Layer 2 classifiers, such as source and
destination MAC address, have been added to the Service Level and Filter Rules in
iBuilder. IPV6 source and destination IP address classifiers have also been added.
[Figure: L2oS reference model (PWE3). Customer Edge 1 connects through an attachment
circuit to Provider Edge 1; a PSN tunnel carrying pseudowire PW1 links Provider Edge 1 to
Provider Edge 2; Customer Edge 2 attaches at the far end. The emulated service runs end to
end between the customer edges.]
Frame with a VLAN SDT: MAC-dest (6B) | MAC-src (6B) | 0x8100 (2B) | VID (2B) |
Ethertype (2B) | Payload
Frame with a QinQ SDT (outer tag): MAC-dest (6B) | MAC-src (6B) | 0x88a8 or 0x9100 (2B) |
O-VID (2B) | Ethertype (2B) | Payload
When QinQ tags are used, the SVN is identified by both the outer tag and a second (inner) tag
in the Ethernet frame. The concatenation of the two tags determines the SVN. The SVN
number is OVID * 65536 + IVID (Figure 31-5). Any additional tags are passed through
transparently.
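The SVN numbering rule above can be sketched directly from the formula. The helper function is hypothetical, added only to make the arithmetic concrete.

```python
# Hypothetical helper: compute the SVN number for a QinQ-tagged frame from
# its outer (O-VID) and inner (I-VID) tags, per SVN = OVID * 65536 + IVID.
def qinq_svn(ovid, ivid):
    return ovid * 65536 + ivid

print(qinq_svn(10, 20))   # 10 * 65536 + 20 = 655380
```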
[Figure 31-5: QinQ SDT frame format. MAC-dest (6B) | MAC-src (6B) | 0x88a8 or 0x9100 (2B) |
O-VID (2B) | 0x8100 (2B) | I-VID (2B) | Ethertype (2B) | Payload]
SDTs are always used on the hub side. It is the responsibility of the Gateway CE to add the
appropriate SDT for downstream traffic; it is the responsibility of the iDirect hub to add the
appropriate SDT for upstream traffic. The hub SDT Mode is a global setting for either VLAN or
QinQ. The hub and its associated gateway CE should be configured in the same mode.
SDTs are optional on the remote side. When used, it is the responsibility of the CE attached to
the remote modem to add the appropriate SDT for upstream traffic; it is the responsibility of
the remote to add the appropriate SDT for downstream traffic.
SDTs can be omitted on the remote side by configuring a remote LAN port as an Access Port
and associating the port with a single SVN. All Ethernet frames received on the port are
transmitted on the configured SVN. The hub tags the frame based on the SVN ID and hub SDT
Mode before sending the frame to the hub CE.
The SDT of an SVN used on the hub side can be rewritten at the remote on its LAN (local)
side, and the two sides can even use different SDT types. For example, it is common to use
QinQ SDTs on the hub side that are rewritten to VLAN-type SDTs or an access port on the
remote side. This SDT Local Rewrite feature provides additional flexibility for VLAN index
management and ultimately simplifies deployment of new networks and sites.
Untagged frame (no SDT): MAC-dest (6B) | MAC-src (6B) | Ethertype (2B) | Payload
NOTE: The SDT format used on the hub side is not required to be the same as the
SDT format used on the remote side.
Table 31-1 displays the Service Delimiting Tag (SDT) support matrix.
TCP Acceleration
TCP acceleration can be enabled for IPv4 TCP packets on the Service Level screen in iBuilder.
TCP acceleration only occurs if the protocol stack in the Ethernet frame is one of the
following:
Ethernet, IPv4, TCP
Ethernet, VLAN, IPv4, TCP
Ethernet, QinQ, IPv4, TCP
Tunneled IPv4 TCP packets or IPv4 TCP packets nested in the payload are not accelerated.
IPv6 TCP packets are never accelerated.
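The eligibility rule above can be sketched as an exact match against the three accelerated protocol stacks. This is a hypothetical illustration of the rule, not the actual classifier implementation.

```python
# Hypothetical sketch of the TCP acceleration eligibility rule: the frame's
# protocol stack, outermost header first, must exactly match one of the
# three accelerated stacks. Tunneled IPv4 TCP and IPv6 TCP never qualify.
ACCELERATED = {
    ("ethernet", "ipv4", "tcp"),
    ("ethernet", "vlan", "ipv4", "tcp"),
    ("ethernet", "qinq", "ipv4", "tcp"),
}

def is_accelerated(stack):
    return tuple(stack) in ACCELERATED

print(is_accelerated(["ethernet", "vlan", "ipv4", "tcp"]))          # True
print(is_accelerated(["ethernet", "ipv6", "tcp"]))                  # False (IPv6)
print(is_accelerated(["ethernet", "ipv4", "gre", "ipv4", "tcp"]))   # False (tunneled)
```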
SVN Scalability
The number of SVNs on an X1 remote is limited to four, and on an X7 it is limited to eight,
regardless of whether the SVNs are Layer 2 or Layer 3.
The number of Layer 3 SVNs supported over any single iDirect network depends on the
number of active remotes in the network and the number of protocol processor blades, not
on the number of Layer 2 SVNs. As the number of remotes increases, the number of Layer 3
SVNs that can be supported decreases, even though the number of protocol processor blades
increases with the number of remotes. Table 31-2 shows the relationship between some
typical network sizes and the number of Layer 3 SVNs supported:
Table 31-2. Maximum Number of Layer 3 SVNs Per Network (Relative to Network Size)
The number of Layer 2 SVNs supported over any single iDirect network is 500 and is
independent of the number of remotes or Layer 3 SVNs in the network. The number of
protocol processor blades required is subject to the design guidelines governed by the number
of remotes.
Routers and switches in the network, including the Upstream and Tunnel Switches, have their
own limits on the number of SVNs supported. Check the limits of these devices to ensure
that they can support the required number of SVNs. The Cisco 2960G switch currently
shipped with iDirect Evolution systems supports up to 255 SVNs. Using this switch sets an
upper limit of 255 SVNs per network, whether Layer 2 or Layer 3.
VPLS is required for remotes to communicate with each other directly, rather than through a
router. In VPLS mode, the hub replicates upstream data to the downstream as required for
remote-to-remote traffic. This replication process is referred to as "hairpinning." For
example, VPLS is appropriate if all Layer 3 devices at all remotes are in the same IP subnet.
On a traditional LAN segment, each device receives all Ethernet frames transmitted by all
other devices. A VPLS emulated LAN in an iDirect satellite network behaves differently:
To conserve satellite bandwidth, Ethernet frames received by the hub on the upstream
are replicated on the downstream only if necessary. (For more information, see
Forwarding Rules on page 233.)
To improve performance, a downstream unicast frame is directed only to the remote
that requires it whenever possible. This is done based on MAC address learning.
When transported in the same network, Layer 2 and Layer 3 traffic is separated into
individual Satellite Virtual Networks (SVNs); see Satellite Virtual Network and Service
Delimiting Tags on page 224. In other words, each SVN is marked as either Layer 2 or Layer 3
during network configuration.
The most common use cases for this feature include:
Introducing Layer 2 in an Existing Layer 3 Network: In this use case, a network operator
currently running a Layer 3 iDirect network can introduce a new Layer 2 connection to new
sites, leveraging its protocol transparency and other benefits, without having to set up a
brand new network.
Running Persistent Multicast along with Layer 2: In this use case, a network operator can
take advantage of the Layer 2 network architecture while providing multicast services, such
as video distribution, using the tried-and-tested persistent multicast feature in Layer 3.
When considering the use of Layer 2/Layer 3 Hybrid Mode, there is a restriction of only one
SDT per LAN port; that is, a LAN port cannot simultaneously support both QinQ traffic in
Layer 2 and VLAN traffic in Layer 3.
For IP/GRE and IP-in-IP, the headers inside the tunnel wrapper are also compressed
according to the profile(s) chosen. There is also an Aggressive setting; when this setting is
enabled, the padding bytes in an Ethernet frame are removed before transmission.
The following tables show the comparison between uncompressed headers and RoHCv2-
compressed headers for various protocol profiles based on IPv4 and IPv6. The actual
compression efficiency is slightly lower than shown in the tables because of the periodic
Initialization and Refresh (IR) frames that are used to set up the initial compression context
and re-synchronize the information. The IR frames are necessarily less efficient.
Table 31-3 and Table 31-4 show the IPv4 header compression.
NOTE: On IPv4, when the UDP checksum is not enabled, the RoHCv2 effective header
sizes for all UDP-related compression decrease by two bytes. The results in
Table 31-3 and Table 31-4 are obtained with the UDP checksum enabled.
Table 31-5 and Table 31-6 show the IPv6 header compression.
Forwarding Rules
This section contains the rules for forwarding Ethernet frames received by the iDirect hub
equipment and remote modems. iDirect management traffic and Ethernet control frames are
not discussed here. Refer to Figure 31-7.
1. When Ethernet frames are received by the hub from the Gateway CE:
The hub discards frames that do not map to any known SVN.
The hub sends over-the-air all unicast frames that map to a known SVN and
destination MAC address.
The hub broadcasts over-the-air all multicast and broadcast frames that map to a
known SVN.
2. When Ethernet frames are received by the remote from the Remote CE:
The remote discards all frames with a local destination MAC Address.
If configured as an Access Port:
The remote sends all frames over-the-air on the configured SVN.
If the remote is configured for VLAN tagging or QinQ tagging:
The remote sends over-the-air all frames that map to a configured SVN.
If SDT Local Rewrite is configured for a particular SVN, the remote sends the
frame that matches the local SDT over-the-air after the local SDT has been
replaced by the configured global SDT.
3. When Ethernet frames are received over-the-air by the hub from a remote:
If the service mode of the SVN is VPWS:
The hub directs the frame back to its local LAN and lets the Gateway CE decide
whether or not to route the frame back for downstream transmission. (This requires
coordination with the customer switch configuration; for example, protected ports.)
If the service mode of the SVN is VPWSEPC:
The hub rewrites the destination MAC address with the configured MAC address
and redirects it to the local LAN.
If the service mode of the SVN is VPLS:
If a frame is a unicast frame:
If a frame maps to a known remote MAC Address in the SVN, the hub resends
the frame over-the-air to the destination remote.
If a frame is addressed to an unknown MAC address, the hub directs the frame
to the local LAN.
If a frame is a multicast or broadcast frame:
The hub directs the frame to the local LAN.
The hub retransmits the frame over-the-air to all remotes in the SVN.
4. When Ethernet frames are received over-the-air by a remote from the hub:
The remote discards all frames that do not match any configured SVN.
The remote discards all frames with a local source MAC Address.
The remote forwards all remaining frames to the local LAN.
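Rule 4 above (the remote's handling of frames received over-the-air) can be sketched as follows. The function and data model are hypothetical, introduced only to make the rule explicit.

```python
# Hypothetical sketch of forwarding rule 4: how a remote handles an
# Ethernet frame received over-the-air from the hub.
def remote_downstream_action(frame_svn, src_mac, configured_svns, local_macs):
    if frame_svn not in configured_svns:
        return "discard"              # no matching configured SVN
    if src_mac in local_macs:
        return "discard"              # frame originated on this remote's own LAN
    return "forward_to_local_lan"     # all remaining frames go to the local LAN

print(remote_downstream_action(10, "aa:bb", {10, 20}, {"cc:dd"}))  # forward_to_local_lan
print(remote_downstream_action(30, "aa:bb", {10, 20}, {"cc:dd"}))  # discard
```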
For more information on the BFD proxy, see the draft RFC BFD Proxy for Connections over
Monitored Links.
These trade-offs are summarized in Table 31-7.
Table 31-7. BFD Trade-Offs
No BFD: requires fewer system resources, and BFD may not be available in all equipment;
however, link failure detection (Layer 3) is slow for all segments of the CE-to-CE link.
L2oS Configuration
This section discusses the L2oS parameters that can be configured in iBuilder. For more
information on configuring iDirect networks, see the iBuilder User Guide.
The SDT parameters determine if VLAN tagging or QinQ tagging is used on the hub connection
to the CE. The SDT parameters are:
Mode: VLAN (default) or QinQ.
Ethertype 1: Defaults: 0x8100 for VLAN Mode; 0x9100 for QinQ Mode.
Ethertype 2: Default: 0x8100. Applies only to QinQ Mode.
Reserved outer SVN ID for L3: Select this checkbox and provide SVN ID which is reserved
for L3. Enabled only when SDT mode is in QinQ mode.
The L2SW Default parameters determine the default L2oS service mode. This service mode is
used for any SVN that does not have an optional override defined. The L2SW Default
parameters are:
Mode: Selections are VPLS (default), VPWSEPC, or VPWS.
MAC Address Rewrite: VPWSEPC Mode only. The MAC address to which the hub sends all
upstream Ethernet frames for all SVNs, unless an Optional Override applies. The default
is 00:00:5E:00:52:13.
The following fields are used to configure each SVN in the network:
SVN Id: The identifier of the Satellite Virtual Network. When SDT Mode is set to VLAN,
this is the VLAN ID. When the SDT Mode is set to QinQ, a two-part SVN Id is used that
combines the outer and inner VLAN tags.
Name: A name for the SVN.
Type: Layer 2 or Layer 3, which distinguishes Layer 2 SVNs from Layer 3 SVNs.
Forward Snap/LLC: Select Forward SNAP/LLC Frames Enabled to allow Logical Link
Control (LLC) and Subnetwork Access Protocol (SNAP) frames to be transported over the
satellite link in Layer 2 networks. By default, these frames are blocked.
L2SW Overrides: The L2SW Default parameters determine the default L2oS service
mode. This service mode is used for any SVN that does not have an Optional Override
defined.
The SDT parameters determine if VLAN tagging, QinQ tagging, or no tagging is used on the
remote connection(s) to the CE. The SDT parameters are:
Mode: VLAN (default), QinQ, or Access
Ethertype 1: Defaults: 0x8100 for VLAN Mode; 0x9100 for QinQ Mode. This parameter
does not apply if Mode is set to Access.
Ethertype 2: Default: 0x8100. Applies only to QinQ Mode.
The SVN section of the remote L2oS tab defines the set of SVNs over which Layer 2 traffic
flows over-the-air to and from this remote. The selection of SVNs is limited to the Layer 2
SVNs already configured on the PP. The remote only accepts downstream traffic received on
matching SVN IDs. If Mode is VLAN or QinQ, the tag or tags for Ethernet frames received on
the local LAN must match a configured SVN Id for the frame to be transmitted on the
upstream carrier. When a Local ID is configured for an SVN, the tag on the local LAN side is
remapped to this ID. Only ingress Ethernet frames matching the Local ID, not the SVN ID,
are transmitted on the upstream carrier.
NOTE: The SDT from the remote is not completely linked to the global SVN ID; it
depends on the Local ID when the SDT from the remote differs from the SDT from
the hub. The SDT Mode applies to the Local ID, not to the SVN ID.
NOTE: QinQ cannot be enabled per LAN port on an X7 remote. If QinQ SDT
Mode is selected for an X7, all Ethernet frames on all ports must be double-
tagged.
For Access Mode, only one SVN Id can be enabled. All incoming Ethernet frames are
sent over the enabled SVN.
NOTE: If the SDT Mode of an X7 is set to Access, all LAN ports on the X7 are
associated with a single SVN. To configure individual ports on the X7 as access
ports, see Assigning VLAN IDs to X7 LAN Ports on page 241.
Local Id: An optional one-part or two-part ID that remaps the VLAN ID(s) on the
remote LAN to a different SVN ID.
NOTE: A Local Id can be defined to map a VLAN ID used on the remote LAN to
a two-part SVN ID. This can be useful for services requiring QinQ Mode at the
hub and VLAN Mode at the remote.
Enabled: A check box to enable or disable this SVN. An SVN must be enabled to be
used. No traffic is transmitted on a disabled SVN. A maximum of eight SVNs can be
enabled per remote.
Header Compression:
Compression: Disable, Simple, or Advanced
Disable: No compression of any headers
The set of advanced header compression profiles that can be enabled together is represented
by the level of indentation in the Profiles section of the remote L2oS tab. iBuilder enforces
the following rules when selecting advanced compression options:
IP header compression must be selected in order to select any additional profiles.
UDP header compression must be selected to select RTP or GTP header compression.
IP header compression must be selected in order to select Aggressive Compression.
The headers that are actually compressed in any Ethernet frame depend on the profiles
selected on the GUI and the rules governing the header compression software. The header
compression software behaves as follows:
Headers are compressed sequentially within the Ethernet frame. Once the compression
software encounters any header that is not enabled for compression, no additional
headers are compressed.
If IP-in-IP or GRE is not selected, headers within IP or GRE tunnels are not compressed,
even if the profiles for those headers are selected.
Selecting GTP compresses the GTP header, as well as headers for other enabled protocols
within the GTP tunnel.
Selecting IP-in-IP compresses tunneled IP headers, as well as headers for other enabled
protocols within the IP tunnel.
Selecting GRE compresses GRE headers, as well as headers for other enabled protocols
within the GRE tunnel.
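The sequential rule described in the first bullet above can be sketched as follows. The function is hypothetical, added only to illustrate the stop-at-first-disabled-header behavior.

```python
# Hypothetical sketch of the sequential compression rule: headers are
# compressed in order within the frame, and compression stops at the first
# header whose profile is not enabled; everything after it stays uncompressed.
def compressed_headers(header_chain, enabled_profiles):
    out = []
    for h in header_chain:
        if h not in enabled_profiles:
            break
        out.append(h)
    return out

# IP and UDP enabled, RTP not: only IP and UDP are compressed.
print(compressed_headers(["ip", "udp", "rtp"], {"ip", "udp"}))        # ['ip', 'udp']
# GRE profile not enabled: compression stops at the GRE header.
print(compressed_headers(["ip", "gre", "ip", "udp"], {"ip", "udp"}))  # ['ip']
```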
NOTE: The IP and TCP headers in TCP packets cannot be compressed. However,
TCP acceleration greatly reduces the size of the IPv4 TCP headers transmitted
over the satellite link.
The examples in Table 31-8 illustrate which headers will be compressed depending on the
headers in the Ethernet frame and the compression profiles selected on the GUI.
When Aggressive Compression is enabled, the compression software detects and discards
Ethernet frame padding if the frame size is less than the minimum Ethernet frame size of 64
bytes. This option also normalizes ambiguous IP checksums as discussed in RFC 1141 and
RFC 1624. This results in a decompressed frame that is semantically equivalent, but not
identical, to the original frame.
NOTE: If Aggressive Compression and UDP header compression are both enabled,
padding is discarded from Ethernet frames smaller than 64 bytes, but the UDP
headers in those frames are not compressed.
NOTE: If the SDT Mode of an X7 is set to QinQ, all switch ports on the X7 should be
configured as Trunk (All VLANs).
The processing of Ethernet frames received by the internal X7 switch from the remote
software or from the local LAN depends on the configuration of the switch. There are three
ways to configure an X7 port on the iBuilder Remote Switch tab, illustrated in Figure 31-12.
Trunk: When All VLANs are assigned to a port (Port 1 in Figure 31-12), the switch
forwards all Ethernet frames received from the remote software to the local LAN on the
port. The switch accepts all frames received on the LAN port and forwards them to the
remote software, regardless of VLAN ID. Ethernet frames on user-defined VLANs are
tagged. Ethernet frames on the Default VLAN are not tagged.
Trunk with VLAN Range: When multiple VLAN IDs are assigned to a port (Port 2 in
Figure 31-12), the switch only forwards Ethernet frames received from the remote
software on that port if the VLAN ID in the Ethernet frame matches a VLAN ID assigned to
the port. Similarly, the switch only accepts incoming frames received on the LAN port if
the VLAN ID in the frame matches one of the VLAN IDs assigned to the port.
Access Port: When a single VLAN ID is assigned to a LAN port (Port 3 in Figure 31-12) and
an Ethernet frame with a matching VLAN tag is received from the remote software, the
switch removes the VLAN tag and forwards the untagged frame to the LAN port. The
switch accepts untagged Ethernet frames received on the LAN port, tags them with the
VLAN ID assigned to the port, and forwards them to the remote software.
Both Layer 2 and Layer 3 SVNs can be assigned to each port. The selection of SVNs is limited
to SVNs already assigned to the remote in the L2oS tab. Choose the available SVNs by SVN IDs
or by names. Once entered for one port, a VLAN ID can be selected from the drop-down menu
for other ports (Figure 31-13).
NOTE: VLAN IDs between 2 and 4094 can be configured in the VLAN Assignment
dialog box. VLAN ID 1 is reserved for the Default VLAN. In Layer 2 networks, the
Default VLAN is used for internal Layer 3 management traffic only. It cannot be
used to transport Layer 2 user traffic.
For more details on configuring the X7 eight port switch, see the iBuilder User Guide.
Effective Ethernet Type: The innermost Ethertype in the Ethernet frame. This is
applicable if there are multiple Ethertypes after the SDT.
VLAN PCP: The PCP field of the VLAN tag after the SDT headers are removed.
In Layer 2 networks, the Layer 3 classifiers in Figure 31-14 also work, provided the incoming
Ethernet frame is IPv4. In addition, IPV6 packets can be classified based on source and
destination IP address by selecting Source IPV6 or Destination IPV6 in the IP Address fields.
This is shown in Figure 31-15.
For details on configuring Rules for Filter Profiles or Application Profiles, including definitions
of the Layer 2 classifiers, see the iBuilder User Guide.
NOTE: Perform the following procedure only when changing the hub side VLAN
architecture to QinQ.
In an L2oS network with a hub configured in QinQ SDT Mode, change the PP eth0 interface
MTU from 1504 to 1508 to support frames with QinQ tags.
To change the MTU, perform the following procedure:
1. Log into the Protocol Processor.
2. Go to the network-scripts directory.
cd /etc/sysconfig/network-scripts/
3. Use the vi editor to modify the ifcfg-eth0 file. Change the MTU parameter from 1504 to
1508.
4. Restart the network services for the changes to take effect.
service network restart
5. Enter the following command to restart the Protocol Processor services:
service idirect_hpb restart
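As a sketch of step 3, the edited ifcfg-eth0 file would contain an MTU line such as the following; the other lines in the file are installation-specific and are not shown here:

MTU=1508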
[Figure: L2oS VPWS access port example. The upstream Layer 2 switch at the hub carries an
802.1q trunk (VLAN 10, VLAN 20) to the hub system. Remote 1 connects a home/SOHO router
and PC through a Layer 2 access port on SVN 10; Remote 2 connects a PC through an access
port on SVN 20.]
Hub configuration:
Default Mode: VPWS
SDT Mode: VLAN
Remote 1 configuration:
SDT Mode: Access
SVN ID: 10
Remote 2 configuration:
SDT Mode: Access
SVN ID: 20
Hub forwarding:
Ethernet frames with VLAN ID 10 or VLAN ID 20 received from the upstream switch with a
known destination MAC address in that SVN are sent over-the-air.
Ethernet frames received over-the-air on SVN 10 or SVN 20 are forwarded to the
upstream switch. No remote-to-remote routing is performed by the iDirect hub.
Remote forwarding:
All Ethernet frames with non-local destination MAC addresses received by the remote on
the local LAN are sent over-the-air on SVN 10 (Remote 1) or SVN 20 (Remote 2).
All Ethernet frames received over-the-air by a remote on its configured SVN that have a
local destination MAC address are sent out on the remote LAN.
Figure 31-17 shows a VPWS connection between the hub and a remote site used to transport
IPv6 packets. Because Layer 3 protocols are transparent to the Layer 2 connection, IPv6
traffic can be transported over the satellite link. Note that IPv6 traffic is not accelerated
over the satellite link.
[Figure 31-17: VPWS connection over SVN 10. The upstream Layer 2 switch at the hub carries
an 802.1q trunk (VLAN 10) to the hub system; the remote carries VLAN 10, including IPv6
traffic, to multiple PCs.]
Hub configuration:
Default Mode: VPWS
SDT Mode: VLAN
Remote configuration:
SDT Mode: VLAN
SVN ID: 10
Hub forwarding:
Ethernet frames with VLAN ID 10 received from the upstream switch on an 802.1q VLAN
trunk with a known destination MAC address in SVN 10 are sent over-the-air.
Ethernet frames received over-the-air on SVN 10 are forwarded to the upstream switch.
Remote forwarding:
Ethernet frames with VLAN ID 10 and non-local destination MAC address received by the
remote on an 802.1q VLAN trunk on the local LAN are sent over-the-air on SVN 10.
Ethernet frames received over-the-air on SVN 10 by the remote with a local destination
MAC address are sent out on the remote LAN on an 802.1q VLAN trunk tagged with VLAN
10.
Figure 31-18 shows a VPWS connection between the hub and a single remote site with traffic
from multiple VLANs.
[Figure 31-18: VPWS connection carrying VLAN 10 and VLAN 20 on 802.1q trunks between
Layer 2 switches at the hub and the remote, over SVN 10 and SVN 20.]
Hub configuration:
Default Mode: VPWS
SDT Mode: VLAN
Remote configuration:
SDT Mode: VLAN
SVN ID 1: 10
SVN ID 2: 20
Hub forwarding:
Ethernet frames with VLAN ID 10 or VLAN ID 20 received from the upstream switch on an
802.1q VLAN trunk with a known destination MAC address in the corresponding SVN are
sent over-the-air.
Ethernet frames received over-the-air on SVN 10 and SVN 20 are forwarded to the
upstream switch on an 802.1q VLAN trunk.
Remote forwarding:
All Ethernet frames with VLAN ID 10 and VLAN ID 20 and a non-local destination MAC
address received by the remote on the local LAN are sent over-the-air on the
corresponding SVN.
All SVN 10 and SVN 20 traffic received over-the-air by the remote with a local destination
MAC address for that SVN are sent out on the remote LAN on an 802.1q VLAN trunk.
Figure 31-19 shows VPWS connections between the hub and two remote sites, each with a
single VLAN. Since the Default Mode is VPWS, the hub will not route traffic directly between
the two remotes. No hairpinning will occur within the iDirect network. If there are multiple
protocol processor blades at the hub, it is important to configure the upstream switch to
disallow inter-blade communications to prevent the switch from replicating upstream traffic
to the downstream before a Layer 3 decision can be made.
[Figure 31-19: VPWS connections from the upstream Layer 2 switch at the hub (802.1q trunk,
VLAN 10 and VLAN 20) to Remote 1 (SVN 10, VLAN 10) and Remote 2 (SVN 20, VLAN 20), each
with a Layer 2 switch and PC.]
Hub configuration:
Default Mode: VPWS
SDT Mode: VLAN
Remote 1 configuration:
SDT Mode: VLAN
SVN ID: 10
Remote 2 configuration:
SDT Mode: VLAN
SVN ID: 20
The hub and remote forwarding rules are similar to those described for the examples shown in
Figure 31-17 and Figure 31-18.
Figure 31-20 shows a VPLS connection between the hub and three remote sites, each using
VLAN 10. Because the hub mode is VPLS, the hub performs remote-to-remote routing of
upstream Ethernet frames received on SVN 10 that have a known destination MAC address at
another remote site on the same SVN. Multicast frames are handled as described in
Forwarding Rules on page 233.
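The difference between VPLS and VPWS hub forwarding can be sketched as a single decision on upstream frames. This is an illustrative sketch, not the protocol processor implementation; the MAC table contents and remote names are hypothetical.

```python
# Sketch contrasting hub forwarding of upstream frames in VPLS vs. VPWS mode.
# Hypothetical learned MAC table; not iDirect code.

MAC_TABLE = {  # destination MAC -> location learned by the hub (assumed)
    "02:00:00:00:00:0a": "remote-A",
    "02:00:00:00:00:0b": "remote-B",
}

def hub_forward(mode, ingress_remote, dst_mac):
    """Forward an upstream frame received over-the-air on an SVN."""
    dest = MAC_TABLE.get(dst_mac)
    if mode == "VPLS" and dest is not None and dest != ingress_remote:
        # VPLS: the hub itself routes remote-to-remote on the same SVN.
        return f"downstream to {dest}"
    # VPWS: the frame always goes to the upstream switch/router, which
    # makes all remote-to-remote decisions (no hairpinning at the hub).
    return "upstream switch"
```

In VPWS mode the hub behaves like a point-to-point wire per remote, while in VPLS mode it behaves like a learning bridge across all remotes on the SVN.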
[Figure 31-20 diagram: a router connects to the hub system on VLAN 10; the satellite network carries SVN 10 to three remote Layer 2 switches, each serving PCs on VLAN 10.]
Hub configuration:
Default Mode: VPLS
SDT Mode: VLAN
Remote configuration (all remotes):
SDT Mode: VLAN
SVN ID: 10
Figure 31-21 shows VPWS connections between the hub and two remote sites, each using
VLAN 10. Because the hub mode is VPWS, the hub does not perform remote-to-remote routing
of upstream Ethernet frames received on SVN 10. All remote-to-remote routing decisions are
made by the upstream router or switch.
[Figure 31-21 diagram: a router connects to the hub system on an 802.1q trunk carrying VLAN 10; the L2oS VPWS link carries SVN 10 over the satellite network to two remote Layer 2 switches, each serving PCs on VLAN 10.]
Hub configuration:
Default Mode: VPWS
SDT Mode: VLAN
Remote configuration (all remotes):
SDT Mode: VLAN
SVN ID: 10
Figure 31-22 shows an L2oS network configured to use VLAN SDT Mode at the remote site and
QinQ SDT Mode at the hub. The Local ID configured for the remote maps the VLAN IDs of
Ethernet frames on the local LAN to the two-part SVN required to support QinQ tags at the
hub.
[Figure 31-22 diagram: customer Layer 2 switches on VLAN 10, VLAN 20, and VLAN 30 feed the remote on an 802.1Q trunk; the remote maps these to SVN 100_10, SVN 100_20, and SVN 100_30 over the L2oS VPWS link; at the hub, the Layer 2 switch delivers a QinQ trunk with outer VLAN 100.]
Hub configuration:
Default Mode: VPWS
SDT Mode: QinQ
Remote configuration:
SDT Mode: VLAN
SVN 1:
SVN ID: 100_10
Local ID: 10
SVN 2:
SVN ID: 100_20
Local ID: 20
SVN 3:
SVN ID: 100_30
Local ID: 30
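The Local ID mapping in this configuration can be sketched as a simple lookup that turns a single 802.1q tag on the remote LAN into the two-part outer/inner pair carried at the hub. The `100_10` (outer_inner) notation follows the SVN IDs above; the function itself is an illustration, not iDirect code.

```python
# Sketch of the Local ID -> two-part SVN mapping shown in Figure 31-22.
# SVN ID strings use the outer_inner notation from the configuration;
# the tag-stacking detail is illustrative, not an iDirect internal.

LOCAL_ID_TO_SVN = {10: "100_10", 20: "100_20", 30: "100_30"}

def to_qinq(local_vlan_id):
    """Map a single 802.1q VLAN tag on the remote LAN to the
    (outer, inner) QinQ tag pair expected at the hub."""
    svn = LOCAL_ID_TO_SVN[local_vlan_id]
    outer, inner = (int(part) for part in svn.split("_"))
    return outer, inner
```

Note that all three local VLANs share the same outer tag (100), which is what allows the hub to present them on one QinQ trunk.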
Figure 31-23 shows an iDirect L2oS network and a terrestrial network using Border Gateway
Protocol (BGP) for redundancy. The Layer 3 BGP protocol is transparent to the iDirect Layer 2
network.
[Figure 31-23 diagram: service-provider MPLS terrestrial and core networks with PE routers; an upstream MPLS router (MPLS CE) connects to the hub system, which carries traffic over the satellite network via L2oS VPWS (802.1Q trunk, VLAN 10 / SVN 10) to the remote and a managed CPE on the LAN; a Layer 3 IPv4 BGP session runs end-to-end across the MPLS VPN and the Layer 2 path.]
The iBuilder configuration for Figure 31-23 is similar to the configuration described for
Figure 31-17.
Figure 31-24 shows an example of the MPLS network extension with a VRF-lite session over the
air.
[Figure 31-24 diagram: EoMPLS VRF-Lite; an MPLS PE and an ASR1k PoP in the SP core network connect through a CE switch to the hub system, across the satellite network to the remote and a CPE router, carrying VLAN 10 and VLAN 20.]
NOTE: VRF-lite implies non-MPLS-tagged frames. Although over-the-air transport
of MPLS-tagged frames is supported, TCP acceleration and advanced header
compression are not applied to these frames.
Figure 31-25. Layer 2/Layer 3 Hybrid Mode with VLAN Tagging at Hub
In this example, the hub side is connected to an 802.1Q trunk with six VLANs:
Three Layer 2 VLANs
Two Layer 3 VLANs (the term SVN as used in the diagram includes both VLANs and QinQ)
One Layer 3 multicast-only VLAN
On the remote end, the X1 in this case is configured to output an 802.1Q trunk at the LAN
port consisting of the following:
One Layer 3 multicast-only VLAN (there is only one configured at the hub)
One Layer 2 VLAN (chosen from the three configured at the hub)
One Layer 3 VLAN (chosen from the two configured at the hub)
The X7 is configured to use four of the eight switch ports:
Port 1 - 802.1Q trunk with Layer 2 VLANs
Port 2 - One Layer 3 VLAN
Port 3 - One Layer 3 VLAN
Port 4 - One Layer 3 multicast-only VLAN (there is only one configured at the hub)
Figure 31-26. Layer 2/Layer 3 Hybrid Mode with QinQ Tagging at Hub
In the above example, the hub is connected to an 802.1ad QinQ trunk with seven SVNs:
Three Layer 3 SVNs - Identified by the Reserved Outer Tag; the Reserved Outer Tag is
removed immediately at the PP, resulting in three Layer 3 VLANs.
Three Layer 2 SVNs - These traverse the iDirect network intact as QinQ-tagged SVNs.
One Layer 3 multicast-only SVN - Identified by the Reserved Outer Tag; the Reserved
Outer Tag is removed immediately at the PP, resulting in one Layer 3 VLAN.
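The Reserved Outer Tag behavior can be sketched as a classification step at the PP: frames whose outer tag matches the reserved value are Layer 3 SVNs and lose the outer tag, while all other QinQ pairs pass through as Layer 2 SVNs. The reserved tag value used here is an assumption for illustration only.

```python
# Sketch of Reserved Outer Tag handling at the protocol processor (PP).
# The reserved tag value is hypothetical; it is a hub-side configuration item.

RESERVED_OUTER_TAG = 4000  # assumed value, for illustration only

def classify_at_pp(outer_tag, inner_tag):
    """Layer 3 SVNs are identified by the reserved outer tag, which the
    PP strips immediately, leaving a single 802.1q tag. Layer 2 SVNs
    traverse the iDirect network intact as a QinQ pair."""
    if outer_tag == RESERVED_OUTER_TAG:
        return ("layer3", inner_tag)           # outer tag removed at the PP
    return ("layer2", (outer_tag, inner_tag))  # QinQ pair preserved
```

This matches the rule above that only Layer 2 SVNs can appear in a QinQ trunk on the remote LAN side, since Layer 3 SVNs have already been reduced to single-tagged VLANs.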
On the remote end, the X1 in this example is configured to output an 802.1ad QinQ trunk at
the LAN port consisting of Layer 2 SVNs. Only Layer 2 SVNs can be part of any QinQ trunk on
the remote LAN side.