
Platform Architecture

Newtec Dialog®
R2.4.1

Revision 1.1
May 21, 2021
© 2021 ST Engineering iDirect (Europe) CY NV and/or its affiliates. All rights reserved.
Reproduction in whole or in part without permission is prohibited. Information contained herein is
subject to change without notice. The specifications and information regarding the products in this
document are subject to change without notice. While every effort has been made to ensure the
accuracy of the statements, information and recommendations in this document, they are provided
without warranty of any kind, express, or implied. Users must take full responsibility for their
application of any products. Trademarks, brand names and products mentioned in this document are
the property of their respective owners. All such references are used strictly in an editorial fashion
with no intent to convey any affiliation with the name or the product's rightful owner.

ST Engineering iDirect is a global leader in satellite communications (satcom) providing technology


and solutions that enable its customers to expand their business, differentiate their services and
optimize their satcom networks. Through the merger with Newtec, a recognized industry pioneer, the
combined business unites over 35 years of innovation focused on solving satellite’s most critical
economic and technology challenges, and expands a shared commitment to shaping the future of
how the world connects. The product portfolio, branded under the names iDirect and Newtec,
represents the highest standards in performance, efficiency and reliability, making it possible for its
customers to deliver the best satcom connectivity experience anywhere in the world. ST
Engineering iDirect is the world’s largest TDMA enterprise VSAT manufacturer and is the leader in
key industries including broadcast, mobility and military/government.

Company Website: www.idirect.net | Main Phone: +32 3 780 6500


Support Contact Information: Email: customersupport@idirect.net | Website:
www.idirect.net/support-and-training
Table of Contents

1 About This Guide ............................................................................................ 1


1.1 Revision History ...................................................................................................................................... 1
1.2 Cautions and Symbols ............................................................................................................................ 1

2 What is Newtec Dialog® ................................................................................. 2

3 Newtec Private Cloud Infrastructure ............................................................. 7

4 Physical Architecture ................................................................................... 10


4.1 1IF Hub Module ..................................................................................................................................... 10
4.1.1 1IF Hardware Devices ..................................................................................................................... 10
4.1.2 1IF External Interfaces .................................................................................................................... 11
4.1.2.1 RF ................................................................................................................................................ 11
4.1.2.2 Time and Frequency Synchronization ......................................................................................... 12
4.1.2.3 IP .................................................................................................................................................. 13
4.1.3 1IF Redundancy ............................................................................................................................. 14
4.1.3.1 Servers ........................................................................................................................................ 14
4.1.3.2 Modulators and 10 MHz Reference Signal ................................................................................. 14
4.1.3.3 Demodulators .............................................................................................................................. 16
4.1.3.4 Network Connectivity ................................................................................................................... 18
4.1.3.5 Power ........................................................................................................................................... 18
4.2 4IF Hub Module ..................................................................................................................................... 19
4.2.1 4IF Hardware Devices ..................................................................................................................... 19
4.2.2 4IF External Interfaces .................................................................................................................... 22
4.2.2.1 RF ................................................................................................................................................ 22
4.2.2.2 Time and Frequency Synchronization ......................................................................................... 23
4.2.2.3 IP .................................................................................................................................................. 24
4.2.3 4IF Redundancy ............................................................................................................................. 25
4.2.3.1 Servers ........................................................................................................................................ 25
4.2.3.2 Modulators and 10 MHz Reference Signal ................................................................................. 26
4.2.3.3 Demodulators .............................................................................................................................. 30
4.2.3.4 Network Connectivity ................................................................................................................... 32
4.2.3.5 Power ........................................................................................................................................... 32
4.3 XIF Hub Module .................................................................................................................................... 33
4.3.1 XIF Hardware Devices .................................................................................................................... 33
4.3.1.1 XIF Baseband Hub Module .......................................................................................................... 33


4.3.1.2 XIF Processing Hub Module ........................................................................................................ 35


4.3.2 XIF External Interfaces .................................................................................................................... 38
4.3.2.1 RF ................................................................................................................................................ 38
4.3.2.2 Time and Frequency Synchronization ......................................................................................... 39
4.3.2.3 IP .................................................................................................................................................. 40
4.3.2.4 Baseband and Processing Hub Module Connection ................................................................... 45
4.3.2.4.1 HUB7208 and HUB7318 ....................................................................................................... 45
4.3.2.4.1.1 HPE 5710 ....................................................................................................................... 45
4.3.2.4.1.2 HPE 5700 ....................................................................................................................... 46
4.3.3 XIF Redundancy ............................................................................................................................. 47
4.3.3.1 Servers ........................................................................................................................................ 47
4.3.3.2 Modulators ................................................................................................................................... 47
4.3.3.3 Demodulators .............................................................................................................................. 48
4.3.3.4 PTP Sources ............................................................................................................................... 50
4.3.3.5 Network Connectivity ................................................................................................................... 50
4.3.3.5.1 HUB7208 ............................................................................................................................... 50
4.3.3.5.2 HUB7318 ............................................................................................................................... 51
4.3.3.6 Power ........................................................................................................................................... 52
4.4 NMS Hub Module .................................................................................................................................. 52
4.4.1 NMS Hardware Devices .................................................................................................................. 52
4.4.2 NMS External Interfaces ................................................................................................................. 54
4.4.3 NMS Redundancy ........................................................................................................................... 56
4.4.3.1 Server .......................................................................................................................................... 56
4.4.3.2 Network Connectivity ................................................................................................................... 57
4.4.3.3 Power ........................................................................................................................................... 57

5 Functional Architecture ................................................................................ 59


5.1 Data and Control Plane ......................................................................................................................... 59
5.1.1 Architecture ..................................................................................................................................... 67
5.1.2 Functions ........................................................................................................................................ 68
5.2 Management Plane ............................................................................................................................... 73
5.2.1 Inventory Management ................................................................................................................... 77
5.2.1.1 Components ................................................................................................................................ 77
5.2.1.2 Data Model ................................................................................................................................... 78
5.2.2 Configuration Management ............................................................................................................ 78
5.2.2.1 Components ................................................................................................................................ 78
5.2.2.2 Configuration Models ................................................................................................................... 79
5.2.2.2.1 Satellite Resources ................................................................................................................ 79
5.2.2.2.2 Network Resources ............................................................................................................... 81
5.2.2.2.3 Profiles ................................................................................................................................... 82


5.2.2.2.4 Security .................................................................................................................................. 82


5.2.2.2.5 Terminal ................................................................................................................................. 83
5.2.3 Fault and Performance Management ............................................................................................. 83
5.3 Redundancy ......................................................................................................................................... 86
5.3.1 Redundancy Controller ................................................................................................................... 86
5.3.1.1 Interaction .................................................................................................................................... 86
5.3.1.2 Pool Redundancy ........................................................................................................................ 86
5.3.1.3 Chain Redundancy ...................................................................................................................... 86
5.3.2 RF Devices ..................................................................................................................................... 87
5.3.2.1 1IF ................................................................................................................................................ 87
5.3.2.1.1 Modulators and 10 MHz Reference Signal ........................................................................... 87
5.3.2.1.2 Demodulators ........................................................................................................................ 89
5.3.2.2 4IF ................................................................................................................................................ 91
5.3.2.2.1 Modulators and 10 MHz Reference Signal ........................................................................... 91
5.3.2.2.2 Demodulators ........................................................................................................................ 94
5.3.2.3 XIF ............................................................................................................................................... 96
5.3.2.3.1 Modulators ............................................................................................................................. 96
5.3.2.3.2 Demodulators ........................................................................................................................ 97
5.3.3 Redundancy for Network Connectivity ............................................................................................ 98
5.3.3.1 1IF ................................................................................................................................................ 98
5.3.3.2 4IF ............................................................................................................................................. 100
5.3.3.3 XIF ............................................................................................................................................. 101
5.3.3.3.1 HUB7208 ............................................................................................................................. 101
5.3.3.3.2 HUB7318 ............................................................................................................................. 102
5.3.4 Redundancy for Power ................................................................................................................. 104
5.3.5 Redundancy for Servers, Virtual Machines and Applications ....................................................... 105
5.3.5.1 1IF ............................................................................................................................................. 105
5.3.5.2 4IF ............................................................................................................................................. 106
5.3.5.3 XIF ............................................................................................................................................. 110
5.3.5.4 NMS ........................................................................................................................................... 113
5.3.6 Geographic Redundancy .............................................................................................................. 113

6 Abbreviations .............................................................................................. 118


1 About This Guide


The Newtec Dialog Platform Architecture provides detailed information about the physical and
functional architecture of the Newtec Dialog® system.

1.1 Revision History

Version Date Reason of new version

1.0 December, 2020 Initial version of this release.

1.1 May, 2021 Bug fixes.

1.2 Cautions and Symbols


The following symbols appear in this guide:

A caution message indicates a hazardous situation that, if not avoided, may result in
minor or moderate injury. It may also refer to a procedure or practice that, if not
correctly followed, could result in equipment damage or destruction.

A hint message indicates information for the proper operation of your equipment,
including helpful hints, shortcuts or important reminders.

A reference message directs you to a location in a document with related information, to a
related document, or to a web link.


2 What is Newtec Dialog®


Newtec Dialog® is a single- and multi-service VSAT platform that allows operators and service
providers to build and adapt their infrastructure and satellite networking according to the business or
missions at hand. Based on the cornerstones of flexibility, scalability and efficiency, the Dialog
platform gives the operator the power to offer a variety of services on a single platform.
Key characteristics are:
• Flexible service offering
• Flexible business models
• Multi-service operation
• Anywhere, anytime service
• Streamlined operations

The Dialog platform fully manages all aspects of a service: bandwidth usage, real-time
requirements, network characteristics and traffic classification. The platform offers these services
with carrier grade reliability through full redundancy of the platform components.
The Dialog platform supports multiple traffic types, such as the following:
• Video and audio
• Data
• Voice
• Data casting


The core of the Dialog platform is the Hub, which is located at a physical gateway site. A Dialog
platform can consist of one or more hubs, located at one or more gateways.
A hub consists of one or more Hub Modules. A hub module contains all hardware and software
required for aggregating and processing traffic of one or more satellite networks.
The following types of hub modules exist:
• The 1IF hub module serves one satellite network and is suited for small networks. It provides less
scalability and flexibility than the other hub module types. It is also referred to as HUB6501.
• The 4IF hub module serves up to four satellite networks and is suited for medium to large
networks. It provides flexibility and scalability. It is also referred to as HUB6504.
• The XIF hub module is suited for very large networks and provides full flexibility and scalability. It
can serve up to 18 satellite networks. It is the combination of one or two baseband hub modules
and one processing hub module. The combination of HUB7208 and HUB7318 is referred to as an
XIF hub module.
– The XIF baseband hub module holds the RF devices. It is also referred to as HUB7208.
– The XIF processing hub module holds the processing servers. It is also referred to as
HUB7318. HUB7318 is deployed on the Newtec Private Cloud Infrastructure or NPCI.

Equipment redundancy is supported for all devices in the hub module. A hub module may be
implemented fully redundant, non-redundant or partially redundant.
The Terminal is the equipment located at the end-user’s site. It consists of the outdoor unit
(antenna, LNB and BUC) and the indoor unit, i.e. the modem.

Dialog R2.4.1 supports all modem types.


Note that new features described in the release notes of Dialog R2.4.1 and higher
are no longer supported on the MDM2200, MDM2500 and MDM3x00 modems.


A hub module is connected to an IP backbone at one side and to an RF interface at the other side,
establishing the Satellite Network.
A satellite network is associated with forward link capacity from one physical or virtual (in case of
DVB-S2X Annex M) forward carrier and with the corresponding return link capacity. The forward link
is based on one of the following technologies:
• DVB-S2
• DVB-S2X
• DVB-S2X Annex M.
The return link supports multiple return link technologies:
• 4CPM MF-TDMA
• DVB-S2 and S2-Extensions SCPC
• HRC SCPC and Mx-DMA
• MRC NxtGen Mx-DMA


Network Resources are configured on top of the physical satellite networks and are isolated from
each other using VLAN identifiers. Dialog provides end-to-end network connectivity for three types
of networks:
• Layer 3
• Layer 2
• Multicast
Layer 3 network resources consist of one or more virtual networks. A layer 3 virtual network is an
isolated IPv4 or IPv6 network. Devices within the same virtual network can directly communicate
with each other. A virtual network can independently use its own addressing scheme and the same
addressing schemes can be reused in different virtual networks.
Layer 2 network resources consist of one or more point-to-point virtual connections. A layer 2
point-to-point virtual connection can be considered as a virtual Ethernet pipe, which establishes
isolated communication between two devices.
A multicast network connects an uplink network on the hub side with one or more LAN networks on
the modem side. It consists of a single multicast routing instance that provides unidirectional routing
of multicast IP traffic from the uplink network to the modem LAN networks. The multicast network can
therefore be compared to a multicast router.
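To make the three network resource types more concrete, the minimal sketch below models them as plain Python data structures. It is purely illustrative: the class and field names are invented for this example and are not part of the Dialog configuration model; they only capture the properties described above (VLAN-based isolation, per-virtual-network addressing, layer 2 point-to-point connections and unidirectional multicast routing).

```python
# Illustrative sketch only; class and field names are hypothetical, not the Dialog data model.
from dataclasses import dataclass
from typing import List

@dataclass
class L3VirtualNetwork:
    vlan_id: int                   # VLAN identifier that isolates this virtual network
    subnets: List[str]             # own addressing scheme; may be reused by other virtual networks
    ipv6: bool = False

@dataclass
class L2PointToPointConnection:
    vlan_id: int                   # isolates the virtual Ethernet "pipe"
    endpoint_a: str                # e.g. hub-side attachment point
    endpoint_b: str                # e.g. modem-side attachment point

@dataclass
class MulticastNetwork:
    vlan_id: int
    uplink_network: str            # uplink network on the hub side
    modem_lan_networks: List[str]  # one or more LAN networks on the modem side
    # Traffic is routed unidirectionally: uplink network -> modem LAN networks.

# Two layer 3 virtual networks may reuse the same subnet because VLANs keep them isolated.
vn_a = L3VirtualNetwork(vlan_id=101, subnets=["10.0.0.0/24"])
vn_b = L3VirtualNetwork(vlan_id=102, subnets=["10.0.0.0/24"])
```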


The Dialog platform is managed through a single Network Management System or NMS. The
NMS can be embedded in a hub module or it can be a standalone hub module, which is deployed on
the Newtec Private Cloud Infrastructure or NPCI. The standalone NMS on NPCI is referred to as HUB7318.
The NMS provides a unified management interface to monitor, manage and control the Dialog
platform. It serves as a single point of access and embeds the following configuration and
management interfaces:
• Satellite resources
• Network resources
• Service and classification profile management
• Terminal provisioning
• Fault (alarms) and performance (metrics) management


3 Newtec Private Cloud Infrastructure


The Newtec Private Cloud Infrastructure or NPCI is a private cloud solution based on off-the-shelf
carrier-grade software, consisting of:
• Cloud hardware including:
– Enclosures and blade servers
– Network switches and cabling
The cloud hardware is based on HPE and Dell EMC hardware:
– HPE hardware: only in legacy deployments of a standalone NMS
– Dell EMC + HPE hardware: all new deployments of a standalone NMS and of the Dialog XIF
processing hub module
• Cloud infrastructure software with following key features:
– Carrier-grade
– High flexibility
– High scalability
– High performance
– High availability
• Cloud management layer, which hosts custom developed components to perform specific cloud
management functions, such as the Virtual Network Functions Manager or VNFM. The VNFM
communicates with the cloud infrastructure software to deploy an NPCI product, such as the XIF
processing hub module.

The main components of the cloud architecture are three types of physical nodes (hosts):
• Controller nodes, which
– Run the cloud services needed to manage the cloud infrastructure.
– Manage the other hosts over the internal management network.
– Provide external administration interfaces to clients over the OAM (Operations,
Administration and Management) network.
– Provide basic disk space for storage.
• Storage nodes, which
– Provide dedicated storage for virtual machine persistent disks and for ephemeral disks.
– Have a root disk and one or more storage disks.
– Are optional, but when used:


• They must be deployed in groups of either two or three for reliability. The number of
nodes required per group depends on the replication factor specified for the system.
• They must connect to the internal management network, and to the optional
infrastructure network.
• The network used for storage cluster activity (either the management network, or the
optional infrastructure network) must support 10 GbE.
• Compute nodes, which
– Run the cloud compute services and host the virtual machines, providing CPU, memory,
optional local storage and L2 networking services. The compute nodes also provide L3
networking services, such as L3 routing, floating IP, and NAT services for the virtual
machines.
– Connect to the controller nodes over the internal management network, to the storage
nodes over the optional infrastructure network, and to the provider networks using data
interfaces.

On top of the NPCI, different products can be deployed.


• XIF processing hub module
• NMS hub module
• Solution applications, such as the quota manager.
A product is entirely based on virtual machines.


4 Physical Architecture
This chapter describes the hardware devices, interfaces, redundancy configurations and deployment
scenarios of the different hub modules.

4.1 1IF Hub Module

4.1.1 1IF Hardware Devices


The hardware of the 1IF hub module is referred to as HUB6501.

HUB6501 is delivered without a rack.

The hardware devices of the 1IF hub module are:

Hardware Device: Ethernet distribution switch
• Number: Non-redundant: 1; Redundant: 2
• Supported Type: Cisco Catalyst 2960X-48TS-L 48-Port; Cisco Catalyst 2960S-48TS-L 48-Port
• Functionality: Interconnects the devices in the hub module and provides the connection with the customer's IP backbone.

Hardware Device: Universal redundancy switch, including a TX switch and a 10 MHz reference switch
• Number: Only needed in case of modulator redundancy
• Supported Type: USS0202
• Functionality: The TX switch provides redundancy switching of the modulators. The 10 MHz reference switch provides redundancy switching of the 10 MHz reference output signal of the modulators.

Hardware Device: L-band and reference splitter, including an 8-way L-band splitter and a 2-way 10 MHz reference splitter
• Number: 1
• Supported Type: ACC6000
• Functionality: The 8-way L-band splitter splits the RF RX signal from the satellite network towards the demodulator(s). The 2-way 10 MHz reference splitter splits the external 10 MHz reference signal towards the modulators.

Hardware Device: Blade server
• Number: Non-redundant: 1; Redundant: 2
• Supported Type: HP DL360p
• Functionality: Runs a fixed set of virtual machines or VMs.

Hardware Device: Modulator
• Number: Non-redundant: 1; Redundant: 2
• Supported Type: M6100; MCM7500 *

Hardware Device: Demodulator
• Number: The number and type of demodulators depend on the required return link capacity, the supported return technologies and the redundancy configuration. You can use up to eight demodulators in a 1IF hub module.
• Supported Type: MCD6000 DVB-S2/S2X; MCD6000 HRC; MCD7000 DVB-S2/S2X; MCD7000 HRC; MCD7000 4CPM; MCD7500 HRC 20 Mbaud; MCD7500 HRC 68 Mbaud; MCD7500 MRC; MCD7500 4CPM; NTC2291 4CPM *

* MCM7500 in combination with NTC2291 is not supported.

4.1.2 1IF External Interfaces

4.1.2.1 RF

The 1IF hub module supports one satellite network and has only one RF TX interface (transmit /
uplink) and one RF RX interface (receive / downlink).
The interface used for RF TX depends on the modulator redundancy.
The RF TX interface is:
• The IF- or L-band output interface at the back panel of the modulator in case of no redundancy.
M6100


MCM7500 (only L-band)

• The output interface C of the USS in case of redundancy.

The interface used for RF RX is always the 75 Ohm BNC IN interface of the L-band splitter
(ACC6000).

4.1.2.2 Time and Frequency Synchronization

The 1IF hub module offers two modes for time and frequency synchronization.
• External 10 MHz mode: An external 10 MHz reference source should be connected to the 50
Ohm BNC IN interface on the reference splitter (ACC6000).

• Internal 10 MHz mode: The active modulator provides the 10 MHz reference signal. The output
interface of this signal depends on the modulator redundancy.
The 10 MHz output interface is:
– The 50 Ohm BNC 10 MHz REF OUT interface found at the back panel of the modulator in
case of no redundancy.
M6100


MCM7500

– The output C of the USS in case of redundancy.

4.1.2.3 IP

The 1IF hub module is connected to the customer's IP backbone through specific ports on the
Ethernet distribution switch.
The following ports are used for uplink connectivity:
• Gi1/0/47 for unicast traffic.

• Gi1/0/45 for multicast traffic.

• Gi1/0/48 for management traffic.

In case of redundant switches, the ports of both switches should be connected to the IP backbone.
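As a quick reference, the port assignment above can be captured in a small lookup table. This is only an illustrative sketch: the port names are the ones listed above, but the dictionary and helper function are hypothetical and not part of any Dialog tooling.

```python
# Hypothetical helper; the port names are taken from the list above.
UPLINK_PORTS_1IF = {
    "unicast": "Gi1/0/47",
    "multicast": "Gi1/0/45",
    "management": "Gi1/0/48",
}

def uplink_port(traffic_type: str) -> str:
    """Return the 1IF distribution switch port used for the given uplink traffic type."""
    if traffic_type not in UPLINK_PORTS_1IF:
        raise ValueError(f"unknown traffic type: {traffic_type}")
    return UPLINK_PORTS_1IF[traffic_type]

# With redundant switches, the same ports on both switches connect to the IP backbone.
print(uplink_port("unicast"))  # Gi1/0/47
```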


4.1.3 1IF Redundancy

4.1.3.1 Servers

Two servers are deployed in a redundant setup: SRV-01 and SRV-02. The applications run on
redundant virtual machines in an active-standby redundancy cluster across the servers. The
redundancy behavior depends on the type of sub-systems and VMs. For more information on how
redundancy of the applications and virtual machines works, refer to
Servers, Virtual Machines and Applications for 1IF on page 105.

4.1.3.2 Modulators and 10 MHz Reference Signal

Modulators
A redundant setup requires two modulators of the same type: 2x M6100 or 2x MCM7500. The
modulators operate in a 1:1 redundancy configuration. When the active modulator fails, the standby
modulator takes over. The 1:1 redundancy is non-revertive.
The RF outputs of both modulators are connected to the TX switch of the USS (output A and B).
Output C of the USS' TX switch serves as the RF TX interface.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)


10 MHz Reference Signal


In case of the internal 10 MHz mode, the 10 MHz REF OUT interfaces of redundant modulators are
connected to the 10 MHz reference switch of the USS (output A and B). Output C of the USS'
reference switch serves as the 10 MHz output interface.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)


Both switches of the USS (TX and reference switch) switch simultaneously during a redundancy
swap, making sure that the TX output signal and 10 MHz reference output signal are coming from
the same modulator, i.e. the active one.
In case of the external 10 MHz mode, the external 10 MHz signal is fed into the IN interface of
the reference splitter (ACC6000) and the 10 MHz REF IN interface of both modulators is connected
to the OUT interfaces of this reference splitter.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)
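The sketch below illustrates the 1:1 non-revertive behavior described in this section: when the active modulator fails, the standby one takes over, both USS switches (TX and 10 MHz reference) move together so that both output signals come from the new active modulator, and there is no automatic switch-back when the failed modulator recovers. The class and method names are invented for illustration and do not correspond to the actual redundancy controller interface.

```python
# Illustrative model of 1:1 non-revertive modulator redundancy; not the actual controller API.
class ModulatorPair:
    def __init__(self, modulator_a: str, modulator_b: str):
        self.modulators = [modulator_a, modulator_b]
        self.active = 0              # index of the active modulator
        self.uss_position = "A"      # TX switch and 10 MHz reference switch move together

    def report_failure(self, modulator: str) -> None:
        """Swap to the standby modulator when the active one fails."""
        if self.modulators[self.active] != modulator:
            return                   # a failure of the standby unit does not trigger a swap
        self.active = 1 - self.active
        # Both USS switches change position simultaneously, so the TX output and the
        # 10 MHz reference output always come from the same (active) modulator.
        self.uss_position = "B" if self.uss_position == "A" else "A"

    def report_recovery(self, modulator: str) -> None:
        """Non-revertive: a recovered modulator becomes the new standby; no switch-back."""
        return None

pair = ModulatorPair("MOD-1", "MOD-2")
pair.report_failure("MOD-1")
print(pair.modulators[pair.active], pair.uss_position)  # MOD-2 B
```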

4.1.3.3 Demodulators

The RF IN of a demodulator is connected to the 8-way L-band splitter. A 1IF hub module can have
up to eight demodulators (limited by the 8-way L-band splitter).


The demodulator redundancy scheme depends on the type of demodulator.

MCD6000 / MCD7000 / MCD7500


These demodulators are grouped into one or more N:M redundancy pools. Pool redundancy is
provided per return technology. For example, MCD6000 HRC, MCD7000 HRC and MCD7500 HRC
with the same capabilities can belong to the same redundancy pool. The N:M redundancy is
non-revertive.
The 1IF hub module can support up to:
• Eight DVB-S2/S2X demodulators. For example, in a 7:1 redundancy pool with seven active
devices and one standby device.
• Eight HRC demodulators. For example, in a 6:2 redundancy pool with six active devices and two
standby devices.
• Four 4CPM demodulators. For example, in a 3:1 redundancy pool with three active devices and
one standby device. The limited set of 4CPM demodulators is due to a limitation of the modulator,
which can only synchronize NCR/ASI signaling to four 4CPM demodulators.
• Eight MRC demodulators, one active.
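A minimal sketch of the N:M pool behavior described above, assuming a pool built per return technology with N active and M standby devices: a failed active device is replaced by a free standby device, and the pool is non-revertive, so a repaired device rejoins as standby. The names are illustrative and not the real redundancy controller interface.

```python
# Illustrative N:M non-revertive pool redundancy; not the actual redundancy controller.
class RedundancyPool:
    def __init__(self, technology: str, active: list, standby: list):
        self.technology = technology   # pools are built per return technology, e.g. "HRC"
        self.active = list(active)
        self.standby = list(standby)
        self.failed = []

    def report_failure(self, device: str) -> None:
        """Replace a failed active device with a standby device, if one is available."""
        if device in self.active and self.standby:
            self.active.remove(device)
            self.failed.append(device)
            self.active.append(self.standby.pop(0))
        # Non-revertive: a repaired device later rejoins as standby, not as active.

# Example: a 6:2 HRC pool (six active, two standby), as mentioned above.
pool = RedundancyPool("HRC",
                      active=[f"MCD-{i}" for i in range(1, 7)],
                      standby=["MCD-7", "MCD-8"])
pool.report_failure("MCD-3")
print(pool.active)  # MCD-3 has been replaced by MCD-7
```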


NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.

4.1.3.4 Network Connectivity

The core of a redundant network in the 1IF hub module is formed by two distribution switches
(DSW-1 and DSW-2). The switches operate in a 1+1 redundancy configuration, where the level of
resilience is referred to as active/active, because the backup switch actively participates in the system
during normal operation.
The switches are unaware of the state in any other redundancy scheme. All devices in the hub
module, except NTC2291, have redundant management and data connectivity.

For more information about redundancy for network connectivity, refer to


Redundancy for Network Connectivity on page 98.

4.1.3.5 Power

The hub module has two Power Distribution Units or PDUs, which can be connected to two different
power circuits for redundancy.
Most devices in the hub module have dual power supplies and are connected to both PDUs. The following
devices have a single power supply and can only be connected to one of the two PDUs:
• Distribution switch(es)
• NTC2291


4.2 4IF Hub Module

4.2.1 4IF Hardware Devices


The hardware of a 4IF hub module is referred to as HUB6504.

HUB6504 is delivered with a 19” rack by default. Optionally, you can order the hub
module without a rack.

The hardware devices of the 4IF hub module are:

Hardware Device: Ethernet access switch
• Number: 2
• Supported Type: Cisco Catalyst 2960X-48TS-L 48-Port; Cisco Catalyst 2960S-48TS-L 48-Port
• Functionality: Interconnects the RF devices and connects to the distribution switches.

Hardware Device: Universal redundancy switch main module, including four 10 MHz reference switches
• Number: 1
• Supported Type: USS0202
• Functionality: The 10 MHz reference switches provide redundancy switching of the 10 MHz reference output signal of the modulators.

Hardware Device: Universal redundancy switch extension module, including four TX switches
• Number: 1
• Supported Type: USS0203
• Functionality: The TX switches provide redundancy switching of the modulators.

Hardware Device: L-band splitter, including two 8-way L-band splitters
• Number: 2
• Supported Type: ACC6000
• Functionality: The 8-way L-band splitters split the RF RX signal from the satellite network towards the demodulators.

Hardware Device: Reference splitter, including two 8-way 10 MHz reference splitters
• Number: 1
• Supported Type: ACC6000
• Functionality: The first 8-way 10 MHz reference splitter splits the external 10 MHz reference signal towards the modulators. The second 8-way splitter is used as a spare part in case the first one fails.

Hardware Device: Enclosure
• Number: 1
• Supported Type: HP C7000
• Functionality: Holds the distribution switches and blade servers. Has two redundant onboard administrators, which provide component control in the enclosure, such as power control and thermal management.

Hardware Device: Ethernet distribution switch. The HP distribution switches are configured in an IRF (Intelligent Resilient Framework) stack. This switch stack acts as one logical switch. The physical switches are members of the stack.
• Number: 2
• Supported Type: HP6125G/XG; Cisco Catalyst Blade Switch 3120G (only available in older hub modules)
• Functionality: Provides the connection with the customer's IP backbone, interconnects the blade servers in the enclosure and connects to the access switches.

Hardware Device: Blade server
• Number: Up to 16. The number of blade servers depends on the number of satellite networks you want to serve, the configuration of redundancy and the use of an embedded or a standalone NMS.
• Supported Type: HP BL460c
• Functionality: Runs a fixed set of virtual machines or VMs.

Hardware Device: Modulator
• Number: The number of modulators depends on the number of satellite networks you want to serve and the configuration of redundancy. You have either one (non-redundant) or two (redundant) modulators per satellite network. You can add up to eight modulators (to serve four satellite networks with full redundancy).
• Supported Type: M6100; MCM7500 *

Hardware Device: Demodulator
• Number: The number and type of demodulators depend on the required return link capacity and supported technologies, and the configuration of redundancy. You can add up to eight demodulators per satellite network.
• Supported Type: MCD6000 DVB-S2/S2X; MCD6000 HRC; MCD7000 DVB-S2/S2X; MCD7000 HRC; MCD7000 4CPM; MCD7500 HRC 20 Mbaud; MCD7500 HRC 68 Mbaud; MCD7500 MRC; MCD7500 4CPM; NTC2291 4CPM *

* MCM7500 in combination with NTC2291 is not supported.


The allocation of the blade servers can be divided into three sub-systems:
• Hub Module Management System or HMMS, which provides the internal management
functionality of the hub module. These servers are in slots 1 and 2 of the enclosure.
• Hub Processing Segment or HPS, which aggregates and processes the data of one satellite
network. One HPS contains two servers. These servers are in slots 3 to 12 of the enclosure.
• Network Management System or NMS, which provides centralized management functionality of
the entire Newtec Dialog Platform. These servers are in slots 13 to 16 of the enclosure.

The NMS sub-system is only deployed if the platform uses embedded NMS.

The minimum server deployment of a 4IF hub module is:


• One HMMS server in slot 1.
• Two HPS servers handling one satellite network in slots 3 to 12:
– One server handles the Satellite Channel Processing (SCP) and must be in an odd slot
position (3, 5, 7, 9 or 11)


– The other server handles the Edge/Data Processing (EDP) and must be in an even slot
position (4, 6, 8, 10 or 12).
In case you are using an embedded NMS, the NMS blade servers are in slots 13 to 16 of the
enclosure. The minimum NMS server deployment is one blade server in slot 13.
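The slot rules above (HMMS in slots 1 and 2, HPS pairs in slots 3 to 12 with SCP in an odd slot and EDP in an even slot, NMS in slots 13 to 16) can be summarized in a small validation helper. This sketch is illustrative only; the function and role names are not part of the product.

```python
# Illustrative check of the 4IF enclosure slot rules described above; not a product tool.
def slot_allowed(role: str, slot: int) -> bool:
    """Return True if a blade server with the given role may occupy the given slot."""
    if role == "HMMS":                       # Hub Module Management System
        return slot in (1, 2)
    if role == "SCP":                        # Satellite Channel Processing (odd HPS slot)
        return slot in (3, 5, 7, 9, 11)
    if role == "EDP":                        # Edge/Data Processing (even HPS slot)
        return slot in (4, 6, 8, 10, 12)
    if role == "NMS":                        # only deployed with an embedded NMS
        return slot in (13, 14, 15, 16)
    return False

# Minimum deployment: one HMMS server plus one HPS (SCP + EDP) pair, optionally one NMS server.
minimum = [("HMMS", 1), ("SCP", 3), ("EDP", 4), ("NMS", 13)]
assert all(slot_allowed(role, slot) for role, slot in minimum)
```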

4.2.2 4IF External Interfaces

4.2.2.1 RF

The 4IF hub module supports one to four satellite networks. The RF TX interface(s) (transmit /
uplink) and RX interface(s) (receive / downlink) are available through the interface panel, which is at
the rear of the 19” rack.

The RX and TX interface of each satellite network is terminated on the interface panel as shown in
the figure.


• Transmit is a 50 Ohm interface with BNC connector that outputs the RF signal of the active
modulator.
• Receive is a 75 Ohm interface with BNC connector that receives the RF signal from the antenna
and distributes it to the demodulators.

4.2.2.2 Time and Frequency Synchronization

The frequency synchronization interface is available through the interface panel, which is at the rear
of the 19” rack. The 4IF hub module offers two frequency synchronization modes.
• External 10 MHz mode: An external 10 MHz reference source should be connected to the REF
IN interface on the interface panel.

• Internal 10 MHz mode: The active modulator provides the 10 MHz reference signal. The 10 MHz
output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the interface panel. Each
satellite network has its own REF OUT interface.


4.2.2.3 IP

The IP interfaces are available through the interface panel, which is at the rear of the 19” rack.

• Data 1A/1B/2A/2B are two pairs of redundant 1 GbE interfaces for unicast traffic.
• Data 3A/3B is one pair of redundant 1 GbE interfaces for multicast traffic.
• MGMT-A/B is one pair of redundant 1 GbE interfaces for management traffic.

Internally, the A ports are connected to DSW-1 and the B ports are connected to
DSW-2.


• OA-1/2 is one pair of redundant 1 GbE interfaces for accessing the onboard administrator of the
enclosure.

4.2.3 4IF Redundancy

4.2.3.1 Servers

The minimum server deployment of a 4IF hub module is:


• One HMMS server in slot 1.
• Two HPS servers handling one satellite network in slots 3 to 12:
– One server handles the Satellite Channel Processing (SCP) and must be in an odd slot
position (3, 5, 7, 9 or 11)
– The other server handles the Edge/Data Processing (EDP) and must be in an even slot
position (4, 6, 8, 10 or 12).
To make this minimum deployment redundant you should add:
• One HMMS server in slot 2.
• Two servers in slots 3 to 12 (i.e. a redundant HPS). The SCP server must be in an odd slot
position (3, 5, 7, 9 or 11), the EDP server must be in an even slot position (4, 6, 8, 10 or 12).
• For each additional satellite network, you should add another set of two servers (an additional
HPS) in slots 3 to 12. The SCP server must be in an odd slot position (3, 5, 7, 9 or 11), the
EDP server must be in an even slot position (4, 6, 8, 10 or 12).

In case you are using an embedded NMS, the NMS blade servers are in slots 13 to 16 of the
enclosure. The minimum NMS server deployment is one blade server in slot 13. To make this
minimum deployment redundant you should add a server in slot 14. Depending on the size of your


Newtec Dialog network, you can add an extra NMS server in slot 15 and optionally a redundant
server in slot 16.

The applications run on redundant virtual machines in an active-standby redundancy cluster across
the blade servers.
The redundancy behavior depends on the type of sub-systems the server is used for.
For more information on application and virtual machine redundancy, refer to
Servers, Virtual Machines and Applications for 4IF on page 106.

4.2.3.2 Modulators and 10 MHz Reference Signal

Modulators
A redundant setup requires two modulators of the same type per satellite network: 2x M6100 or 2x
MCM7500 per satellite network. These modulators operate in a 1:1 redundancy configuration. When
the active modulator fails, the standby modulator takes over. The 1:1 redundancy is non-revertive.
Modulators can be inserted in rack positions 1 to 8. The positions are clearly numbered on the rack
and the patch panel.
The RF outputs of the modulators are connected to the TX switches of the universal redundancy
switch extension module (USS0203). There are four RF TX switches for the redundancy switching
of the TX signals of the four satellite networks.


10 MHz Reference Signal


In case of the internal 10 MHz mode, the 10 MHz REF OUT interfaces of the modulators are
connected to the 10 MHz reference switch of the universal redundancy switch main module
(USS0202). There are four 10 MHz reference switches for the redundancy switching of the 10 MHz
reference output signal of the modulators.


The 10 MHz output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the interface panel.
Each satellite network has its own REF OUT interface.

Both switches of the USSs (TX and reference switch) switch simultaneously, making sure that the TX
output signal and the 10 MHz reference output signal come from the same modulator, i.e. the
active one.
In case of the external 10 MHz mode, the external 10 MHz signal is fed into the REF IN
interface on the interface panel.


This signal is forwarded to the first 8-way reference splitter (ACC6000) and distributed to the 10 MHz
REF IN interface of the modulators. The second 8-way reference splitter can be used as a spare part
in case one of the other splitters fails.


4.2.3.3 Demodulators

A 4IF hub module can have up to eight demodulators per satellite network. The two L-band splitters
(ACC6000) provide four 8-way L-band splitters for splitting the RX signal of the four satellite
networks.
Demodulators can be inserted in rack positions 1 to 18. The positions are clearly numbered on the
rack and the patch panel.


The demodulator redundancy scheme depends on the type of demodulator.

MCD6000 / MCD7000 / MCD7500


These demodulators are grouped into one or more N:M redundancy pools. Pool redundancy is
provided per return technology. For example, MCD6000 HRC, MCD7000 HRC and MCD7500 HRC
with the same capabilities can belong to the same redundancy pool. The N:M redundancy is
non-revertive.
The 4IF hub module can support up to:
• Eight DVB-S2/S2X demodulators. For example, in a 7:1 redundancy pool with seven active
devices and one standby device.
• Eight HRC demodulators. For example, in a 6:2 redundancy pool with six active devices and two
standby devices.
• Four 4CPM demodulators. For example, in a 3:1 redundancy pool with three active devices and
one standby device. The limited set of 4CPM demodulators is due to a limitation of the modulator,
which can only synchronize NCR/ASI signaling to four 4CPM demodulators.
• Eight MRC demodulators, one active.


NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.

4.2.3.4 Network Connectivity

The core of a redundant network in the 4IF hub module is formed by two access switches and two
distribution switches. The switches operate in a 1+1 redundancy configuration.
The switches are unaware of the state in any other redundancy scheme. All devices in the hub
module, except NTC2291, have redundant management and data connectivity.

For more information about redundancy for network connectivity, refer to


Redundancy for Network Connectivity on page 100.

4.2.3.5 Power

The hub module has two Power Distribution Units (PDUs), which can be connected to two different
power circuits for redundancy.
Most devices in the hub module have dual power supplies and are connected to both PDUs. The following
devices have a single power supply and can only be connected to one of the two PDUs:
• Access switches
• NTC2291


4.3 XIF Hub Module


The XIF hub module is used for large Newtec Dialog Platform deployments. It can serve up to 18
satellite networks. The hardware devices are spread over multiple hub modules: baseband hub
module and processing hub module. For each processing hub module, you can have up to two
baseband hub modules. The XIF processing hub module is deployed on the Newtec Private Cloud
Infrastructure or NPCI.

4.3.1 XIF Hardware Devices

4.3.1.1 XIF Baseband Hub Module

The hardware of an XIF baseband hub module is referred to as HUB7208.

HUB7208 is delivered with a 19” rack by default. Optionally, you can order the hub
module without a rack.

The hardware devices of the XIF baseband hub module are:

Hardware Device: Ethernet access switch. The access switches are configured in an IRF (Intelligent Resilient Framework) stack. This switch stack acts as one logical switch (ASW-<BBID>-1, with BBID = identifier of the baseband hub module). The physical switches are members of the stack.
• Number: 4
• Supported Type: HPE FlexFabric 5710 48XGT 6QS+/2QS28; [legacy] HPE FlexFabric 5700 32XGT 8XG 2QSFP+. The different switch types cannot be mixed.
• Functionality: Interconnects the RF devices and connects to the distribution switch in the processing hub module.

Hardware Device: L-band RF switch matrix, with 32 input interfaces and 32 output interfaces
• Number: 1
• Functionality: The input interfaces connect to the modulators and provide the RX interface(s). The output interfaces connect to the demodulators and provide the TX interface(s).

Hardware Device: Modulator
• Number: The number of modulators depends on the number of satellite networks you want to serve, the type of network interface used and the configuration of redundancy.
• Supported Type: MCM7500

Hardware Device: Demodulator
• Number: The number and type of demodulators depend on the required return link capacity and supported technologies, and the configuration of redundancy.
• Supported Type: MCD7000 DVB-S2/S2X; MCD7000 HRC; MCD7000 4CPM; MCD7500 HRC 20 Mbaud; MCD7500 HRC 68 Mbaud; MCD7500 MRC; MCD7500 4CPM

Number of Modulators
The MCM7500 supports two types of network interfaces:
• 1 Gigabit Ethernet with transceiver modules on RJ-45, fiber or DAC (non-wideband mode).
• 10 Gigabit Ethernet with transceiver modules on fiber or DAC (wideband mode).
The number of modulators depends on the number of satellite networks you want to serve, the type
of network interface used and the configuration of redundancy.
• You can add up to 10 modulators if they all operate in wideband mode. These modulators can be
inserted in rack positions 12 to 21. The positions are clearly numbered on the rack.
• You can add up to 32 modulators if they all operate in non-wideband mode. These modulators
can be inserted in rack positions 1 to 32. The positions are clearly numbered on the rack.
• You can link one satellite network to one non-wideband modulator. You can link multiple satellite
networks to one wideband modulator, which has multiple virtual carriers (see the sketch below).
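As a rough illustration of the sizing rules above, the sketch below returns the maximum number of MCM7500 modulators and their allowed rack positions for each interface mode. It assumes that all modulators operate in the same mode, matching the two cases listed above; mixed configurations are not covered, and the function is hypothetical.

```python
# Illustrative sizing helper based on the rules listed above; assumes a single interface mode.
def mcm7500_limits(wideband: bool) -> dict:
    """Maximum number of MCM7500 modulators and their rack positions per interface mode."""
    if wideband:
        # 10 GbE interface; one wideband modulator can serve several satellite networks
        # through multiple virtual carriers.
        return {"max_modulators": 10, "rack_positions": list(range(12, 22))}
    # 1 GbE interface (non-wideband); one satellite network per modulator.
    return {"max_modulators": 32, "rack_positions": list(range(1, 33))}

print(mcm7500_limits(wideband=True)["rack_positions"])   # positions 12 to 21
print(mcm7500_limits(wideband=False)["max_modulators"])  # 32
```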

Number of Demodulators


The number and type of demodulators depend on the number of satellite networks you want to
serve, the required return link capacity and supported technologies, and the configuration of
redundancy.
• You need at least one demodulator per satellite network. You can add up to 32 demodulators.
Demodulators can be inserted in rack positions 1 to 32. The positions are clearly numbered on the
rack.
• The number of demodulators that can be used per satellite network is limited to:
– Eight active HRC demodulators, due to the limited processing capacity of the HRC
controller. The HRC controller is a virtual machine, which runs on the processing hub
module.
– One active 4CPM demodulator for the XIF baseband hub module combined with a
HUB7318 processing hub module. This limit is due to the number of CPM controllers that
are deployed on the processing hub module. There's only one CPM controller deployed on
HUB7318. One CPM controller serves one 4CPM demodulator.
– Eight active MCD7000 DVB-S2/S2X demodulators. This is not a hard limit but is driven by
the maximum throughput that can be reached.
– Eight MRC demodulators, one active.

The modulators and demodulators that serve the same satellite network can be
spread over multiple baseband hub modules.
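The per-satellite-network demodulator limits listed above can be summarized in a small check. The sketch is illustrative only: the limits are the ones stated above for an XIF baseband hub module combined with a HUB7318 processing hub module, and the dictionary and function names are hypothetical.

```python
# Illustrative summary of the per-satellite-network limits on active demodulators (XIF + HUB7318).
ACTIVE_DEMOD_LIMITS = {
    "HRC": 8,           # limited by the processing capacity of the HRC controller VM
    "4CPM": 1,          # only one CPM controller is deployed on HUB7318
    "DVB-S2/S2X": 8,    # soft limit, driven by the maximum achievable throughput
    "MRC": 1,           # up to eight MRC demodulators, but only one active
}

def within_limit(technology: str, active_demodulators: int) -> bool:
    """Return True if the number of active demodulators respects the stated limit."""
    return active_demodulators <= ACTIVE_DEMOD_LIMITS[technology]

print(within_limit("HRC", 8))   # True
print(within_limit("4CPM", 2))  # False
```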

4.3.1.2 XIF Processing Hub Module

The hardware of an XIF processing hub module is referred to as HUB7318. HUB7318 is deployed on
the Newtec Private Cloud Infrastructure or NPCI.


The hardware devices of the XIF processing hub module are:

Hardware Device: Top-Of-Rack or TOR switch. The switches are configured in an IRF (Intelligent Resilient Framework) stack. This switch stack acts as one logical switch. The physical switches are members of the stack: TOR-DSW-M1 and TOR-DSW-M2.
• Number: 2
• Supported Type: HPE FlexFabric 5710 48XGT 6QS+/2QS28; [legacy] HPE FlexFabric 5700 32XGT 8XG 2QSFP+. The different switch types cannot be mixed.
• Functionality: Provides the connection with the customer's IP backbone, interconnects the enclosures and connects to the access switches in the baseband hub module.

Hardware Device: Controller block
• Number: 1
• Supported Type: See below.
• Functionality: Runs the services needed to manage the NPCI, manages the compute and storage nodes and provides basic disk space for storage.

Hardware Device: Compute block
• Number: Up to 3
• Supported Type: See below.
• Functionality: Runs the compute services and hosts the virtual network functions. Provides CPU and RAM to the NPCI.

Hardware Device: Storage block
• Number: 1 (optional)
• Supported Type: See below.
• Functionality: Provides extra disk space for storage.

Hardware Device: PTP source
• Number: 2 (optional)
• Supported Type: TRF0200
• Functionality: Provides the time and frequency reference. Dedicated per processing hub module or shared over multiple processing hub modules.

Controller Block
The controller block consists of a Dell PowerEdge FX2s enclosure that includes:
• Two FC430 blade servers that act as controller nodes.
• Two FC430 blade servers that act as storage nodes.
• Two FD332 storage sleds, each containing eight storage disks.


• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).

Compute Block
The first compute block consists of a Dell PowerEdge FX2s enclosure that includes:
• Three (minimum) to four FC640 blade servers that act as compute nodes.

• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).

Additionally, you can have two extra compute blocks, each containing:


• One to four FC640 blade servers that act as compute nodes.


• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).

[optional] Storage Block


HUB7318 can contain one storage block. The storage block consists of a Dell PowerEdge FX2s
enclosure that includes either:
– One FC430 blade server that acts as a storage node.
– One FD332 storage sled containing eight storage disks.
– Two FN410T Ethernet switches.
– One Chassis Management Controller (CMC).
OR
– Two FC430 blade servers that act as storage nodes.
– Two FD332 storage sleds, each containing eight storage disks.
– Two FN410T Ethernet switches.
– One Chassis Management Controller (CMC).

The number of controller, storage and compute nodes depends on the number and size
of the satellite networks you want to serve, the configuration of redundancy and the types
of deployments on the NPCI.

4.3.2 XIF External Interfaces

4.3.2.1 RF

The RF TX interface(s) (transmit / uplink) and RX interface(s) (receive / downlink) are available on
the baseband hub module through the RF switch matrix.
The RX and TX interface of each satellite network is terminated on the RF switch matrix as shown in
the figure.


• Output TX is a 50 Ohm interface with BNC connector that outputs the RF signal of the active
modulator. You can have up to 16 TX interfaces on the RF switch matrix.
• Input RX is a 50 Ohm interface with BNC connector that receives the RF signal from the antenna
and distributes it to the demodulators. You can have up to 16 RX interfaces on the RF switch
matrix.
A baseband hub module supports up to 16 satellite networks.
Due to the internal cabling of the hub module, the input and output RF signal will suffer some loss.
This loss is less than 3 dB. The RF switch matrix does not generate extra losses.

4.3.2.2 Time and Frequency Synchronization

Time and frequency synchronization of the XIF hub module is based on PTP (Precision Time
Protocol). A PTP source with stable oscillator is used as the external PTP master clock.
The 10 MHz output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the back panel of
the PTP master clock.
The PTP master clock can be slaved to an external 10 MHz reference signal. In this case, an
external 10 MHz reference source should be connected to the 10 MHz REF IN interface on the back
panel of the PTP master clock.
Two PTP source deployment modes exist:
• Dedicated: A redundant pair of PTP sources is available per processing hub module. The PTP
sources are connected directly to the Ethernet distribution switches.
• Shared: A redundant pair of PTP sources is available per hub/gateway. The PTP sources are
connected via an external PTP-enabled switch. This switch in turn is connected to multiple
processing hub modules via the Ethernet distribution switches.
The following interfaces of the distribution switches are used: TEN1/0/25 and TEN2/0/25.
The figure shows the dedicated mode at the left and the shared mode at the right.


The distribution switches in the XIF hub modules are PTP-enabled. The switches slave on a single
master port to the PTP master clock. On all other PTP-enabled ports of the switch, the PTP
messages are regenerated considering the path and processing delay and sent to the connected
slave devices:
• Access switches
• Modulators
• Demodulators
• HRC controller
• MRC Controller

4.3.2.3 IP

In this chapter:
• Network Connectivity using HPE 5710 Switch on page 41
• Network Connectivity using HPE 5700 Switch on page 42
• Management Connectivity to Enclosures on page 44


Network Connectivity using HPE 5710 Switch


The data connections between HUB7318 and your backbone network can operate in 1 GbE or 10
GbE.
For unicast data traffic, insert CAT6A cables into the following redundant 1 GbE or 10 GbE RJ45 ports on the TOR switches:
• TEN 1/0/5 of TOR-DSW-M1
• TEN 2/0/5 of TOR-DSW-M2
OR
Insert fiber cables into the following redundant 10 GbE fiber ports on the adapter panels of the TOR switches:
• TEN 1/0/49:1 of TOR-DSW-M1
• TEN 2/0/49:1 of TOR-DSW-M2

TEN x/0/49:1 corresponds with port 13/14 on the adapter panel.

Ports 5 and 49:1 are mutually exclusive.

For multicast data traffic, insert CAT6A cables into the following redundant 1 GbE or 10 GbE RJ45 ports on the TOR switches:
• TEN 1/0/4 of TOR-DSW-M1
• TEN 2/0/4 of TOR-DSW-M2


For the remote management of HUB7318, the following redundant 1 GbE RJ45 ports on the TOR switches are available:
• TEN 1/0/8 of TOR-DSW-M1
• TEN 2/0/8 of TOR-DSW-M2

Network Connectivity using HPE 5700 Switch


The data connections between HUB7318 and your backbone network can operate in 1 GbE or 10
GbE.
For unicast data traffic, insert CAT6A cables into the following redundant 1 GbE or 10 GbE RJ45 ports on the TOR switches:
• TEN 1/0/29 of TOR-DSW-M1
• TEN 2/0/29 of TOR-DSW-M2
OR
Insert DAC or fiber cables (with SFP+ transceiver modules) into the following redundant 10 GbE SFP+ ports on the TOR switches:
• TEN 1/0/33 of TOR-DSW-M1
• TEN 2/0/33 of TOR-DSW-M2


Ports 29 and 33 are mutually exclusive.

For multicast data traffic, insert CAT6A cables into the following redundant 1 GbE or 10 GbE RJ45 ports on the TOR switches:
• TEN 1/0/28 of TOR-DSW-M1
• TEN 2/0/28 of TOR-DSW-M2

For the remote management of HUB7318, the following redundant 1 GbE RJ45 ports on the TOR switches are available:
• TEN 1/0/32 of TOR-DSW-M1
• TEN 2/0/32 of TOR-DSW-M2


Management Connectivity to Enclosures


For managing the hardware enclosures, the 1 GbE RJ45 port, labeled Gb1, at the rear of the
enclosure of the controller block is available. Access to the other enclosures is daisy-chained
through the STK/Gb2 Ethernet port.


(Figure: management connectivity to the enclosures; 1 = your management infrastructure, 2 = active CMC.)

4.3.2.4 Baseband and Processing Hub Module Connection

One baseband hub module can be connected to one processing hub module.
One processing hub module can be connected to up to two baseband hub modules.

The baseband and processing hub module are connected through ports on the access switches of
the baseband hub module and the distribution switches of the processing hub module.

4.3.2.4.1 HUB7208 and HUB7318

The type of switches used in the baseband hub module(s) and the processing hub
module must be the same. The type is either HPE FlexFabric 5710 48XGT
6QS+/2QS28 switches or HPE FlexFabric 5700 32XGT 8XG 2QSFP+ switches
(legacy).

The connection depends on the switch type: HPE 5710 or HPE 5700.

HPE 5710
For traffic between HUB7318 and the first baseband hub module, the following ports are connected:
• For management traffic
– Connect TEN 1/0/49:1 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/49:2 of TOR-DSW-M1
– Connect TEN 2/0/49:1 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/49:2 of TOR-DSW-M2

• TEN x/0/49:1 of ASW-1-1 = port 13/14 on BB adapter panel


• TEN x/0/49:2 of TOR-DSW = port 15/16 on PROC adapter panel

• For data traffic


– Connect TEN 1/0/50:1 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/50:2 of TOR-DSW-M1
– Connect TEN 2/0/50:1 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/50:2 of TOR-DSW-M2

• TEN x/0/50:1 of ASW-1-1 = port 1/2 on BB adapter panel


• TEN x/0/50:2 of TOR-DSW = port 3/4 on PROC adapter panel

If you are using a second baseband hub module, the following ports are connected:
• For management traffic
– Connect TEN 1/0/49:1 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/49:3 of TOR-DSW-M1
– Connect TEN 2/0/49:1 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/49:3 of TOR-DSW-M2

• TEN x/0/49:1 of ASW-2-1 = port 13/14 on BB adapter panel


• TEN x/0/49:3 of TOR-DSW = port 17/18 on PROC adapter panel

• For data traffic


– Connect TEN 1/0/50:1 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/50:3 of TOR-DSW-M1


– Connect TEN 2/0/50:1 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/50:3 of TOR-DSW-M2

• TEN x/0/50:1 of ASW-2-1 = port 1/2 on BB adapter panel


• TEN x/0/50:3 of TOR-DSW = port 5/6 on PROC adapter panel

HPE 5700
For traffic between HUB7318 and the first baseband hub module, the following ports are connected:
• For management traffic
– Connect XGE 1/0/34 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/35 of TOR-DSW-M1
– Connect XGE 2/0/34 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/35 of TOR-DSW-M2
• For data traffic
– Connect XGE 1/0/40 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/36 of TOR-DSW-M1
– Connect XGE 2/0/40 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/36 of TOR-DSW-M2
If you are using a second baseband hub module, the following ports are connected:
• For management traffic
– Connect XGE 1/0/34 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/37 of TOR-DSW-M1
– Connect XGE 2/0/34 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/37 of TOR-DSW-M2
• For data traffic
– Connect XGE 1/0/40 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/38 of TOR-DSW-M1
– Connect XGE 2/0/40 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/38 of TOR-DSW-M2


4.3.3 XIF Redundancy

4.3.3.1 Servers

Controller Block
The controller block has two redundant controller nodes and two redundant storage nodes. The
redundant pairs work in hot-standby mode.

Compute Block
The compute nodes host Virtual Network Functions or VNFs. The virtual network functions correspond to the virtual machines of hub modules that are not deployed on NPCI.
The VNFs are hosted on the compute nodes in a random way but can be divided into the following sub-systems:
• Hub Module Management System or HMMS, which provides the internal management
functionality of the hub module.
• Hub Processing Segment or HPS, which aggregates and processes the data of one satellite
network.
The redundancy behavior of the VNFs depends on the type of sub-systems the VNF belongs to.
When one of the compute nodes fails, its VMs are redistributed over the remaining compute nodes
in a best effort manner.
For more information on application and virtual machine redundancy, refer to
Servers, Virtual Network Functions and Applications for XIF on page 110.

4.3.3.2 Modulators

Modulators in the XIF baseband hub module operate in an N:M pool redundancy configuration, with
N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of modulators. Dedicated redundancy pools should be configured for each type of
modulator. The type depends on the modes (e.g. MCM7500 1G, MCM7500 10G) of the device.
The redundancy pools can be defined across different satellite networks and can have one or more
redundant modulators protecting multiple satellite networks. Redundancy pools cannot be
configured across baseband hub modules. The number of redundant devices can be 0.
The RF output of each modulator is connected to an input port of the RF switch matrix. The RF
output signal of each active modulator is switched to the associated TX interface of the satellite
network. The association between the inputs and outputs of the RF switch matrix is defined by
configuration and the modulators’ redundancy state:
• Multiple RF output signals can be multiplexed on the same TX interface (Tx 2 in the figure below).
• Multiple TX interfaces can be used for redundancy, carrying identical (duplicated) RF output
signals (Tx 3 and Tx 4 in the figure below).

Redundant modulators in the redundancy pool can replace a failing modulator of any satellite
network. When the active modulator in a satellite network fails, a standby modulator takes over.
After the redundancy swap, the standby modulator has become the new active modulator. The
failing modulator is isolated from the active network and when the failure is fixed, it can be used as a
standby modulator. The N:M redundancy is non-revertive.
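
The non-revertive N:M behavior can be summarized in a short sketch. The following Python snippet is purely illustrative (the class and device names are hypothetical, and it is not the actual redundancy controller): it keeps the active device per satellite network plus a list of standby devices, swaps a standby in on failure, and returns a repaired device to the pool as a new standby. The same logic applies to demodulator pools.

# Minimal sketch of a non-revertive N:M redundancy pool (illustrative only;
# not the actual REDCTL implementation).

class RedundancyPool:
    def __init__(self, active, standby):
        # active: dict mapping satellite network -> device name
        # standby: list of spare device names (M may be 0)
        self.active = dict(active)
        self.standby = list(standby)
        self.failed = set()

    def report_failure(self, satnet):
        """Swap a standby device in for the failed active device of 'satnet'."""
        failed_device = self.active[satnet]
        self.failed.add(failed_device)
        if not self.standby:
            raise RuntimeError("no standby device available in the pool")
        self.active[satnet] = self.standby.pop(0)
        return self.active[satnet]

    def repair(self, device):
        """A repaired device becomes a standby; the swap is non-revertive."""
        self.failed.discard(device)
        self.standby.append(device)


pool = RedundancyPool(active={"satnet-1": "MCM-1", "satnet-2": "MCM-2"},
                      standby=["MCM-3"])
pool.report_failure("satnet-1")   # MCM-3 becomes active for satnet-1
pool.repair("MCM-1")              # MCM-1 returns as a standby, not as active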
The number of modulators depends on the number of satellite networks you want to serve, the type
of network interface used and the configuration of redundancy:
• You can add up to 10 modulators if they all operate in wideband mode. These modulators can be
inserted in rack positions 12 to 21. The positions are clearly numbered on the rack.
• You can add up to 32 modulators if they all operate in non-wideband mode. These modulators
can be inserted in rack positions 1 to 32. The positions are clearly numbered on the rack.

4.3.3.3 Demodulators

Demodulators in the XIF baseband hub module operate in an N:M pool redundancy configuration,
with N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of demodulators. Dedicated redundancy pools should be configured for each type
of demodulator. The type can depend on the technology (e.g. CPM, HRC, S2) or the capabilities
(e.g. CPM 16MHz, HRC 17MBd/70MHz) of the device.
The redundancy or device pools can be defined across different satellite networks and can have one
or more redundant demodulators protecting multiple satellite networks. For example, you can have a
redundancy pool of 24:1, with 24 active HRC demodulators and 1 redundant HRC demodulator. Eight active HRC demodulators are linked to one satellite network, another 8 active HRC demodulators
are linked to a second satellite network, and another 8 HRC demodulators are linked to a third
satellite network. The standby demodulator is the redundant demodulator for all three satellite
networks.
Redundancy pools cannot be configured across baseband hub modules. The number of redundant
devices can be 0.
The RF input of each demodulator is connected to an output port of the RF switch matrix. The signal
on any RX interface of the satellite network is switched to the associated demodulator. The
association is defined by configuration and the demodulators’ redundancy state:
• Multiple satellite network RX signals can be multiplexed on the same RX interface.
• Multiple RX interfaces can be forwarded to the same output port of the RF switch matrix.
• Multiple RX interfaces can be used for redundancy, carrying identical (duplicated) satellite network
RX signals.

Redundant demodulators in the redundancy pool can replace a failing demodulator of any satellite
network. When the active demodulator in a satellite network fails, a standby demodulator takes over.
After the redundancy swap, the standby demodulator has become the new active demodulator. The
failing demodulator is isolated from the active network and when the failure is fixed, it can be used
as a standby demodulator. The N:M redundancy is non-revertive.
The number and type of demodulators depends on the required return link capacity and supported
technologies, and the configuration of redundancy. You can add up to 32 demodulators.
Demodulators can be inserted in rack position 1 to 32. The positions are clearly numbered on the
rack.

Platform Architecture v1.1 49/122


Newtec Dialog R2.4.1
Physical Architecture

4.3.3.4 PTP Sources

Two PTP source deployment modes exist:


• Dedicated: A redundant pair of PTP sources is available per processing hub module. The PTP
sources are connected directly to the Ethernet distribution switches.
• Shared: A redundant pair of PTP sources is available per hub/gateway. The PTP sources are
connected via an external PTP-enabled switch. This switch in turn is connected to multiple
processing hub modules via the Ethernet distribution switches.
The figure shows the dedicated mode at the left and the shared mode at the right.

4.3.3.5 Network Connectivity

4.3.3.5.1 HUB7208

The four access switches of the XIF baseband hub module are connected in a ring topology and are
configured in an IRF fabric (stack). This switch stack acts as one logical switch. The physical
switches are members of the stack.
All devices within the hub module have redundant management and data connectivity.


4.3.3.5.2 HUB7318

The two TOR switches of HUB7318 are configured in an IRF (Intelligent Resilient Framework) stack.
This switch stack acts as one logical switch. The physical switches are members of the stack.

The processing hub module supports two types of switches: HPE FlexFabric 5710
48XGT 6QS+/2QS28 or HPE FlexFabric 5700 32XGT 8XG 2QSFP+ (legacy).
The different switch types cannot be mixed.

Redundant ports are available for the unicast, multicast, and management uplink.
The switches are unaware of the state in any other redundancy scheme. All devices within the hub
module have redundant management and data connectivity.


For more information about network redundancy, refer to Redundancy for Network Connectivity on
page 101.

4.3.3.6 Power

Each XIF hub module has two Power Distribution Units (PDUs), which can be connected to two
different power circuits for redundancy.
All devices in the hub module have dual power supplies and are connected to both PDUs.

4.4 NMS Hub Module

4.4.1 NMS Hardware Devices


The Newtec Dialog Platform is managed through one Network Management System (NMS). The
NMS can be embedded in a hub module or it can be a standalone hub module, which is deployed on
a Newtec Private Cloud Infrastructure (NPCI).
The standalone NMS hub module is referred to as HUB7318.

HUB7318 is deployed on the Newtec Private Cloud Infrastructure or NPCI. It can be


part of an existing NPCI platform, for instance the NPCI which is already used for the
HUB7318 XIF processing hub module.

The hardware devices of the HUB7318 NMS hub module are:

Hardware Device: Top-Of-Rack (TOR) switch
• Number: 2. The switches are configured in an IRF (Intelligent Resilient Framework) stack. This switch stack acts as one logical switch. The physical switches are members of the stack: TOR-DSW-M1 and TOR-DSW-M2.
• Supported Type: HPE FlexFabric 5710 48XGT 6QS+/2QS28, or [legacy] HPE FlexFabric 5700 32XGT 8XG 2QSFP+. The different switch types cannot be mixed.
• Functionality: Provides the connection with the customer's IP backbone. Interconnects the enclosures.

Hardware Device: Controller block
• Number: 1
• Supported Type: See below.
• Functionality: Runs the services needed to manage the NPCI. Manages the compute and storage nodes. Provides basic disk space for storage.

Hardware Device: Compute block
• Number: Up to 3
• Supported Type: See below.
• Functionality: Runs the compute services and hosts the virtual machines. Provides CPU and RAM to the NPCI.

Hardware Device: Storage block
• Number: 1 (optional)
• Supported Type: See below.
• Functionality: Provides extra disk space for storage.

Controller Block
The controller block consists of a Dell PowerEdge FX2s enclosure that includes:
• Two FC430 blade servers that act as controller nodes.
• Two FC430 blade servers that act as storage nodes.
• Two FD332 storage sleds, each containing eight storage disks.

• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).

Compute Block
The first compute block consists of a Dell PowerEdge FX2s enclosure that includes:
• Three (minimum) to four FC640 blade servers that act as compute nodes.

• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).


Additionally, you can have two extra compute blocks, each containing:
• One to four FC640 blade servers that act as compute nodes.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).

[optional] Storage Block


HUB7318 can contain one storage block. The storage block consists of a Dell PowerEdge FX2s
enclosure that includes:
– One FC430 blade server that acts as a storage node.
– One FD332 storage sled containing eight storage disks.
– Two FN410T Ethernet switches.
– One Chassis Management Controller (CMC).
OR
– Two FC430 blade servers that act as storage nodes.
– Two FD332 storage sleds, each containing eight storage disks.
– Two FN410T Ethernet switches.
– One Chassis Management Controller (CMC).

4.4.2 NMS External Interfaces


In this chapter:
• Management Connectivity using HPE 5710 Switch on page 54
• Management Connectivity using HPE 5700 Switch on page 55
• Management Connectivity to Enclosures on page 55

Management Connectivity using HPE 5710 Switch


The management interface of NMS HUB7318 is available through the following redundant 1 GbE RJ45 ports on the TOR switches:
• TEN1/0/8 of TOR-DSW-M1
• TEN2/0/8 of TOR-DSW-M2


Management Connectivity using HPE 5700 Switch


The management interface of NMS HUB7318 is available through the following redundant 1 GbE RJ45 ports on the TOR switches:
• TEN1/0/32 of TOR-DSW-M1
• TEN2/0/32 of TOR-DSW-M2

Management Connectivity to Enclosures


For managing the hardware enclosures, the 1 GbE RJ45 port, labeled Gb1, at the rear of the
enclosure of the controller block is available. Access to the other enclosures is daisy-chained
through the STK/Gb2 Ethernet port.


(Figure: management connectivity to the enclosures; 1 = your management infrastructure, 2 = active CMC.)

4.4.3 NMS Redundancy

4.4.3.1 Server

Controller Block
The controller block has two redundant controller nodes and two redundant storage nodes. The
redundant pairs work in a hot-standby mode.

Compute Block
The compute nodes host Virtual Network Functions or VNFs. The VNFs are hosted on the compute
nodes in a random way and use the anti-affinity policy. Anti-affinity means that two instances of the
same VNF (e.g. DMA-1 and DMA-2) are never hosted on the same compute node. As a result, the
NMS requires at least two compute nodes.
When one of the compute nodes fails, its VMs are redistributed over the remaining compute nodes
in a best effort manner.
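
As an illustration of the anti-affinity rule, the hypothetical helper below picks a compute node for a new VNF instance while refusing any node that already hosts an instance of the same VNF. It is a sketch of the policy only, not the scheduler used on the NPCI; node and instance names are example values.

# Sketch of an anti-affinity placement rule (illustrative; not the NPCI scheduler).

def place_instance(vnf_name, instance_id, nodes):
    """nodes: dict mapping node name -> set of hosted instance ids, e.g. {"DMA-1"}.
    Two instances of the same VNF (e.g. DMA-1 and DMA-2) must never share a node."""
    for node, hosted in sorted(nodes.items(), key=lambda kv: len(kv[1])):
        if not any(inst.rsplit("-", 1)[0] == vnf_name for inst in hosted):
            hosted.add(instance_id)
            return node
    raise RuntimeError(f"anti-affinity cannot be satisfied for {vnf_name}")


nodes = {"compute-1": {"DMA-1"}, "compute-2": set()}
print(place_instance("DMA", "DMA-2", nodes))  # -> compute-2, never compute-1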


4.4.3.2 Network Connectivity

Connectivity towards your management infrastructure is always done through redundant ports on the
distribution switches. For more information on the redundant ports, refer to NMS External Interfaces
on page 54.

4.4.3.3 Power

Each NMS hub module has two Power Distribution Units (PDUs), which can be connected to two
different power circuits for redundancy.
All devices in the hub module have dual power supplies and are connected to both PDUs.


The most basic hub module deployment is a single hub module with embedded NMS. A single hub module deployment can be done with a 1IF or a 4IF hub module.

A multiple hub module deployment consists of more than one hub module, which can be geographically spread. The number of extra hub modules depends on the number of satellite networks and the
number of terminals. The multiple hub module deployment can be handled by an embedded NMS.
However, when the Newtec Dialog Platform has many hub modules, satellite networks and
terminals, a standalone NMS hub module is required.
In case of XIF hub module with HUB7318, the standalone NMS can be deployed on the same NPCI
infrastructure.


5 Functional Architecture

The functional architecture consists of management plane functions, control plane functions and
data plane functions. The Dialog hub consists of one or more hub modules and one NMS module
which is responsible for the central control and management plane functions. The hub modules
serve one or more satellite networks.
A satellite network consists of dedicated data plane functions which process and forward packets
and frames from the network edge at the hub side to the network edge at the terminal side, and vice
versa.
The control plane is responsible for the adaptive bandwidth management on both FWD link and RTN
link based on link quality and bandwidth requests.
The management plane interacts with the operator and integrates fault management, configuration
management, accounting, performance monitoring and security functionality. It is available at hub
module level and terminal level, and at central level. Via the central NMS module, the operator
obtains a consolidated management view of the system.
The NMS module also has central control plane functions, which are responsible for mobility
management and modem certification. The central mobility control plane functionality offers a north
bound interface to external systems, allowing an operator to implement his own business logic for
beam handover when appropriate.

5.1 Data and Control Plane


The data plane refers to all functions and processes that forward packets and frames from the
network edge on the hub side to the network edge on the modem side, and vice versa. The control
plane is responsible for handling the control and signaling traffic, supporting the data plane.
Dialog provides end-to-end network connectivity for three types of networks:
• Layer 3 on page 60
• Layer 2 on page 64
• Multicast on page 65


A network connects an uplink network on the hub side to one or more LAN networks behind the
modems.
An uplink network is identified by a customer-defined VLAN on a specific uplink interface. There can
be multiple uplink interfaces on a hub module to provide the required network capacity (for example,
multiple 1 GbE uplinks). Each uplink interface handles a distinct set of networks. The uplink
interfaces are:
• Aggregated interfaces grouping multiple (typically two) physical interfaces for redundancy.
• Configured in Trunk mode, allowing multiple VLANs to be defined on top of them.
The LAN network on the modem side is also identified by a VLAN. This can either be a native or
untagged VLAN.

Layer 3 Network
A layer 3 or L3 network connects an uplink network (1 in the figure below) on the hub side with one
or more LAN networks on the modem side (2 and 3 in the figure below). This consists of a single
routing instance (Virtual Routing and Forwarding or VRF) which provides bidirectional routing of IP
traffic between the uplink network and the modem LAN networks. The L3 network can therefore be
compared to a router.


The L3 network is identified by a configurable VLAN tag on the uplink interface and the LAN
interface of the modem.
You can configure multiple L3 networks in the same satellite network, resulting in multiple VRF instances. Each instance has an isolated routing and addressing context, allowing private address ranges to be reused for the different networks in the same satellite network. You can also terminate multiple L3 networks on the same modem, again resulting in multiple VRF instances with isolated routing and addressing contexts.
The number of virtual networks supported per satellite network depends on the type of hub module.

Hub Module Type / Max. # of L3 Networks:
• 1IF: 50
• 4IF: 128 (with servers GEN8), 256 (with servers GEN9 and up)
• XIF: 256

The number of virtual networks supported on a terminal depends on the type of terminal:

Modem Type / Max. # of Virtual Networks:
• MDM2200: 4
• MDM2210: 4
• MDM2500: 4
• MDM2510: 16
• MDM3100: 8
• MDM3300: 8
• MDM3310: 16
• SMB3310: 16
• SMB3315: 16
• MDM5000: 16
• MDM5010: 24

The uplink network can be used in networks on different satellite networks within the same hub
module. This provides a network that extends over terminals in multiple satellite networks. The layer
3 network configuration should be defined per satellite network in a consistent way avoiding any
network conflicts.
The L3 network supports two modes for assigning IP addresses or subnets to the LAN network of
the modem. The modes are:
• Dedicated Subnet
• Shared Subnet
In Dedicated Subnet mode, the modem receives a unique and dedicated range of IPv4 and/or IPv6
addresses. One IP address from this range is assigned to the modem's network interface. The
remaining addresses in the range are available for the hosts behind the modem. The modem can
serve as a DHCP server for the allocation of the IP addresses. If the modem is not used as a DHCP
server, another device in the LAN has to act as the DHCP server, or a static IP address on each host
has to be configured.

In Shared Subnet mode, the modem receives a single unique IP address for the host behind the
modem. This IP address is taken from a centrally managed IPv4 and/or IPv6 address pool. The IP
address of the modem's network interface in a shared subnet is always the first IP address of this
pool. This address is used as proxy IP address on each modem that receives an IP address from the
same address pool. The host behind the modem will behave as if it is part of a larger subnet. By
means of Proxy ARP on the modem, the host will be able to reach other hosts in the same subnet
but connected to different modems.
In case of NAT or Network Address Translation, the IP address is not handed out to the host but
remains on the modem and acts as the routable IP address.
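
The difference between the two modes can be illustrated with Python's standard ipaddress module. The subnet and pool values below are arbitrary examples, not defaults of the platform.

# Sketch of the two L3 address-assignment modes (illustrative values only).
import ipaddress

# Dedicated Subnet: each modem receives its own range; the first usable
# address goes to the modem's network interface, the rest to LAN hosts.
dedicated = ipaddress.ip_network("10.10.1.0/29")
hosts = list(dedicated.hosts())
modem_if, lan_hosts = hosts[0], hosts[1:]

# Shared Subnet: all modems draw single host addresses from one central pool;
# the first address of the pool acts as the proxy IP on every modem.
shared_pool = list(ipaddress.ip_network("10.20.0.0/24").hosts())
proxy_ip = shared_pool[0]                  # same proxy address on each modem
host_behind_modem_a = shared_pool[1]       # unique address per host
host_behind_modem_b = shared_pool[2]

print(modem_if, lan_hosts[0], proxy_ip, host_behind_modem_a, host_behind_modem_b)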


The L3 network features are listed below.


• IPv4 and IPv6 in a dual stack mode
• Static routing on the uplink interface
• Dynamic routing on the uplink interface
– OSPFv2 (IPv4) and OSPFv3 (IPv6)
– BGP (only IPv4)
• Static next hop routing on modems
• Dynamic routing to modems
– BGP (only IPv4)
• Traffic between modem LAN networks is always routed via the uplink and the customer edge router to allow "lawful intercept" by the customer, except for dynamic routing with BGP, in which case traffic between modems is directly reflected on the hub (DEM).
• Protocol enhancements, which are
– TCP acceleration
– GTP acceleration
– Packet header compression
– Packet aggregation
– TCP payload compression
• Payload encryption using AES-128. This is encryption per terminal.
• DNS
– DNS caching and proxying (both on modem and hub)
– Transparent DNS forwarding
• DHCP daemon on the modem (can be enabled or disabled)
– Assigns IP addresses on the modem LAN
– Provides DNS server address(es)
– Provides the default gateway


Layer 2 Network
A layer 2 or L2 network is a point-to-point virtual connection and can be considered as a virtual
Ethernet pipe which establishes isolated communication between two devices. A L2 network
connects an uplink network (1 in the figure below) on the hub side with a single LAN network on the
modem side (2 in the figure below). This consists of a single switching instance providing
bidirectional switching of Ethernet traffic between the uplink network and the modem LAN network.
The L2 network can therefore be compared to a switch with two ports connecting the uplink network
and the modem LAN network in a single broadcast domain.
The layer 2 network does not support terminal to terminal communication.

The L2 network is identified by a configurable single or double VLAN tag on the uplink interface and
by a configurable single VLAN tag on the LAN interface of the modem. The supported VLAN tagging
is according to the IEEE 802.1Q standard (0x8100). You can use different VLAN tags on the hub and modem for the same L2 network.

The mapping between the uplink input and the modem LAN output, and the resulting actions, is as follows:
• Uplink VLAN S + VLAN C to modem LAN VLAN C: strip VLAN S.
• Uplink VLAN S + VLAN C to modem LAN VLAN C*: strip VLAN S and translate VLAN C to VLAN C*.
• Uplink VLAN C to modem LAN VLAN C: transparent forwarding.
• Uplink VLAN C to modem LAN VLAN C*: translate VLAN C to VLAN C*.

Untagged traffic on the modem LAN is not supported.
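
The forward-direction actions in the mapping above can be expressed in a few lines. The sketch below is illustrative only and does not reflect the actual L2DEM implementation; the VLAN IDs are example values.

# Sketch of the forward-direction VLAN handling for an L2 network
# (illustrative; not the actual L2DEM implementation).

def forward_vlan_actions(uplink_tags, modem_lan_tag):
    """uplink_tags: list of VLAN IDs on the uplink frame, outer tag first
    (either [S, C] or [C]); modem_lan_tag: VLAN ID expected on the modem LAN."""
    actions = []
    tags = list(uplink_tags)
    if len(tags) == 2:                       # VLAN S + VLAN C
        actions.append(f"strip outer VLAN S {tags.pop(0)}")
    inner_c = tags[0]
    if inner_c != modem_lan_tag:             # VLAN C -> VLAN C*
        actions.append(f"translate VLAN C {inner_c} to {modem_lan_tag}")
    if not actions:
        actions.append("transparent forwarding")
    return actions


print(forward_vlan_actions([100, 200], 200))  # strip VLAN S
print(forward_vlan_actions([300], 310))       # translate C to C*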


You can configure multiple L2 networks in the same satellite network and you can terminate multiple
L2 networks on the same modem.
The L2 network features are listed below.
• Stripping of the outer VLAN tag (VLAN S) if present
• Transparent forwarding of Ethernet frames, including inner VLAN tags (VLAN C & more)


• Translation of VLAN C tag


• Protocol enhancements for traffic with up to three VLAN tags (not counting VLAN S)
– TCP acceleration
– GTP acceleration
– Packet header compression
– Packet aggregation
– TCP payload compression
• Payload encryption using AES-128. This is encryption per terminal.

Layer 2 point-to-point virtual connections are only supported on 4IF and XIF hub
modules with HP switches.

Layer 2 point-to-point virtual connections are not supported on mobile terminals.

Multicast Network
A multicast or MC network connects an uplink network (1 in the figure below) on the hub side with
one or more LAN networks on the modem side (2 and 3 in the figure below). This consists of a
single multicast routing instance providing unidirectional routing of multicast IP traffic from the uplink
network to the modem LAN networks. The MC network can therefore be compared to a multicast
router.

The MC network is identified by a configurable VLAN tag on the uplink interface. It connects to the
'native' or 'untagged' VLAN on the modem LAN network.
You can configure only one MC network in a satellite network. Different satellite networks can reuse
the same MC VLAN on the uplink interface, creating a multicast network covering multiple satellite
networks.
The MC network features are listed below.


• Multiple IPv4 multicast streams


– Allowed range for multicast streams is 224.0.0.0 – 239.255.255.255 (see the sketch after this list), except
• 224.0.0.1
• 224.0.1.1
• 239.1.0.1
• 239.1.0.2
• 239.192.2.1
• 239.255.255.100
• Static multicast routing (no IGMP on hub)
• All multicast streams configured on the network and linked to a satellite network are transported
and delivered to all modems in the satellite network
• Static and dynamic multicast routing on modem
– Static configuration of multicast stream(s) to forward to modem LAN
– IGMP: multicast stream(s) are forwarded when a JOIN message is received from the
modem LAN
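
The allowed-range rule from the list above can be expressed as a small check. The helper below is an illustrative sketch, not part of the Dialog software.

# Sketch of the multicast address check implied by the list above
# (illustrative helper only).
import ipaddress

RESERVED = {ipaddress.ip_address(a) for a in (
    "224.0.0.1", "224.0.1.1", "239.1.0.1",
    "239.1.0.2", "239.192.2.1", "239.255.255.100")}

def is_allowed_multicast(address: str) -> bool:
    """True if the address lies in 224.0.0.0 - 239.255.255.255 and is not reserved."""
    ip = ipaddress.ip_address(address)
    return ip.is_multicast and ip not in RESERVED


print(is_allowed_multicast("239.10.0.5"))    # True
print(is_allowed_multicast("224.0.0.1"))     # False (reserved)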


5.1.1 Architecture
The data plane is functionally scoped into four different segments. Each segment has its own
functionality. On the hub side, the functionality is provided by physical devices and virtual machines
or virtual network functions. This is shown in the table below. Segments 2, 3, and 4 correspond with
the HPS (Hub Processing Segment) subsystem.

Segment 1
• Functionality: Baseband processing.
• Hardware: Modulator(s) and demodulator(s).
• VM / VNF: Not applicable.

Segment 2
• Functionality: Satellite channel processing.
• Hardware: Server(s). Depending on the type of hub module, the servers are dedicated or not.
• VM / VNF: CSE, DCP, HRCCTL, CPMCTL, S2CTL, MRCCTL.

Segment 3
• Functionality: Data processing.
• Hardware: Server(s). Depending on the type of hub module, the servers are dedicated or not.
• VM / VNF: TAS.

Segment 4
• Functionality: Edge processing.
• Hardware: Server(s). Depending on the type of hub module, the servers are dedicated or not.
• VM / VNF: DEM, L2DEM.

On the terminal side, the functionality of the different segments is provided by applications in the
modem.
The four functionalities are described below.
• Baseband Processing embodies the satellite physical layer baseband processing related to
modulation (transmit) and demodulation (receive).
• Satellite Channel Processing is responsible for the encapsulation and decapsulation of
baseband frames associated with a "satellite channel". It also manages the satellite bandwidth
and offers QoS for both FWD (forward) and RTN (return) link in collaboration with the segment 2
control plane functions.
• Data Processing is dedicated to specific “protocol enhancement”. It enhances end-to-end
protocol behavior over satellite and reduces bandwidth needs via network optimization
technologies, such as header compression, payload compression, packet aggregation .... It also
implements the payload encryption/decryption.
• Edge Processing implements the “network edge” and controls the endpoints of the satellite
channels. It interacts directly with terrestrial networking functionality of both data plane and control
plane level. For layer 3 all router functionality is implemented here.
Each satellite network is a combination of these four segments.

Layer 3, layer 2 and multicast networks can be mixed within the scope of one satellite network. The
segment 4 network instances at the hub are transparently connected to the segment 4 instances at
the modem. Each connection is established through a 'satellite channel' as shown in different colors
in the figure below.


For L3 networks, the network connection passes through segment 3, 2 and 1. For example, network
1 and 2 in the figure above.
For L2 networks, the network connection passes through segment 3, 2 and 1. Each layer 2 network
instance connects with a single remote network instance on only one modem. For example, network
3 in the figure above.
For multicast networks, the network connection passes through segment 2 and 1. Segment 3 is
bypassed because no protocol enhancement is performed on multicast traffic. The multicast network
scope connects with the remote network instance on all modems in the satellite network. For
example, network 4 in the figure above.

5.1.2 Functions
Each segment can be organized in functional blocks, which perform specific task(s). The diagrams
below show the data and control plane functions for the hub module and modem side.


For simplicity, the diagram above only shows one demodulator and return link controller. In reality you have a demodulator and return link controller per return link technology.

For simplicity, the diagram above only shows one modulating block. In reality you have a modulating block per return link technology.

Functional Description of the Data Path


The functions are implemented by physical devices and VMs or VNFs and are described per
segment in the following paragraphs.
Segment 4
The main function of segment 4 is demarcation.
Layer 3 Network
On the hub side, demarcation is implemented on the DSW/TOR switch and the DEM VM. The main
functionality of segment 4 in case of a layer 3 network is routing.
The DSW/TOR wraps the incoming forward traffic, which carries the VLAN ID identifying the network, with a second VLAN. This wrapper VLAN is used for the entire hub module and all satellite networks deployed in it, and connects to all DEM VM instances. The DSW/TOR forwards the traffic to the DEM VMs.
The DEM classifies the incoming forward traffic based on the network VLAN ID and maps it to the
VRF of the network. After classification, the DEM handles the packets as follows:
• The network VLAN is stripped from the packet.
• The MPLS-Infra outer label is added and used to identify the segment 3 instance to which the
traffic must be forwarded.
• The MPLS-Channel inner label is added and used to identify the modem for which the traffic is
destined.
• The IP packet is forwarded to segment 3.
The DEM classifies incoming return traffic based on the MPLS labels and maps it to the VRF of the
network. After classification, the DEM handles the packets as follows:
• The MPLS labels are stripped from the packet.


• The network VLAN ID is added


• The packet is forwarded to the DSW/TOR.
On the uplink interface, the wrapper VLAN is stripped off.
The DEM also performs caching and proxying of DNS requests from terminals towards external
DNS servers.
On the modem side, incoming packets at the LAN interface are analyzed, classified, and optionally
marked with DSCP values.
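
The forward-direction handling in the DEM described above can be summarized in a short sketch. The label values, dictionaries and function below are hypothetical; the point is the order of operations: classify on the network VLAN, strip it, push the MPLS-Infra and MPLS-Channel labels, and forward to segment 3.

# Sketch of the DEM forward-path handling for an L3 network
# (hypothetical label values; not the actual DEM code).

def dem_forward(packet, vrf_by_vlan, infra_label_by_vrf, channel_label_by_modem):
    """packet: dict with 'vlan', 'dst_modem' and 'payload' keys."""
    vrf = vrf_by_vlan[packet["vlan"]]            # classify on the network VLAN ID
    packet.pop("vlan")                           # strip the network VLAN
    packet["mpls"] = [
        infra_label_by_vrf[vrf],                      # outer label: segment 3 instance
        channel_label_by_modem[packet["dst_modem"]],  # inner label: destination modem
    ]
    return packet                                # forwarded to segment 3


pkt = {"vlan": 210, "dst_modem": "MDM3310-42", "payload": b"..."}
out = dem_forward(pkt, {210: "vrf-blue"}, {"vrf-blue": 17}, {"MDM3310-42": 4001})
print(out["mpls"])   # [17, 4001]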
Layer 2 Network
On the hub side, demarcation is implemented on the DSW/TOR switch and the L2DEM VM. The
main functionality of segment 4 in case of a layer 2 network is VLAN mapping.
The DSW/TOR wraps incoming forward traffic (with one or two VLAN IDs identifying the network)
with an extra VLAN. This new VLAN is used to connect the network traffic to the L2DEM VMs.
There is a dedicated wrapper VLAN for each L2DEM VM.
The L2DEM classifies the incoming forward traffic based on the network VLAN ID and maps it to the
L2 virtual connection. After classification, the L2DEM VM handles the frames as follows:
• If the VLAN S is present, it is stripped from the frame.
• If needed, VLAN C is translated to VLAN C*.
• The MPLS-Infra outer label is added and used to identify the segment 3 instance to which the
traffic must be forwarded.
• The MPLS-Channel inner label is added and used to identify the modem for which the traffic is
destined
• The frame is forwarded to segment 3.
The L2DEM classifies incoming return traffic based on the MPLS labels and maps it to the L2 virtual connection. After classification, the L2DEM VM handles the frames as follows:
• The MPLS labels are stripped from the packet.
• If needed, VLAN C* is translated to VLAN C.
• If needed, VLAN S is added.
• The frame is forwarded to the DSW/TOR.
On the uplink interface, the wrapper VLAN is stripped off.
On the modem side, incoming packets at the LAN interface are analyzed, classified, and optionally
marked with DSCP values.
Multicast Network
On the hub side, demarcation is implemented on the DSW/TOR switch.
On a cloud platform, the TOR switch wraps the incoming multicast traffic, which carries the VLAN identifying the network, with a second VLAN and forwards the traffic to segment 2.
On a non-cloud platform, the incoming multicast traffic is simply flooded over the VLAN to segment 2.

MPLS or Multiprotocol Label Switching is a routing technique in telecommunications


networks that directs data from one node to the next based on short path labels rather
than long network addresses, thus avoiding complex lookups in a routing table and
speeding traffic flows. (source Wikipedia)

Segment 3


The main function of segment 3 is tunneling and protocol enhancement.


The traffic is handled by the TelliNet application. The TelliNet application works in a Client-Server
mode. The TAS VM has multiple TelliNet server instances deployed (six in case of XIF and 4IF, two
in case of 1IF). The modem has a single TelliNet client instance deployed. The TelliNet client instance of a modem sets up an eTCP (Enhanced TCP) tunnel with a TelliNet server instance. The eTCP tunnels are balanced between the different instances on the TAS VM. The eTCP tunnel is used to transport the enhanced traffic between the modem and the hub module.
The incoming forward traffic is tagged with two MPLS labels. The MPLS-Infra outer label determines
to which Tellinet server instance the traffic should be sent. The MPLS-Channel inner label
determines to which modem the traffic should be sent.
The TelliNet server instance applies the required protocol enhancements to the incoming traffic and
encapsulates the resulting traffic into the eTCP tunnel. The eTCP packets are then encapsulated in
a TCP tunnel, including control data for the shaping feedback. This allows segment 3 to learn about congestion happening in segment 2, which in turn allows segment 3 to throttle the traffic input to
segment 2 to avoid packet drops.
The TelliNet client instance on the modem decapsulates the enhanced traffic from the eTCP tunnel
and processes it to recover the original traffic. The reconstructed traffic is then forwarded to segment
4 on the modem.
In the return direction, the TelliNet client instance on the modem applies the required protocol
enhancements to the incoming traffic from the LAN network, encapsulates the resulting traffic into
the eTCP tunnel and then into the TCP tunnel. The traffic is forwarded to segment 2. The TelliNet
server instance on the TAS decapsulates the enhanced traffic from the eTCP tunnel and processes
it to recover the original traffic. The reconstructed traffic is then forwarded to segment 4 on the hub
side.
Segment 2
The main functions of segment 2 are shaping, encapsulation and decapsulation. On the hub side
shaping and encapsulation is implemented on the CSE VM, and decapsulation is implemented on
the DCP VM.
On the hub side, everything is handled by the TelliShape application. On the modem side, shaping
and encapsulation for DVB-S2(X) is also handled by the TelliShape application. Encapsulation for
MRC, HRC and 4CPM, and decapsulation is implemented in the FPGA.
The TelliShape application on the hub classifies the incoming forward traffic and applies the
configured shaping rules. The shaped traffic is then encapsulated in baseband frames or BBF and
forwarded to the modulator over UDP/IP. On the modem the FPGA decapsulates the incoming BBF
streams from the demodulator. The resulting unicast traffic is forwarded to segment 3 and the
multicast traffic is forwarded to segment 4.
The TelliShape application on the modem classifies the incoming return traffic and applies the
configured shaping rules. Depending on the return link technology, the traffic is encapsulated by the
TelliShape application or the FPGA. The TelliShape application on the DCP decapsulates the
incoming BBF streams from the demodulators. The unicast traffic is forwarded to segment 3 and the
multicast traffic is forwarded to segment 4.
Segment 1
The main functions of segment 1 are modulation and demodulation.

Functional Description of the Control Path


Forward ACM Control
The ACM controller on the CSE and the ACM client on the modem control the ACM functionality of
the forward link.


The ACM client sends line quality feedback to the ACM controller on the CSE. The ACM feedback is
sent over the return link. In case of MRC, HRC and S2, the feedback is forwarded from the DCP to
the ACM controller. In case of 4CPM, the feedback is forwarded from the CPMCTL to the ACM
controller. Based on this quality feedback, the ACM controller notifies the encapsulator of the
required MODCOD for the modem and sends ACM configuration messages to the shaper. The ACM
client extracts the ACM configuration messages from the forward link signaling.
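
The MODCOD selection step of this loop can be sketched as a threshold lookup. The MODCODs, Es/N0 thresholds and margin below are illustrative values only; they are not the thresholds used by the ACM controller.

# Sketch of the MODCOD selection step in the forward ACM loop
# (illustrative thresholds; not the actual ACM controller values).

MODCOD_THRESHOLDS = [          # (required Es/N0 in dB, MODCOD), most efficient first
    (13.1, "16APSK 3/4"),
    (10.2, "8PSK 3/4"),
    (6.6,  "QPSK 3/4"),
    (1.0,  "QPSK 1/4"),
]

def select_modcod(reported_esno_db, margin_db=0.5):
    """Pick the most efficient MODCOD whose threshold fits the reported
    link quality, keeping a small implementation margin."""
    for threshold, modcod in MODCOD_THRESHOLDS:
        if reported_esno_db - margin_db >= threshold:
            return modcod
    return MODCOD_THRESHOLDS[-1][1]            # fall back to the most robust one


print(select_modcod(11.0))   # -> "8PSK 3/4"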
Forward Link Control
The CSE sends layer 2 forward signaling to control the forward link selection. It also sends the
population ID signaling, containing the Air MAC address of the provisioned terminals. At the modem
the signaling and path timing is extracted from the forward link. The path timing is forwarded to the
time and frequency controller on the modem, which slaves the modem clock to the hub clock. The
forward signaling is forwarded to the FWD controller on the modem.
Return Link Control
The CPMCTL controls the dynamic behavior of 4CPM. It assigns bandwidth capacity to modems
based on their capacity requests.
The modem inserts the return capacity requests into the return link. The CPM demodulator extracts
the 4CPM return related layer 2 signaling, including return capacity requests and ACM feedback from
the modems and sends it to the CPMCTL. The CPMCTL inserts the 4CPM layer 2 return signaling
with the bandwidth assignments into the forward link. The modem extracts the capacity assignments
from the forward link.
For MRC and HRC, the return link controller (MRCCTL or HRCCTL) controls the dynamic behavior
of the return technology. The DCP sends the capacity requests from the terminals to the controller
and the controller inserts the layer 2 return signaling with bandwidth assignments into the forward
link. The modem extracts the return signaling from the forward link.
The S2CTL performs the following functions:
• Controlling the DVB-S2(ext) return carrier resources.
• Managing the DVB-S2(ext) demodulator hardware resources.
• Configuring the remote terminals.
• Controlling the ACM functionality of the DVB-S2(ext) return link.
• Controlling the AUPC functionality of the DVB-S2(ext) return link.

Modem Controller
The modem controller is the main control and management application of the modem and performs
the following functions:
• Storing data.
• Handling configuration updates of the data path.
• Handling the terminal installation flow and terminal authentication.
• Communicating with the Terminal Control Server or TCS.
• Providing data output to the modem GUI.
• Providing terminal statistics to the TCS.


5.2 Management Plane


The management plane functionality interacts with the operator and integrates fault management,
configuration management, accounting, performance monitoring and security functionality (FCAPS).
The functionality of the management plane is provided by physical devices (servers) and virtual
machines or virtual network functions.


In each hub module a number of generic VMs or VNFs are deployed. These are part of the HMMS
sub-system and are common to all satellite networks configured in the hub module.
• BSC: is responsible for bootstrap management, system configuration management and runs the
inventory management client.
• LOG: stores metrics.


• MON: stores real-time metrics.


• HMGW: is responsible for hub module configuration management, return controller management
and device configuration management.
• REDCTL: is the redundancy controller for the RF equipment and hub processing segments.
• TCS: is a proxy for terminal configuration management.
Following VNFs are part of the NMS sub-system:
• BSC
• LOG
• MON
• CMS: Configuration Management Service, which is responsible for
– Configuration of resources and terminals on the network
– Inventory management
– Security and access control service
– Mobility management
– TICS (Terminal Installation Certification Server)
• DMA: DataMiner Service, which is responsible for the monitoring of:
– Terminals
– Users/Virtual Network Operators (VNOs)
– Network resources
– Satellite resources
– NMS infrastructure
– Hub equipment

The NMS sub-system is only deployed on the hub module if the platform uses an
embedded NMS. In case of a standalone NMS, these components run on the NMS
hub module.

The following overview lists the specific tasks of the functional blocks, together with the (virtual) machine on which each block runs.

NMS
• Configuration management (CMS): Configuring resources and terminals on the Dialog platform.
• Inventory management (CMS, BSC): Configuring hub modules and their equipment: defining the number of satellite networks, defining the RF devices (MOD, DEMOD) and redundancy pools, ...
• Access Control management (CMS): Configuring users and their roles. Configuring VNOs.
• Fault & Performance management (DMA): Monitoring the system components. Gathering metrics of the system components. Generating alarms for the system components.
• Dialog Mobility Manager (DMM) (CMS): Hosts the Mobility API, which is used for communication between a Mobility Manager and the hub. Only applicable for mobile terminals which switch beams during operation.
• Logging (LOG, MON): Getting all logs from the different system modules. Digesting and visualizing the logs via the centralized logging of the Dialog NMS. Storing the metrics in a database.
• Terminal Installation Certification (TICS) (CMS): TICS is an optional feature in Newtec Dialog. It is used to verify and certify the installation of Newtec Dialog terminals.

Hub Module
• Hub Module Configuration management (HMGW): Configuring the resources and terminals on the different Traffic and Control functions in the Satellite Networks.
• Return Controller management (HMGW): Managing settings for the return technology (CPM, HRC, MRC, S2(X)) per terminal. Managing switching between return technologies on a terminal.
• Device configuration management (HMGW): Managing the configuration of RF devices (especially the modulators).
• Redundancy management (REDCTL): Managing redundancy: pool redundancy in protection groups, pool redundancy for RF devices, and chain redundancy for RF devices.
• Logging (LOG): Getting all logs from the different system modules. Digesting and visualizing the logs via the centralized logging of the Dialog NMS. Storing the metrics in a database.

Platform
• System Configuration management (BSC): Managing the basic system configuration of the servers and VMs/VNFs.
• Bootstrap management (BSC): Managing the basic installation of new servers and VMs/VNFs.
• Inventory Management Client (BSC): Managing the VM/VNF set configuration, based on the defined inventory.
• VM/VNF management (SRV, blade server): Managing the creation of VMs/VNFs on the servers, based on input of the Inventory Management Client (VM set definition).

5.2.1 Inventory Management

5.2.1.1 Components

Inventory management plays an important role in the installation and configuration of the Newtec
Dialog platform. Through inventory management, the customer specifies the actual hardware
configuration and deployment of his platform. When a customer updates the Newtec Dialog platform
(for example adding/removing equipment, adding capacity,... ), inventory management is used to
define the changes and to push the necessary configurations to all involved equipment.
Inventory management provides flexibility, scalability and modularity in the Newtec Dialog platform.
Based on the items configured by the customer, inventory management will bootstrap the necessary devices and VMs/VNFs on the platform. The NMS components also learn from inventory management which devices are available and what the capacity and redundancy configuration is, in order to set up the configuration and monitoring of the platform.
The Inventory Management Server or IMS gives the Newtec Dialog customer the ability to
configure the inventory of his Newtec Dialog platform in terms of hub modules, satellite networks,
RF equipment (modulators/demodulators), servers and redundancy. The IMS functionality is part of
the nms-conf application running on the CMS, which is deployed on the NMS.
The Inventory Management Client or IMC is an application running on the BSC, which is deployed
on each hub module. The IMC learns the IMS configuration for the hub module during installation or
when changing the inventory, and triggers actions to apply the necessary configuration to the hub
module.


5.2.1.2 Data Model

The first entity that should be defined in the IMS is the hub gateway, which corresponds to the
physical location of the hub. There can be one or more gateways depending on your Newtec Dialog
platform constellation: a Newtec Dialog platform can consist of multiple hubs at different gateway
locations.
Next, the hub modules that are located at the gateway should be defined.
The hub module configuration includes the following; a short data-model sketch follows the list:
• The number of satellite networks
• The RF devices and their location in the rack (does not exist for the XIF processing hub module)
• The pool and/or chain redundancy of the RF devices. The IMS validates the input and gives feedback to the client on whether the redundancy scheme can be applied. The redundancy controller learns the devices and redundancy schemes from the IMS.

When devices are added to increase capacity, this will impact the redundancy scheme.

• The servers and their location in the rack (not available for the XIF processing hub module deployed on NPCI)
• The HPS pools.
• Uplink ports to use
• Number of enclosures, use of USS, PTP mode
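
The inventory hierarchy described above (gateway, hub modules, satellite networks, RF devices, redundancy pools, uplink ports) can be sketched as a simple data model. The field names below are illustrative and do not mirror the actual IMS schema.

# Minimal data-model sketch of the IMS inventory hierarchy (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class RFDevice:
    name: str                  # e.g. a modulator or demodulator
    rack_position: int
    redundancy_pool: str

@dataclass
class HubModule:
    name: str
    satellite_networks: int
    rf_devices: List[RFDevice] = field(default_factory=list)
    uplink_ports: List[str] = field(default_factory=list)

@dataclass
class HubGateway:              # corresponds to the physical hub location
    location: str
    hub_modules: List[HubModule] = field(default_factory=list)


gateway = HubGateway(
    location="Gateway-1",
    hub_modules=[HubModule(
        name="4IF-1",
        satellite_networks=2,
        rf_devices=[RFDevice("MCM7500-1", 12, "pool-mod-A")],
        uplink_ports=["TEN1/0/5", "TEN2/0/5"])])
print(gateway.hub_modules[0].rf_devices[0].redundancy_pool)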

5.2.2 Configuration Management

5.2.2.1 Components

The configuration management system is built around a central configuration management


application (nms-conf) and distributed components (device managers) that apply the configuration to
specific devices.
• The nms-conf application runs on the CMS, which is deployed on the NMS server(s).

nms-conf is responsible for maintaining the central configuration model in an SQL database. It
exposes interfaces (REST API) that allow the user to change/update the configuration of the
system, either directly via REST calls or via the GUI. It ensures consistency of the model and
enforces both security (permissions) and validation rules with regard to the boundaries of the
system before writing the changes into the configuration database.
• Device managers are deployed on or near the device they're managing and are responsible for
applying configuration changes to the device. They keep themselves in sync with nms-conf by
polling the CMS every second for changes. This polling mechanism ensures that the system
recovers automatically from temporary outages. If the communication fails between a device
manager and the CMS, the NMS fault & performance application will trigger a 'device manager
outdated' alarm. During this communication failure period, it is still possible to change the
configuration via REST or GUI. Once the communication is restored, the device manager can poll
the CMS again and apply the configuration changes (if present).
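
As an illustration of the polling behavior described above, the following minimal sketch (Python) shows a device manager polling a CMS endpoint once per second. The URL, parameters and response fields are assumptions for illustration and do not reflect the actual CMS REST API.

# Minimal device manager polling sketch; endpoint and fields are hypothetical.
import time
import requests

CMS_URL = "https://cms.example.local/api/config"   # placeholder URL
last_revision = None

def apply_to_device(change_set):
    """Hypothetical helper: push the received settings to the managed device."""
    print("applying configuration revision", change_set.get("revision"))

while True:
    try:
        resp = requests.get(CMS_URL, params={"since": last_revision}, timeout=5)
        resp.raise_for_status()
        change_set = resp.json()
        if change_set.get("revision") != last_revision:
            apply_to_device(change_set)
            last_revision = change_set["revision"]
    except requests.RequestException:
        # Communication failure: keep polling; the NMS raises a
        # 'device manager outdated' alarm until connectivity is restored.
        pass
    time.sleep(1)  # poll the CMS every second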

5.2.2.2 Configuration Models

5.2.2.2.1 Satellite Resources

The core of the resources model is the satellite network. A satellite network is the combination of
specific forward and return link resources on which terminals can be provisioned.

The Satellite Resources can be configured via the Newtec Dialog GUI or via REST
API.


When the physical satellite network has been configured, it needs to be linked to actual satellite
resources. The satellite resources correspond with a beam, which covers a geographical area in
which terminals are serviced.
The forward link is defined as the link from the hub over the satellite to the terminals. The forward
link can use the DVB-S2 and DVB-S2X standards, as well as DVB-S2X Annex M. The
forward resources model contains the configuration of the forward carrier (frequency as seen by the
terminal, symbol rate, roll-off factor, etc.) and the configuration of the quality of service capacity
(QoS DSCPs, CIR/PIR/Weights of the different pools). The forward resources should also be linked
to a transponder and a logical satellite network.
The return link is defined as the link from the terminals over the satellite to the hub. The return link in
Newtec Dialog supports the following access and coding & modulation technologies:
• MF-TDMA - 4CPM
• SCPC - DVB-S2 and S2 Extensions
• SCPC - HRC
• Mx-DMA – HRC
• NxtGen Mx-DMA - MRC
The return link contains both the frequency plan and capacity plan of the return path. Multiple
technologies and carriers can be used simultaneously in the return for one satellite network. The
return link has a couple of common configuration parameters (QoS DSCPs used in the return, the
local oscillator frequency in the return) and consists of one or more return capacity groups. The
return capacity groups bundle RF-related properties for a given return technology:
• For DVB-S2 return capacity groups, this is the spectral range (defined as frequency as transmitted
by the terminal) that can be used to provision DVB-S2 carriers in.
• For HRC SCPC return capacity groups, these are the spectral range and the ACM settings for the
terminals in this RCG (whether or not to use ACM, min and max MODCOD).
• For HRC Mx-DMA return capacity groups, these are the spectral range, the ACM settings and the
symbol rate the terminal should use when logging in.
• For MRC NxtGen Mx-DMA return capacity groups, these are the spectral range, the ACM settings
and the symbol rate the terminal should use when logging in.

• For 4CPM return capacity groups, this is the entire carrier frequency plan.
For multiple access return technologies (MF-TDMA, Mx-DMA, NxtGen Mx-DMA), the capacity of the
return capacity group is split in return pools, each with CIR/PIR/weights (as in the forward link).
Terminals are attached in the forward to a forward pool and in the return either to a return capacity
group directly for SCPC technologies or to a return pool otherwise.
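
Since the satellite resources can be configured via the REST API, the following hedged sketch (Python) illustrates what creating forward link resources could look like. The endpoint, payload structure and units are assumptions for illustration only, not the documented Dialog API.

# Hypothetical sketch of creating forward link resources through a REST API.
import requests

payload = {
    "satellite_network": "satnet-1",
    "forward_carrier": {
        "frequency_hz": 11_750_000_000,   # frequency as seen by the terminal
        "symbol_rate_baud": 45_000_000,
        "roll_off": 0.05,
        "standard": "DVB-S2X",
    },
    "forward_pools": [
        {"name": "pool-A", "cir_bps": 20_000_000, "pir_bps": 50_000_000, "weight": 10},
    ],
}
resp = requests.post("https://nms.example.local/api/forward-resources",
                     json=payload, timeout=10)
resp.raise_for_status()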

5.2.2.2.2 Network Resources

The network resources are configured on top of a satellite network and are isolated from each other
using VLAN identifiers.
Network resources can be grouped into:
• Layer 3 network resources
• Layer 2 network resources

The Network Resources can be configured via the Newtec Dialog GUI or via REST
API.

Layer 3 Network Resources


Layer 3 network resources consist of one or more virtual networks. A layer 3 virtual network is an
isolated IPv4 or IPv6 network. Devices within the same virtual network can directly communicate
with each other. A virtual network can independently use its own addressing scheme and the same
addressing schemes can be reused in different virtual networks.
The terminal can be linked to one or more layer 3 virtual networks.
When a terminal is linked to a virtual network, it can assign one or more IP addresses to one or more
hosts. The way the IP addresses are assigned and the number of assigned IP addresses depend
on the type of virtual network. In the Newtec Dialog Platform two types of virtual networks exist:
• Dedicated Subnet
• Shared Subnet

The main difference between the shared and dedicated subnet types is that in a shared subnet, IP
addresses belong to shared IP address pools and are handed out address by address, while in a
dedicated subnet no address pools are used and terminals are provisioned with an entire subnet.
The following overview describes the differences in more detail:


• IP addresses per terminal – Shared Subnet: one IP address per terminal; the terminal acts as a
"bridge" and captures the traffic to other IP addresses in the same subnet. Dedicated Subnet: one
subnet per terminal; the terminal acts as a "router".
• IP address allocation – Shared Subnet: IP addresses come from IP address pools. Dedicated
Subnet: the addresses/subnets can be chosen freely, but must be unique within the network.
• Security – Shared Subnet: VNOs are granted access to IP pools, and thereby gain access to the
network. Dedicated Subnet: VNOs are granted access to the virtual network directly.

Layer 2 Network Resources

Layer 2 point-to-point virtual connections are only supported on 4IF and XIF hub
modules with HP switches.

Layer 2 point-to-point virtual connections are not supported on mobile terminals.

A layer 2 point-to-point virtual connection can be considered as a virtual Ethernet pipe, which
establishes isolated communication between two devices. A layer 2 point-to-point connection is
completely agnostic of the payload of the Ethernet frame. Any L2 Ethernet traffic (IP, ARP, OAM
flows) is passed transparently between these two devices. The forwarding is based on VLAN tags.

5.2.2.2.3 Profiles

Profiles allow common settings to be reused across a (large) number of terminals. The system supports the
following profiles:
• Classification Profile, which contains rules for classifying incoming traffic into traffic classes. The
classification profile also specifies if the classified traffic needs to be marked with DSCP values or
not.
• Service Profile, which defines the shaping parameters (CIR/PIR/weight) for the terminal circuit
and QoS classes.
• Attachment Profile, which is a group of attachments and each attachment defines a beam, a
satellite network, a forward pool and a return pool. This is used for terminals that can be
operational in multiple beams.
• Firewall Profile, which allows you to block any incoming traffic, except traffic that matches
specific rules.
• BGP Profile, which contains rules to filter the routes which are exchanged across BGP peers.
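
The following sketch (Python) illustrates how such profiles might be expressed as data. The structure and field names are assumptions for illustration, not the actual profile schema.

# Illustrative profile definitions; field names are assumptions, not the real schema.
classification_profile = {
    "name": "voip-and-web",
    "rules": [
        {"match": {"protocol": "udp", "dst_port": 5060}, "class": "realtime", "dscp": "EF"},
        {"match": {"protocol": "tcp", "dst_port": 443}, "class": "interactive", "dscp": None},
    ],
}
service_profile = {
    "name": "gold",
    "terminal_circuit": {"cir_bps": 4_000_000, "pir_bps": 10_000_000, "weight": 10},
    "qos_classes": {"realtime": {"cir_bps": 512_000}, "interactive": {"weight": 5}},
}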

5.2.2.2.4 Security

The security system uses Domains and Users. The Hub Network Operator (HNO) and Virtual
Network Operators (VNO) are domains and can have one or multiple users. Access to resources is
granted to domains (for example domain VNO-1 has access to forward pool A, return pool B,
service profiles C and D, virtual network E, ...). Users are granted roles (for example read-only
users).


Domains are hierarchical: typically there is one "System" domain, which has one HNO domain, and all
VNO domains belong to this HNO domain.
Resources are linked to domains in two different ways:
• All resources belong to a domain (this is also reflected in the resource identifier). Users of the
owning domain can make changes to the resource.
• Domains can be granted access to resources. Users of these domains cannot change the
resource, but can use it, for example to provision a terminal using this resource.
The resources that can be assigned to (VNO) domains are:
• Forward Attachments (Forward Pools)
• Return Attachments (Return Pools, S2 and HRC SCPC Return Capacity Groups)
• IP Pools
• Dedicated subnets
• Different profiles, such as service profiles and classification profiles
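
As a rough illustration of the ownership versus granted-access rules described above, the following sketch (Python) models the check in a few lines. The data structures and names are assumptions for illustration, not an actual Dialog implementation.

# Illustrative sketch of the domain ownership / access model described above.
resources = {
    "fwd-pool-A": {"owner": "HNO"},
    "svc-profile-gold": {"owner": "HNO"},
}
grants = {"VNO-1": {"fwd-pool-A", "svc-profile-gold"}}   # access without ownership

def can_modify(domain, resource_id):
    """Only the owning domain may change a resource."""
    return resources[resource_id]["owner"] == domain

def can_use(domain, resource_id):
    """Owners and granted domains may use a resource, e.g. to provision a terminal."""
    return can_modify(domain, resource_id) or resource_id in grants.get(domain, set())

print(can_modify("VNO-1", "fwd-pool-A"))   # False: VNO-1 does not own the pool
print(can_use("VNO-1", "fwd-pool-A"))      # True: access was granted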

5.2.2.2.5 Terminal

To allow a remote terminal to be operational on the Newtec Dialog platform, it needs to be
configured in the system. Once the remote terminal is provisioned, a bi-directional circuit is
automatically created between the terminal and the hub. The terminal is linked to forward and return
resources, network resources, and profiles.
You can also define one or more multicast circuits, which originate from the terminal.
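
The following hedged sketch (Python) illustrates what provisioning a terminal via the REST API could look like; the endpoint and payload fields are assumptions for illustration, not the documented Dialog API.

# Hypothetical terminal provisioning sketch; endpoint and fields are assumptions.
import requests

terminal = {
    "mac_address": "00:06:39:12:34:56",
    "satellite_network": "satnet-1",
    "forward_pool": "pool-A",
    "return_pool": "rcg-hrc-1/pool-1",
    "virtual_networks": ["vn-corporate"],
    "profiles": {"service": "gold", "classification": "voip-and-web"},
}
resp = requests.post("https://nms.example.local/api/terminals", json=terminal, timeout=10)
resp.raise_for_status()
print("terminal provisioned, circuit id:", resp.json().get("circuit_id"))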

5.2.3 Fault and Performance Management


The key features of the fault and performance operation in NMS are:
• Easy troubleshooting of your system
• Powerful trending and data analysis
• Alarm history for any given date and time
• Service visibility with parameters such as calculated efficiencies
• Accessible anytime anywhere by any HTML5-capable device


• User-friendly UI
• Fully customizable e-mail reports
Newtec Dialog uses Skyline DataMiner® software for the fault and performance monitoring, logging
and reporting. DataMiner® (DMA) delivers out-of-the-box support, providing drivers for devices to
monitor, such as Linux servers, Cisco devices and Newtec equipment.
The standard configuration is a single DMA cluster or DMS cluster. This single DMA cluster has
three redundant pairs of DMA agents or instances running. This standard cluster configuration can
be extended with extra agents when the scale of your Dialog network grows. The number of
required agents will be based on a dimensioning requirement.
Dialog's initial fault and performance management system fully relied on DMA for metrics collection,
processing and visualization. DMA is part of the NMS system and directly connects to all required
subsystems to gather data across the Dialog system.

In order to realize a more scalable metrics collection architecture, the system is transformed to a
hierarchical collection model where data is collected as close to the data source as possible and
stored in hub local Time Series DataBases (TSDBs). Users can access these TSDBs for local data
collection. This guarantees access to performance metrics without depending on the link
availability between the NMS and the hub modules.


DMA remains available for visual inspection and for raising alarms, but using it as a data source to
fetch metrics for further processing is discouraged. Instead, external tooling should use the TSDB API to
extract recent data and transform and load it into back-end systems outside the Dialog system.
All terminal metrics and any satellite network related performance data is locally collected at the hub
module and exposed via a local TSDB interface using the InfluxDB query language. By default, metrics
are collected in intervals of 30 seconds and a guaranteed retention period of three weeks of the raw
data is provided. Both the collection interval and retention period are configurable.
The element monitoring of devices, such as modulators and demodulators is still done by DMA.
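
As an example of pulling recent metrics from the hub-local TSDB, the following sketch (Python) issues an InfluxQL query over the standard InfluxDB 1.x HTTP API. The host, database, measurement and field names are assumptions for illustration.

# Hedged example: query the hub-local TSDB via the InfluxDB 1.x HTTP /query endpoint.
# Host, database, measurement and field names are assumptions.
import requests

query = (
    'SELECT mean("forward_throughput_bps") FROM "terminal_metrics" '
    "WHERE time > now() - 1h GROUP BY time(30s)"   # matches the 30 s collection interval
)
resp = requests.get(
    "http://hub-module.example.local:8086/query",
    params={"db": "dialog_metrics", "q": query},
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["results"][0].get("series", []):
    print(series["name"], len(series["values"]), "samples")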


5.3 Redundancy
Multiple redundancy schemes and mechanisms are used in the Newtec Dialog hub module. The
following redundancy schemes are distinguished:
• Redundancy for RF devices
• Redundancy for server hardware and Virtual Machines
• Redundancy for applications
• Redundancy for network connectivity
• Redundancy for power

5.3.1 Redundancy Controller

5.3.1.1 Interaction

The redundancy controller is in charge of the redundancy. It determines the status and health of
every physical device and all virtual machines. Based on the status, the redundancy controller
makes the decision whether a redundancy swap is required or not.
The redundancy controller polls the IMS via the REST API for the topology information. From this
topology information, the redundancy controller learns which devices are involved in the redundancy
schemes and how to monitor them. The redundancy controller polls the RF devices for alarms using
either RMCP or SNMP. The redundancy controller uses the Ethernet switches and the USS
switches to swap the devices.

5.3.1.2 Pool Redundancy

A pool is defined as a group of (virtual) devices of the same type, where every device performs a
role (service) and where typically there is one standby device that can take the role/service from any
device from the group.
For example: a pool redundancy of N:M means that there are N active devices and M standby
devices, which are ready to take over in case of failure of any of the N active ones.
Every status change of a device from the pool triggers evaluation of the pool status. If a device fails
and the standby device is OK, then a pool swap occurs.
The redundancy controller will configure the standby device with the role of the failed device and try
to isolate the defective device.
Redundant demodulators are organized in a pool redundancy. Pool redundancy is also applied
within a protection group of the virtual machines.
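
The following minimal sketch (Python) illustrates the pool redundancy decision described above. The device modelling and health checks are simplified assumptions, not the actual redundancy controller logic.

# Simplified pool redundancy sketch: a healthy standby device takes over the role
# of a failed active device, and the failed device is isolated.
def evaluate_pool(devices):
    failed = [d for d in devices if d["state"] == "active" and not d["healthy"]]
    standby = [d for d in devices if d["state"] == "standby" and d["healthy"]]
    for bad, spare in zip(failed, standby):
        spare["state"], spare["role"] = "active", bad.get("role")
        bad["state"] = "isolated"
        print(f"pool swap: {spare['name']} takes over the role of {bad['name']}")

pool = [
    {"name": "MCD-1", "state": "active", "role": "HRC demod 1", "healthy": False},
    {"name": "MCD-2", "state": "active", "role": "HRC demod 2", "healthy": True},
    {"name": "MCD-3", "state": "standby", "healthy": True},
]
evaluate_pool(pool)   # MCD-3 takes over the role of MCD-1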

5.3.1.3 Chain Redundancy

A chain is defined as a group of devices, which as a whole provide full service. If one of the devices
in a chain fails, the whole chain is declared defective. This will trigger a swap of the service to
another chain, also known as a chain swap.
In the Newtec Dialog HUB6504 and HUB6501 hub modules, a chain is used for the 4CPM return link
with the modulator, the NTC2291 burst demodulators and the USS.
Every status change of a device triggers the evaluation of the overall chain status. The decision
whether a chain is OK is straightforward: if any device in the chain is defective, the chain is not OK;
only when every device in the chain is OK is the chain considered OK. The redundancy controller
keeps track of the currently active and standby chain, so it is able to decide whether a swap is
required or not.
A redundancy swap is an action consisting of multiple commands executed in a certain order on
multiple devices.
These commands, executed on a device, have the following goals:
• Make the device itself active/standby.
• In case of standby state, isolate the device as much as possible, so that all inputs and outputs are
cut and harmful signals cannot be sent out.
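
The following minimal sketch (Python) illustrates the chain status evaluation and swap decision described in this section; the data model is an assumption for illustration only.

# A chain is OK only when every device in it is OK; otherwise swap to the standby chain.
def chain_ok(chain):
    return all(device["healthy"] for device in chain["devices"])

def evaluate_chains(active, standby):
    if not chain_ok(active) and chain_ok(standby):
        print(f"chain swap: activating {standby['name']}, isolating {active['name']}")
        return standby, active          # new (active, standby) pair
    return active, standby

chain_a = {"name": "chain-A", "devices": [{"healthy": True}, {"healthy": False}]}
chain_b = {"name": "chain-B", "devices": [{"healthy": True}, {"healthy": True}]}
active_chain, standby_chain = evaluate_chains(chain_a, chain_b)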

5.3.2 RF Devices

5.3.2.1 1IF

5.3.2.1.1 Modulators and 10 MHz Reference Signal

Modulators
A redundant setup requires two modulators of the same type: 2x M6100 or 2x MCM7500. The
modulators operate in a 1:1 redundancy configuration. When the active modulator fails, the standby
modulator takes over. The 1:1 redundancy is non-revertive.
The RF outputs of both modulators are connected to the TX switch of the USS (output A and B).
Output C of the USS' TX switch serves as the RF TX interface.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)


10 MHz Reference Signal


In case of the internal 10 MHz mode, the 10 MHz REF OUT interfaces of redundant modulators are
connected to the 10 MHz reference switch of the USS (output A and B). Output C of the USS'
reference switch serves as the 10 MHz output interface.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)


Both switches of the USS (TX and reference switch) switch simultaneously during a redundancy
swap, making sure that the TX output signal and 10 MHz reference output signal are coming from
the same modulator, i.e. the active one.
In case of the external 10 MHz mode, the external 10 MHz signal is fed to the IN interface of
the reference splitter (ACC6000) and the 10 MHz REF IN interface of both modulators is connected
to the OUT interfaces of this reference splitter.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)

5.3.2.1.2 Demodulators

The RF IN of a demodulator is connected to the 8-way L-band splitter. A 1IF hub module can have
up to eight demodulators (limited by the 8-way L-band splitter).


The demodulator redundancy scheme depends on the type of demodulator.

MCD6000 / MCD7000 / MCD7500


These demodulators are grouped into one or more N:M redundancy pools. Pool redundancy is
provided per return technology. For example, MCD6000 HRC, MCD7000 HRC and MCD7500 HRC
with the same capabilities can belong to the same redundancy pool. The N:M redundancy is
non-revertive.
The 1IF hub module can support up to:
• Eight DVB-S2/S2X demodulators. For example, in a 7:1 redundancy pool with seven active
devices and one standby device.
• Eight HRC demodulators. For example, in a 6:2 redundancy pool with six active devices and two
standby devices.
• Four 4CPM demodulators. For example, in a 3:1 redundancy pool with three active devices and
one standby device. The limited set of 4CPM demodulators is due to a limitation of the modulator,
which can only synchronize NCR/ASI signaling to four 4CPM demodulators.
• Eight MRC demodulators. One active.


NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.

5.3.2.2 4IF

5.3.2.2.1 Modulators and 10 MHz Reference Signal

Modulators
Modulator redundancy requires two modulators of the same type per satellite network: 2x M6100 or
2x MCM7500. These modulators operate in a 1:1 redundancy configuration.
When the active modulator fails, the standby modulator takes over. The 1:1 redundancy is
non-revertive.
Modulators can be inserted in rack positions 1 to 8. The positions are clearly numbered on the rack
and the patch panel.
The RF outputs of the modulators are connected to the TX switches of the universal redundancy
switch extension module (USS0203). There are four RF TX switches for the redundancy switching
of the TX signals of the four satellite networks.

10 MHz Reference Signal


In case of the internal 10 MHz mode, the 10 MHz REF OUT interfaces of the modulators are
connected to the 10 MHz reference switch of the universal redundancy switch main module
(USS0202). There are four 10 MHz reference switches for the redundancy switching of the 10 MHz
reference output signal of the modulators.

The 10 MHz output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the interface panel.
Each satellite network has its own REF OUT interface.


Both switches of the USSs (TX and reference switch) switch simultaneously, making sure that the TX
output signal and the 10 MHz reference output signal are coming from the same modulator, i.e. the
active one.
In case of the external 10 MHz mode, the external 10 MHz signal is fed to the REF IN
interface on the interface panel.

This signal is forwarded to the first 8-way reference splitter (ACC6000) and distributed to the 10 MHz
REF IN interface of the modulators. The second 8-way reference splitter can be used as a spare part
in case one of the other splitters fails.


5.3.2.2.2 Demodulators

A 4IF hub module can have up to eight demodulators per satellite network. The two L-band splitters
(ACC6000) provide four 8-way L-band splitters for splitting the RX signal of the four satellite
networks.
Demodulators can be inserted in rack positions 1 to 18. The positions are clearly numbered on the
rack and the patch panel.


The demodulator redundancy scheme depends on the type of demodulator.

MCD6000 / MCD7000 / MCD7500


These demodulators are grouped into one or more N:M redundancy pools. Pool redundancy is
provided per return technology. For example, MCD6000 HRC, MCD7000 HRC and MCD7500 HRC
with the same capabilities can belong to the same redundancy pool. The N:M redundancy is
non-revertive.
The 4IF hub module can support up to:
• Eight DVB-S2/S2X demodulators. For example, in a 7:1 redundancy pool with seven active
devices and one standby device.
• Eight HRC demodulators. For example, in a 6:2 redundancy pool with six active devices and two
standby devices.
• Four 4CPM demodulators. For example, in a 3:1 redundancy pool with three active devices and
one standby device. The limited set of 4CPM demodulators is due to a limitation of the modulator,
which can only synchronize NCR/ASI signaling to four 4CPM demodulators.
• Eight MRC demodulators. One active.


NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.

5.3.2.3 XIF

5.3.2.3.1 Modulators

Modulators in the XIF baseband hub module operate in an N:M pool redundancy configuration, with
N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of modulators. Dedicated redundancy pools should be configured for each type of
modulator. The type depends on the modes (e.g. MCM7500 1G, MCM7500 10G) of the device.
The redundancy pools can be defined across different satellite networks and can have one or more
redundant modulators protecting multiple satellite networks. Redundancy pools cannot be
configured across baseband hub modules. The number of redundant devices can be 0.
The RF output of each modulator is connected to an input port of the RF switch matrix. The RF
output signal of each active modulator is switched to the associated TX interface of the satellite
network. The association between the inputs and outputs of the RF switch matrix is defined by
configuration and the modulators’ redundancy state:
• Multiple RF output signals can be multiplexed on the same TX interface (Tx 2 in the figure below).
• Multiple TX interfaces can be used for redundancy, carrying identical (duplicated) RF output
signals (Tx 3 and Tx 4 in the figure below).

Redundant modulators in the redundancy pool can replace a failing modulator of any satellite
network. When the active modulator in a satellite network fails, a standby modulator takes over.
After the redundancy swap, the standby modulator has become the new active modulator. The
failing modulator is isolated from the active network and when the failure is fixed, it can be used as a
standby modulator. The N:M redundancy is non-revertive.


The number of modulators depends on the number of satellite networks you want to serve, the type
of network interface used and the configuration of redundancy:
• You can add up to 10 modulators if they all operate in wideband mode. These modulators can be
inserted in rack positions 12 to 21. The positions are clearly numbered on the rack.
• You can add up to 32 modulators if they all operate in non-wideband mode. These modulators
can be inserted in rack positions 1 to 32. The positions are clearly numbered on the rack.

5.3.2.3.2 Demodulators

Demodulators in the XIF baseband hub module operate in an N:M pool redundancy configuration,
with N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of demodulators. Dedicated redundancy pools should be configured for each type
of demodulator. The type can depend on the technology (e.g. CPM, HRC, S2) or the capabilities
(e.g. CPM 16MHz, HRC 17MBd/70MHz) of the device.
The redundancy or device pools can be defined across different satellite networks and can have one
or more redundant demodulators protecting multiple satellite networks. For example, you can have a
redundancy pool of 24:1, with 24 active HRC demodulators and 1 redundant HRC demodulator. Eight
active HRC demodulators are linked to one satellite network, another 8 active HRC demodulators
are linked to a second satellite network, and another 8 HRC demodulators are linked to a third
satellite network. The standby demodulator is the redundant demodulator for all three satellite
networks.
Redundancy pools cannot be configured across baseband hub modules. The number of redundant
devices can be 0.
The RF input of each demodulator is connected to an output port of the RF switch matrix. The signal
on any RX interface of the satellite network is switched to the associated demodulator. The
association is defined by configuration and the demodulators’ redundancy state:
• Multiple satellite network RX signals can be multiplexed on the same RX interface.
• Multiple RX interfaces can be forwarded to the same output port of the RF switch matrix.
• Multiple RX interfaces can be used for redundancy, carrying identical (duplicated) satellite network
RX signals.


Redundant demodulators in the redundancy pool can replace a failing demodulator of any satellite
network. When the active demodulator in a satellite network fails, a standby demodulator takes over.
After the redundancy swap, the standby demodulator has become the new active demodulator. The
failing demodulator is isolated from the active network and when the failure is fixed, it can be used
as a standby demodulator. The N:M redundancy is non-revertive.
The number and type of demodulators depends on the required return link capacity and supported
technologies, and the configuration of redundancy. You can add up to 32 demodulators.
Demodulators can be inserted in rack positions 1 to 32. The positions are clearly numbered on the
rack.

5.3.3 Redundancy for Network Connectivity

5.3.3.1 1IF

The switches operate in a 1+1 redundancy configuration, where the level of resilience is referred to
as active/active as the backup switch actively participates with the system during normal operation.
For uplink connectivity, the following ports on both switches should be connected to the backbone
infrastructure:
• Gi1/0/47 for unicast traffic.
• Gi1/0/45 for multicast traffic.
For management connectivity, the following port on both switches should be connected to the backbone
infrastructure:
• Gi1/0/48 for management traffic.
The redundant unicast uplink ports are configured separately (no aggregation) and STP (Spanning
Tree Protocol) is enabled. The following connectivity schemes are supported towards the customer
infrastructure:
• Layer 3
• Layer 2/3, with STP enabled on all unicast uplink VLANs


• Layer 2/3 without STP. STP in 1IF will handle the loop prevention

The number of edge routers in the figure above is just an example.

The switches are unaware of the state in any other redundancy scheme.
All devices in the hub module, except NTC2291, have redundant management and data connectivity.


5.3.3.2 4IF

The distribution and access switches operate in a 1+1 redundancy configuration, where the level of
resilience is referred to as active/active as the backup switch actively participates with the system
during normal operation.
For uplink connectivity, two pairs of redundant interfaces are available:
• Data 1A/1B/2A/2B are two pairs of redundant 1 GbE interfaces for unicast traffic.
• Data 3A/3B is one pair of redundant 1 GbE interfaces for multicast traffic.
For management connectivity, one pair of redundant interfaces is available:
• MGMT-A/B is one pair of redundant 1 GbE interfaces for management traffic.
The interfaces can be found on the interface panel, which is located at the rear of the 19” rack.

Internally, the A ports are connected to DSW-1 and the B ports are connected to
DSW-2.

The redundant unicast uplink ports are configured separately (no aggregation). STP is not enabled
on the unicast uplink ports due to the use of QinQ (dot1q tunneling). The following connectivity
schemes are supported towards the customer infrastructure:
• Layer 3
• Layer 2/3 with STP enabled on all unicast uplink VLANs

The number of edge routers in the figure is just an example.

The switches are unaware of the state in any other redundancy scheme.
All devices in the hub module, except NTC2291, have redundant management and data connectivity.


5.3.3.3 XIF

5.3.3.3.1 HUB7208

The four access switches of the XIF baseband hub module are connected in a ring topology and are
configured in an IRF fabric (stack).

All devices within the hub module have redundant management and data connectivity.


5.3.3.3.2 HUB7318

The two TOR switches of HUB7318 are configured in an IRF (Intelligent Resilient Framework) stack.
This switch stack acts as one logical switch. The physical switches are members of the stack.

The processing hub module supports two types of switches: HPE FlexFabric 5710
48XGT 6QS+/2QS28 or HPE FlexFabric 5700 32XGT 8XG 2QSFP+ (legacy).
The different switch types cannot be mixed.

Redundant ports are available for the unicast, multicast, and management uplink. The ports used
depend on the type of switch: HPE 5710 or HPE 5700.

Unicast Traffic
HPE 5710
• TEN 1/0/5 of TOR-DSW-M1
• TEN 2/0/5 of TOR-DSW-M2
OR
• TEN 1/0/49:1 of TOR-DSW-M1
• TEN 2/0/49:1 of TOR-DSW-M2

TEN x/0/49:1 corresponds with port 13/14 on the adapter panel.

HPE 5700
• 1 GbE or 10 GbE RJ45 ports
– TEN1/0/29 of TOR-DSW-M1
– TEN2/0/29 of TOR-DSW-M2


OR
• 10 GbE SFP+ ports
– TEN1/0/33 of TOR-DSW-M1
– TEN2/0/33 of TOR-DSW-M2

Ports 29 and 33 are mutually exclusive.

The redundant unicast uplink ports are configured in a Link Aggregation, creating a single logical link
with double capacity. Link Aggregation is possible thanks to the Stacked Switch implementation on
the TOR switches. This design results in a single logical switch and a single logical unicast uplink
connection, creating a loop-free topology by design. The following connectivity schemes are supported
towards the customer infrastructure:

Multicast Traffic
HPE 5710
• TEN 1/0/4 of TOR-DSW-M1
• TEN 2/0/4 of TOR-DSW-M2

HPE 5700
• TEN1/0/28 of TOR-DSW-M1
• TEN2/0/28 of TOR-DSW-M2


Management Traffic
HPE 5710
• TEN 1/0/8 of TOR-DSW-M1
• TEN 2/0/8 of TOR-DSW-M2

HPE 5700
• TEN1/0/32 of TOR-DSW-M1
• TEN2/0/32 of TOR-DSW-M2
The switches are unaware of the state in any other redundancy scheme. All devices within the hub
module have redundant management and data connectivity.

5.3.4 Redundancy for Power


Each hub module has two Power Distribution Units (PDUs), which can be connected to two different
power circuits for redundancy.
Most devices in the 1IF hub module have a dual power supply and are connected to both PDUs. The
following devices have a single power supply and can only be connected to one of the two PDUs:
• Distribution switch
• NTC2291
Most devices in the 4IF hub module have a dual power supply and are connected to both PDUs. The
following devices have a single power supply and can only be connected to one of the two PDUs:
• Access switches
• NTC2291
All devices in the XIF hub module have a dual power supply and are connected to both PDUs.


5.3.5 Redundancy for Servers, Virtual Machines and Applications

5.3.5.1 1IF

Two servers are deployed in a redundant setup: SRV-01 and SRV-02. Each server runs the same
set of virtual machines or VMs. The virtual machines work in an active-standby redundancy cluster
across the servers.
Some VMs are common:
• BSC-0-(redid)
• LOG-0-(redid)
• MON-0-(redid)
Others are grouped in the following sub-systems:
• HMMS (Hub Module Management System), which provides the internal management functionality
of the hub module.
– HMGW-0-(redid)
– REDCTL-0-(redid)
– TCS-(hpsid)-(redid)
• HPS (Hub Processing Segment), which deals with data processing, such as encapsulation and
decapsulation, acceleration, demarcation etc.
– PGICSE-(pgiid)
– PGIDCP-(pgiid)
– PGICPMCTL-(pgiid)
– PGIHRCCTL-(pgiid)
– PGIMRCCTL-(pgiid)
– PGIS2XCTL-(pgiid)
– PGITAS-(pgiid)
– PGIL2DEM-(pgiid)
– PGIDEM-(pgiid)
• NMS (Network Management System), which provides centralized management functionality of the
entire Newtec Dialog Platform.

The NMS sub-system is only deployed if the platform uses an embedded NMS.

– CMS-(redid)
– DMA-1-(redid)

• (redid) corresponds with the redundancy ID identifying the addresses used in a
redundancy group: virtual IP (0), first instance (1), second instance (2), and third
instance (3) (BSC only).
• (pgiid) is 1 or 2, corresponding with the protection group instance.
• (hpsid) is 1, corresponding with the hub processing segment.


The redundancy behavior depends on the type of sub-systems and VMs.

For HMMS, NMS and common VMs


When an application on the virtual machine (VM) fails, the application is first restarted. If this does
not fix the problem, the VM on the other server takes over and the VM with the failing application
becomes standby.
For example: An application on HMGW-0-1 VM fails. A restart of the application does not help.
HMGW-0-1 is considered as a failed VM and the redundancy controller swaps all applications from
HMGW-0-1 on SRV-01 to HMGW-0-2 on SRV-02. The other VMs, which did not fail, do not swap.

For HPS
The VMs of the HPS sub-system use the concept of “protection groups” for redundancy. Within a
protection group a 1:1 redundancy configuration is applied. The 1IF hub module has one protection
group and an instance of this protection group is deployed on each server (SRV-01 and SRV-02).
If an application on a VM in the active instance of the protection group fails, the application is first
restarted. If this does not fix the problem, the complete set of VMs is activated on the other instance
of the protection group and the VMs on the first instance become standby.
For example: The Tellinet application on PGITAS-1 VM fails in instance 1 of the protection group
(deployed on SRV-01). A restart of the application does not help. PGITAS-1 is considered as a failed
VM and the redundancy controller swaps the complete set of VMs to instance 2 of the protection
group (deployed on SRV-02). The VMs on instance 1 become standby.

5.3.5.2 4IF

The allocation of the blade servers can be divided into three sub-systems:


• Hub Module Management System or HMMS, which provides the internal management
functionality of the hub module. These servers are in slots 1 and 2 of the enclosure.
• Hub Processing Segment or HPS, which aggregates and processes the data of one satellite
network. One HPS contains two servers. These servers are in slots 3 to 12 of the enclosure.
• Network Management System or NMS, which provides centralized management functionality of
the entire Newtec Dialog Platform. These servers are in slots 13 to 16 of the enclosure.


The NMS sub-system is only deployed if the platform uses an embedded NMS.

The minimum server deployment of a 4IF hub module is:


• One HMMS server in slot 1.
• Two HPS servers handling one satellite network in slots 3 to 12:
– One server handles the Satellite Channel Processing (SCP) and must be in an uneven slot
position (3, 5, 7, 9 or 11)
– The other server handles the Edge/Data Processing (EDP) and must be in an even slot
position (4, 6, 8, 10 or 12).
To make this minimum deployment redundant you should add:
• One HMMS server in slot 2.
• Two servers in slots 3 to 12 (i.e. a redundant HPS). The SCP server must be in an uneven slot
position (3, 5, 7, 9 or 11), the EDP server must be in an even slot position (4, 6, 8, 10 or 12).
• For each additional satellite network, you should add another set of two servers (an additional
HPS) in slots 3 to 12. The SCP server must be in an uneven slot position (3, 5, 7, 9 or 11), the
EDP server must be in an even slot position (4, 6, 8, 10 or 12).

In case you are using an embedded NMS, the NMS blade servers are in slots 13 to 16 of the
enclosure. The minimum NMS server deployment is one blade server in slot 13. To make this
minimum deployment redundant you should add a server in slot 14. Depending on the size of your
Newtec Dialog network, you can add an extra NMS server in slot 15 and optionally a redundant
server in slot 16.
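
The slot-position rules above can be expressed as a simple check; the following sketch (Python) is illustrative only and is not part of the Dialog software.

# Illustrative validation of the 4IF blade slot rules (SCP in uneven slots 3-11,
# EDP in even slots 4-12).
SCP_SLOTS = {3, 5, 7, 9, 11}
EDP_SLOTS = {4, 6, 8, 10, 12}

def validate_hps(scp_slot, edp_slot):
    if scp_slot not in SCP_SLOTS:
        raise ValueError(f"SCP server must be in an uneven slot 3-11, got {scp_slot}")
    if edp_slot not in EDP_SLOTS:
        raise ValueError(f"EDP server must be in an even slot 4-12, got {edp_slot}")
    return True

print(validate_hps(3, 4))   # minimum (non-redundant) HPS deployment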


Each redundant server runs the same set of virtual machines or VMs.
• HMMS sub-system
– The servers are located in slot 1 (non-redundant) and slot 2 (redundant)
– BSC-0-(redid)
– LOG-0-(redid)
– MON-0-(redid)
– HMGW-0-(redid)
– REDCTL-0-(redid)
– TCS-(hpsid)-(redid)
• HPS sub-system
– The servers are located in slots 3 up to 12.
– A hub processing segment is deployed on two blade servers:
– One blade server handles the Satellite Channel Processing (SCP) and must be in an
uneven slot position (3, 5, 7, 9 or 11). The following fixed set of virtual machines is deployed on
an SCP server:
• PGICSE-(pgiid)
• PGIDCP-(pgiid)
• PGICPMCTL-(pgiid)
• PGIHRCCTL-(pgiid)
• PGIMRCCTL-(pgiid)
• PGIS2XCTL-(pgiid)


– The other blade server handles the Edge/Data Processing (EDP) and must be in an even
slot position (4, 6, 8, 10 or 12). The following fixed set of virtual machines is deployed on an
EDP server:
• PGITAS-(pgiid)-(tasid)
• PGIL2DEM-(pgiid)
• PGIDEM-(pgiid)
• NMS sub-system
– The servers are located in slots 13 up to 16.
– The following fixed set of virtual machines is deployed on the first NMS server:
• BSC-0-(redid)
• LOG-0-(redid)
• MON-0-(redid)
• CMS-(redid)
• DMA-1-(redid)
– Two DMA instances will always be deployed on each additional NMS server:
• DMA-2-(redid)
• DMA-3-(redid)

• (redid) corresponds with the redundancy ID identifying the addresses used in a
redundancy group: virtual IP (0), first instance (1), second instance (2), and third
instance (3) (BSC only).
• (pgiid) is 1 up to 5, corresponding with the protection group instance.
• (hpsid) is 1 up to 4, corresponding with the hub processing segment.

The virtual machines work in an active-standby redundancy cluster across the servers. The
redundancy behavior depends on the type of sub-systems.

For HMMS and NMS


When an application on the virtual machine (VM) fails, the application is first restarted. If this does not
fix the problem, the VM on the other server takes over and the VM with the failing application
becomes standby.
For example: An application on HMGW-0-1 VM fails. A restart of the application does not help.
HMGW-0-1 is considered as a failed VM and the redundancy controller swaps all applications from
HMGW-0-1 on SRV-01 to HMGW-0-2 on SRV-02. The other VMs, which did not fail, do not swap.

For HPS
The VMs of the HPS sub-system use the concept of “protection groups” for redundancy. Within a
protection group an N:M redundancy configuration is applied. The 4IF hub module has two
protection groups:
• SCP protection group
• EDP protection group
An instance of the SCP protection group is deployed on each blade server, which is in an uneven
slot position.


An instance of the EDP protection group is deployed on each blade server, which is in an even slot
position.
The collection of one instance of each protection group corresponds with one hub processing
segment (HPS). The number of protection group instances (PGIs) and thus the number of HPSs
depends on the number of satellite networks you want to serve with the 4IF hub module.
For the maximum number of satellite networks (four), you will need four hub processing segments
(HPSs) per enclosure, each consisting of an SCP blade server and an EDP blade server. To have
redundancy, only one HPS (i.e. one SCP and one EDP blade server) can be added (only 10 blade
server slots are available). This means that you will have five SCP protection group instances and
five EDP protection group instances per enclosure, providing 4:1 redundancy within a protection
group.

The two protection groups are independent from each other, meaning that the redundancy controller
can select any blade server from the SCP protection group and any blade server from the EDP
protection group to serve a satellite network.
For each protection group, the instances can be modeled as devices in a device pool. The
redundancy is controlled by the redundancy controller (REDCTL), which can simply use pool
redundancy logic within the protection group.
For example: The Tellinet application on the TAS-1 on instance 1 (device 1) of the EDP protection
group (device pool) fails. A restart of the application does not help. TAS-1 is considered as a failed
VM and the redundancy controller activates the complete set of VMs on instance 3 (device 3) of the
EDP protection group (device pool). The VMs on instance 1 (device 1) become standby.

5.3.5.3 XIF

In the processing hub module deployed on NPCI, you cannot link VNFs to specific compute nodes.
The VNFs are hosted on the compute nodes in a random way but can be divided into the following
sub-systems:
• HMMS sub-system


– BSC-0-(redid)
– LOG-0-(redid)
– MON-0-(redid)
– HMGW-0-(redid)
– REDCTL-0-(redid)
– TCS-(hpsid)-(redid)
• HPS sub-system
– Hub module transport
• PGICSE-(pgiid)
• PGIDCP-(pgiid)
• PGITAS-(pgiid)-(tasid)
• PGIL2DEM-(pgiid)
• PGIDEM-(pgiid)
– Hub module control
• PGICPMCTL-(pgiid)
• PGIHRCCTL-(pgiid)
• PGIMRCCTL-(pgiid)
• PGIS2XCTL-(pgiid)

• (redid) corresponds with the redundancy ID identifying the addresses used in a
redundancy group: virtual IP (0), first instance (1), second instance (2), and third
instance (3) (BSC only).
• (pgiid) is 1 up to 7, corresponding with the protection group instance.
• (hpsid) is 1 up to 18, corresponding with the hub processing segment.

The redundancy behavior depends on the type of sub-systems.

For HMMS
VNFs of the Hub Module Management or HMMS are hosted on the compute nodes using the
anti-affinity policy. Anti-affinity means that two instances of the same VNF (e.g. TCS-1 and TCS-2)
are never hosted on the same compute node. As a result, HMMS requires at least two compute
nodes. The redundancy is controlled by platform functions.
When an application on the VNF fails, the application is first restarted. If this does not fix the
problem, the VNF on the other compute node takes over and the VNF with the failing application
becomes standby.
For example: An application on HMGW-0-1 VNF fails. A restart of the application does not help.
HMGW-0-1 is considered as a failed VNF and the redundancy controller swaps all applications from
HMGW-0-1 to HMGW-0-2 on another compute node. The other VNFs, which did not fail, do not
swap.

For HPS
The VNFs of this sub-system use the concept of “protection groups” for redundancy. Each VNF
type has its own protection group (PG), meaning that there are nine protection groups:


• PGICSE
• PGIDCP
• PGICPMCTL
• PGIHRCCTL
• PGIMRCCTL
• PGIS2XCTL
• PGITAS
• PGIDEM
• PGIL2DEM

The collection of one instance of each protection group corresponds with one hub processing
segment or HPS.
The number of protection group instances (PGIs) and thus the number of HPSs depends on the
number of satellite networks you want to serve with the XIF hub module.
HPSs are grouped logically into HPS pools; one HPS pool can have zero to six HPSs, or zero to six
instances of the protection groups. Within a protection group of the same HPS pool an N:M
redundancy configuration is applied. You can have one to three HPS pools.
For the maximum number of satellite networks (18), you need six hub processing segments (HPSs)
per HPS pool. Each HPS has an instance of the complete set of VNFs running. To have redundancy,
there is one extra instance of the set of VNFs added to the HPS pool. This means that you will have
seven VNF PGIs per HPS pool, providing 6:1 redundancy within the VNF protection group of the
HPS pool.
For each VNF protection group, the instances can be modeled as devices in a device pool. The
redundancy is controlled by the redundancy controller (REDCTL), which can simply use pool
redundancy logic within the VNF protection group.


For example: The Tellinet application in instance 1 (device 1) of the TAS protection group (device
pool) fails. A restart of the application does not help. TAS-1 (device 1) is considered as a failed VNF
and the redundancy controller activates the applications on instance 3 (device 3) of the TAS
protection group (device pool). Instance 1 of the TAS protection group becomes standby.

When one of the compute nodes fails, its VNFs are redistributed over the remaining
compute nodes in a best-effort manner. When no space is available, some VNFs
cannot be deployed and redundancy will be broken. This only happens if the hub
module was not dimensioned properly.

5.3.5.4 NMS

Controller Block
The controller block has two redundant controller nodes and two redundant storage nodes. The
redundant pairs work in hot-standby mode.

Compute Block
The compute nodes host Virtual Network Functions or VNFs.
• BSC-0-(redid)
• LOG-0-(redid)
• MON-0-(redid)
• CMS-(redid)
• DMA-1-(redid)
• DMA-2-(redid)
• DMA-3-(redid)

(redid) corresponds with the redundancy ID identifying the addresses used in a
redundancy group: virtual IP (0), first instance (1), second instance (2), and third
instance (3) (BSC only).

The VNFs are hosted on the compute nodes in a random way and use the anti-affinity policy.
Anti-affinity means that two instances of the same VNF (e.g. DMA-1 and DMA-2) are never hosted
on the same compute node. As a result, the NMS requires at least two compute nodes.
When one of the compute nodes fails, its VMs are redistributed over the remaining compute nodes
in a best effort manner.

5.3.6 Geographic Redundancy

Geo-redundancy with mobility is supported using a Mobility Manager with R1.4.1 or
higher.

The geographic redundancy or geo-redundancy service is an extension to Newtec Dialog® which
facilitates servicing a terminal population from two geographically dispersed Dialog platforms. The
service ensures replication of Dialog provisioning data between the two geographically distant sites
at regular times so that operations can switch from one site to another at any time.


The geo-redundancy controller or GRC is at the heart of the geo-redundancy service and has the
following responsibilities:
• Synchronize the provisioning data between the two sites at regular intervals.
• Perform the switchover from the original (failing) active to the new active site without any manual
operator intervention.
The trigger for a switchover is a manual action of the operator, who optionally has a user
interface (dashboard) with all necessary active / passive hub module KPIs to make an informed
decision to trigger the switchover.
Dialog's geo-redundancy solution can be used for:
• Disaster recovery
• Mitigating severe weather conditions
• Service impacting maintenance or upgrades
The following geo-redundant Dialog deployment scenarios are supported:
• Full hub geo-redundancy consists of two geographically distant hubs, each including both a
set of hub modules and an (embedded) NMS. The hub modules are exact copies of each other.
• NMS geo-redundancy consists of a set of hub modules located at one or more teleports and a
1+1 redundant set of external NMS, typically located at different locations than the hub modules.

The trigger for a switchover is always a manual action; the switchover process itself is
fully automated.

Full Hub Geographic Redundancy

Hub geo-redundancy is supported when the following conditions are met:


• Both hubs are exact copies of each other with respect to Dialog hardware and Dialog software
releases
• L2 connectivity between the hubs is required to allow support for GRC virtual IP addresses
• A single gateway is supported, meaning that both NMS and hub modules are part of the same
gateway
The geo-redundancy service provides database synchronization and the switchover process.
Database Synchronization
The GRC regularly synchronizes the provisioning database of the active hub with the database of
the standby hub. It also migrates any site-specific data, such as beam name, BUC LO frequency and
LNB LO frequency. The synchronization interval is configurable. Additionally, the synchronization
can be triggered at any time, for example before a graceful switchover, ensuring that no provisioning
data is lost.
The GRC performs the following steps during the database synchronization:
1. Fetch a backup of the active Dialog NMS provisioning database through RSYNC.
2. Extract the database file from the backup file and load it in a local database.
3. Replace hub specific data, for example beam name, BUC LO frequency, LNB LO frequency.
4. Create a dump of the manipulated database and reassemble the backup package.
5. Upload the modified backup to the passive Dialog hub through RSYNC and push the data to its
NMS provisioning database.
6. Wait for the upload process to complete and note the status.
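
The following hedged sketch (Python) mirrors the synchronization steps above. The paths, host names and helper function are assumptions, not the actual GRC implementation.

# Hedged sketch of the database synchronization flow; all names are placeholders.
import subprocess

ACTIVE = "grc@active-hub.example.local:/backups/nms-provisioning.tar.gz"   # placeholder
PASSIVE = "grc@passive-hub.example.local:/backups/"                        # placeholder

def replace_site_specific_data(backup_path):
    """Hypothetical helper: patch beam name and BUC/LNB LO frequencies for the other site."""
    print("patching site-specific data in", backup_path)

# Steps 1-2: fetch the active hub's provisioning backup through rsync.
subprocess.run(["rsync", "-a", ACTIVE, "/tmp/nms-provisioning.tar.gz"], check=True)
# Steps 3-4: replace hub-specific data and reassemble the backup package.
replace_site_specific_data("/tmp/nms-provisioning.tar.gz")
# Step 5: upload the modified backup to the passive hub through rsync.
subprocess.run(["rsync", "-a", "/tmp/nms-provisioning.tar.gz", PASSIVE], check=True)
# Step 6: the GRC would then trigger the restore on the passive NMS and record the status.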

Switchover Process
The switchover process will switch all Dialog hub components from the active to the passive
location. There is no option to only switch over the NMS, or to only switch over a specific hub
module.
There are two types of switchover triggers:
• Graceful switchover, in which case the administrative status (active or passive) and accessibility of
both hubs is checked. If the status is not OK or the gateway cannot be reached, the switchover
will not occur.
• Forced switchover, in which case the administrative status can be forced regardless of the actual
administrative status, or in case the administrative status cannot be determined. The operator is
responsible for ensuring that there are no conflicts in the operation of both hubs.
The detailed switchover sequence is as follows:
• The operator triggers a switchover.
• The GRC confirms both Dialog hubs are accessible.
• The GRC disables the currently active Dialog hub using REST API.
• The GRC polls the status of the Dialog hub until the modulators are no longer transmitting and the
uplink ports are disabled.
• The GRC enables the currently passive Dialog hub using REST API.
• The GRC polls the status of the Dialog hub until the modulators are transmitting and the uplink
ports are enabled.
• The GRC waits for the process to complete and notes the status.
The time to perform the switchover, from the trigger until terminals are operational again on the new
hub, is a few minutes. Traffic is impacted during the switchover process.
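
The following hedged sketch (Python) outlines the switchover sequence above. The endpoints, status fields and polling details are assumptions for illustration, not the documented GRC or Dialog REST API.

# Hedged switchover sketch; URLs and status fields are placeholders.
import time
import requests

ACTIVE_HUB = "https://hub-a.example.local"     # placeholder URLs
PASSIVE_HUB = "https://hub-b.example.local"

def wait_for(hub_url, expected_state, timeout_s=600):
    """Poll a (hypothetical) status endpoint until the hub reports the expected state."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = requests.get(f"{hub_url}/api/status", timeout=5).json().get("state")
        if state == expected_state:
            return
        time.sleep(5)
    raise TimeoutError(f"{hub_url} did not reach state '{expected_state}'")

requests.post(f"{ACTIVE_HUB}/api/admin-state", json={"state": "passive"}, timeout=5)
wait_for(ACTIVE_HUB, "passive")     # modulators muted, uplink ports disabled
requests.post(f"{PASSIVE_HUB}/api/admin-state", json={"state": "active"}, timeout=5)
wait_for(PASSIVE_HUB, "active")     # modulators transmitting, uplink ports enabled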

NMS Geographic Redundancy


The NMS geo-redundancy service requires two geographical locations where a 1+1 active / passive
NMS is defined, but with a non-redundant set of hub modules. The hub modules can be at a
separate geographical location, or at one of the geo-redundant NMS locations.
NMS geo-redundancy is supported when the following conditions are met:
• Both NMS systems are exact copies of each other with respect to Dialog hardware and Dialog
software releases
• Only deployments with an external NMS are supported
• L2 connectivity between the two sites hosting the NMS is required to allow support for GRC virtual
IP addresses
The passive NMS can be accessed by an operator, but will have limited functionality:
• All components are in a 'read-only' mode; provisioning actions are not possible
• There is no monitoring
• The mobility orchestrator will not accept any external API commands
• TICS, if available, will not accept any certification or perform any verification command
The geo-redundancy service provides database synchronization and the switchover process.
Database Synchronization
The GRC regularly synchronizes the provisioning database of the active NMS with the database of
the standby NMS. The synchronization interval is configurable. Additionally, the synchronization can
be triggered at any time, for example before a graceful switchover, ensuring that no provisioning data
is lost.
The GRC performs the following steps during the database synchronization:
1. Set the active NMS in read-only mode.
2. Fetch a backup of the active Dialog NMS provisioning database through RSYNC.
3. Upload the backup to the passive NMS through RSYNC and push the data to its provisioning
database.
All NMS components on the passive site update their own provisioning data, based on any
provisioning deltas pushed to the passive system. The NMS components are in a 'warm
standby' state; they are running but perform no actions except updating provisioning data.
4. Wait for the upload process to complete and note the status.


The hub modules are not affected by database synchronization as they are
continuously updated with any provisioning changes on the active NMS.

Switchover Process
The switchover process will deactivate the active NMS and activate the original passive NMS. The
process will not affect the hub modules and therefore will not impact terminal traffic.
The detailed switchover sequence is as follows:
• The operator triggers a switchover.
• The GRC confirms both NMS systems are accessible.
• The GRC disables the currently active NMS system using REST API.
• The GRC polls the status of the active NMS system until it confirms it is in a passive state.
• The GRC activates the originally passive NMS system using REST API.
• The GRC polls the status of the NMS system until it confirms it is in an active state; additionally, the
system VIP address is rerouted to the newly active NMS.
• Hub modules repoint from the original active NMS to the newly active NMS upon the system VIP
address change.
The time to perform the switchover, from the trigger to a fully up and running NMS on the original
passive site, is less than one hour. Note that traffic is NOT impacted during the switchover process.


6 Abbreviations

Abbreviation Definition

AC Alternating Current

ACM Adaptive Coding Modulation

AMP Air MAC Processor

API Application Programming Interface

ASI Asynchronous Serial Interface

ASW Access Switch

AUPC Automatic Uplink Power Control

BBF Base Band Frame

BBP Base Band Packet

BNC Bayonet Neill Concelman

BSC Bootstrapper/System Configurator

BUC Block Up Convertor

CIR Committed Information Rate

CMS Configuration Management System

CPE Customer Premises Equipment

CPM Continuous Phase Modulation

CPMCTL CPM Controller

CPU Central Processing Unit

CSE Controller Shaper Encapsulator

CSV Comma Separated Values

dB Decibel

DCP Decapsulator

DEM Demarcation Service

DM Device Manager

DMA Dialog Management Application


DNS Domain Name Server

DSCP Differentiated Services Code Point

DSW Distribution Switch

DVB-S2 Digital Video Broadcasting - Satellite - version 2

EDP Edge Data Processing

FNG Fast News Gathering

FPGA Field Programmable Gate Array

FWD Forward

GSE Generic Stream Encapsulation

GUI Graphical User Interface

HDD Hard Disk Drive

HMGW Hub Module Gateway

HMMS Hub Module Management System

HNO Hub Network Operator

HP Hewlett Packard

HPS Hub Processing Segment

HRC™ High Resolution Coding

HRCCTL HRC Controller

HTTP Hyper Text Transfer Protocol

ID Identifier

IDU Indoor Unit

IF Interface / Intermediate Frequency

IMC Inventory Management Client

IMS Inventory Management Server

IP Internet Protocol

ISI Input Stream Identifier

KPI Key Performance Indicator


LAN Local Area Network

LNB Low Noise Block

MAC Media Access Control

MCD Multi Carrier Demodulator

MF-TDMA Multi Frequency-Time Division Multiple Access

MGMT Management

MOD Modulator

MODCOD Modulation and Coding

MPE Multi Protocol Encapsulation

MPEG Moving Picture Experts Group

MRC Multi Resolution Coding

MUC Multiplier Up Convertor

Mx-DMA Newtec Cross-Dimensional Multiple Access™

NCR Network Clock Reference

NMS Network Management System

NNI Network to Network Interface

NxtGen Mx-DMA Next Generation Multi Frequency-Time Division Multiple Access

OA Onboard Administrator

ODU Outdoor Unit

OEM Original Equipment Manufacturer

PDU Power Distribution Unit

PGI Protection Group Instance

PID Packet Identifier

PSU Power Supply Unit

RAM Random Access Memory

RAT Return Accounting Tool

RCG Return Capacity Group


RCM Return Controller Manager

REDCTL Redundancy Controller

REF Reference

REST Representational State Transfer

RF Radio Frequency

RMCP Remote Management and Control Protocol

RTN Return

RTNRTR Return Router

S2X DVB-S2 Extensions

SAS Serial Attached SCSI

SCP Satellite Channel Processing

SCPC Single Channel Per Carrier

SFP Small Form-factor Pluggable

SME Small and Medium Enterprises

SNG Satellite News Gathering

SNMP Simple Network Management Protocol

SQL Structured Query Language

SRV Server

SW Software

TAS Traffic Acceleration Service

TB Transport Based

TCP Transmission Control Protocol

TCS Terminal Configuration Server

TS Transport Stream

UDP User Datagram Protocol

UI User Interface

UNI User to Network Interface


USB Universal Serial Bus

USS Universal Switching System

VLAN Virtual Local Area Network

VM Virtual Machine

VNF Virtual Network Function

VNO Virtual Network Operator

VSAT Very Small Aperture Terminal



Newtec Dialog R2.4.1
ST Engineering iDirect (Europe)
Laarstraat 5
9100 Sint-Niklaas Belgium
+32 3 780 6500
www.idirect.net
