Dialog R2.4.2 Platform Architecture v1.0
Newtec Dialog®
R2.4.2
Revision 1.0
March 11, 2022
© 2022 ST Engineering iDirect (Europe) CY NV and/or its affiliates. All rights reserved.
Reproduction in whole or in part without permission is prohibited. Information contained herein is
subject to change without notice. The specifications and information regarding the products in this
document are subject to change without notice. While every effort has been made to ensure the
accuracy of the statements, information and recommendations in this document, they are provided
without warranty of any kind, express, or implied. Users must take full responsibility for their
application of any products. Trademarks, brand names and products mentioned in this document are
the property of their respective owners. All such references are used strictly in an editorial fashion
with no intent to convey any affiliation with the name or the product's rightful owner.
A caution message indicates a hazardous situation that, if not avoided, may result in
minor or moderate injury. It may also refer to a procedure or practice that, if not
correctly followed, could result in equipment damage or destruction.
A hint message indicates information for the proper operation of your equipment,
including helpful hints, shortcuts or important reminders.
The Dialog platform fully manages all aspects of a service: bandwidth usage, real-time
requirements, network characteristics and traffic classification. The platform offers these services
with carrier grade reliability through full redundancy of the platform components.
The Dialog platform supports multiple traffic types, such as the following:
• Video and audio
• Data
• Voice
• Data casting
The core of the Dialog platform is the Hub, which is located at a physical gateway site. A Dialog
platform can consist of one or more hubs, located at one or more gateways.
A hub consists of one or more Hub Modules. A hub module contains all hardware and software
required for aggregating and processing traffic of one or more satellite networks.
The following types of hub modules exist:
• The 1IF hub module serves one satellite network and is suited for small networks. It provides less
scalability and flexibility than the other hub module types. It is also referred to as HUB6501.
• The 4IF hub module serves up to four satellite networks and is suited for medium to large
networks. It provides flexibility and scalability. It is also referred to as HUB6504.
• The XIF hub module is suited for very large networks and provides full flexibility and scalability. It
can serve up to 18 satellite networks. It is the combination of one or two baseband hub modules
and one processing hub module. The combination of HUB7208 and HUB7318 is referred to as an
XIF hub module.
– The XIF baseband hub module holds the RF devices. It is also referred to as HUB7208.
– The XIF processing hub module holds the processing servers. It is also referred to as
HUB7318. HUB7318 is deployed on the Newtec Private Cloud Infrastructure or NPCI.
Equipment redundancy is supported for all devices in the hub module. A hub module may be
implemented fully redundant, non-redundant or partially redundant.
The Terminal is the equipment located at the end-user’s site. It consists of the outdoor unit
(antenna, LNB and BUC) and the indoor unit, i.e. the modem.
A hub module is connected to an IP backbone at one side and to an RF interface at the other side,
establishing the Satellite Network.
A satellite network is associated with forward link capacity from one physical or virtual (in case of
DVB-S2X Annex M) forward carrier and with the corresponding return link capacity. The forward link
is based on one of the following technologies:
• DVB-S2
• DVB-S2X
• DVB-S2X Annex M.
The return link supports multiple return link technologies:
• 4CPM MF-TDMA
• DVB-S2 and S2-Extensions SCPC
• HRC SCPC and Mx-DMA
• MRC NxtGen Mx-DMA
Network Resources are configured on top of the physical satellite networks and are isolated from
each other using VLAN identifiers. Dialog provides end-to-end network connectivity for three types
of networks:
• Layer 3
• Layer 2
• Multicast
Layer 3 network resources consist of one or more virtual networks. A layer 3 virtual network is an
isolated IPv4 or IPv6 network. Devices within the same virtual network can directly communicate
with each other. A virtual network can independently use its own addressing scheme and the same
addressing schemes can be reused in different virtual networks.
Layer 2 network resources consist of one or more point-to-point virtual connections. A layer 2
point-to-point virtual connection can be considered as a virtual Ethernet pipe, which establishes
isolated communication between two devices.
A multicast network connects an uplink network on the hub side with one or more LAN networks on
the modem side. This consists of a single multicast routing instance providing unidirectional routing
of multicast IP traffic from the uplink network to the modem LAN networks. The MC network can
therefore be compared to a multicast router.
The Dialog platform is managed through a single Network Management System or NMS. The
NMS can be embedded in a hub module or it can be a standalone hub module, which is deployed on
a Private Cloud Infrastructure or NPCI. The standalone NMS on NPCI is referred to as HUB7318.
The NMS provides a unified management interface to monitor, manage and control the Dialog
platform. It serves as a single point of access and embeds the following configuration and
management interfaces:
• Satellite resources
• Network resources
• Service and classification profile management
• Terminal provisioning
• Fault (alarms) and performance (metrics) management
The main components of the cloud architecture are three types of physical nodes (hosts):
• Controller nodes, which
– Run the cloud services needed to manage the cloud infrastructure.
– Manage the other hosts over the internal management network.
– Provide external administration interfaces to clients over the OAM (Operations,
Administration and Management) network.
– Provide basic disk space for storage.
• Storage nodes, which
– Provide dedicated storage for virtual machine persistent disks and for ephemeral disks.
– Have a root disk and one or more storage disks.
– Are optional, but when used:
• They must be deployed in groups of either two or three for reliability. The number of
nodes required per group depends on the replication factor specified for the system.
• They must connect to the internal management network, and to the optional
infrastructure network .
• The network used for storage cluster activity (either the management network, or the
optional infrastructure network) must support 10 GbE.
• Compute nodes, which
– Run the cloud compute services and host the virtual machines, providing CPU, memory,
optional local storage and L2 networking services. The compute nodes also provide L3
networking services, such as L3 routing, floating IP, and NAT services for the virtual
machines.
– Connect to the controller nodes over the internal management network, to the storage
nodes over the optional infrastructure network, and to the provider networks using data
interfaces.
4 Physical Architecture
This chapter describes the hardware devices, interfaces, redundancy configurations and deployment
scenarios of the different hub modules.
4.1.2.1 RF
The 1IF hub module supports one satellite network and has only one RF TX interface (transmit /
uplink) and one RF RX interface (receive / downlink).
The interface used for RF TX depends on the modulator redundancy.
The RF TX interface is:
• The IF- or L-band output interface at the back panel of the modulator in case of no redundancy.
The interface used for RF RX is always the 75 Ohm BNC IN interface of the L-band splitter
(ACC6000).
The 1IF hub module offers two modes for time and frequency synchronization.
• External 10 MHz mode: An external 10 MHz reference source should be connected to the 50
Ohm BNC IN interface on the reference splitter (ACC6000).
• Internal 10 MHz mode: The active modulator provides the 10 MHz reference signal. The output
interface of this signal depends on the modulator redundancy.
The 10 MHz output interface is:
– The 50 Ohm BNC 10 MHz REF OUT interface found at the back panel of the modulator in
case of no redundancy.
4.1.2.3 IP
The 1IF hub module is connected to the customer's IP backbone through specific ports on the
Ethernet distribution switch.
The following ports are used for uplink connectivity:
• Gi1/0/47 for unicast traffic.
In case of redundant switches, the ports of both switches should be connected to the IP backbone.
4.1.3.1 Servers
Two servers are deployed in a redundant setup: SRV-01 and SRV-02. The applications run on
redundant virtual machines in an active-standby redundancy cluster across the servers. The
redundancy behavior depends on the type of sub-systems and VMs. For more information on how
redundancy of the applications and virtual machines work, refer to
Servers, Virtual Machines and Applications for 1IF on page 105.
Modulators
A redundant setup requires two modulators of the same type: 2x M6100 or 2x MCM7500. The
modulators operate in a 1:1 redundancy configuration. When the active modulator fails, the standby
modulator takes over. The 1:1 redundancy is non-revertive.
The RF outputs of both modulators are connected to the TX switch of the USS (output A and B).
Output C of the USS' TX switch serves as the RF TX interface.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)
Both switches of the USS (TX and reference switch) switch simultaneously during a redundancy
swap, making sure that the TX output signal and 10 MHz reference output signal are coming from
the same modulator, i.e. the active one.
In case of the external 10 MHz mode, the external 10 MHz signal is fed into the IN interface of
the reference splitter (ACC6000), and the 10 MHz REF IN interface of both modulators is connected
to the OUT interfaces of this reference splitter.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)
4.1.3.3 Demodulators
The RF IN of a demodulator is connected to the 8-way L-band splitter. A 1IF hub module can have
up to eight demodulators (limited by the 8-way L-band splitter).
NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.
The core of a redundant network in the 1IF hub module is formed by two distribution switches
(DSW-1 and DSW-2). The switches operate in a 1+1 redundancy configuration, where the level of
resilience is referred to as active/active as the backup switch actively participates with the system
during normal operation.
The switches are unaware of the state in any other redundancy scheme. All devices in the hub
module, except NTC2291, have redundant management and data connectivity.
4.1.3.5 Power
The hub module has two Power Distribution Units or PDUs, which can be connected to two different
power circuits for redundancy.
Most devices in the hub module have dual power supplies and are connected to both PDUs. The
following devices have a single power supply and can only be connected to one of the two PDUs:
• Distribution switch(es)
• NTC2291
HUB6504 is delivered with a 19” rack by default. Optionally, you can order the hub
module without a rack.
The NMS sub-system is only deployed if the platform uses embedded NMS.
– The other server handles the Edge/Data Processing (EDP) and must be in an even slot
position (4, 6, 8, 10 or 12).
In case you are using an embedded NMS, the NMS blade servers are in slots 13 to 16 of the
enclosure. The minimum NMS server deployment is one blade server in slot 13.
4.2.2.1 RF
The 4IF hub module supports one to four satellite networks. The RF TX interface(s) (transmit /
uplink) and RX interface(s) (receive / downlink) are available through the interface panel, which is at
the rear of the 19” rack.
The RX and TX interface of each satellite network is terminated on the interface panel as shown in
the figure.
• Transmit is a 50 Ohm interface with BNC connector that outputs the RF signal of the active
modulator.
• Receive is a 75 Ohm interface with BNC connector that receives the RF signal from the antenna
and distributes it to the demodulators.
The frequency synchronization interface is available through the interface panel, which is at the rear
of the 19” rack. The 4IF hub module offers two frequency synchronization modes.
• External 10 MHz mode: An external 10 MHz reference source should be connected to the REF
IN interface on the interface panel.
• Internal 10 MHz mode: The active modulator provides the 10 MHz reference signal. The 10 MHz
output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the interface panel. Each
satellite network has its own REF OUT interface.
4.2.2.3 IP
The IP interfaces are available through the interface panel, which is at the rear of the 19” rack.
• Data 1A/1B/2A/2B are two pairs of redundant 1 GbE interfaces for unicast traffic.
• Data 3A/3B is one pair of redundant 1 GbE interfaces for multicast traffic.
• MGMT-A/B is one pair of redundant 1 GbE interfaces for management traffic.
Internally, the A ports are connected to DSW-1 and the B ports are connected to
DSW-2.
• OA-1/2 is one pair of redundant 1GbE interfaces for accessing the onboard administrator of the
enclosure.
4.2.3.1 Servers
In case you are using an embedded NMS, the NMS blade servers are in slots 13 to 16 of the
enclosure. The minimum NMS server deployment is one blade server in slot 13. To make this
minimum deployment redundant you should add a server in slot 14. Depending on the size of your
Newtec Dialog network, you can add an extra NMS server in slot 15 and optionally a redundant
server in slot 16.
The applications run on redundant virtual machines in an active-standby redundancy cluster across
the blade servers.
The redundancy behavior depends on the type of sub-systems the server is used for.
For more information on application and virtual machine redundancy, refer to
Servers, Virtual Machines and Applications for 4IF on page 106.
Modulators
A redundant setup requires two modulators of the same type per satellite network: 2x M6100 or 2x
MCM7500 per satellite network. These modulators operate in a 1:1 redundancy configuration. When
the active modulator fails, the standby modulator takes over. The 1:1 redundancy is non-revertive.
Modulators can be inserted in rack positions 1 to 8. The positions are clearly numbered on the rack
and the patch panel.
The RF outputs of the modulators are connected to the TX switches of the universal redundancy
switch extension module (USS0203). There are four RF TX switches for the redundancy switching
of the TX signals of the four satellite networks.
The 10 MHz output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the interface panel.
Each satellite network has its own REF OUT interface.
Both switches of the USSs (TX and reference switch) switch simultaneously, making sure that the
TX output signal and the 10 MHz reference output signal come from the same modulator, i.e. the
active one.
In case of the external 10 MHz mode, the external 10 MHz signal is fed into the REF IN
interface on the interface panel.
This signal is forwarded to the first 8-way reference splitter (ACC6000) and distributed to the 10 MHz
REF IN interface of the modulators. The second 8-way reference splitter can be used as a spare part
in case one of the other splitters fails.
4.2.3.3 Demodulators
A 4IF hub module can have up to eight demodulators per satellite network. The two L-band splitters
(ACC6000) provide four 8-way L-band splitters for splitting the RX signal of the four satellite
networks.
Demodulators can be inserted in rack positions 1 to 18. The positions are clearly numbered on the
rack and the patch panel.
NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.
The core of a redundant network in the 4IF hub module is formed by two access switches and two
distribution switches. The switches operate in a 1+1 redundancy configuration.
The switches are unaware of the state in any other redundancy scheme. All devices in the hub
module, except NTC2291, have redundant management and data connectivity.
4.2.3.5 Power
The hub module has two Power Distribution Units (PDUs), which can be connected to two different
power circuits for redundancy.
Most devices in the hub module have dual power supplies and are connected to both PDUs. The
following devices have a single power supply and can only be connected to one of the two PDUs:
• Access switches
• NTC2291
HUB7208 is delivered with a 19” rack by default. Optionally, you can order the hub
module without a rack.
The physical switches are members of the stack.
Number of Modulators
The MCM7500 supports two types of network interfaces:
• 1 Gigabit Ethernet with transceiver modules on RJ-45, fiber or DAC (non-wideband mode).
• 10 Gigabit Ethernet with transceiver modules on fiber or DAC (wideband mode)
The number of modulators depends on the number of satellite networks you want to serve, the type
of network interface used and the configuration of redundancy.
• You can add up to 10 modulators if they all operate in wideband mode. These modulators can be
inserted in rack positions 12 to 21. The positions are clearly numbered on the rack.
• You can add up to 32 modulators if they all operate in non-wideband mode. These modulators
can be inserted in rack positions 1 to 32. The positions are clearly numbered on the rack.
• You can link one satellite network to one non-wideband modulator. You can link multiple satellite
networks to one wideband modulator (which has multiple virtual carriers).
Number of Demodulators
The number and type of demodulators depend on the number of satellite networks you want to
serve, the required return link capacity and supported technologies, and the configuration of
redundancy.
• You need at least one demodulator per satellite network. You can add up to 32 demodulators.
Demodulators can be inserted in rack positions 1 to 32. The positions are clearly numbered on the
rack.
• The number of demodulators that can be used per satellite network is limited to:
– Eight active HRC demodulators, due to the limited processing capacity of the HRC
controller. The HRC controller is a virtual machine, which runs on the processing hub
module.
– One active 4CPM demodulator for the XIF baseband hub module combined with a
HUB7318 processing hub module. This limit is due to the number of CPM controllers that
are deployed on the processing hub module. There's only one CPM controller deployed on
HUB7318. One CPM controller serves one 4CPM demodulator.
– Eight active MCD7000 DVB-S2/S2X demodulators. This is not a hard limit but is driven by
the maximum throughput that can be reached.
– Eight MRC demodulators, of which one is active.
The modulators and demodulators, which serve the same satellite network, can be
spread over multiple baseband hub modules.
The hardware of an XIF processing hub module is referred to as HUB7318. HUB7318 is deployed on
the Newtec Private Cloud Infrastructure or NPCI.
Controller Block
The controller block consists of a Dell PowerEdge FX2s enclosure that includes:
• Two FC430 blade servers that act as controller nodes.
• Two FC430 blade servers that act as storage nodes.
• Two FD332 storage sleds, each containing eight storage disks.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).
Compute Block
The first compute block consists of a Dell PowerEdge FX2s enclosure that includes:
• Three (minimum) to four FC640 blade servers that act as compute nodes.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).
Additionally, you can have two extra compute blocks, each containing:
• One to four FC640 blade servers that act as compute nodes.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).
The number of controller, storage and compute nodes depends on the number and size
of satellite networks you want to serve, the configuration of redundancy and the type of
deployments on NPCI.
4.3.2.1 RF
The RF TX interface(s) (transmit / uplink) and RX interface(s) (receive / downlink) are available on
the baseband hub module through the RF switch matrix.
The RX and TX interface of each satellite network is terminated on the RF switch matrix as shown in
the figure.
• Output TX is a 50 Ohm interface with BNC connector that outputs the RF signal of the active
modulator. You can have up to 16 TX interfaces on the RF switch matrix.
• Input RX is a 50 Ohm interface with BNC connector that receives the RF signal from the antenna
and distributes it to the demodulators. You can have up to 16 RX interfaces on the RF switch
matrix.
A baseband hub module supports up to 16 satellite networks.
Due to the internal cabling of the hub module, the input and output RF signal will suffer some loss.
This loss is less than 3 dB. The RF switch matrix does not generate extra losses.
Time and frequency synchronization of the XIF hub module is based on PTP (Precision Time
Protocol). A PTP source with stable oscillator is used as the external PTP master clock.
The 10 MHz output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the back panel of
the PTP master clock.
The PTP master clock can be slaved to an external 10 MHz reference signal. In this case, an
external 10 MHz reference source should be connected to the 10 MHz REF IN interface on the back
panel of the PTP master clock.
Two PTP source deployment modes exist:
• Dedicated: A redundant pair of PTP sources is available per processing hub module. The PTP
sources are connected directly to the Ethernet distribution switches.
• Shared: A redundant pair of PTP sources is available per hub/gateway. The PTP sources are
connected via an external PTP-enabled switch. This switch in turn is connected to multiple
processing hub modules via the Ethernet distribution switches.
The following interfaces of the distribution switches are used: TEN1/0/25 and TEN2/0/25.
The figure shows the dedicated mode at the left and the shared mode at the right.
The distribution switches in the XIF hub modules are PTP-enabled. The switches slave on a single
master port to the PTP master clock. On all other PTP-enabled ports of the switch, the PTP
messages are regenerated considering the path and processing delay and sent to the connected
slave devices:
• Access switches
• Modulators
• Demodulators
• HRC controller
• MRC Controller
4.3.2.3 IP
In this chapter:
• Network Connectivity using HPE 5710 Switch on page 41
• Network Connectivity using HPE 5700 Switch on page 42
• Management Connectivity to Enclosures on page 44
For multicast data traffic, insert CAT6A cables into the following redundant 1 GbE or 10 GbE RJ45 ports
on the TOR switches:
• TEN 1/0/4 of TOR-DSW-M1
• TEN 2/0/4 of TOR-DSW-M2
For the remote management of HUB7318, the following redundant 1 GbE RJ45 ports on the TOR
switches are available:
• TEN 1/0/8 of TOR-DSW-M1
• TEN 2/0/8 of TOR-DSW-M2
For multicast data traffic, insert CAT6A cables into the following redundant 1 GbE or 10 GbE RJ45 ports
on the TOR switches:
• TEN 1/0/28 of TOR-DSW-M1
• TEN 2/0/28 of TOR-DSW-M2
For the remote management of HUB7318, the following redundant 1 GbE RJ45 ports on the TOR
switches are available:
• TEN 1/0/32 of TOR-DSW-M1
• TEN 2/0/32 of TOR-DSW-M2
One baseband hub module can be connected to one processing hub module.
One processing hub module can be connected to up to two baseband hub modules.
The baseband and processing hub module are connected through ports on the access switches of
the baseband hub module and the distribution switches of the processing hub module.
The type of switches used in the baseband hub module(s) and the processing hub
module must be the same. The type is either HPE FlexFabric 5710 48XGT
6QS+/2QS28 switches or HPE FlexFabric 5700 32XGT 8XG 2QSFP+ switches
(legacy).
The connection depends on the switch type: HPE 5710 or HPE 5700.
HPE 5710
For traffic between HUB7318 and the first baseband hub module, the following ports are connected:
• For management traffic
– Connect TEN 1/0/49:1 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/49:2 of TOR-DSW-M1
– Connect TEN 2/0/49:1 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/49:2 of TOR-DSW-M2
If you are using a second baseband hub module, the following ports are connected:
• For management traffic
– Connect TEN 1/0/49:1 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/49:3 of TOR-DSW-M1
– Connect TEN 2/0/49:1 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/49:3 of TOR-DSW-M2
HPE 5700
For traffic between HUB7318 and the first baseband hub module, the following ports are connected:
• For management traffic
– Connect XGE 1/0/34 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/35 of TOR-DSW-M1
– Connect XGE 2/0/34 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/35 of TOR-DSW-M2
• For data traffic
– Connect XGE 1/0/40 of ASW-1-1 member 1 (BBSW-1) to TEN 1/0/36 of TOR-DSW-M1
– Connect XGE 2/0/40 of ASW-1-1 member 2 (BBSW-2) to TEN 2/0/36 of TOR-DSW-M2
If you are using a second baseband hub module, the following ports are connected:
• For management traffic
– Connect XGE 1/0/34 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/37 of TOR-DSW-M1
– Connect XGE 2/0/34 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/37 of TOR-DSW-M2
• For data traffic
– Connect XGE 1/0/40 of ASW-2-1 member 1 (BBSW-1) to TEN 1/0/38 of TOR-DSW-M1
– Connect XGE 2/0/40 of ASW-2-1 member 2 (BBSW-2) to TEN 2/0/38 of TOR-DSW-M2
4.3.3.1 Servers
Controller Block
The controller block has two redundant controller nodes and two redundant storage nodes. The
redundant pairs work in hot-standby mode.
Compute Block
The compute nodes host Virtual Network Functions or VNFs. The virtual network functions
correspond with the virtual machines of hub modules that are not deployed on NPCI.
The VNFs are hosted on the compute nodes in a random way but can be divided into the following
sub-systems:
• Hub Module Management System or HMMS, which provides the internal management
functionality of the hub module.
• Hub Processing Segment or HPS, which aggregates and processes the data of one satellite
network.
The redundancy behavior of the VNFs depends on the type of sub-systems the VNF belongs to.
When one of the compute nodes fails, its VMs are redistributed over the remaining compute nodes
in a best effort manner.
For more information on application and virtual machine redundancy, refer to
Servers, Virtual Network Functions and Applications for XIF on page 110.
4.3.3.2 Modulators
Modulators in the XIF baseband hub module operate in an N:M pool redundancy configuration, with
N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of modulators. Dedicated redundancy pools should be configured for each type of
modulator. The type depends on the modes (e.g. MCM7500 1G, MCM7500 10G) of the device.
The redundancy pools can be defined across different satellite networks and can have one or more
redundant modulators protecting multiple satellite networks. Redundancy pools cannot be
configured across baseband hub modules. The number of redundant devices can be 0.
The RF output of each modulator is connected to an input port of the RF switch matrix. The RF
output signal of each active modulator is switched to the associated TX interface of the satellite
network. The association between the inputs and outputs of the RF switch matrix is defined by
configuration and the modulators’ redundancy state:
• Multiple RF output signals can be multiplexed on the same TX interface (Tx 2 in the figure below).
• Multiple TX interfaces can be used for redundancy, carrying identical (duplicated) RF output
signals (Tx 3 and Tx 4 in the figure below).
Redundant modulators in the redundancy pool can replace a failing modulator of any satellite
network. When the active modulator in a satellite network fails, a standby modulator takes over.
After the redundancy swap, the standby modulator has become the new active modulator. The
failing modulator is isolated from the active network and when the failure is fixed, it can be used as a
standby modulator. The N:M redundancy is non-revertive.
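The non-revertive behavior of such a pool can be pictured with a short sketch. The following Python fragment is an illustrative model only; the device names and the 3:1 pool size are assumptions for the example and are not part of the platform software.

# Illustrative model of a non-revertive N:M redundancy pool (not platform code).
class RedundancyPool:
    def __init__(self, active, standby):
        self.active = set(active)    # devices currently carrying traffic
        self.standby = set(standby)  # devices ready to take over

    def fail(self, device):
        """Active device fails: a standby device from the pool takes over."""
        self.active.discard(device)
        if self.standby:
            replacement = self.standby.pop()
            self.active.add(replacement)
            return replacement
        return None  # pool exhausted, no protection left

    def repair(self, device):
        """Repaired device re-enters the pool as standby; no switch-back (non-revertive)."""
        self.standby.add(device)

# Example: three active modulators protected by one standby (3:1 pool).
pool = RedundancyPool(active=["MOD-1", "MOD-2", "MOD-3"], standby=["MOD-4"])
pool.fail("MOD-2")     # MOD-4 becomes the new active modulator
pool.repair("MOD-2")   # MOD-2 now protects the pool as a standby device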
The number of modulators depends on the number of satellite networks you want to serve, the type
of network interface used and the configuration of redundancy:
• You can add up to 10 modulators if they all operate in wideband mode. These modulators can be
inserted in rack positions 12 to 21. The positions are clearly numbered on the rack.
• You can add up to 32 modulators if they all operate in non-wideband mode. These modulators
can be inserted in rack positions 1 to 32. The positions are clearly numbered on the rack.
4.3.3.3 Demodulators
Demodulators in the XIF baseband hub module operate in an N:M pool redundancy configuration,
with N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of demodulators. Dedicated redundancy pools should be configured for each type
of demodulator. The type can depend on the technology (e.g. CPM, HRC, S2) or the capabilities
(e.g. CPM 16MHz, HRC 17MBd/70MHz) of the device.
The redundancy or device pools can be defined across different satellite networks and can have one
or more redundant demodulators protecting multiple satellite networks. For example, you can have a
redundancy pool of 24:1, with 24 active HRC demodulators and 1 redundant HRC demodulator. Eight
active HRC demodulators are linked to one satellite network, another eight to a second satellite
network, and another eight to a third satellite network. The standby demodulator is the redundant
demodulator for all three satellite networks.
Redundancy pools cannot be configured across baseband hub modules. The number of redundant
devices can be 0.
The RF input of each demodulator is connected to an output port of the RF switch matrix. The signal
on any RX interface of the satellite network is switched to the associated demodulator. The
association is defined by configuration and the demodulators’ redundancy state:
• Multiple satellite network RX signals can be multiplexed on the same RX interface.
• Multiple RX interfaces can be forwarded to the same output port of the RF switch matrix.
• Multiple RX interfaces can be used for redundancy, carrying identical (duplicated) satellite network
RX signals.
Redundant demodulators in the redundancy pool can replace a failing demodulator of any satellite
network. When the active demodulator in a satellite network fails, a standby demodulator takes over.
After the redundancy swap, the standby demodulator has become the new active demodulator. The
failing demodulator is isolated from the active network and when the failure is fixed, it can be used
as a standby demodulator. The N:M redundancy is non-revertive.
The number and type of demodulators depends on the required return link capacity and supported
technologies, and the configuration of redundancy. You can add up to 32 demodulators.
Demodulators can be inserted in rack positions 1 to 32. The positions are clearly numbered on the
rack.
4.3.3.5.1 HUB7208
The four access switches of the XIF baseband hub module are connected in a ring topology and are
configured in an IRF fabric (stack). This switch stack acts as one logical switch. The physical
switches are members of the stack.
All devices within the hub module have redundant management and data connectivity.
4.3.3.5.2 HUB7318
The two TOR switches of HUB7318 are configured in an IRF (Intelligent Resilient Framework) stack.
This switch stack acts as one logical switch. The physical switches are members of the stack.
The processing hub module supports two types of switches: HPE FlexFabric 5710
48XGT 6QS+/2QS28 or HPE FlexFabric 5700 32XGT 8XG 2QSFP+ (legacy).
The different switch types cannot be mixed.
Redundant ports are available for the unicast, multicast, and management uplink.
The switches are unaware of the state in any other redundancy scheme. All devices within the hub
module have redundant management and data connectivity.
For more information about network redundancy, refer to Redundancy for Network Connectivity on
page 101.
4.3.3.6 Power
Each XIF hub module has two Power Distribution Units (PDUs), which can be connected to two
different power circuits for redundancy.
All devices in the hub module have dual power supply and are connected to both PDUs.
Controller Block
The controller block consists of a Dell PowerEdge FX2s enclosure that includes:
• Two FC430 blade servers that act as controller nodes.
• Two FC430 blade servers that act as storage nodes.
• Two FD332 storage sleds, each containing eight storage disks.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).
Compute Block
The first compute block consists of a Dell PowerEdge FX2s enclosure that includes:
• Three (minimum) to four FC640 blade servers that act as compute nodes.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).
Additionally, you can have two extra compute blocks, each containing:
• One to four FC640 blade servers that act as compute nodes.
• Two FN410T Ethernet switches, which provide connection with TOR-DSW and interconnect the
blade servers in the enclosure.
• One Chassis Management Controller (CMC).
4.4.3.1 Server
Controller Block
The controller block has two redundant controller nodes and two redundant storage nodes. The
redundant pairs work in a hot-standby mode.
Compute Block
The compute nodes host Virtual Network Functions or VNFs. The VNFs are hosted on the compute
nodes in a random way and use the anti-affinity policy. Anti-affinity means that two instances of the
same VNF (e.g. DMA-1 and DMA-2) are never hosted on the same compute node. As a result, the
NMS requires at least two compute nodes.
When one of the compute nodes fails, its VMs are redistributed over the remaining compute nodes
in a best effort manner.
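The anti-affinity rule can be illustrated with a small placement sketch. The node and VNF names below are hypothetical and the function is not the actual NPCI scheduler; it only shows why two instances of the same VNF require at least two compute nodes.

# Illustrative anti-affinity placement (not the actual NPCI scheduler).
def place_vnfs(instances, nodes):
    """Assign each (vnf, instance) pair to a node; two instances of the same
    VNF (e.g. DMA-1 and DMA-2) never share a compute node."""
    placement = {node: [] for node in nodes}
    for vnf, instance in instances:
        # Prefer the least loaded node that does not already host this VNF.
        for node in sorted(nodes, key=lambda n: len(placement[n])):
            if all(hosted_vnf != vnf for hosted_vnf, _ in placement[node]):
                placement[node].append((vnf, instance))
                break
        else:
            raise RuntimeError(f"No node available for {instance} (anti-affinity)")
    return placement

# Two compute nodes are the minimum for a redundant VNF pair.
print(place_vnfs([("DMA", "DMA-1"), ("DMA", "DMA-2")], ["compute-0", "compute-1"]))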
Connectivity towards your management infrastructure is always done through redundant ports on the
distribution switches. For more information on the redundant ports, refer to NMS External Interfaces
on page 54.
4.4.3.3 Power
Each NMS hub module has two Power Distribution Units (PDUs), which can be connected to two
different power circuits for redundancy.
All devices in the hub module have dual power supply and are connected to both PDUs.
The most basic hub module deployment is a single hub module with embedded NMS. A single hub
module deployment can be done with a 1IF hub module or a 4IF hub module.
A multiple hub module deployment consists of more than one hub module, which can be geographically
spread. The number of extra hub modules depends on the number of satellite networks and the
number of terminals. The multiple hub module deployment can be handled by an embedded NMS.
However, when the Newtec Dialog Platform has many hub modules, satellite networks and
terminals, a standalone NMS hub module is required.
In case of XIF hub module with HUB7318, the standalone NMS can be deployed on the same NPCI
infrastructure.
5 Functional Architecture
The functional architecture consists of management plane functions, control plane functions and
data plane functions. The Dialog hub consists of one or more hub modules and one NMS module
which is responsible for the central control and management plane functions. The hub modules
serve one or more satellite networks.
A satellite network consists of dedicated data plane functions which process and forward packets
and frames from the network edge at the hub side to the network edge at the terminal side, and vice
versa.
The control plane is responsible for the adaptive bandwidth management on both FWD link and RTN
link based on link quality and bandwidth requests.
The management plane interacts with the operator and integrates fault management, configuration
management, accounting, performance monitoring and security functionality. It is available at hub
module level and terminal level, and at central level. Via the central NMS module, the operator
obtains a consolidated management view of the system.
The NMS module also has central control plane functions, which are responsible for mobility
management and modem certification. The central mobility control plane functionality offers a
northbound interface to external systems, allowing an operator to implement its own business logic
for beam handover when appropriate.
A network connects an uplink network on the hub side to one or more LAN networks behind the
modems.
An uplink network is identified by a customer-defined VLAN on a specific uplink interface. There can
be multiple uplink interfaces on a hub module to provide the required network capacity (for example,
multiple 1 GbE uplinks). Each uplink interface handles a distinct set of networks. The uplink
interfaces are:
• Aggregated interfaces grouping multiple (typically two) physical interfaces for redundancy.
• Configured in trunk mode, allowing multiple VLANs to be defined on top of them.
The LAN network on the modem side is also identified by a VLAN. This can either be a native or
untagged VLAN.
Layer 3 Network
A layer 3 or L3 network connects an uplink network (1 in the figure below) on the hub side with one
or more LAN networks on the modem side (2 and 3 in the figure below). This consists of a single
routing instance (Virtual Routing and Forwarding or VRF) which provides bidirectional routing of IP
traffic between the uplink network and the modem LAN networks. The L3 network can therefore be
compared to a router.
The L3 network is identified by a configurable VLAN tag on the uplink interface and the LAN
interface of the modem.
You can configure multiple L3 networks in the same satellite network, resulting in multiple VRF
instances. Each instance has an isolated routing and addressing context, allowing private address
ranges to be reused for the different networks in the same satellite network. You can also terminate
multiple L3 networks on the same modem, again resulting in multiple VRF instances with isolated
routing and addressing contexts.
The number of virtual networks supported per satellite network depends on the type of hub module.
• 1IF: 50
• XIF: 256
The number of virtual networks supported on a terminal depends on the type of terminal:
• MDM2200: 4
• MDM2210: 4
• MDM2500: 4
• MDM2510: 16
• MDM3100: 8
• MDM3300: 8
• MDM3310: 16
• MDM3315: 16
• SMB3310: 16
• SMB3315: 16
• MDM5000: 16
• MDM5010: 24
The uplink network can be used in networks on different satellite networks within the same hub
module. This provides a network that extends over terminals in multiple satellite networks. The layer
3 network configuration should be defined per satellite network in a consistent way avoiding any
network conflicts.
The L3 network supports two modes for assigning IP addresses or subnets to the LAN network of
the modem. The modes are:
• Dedicated Subnet
• Shared Subnet
In Dedicated Subnet mode, the modem receives a unique and dedicated range of IPv4 and/or IPv6
addresses. One IP address from this range is assigned to the modem's network interface. The
remaining addresses in the range are available for the hosts behind the modem. The modem can
serve as a DHCP server for the allocation of the IP addresses. If the modem is not used as a DHCP
server, another device in the LAN has to act as the DHCP server, or a static IP address on each host
has to be configured.
In Shared Subnet mode, the modem receives a single unique IP address for the host behind the
modem. This IP address is taken from a centrally managed IPv4 and/or IPv6 address pool. The IP
address of the modem's network interface in a shared subnet is always the first IP address of this
pool. This address is used as proxy IP address on each modem that receives an IP address from the
same address pool. The host behind the modem will behave as if it is part of a larger subnet. By
means of Proxy ARP on the modem, the host will be able to reach other hosts in the same subnet
but connected to different modems.
In case of NAT or Network Address Translation, the IP address is not handed out to the host but
remains on the modem and acts as the routable IP address.
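The difference between the two modes can be illustrated with Python's standard ipaddress module. The subnet sizes, pool range and terminal names below are arbitrary examples, not platform defaults.

import ipaddress

# Dedicated Subnet mode: each terminal is provisioned with its own subnet.
# The first usable address goes to the modem's network interface, the
# remaining addresses are available for the hosts behind the modem.
terminal_subnets = {
    "TERM-A": ipaddress.ip_network("10.10.0.0/29"),
    "TERM-B": ipaddress.ip_network("10.10.0.8/29"),
}
for terminal, subnet in terminal_subnets.items():
    hosts = list(subnet.hosts())
    print(terminal, "modem:", hosts[0], "hosts:", hosts[1:])

# Shared Subnet mode: one centrally managed pool. The first address of the
# pool is used as the proxy IP address on every modem, and each terminal's
# host receives a single unique address from the remainder of the pool.
pool = list(ipaddress.ip_network("10.20.0.0/24").hosts())
proxy_address = pool[0]
host_addresses = dict(zip(["TERM-A", "TERM-B"], pool[1:]))
print("proxy:", proxy_address, "hosts:", host_addresses)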
Layer 2 Network
A layer 2 or L2 network is a point-to-point virtual connection and can be considered as a virtual
Ethernet pipe which establishes isolated communication between two devices. A L2 network
connects an uplink network (1 in the figure below) on the hub side with a single LAN network on the
modem side (2 in the figure below). This consists of a single switching instance providing
bidirectional switching of Ethernet traffic between the uplink network and the modem LAN network.
The L2 network can therefore be compared to a switch with two ports connecting the uplink network
and the modem LAN network in a single broadcast domain.
The layer 2 network does not support terminal to terminal communication.
The L2 network is identified by a configurable single or double VLAN tag on the uplink interface and
by a configurable single VLAN tag on the LAN interface of the modem. The supported VLAN tagging
is according to the IEEE 802.1Q standard (0x8100). You can use different VLAN tags on the hub
and modem sides for the same L2 network.
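The single and double tagging can be made concrete with a few lines of Python that build the raw IEEE 802.1Q tag bytes. The VLAN IDs are arbitrary example values.

import struct

def dot1q_tag(vlan_id, priority=0, tpid=0x8100):
    """Build a 4-byte IEEE 802.1Q tag: TPID (0x8100) followed by PCP/DEI/VID."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

# Single tag, as used on the LAN interface of the modem.
single = dot1q_tag(vlan_id=100)

# Double tag (both tags use TPID 0x8100), as can be configured on the uplink
# interface: the outer tag comes first, followed by the inner tag.
double = dot1q_tag(vlan_id=200) + dot1q_tag(vlan_id=100)

print(single.hex(), double.hex())   # 81000064 810000c881000064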
Layer 2 point-to-point virtual connections are only supported on 4IF and XIF hub
modules with HP switches.
Multicast Network
A multicast or MC network connects an uplink network (1 in the figure below) on the hub side with
one or more LAN networks on the modem side (2 and 3 in the figure below). This consists of a
single multicast routing instance providing unidirectional routing of multicast IP traffic from the uplink
network to the modem LAN networks. The MC network can therefore be compared to a multicast
router.
The MC network is identified by a configurable VLAN tag on the uplink interface. It connects to the
'native' or 'untagged' VLAN on the modem LAN network.
You can configure only one MC network in a satellite network. Different satellite networks can reuse
the same MC VLAN on the uplink interface, creating a multicast network covering multiple satellite
networks.
The MC network features are listed below.
5.1.1 Architecture
The data plane is functionally scoped into four different segments. Each segment has its own
functionality. On the hub side, the functionality is provided by physical devices and virtual machines
or virtual network functions. This is shown in the table below. Segments 2, 3, and 4 correspond with
the HPS (Hub Processing Segment) subsystem.
On the terminal side, the functionality of the different segments is provided by applications in the
modem.
The four functionalities are described below.
• Baseband Processing embodies the satellite physical layer baseband processing related to
modulation (transmit) and demodulation (receive).
• Satellite Channel Processing is responsible for the encapsulation and decapsulation of
baseband frames associated with a "satellite channel". It also manages the satellite bandwidth
and offers QoS for both FWD (forward) and RTN (return) link in collaboration with the segment 2
control plane functions.
• Data Processing is dedicated to specific “protocol enhancement”. It enhances end-to-end
protocol behavior over satellite and reduces bandwidth needs via network optimization
technologies, such as header compression, payload compression and packet aggregation. It also
implements payload encryption and decryption.
• Edge Processing implements the “network edge” and controls the endpoints of the satellite
channels. It interacts directly with terrestrial networking functionality at both the data plane and control
plane level. For layer 3, all router functionality is implemented here.
Each satellite network is a combination of these four segments.
Layer 3, layer 2 and multicast networks can be mixed within the scope of one satellite network. The
segment 4 network instances at the hub are transparently connected to the segment 4 instances at
the modem. Each connection is established through a 'satellite channel' as shown in different colors
in the figure below.
For L3 networks, the network connection passes through segment 3, 2 and 1. For example, networks
1 and 2 in the figure above.
For L2 networks, the network connection passes through segment 3, 2 and 1. Each layer 2 network
instance connects with a single remote network instance on only one modem. For example, network
3 in the figure above.
For multicast networks, the network connection passes through segment 2 and 1. Segment 3 is
bypassed because no protocol enhancement is performed on multicast traffic. The multicast network
scope connects with the remote network instance on all modems in the satellite network. For
example, network 4 in the figure above.
5.1.2 Functions
Each segment can be organized in functional blocks, which perform specific task(s). The diagrams
below show the data and control plane functions for the hub module and modem side.
For simplicity, the diagram above only shows one demodulator and return link controller. In
reality, you have a demodulator and return link controller per return link technology.
For simplicity, the diagram above only shows one modulating block. In reality, you have a
modulating block per return link technology.
Segment 3
The ACM client sends line quality feedback to the ACM controller on the CSE. The ACM feedback is
sent over the return link. In case of MRC, HRC and S2, the feedback is forwarded from the DCP to
the ACM controller. In case of 4CPM, the feedback is forwarded from the CPMCTL to the ACM
controller. Based on this quality feedback, the ACM controller notifies the encapsulator of the
required MODCOD for the modem and sends ACM configuration messages to the shaper. The ACM
client extracts the ACM configuration messages from the forward link signaling.
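The ACM principle amounts to a threshold lookup on the reported link quality. The sketch below is purely illustrative: the MODCOD names, Es/N0 thresholds and margin are example values, not the tables used by the ACM controller.

# Illustrative ACM decision: map reported link quality to a MODCOD.
# MODCOD names and thresholds are example values only.
MODCOD_TABLE = [          # (minimum Es/N0 in dB, MODCOD)
    (13.0, "16APSK 3/4"),
    (9.0, "8PSK 3/4"),
    (5.0, "QPSK 3/4"),
    (1.0, "QPSK 1/4"),
]

def select_modcod(esn0_db, margin_db=0.5):
    """Pick the most efficient MODCOD whose threshold is still met after margin."""
    for threshold, modcod in MODCOD_TABLE:
        if esn0_db - margin_db >= threshold:
            return modcod
    return MODCOD_TABLE[-1][1]   # fall back to the most robust MODCOD

# The controller would notify the encapsulator of this MODCOD for the modem.
print(select_modcod(esn0_db=10.2))   # -> 8PSK 3/4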
Forward Link Control
The CSE sends layer 2 forward signaling to control the forward link selection. It also sends the
population ID signaling, containing the Air MAC address of the provisioned terminals. At the modem
the signaling and path timing is extracted from the forward link. The path timing is forwarded to the
time and frequency controller on the modem, which slaves the modem clock to the hub clock. The
forward signaling is forwarded to the FWD controller on the modem.
Return Link Control
The CPMCTL controls the dynamic behavior of 4CPM. It assigns bandwidth capacity to modems
based on their capacity requests.
The modem inserts the return capacity requests into the return link. The CPM demodulator extracts
the 4CPM return related layer 2 signaling, including return capacity requests and ACM feedback from
the modems and sends it to the CPMCTL. The CPMCTL inserts the 4CPM layer 2 return signaling
with the bandwidth assignments into the forward link. The modem extracts the capacity assignments
from the forward link.
For MRC and HRC, the return link controller (MRCCTL or HRCCTL) controls the dynamic behavior
of the return technology. The DCP sends the capacity requests from the terminals to the controller
and the controller inserts the layer 2 return signaling with bandwidth assignments into the forward
link. The modem extracts the return signaling from the forward link.
The S2CTL performs the following functions:
• Controlling the DVB-S2(ext) return carrier resources.
• Managing the DVB-S2(ext) demodulator hardware resources.
• Configuring the remote terminals.
• Controlling the ACM functionality of the DVB-S2(ext) return link.
• Controlling the AUPC functionality of the DVB-S2(ext) return link.
Modem Controller
The modem controller is the main control and management application of the modem and performs
the following functions:
• Storing data.
• Handling configuration updates of the data path.
• Handling the terminal installation flow and terminal authentication.
• Communicating with the Terminal Control Server or TCS.
• Providing data output to the modem GUI.
• Providing terminal statistics to the TCS.
In each hub module a number of generic VMs or VNFs are deployed. These are part of the HMMS
sub-system and are common to all satellite networks configured in the hub module.
• BSC: responsible for bootstrap management and system configuration management; it also runs
the inventory management client.
• LOG: stores metrics.
The NMS sub-system is only deployed on the hub module if the platform uses an
embedded NMS. In case of a standalone NMS, these components run on the NMS
hub module.
NMS
• Dialog Mobility Manager (DMM): Hosts the Mobility API, which is used for communication between
a Mobility Manager and the hub. Only applicable for mobile terminals which switch beams during
operation. (VM: CMS)
• Logging: Getting all logs from the different system modules; digesting the logs and visualizing
them via the centralized logging of the Dialog NMS; storing the metrics in a database.
(VMs: LOG, MON)
Hub Module
• Return Controller management: Managing settings for the Return technology (CPM, HRC, MRC,
S2(X)) per terminal; managing switching between Return technologies on a terminal. (VM: HMGW)
• Logging: Getting all logs from the different system modules; digesting the logs and visualizing
them via the centralized logging of the Dialog NMS; storing the metrics in a database. (VM: LOG)
Platform
5.2.1.1 Components
Inventory management plays an important role in the installation and configuration of the Newtec
Dialog platform. Through inventory management, the customer specifies the actual hardware
configuration and deployment of his platform. When a customer updates the Newtec Dialog platform
(for example, adding or removing equipment, adding capacity, and so on), inventory management is
used to define the changes and to push the necessary configurations to all involved equipment.
Inventory management provides flexibility, scalability and modularity in the Newtec Dialog platform.
Based on the items configured by the customer, inventory management bootstraps the necessary
devices and VMs/VNFs on the platform. The NMS components also learn from inventory
management which devices are available and what the capacity and redundancy configuration is, in
order to set up the configuration and monitoring of the platform.
The Inventory Management Server or IMS gives the Newtec Dialog customer the ability to
configure the inventory of his Newtec Dialog platform in terms of hub modules, satellite networks,
RF equipment (modulators/demodulators), servers and redundancy. The IMS functionality is part of
the nms-conf application running on the CMS, which is deployed on the NMS.
The Inventory Management Client or IMC is an application running on the BSC, which is deployed
on each hub module. The IMC learns the IMS configuration for the hub module during installation or
when changing the inventory, and triggers actions to apply the necessary configuration to the hub
module.
The first entity that should be defined in the IMS is the hub gateway, which corresponds to the
physical location of the hub. There can be one or more gateways depending on your Newtec Dialog
platform constellation: a Newtec Dialog platform can consist of multiple hubs at different gateway
locations.
Next, the hub modules that are located at the gateway should be defined.
The hub module configuration includes the following items (an illustrative inventory sketch follows this list):
• The number of satellite networks
• The RF devices and their location in the rack (does not exist for the XIF processing hub module)
• The pool and/or chain redundancy of the RF devices. The IMS validates the input and gives
feedback to the client on whether the redundancy scheme can be applied. The redundancy controller
learns the devices and redundancy schemes from the IMS.
When devices are added to increase capacity, this impacts the redundancy scheme.
• The servers and their location in the rack (is not available for the XIF processing hub module
deployed on NPCI)
• The HPS pools.
• Uplink ports to use
• Number of enclosures, use of USS, PTP mode
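To make the inventory structure concrete, the sketch below shows a hypothetical inventory definition as it could be entered through the IMS. All field names and values are illustrative assumptions that mirror the items listed above; they do not reproduce the real IMS data model.

# Hypothetical inventory definition; field names and values are illustrative only.
inventory = {
    "gateway": "GW-1",                          # physical location of the hub
    "hub_modules": [
        {
            "name": "HUB-1",
            "type": "4IF",
            "satellite_networks": 2,
            "rf_devices": [
                {"model": "MCM7500", "role": "modulator", "rack_position": 1},
                {"model": "MCM7500", "role": "modulator", "rack_position": 2},
                {"model": "MCD7000", "role": "demodulator", "rack_position": 3},
            ],
            "redundancy": {"modulators": "1:1"},    # pool/chain schemes per device type
            "servers": [{"slot": 1}, {"slot": 4}],
            "uplink_ports": ["Data 1A", "Data 1B"],
        }
    ],
}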
5.2.2.1 Components
• nms-conf is responsible for maintaining the central configuration model in an SQL database. It
exposes interfaces (REST API) that allow the user to change or update the configuration of the
system, either directly via REST calls or via the GUI. It ensures consistency of the model and
enforces both security (permissions) and validation rules with regard to the boundaries of the
system before writing the changes into the configuration database.
• Device managers are deployed on or near the device they're managing and are responsible for
applying configuration changes to the device. They keep themselves in sync with nms-conf by
polling the CMS every second for changes. This polling mechanism ensures that the system
recovers automatically from temporary outages. If the communication fails between a device
manager and the CMS, the NMS fault & performance application will trigger a 'device manager
outdated' alarm. During this communication failure period, it is still possible to change the
configuration via REST or GUI. Once the communication is restored, the device manager can poll
the CMS again and apply the configuration changes (if present).
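The polling behavior of a device manager can be pictured with a short sketch. The URL, endpoint path and response fields below are hypothetical and only illustrate the poll-and-apply loop; they are not the documented nms-conf REST API.

import time
import requests

CMS_URL = "https://cms.example.net/api/config"   # hypothetical endpoint
last_revision = None

def apply_to_device(config):
    """Placeholder for pushing the received configuration to the managed device."""
    print("applying configuration revision", config.get("revision"))

while True:
    try:
        # Poll the CMS for the latest configuration (the real device manager
        # polls every second); retrying after a failure gives automatic recovery.
        response = requests.get(CMS_URL, timeout=5)
        response.raise_for_status()
        config = response.json()
        if config.get("revision") != last_revision:
            apply_to_device(config)
            last_revision = config.get("revision")
    except requests.RequestException:
        # Communication failure: the NMS raises a 'device manager outdated'
        # alarm; the loop simply keeps polling until connectivity is restored.
        pass
    time.sleep(1)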
The core of the resources model is the satellite network. A satellite network is the combination of
specific forward and return link resources on which terminals can be provisioned.
The Satellite Resources can be configured via the Newtec Dialog GUI or via REST
API.
When the physical satellite network has been configured, it needs to be linked to actual satellite
resources. The satellite resources correspond with a beam, which covers a geographical area in
which terminals are serviced.
The forward link is defined as the link from the hub over the satellite to the terminals. The forward
link can use the DVB-S2 and DVB-S2X standard as well as the DVB-S2X Annex M standard. The
forward resources model contains the configuration of the forward carrier (frequency as seen by the
terminal, symbol rate, roll-off factor, etc.) and the configuration of the quality of service capacity
(QoS DSCPs, CIR/PIR/Weights of the different pools). The forward resources should also be linked
to a transponder and a logical satellite network.
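As an illustration of the forward resources model, the fragment below assembles a forward carrier definition and posts it to a configuration endpoint. The endpoint path, field names and values are hypothetical; they only mirror the parameters named above and do not reproduce the actual REST schema.

import requests

# Hypothetical payload illustrating the forward resources parameters.
forward_carrier = {
    "name": "FWD-1",
    "frequency_hz": 11750000000,       # carrier frequency as seen by the terminal
    "symbol_rate_baud": 45000000,
    "rolloff": 0.05,
    "standard": "DVB-S2X",
    "qos_pools": [
        {"name": "pool-A", "cir_kbps": 20000, "pir_kbps": 50000, "weight": 10},
        {"name": "pool-B", "cir_kbps": 5000, "pir_kbps": 20000, "weight": 5},
    ],
    "transponder": "TP-1",             # link to a transponder
    "satellite_network": "satnet-1",   # link to a logical satellite network
}

# Hypothetical REST call; the real API path and authentication differ.
response = requests.post("https://nms.example.net/api/forward-resources",
                         json=forward_carrier, timeout=10)
response.raise_for_status()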
The return link is defined as the link from the terminals over the satellite to the hub. The return link in
Newtec Dialog supports following access and coding & modulation technologies:
• MF-TDMA - 4CPM
• SCPC - DVB-S2 and S2 Extensions
• SCPC - HRC
• Mx-DMA – HRC
• NxtGen Mx-DMA - MRC
The return link contains both the frequency plan and capacity plan of the return path. Multiple
technologies and carriers can be used simultaneously in the return for one satellite network. The
return link has a couple of common configuration parameters (QoS DSCPs used in the return, the
local oscillator frequency in the return) and consists of one or more return capacity groups. The
return capacity groups bundle RF-related properties for a given return technology:
• For DVB-S2 return capacity groups, this is the spectral range (defined as frequency as transmitted
by the terminal) that can be used to provision DVB-S2 carriers in.
• For HRC SCPC return capacity groups, these are the spectral range and the ACM settings for the
terminals in this RCG (whether or not to use ACM, min and max MODCOD).
• For HRC Mx-DMA return capacity groups, these are the spectral range, the ACM settings and the
symbol rate terminal should use when logging in.
• For MRC NxtGen Mx-DMA return capacity groups, these are the spectral range, the ACM settings
and the symbol rate the terminal should use when logging in.
• For 4CPM return capacity groups, this is the entire carrier frequency plan.
For multiple access return technologies (MF-TDMA, Mx-DMA, NxtGen Mx-DMA), the capacity of the
return capacity group is split in return pools, each with CIR/PIR/weights (as in the forward link).
Terminals are attached in the forward to a forward pool and in the return either to a return capacity
group directly for SCPC technologies or to a return pool otherwise.
The network resources are configured on top of a satellite network and are isolated from each other
using VLAN identifiers.
Network resources can be grouped into:
• Layer 3 network resources
• Layer 2 network resources
The Network Resources can be configured via the Newtec Dialog GUI or via REST
API.
The main difference between shared and dedicated subnet types is that in a shared subnet, IP
addresses belong to shared IP address pools and are handed out one address at a time, whereas in a
dedicated subnet no address pools are used and each terminal is provisioned with an entire subnet.
The following overview gives a more extensive comparison:
• IP addresses per terminal
  – Shared subnet: One IP address per terminal. The terminal acts as a "bridge" and captures the
    traffic to other IP addresses in the same subnet.
  – Dedicated subnet: One subnet per terminal. The terminal acts as a "router".
• Security
  – Shared subnet: VNOs are granted access to IP pools, and thereby gain access to the network.
  – Dedicated subnet: VNOs are granted access to the virtual network directly.
Layer 2 point-to-point virtual connections are only supported on 4IF and XIF hub
modules with HP switches.
A layer 2 point-to-point virtual connection can be considered as a virtual Ethernet pipe, which
establishes isolated communication between two devices. A layer 2 point-to-point connection is
completely agnostic of the payload of the Ethernet frame. Any L2 Ethernet traffic (IP, ARP, OAM
flows) is passed transparently between these two devices. The forwarding is based on VLAN tags.
5.2.2.2.3 Profiles
Profiles allow common settings to be reused across a (large) number of terminals. The system supports the
following profiles:
• Classification Profile, which contains rules for classifying incoming traffic into traffic classes. The
classification profile also specifies if the classified traffic needs to be marked with DSCP values or
not.
• Service Profile, which defines the shaping parameters (CIR/PIR/weight) for the terminal circuit
and QoS classes.
• Attachment Profile, which is a group of attachments and each attachment defines a beam, a
satellite network, a forward pool and a return pool. This is used for terminals that can be
operational in multiple beams.
• Firewall Profile, which allows you to block all incoming traffic except traffic that matches
specific rules.
• BGP Profile, which contains rules to filter the routes which are exchanged across BGP peers.
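To make the profile concept concrete, the sketch below shows how classification, service and attachment profiles could be expressed. All names, fields and values are illustrative assumptions, not the actual Dialog data model.

```python
# Illustrative profile definitions; field names and values are assumptions.
classification_profile = {
    "name": "voip-and-web",
    "rules": [
        # Classify incoming traffic into traffic classes, optionally remarking the DSCP value.
        {"match": {"protocol": "udp", "dst_port": 5060}, "class": "voice", "set_dscp": "EF"},
        {"match": {"protocol": "tcp", "dst_port": 443},  "class": "web",   "set_dscp": None},
    ],
}

service_profile = {
    "name": "business-10M",
    # Shaping parameters for the terminal circuit and for the individual QoS classes.
    "terminal": {"cir_bps": 2_000_000, "pir_bps": 10_000_000, "weight": 5},
    "classes": {
        "voice": {"cir_bps": 512_000, "pir_bps": 1_000_000,  "weight": 10},
        "web":   {"cir_bps": 0,       "pir_bps": 10_000_000, "weight": 1},
    },
}

attachment_profile = {
    "name": "multi-beam",
    # One attachment per beam in which the terminal can be operational.
    "attachments": [
        {"beam": "beam-1", "satellite_network": "SATNET-1",
         "forward_pool": "pool-A", "return_pool": "rp-gold"},
        {"beam": "beam-2", "satellite_network": "SATNET-2",
         "forward_pool": "pool-C", "return_pool": "rp-gold-2"},
    ],
}
```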
5.2.2.2.4 Security
The security system uses Domains and Users. The Hub Network Operator (HNO) and Virtual
Network Operators (VNO) are domains and can have one or multiple users. Access to resources is
granted to domains (for example domain VNO-1 has access to forward pool A, return pool B,
service profiles C and D, virtual network E, ...). Users are granted roles (for example read-only
users).
Domains are hierarchical: typically there is one "System" domain, which has one HNO domain, and
all VNO domains belong to this HNO domain.
Resources are linked to domains in two different ways:
• All resources belong to a domain (this is also reflected in the resources identifier). Users of the
owning domain can make changes to the resource.
• Domains can be granted access to resources. Users of these domains cannot change the
resources, but can use them, for example to provision a terminal.
The resources that can be assigned to (VNO) domains are:
• Forward Attachments (Forward Pools)
• Return Attachments (Return Pools, S2 and HRC SCPC Return Capacity Groups)
• IP Pools
• Dedicated subnets
• Different profiles, such as service profiles and classification profiles
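The ownership-versus-access distinction can be summarised in a small sketch (domain and resource names are invented for illustration):

```python
# Minimal sketch of domain hierarchy and resource access; illustrative only.
domains = {
    "System": {"parent": None},
    "HNO":    {"parent": "System"},
    "VNO-1":  {"parent": "HNO"},
}

resources = {
    # Every resource is owned by exactly one domain (reflected in its identifier) ...
    "HNO.fwd-pool-A":    {"owner": "HNO", "granted_to": {"VNO-1"}},
    # ... and can additionally be granted to other domains, which may use but not modify it.
    "HNO.return-pool-B": {"owner": "HNO", "granted_to": {"VNO-1"}},
}

def can_modify(domain: str, resource: str) -> bool:
    return resources[resource]["owner"] == domain

def can_use(domain: str, resource: str) -> bool:
    entry = resources[resource]
    return domain == entry["owner"] or domain in entry["granted_to"]

assert can_use("VNO-1", "HNO.fwd-pool-A") and not can_modify("VNO-1", "HNO.fwd-pool-A")
```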
5.2.2.2.5 Terminal
• User-friendly UI
• Fully customizable e-mail reports
Newtec Dialog uses Skyline DataMiner® software for fault and performance monitoring, logging
and reporting. DataMiner® (DMA) delivers out-of-the-box support, providing drivers for devices to
monitor such as Linux servers, Cisco devices and Newtec equipment.
The standard configuration is a single DMA cluster or DMS cluster. This single cluster has three
redundant pairs of DMA agents or instances running. The standard cluster configuration can be
extended with extra agents as the scale of your Dialog network grows. The number of required
agents is based on dimensioning requirements.
Dialog's initial fault and performance management system relied fully on DMA for metrics collection,
processing and visualization. DMA is part of the NMS and directly connects to all required
subsystems to gather data across the Dialog system.
To realize a more scalable metrics collection architecture, the system has been transformed into a
hierarchical collection model in which data is collected as close to the data source as possible and
stored in hub-local Time Series DataBases (TSDBs). Users can access these TSDBs for local data
collection. This guarantees access to performance metrics without depending on the availability of
the link between the NMS and the hub modules.
DMA remains available for visual inspection and for raising alarms, but using it as a data source to
fetch metrics for further processing is discouraged. Instead, external tooling should use the TSDB
API to extract recent data and transform and load it into back-end systems external to the Dialog
system.
All terminal metrics and all satellite network related performance data are collected locally at the hub
module and exposed via a local TSDB interface using the InfluxDB query language. By default,
metrics are collected at 30-second intervals and a guaranteed retention period of three weeks is
provided for the raw data. Both the collection interval and the retention period are configurable.
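As an example of how external tooling could read these metrics, the sketch below queries a hub-local TSDB over the standard InfluxDB HTTP query endpoint. The host name, database name, measurement and field names are assumptions; only the InfluxQL syntax and the /query endpoint are standard InfluxDB.

```python
import requests

# Hypothetical hub-local TSDB; database, measurement and field names are assumptions.
TSDB_URL = "http://hub-module.example.net:8086/query"

params = {
    "db": "dialog_metrics",
    # InfluxQL: last hour of a terminal metric, aggregated per 30 s collection interval.
    "q": (
        'SELECT mean("esn0_db") FROM "terminal_metrics" '
        "WHERE \"terminal\" = 'MDM2510-0001' AND time > now() - 1h "
        "GROUP BY time(30s)"
    ),
}

resp = requests.get(TSDB_URL, params=params, timeout=10)
resp.raise_for_status()
payload = resp.json()
series_list = payload["results"][0].get("series", []) if payload.get("results") else []
for series in series_list:
    for timestamp, value in series["values"]:
        print(timestamp, value)
```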
The element monitoring of devices, such as modulators and demodulators, is still done by DMA.
5.3 Redundancy
Multiple redundancy schemes and mechanisms are used in the Newtec Dialog hub module. The
following redundancy schemes are distinguished:
• Redundancy for RF devices
• Redundancy for server hardware and Virtual Machines
• Redundancy for applications
• Redundancy for network connectivity
• Redundancy for power
5.3.1.1 Interaction
The redundancy controller is in charge of the redundancy. It determines the status and health of
every physical device and all virtual machines. Based on the status, the redundancy controller
makes the decision whether a redundancy swap is required or not.
The redundancy controller polls the IMS via the REST API for the topology information. From this
topology information, the redundancy controller learns which devices are involved in the redundancy
schemes and how to monitor them. The redundancy controller polls the RF devices for alarms using
either RMCP or SNMP. The redundancy controller uses the Ethernet switches and the USS
switches to swap the devices.
A pool is defined as a group of (virtual) devices of the same type, where every device performs a
role (service) and where typically there is one standby device that can take the role/service from any
device from the group.
For example: a pool redundancy of N:M means that there are N active devices and M standby
devices, which are ready to take over in case of failure of any of the N active ones.
Every status change of a device from the pool triggers evaluation of the pool status. If a device fails
and the standby device is OK, then a pool swap occurs.
The redundancy controller will configure the standby device with the role of the failed device and try
to isolate the defective device.
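The pool-swap decision can be summarised in a short sketch (simplified and illustrative; the actual redundancy controller logic is more involved):

```python
# Simplified sketch of N:M pool redundancy; not the actual redundancy controller code.
def evaluate_pool(pool: list[dict]) -> None:
    """Re-evaluated on every device status change in the pool."""
    failed   = [d for d in pool if d["role"] is not None and not d["healthy"]]
    standbys = [d for d in pool if d["role"] is None and d["healthy"]]

    for dead in failed:
        if not standbys:
            break                        # no healthy standby left: redundancy is degraded
        spare = standbys.pop()
        spare["role"] = dead["role"]     # standby device takes over the failed device's role
        dead["role"] = None              # failed device is isolated and left without a role
```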
Redundant demodulators are organized in a pool redundancy. Pool redundancy is also applied
within a protection group of the virtual machines.
A chain is defined as a group of devices, which as a whole provide full service. If one of the devices
in a chain fails, the whole chain is declared as defect. This will trigger a swap of the service to
another chain, also known as a chain swap.
In the Newtec Dialog HUB6504 and HUB6501 hub modules, a chain is used for the 4CPM return link
with the modulator, the NTC2291 burst demodulators and the USS.
Every status change of a device triggers the evaluation of the overall chain status. The decision
whether a chain is OK or not is straightforward: if any of the devices in a chain is defective, then the
chain is not OK. Only when every device in the chain is OK is the chain considered OK. The
redundancy controller keeps track of the currently active and standby chain, so it is able to decide
whether a swap is required or not.
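The chain evaluation itself reduces to a few lines (again a simplified, illustrative sketch rather than the actual implementation):

```python
# Simplified sketch of chain redundancy evaluation.
def chain_ok(chain: list[dict]) -> bool:
    # A chain is OK only if every device in the chain is OK.
    return all(device["healthy"] for device in chain)

def evaluate_chains(active: list[dict], standby: list[dict]) -> tuple[list[dict], list[dict]]:
    # Swap the service to the standby chain when the active chain degrades.
    if not chain_ok(active) and chain_ok(standby):
        return standby, active           # the standby chain becomes active, and vice versa
    return active, standby
```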
A redundancy swap is an action consisting of multiple commands executed in a certain order on
multiple devices.
These commands, executed on a device, have the following goals:
• Make the device itself active/standby.
• In the standby state, isolate the device as much as possible, so that all inputs and outputs are
cut and harmful signals cannot be sent out.
5.3.2 RF Devices
5.3.2.1 1IF
5.3.2.1.1 Modulators
A redundant setup requires two modulators of the same type: 2x M6100 or 2x MCM7500. The
modulators operate in a 1:1 redundancy configuration. When the active modulator fails, the standby
modulator takes over. The 1:1 redundancy is non-revertive.
The RF outputs of both modulators are connected to the TX switch of the USS (output A and B).
Output C of the USS' TX switch serves as the RF TX interface.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)
Both switches of the USS (TX and reference switch) switch simultaneously during a redundancy
swap, making sure that the TX output signal and 10 MHz reference output signal are coming from
the same modulator, i.e. the active one.
In case of the external 10 MHz mode, the external 10 MHz signal is inputted to the IN interface of
the reference splitter (ACC6000) and the 10 MHz REF IN interface of both modulators is connected
to the OUT interfaces of this reference splitter.
(The figure below shows the setup for redundant M6100 modulators. The same setup is used for
redundant MCM7500 modulators.)
5.3.2.1.2 Demodulators
The RF IN of a demodulator is connected to the 8-way L-band splitter. A 1IF hub module can have
up to eight demodulators (limited by the 8-way L-band splitter).
NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.
5.3.2.2 4IF
5.3.2.2.1 Modulators
Modulator redundancy requires two modulators of the same type per satellite network: 2x M6100 or
2x MCM7500 per satellite network. These modulators operate in a 1:1 redundancy configuration.
When the active modulator fails, the standby modulator takes over. The 1:1 redundancy is
non-revertive.
Modulators can be inserted in rack position 1 to 8. The positions are clearly numbered on the rack
and the patch panel.
The RF outputs of the modulators are connected to the TX switches of the universal redundancy
switch extension module (USS0203). There are four RF TX switches for the redundancy switching
of the TX signals of the four satellite networks.
In case of the internal 10 MHz mode, the 10 MHz REF OUT interfaces of the modulators are
connected to the 10 MHz reference switch of the universal redundancy switch main module
(USS0202). There are four 10 MHz reference switches for the redundancy switching of the 10 MHz
reference output signal of the modulators.
The 10 MHz output interface is the 50 Ohm BNC 10 MHz REF OUT interface on the interface panel.
Each satellite network has its own REF OUT interface.
Both switches of the USSs (TX and reference switch) switch simultaneously making sure that TX
output signal and 10 MHz reference output signal are coming from the same modulator, i.e. the
active one.
In case of the external 10 MHz mode, the external 10 MHz signal is inputted to the REF IN
interface on the interface panel.
This signal is forwarded to the first 8-way reference splitter (ACC6000) and distributed to the 10 MHz
REF IN interface of the modulators. The second 8-way reference splitter can be used as spare part
in case one of the other splitters fails.
5.3.2.2.2 Demodulators
A 4IF hub module can have up to eight demodulators per satellite network. The two L-band splitters
(ACC6000) provide four 8-way L-band splitters for splitting the RX signal of the four satellite
networks.
Demodulators can be inserted in rack position 1 to 18. The positions are clearly numbered on the
rack and the patch panel.
NTC2291
Because the NTC2291 demodulator has no redundant network connectivity and no redundant
power supply, N:M redundancy cannot be used. Instead, the NTC2291 demodulators operate in an
N:N chain redundancy. Taking into account the limitation of the modulator with respect to NCR/ASI
signaling, you can have up to four NTC2291 demodulators per chain.
5.3.2.3 XIF
5.3.2.3.1 Modulators
Modulators in the XIF baseband hub module operate in an N:M pool redundancy configuration, with
N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of modulators. Dedicated redundancy pools should be configured for each type of
modulator. The type depends on the modes (e.g. MCM7500 1G, MCM7500 10G) of the device.
The redundancy pools can be defined across different satellite networks and can have one or more
redundant modulators protecting multiple satellite networks. Redundancy pools cannot be
configured across baseband hub modules. The number of redundant devices can be 0.
The RF output of each modulator is connected to an input port of the RF switch matrix. The RF
output signal of each active modulator is switched to the associated TX interface of the satellite
network. The association between the inputs and outputs of the RF switch matrix is defined by
configuration and the modulators’ redundancy state:
• Multiple RF output signals can be multiplexed on the same TX interface (Tx 2 in the figure below).
• Multiple TX interfaces can be used for redundancy, carrying identical (duplicated) RF output
signals (Tx 3 and Tx 4 in the figure below).
Redundant modulators in the redundancy pool can replace a failing modulator of any satellite
network. When the active modulator in a satellite network fails, a standby modulator takes over.
After the redundancy swap, the standby modulator has become the new active modulator. The
failing modulator is isolated from the active network and when the failure is fixed, it can be used as a
standby modulator. The N:M redundancy is non-revertive.
The number of modulators depends on the number of satellite networks you want to serve, the type
of network interface used and the configuration of redundancy:
• You can add up to 10 modulators if they all operate in wideband mode. These modulators can be
inserted in rack positions 12 to 21. The positions are clearly numbered on the rack.
• You can add up to 32 modulators if they all operate in non-wideband mode. These modulators
can be inserted in rack positions 1 to 32. The positions are clearly numbered on the rack.
5.3.2.3.2 Demodulators
Demodulators in the XIF baseband hub module operate in an N:M pool redundancy configuration,
with N active devices and M standby devices. A baseband hub module can have one or more
redundancy pools of demodulators. Dedicated redundancy pools should be configured for each type
of demodulator. The type can depend on the technology (e.g. CPM, HRC, S2) or the capabilities
(e.g. CPM 16MHz, HRC 17MBd/70MHz) of the device.
The redundancy or device pools can be defined across different satellite networks and can have one
or more redundant demodulators protecting multiple satellite networks. For example, you can have a
redundancy pool of 24:1, with 24 active HRC demodulators and 1 redundant HRC demodulator. Eight
active HRC demodulators are linked to one satellite network, another eight active HRC demodulators
are linked to a second satellite network, and another eight HRC demodulators are linked to a third
satellite network. The standby demodulator is the redundant demodulator for all three satellite
networks.
Redundancy pools cannot be configured across baseband hub modules. The number of redundant
devices can be 0.
The RF input of each demodulator is connected to an output port of the RF switch matrix. The signal
on any RX interface of the satellite network is switched to the associated demodulator. The
association is defined by configuration and the demodulators’ redundancy state:
• Multiple satellite network RX signals can be multiplexed on the same RX interface.
• Multiple RX interfaces can be forwarded to the same output port of the RF switch matrix.
• Multiple RX interfaces can be used for redundancy, carrying identical (duplicated) satellite network
RX signals.
Redundant demodulators in the redundancy pool can replace a failing demodulator of any satellite
network. When the active demodulator in a satellite network fails, a standby demodulator takes over.
After the redundancy swap, the standby demodulator has become the new active demodulator. The
failing demodulator is isolated from the active network and when the failure is fixed, it can be used
as a standby demodulator. The N:M redundancy is non-revertive.
The number and type of demodulators depends on the required return link capacity and supported
technologies, and the configuration of redundancy. You can add up to 32 demodulators.
Demodulators can be inserted in rack position 1 to 32. The positions are clearly numbered on the
rack.
5.3.3.1 1IF
The switches operate in a 1+1 redundancy configuration. The level of resilience is referred to as
active/active because the backup switch actively participates in the system during normal operation.
For uplink connectivity, the following ports on both switches should be connected to the backbone
infrastructure:
• Gi1/0/47 for unicast traffic.
• Gi1/0/45 for multicast traffic.
For management connectivity, the following port on both switches should be connected to the backbone
infrastructure:
• Gi1/0/48 for management traffic.
The redundant unicast uplink ports are configured separately (no aggregation) and STP (Spanning
Tree Protocol) is enabled. The following connectivity schemes are supported towards the customer
infrastructure:
• Layer 3
• Layer 2/3, with STP enabled on all unicast uplink VLANs
• Layer 2/3 without STP; STP in the 1IF hub module handles loop prevention
The switches are unaware of the state in any other redundancy scheme.
All devices in the hub module, except NTC2291, have redundant management and data connectivity.
5.3.3.2 4IF
The distribution and access switches operate in a 1+1 redundancy configuration. The level of
resilience is referred to as active/active because the backup switch actively participates in the
system during normal operation.
For uplink connectivity, two pairs of redundant interfaces are available:
• Data 1A/1B/2A/2B are two pairs of redundant 1 GbE interfaces for unicast traffic.
• Data 3A/3B is one pair of redundant 1 GbE interfaces for multicast traffic.
For management connectivity, one pair of redundant interfaces is available:
• MGMT-A/B is one pair of redundant 1 GbE interfaces for management traffic.
The interfaces can be found on the interface panel, which is located at the rear of the 19” rack.
Internally, the A ports are connected to DSW-1 and the B ports are connected to
DSW-2.
The redundant unicast uplink ports are configured separately (no aggregation). STP is not enabled
on the unicast uplink ports due to the use of QinQ (dot1q tunneling). The following connectivity
schemes are supported towards the customer infrastructure:
• Layer 3
• Layer 2/3 with STP enabled on all unicast uplink VLANs
The switches are unaware of the state in any other redundancy scheme.
All devices in the hub module, except NTC2291, have redundant management and data connectivity.
5.3.3.3 XIF
5.3.3.3.1 HUB7208
The four access switches of the XIF baseband hub module are connected in a ring topology and are
configured in an IRF fabric (stack).
All devices within the hub module have redundant management and data connectivity.
5.3.3.3.2 HUB7318
The two TOR switches of HUB7318 are configured in an IRF (Intelligent Resilient Framework) stack.
This switch stack acts as one logical switch. The physical switches are members of the stack.
The processing hub module supports two types of switches: HPE FlexFabric 5710
48XGT 6QS+/2QS28 or HPE FlexFabric 5700 32XGT 8XG 2QSFP+ (legacy).
The different switch types cannot be mixed.
Redundant ports are available for the unicast, multicast, and management uplink. The ports used
depend on the type of switch: HPE 5710 or HPE 5700.
Unicast Traffic
HPE 5710
• TEN 1/0/5 of TOR-DSW-M1
• TEN 2/0/5 of TOR-DSW-M2
OR
• TEN 1/0/49:1 of TOR-DSW-M1
• TEN 2/0/49:1 of TOR-DSW-M2
HPE 5700
• 1 GbE or 10 GbE RJ45 ports
– TEN1/0/29 of TOR-DSW-M1
– TEN2/0/29 of TOR-DSW-M2
OR
• 10 GbE SFP+ ports
– TEN1/0/33 of TOR-DSW-M1
– TEN2/0/33 of TOR-DSW-M2
The redundant unicast uplink ports are configured in a Link Aggregation, creating a single logical link
with double capacity. Link Aggregation is possible thanks to the Stacked Switch implementation on
the TOR switches. This design results in a single logical switch and a single logical unicast uplink
connection, creating a loop-free topology by design. The following connectivity schemes are supported
towards the customer infrastructure:
Multicast Traffic
HPE 5710
• TEN 1/0/4 of TOR-DSW-M1
• TEN 2/0/4 of TOR-DSW-M2
HPE 5700
• TEN1/0/28 of TOR-DSW-M1
• TEN2/0/28 of TOR-DSW-M2
Management Traffic
HPE 5710
• TEN 1/0/8 of TOR-DSW-M1
• TEN 2/0/8 of TOR-DSW-M2
HPE 5700
• TEN1/0/32 of TOR-DSW-M1
• TEN2/0/32 of TOR-DSW-M2
The switches are unaware of the state in any other redundancy scheme. All devices within the hub
module have redundant management and data connectivity.
5.3.5.1 1IF
Two servers are deployed in a redundant setup: SRV-01 and SRV-02. Each server runs the same
set of virtual machines or VMs. The virtual machines work in an active-standby redundancy cluster
across the servers.
Some VMs are common:
• BSC-0-(redid)
• LOG-0-(redid)
• MON-0-(redid)
Others are grouped in the following sub-systems:
• HMMS (Hub Module Management System), which provides the internal management functionality
of the hub module.
– HMGW-0-(redid)
– REDCTL-0-(redid)
– TCS-(hpsid)-(redid)
• HPS (Hub Processing Segment), which deals with data processing, such as encapsulation and
decapsulation, acceleration, demarcation etc.
– PGICSE-(pgiid)
– PGIDCP-(pgiid)
– PGICPMCTL-(pgiid)
– PGIHRCCTL-(pgiid)
– PGIMRCCTL-(pgiid)
– PGIS2XCTL-(pgiid)
– PGITAS-(pgiid)
– PGIL2DEM-(pgiid)
– PGIDEM-(pgiid)
• NMS (Network Management System), which provides centralized management functionality of the
entire Newtec Dialog Platform.
The NMS sub-system is only deployed if the platform uses an embedded NMS.
– CMS-(redid)
– DMA-1-(redid)
For HPS
The VMs of the HPS sub-system use the concept of “protection groups” for redundancy. Within a
protection group a 1:1 redundancy configuration is applied. The 1IF hub module has one protection
group and an instance of this protection group is deployed on each server (SRV-01 and SRV-02).
If an application on a VM in the active instance of the protection group fails, the application is first
restarted. If this does not fix the problem, the complete set of VMs is activated on the other instance
of the protection group and the VMs on the first instance become standby.
For example: The Tellinet application on PGITAS-1 VM fails in instance 1 of the protection group
(deployed on SRV-01). A restart of the application does not help. PGITAS-1 is considered as a failed
VM and the redundancy controller swaps the complete set of VMs to instance 2 of the protection
group (deployed on SRV-02). The VMs on instance 1 become standby.
5.3.5.2 4IF
The NMS sub-system is only deployed if the platform uses embedded NMS.
In case you are using an embedded NMS, the NMS blade servers are in slots 13 to 16 of the
enclosure. The minimum NMS server deployment is one blade server in slot 13. To make this
minimum deployment redundant you should add a server in slot 14. Depending on the size of your
Newtec Dialog network, you can add an extra NMS server in slot 15 and optionally a redundant
server in slot 16.
Each redundant server runs the same set of virtual machines or VMs.
• HMMS sub-system
– The servers are located in slot 1 (non-redundant) and slot 2 (redundant)
– BSC-0-(redid)
– LOG-0-(redid)
– MON-0-(redid)
– HMGW-0-(redid)
– REDCTL-0-(redid)
– TCS-(hpsid)-(redid)
• HPS sub-system
– The servers are located in slots 3 up to 12.
– A hub processing segment is deployed on two blade servers:
– One blade server handles the Satellite Channel Processing (SCP) and must be in an
uneven slot position (3, 5, 7, 9 or 11). Following fixed set of virtual machines is deployed on
an SCP server:
• PGICSE-(pgiid)
• PGIDCP-(pgiid)
• PGICPMCTL-(pgiid)
• PGIHRCCTL-(pgiid)
• PGIMRCCTL-(pgiid)
• PGIS2XCTL-(pgiid)
– The other blade server handles the Edge/Data Processing (EDP) and must be in an even
slot position (4, 6, 8, 10 or 12). Following fixed set of virtual machines is deployed on an
EDP server:
• PGITAS-(pgiid)-(tasid)
• PGIL2DEM-(pgiid)
• PGIDEM-(pgiid)
• NMS sub-system
– The servers are located in slots 13 up to 16.
– The following fixed set of virtual machines is deployed on the first NMS server:
• BSC-0-(redid)
• LOG-0-(redid)
• MON-0-(redid)
• CMS-(redid)
• DMA-1-(redid)
– Two DMA instances will always be deployed on each additional NMS server:
• DMA-2-(redid)
• DMA-3-(redid)
The virtual machines work in an active-standby redundancy cluster across the servers. The
redundancy behavior depends on the type of sub-systems.
For HPS
The VMs of the HPS sub-system use the concept of “protection groups” for redundancy. Within a
protection group an N:M redundancy configuration is applied. The 4IF hub module has two
protection groups:
• SCP protection group
• EDP protection group
An instance of the SCP protection group is deployed on each blade server, which is in an uneven
slot position.
An instance of the EDP protection group is deployed on each blade server, which is in an even slot
position.
The collection of one instance of each protection group corresponds with one hub processing
segment (HPS). The number of protection group instances (PGIs) and thus the number of HPSs
depends on the number of satellite networks you want to serve with the 4IF hub module.
For the maximum number of satellite networks (four), you will need four hub processing segments
(HPSs) per enclosure, each consisting of an SCP blade server and an EDP blade server. To provide
redundancy, only one extra HPS (i.e. one SCP and one EDP blade server) can be added (only 10
blade server slots are available). This means that you will have five SCP protection group instances
and five EDP protection group instances per enclosure, providing 4:1 redundancy within a protection
group.
The two protection groups are independent from each other, meaning that the redundancy controller
can select any blade server from the SCP protection group and any blade server from the EDP
protection group to serve a satellite network.
For each protection group, the instances can be modeled as devices in a device pool. The
redundancy is controlled by the redundancy controller (REDCTL), which can simply use pool
redundancy logic within the protection group.
For example: The Tellinet application on the TAS-1 on instance 1 (device 1) of the EDP protection
group (device pool) fails. A restart of the application does not help. TAS-1 is considered as a failed
VM and the redundancy controller activates the complete set of VMs on instance 3 (device 3) of the
EDP protection group (device pool). The VMs on instance 1 (device 1) become standby.
5.3.5.3 XIF
In the processing hub module deployed on NPCI, you cannot link VNFs to specific compute nodes.
The VNFs are hosted on the compute nodes in a random way but can be divided in the following
sub-systems:
• HMMS sub-system
– BSC-0-(redid)
– LOG-0-(redid)
– MON-0-(redid)
– HMGW-0-(redid)
– REDCTL-0-(redid)
– TCS-(hpsid)-(redid)
• HPS sub-system
– Hub module transport
• PGICSE-(pgiid)
• PGIDCP-(pgiid)
• PGITAS-(pgiid)-(tasid)
• PGIL2DEM-(pgiid)
• PGIDEM-(pgiid)
– Hub module control
• PGICPMCTL-(pgiid)
• PGIHRCCTL-(pgiid)
• PGIMRCCTL-(pgiid)
• PGIS2XCTL-(pgiid)
For HMMS
VNFs of the Hub Module Management or HMMS are hosted on the compute nodes using the
anti-affinity policy. Anti-affinity means that two instances of the same VNF (e.g. TCS-1 and TCS-2)
are never hosted on the same compute node. As a result, HMMS requires at least two compute
nodes. The redundancy is controlled by platform functions.
When an application on the VNF fails, the application is first restarted. If this does not fix the
problem, the VNF on the other compute node takes over and the VNF with the failing application
becomes standby.
For example: An application on HMGW-0-1 VNF fails. A restart of the application does not help.
HMGW-0-1 is considered as a failed VNF and the redundancy controller swaps all applications from
HMGW-0-1 to HMGW-0-2 on another compute node. The other VNFs, which did not fail, do not
swap.
For HPS
The VNFs of this sub-system use the concept of “protection groups” for redundancy. Each VNF
type has its own protection group (PG), meaning that there are nine protection groups:
• PGICSE
• PGIDCP
• PGICPMCTL
• PGIHRCCTL
• PGIMRCCTL
• PGIS2XCTL
• PGITAS
• PGIDEM
• PGIL2DEM
The collection of one instance of each protection group corresponds with one hub processing
segment or HPS.
The number of protection group instances (PGIs) and thus the number of HPSs depends on the
number of satellite networks you want to serve with the XIF hub module.
HPSs are grouped logically into HPS pools; one HPS pool can have zero to six HPSs, or zero to six
instances of the protection groups. Within a protection group of the same HPS pool an N:M
redundancy configuration is applied. You can have one to three HPS pools.
For the maximum number of satellite networks (18), you need six hub processing segments (HPSs)
per HPS pool. Each HPS has an instance of the complete set of VNFs running. To have redundancy,
there is one extra instance of the set of VNFs added to the HPS pool. This means that you will have
seven VNF PGIs per HPS pool, providing 6:1 redundancy within the VNF protection group of the
HPS pool.
For each VNF protection group, the instances can be modeled as devices in a device pool. The
redundancy is controlled by the redundancy controller (REDCTL), which can simply use pool
redundancy logic within the VNF protection group.
For example: The Tellinet application in instance 1 (device 1) of the TAS protection group (device
pool) fails. A restart of the application does not help. TAS-1 (device 1) is considered as a failed VNF
and the redundancy controller activates the applications on instance 3 (device 3) of the TAS
protection group (device pool). Instance 1 of the TAS protection group becomes standby.
When one of the compute nodes fails, its VNFs are redistributed over the remaining
compute nodes in a best effort manner. When no space is available, some VNFs
cannot be deployed and redundancy will be broken. This will only happen if the hub
module was not dimensioned well.
5.3.5.4 NMS
Controller Block
The controller block has two redundant controller nodes and two redundant storage nodes. The
redundant pairs work in hot-standby mode.
Compute Block
The compute nodes host Virtual Network Functions or VNFs.
• BSC-0-(redid)
• LOG-0-(redid)
• MON-0-(redid)
• CMS-(redid)
• DMA-1-(redid)
• DMA-2-(redid)
• DMA-3-(redid)
The VNFs are hosted on the compute nodes in a random way and use the anti-affinity policy.
Anti-affinity means that two instances of the same VNF (e.g. DMA-1 and DMA-2) are never hosted
on the same compute node. As a result, the NMS requires at least two compute nodes.
When one of the compute nodes fails, its VMs are redistributed over the remaining compute nodes
in a best effort manner.
The geo-redundancy controller or GRC is at the heart of the geo-redundancy service and has the
following responsibilities:
• Synchronize the provisioning data between the two sites at regular intervals.
• Perform the switchover from the original (failing) active to the new active site without any manual
operator intervention.
The trigger for a switchover is a manual action of the operator, who optionally has a user
interface (dashboard) with all necessary active / passive hub module KPIs to make an informed
decision to trigger the switchover.
Dialog's geo-redundancy solution can be used for:
• Disaster recovery
• Mitigating severe weather conditions
• Service impacting maintenance or upgrades
The following geo-redundant Dialog deployment scenarios are supported:
• Full hub geo-redundancy consists of two geographically distant located hubs, including both a
set of hub modules and an (embedded) NMS. The hub modules are exact copies of each other.
• NMS geo-redundancy consists of a set of hub modules located at one or more teleports and a
1+1 redundant set of external NMS, typically located at different locations than the hub modules.
The trigger for a switchover is always a manual action; the switchover process itself is
fully automated.
Switchover Process
The switchover process will switch all Dialog hub components from the active to the passive
location. There is no option to only switch over the NMS, or to only switch over a specific hub
module.
There are two types of switchover triggers:
• Graceful switchover, in which case the administrative status (active or passive) and accessibility of
both hubs is checked. If the status is not OK or the gateway cannot be reached, the switchover
will not occur.
• Forced switchover, in which case the administrative status can be forced regardless of the actual
administrative status, or in case the administrative status cannot be determined. The operator is
responsible to ensure that there are no conflicts in the operation of both hubs.
The detailed switchover sequence is as follows:
• The operator triggers a switchover.
• The GRC confirms both Dialog hubs are accessible.
• The GRC disables the currently active Dialog hub using the REST API.
• The GRC polls the status of the Dialog hub until the modulators are no longer transmitting and the
uplink ports are disabled.
• The GRC enables the currently passive Dialog hub using the REST API.
• The GRC polls the status of the Dialog hub until the modulators are transmitting and the uplink
ports are enabled.
• The GRC waits for the process to complete and notes the status.
The time to perform the switchover, from the trigger until terminals are operational again on the new
hub, is a few minutes. Traffic is impacted during the switchover process.
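The sequence above can be paraphrased as the following sketch. It is illustrative only: the GRC is an internal component, and the URLs, endpoints and status fields shown here are assumptions rather than the actual Dialog REST API.

```python
import time
import requests

# Hypothetical hub endpoints; not the actual Dialog or GRC REST API.
ACTIVE_HUB  = "https://hub-site-a.example.net/api"
PASSIVE_HUB = "https://hub-site-b.example.net/api"

def reachable(hub: str) -> bool:
    return requests.get(f"{hub}/status", timeout=5).ok

def wait_until(hub: str, transmitting: bool) -> None:
    # Poll until the modulators and uplink ports reach the requested state.
    while True:
        status = requests.get(f"{hub}/status", timeout=5).json()
        if (status["modulators_transmitting"] == transmitting
                and status["uplink_ports_enabled"] == transmitting):
            return
        time.sleep(5)

# Graceful switchover: both hubs must be accessible before anything is changed.
assert reachable(ACTIVE_HUB) and reachable(PASSIVE_HUB)

# Disable the active hub and wait until it has stopped transmitting.
requests.post(f"{ACTIVE_HUB}/disable", timeout=10).raise_for_status()
wait_until(ACTIVE_HUB, transmitting=False)

# Enable the passive hub and wait until it is transmitting again.
requests.post(f"{PASSIVE_HUB}/enable", timeout=10).raise_for_status()
wait_until(PASSIVE_HUB, transmitting=True)
```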
The NMS geo-redundancy service requires two geographical locations where a 1+1 active / passive
NMS is defined, but with a non-redundant set of hub modules. The hub modules can be at a
separate geographical location, or at one of the geo-redundant NMS locations.
NMS geo-redundancy is supported when the following conditions are met:
• Both NMS systems are exact copies of each other with respect to Dialog hardware and Dialog
software releases
• Only deployments with an external NMS are supported
• L2 connectivity between the two sites hosting the NMS is required to allow support for GRC virtual
IP addresses
The passive NMS can be accessed by an operator, but will have limited functionality:
• All components are in a 'read-only' mode; provisioning actions are not possible
• There is no monitoring
• The mobility orchestrator will not accept any external API commands
• TICS, if available, will not accept any certification or perform any verification command
The geo-redundancy service provides database synchronization and the switchover process.
Database Synchronization
The GRC regularly synchronizes the provisioning database of the active NMS with the database of
the standby NMS. The synchronization interval is configurable. Additionally, the synchronization can
be triggered at any time, for example before a graceful switchover, ensuring that no provisioning data
is lost.
The GRC performs the following steps during the database synchronization:
1. Set the active NMS in read-only mode.
2. Fetch a backup of the active Dialog NMS provisioning database through RSYNC.
3. Upload the backup to the passive NMS through RSYNC and push the data to its provisioning
database.
All NMS components on the passive site update their own provisioning data, based on any
provisioning deltas pushed to the passive system. The NMS components are in a 'warm
standby' state; they are running but perform no actions except updating provisioning data.
4. Wait for the upload process to complete and note the status.
The hub modules are not affected by database synchronization as they are
continuously updated with any provisioning changes on the active NMS.
Switchover Process
The switchover process will deactivate the active NMS and activate the original passive NMS. The
process will not affect the hub modules and therefore will not impact terminal traffic.
The detailed switchover sequence is as follows:
• The operator triggers a switchover.
• The GRC confirms both NMS systems are accessible.
• The GRC disables the currently active NMS system using the REST API.
• The GRC polls the status of the active NMS system until it confirms it is in a passive state.
• The GRC activates the originally passive NMS system using the REST API.
• The GRC polls the status of the NMS system until it confirms it is in an active state; additionally,
the system VIP address is rerouted to the newly active NMS.
• The hub modules repoint from the originally active NMS to the newly active NMS upon the system
VIP address change.
The time to perform the switchover, from the trigger to a fully up and running NMS on the original
passive site, is less than one hour. Note that traffic is NOT impacted during the switchover process.
6 Abbreviations
Abbreviation Definition
AC Alternating Current
dB Decibel
DCP Decapsulator
DM Device Manager
FWD Forward
HP Hewlett Packard
ID Identifier
IP Internet Protocol
MGMT Management
MOD Modulator
OA Onboard Administrator
REF Reference
RF Radio Frequency
RTN Return
SRV Server
SW Software
TB Transport Based
TS Transport Stream
UI User Interface
VM Virtual Machine