
R&S SwissQual AG®

Data Performance - Interactivity and Latency
Test Description
Release 24.0

Version 24.0
© Rohde & Schwarz SwissQual AG
Niedermattstrasse 8b, 4528 Zuchwil, Switzerland
Phone: +41 32 686 65 65
Fax: +41 32 686 65 66
E-mail: sq-info@rohde-schwarz.com
Internet: http://www.rohde-schwarz.com/mobile-network-testing
Subject to change – Data without tolerance limits is not binding.
R&S® is a registered trademark of Rohde & Schwarz GmbH & Co. KG.
Trade names are trademarks of the owners.

3646.3142.02 | Version 24.0 | R&S SwissQual AG®



Contents
1 Introduction............................................................................................ 5
1.1 Testing Network Latency and Interactivity................................................................. 5
1.2 Latency and Interactivity as a General Dimension of Network Testing................... 5

2 Interactivity Test.....................................................................................7
2.1 Test Principle................................................................................................................. 7
2.1.1 Control of Data Rate....................................................................................................... 8
2.1.2 Individual Data Rates in Uplink and Downlink (Asymmetric Traffic)................................8
2.1.3 Latency Measurement by Extended TWAMP over UDP.................................................9
2.1.4 Results of the Extended TWAMP Measurement.............................................................9
2.1.4.1 Per-packet Two-way Latency........................................................................................ 10
2.1.4.2 Packet Delay Variation (PDV)....................................................................................... 10
2.1.4.3 Disqualified Packets...................................................................................................... 11
2.1.5 Prediction of Interactivity Score.....................................................................................12
2.2 Test Configuration.......................................................................................................13
2.2.1 Considerations for Choosing the Traffic Pattern........................................................... 14
2.2.1.1 Patterns with Target Application and QoE Score.......................................................... 14
2.2.1.2 Technical Patterns.........................................................................................................15
2.3 Measurement Results and KPIs.................................................................................15
2.3.1 Test Parameters and Reported Information.................................................................. 16
2.3.2 Overall Measurement Results and KPIs....................................................................... 16
2.3.3 One-way Results...........................................................................................................19
2.3.4 Test Results for Intermediate Intervals..........................................................................20
2.3.5 Test Results for Individual Packets............................................................................... 21
2.3.6 Statistical Aggregation of Results and KPIs..................................................................22
2.4 Example Results and Interpretation..........................................................................22

3 Ping Test............................................................................................... 25
3.1 Technical Description - Ping Test.............................................................. 25
3.1.1 Classic ICMP Ping Test.................................................................................................25
3.1.2 TCP Ping.......................................................................................................................26
3.1.3 TCP Ping on unrooted devices..................................................................................... 27
3.2 Test Configuration.......................................................................................................28


3.2.1 Ping............................................................................................................................... 28
3.2.2 Ping TCP.......................................................................................................................28
3.3 Measurement Results and KPIs.................................................................................29
3.3.1 Test Parameters and Reported Information.................................................................. 29
3.3.2 KPIs and Test Results................................................................................................... 29
3.3.3 Statistical Aggregation of Results and KPIs..................................................................30

4 Traceroute Test.....................................................................................31
4.1 Technical Description................................................................................................. 31
4.2 Test Configuration.......................................................................................................32
4.3 Measurement Results and KPIs.................................................................................33
4.3.1 Test Parameters and Reported Information.................................................................. 33
4.3.2 KPIs and Test Results................................................................................................... 33
4.3.3 Intermediate Test Results and Parameters................................................................... 34

5 Literature...............................................................................................36

Annex.................................................................................................... 37

A Traffic Patterns..................................................................................... 37
A.1 Real-time E-Gaming.................................................................................................... 37
A.2 HD Video Chat............................................................................................................. 39
A.3 Interactive Remote Meeting....................................................................................... 40
A.4 VR / Cloud-gaming HD................................................................................................42
A.5 I4.0 Process Automation............................................................................................ 44
A.6 Technical Patterns.......................................................................................................45

B Latency, Jitter and Server Location................................................... 48


B.1 Base Latency............................................................................................................... 48
B.2 Latency and Server Position......................................................................................48
B.3 Jitter and Server Position...........................................................................................50
B.4 Conclusion...................................................................................................................51


1 Introduction

1.1 Testing Network Latency and Interactivity


The Rohde & Schwarz MNT products support a wide range of test cases to cover the
full span of network services and applications, ranging from telephony through
streaming and web services to network performance measurements on lower layers
without a linked smartphone application.
In the overview of the current test landscape of the Rohde & Schwarz MNT product
lines, testing transport latency is grouped into the fourth pillar, "Data Performance".

The test cases described in this document are highlighted in green.

1.2 Latency and Interactivity as a General Dimension of


Network Testing
Most users are under the impression that 5G will simply provide higher data rates. It is
true that the transport capacity will increase due to the additional spectrum made avail-
able by licensing 5G frequencies to operators. It will surely improve the download
experience, especially in heavily loaded areas.
But 5G is more than just higher data rates. The main advantage lies in the scalable
network slices that support the individual transport requirements for critical and less
critical applications. These network slices can be tuned for extremely short latencies,
especially for reliable connections and peak transport rates.
5G is prepared to be a technology for more than humans using smartphones or outdated
paging and modem services. 5G is intended to be the digital transport backbone for
industries and the automotive sector. An important aspect of the data transmission
performance of networks is data transfer times and the resulting reactivity in interactive
scenarios. Latency and reactivity are becoming even more essential for new interactive
and real-time applications, e.g. in augmented reality, but also in Industry 4.0 or
automotive use.
Latency is a technical measure: it reports the travel time of data, or of each individual
packet, through the network. This latency and its variation among the packets belonging
to one data flow of an application have a strong influence on the experienced
responsiveness of interactive and real-time applications. The influence of a given
latency is not the same for different applications and use cases. Some use cases are
rather insensitive to a given latency and its variation, while others (e.g. real-time
gaming or remote device control) may fail in use under the same latency conditions.
Interactivity is a perception-driven measure based on transport latency and latency
variation, reflecting the perception of responsiveness. As noted, the influence of latency
and its variation differs between individual applications and use cases. It is important to
note that there is no single transformation from latency into interactivity; rather, the
perceived interactivity, and therefore the perceptual transformation from latency to
interactivity, depends on the application to be rated. Even though interactivity appears
as a QoE metric, it does not describe the overall quality of the tested application; it only
rates the responsiveness and how it influences the experience. It is an important
aspect for interactive applications, but the overall quality is also driven by other aspects
such as the setup time of the service or the quality of the audio or video presentation.
Interactivity is a dimension of QoE that is hardly considered or measured in real-field
measurements today, but it is becoming a very important metric along with very short
latencies and the real-time, interactive applications making use of them.


2 Interactivity Test
The interactivity test is described in detail in the
White Paper – Interactivity Test.pdf. It is fully compliant with the method
recommended in [ITU-T G.1051].

2.1 Test Principle


R&S SwissQual has implemented a dedicated interactivity test. The idea is to measure
latency under realistic traffic and load conditions and to predict interactivity as typically
experienced for a certain class of applications or use cases.
This two-step methodology is also represented in the technical realization of the test. It
consists of two layers. In the first layer, the latency of transported packets is measured
as a technical QoS metric. In the second layer on top, these measured latencies are
transformed into a perceptive interactivity score. Different transformation schemes are
available reflecting individual typical applications as, for example, e-gaming or remote
meeting.
The methodology used for the interactivity test measures latency under realistic load
conditions as created by a user running an app on the smartphone. As in most real-time
applications, UDP is used as the transport protocol. Even more important than being
close to the physical layer is the ability to generate traffic load patterns that are typical
for real use cases. Transmitting a heavy flow of packets at a high frequency when
simulating a real use case puts different demands on the network than a series of pings.
This results in a more realistic picture of latency, latency jitter and packet losses.
The principle is that the user equipment (UE) acts as a client that sends a stream of
packets to an active remote station, for example a server that acts as the responder
and reflects the packets back to the UE. The outcome reflects two-way delay
information as known from ping or SYN/ACK analysis in TCP. However, there is an important
difference between the interactivity test and a ping test. A simple ping test uses a sep-
arate protocol and is designed to check availability of an IP device in the network. It
creates almost no load and the transport system might be protocol-aware.

The interactivity test is based on data exchange over UDP, the main protocol for
real-time applications. It comes closest to the physical layer and avoids any additional,
uncontrollable traffic caused by acknowledgements and retransmissions. The
higher layer protocol for the data flow between the client and server is TWAMP, the
two-way active measurement protocol. This is a state-of-the-art protocol specified by
the IETF to be implemented in components like firewalls, routers and IP gateways for
performance measurements.
On the client side, the packet size and frequency can be set, and thereby the data rate
is controlled. The reflecting remote server simply reflects the received packets, but it
can add information to the payload or react to commands received in the packets,
capabilities that exceed the IETF TWAMP protocol.
The client on the UE side is implemented natively on Android. This minimizes the OS
influence and achieves a high measurement accuracy for real ultra-reliable low latency
communications (URLLC) measurements. The server responder script can be installed
under Apache on CDN server instances in the cloud, as well as on private servers at
the operator's premises.

2.1.1 Control of Data Rate

To derive realistic measurement results, it is important to emulate traffic and load pat-
terns during the measurement as they would occur in real applications. The implemen-
ted test case is designed to create data streams like in real-time applications. The cli-
ent sets the sent traffic pattern – packet size and frequency – and thus controls the
data rate. Both, packet size and frequency, are adjusted to mimic the packet flow of
real applications and may change during one test.
It is important to note that the packet sending rate applied in the interactivity test is in
the range of 60 to 1500 packets per second. Such a packet rate creates a quasi-continuous
packet flow as in a normal real-time application.

2.1.2 Individual Data Rates in Uplink and Downlink (Asymmetric Traffic)

The TWAMP protocol as described in [IETF RFC 5357] is based on a one-to-one
reflection of packets: each sent packet is reflected at the same size by the server.
Consequently, the packet rate, packet size and therefore the resulting data rate are
the same in up- and downlink. This is fully acceptable when assuming symmetric
transport capacities and not emulating real load patterns. When emulating real field
applications, however, the observable data rate differs between up- and downlink. To
reflect this and to enable different data rates in either direction, Rohde & Schwarz MNT
has implemented a simple extension of the TWAMP protocol in its server and client
software.
In each sent packet’s payload, we store information about the desired size of the
reflected packet. Based on this, for example, a 200-byte packet can trigger the
reflection of an 800-byte packet to achieve a downlink data rate of four times the
uplink rate.
The asymmetric data rate can only be achieved by different packet sizes, the packet
frequency remains the same (one reflected packet for each sent packet) to match the
basic principle of the test, where the two-way latency for each sent packet is evaluated.
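As a minimal sketch of the arithmetic behind this extension (the function name and the fixed-rate assumption are illustrative, not part of the product): given target bitrates for each direction and the common packet frequency, the sent and reflected packet sizes follow directly.

```python
def asymmetric_packet_sizes(uplink_bps, downlink_bps, packets_per_second):
    """Derive sent and reflected packet sizes for the extended TWAMP scheme.

    The packet frequency is identical in both directions (one reflected
    packet per sent packet); only the packet sizes differ.
    """
    uplink_size = uplink_bps // 8 // packets_per_second      # bytes per sent packet
    downlink_size = downlink_bps // 8 // packets_per_second  # bytes per reflected packet
    return uplink_size, downlink_size

# Example: 100 packets/s, 160 kbit/s uplink, 640 kbit/s downlink (4x asymmetry)
up, down = asymmetric_packet_sizes(160_000, 640_000, 100)
print(up, down)  # 200 800
```

With these illustrative values, a 200-byte uplink packet is answered by an 800-byte downlink packet, matching the 4:1 example above.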


This extension is available starting with release 22.0. It requires a corresponding


server installation as available from Rohde & Schwarz. Please refer to our
Installation Manual - IP Test Server.pdf.

2.1.3 Latency Measurement by Extended TWAMP over UDP

The TWAMP methodology and protocol are defined in [IETF RFC 5357]. They are based
on a UDP stream of packets of predefined size and frequency. The packets are sent
to a reflecting server, which sends each packet back to the sending client. A received
reflected packet can be matched to a sent packet by an ID. The difference between the
sending and receiving time stamps is reported as the two-way latency. The TWAMP
protocol as defined by the IETF is, depending on the vendor, supported by
infrastructure components such as routers and IP gateways.
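The reflection principle can be illustrated with a much simplified UDP probe. This sketch only mimics the idea of matching reflected packets to sent packets by an ID and computing two-way latencies; it does not implement the actual TWAMP wire format or test-session negotiation, and it assumes a plain UDP echo reflector at the given address.

```python
import socket
import time

def udp_echo_latency(host, port, payload_size=200, count=20, timeout_s=2.0):
    """Minimal two-way latency probe in the spirit of TWAMP Light:
    send UDP packets carrying a sequence ID, match each reflected packet
    to its sent packet by that ID, and report per-packet latencies in ms.

    Packets not received back before the timeout are simply absent from
    the result, i.e. they count as lost/disqualified.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    sent, latencies = {}, {}
    for seq in range(count):
        # 4-byte sequence ID, padded to the configured payload size
        packet = seq.to_bytes(4, "big").ljust(payload_size, b"\x00")
        sent[seq] = time.monotonic()
        sock.sendto(packet, (host, port))
        try:
            data, _ = sock.recvfrom(65535)
            rx_seq = int.from_bytes(data[:4], "big")
            if rx_seq in sent:
                latencies[rx_seq] = (time.monotonic() - sent[rx_seq]) * 1000.0
        except socket.timeout:
            pass  # no reflection within the timeout
    sock.close()
    return latencies
```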
The amount of transmitted data and the resulting data rate can be specified by the
packet size and the sending frequency. The data rate can also vary over time.
This allows emulating data stream characteristics as produced by real applications,
which usually are not constant. To achieve this without violating the underlying TWAMP
principle, the traffic is subdivided into short segments of 1 second length. For each of
these segments, an individual bitrate can be defined and instantiated by setting the
matching packet frequency. The emulation of the traffic shape can be imagined as a
series of short TWAMP measurements with different bitrates.
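The per-segment instantiation can be sketched as follows; the helper name is hypothetical, and a fixed packet size per segment is assumed for simplicity.

```python
def segment_schedule(bitrates_bps, packet_size_bytes):
    """For each 1-second segment, derive the packet frequency that
    realizes the segment's target bitrate at a fixed packet size."""
    return [bps // (8 * packet_size_bytes) for bps in bitrates_bps]

# Example: a bursty pattern alternating between 0.5 and 1.5 Mbit/s, 250-byte packets
print(segment_schedule([500_000, 1_500_000, 500_000], 250))  # [250, 750, 250]
```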
When analyzing the raw IP traffic created by TWAMP on the client side, it can often be
seen that the number of returned IP packets is smaller than the number of sent packets,
although no errors were reported on the application level. This reflects a property
of the UDP stack on the phone, which groups several UDP packets from the same
stream together in the receive buffer for improved efficiency. This has no influence on
the results, as the packets are transferred through the network individually.
To minimize the configuration effort at this lower definition level, typical traffic patterns
are preconfigured. Those traffic patterns are not only based on the constant packet
rates, but they also emulate bursty shapes in which the packet rate changes over time,
as it is typical for applications with temporary highly interactive phases. The supported
patterns are described in Annex A Traffic Patterns.

2.1.4 Results of the Extended TWAMP Measurement

The basic result of the applied measurement procedure is the two-way latency of each
individual packet that successfully arrived back at the session sender in time. Based on
the series of measured latencies, a further aggregation and analysis is applied for each
transmitted segment and finally over the overall test. In addition to KPIs based on the
latency itself, the test methodology enables further analysis and results: the variation
of the latency is evaluated, and lost or not sent packets are identified. Furthermore, a
delay budget is applied to disqualify packets not received within this predefined period
of time.
● Round-trip latency of each individual received packet
● Packet delay variation (jitter) as the latency variation over time

● Not sent, lost and disqualified packets


Since the client controls the packet payload as well as the sending and receiving time,
it is possible to evaluate:
● TX and RX data rates and their deviation from the configured target rates
The statistical aggregation and KPIs are described in detail in Chapter 2.3.6, "Statisti-
cal Aggregation of Results and KPIs", on page 22.

2.1.4.1 Per-packet Two-way Latency

The realization of the described two-way latency measurement method allows the
determination of the latency of each individual sent and received UDP packet. As a
result of the measurement, the vector D(i), where D is the latency of an individual
packet i for a measurement interval, is available for detailed analysis.
As statistical aggregation metrics of D(i), quantiles are recommended:
● Delay D(i) 50th percentile (median) – reported as "Round-trip Latency (median)"
● Delay D(i) 10th percentile (an approximation of the shortest latencies reachable in
practice) – reported as "Round-trip Latency (10th percentile)"
An arithmetic average, as a statistical mean for characterizing the latency in a
measurement interval, cannot be recommended, since individual extreme latencies
dominate this average. Measured packet latencies follow a so-called "heavy-tailed
distribution", which reduces the meaning of arithmetic averages.
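A sketch of the recommended quantile aggregation, using Python's statistics module (the result key names are illustrative, not the product's KPI identifiers):

```python
import statistics

def latency_quantiles(d_ms):
    """Aggregate a vector D(i) of per-packet two-way latencies (ms) into
    the recommended quantile metrics instead of an arithmetic mean."""
    ordered = sorted(d_ms)
    # statistics.quantiles with n=100 yields the 1st..99th percentiles
    pct = statistics.quantiles(ordered, n=100, method="inclusive")
    return {
        "round_trip_latency_median_ms": statistics.median(ordered),
        "round_trip_latency_p10_ms": pct[9],   # 10th percentile
    }

# A heavy-tailed example: one extreme outlier barely shifts the quantiles,
# while it would dominate an arithmetic mean
sample = [21, 22, 22, 23, 24, 25, 26, 27, 29, 480]
print(latency_quantiles(sample))
```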

2.1.4.2 Packet Delay Variation (PDV)

In line with [ITU-T Y.1540] and [IETF RFC 5481], the latency of an individual packet is
referenced to the minimum packet latency, and the PDV is defined as:
PDV(i) = D(i) – D(min) where PDV(i) ∈ ℝ+
where D(i) is the individual latency of one packet and D(min) is the minimum individual
latency of all packets in the measurement interval.
PDV values can only be zero or positive, and quantiles of the PDV distribution are
direct indications of delay variation.
The PDV is computed and available for the two-way latency but also for each direction
(client to server, server to client) separately.
This vector PDV(i) is used to calculate the:
● PDV 50th percentile (median)
● PDV 99.9th percentile (approximately maximum)
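The PDV definition above translates directly into code; a small sketch:

```python
def pdv(d_ms):
    """Packet delay variation per [IETF RFC 5481]: each latency referenced
    to the minimum latency observed in the measurement interval."""
    d_min = min(d_ms)
    return [d - d_min for d in d_ms]

sample = [21, 25, 22, 30, 21]
print(pdv(sample))  # [0, 4, 1, 9, 0]
```

By construction, at least one PDV value is zero and none are negative; the median and 99.9th percentile of this vector yield the reported PDV metrics.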


2.1.4.3 Disqualified Packets

Counted as disqualified are all packets that would not be usable for a real, running
application in practice. This covers more than actually lost packets, in particular:
● Not sent packets: Packets that could not leave the client device due to uplink
congestion and were discarded by the device kernel after a timeout.
● Lost packets: Packets that were lost during transmission or could not leave the
reflecting server due to downlink congestion. They are discarded by the server
kernel after the timeout or due to corrupted payload.
Lost packets are detected for the whole two-way transmission, but also for each
direction (client to server, server to client) separately.
Please note that packets lost at the very end of a test cannot be classified accord-
ing to direction. They add to the overall loss ratio, but not to uplink or downlink loss
ratio. Therefore, the loss ratios per direction do not always add up to the overall
loss ratio.
● Discarded packets: Packets that were received back after a predefined timeout at
the client device. This timeout, also called delay budget, emulates packet discard-
ing as done by a real application. The timeout is specified and defined according to
the maximum acceptable latency for the target application and forms one parame-
ter of the test settings.
From an application’s point of view, there is no need to differentiate among the
individual causes for not considering a packet as received. An overall indicator like the
Total Packet Error Ratio (TPER) is seen as sufficient at the application level:
TPER = number of disqualified packets / number of all packets scheduled to be sent by
the application
For more detailed analysis and troubleshooting, the individual reasons for disqualifying
packets can be reported too.
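The TPER formula can be sketched as follows; the parameter names for the three disqualification causes are illustrative:

```python
def total_packet_error_ratio(not_sent, lost, discarded, scheduled):
    """TPER = disqualified packets / all packets scheduled to be sent.

    'Disqualified' sums the three causes the test distinguishes:
    not sent, lost, and discarded (received back after the delay budget).
    """
    disqualified = not_sent + lost + discarded
    return disqualified / scheduled

# Example: 3 not sent, 5 lost, 2 discarded out of 1000 scheduled packets
print(total_packet_error_ratio(3, 5, 2, 1000))  # 0.01
```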
Other values like the per-packet two-way latency and the packet delay variation can
either be reported for all packets received back at the client (QoS level results) or for
the qualified packets only (QoE level results). For each value, this information is con-
tained in its description in Chapter 2.3, "Measurement Results and KPIs", on page 15.
There is another special category of packets, namely the duplicated packets. In certain
situations, it happens that IP packets are duplicated along the path to their destination.
For example, a router could decide to forward incoming traffic through two different
network interfaces in case of congestion. These packets can be recognized by a
duplication of the TWAMP server sequence number. Duplicated packets do not increase
the Total Packet Error Ratio even when delayed, as the information has already reached
the server or client. Also, duplicated packets should not be used to calculate further
results like the "Round-trip Latency (median)", as retransmitted packets with already
received data have no influence on the QoS or QoE. As a consequence, duplicated
packets are reported in the single-packet results but are not used for the result
calculation.


2.1.5 Prediction of Interactivity Score

The basic concept of scoring interactivity is that latency and the rate of disqualified
packets determine the interactivity perceived by a user. The two-way packet latency
indicates how fast a response to an action originating at the client user device is
received back. In addition, disqualified packets represent missing information for the
user’s application. Whether they can be interpolated by the application or lead to
temporary distortions or pausing depends on the application’s implementation.
To receive results for latency and ratio of disqualified packets as close to a real appli-
cation as possible, the packet stream – especially in data rate and traffic pattern –
should emulate the targeted application in use.
The latency of the received packets and the number of disqualified packets determine
the perceived interactivity, but how strong their influence on the perceived interactivity
of an application or use case is depends on the target application and the expectations
of the user.
The basic assumption when modeling perceived interactivity is its monotonic
dependency on data latency: the shorter the data transport time, the shorter the
response time in an interactive application, and the more interactive the use of the
application is perceived to be.
However, this dependency is not a simple linear function. There are saturation areas at
both tails of the function, where no change in the perception happens even if the
latency changes.
A valid approximation of this nonlinear dependency is a logistic (sigmoid) function.
Each individual two-way packet latency is transformed to a 0% to 100% scale by the
applied logistic function. The average of the transformed results over all received
packets forms the initial interactivity score. In addition to latency, it is anticipated that
delay variation and the number of disqualified packets contribute to the perceived
interactivity too. Simplified, both indicators are considered as degrading factors by
multiplication.
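The two-layer scoring idea can be sketched as follows. The logistic midpoint, steepness and the multiplicative degradation factors are illustrative stand-ins, not the model parameters of any actual traffic pattern; the real, pattern-specific models are described in Annex A.

```python
import math

def interactivity_score(latencies_ms, tper, pdv_penalty=1.0,
                        midpoint_ms=80.0, steepness=0.05):
    """Sketch of the two-layer scoring idea: map each two-way latency
    through a logistic function to a 0..100 % scale, average over all
    packets, then degrade the score multiplicatively for packet errors.

    midpoint_ms and steepness are illustrative values only.
    """
    per_packet = [100.0 / (1.0 + math.exp(steepness * (d - midpoint_ms)))
                  for d in latencies_ms]
    base = sum(per_packet) / len(per_packet)
    return base * (1.0 - tper) * pdv_penalty

# Short latencies, error-free: score close to 100 %
print(round(interactivity_score([10, 12, 15], tper=0.0), 1))  # 96.7
```

The saturation at both tails follows from the logistic shape: below the sensitive region, further latency reductions no longer change the score, and far above it the score is already near zero.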
The following picture shows an example as applied for e-gaming:


For very short latencies in error-free transmission, an interactivity score of 100% can
be achieved. With increasing latencies, the interactivity score decreases. Furthermore,
in case of delay variation or packet loss, the interactivity score is decreased further.
The steepness of the decay and the other model parameters depend on the user
expectation for using a certain type of application or service. Therefore, each traffic
pattern emulating a certain target application is linked to a corresponding interactivity
score model that takes the expectations on this target application into account. The
interactivity score models are described in Annex A Traffic Patterns.

2.2 Test Configuration


Host is the host name or IP address of the server.

For symmetric patterns, it is sufficient to use a running standard TWAMP server. If
asymmetric patterns are to be applied as well, a Rohde & Schwarz extended TWAMP
server is required. For the avoidance of doubt, the Rohde & Schwarz extended server
supports all types of patterns provided in the interactivity test. For information on how
to install the server component, please refer to our document
Installation Manual - IP Test Server.pdf.

Port is the UDP port used for the connection to the server. The default port for the
TWAMP protocol is 862.
Pattern is the name of the traffic pattern describing the data flow between client and
server. There are technical patterns like "Constant long" and patterns targeting specific
application scenarios like "eGaming real-time" or pattern with asymmetric traffic like
"VR / Cloud-gaming HD". The full set of available patterns is described in Annex A
Traffic Patterns.

The recommendation of which pattern to use largely depends on the use case. For more
information, please refer to Chapter 2.2.1, "Considerations for Choosing the Traffic Pattern", on page 14.

Minimum test duration is the minimum duration in seconds before the next test of the
data session is started, even if the active phase of the interactivity test ended earlier,
e.g. due to a failure. This setting helps to reduce oversampling of very bad locations
with low or no coverage.

We recommend using a setting in the range of 5 to 10 seconds for benchmarking use


cases, e.g. for a Network Performance Score campaign.

Maximum test duration is the maximum duration of the test in seconds before the test
gets aborted.


The recommendation is to use the default value of 90 seconds, or a value of at least
the duration of the selected traffic pattern plus 10 seconds. The reason for this is the
additional time the test needs for connection setup before the test, plus the waiting
time after the test for potentially delayed packets.

2.2.1 Considerations for Choosing the Traffic Pattern

There are two basic classes of traffic patterns available for the interactivity test:
● Traffic patterns linked to a target application are based on archetype models of real
traffic of certain use case classes. In addition to the QoS results, they also provide
the Interactivity Score as a QoE measure, delivering a realistic estimation of the
network performance for the target application while considering constraints like the
maximum packet delay budget (emulating the application’s jitter buffer) and a
maximum tolerable packet error rate.
● Technical patterns have a focus on technical analysis and troubleshooting. They
provide QoS results like the round-trip latency but no Interactivity Score. These
technical patterns usually apply a constant packet rate.
The patterns emulating the traffic of a target application are mainly suited for
benchmarking scenarios, where the end-user experience is the most important rating
factor for network quality. In addition, these patterns can be used to prove the readiness
of a network for a certain use case. The technical patterns, on the other hand, are more
suited for optimization and troubleshooting, where the absolute values of technical
measures like round-trip latency and PDV are important and should not depend on the
packet delay budget of a certain target application.

2.2.1.1 Patterns with Target Application and QoE Score

● eGaming real-time: A traffic pattern emulating the realistic traffic of online gaming, for example with multiple players, where the interactivity between client and server is of high importance while the bandwidth requirements are low, because only status information is exchanged between client and server and the video processing is done locally on the client device. Ideal for benchmarking a mobile gaming experience requiring a good 4G or 5G non-standalone network for a good scoring.
● VR / Cloud-gaming: A traffic pattern emulating the realistic traffic of cloud-gaming
services where the client sends status information with low bitrate, but video pro-
cessing is done on the server and the video is streamed back to the client in a high
data rate. Here the requirements on interactivity remain as high as for eGaming
real-time. Ideal for benchmarking a cloud-gaming experience requiring a good 4G
or 5G non-standalone network for a good scoring.
● HD Video chat: A traffic pattern emulating the realistic traffic of multi-user video
telephony applications like Zoom® or Skype®. A video signal is streamed in up-
and downstream while the requirements on interactivity are a bit more relaxed
compared to gaming scenarios. Ideal for benchmarking the very common use case
of a video chat experience requiring a 4G network for a good scoring.
● Interactive remote meeting: A traffic pattern emulating the realistic traffic of a multi-user virtual meeting using an application like Teams® or Skype®. In addition to video streams, a presentation is usually shared, leading to peaks in the required bandwidth. Ideal for benchmarking the very common use case of online meetings requiring a 4G network for a good scoring.
● I4.0 Process automation: A traffic pattern covering the emerging use case of process automation in smart factories, emulating traffic as expected to be used by major industry stakeholders. While the requirements on the bandwidth are low, the ones on latency and interactivity are very strict and can only be fulfilled in well-deployed 5G stand-alone networks. Ideal for proving readiness of a private network for process automation.

2.2.1.2 Technical Patterns

● Constant low: A short traffic pattern resulting in a target bandwidth of only 100
Kbit/s. Ideal for measuring in poor network situations.
● Constant medium: A short traffic pattern resulting in a target bandwidth of 1 Mbit/s,
which is typically high enough for many services. Successful tests with this pattern
could be a proof of service for less demanding interactive applications.
● Constant high: A short traffic pattern resulting in a target bandwidth of 15 Mbit/s.
This pattern is quite demanding, mainly due to the high uplink throughput, and thus
can be used for measuring in very good network situations.
● Constant long: A 30 s long traffic pattern resulting in a low target bandwidth of 300
Kbit/s. Besides the longer duration it also has a very high packet delay budget of 2
s. This pattern is intended to be used for deeper analysis of the packet delay
behavior over time.
● Constant downlink dominant: A medium length traffic pattern with a 10 times higher data rate in downlink than in uplink and a high packet delay budget. This pattern should be used when the channel is supposed to have a higher bandwidth in downlink, as is the case for many consumer-driven use cases.
● Constant downlink dominant high: A 24 s long traffic pattern with a highly demanding uplink data rate of 5 Mbit/s and a downlink data rate of 20 Mbit/s. This covers the needs of practically all consumer-driven use cases.
● Constant uplink dominant: A medium length traffic pattern with a 10 times higher data rate in uplink than in downlink and a high packet delay budget. This pattern should be used when the channel is supposed to have a higher bandwidth in uplink, as might be the case for producer-driven use cases.
● Stepwise DL dominant: A medium length traffic pattern with a 15 times higher data rate in downlink than in uplink, where the relation between the two remains fixed but the absolute data rate changes during the test in steps in the form of low – medium – high – medium – low. This pattern can be used to test the dependency of the data channel on the network load.

2.3 Measurement Results and KPIs


In this chapter, we give a detailed overview of the test parameters, results and KPIs of
the interactivity test as implemented in the R&S SwissQual product line.


2.3.1 Test Parameters and Reported Information

You can use the set of parameters and dimensions provided by the interactivity test for
selecting and grouping results.
Pattern Name is the name of the traffic pattern describing the data flow between client and server. There are technical patterns like "Constant long" and patterns targeting specific application scenarios like "eGaming real-time" or "Online meeting".
Host is the host name or IP address of the server.
Port is the UDP port used for the connection to the server. The default port for the
TWAMP protocol is 862.
(Configured) Test Duration is the duration of the packet stream sent to the server for
the current traffic pattern.
Packet Delay Budget is the maximum allowed round-trip delay before a returning
packet is considered as lost. This emulates the behavior of a jitter buffer of a real-time
application. Packets arriving too late cannot be considered anymore. This property is
defined by 3GPP per QoS application class.
Max Packet Error Rate is the maximum overall packet error rate allowed to meet the quality expectation of the channel for a given application type. All types of packet errors are considered: delayed, lost and not-sent packets. This value is defined by 3GPP per QoS application class.
Maximum Packet Size UL is the maximum size of the individual UDP packets includ-
ing headers and TWAMP payload throughout the test in Bytes in uplink direction.
Maximum Packet Size DL is the maximum size of the individual UDP packets includ-
ing headers and TWAMP payload throughout the test in Bytes in downlink direction. It
can differ from the value in uplink direction in the case of asymmetric traffic.
Number Of Packets To Send is the overall number of packets that should be sent
from client to server and back to successfully complete the traffic pattern.

2.3.2 Overall Measurement Results and KPIs

All results are calculated for the active transfer period between TR1 and TR2. Addi-
tional trigger points are available for reporting interruptions in the packet flow.

Trigger ID | Abstract description | Technical description
TR1 | Time when the first TWAMP UDP packet is sent from the client to the server | Marker "Interactivity first packet sent"
TR2 | Time when the last TWAMP UDP packet is sent from the client to the server + packet delay budget | Timestamp of the Interactivity test result
TR3a | Start of an interruption caused by packet loss | Sending timestamp of the first packet with status "lost" in a set of 1 or more subsequently lost packets
TR3b | End of an interruption caused by packet loss | Sending timestamp of the first packet with status not equal to "lost" after a set of 1 or more subsequently lost packets
TR3c | Start of an interruption caused by packet delay beyond the packet delay budget | Sending timestamp of the first packet with status "discarded" in a set of 1 or more subsequently discarded packets
TR3d | End of an interruption caused by packet delay beyond the packet delay budget | Sending timestamp of the first packet with status not equal to "discarded" after a set of 1 or more subsequently discarded packets
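The trigger logic above can be sketched as follows. This is an illustrative pseudo-implementation, not the product code; packet records are assumed here to be (sending timestamp, status) tuples in sending order:

```python
def find_interruptions(packets, bad_status):
    """Detect interruptions as runs of packets with a given bad status
    ("lost" corresponds to TR3a/TR3b, "discarded" to TR3c/TR3d).

    Returns (start_ts, end_ts) pairs: start_ts is the sending timestamp
    of the first bad packet of a run, end_ts the sending timestamp of
    the first packet after the run (None if the run lasts to test end).
    """
    interruptions, start = [], None
    for ts, status in packets:
        if status == bad_status and start is None:
            start = ts  # TR3a / TR3c: first bad packet of a run
        elif status != bad_status and start is not None:
            interruptions.append((start, ts))  # TR3b / TR3d
            start = None
    if start is not None:
        interruptions.append((start, None))  # run not closed before test end
    return interruptions
```

The same scan is run once for status "lost" and once for "discarded" to obtain both interruption types independently.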

Test Result is the test status, "OK" for a valid test, an error message for a failed test.
In addition to the general test, session, DNS and other errors, it can take the following
values:
● Interactivity failed: Generic failures during the TWAMP test.
● Interactivity connection error: Connection-related errors during the TWAMP test.
● Interactivity initialization error: Initialization errors during the TWAMP test.
● Interactivity result calculation error: Not enough data available to calculate a valid
result.
● Interactivity all packets lost: The control connection could be established and some packets sent, but not a single packet could be returned from server to client. In this case, it is not possible to calculate a valid result.
● Interactivity feature not supported: The server does not support the requested feature, e.g. in case of a traffic pattern with asymmetric traffic in a test against a standard TWAMP server.
Interactivity Score is the integrated score on a percentage scale of how well the network channel is suited for the target application given by the traffic pattern. For more details, see Chapter 2.1.5, "Prediction of Interactivity Score", on page 12. For technical patterns with, e.g., constant traffic, no Interactivity Score is calculated as there is no underlying target application.
Round-trip Latency (median) is the median round-trip time of all individual packets
returned during the test in milliseconds, see also Chapter 2.1.4.1, "Per-packet Two-way
Latency", on page 10.
Round-trip Latency (10th percentile) is the 10th percentile of the round-trip times of
all individual packets returned during the test in milliseconds. It can be seen as the
best-case latency of the channel, see also Chapter 2.1.4.1, "Per-packet Two-way
Latency", on page 10.
Round-trip Latency QoS (median) is the median round-trip time of all individual pack-
ets returned on the QoS level, i.e. not taking the packet delay budget given by the tar-
get application into account, see also Chapter 2.1.4.1, "Per-packet Two-way Latency",
on page 10.
Round-trip Latency QoS (10th percentile) is the 10th percentile of the round-trip
times of all individual packets returned on the QoS level, i.e. not taking the packet
delay budget given by the target application into account, see also Chapter 2.1.4.1,
"Per-packet Two-way Latency", on page 10.


Packet Delay Variation (median) is the median of the packet delay variation distribution in milliseconds, based on all packets returned, see also Chapter 2.1.4.2, "Packet Delay Variation (PDV)", on page 10.
Packet Delay Variation (99.9th percentile) is effectively the maximum of the packet delay variation distribution in milliseconds, based on all packets returned, see also Chapter 2.1.4.2, "Packet Delay Variation (PDV)", on page 10.
Packet Delay Variation (stddev) is the standard deviation of the packet delay variation distribution in milliseconds of all individual packets returned on QoE level, i.e. within the given packet delay budget. It serves as input for the Interactivity Score.
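As an illustration (not the product's implementation), the three PDV statistics above can be computed from a series of per-packet delay variation values, assuming each packet's PDV is already given in milliseconds and using a nearest-rank percentile:

```python
from statistics import median, pstdev

def pdv_statistics(pdv_ms: list[float]) -> dict[str, float]:
    """Median, 99.9th percentile and standard deviation of a PDV
    series, mirroring the three results described above."""
    ordered = sorted(pdv_ms)
    # Nearest-rank 99.9th percentile: ceil(n * 0.999) as 1-based rank.
    # For sample sizes below 1000 packets this is simply the maximum.
    rank = max(0, -(-len(ordered) * 999 // 1000) - 1)
    return {
        "median": median(ordered),
        "p99_9": ordered[rank],
        "stddev": pstdev(ordered),
    }
```

Note that the reported stddev is calculated only over packets within the packet delay budget, while the sketch above treats all input values alike.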
Packets Sent is the number of packets that could be sent out from the client. Ideally,
this should match the "Number Of Packets To Send" property.
Packets Not Sent Ratio is the percentage of packets that were scheduled to be sent by the client to the server but could not leave the UE due to uplink congestion. These packets are regarded as disqualified and are part of the "Total Packet Error Ratio", see also Chapter 2.1.4.3, "Disqualified Packets", on page 11.
Packets Lost Ratio is the percentage of all sent packets that were either lost or
delayed beyond the maximum waiting time of 2 seconds. These packets are regarded
as disqualified and are part of the "Total Packet Error Ratio", see also Chapter 2.1.4.3,
"Disqualified Packets", on page 11.
Packets Discarded Ratio is the percentage of all sent packets that arrived back at the
client beyond the given packet delay budget. These packets are regarded as disquali-
fied and are part of the "Total Packet Error Ratio", see also Chapter 2.1.4.3, "Disquali-
fied Packets", on page 11.
Total Packet Error Ratio is the percentage of all disqualified packets with respect to the number of "Packets Sent", see also Chapter 2.1.4.3, "Disqualified Packets", on page 11. It serves as input for the Interactivity Score.
Channel QoS 3GPP is an integrating measure for the QoS with respect to the target application given by the configured traffic pattern. If the measured "Total Packet Error Ratio" is not bigger than the given "Max Packet Error Rate" while applying the given "Packet Delay Budget", the 3GPP quality criteria are met and "Channel QoS 3GPP" takes the value 100%. When double the allowed number of packets have to be disqualified, "Channel QoS 3GPP" takes the value 50%, and so on. It is given by the following formulas:
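The formulas themselves appear as an image in the original layout and are not reproduced here. Based purely on the prose description above (100% while the error ratio stays within the allowed maximum, 50% when twice the allowed packets are disqualified), the relationship can be sketched as:

```python
def channel_qos_3gpp(total_packet_error_ratio: float,
                     max_packet_error_rate: float) -> float:
    """Sketch of the "Channel QoS 3GPP" measure, reconstructed from
    the prose description; not the vendor's exact formula."""
    if total_packet_error_ratio <= max_packet_error_rate:
        return 100.0  # 3GPP quality criteria met
    # Inversely proportional beyond the allowed budget:
    # double the allowed errors -> 50%, quadruple -> 25%, ...
    return 100.0 * max_packet_error_rate / total_packet_error_ratio
```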

The result is also reported as KPI 30520 "Integrity - Data - Interactivity", which reports status "Failed" with cause "Channel quality too low" in case the "Channel QoS 3GPP" is lower than 100%.


KPI ID | KPI Name | Start Time | End Time | Status | Cause
30520 | Integrity - Data - Interactivity | TR1 | TR2 | Successful / Failed | Error code from test or "Channel quality too low"
30525 | Integrity – Data – Interactivity Interruption by Loss | TR3a | TR3b | Successful in case of an interruption |
30526 | Integrity – Data – Interactivity Interruption by Delay | TR3c | TR3d | Successful in case of an interruption |

Duration is the duration of the active transfer phase from TR1 to TR2.

2.3.3 One-way Results

A number of results are calculated based only on the uplink transfer from client to server or on the downlink transfer from server to client.
Throughput UL is the uplink throughput on application layer in Mbit/s.
Throughput DL is the downlink throughput on application layer in Mbit/s.
Packets Lost Ratio UL is the percentage of all sent packets that were either lost or
delayed beyond the maximum waiting time of 2 seconds in uplink direction. Packet loss
at the end of a test cannot be classified according to direction, so the UL and DL ratios
don't necessarily add up to the overall Packets Lost Ratio.
Packets Lost Ratio DL is the percentage of all sent packets that were either lost or
delayed beyond the maximum waiting time of 2 seconds in downlink direction. Packet
loss at the end of a test cannot be classified according to direction, so the UL and DL
ratios don't necessarily add up to the overall Packets Lost Ratio.
Packet Delay Variation UL (median) is the median of the PDV in ms of all packets at the point of the packets being received by the server, with reference to the sending timestamps of the client.
Packet Delay Variation DL (median) is the median of the PDV in ms of all packets at the point of the packets being received by the client, with reference to the sending timestamps of the server.
Packet Delay Variation UL (99.9th percentile) is the 99.9th percentile of the PDV in ms of all packets at the point of the packets being received by the server, with reference to the sending timestamps of the client.
Packet Delay Variation DL (99.9th percentile) is the 99.9th percentile of the PDV in ms of all packets at the point of the packets being received by the client, with reference to the sending timestamps of the server.
Duplicated Packets UL is the number of packets that were duplicated during an uplink
transfer and eventually were received back at the client.
Duplicated Packets DL is the number of packets that were duplicated during a down-
link transfer and eventually were received back at the client.


Time To Live / TTL UL is the average number of hops left for packets in the uplink direction before they would be dropped. It is a measure of the path length between client and server: the higher the value, the shorter the path.
Time To Live / TTL DL is the average number of hops left for packets in the downlink direction before they would be dropped. It is a measure of the path length between server and client: the higher the value, the shorter the path.
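As a hedged illustration (a heuristic, not part of the product's calculation), a remaining TTL can be converted into an approximate hop count by assuming the sender used one of the initial TTL values common to typical operating systems (64, 128 or 255):

```python
# Hypothetical helper for interpreting the TTL results above.
COMMON_INITIAL_TTLS = (64, 128, 255)

def estimate_hops(observed_ttl: int) -> int:
    """Estimate hops traversed from a remaining TTL, assuming the
    smallest common initial TTL that is >= the observed value."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl
```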

2.3.4 Test Results for Intermediate Intervals

Intermediate results are calculated per 1-second time interval. Please note that some of the values have to be considered preliminary, as the detection of not-sent and duplicated packets only takes place for the overall result at test end. The following KPIs are available with the same semantics as above:
● Interactivity Score
● Packets Sent
● Packets Lost Ratio
● Packets Discarded
● Packets Discarded Ratio
● Round-trip Latency (median)
● Round-trip Latency (10th percentile)
● Packet Delay Variation (median)
● Packet Delay Variation (99.9th percentile)
● Throughput UL/DL
● Intermediate Duration
● Packet Delay Variation UL/DL (median)
● Packet Delay Variation UL/DL (99.9th percentile)
Intermediate and overall time-based results can be presented in SmartAnalytics Scene.


2.3.5 Test Results for Individual Packets

In addition, we also report information per individual packet. In the figure below, each dot represents the round-trip latency of one packet; yellow areas mark packets discarded because of delay, while red areas represent lost packets. RAT changes, cell handovers and even the addition of carriers often lead to disruptions in the packet flow.
Timestamp is the time when the packet was sent by the client.
RTT is the round-trip latency measured for this individual packet from leaving the client
to returning back to the client in ms.
Status is the packet status and can take the values "ok", "lost", "discarded" or "duplica-
ted", where discarded means delayed beyond the packet delay budget.


2.3.6 Statistical Aggregation of Results and KPIs

In SmartAnalytics Scene, we have additional statistical results available that are calcu-
lated over the whole database or the current filtered data set.
Count of Interactivity Tests is the number of (valid) interactivity tests in the current
set of tests.
Interactivity Success Rate is the percentage of interactivity tests with "Test Result"
equal "OK", compared to the number of all valid interactivity tests.
Channel QoS Success Rate is the percentage of interactivity tests with a "Channel
QoS 3GPP" of 100%, i.e. a successful KPI 30520.
Average, Min, Max of the test-based KPIs: For most of the test-based KPIs, we offer
the average, minimum and maximum value calculated for the whole data set.
CDF / PDF distributions are available for the "Round-trip Latency (median)" and the
"Interactivity Score".

2.4 Example Results and Interpretation


The interactivity test, with its granularity down to per-packet transport times, allows deep insights into the temporal changes in the transport. The packet transport time is not just given by the pure airlink latency plus the transport through the fiber in the core network and to a server in the internet. This is only the best-case scenario. The individual latencies jitter around a value given by the physical transport plus the average delay caused by transport components and potential queuing (more information in Annex B, Latency, Jitter and Server Location).
The following diagram shows the individual packet two-way latencies for a steady leg-
acy 4G channel from the mobile client to a cloud instance of the server and back. The
blue line indicates the bitrate and the black line the median latencies in intervals of 1
second.


The jitter is symmetrically distributed and approximately Gaussian, with a jitter amplitude of about ±10 ms around the median.
However, there are many sources of packet defects, or of packets being queued until the channel is able to transport again. Those effects are short term; they lead to packet loss or increased latencies for a short period of time.
It is not only bad channels that delay packets because of retransmitted transport blocks; in particular, all sorts of network reconfigurations like cell changes and adding/releasing carriers interrupt transport for 100 ms or more before the temporarily buffered packets are transported again to the next node. As a consequence, regarding cell changes and carrier aggregation, even perfectly configured networks can massively disturb a fluent, real-time packet flow. The figure below shows a representative example with different sources of interruptions and increased latencies.
Within the observed 10 s period, at first an additional 4G carrier was added. Second, a cell change happened, and finally, at second eight, a 5G carrier was added. Each of these changes in the radio transmission has a dramatic impact on the data latency. Around these network reconfigurations, latencies increase dramatically, exceeding the 100 ms budget, and a part of the packets gets lost. These time spans with packet loss are highlighted in red in the diagram.

Even though the data throughput as such can be maintained and the amount of transmitted data in this observation period is not affected seriously, the dramatic increase and instability of the transport time make it impossible for a real-time application to work under those conditions.
Just a technical detail: it is further visible that the data traffic was moved to the now available 5G carrier and consequently, the two-way latency dropped slightly, down to the shorter latencies that can be observed in today’s 5G EN-DC channels.


Those effects stay almost invisible when TCP is used in download-focused applications. Whether a series of packets arrives a few hundred milliseconds later is not a game-changer in a download that takes a few seconds to complete. However, in real-time and interactive applications, this is a serious disturbance.
The following table gives an overview of typically measured round-trip latency times and the resulting Interactivity Scores for real-time eGaming under undistorted network conditions. Please note that the latencies are round-trip values. The uplink and downlink technology can be different, and both have an influence on the results. These example values were obtained at the end of 2020 in European networks.

RAT uplink | RAT downlink | Round-trip latency (median) | PDV / discarded packets | Interactivity Score
4G LTE | 4G LTE | 25-40 ms | ~15 ms / 0% | 80-92%
4G LTE | 5G EN-DC | 15-30 ms | 10-15 ms / 0% | 85-95%
5G EN-DC | 5G EN-DC | 10-20 ms | 5-10 ms / 0% | 90-98%
Local LAN | Local LAN | <1 ms | 0.7 ms / 0% | 100%


3 Ping Test
Ping is one of the best-known and most traditional methods in internet communication. It is designed to prove availability, i.e. whether a device is connected to the internet. A ping is sent to the device identified by its IP address, and the device responds to the ping if the response is permitted by the device. This test also provides a basic measure for the round-trip time between client and server. We offer two flavors of this test: the classical ping test based on the ICMP protocol and a variant using the TCP transport protocol instead.

3.1 Technical description - Ping Test

3.1.1 Classic ICMP Ping Test

Ping is a software tool to test whether a device or host is reachable and connected to
the internet. It is mainly used in computer network administration.
Ping uses the Internet Control Message Protocol (ICMP). It sends echo request packets to the target device or host and waits for an ICMP echo reply. The native implementation of ping mainly reports the reachability of the target (echo received within a time-out) and a summary of the minimum, maximum and mean time from sending the echo request to receiving the echo reply (message round-trip time), but also errors and packet loss.
The ping tool is available for all popular operating systems that have networking capa-
bility including Android devices.
The name "ping" comes from active sonar terminology that sends a pulse of sound and
listens for the echo to detect objects under water [Wikipedia Ping].
Please consider that ping is designed for testing reachability of a network device and
not primarily to measure the round-trip time as in a real application. It uses a dedicated
protocol and there is almost no load to carry. The round-trip time, as a result of ping, is
not always comparable with transport times in real applications. This is because the
network transport can be protocol-aware (and prioritize or delay ICMP) and in a
dynamic scheduling network the transport times can depend on the carried load (e.g.
adding carriers, prioritizing traffic). For these test cases, please consider using the TCP
ping test (Chapter 3.1, "Technical description - Ping Test", on page 25) or the Interac-
tivity test (Chapter 2, "Interactivity Test", on page 7).
In the following schematic, you can see the test sequence of a ping test. The DNS res-
olution phase can be omitted if the IP address is known to the client.


3.1.2 TCP Ping

In addition to the classic Ping test based on the ICMP protocol, we also offer a similar
test based on TCP.
When the goal of the testing is not only to measure the connectivity to a host but also
the round-trip time (RTT) between the two sides, measuring with the TCP protocol has
one main advantage:
ICMP is a very special protocol that is only used for this purpose and never for real
data transport. Therefore, it is likely that ICMP packets are treated differently than TCP
or UDP packets in many networks. From a measurement perspective though it is more
interesting to know the latency of a real transport stream. The Ping TCP test is ideal to
measure the round-trip latency without load. In case the latency under load conditions
is of interest the Interactivity test should be used instead (see Chapter 2, "Interactivity
Test", on page 7).
In practice, a TCP connection is set up to the configured URL and the round-trip time is measured between the TCP [SYN] and [SYN;ACK] packets, see also the schematics below. This procedure is repeated 30 times (by default) with the configured interval in order to get a reliable result. The connection setup timeout for each connection attempt is 2 s.


As it is common for RTT measurements to produce outliers, e.g. when a packet needs to be retransmitted, we recommend using the median and 10th percentile values for evaluation and analysis.
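This recommendation can be applied, for example, as follows (an illustrative sketch using a nearest-rank percentile, not the product's exact statistics):

```python
from statistics import median

def rtt_summary(rtt_ms: list[float]) -> tuple[float, float]:
    """Median and 10th percentile (nearest rank) of a set of RTT
    samples; the percentile serves as a best-case latency estimate
    that is robust against retransmission outliers."""
    ordered = sorted(rtt_ms)
    p10_index = max(0, -(-len(ordered) // 10) - 1)  # ceil(n/10) - 1
    return median(ordered), ordered[p10_index]
```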

3.1.3 TCP Ping on unrooted devices

As the TCP Ping test requires direct access to the IP layer for the SYN;ACK analysis, it is not possible to use exactly the same method on unrooted devices. Instead, the analysis is done on the application layer. A TCP connection is set up to the configured URL and the time is measured for the socket to reach the connected state.
This test shares the advantage of using TCP as an underlying protocol, which is realis-
tic for real data transfer. The evaluation of the round-trip time on the application layer
introduces some additional delay though. Therefore, the average value is expected to
be a few milliseconds higher than the IP layer RTT. Here again we recommend using
the median and 10th percentile value for analysis in order to account for possible outli-
ers.
For this test flavor, the protocol dimension in post processing will be set to "App".
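A minimal application-layer sketch of this flavor (hypothetical example code, not the product implementation; host and port are placeholders supplied by the caller):

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Measure the time for a TCP socket to reach the connected state,
    in milliseconds (application-layer view, as on unrooted devices).
    Raises OSError on connection failure or timeout."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; only the setup time matters
    return (time.perf_counter() - start) * 1000.0
```

As noted above, this application-layer value includes some socket and scheduling overhead and is therefore expected to be a few milliseconds higher than the IP-layer SYN/SYN;ACK round-trip time.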


3.2 Test Configuration

3.2.1 Ping

Host is the hostname or IP address of the host where the echo request is sent to.
Number of pings is the number of ping requests that will be sent in a row.
Interval is the pause between the individual ping requests in milliseconds.
Packet Size is the size in Bytes of the echo request packet.
Timeout is the maximum waiting time in seconds for the echo response before a
request is considered to be failed.
Minimum test duration is the minimum duration in seconds before the next test of the
data session is started, even if the active phase of the ping test ended before, e.g. due
to a failure. This setting helps to reduce oversampling of very bad locations with low or
no coverage.
Maximum test duration is the maximum duration of the test in seconds before the test
gets aborted.

3.2.2 Ping TCP

URL/Host is the web address to which a TCP connection is established. Default is the
connection test URL from Microsoft.
Attempts is the number of connection establishments that are initiated. Default and
recommended value is 30, minimum value is 10.
Interval is the pause between the individual TCP SYN requests in milliseconds.
Default is 100 ms.
Minimum test duration is the minimum duration in seconds before the next test of the
data session is started, even if the active phase of the Ping test ended before, e.g. due
to a failure. This setting helps to reduce oversampling of very bad locations with low or
no coverage.
Maximum test duration is the maximum duration of the test in seconds before the test
gets aborted.


3.3 Measurement Results and KPIs

3.3.1 Test Parameters and Reported Information

You can use the set of parameters and dimensions provided by the ping test for select-
ing and grouping results:
● Host is the hostname, IP address or URL of the host where the request is sent to.
● Packet Size is the size in bytes of the echo request packet. Only available for
ICMP ping tests.
● Protocol can take the following values depending on the type of ping test:
– ICMP
– TCP
– App (see also Chapter 3.1.3, "TCP Ping on unrooted devices", on page 27)
● Sequence Number is the sequence number of one individual ping test in the con-
figured set of tests.

3.3.2 KPIs and Test Results

Status can take the following values in addition to the general test, session and DNS
error codes:
● Completed / OK: The test is successful; the echo reply was returned within the con-
figured test duration.
● ICMP error: Generic ping test errors.
● ICMP timeout: The ping host did not respond in time.
– The configured timeout only applies in the absence of any response. Once responses have been received, ping waits for twice the average RTT before timing out again.
● ICMP host unreachable: The ping host was not available.
● ICMP TTL exceeded: The Time-To-Live (TTL) parameter was set too low and the
echo request could not reach the destination host.
● ICMP truncated: The data size returned is smaller than the data size sent.
● Connection timeout: The TCP connection was not established in time.
● Too few results: The number of individual results is too small to calculate a valid
median or other result. This is the case when more than 2/3 of attempts fail.
● Connection failure: No connection could be established.
● General failure: Any other error.
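The "Too few results" rule above can be sketched as follows (a hypothetical helper mirroring the "more than 2/3 of attempts fail" criterion, not the product code):

```python
def enough_results(successful: int, attempts: int) -> bool:
    """True if enough individual pings succeeded to compute a valid
    median or other aggregate: the test fails only when MORE than
    2/3 of the attempts fail."""
    failed = attempts - successful
    return failed * 3 <= attempts * 2  # failed/attempts <= 2/3
```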
Round Trip Time is the individual round-trip time in milliseconds of:
● One echo request and its corresponding echo reply.
● One TCP SYN and its corresponding TCP SYN;ACK.
● The TCP socket connection establishment (App flavor).


Round Trip Time float is the individual round-trip time in milliseconds, but with a
higher precision in the order of µs.
In case of the ICMP ping test, status and "Round Trip Time" are also reported as KPI
21000. For the TCP ping test, KPI 31000 can be used instead.

KPI ID | KPI Name | Start Time | End Time | Status | Duration
21000 | Retainability - Data - Ping | Send echo request | Receive echo reply | Successful / Failed | RTT (End Time - Start Time)
31000 | Integrity - Data - TCP Roundtrip | TCP [SYN] | TCP [SYN;ACK] | Successful / Failed | RTT (End Time - Start Time)

3.3.3 Statistical Aggregation of Results and KPIs

Count of Ping Tests is the number of individual ping tests in the current data set.
Ping Success Ratio is the percentage of successful individual ping tests compared to
the total "Count of Ping Tests".
Round Trip Time (Average, Min, Max) are the average, minimum and maximum val-
ues of the ping test round-trip time.
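The aggregation rules above can be sketched as follows; the "more than 2/3 failed" threshold for "Too few results" is taken from the status list in Chapter 3.3.2, while the function and field names are illustrative, not the product's internal API.

```python
from statistics import mean

def aggregate_ping_results(rtts_ms):
    """Aggregate individual ping RTTs; None marks a failed attempt."""
    attempts = len(rtts_ms)
    successes = [r for r in rtts_ms if r is not None]
    # More than 2/3 failed attempts -> no valid aggregate can be computed.
    if attempts == 0 or len(successes) < attempts / 3:
        return {"status": "Too few results"}
    return {
        "status": "OK",
        "count": attempts,
        "success_ratio": 100.0 * len(successes) / attempts,
        "rtt_avg": mean(successes),
        "rtt_min": min(successes),
        "rtt_max": max(successes),
    }

print(aggregate_ping_results([24.1, None, 31.0, 26.4]))
```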

Important: "Test Type" or "Protocol" shall be used in SmartAnalytics Scene to
distinguish test results from both approaches.


4 Traceroute Test
Traceroute is a standard network diagnostic tool for displaying possible routes (paths)
and measuring transit delays of packets across an IP network. The history of the route
is recorded as the round-trip times of the packets received from each successive host
(hop) in the route (path). The sum of the mean times in each hop is a measure of the
total time spent to establish the connection. Ping, on the other hand, only computes the
final round-trip times from the destination point.

4.1 Technical Description


The traceroute binary sends a series of packets to the host using either the ICMP,
UDP or TCP protocol.
[Wikipedia Traceroute]: The time-to-live (TTL) value, also known as hop limit, is used
to determine the intermediate routers being traversed towards the destination.
Traceroute sends packets with TTL values that gradually increase from packet to
packet, starting with a TTL value of one. Routers decrement the TTL values of
packets by one when routing and discard packets whose TTL value has reached
zero, returning the ICMP error message ICMP Time Exceeded. For the first set of
packets, the first router receives the packet, decrements the TTL value and drops the
packet because it then has TTL value zero. The router sends an ICMP Time
Exceeded message back to the source. The next set of packets is given a TTL value
of two, so the first router forwards the packets, but the second router drops them and
replies with ICMP Time Exceeded. Proceeding in this way, traceroute uses the
returned ICMP Time Exceeded messages to build a list of routers that the packets
traverse, until the destination is reached and returns an ICMP Destination
Unreachable message if UDP packets are being used, or an ICMP Echo Reply
message if ICMP Echo messages are being used.
The timestamp values returned for each router along the path are the delay (latency)
values, typically measured in milliseconds for each packet.
The sender expects a reply within a specified number of seconds. If a packet is not
acknowledged within the expected interval, an asterisk is displayed. The Internet
protocol does not require packets to take the same route towards a particular
destination, thus the hosts listed might be hosts that other packets have traversed. If
the host at hop #N does not reply, the hop is skipped in the output.
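The TTL mechanism described above can be illustrated with a short simulation. It needs no raw sockets or elevated privileges; the router addresses are made-up documentation addresses, and the function only mimics which router answers for each TTL value.

```python
def traceroute_sim(path, max_hops=30):
    """Simulate which router answers for each TTL on a known path.

    `path` is the ordered list of router addresses ending at the destination.
    Returns (ttl, address) pairs as a traceroute run would discover them.
    """
    hops = []
    for ttl in range(1, max_hops + 1):
        if ttl > len(path):
            break
        # Router number `ttl` decrements the TTL to zero, drops the packet
        # and answers with ICMP Time Exceeded; the destination itself
        # answers with Destination Unreachable (UDP) or Echo Reply (ICMP).
        hops.append((ttl, path[ttl - 1]))
        if ttl == len(path):  # destination reached
            break
    return hops

print(traceroute_sim(["192.0.2.1", "198.51.100.7", "203.0.113.9"]))
```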
In the following schematic, you can see the test sequence of a traceroute test using the
UDP protocol. The sequences with TCP or ICMP protocol look slightly different.


4.2 Test Configuration


Host is the host name or IP address of the host to which the traceroute test is
directed.
Port is the host port to connect to, only needed for UDP and TCP.

The port specified requires an active service "listening" on it. Otherwise, there might
be no answer from the destination host. E.g., UDP port 53 is typically active on DNS
servers, while TCP port 80 is typically active on web servers.

Max. Hops is the maximum number of hops to record, i.e. the maximum TTL value
used. On routes with more hops, the test stops before reaching the target.
Packet Size is the full packet length in Bytes including the IP header.
First TTL is the first TTL value used, so the test starts at the corresponding hop; the
default is 1.

A value higher than 1 is recommended when the local network nodes are known or not
of interest and the test should be shortened by skipping them.

Number of Probes is the number of packets sent with the same TTL, i.e. packets that
will expire after the same number of hops. The default value is 3 in order to even out
singular routing events.
Wait time is the maximum time to wait for a reply per packet in seconds.
Minimum test duration is the minimum duration in seconds before the next test of the
data session is started, even if the active phase of the traceroute test ended earlier,
e.g. due to a failure. This setting helps to reduce oversampling of very bad locations
with low or no coverage.
Maximum test duration is the maximum duration of the test in seconds before the test
gets aborted.
Module is the communication protocol used and can take the following values: ICMP,
UDP or TCP.


UDP is the default protocol and can also be used for unrooted devices. The number of
nodes answering strongly depends on the protocol used, as there needs to be an
active service listening for incoming packets. Often, the traceroute results are more
complete when using TCP or ICMP.

4.3 Measurement Results and KPIs


The traceroute test is not yet supported by SmartAnalytics Scene.

4.3.1 Test Parameters and Reported Information

You can use the set of parameters and dimensions provided by the traceroute test for
selecting and grouping results.
Host is the host name or IP address of the host to which the traceroute test is
directed.
Protocol is the communication protocol used and can take the following values:
● ICMP
● UDP
● TCP
Port is the host port to connect to, only needed for UDP and TCP.
Max. Hops is the maximum number of hops to record, i.e. the maximum TTL value
used.
First TTL is the first TTL value used, so the test starts at the corresponding hop
instead of hop 1.
Packet Size is the full packet length in bytes including the IP header.
Number of Probes per Hop is the number of packets sent with the same TTL, i.e.
packets that will expire after the same number of hops.
Max. Wait Time per Probe is the maximum time to wait for a reply per packet in
seconds.

4.3.2 KPIs and Test Results

Test Result can take the following values in addition to the general test, session and
DNS error codes:
● OK: Successful test
● Traceroute failed: Generic failures with traceroute tool.
● Traceroute connection error: Connection-related errors with the traceroute tool.
● Traceroute initialization error: Initialization errors of traceroute tool.


● Destination host unreachable: All probes against the destination host (last hop)
exceeded the timeout, i.e. the destination host was not reachable.
Test Duration is the duration of the overall traceroute test in seconds.
Avg. RTT Host is the average round-trip time of the probes that reached the host in
milliseconds.
Number of Hops is the number of hops that were needed to reach the host.
KPIs: One accessibility and one retainability KPI are defined for the traceroute test.
The first one is successful if any hop has returned a valid response. The second one
is successful if the overall test is successful and the destination host could be
reached.

KPI ID | KPI Name | Start Time | End Time | Status | Cause
10510 | Accessibility - Data - Traceroute | Test start | First intermediate test result with (partial) success | Successful or failed | If the error is 'All probes exceeded' or 'Destination host unreachable', the cause is set to 'All hops unreachable'.
20510 | Retainability - Data - Traceroute | Test start | Overall test result | Successful or failed | Test error

4.3.3 Intermediate Test Results and Parameters

Intermediate results are reported for every hop.


Hop is the number of the hop.
Last Hop is a flag only set to 1 for the last hop (reaching the destination host).
Min RTT is the minimum round-trip time measured over all probes for the current hop
in milliseconds.
Avg RTT is the average round-trip time measured over all probes for the current hop
in milliseconds.
Max RTT is the maximum round-trip time measured over all probes for the current
hop in milliseconds.
Host(s) is the list of hosts reached with the current number of hops.
Packet size is the full packet length including the IP header in bytes.
Status is the result of the probes with the current hop number:
● OK: All packets with the current hop number led to a valid response in time.
● Probes partially exceeded: At least one of the probes against that hop exceeded
the timeout.
● All probes exceeded: All the probes against that hop exceeded the timeout.
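The per-hop classification and RTT statistics can be sketched with a hypothetical helper function, where None marks a probe that exceeded the timeout:

```python
def hop_summary(probe_rtts_ms):
    """Summarize one hop: (status, min, avg, max) over its probe RTTs."""
    replies = [r for r in probe_rtts_ms if r is not None]
    if not replies:
        # No probe with this TTL was answered in time.
        return ("All probes exceeded", None, None, None)
    if len(replies) < len(probe_rtts_ms):
        status = "Probes partially exceeded"
    else:
        status = "OK"
    return (status, min(replies), sum(replies) / len(replies), max(replies))

print(hop_summary([12.0, 14.0, 13.0]))
```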
Example result displayed in NQDI:


5 Literature
[IETF RFC 5357] IETF: A Two-Way Active Measurement Protocol (TWAMP), 2008
[IETF RFC 5481] IETF: Packet Delay Variation Applicability Statement, 2009
[ITU-T Y.1540] ITU-T: IP packet transfer and availability performance parameters, 2019
[3GPP TS 23.501] 3GPP: System architecture for the 5G System (5GS), 2017
[3GPP TR 22.804] 3GPP: Technical Specification Group Services and System
Aspects; Study on Communication for Automation in Vertical Domains, 2020
[ITU-T G.1051] ITU-T: Latency measurement and interactivity scoring under real
application data traffic patterns, 2023
[Wikipedia Traceroute] Wikipedia: Traceroute, retrieved 01.07.2021.
https://en.wikipedia.org/wiki/Traceroute
[Wikipedia Ping] Wikipedia: Ping (networking utility), retrieved 01.11.2021.
https://en.wikipedia.org/wiki/Ping_(networking_utility)


Annex
A Traffic Patterns

A.1 Real-time E-Gaming


The traffic pattern for e-gaming is derived from typical interactive multi-player
scenarios. It consists of a short initial phase at 100 kbit/s followed by a longer
sustainable phase at 300 kbit/s. This sustainable phase is overlaid by a short phase
with the highest interactivity, driving the bitrate to 1 Mbit/s for 1 s. Finally, there is a
trailing phase at 100 kbit/s again. This pattern of 10 s length covers the individual
phases of a highly interactive game in a typical proportion and order.

eGaming real-time
● Traffic shape: Bursty
● Packets per second: 125 / 375 / 1250
● Packet size UL [Bytes]: 100
● Resulting target bandwidth UL [Mbit/s]: 0.1 / 0.3 / 1
● Packet delay budget [ms]: 2 × 50
● Max. packet error rate: 2 × 10⁻³
● Test duration [s]: 10
● Traffic shape¹ (1 s steps): l m h m m m m l l l

¹ Traffic shape in 1-second steps; l = low, m = medium, h = high, u = ultrahigh target
bandwidth as specified above.


The chosen packet size is 100 bytes, sent at a rate of 125 to 1250 packets per
second. The pattern is the same in uplink and downlink, so that each packet is
reflected with the same size.
In [3GPP TS 23.501], a maximal one-way latency of 50 ms is defined for online
real-time gaming (5QI class 3). Therefore, the two-way latency should not exceed
100 ms. It forms the delay budget for this test case and is used for the calculation of
the ratio of lost packets considered in the interactivity score.
For the parameterization of the interactivity model, the following principles are used:
A fluid video stream produces 60 frames per second (fps), a movie 24. It is assumed
that degradation starts if the channel adds a two-way delay of ~30 ms (two frames of
delay at 60 fps). Furthermore, a degradation to 60 of 100 is assumed in case of a
two-way delay added by the channel of ~60 ms (four frames at 60 fps). These
thresholds can be seen as challenging but refer to highly interactive gaming
applications in high-speed networks such as 5G URLLC.
In addition, it is anticipated that a standard deviation of the PDV of 12 ms reduces the
interactivity as defined by latency by another 10% (D_PDV = 0.9), and that a ratio of
disqualified packets of 5% reduces the perceived interactivity by 20% (D_DQ = 0.8).
The parameterization of the model

IntAct = I_L × D_PDV × D_DQ

with D_PDV = 1 − PDV_stddev / u, D_DQ = 1 − v × P_DQ and

I_L = (1/N) × Σ_{i=1..N} (f_max / f_0) × (1 − 1 / (1 + e^(−(t_i − a)/b)))

results in:

parameter | f_max | a  | b  | u   | v
value     | 100   | 61 | 14 | 120 | 4
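Under this parameterization, the model can be sketched in Python. The normalization factor f_0 is assumed to be 1 here (so that I_L spans 0..100); that assumption, and all names, are illustrative rather than the product implementation.

```python
import math

# eGaming real-time parameter set from the table above.
F_MAX, A, B, U, V = 100, 61, 14, 120, 4

def interactivity_score(rtts_ms, pdv_stddev_ms, dq_ratio, f0=1.0):
    """Sketch of IntAct = I_L x D_PDV x D_DQ for the eGaming parameters."""
    i_l = sum(
        (F_MAX / f0) * (1 - 1 / (1 + math.exp(-(t - A) / B)))
        for t in rtts_ms
    ) / len(rtts_ms)
    d_pdv = 1 - pdv_stddev_ms / U   # 12 ms jitter -> factor 0.9
    d_dq = 1 - V * dq_ratio         # 5% disqualified -> factor 0.8
    return i_l * d_pdv * d_dq
```

With these constants, a two-way latency equal to a = 61 ms yields a latency score of 50, a PDV standard deviation of 12 ms multiplies it by 0.9, and a disqualified-packet ratio of 5% multiplies it by 0.8, matching the anticipations above.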


A.2 HD Video Chat


The traffic pattern for HD video chat is derived from typical chat applications offering
live HD video conferencing. The chosen packet size is 1000 bytes, which is typical for
the transmission of real-time video content. The traffic pattern consists of a short
initial phase that imitates the setup and dial-in of the participants, followed by the
sustainable phase of continuous live video transmission between the chat partners.
In [3GPP TS 23.501], a maximal one-way latency of 150 ms is defined for
conversational video (5QI class 2). The two-way latency therefore should not exceed
300 ms, and this value forms the delay budget for this test case.

Video chat HD
● Traffic shape: Non-constant
● Packets per second: 62 / 125
● Packet size UL [Bytes]: 1000
● Resulting target bandwidth UL [Mbit/s]: 0.5 / 1
● Packet delay budget [ms]: 2 × 150
● Max. packet error rate: 2 × 10⁻³
● Test duration [s]: 10
● Traffic shape¹ (1 s steps): l l h h h h h h h h

¹ Traffic shape in 1-second steps; l = low, m = medium, h = high, u = ultrahigh target
bandwidth as specified above.


For the parameterization of the interactivity model, the following principles are used:
While running a video chat with interactions and feedback, a two-way delay of
100 ms corresponds to the length of a spoken syllable; a beginning degradation is
anticipated for this range (90 of 100). The interactions become very difficult in case of
a response delay >250 ms.
It is further expected that while using an HD video chat, a user may have more
relaxed expectations on PDV than for e-gaming, but is more sensitive to lost or
disqualified packets, because video meetings are usually based on unreliable
transmission and missed frames are visible as image distortions.
To reflect this, it is anticipated that a standard deviation of the PDV of 15 ms reduces
the interactivity as defined by latency by another 10% (D_PDV = 0.9), and that a ratio
of disqualified packets of 1% already reduces the perceived interactivity by 30%
(D_DQ = 0.7).
The parameterization of the model

IntAct = I_L × D_PDV × D_DQ

with D_PDV = 1 − PDV_stddev / u, D_DQ = 1 − v × P_DQ and

I_L = (1/N) × Σ_{i=1..N} (f_max / f_0) × (1 − 1 / (1 + e^(−(t_i − a)/b)))

results in:

parameter | f_max | a   | b  | u   | v
value     | 100   | 215 | 50 | 150 | 30

A.3 Interactive Remote Meeting


The traffic pattern for an interactive remote meeting is derived from typical remote
meeting applications. The chosen packet size is up to 1000 bytes, which is typical for
the transmission of such content. The pattern consists of a sustainable throughput at
500 kbit/s, as for continuous sharing of content like video and audio. This sustainable
phase is overlaid by two short phases at a bitrate of 2 Mbit/s for 1 s each, one in the
uplink and the other in the
downlink direction. These two peaks imitate the sharing of additional content like a
slide change, the sending of a document or an animation. The network is challenged
to rapidly provide more resources without increasing transport latency.
The applied delay budget is the same as for HD video chat, that is, a maximal
one-way latency for conversational video of 150 ms, resulting in a maximum
accepted two-way latency of 300 ms for the calculation of the packet loss ratio of the
interactivity test.

Online meeting
● Traffic type: Bursty
● Packets per second: 62 / 248
● Packet size UL [Bytes]: 1008 / 252
● Resulting target bandwidth UL/DL [Mbit/s]: 0.5 / 2
● Packet delay budget [ms]: 2 × 150
● Max. packet error rate: 2 × 10⁻³
● Test duration [s]: 10
● Traffic shape¹ (1 s steps): DL: l l l l l l l h l l – UL: l l h l l l l l l l

¹ Traffic shape in 1-second steps; l = low, m = medium, h = high, u = ultrahigh target
bandwidth as specified above.

For parameterization of the interactivity model, the same model is used as for the HD
video chat. The parameterization of the model


IntAct = I_L × D_PDV × D_DQ

with D_PDV = 1 − PDV_stddev / u, D_DQ = 1 − v × P_DQ and

I_L = (1/N) × Σ_{i=1..N} (f_max / f_0) × (1 − 1 / (1 + e^(−(t_i − a)/b)))

results in:

parameter | f_max | a   | b  | u   | v
value     | 100   | 215 | 50 | 150 | 30

A.4 VR / Cloud-gaming HD
This traffic pattern is based on the analysis of real cloud-gaming services like Google
Stadia, PlayStation Now and Xbox Cloud Gaming. For these services, the individual
video frames of a gaming experience are calculated on the server side based on input
from the client (player interaction) and streamed in real-time to the client device. The
services as of 2022 offer HD video experiences up to a resolution of 1080p with frame
rates up to 60 frames per second. The requirements on latency and interactivity are
identical to the eGaming real-time application described in Chapter A.1, "Real-time E-
Gaming", on page 37, while the bandwidth requirements are much higher as a live
video needs to be streamed. Therefore, the traffic is very asymmetric, with only a few
hundred kB in upstream and several MB in downstream.

VR / Cloud-gaming HD
● Traffic type: Changing, DL dominant
● Packets per second: 60
● Packet size UL [Bytes]: 540
● Packet size DL [Bytes]: 5'460 / 10'920
● Resulting target bandwidth UL [Mbit/s]: 0.25
● Resulting target bandwidth DL [Mbit/s]: l: 2.5 / h: 5
● Packet delay budget [ms]: 100
● Max. packet error rate: 2 × 10⁻³
● Test duration [s]: 10
● Traffic shape (1 s steps): DL: l l h h h h h h l l – UL: l l l l l l l l l l


In [3GPP TS 23.501], a maximal one-way latency of 50 ms is defined for online
real-time gaming (5QI class 3). The two-way latency therefore should not exceed
100 ms, which forms the delay budget for this test case. For the parameterization of
the interactivity model, the same model is used as for eGaming real-time, as the
requirements on latency and interactivity remain the same. The parameterization of
the model
IntAct = I_L × D_PDV × D_DQ

with D_PDV = 1 − PDV_stddev / u, D_DQ = 1 − v × P_DQ and

I_L = (1/N) × Σ_{i=1..N} (f_max / f_0) × (1 − 1 / (1 + e^(−(t_i − a)/b)))

results in:

parameter | f_max | a  | b  | u   | v
value     | 100   | 61 | 14 | 120 | 4


A.5 I4.0 Process Automation


This traffic pattern is based on 3GPP 5QI class 82 "Discrete automation" and on the
requirements posed on use cases in the application area Industry 4.0 / Factory
Automation as described in [3GPP TR 22.804].
For example, in the use case "Process automation – closed-loop control", several
sensors are installed in a plant and each sensor performs continuous measurements.
The measurement data are transported to a controller, which takes decisions to set
actuators. This use case has very stringent requirements in terms of latency, service
availability and determinism (small jitter).
Very similar are the requirements for certain use cases involving mobile robots in
smart factories, e.g. "Motion planning" or "Cooperative driving".
With no video transmission involved, the data flow is very symmetric and low in
throughput, with a very restrictive packet delay budget of 20 ms round-trip and a very
restrictive error rate (as in 5QI class 82). In order to measure the small error rate with
a minimum of certainty, the test needs to be longer than most of the other traffic
patterns.

I4.0 Process automation
● Traffic type: Constant low rate
● Packets per second: 625
● Packet size UL [Bytes]: 100
● Packet size DL [Bytes]: 100
● Resulting target bandwidth UL [Mbit/s]: 0.5
● Resulting target bandwidth DL [Mbit/s]: 0.5
● Packet delay budget [ms]: 20
● Max. packet error rate: 2 × 10⁻⁴
● Test duration [s]: 30


For the parameterization of the interactivity model, the requirements are much higher
than for the other target applications and can usually only be achieved in 5G
standalone networks. The following principles are used: In the application area of
process automation, the setting of actuators and similar operations needs to be very
deterministic, and the service quality drops rapidly when approaching the maximum
packet delay budget, from around 90% at 10 ms to 10% at 20 ms round-trip latency.
Furthermore, the requirement for the maximum packet error rate is very strict,
reducing the service quality already to 50% at only 0.02% of disqualified packets.
The PDV is supposed to be kept within a margin of around 10% of the latency,
resulting in a substantial drop in quality of around 25% at 2.5 ms packet delay
variation standard deviation.
IntAct = I_L × D_PDV × D_DQ

with D_PDV = 1 − PDV_stddev / u, D_DQ = 1 − v × P_DQ and

I_L = (1/N) × Σ_{i=1..N} (f_max / f_0) × (1 − 1 / (1 + e^(−(t_i − a)/b)))

results in:

parameter | f_max | a  | b | u  | v
value     | 100   | 15 | 2 | 10 | 2500

A.6 Technical Patterns


In addition to the traffic patterns tailored to specific target applications, we also offer a
set of constant rate patterns. They are intended to be used, e.g., in an optimization
context. The results are easier to interpret, and with the increased packet delay
budget it is possible to further analyze the behavior of packets with long delays. In
addition, the tests are either much shorter to yield a fast result, or much longer to
cover more network events, like handovers, within one test.
As these traffic patterns do not have a target application, there is no user expectation
or experience associated that could be modeled. Therefore, no interactivity score is
delivered when using constant traffic patterns.


The following constant traffic patterns are available:


● Constant low: A short traffic pattern resulting in a target bandwidth of only
100 kbit/s. Ideal for measuring in poor network situations.
● Constant medium: A short traffic pattern resulting in a target bandwidth of 1 Mbit/s,
which is typically high enough for many services. Successful tests with this pattern
can be a proof of service for less demanding interactive applications.
● Constant high: A short traffic pattern resulting in a target bandwidth of 15 Mbit/s.
This pattern is quite demanding, mainly due to the high uplink throughput, and thus
can be used for measuring in very good network situations.
● Constant long: A 30-second traffic pattern resulting in a low target bandwidth of
300 kbit/s. Besides the longer duration, it also has a high packet delay budget of
2 seconds. This pattern is intended for deeper analysis of the packet delay
behavior.
● Constant DL dominant: A medium-length traffic pattern with a 10 times higher data
rate in downlink than in uplink and a high packet delay budget. This pattern should
be used when the channel is supposed to have a higher bandwidth in downlink, as
is the case for many consumer-driven use cases.
● Constant DL dominant high: A 24 s long traffic pattern with a highly demanding
uplink data rate of 5 Mbit/s and a downlink data rate of 20 Mbit/s. This covers the
needs of practically all consumer-driven use cases.
● Constant UL dominant: A medium-length traffic pattern with a 10 times higher data
rate in uplink than in downlink and a high packet delay budget. This pattern should
be used when the channel is supposed to have a higher bandwidth in uplink, as
might be the case for producer-driven use cases.
● Stepwise DL dominant: A medium-length traffic pattern with a 15 times higher data
rate in downlink than in uplink, where the relation between the two remains fixed
but the absolute data rate changes during the test in steps in the form low –
medium – high – medium – low. This pattern can be used to test the dependency
of the data channel on the network load.

Constant low
● Traffic type: Constant low rate
● Packets per second: 85
● Packet size UL / DL [Bytes]: 150 / 150
● Resulting target bandwidth UL / DL: 100 kbps / 100 kbps
● Packet delay budget [ms]: 500
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 2

Constant medium
● Traffic type: Constant medium rate
● Packets per second: 200
● Packet size UL / DL [Bytes]: 650 / 650
● Resulting target bandwidth UL / DL: 1 Mbps / 1 Mbps
● Packet delay budget [ms]: 500
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 2

Constant high
● Traffic type: Constant high rate
● Packets per second: 1300
● Packet size UL / DL [Bytes]: 1450 / 1450
● Resulting target bandwidth UL / DL: 15 Mbps / 15 Mbps
● Packet delay budget [ms]: 500
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 2

Constant long
● Traffic type: Constant low-medium rate, long duration
● Packets per second: 150
● Packet size UL / DL [Bytes]: 250 / 250
● Resulting target bandwidth UL / DL: 300 kbps / 300 kbps
● Packet delay budget [ms]: 2000
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 30

Constant DL dominant
● Traffic type: Constant high rate in DL, low rate in UL
● Packets per second: 1040
● Packet size UL / DL [Bytes]: 120 / 1200
● Resulting target bandwidth UL / DL: 1 Mbps / 10 Mbps
● Packet delay budget [ms]: 2000
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 10

Constant DL dominant high
● Traffic type: Constant high rate in DL, medium rate in UL
● Packets per second: 2083
● Packet size UL / DL [Bytes]: 300 / 1200
● Resulting target bandwidth UL / DL: 5 Mbps / 20 Mbps
● Packet delay budget [ms]: 2000
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 24

Constant UL dominant
● Traffic type: Constant low rate in DL, high rate in UL
● Packets per second: 1040
● Packet size UL / DL [Bytes]: 1200 / 120
● Resulting target bandwidth UL / DL: 10 Mbps / 1 Mbps
● Packet delay budget [ms]: 2000
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 10

Stepwise DL dominant
● Traffic type: High DL, low UL rate, changing stepwise
● Packets per second: l: 134, m: 672, h: 2'688
● Packet size UL / DL [Bytes]: 93 / 1400
● Resulting target bandwidth UL: l: 0.1 Mbps, m: 0.5 Mbps, h: 2 Mbps
● Resulting target bandwidth DL: l: 1.5 Mbps, m: 7.5 Mbps, h: 30 Mbps
● Packet delay budget [ms]: 2000
● Max. packet error rate: 2 × 10⁻²
● Test duration [s]: 10
● Traffic shape (1 s steps): l l m m h h m m l l


B Latency, Jitter and Server Location

B.1 Base Latency


The latency is not a constant value in a given channel. It would be if there were just a
physical link between two points and the packets were transmitted untouched. In
such a case, the latency is rather constant and depends mainly on the length and the
type of the medium.
Today's networks are much more complex: a transmission from a user device to the
far-end side consists of many different – also physically different – parts. There is the
airlink, transmissions in the core network, cable interfaces to the "outside world", and
the CDN that can be in the cloud or a private server elsewhere, also connected by
cable that can be fiber or even copper. It is not only the pure physical media and their
lengths, but rather the interfaces and nodes between those connections that receive
the packets and provide them to the next hop. This is often combined with queuing
mechanisms: because of time division transport, the packets have to be sorted into
the transport schedule.
Considering this, there is at first a dependency of the actual transport time on the
pure distance and the transport media themselves. This can be seen as a sort of
"physically given latency". It cannot be reached in practice because there are
processing components in place, too. If we consider these processing components
operating under optimal conditions (immediate passing through of the packets, no
queuing), we get a "base latency" for a given connection, which is the lower limit. On
top of this "base latency" there are all the additional processing times, dynamic
detours and the queuing buffers of all the network components under realistic
conditions. These additional delays let the measured delay spread around a mean
value above the "base latency". This is addressed by the KPIs measured, e.g., in the
interactivity test. The "Round-trip Latency (10th percentile)" is a good approximation
of the two-way transport time that is reachable under best-case conditions and can
be seen as the "base latency" given by the physical media, their lengths and all
components with shortest buffering (ideal scheduling). The "Round-trip Latency
(median)" indicates the two-way transport time that is the "base latency" plus the
average of all processing and buffering times under the given realistic field
conditions.
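The two estimates described above (a low percentile as an approximation of the base latency, the median as the typical two-way transport time) can be sketched as follows; the nearest-rank percentile definition and the sample values are illustrative only:

```python
import math
from statistics import median

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of latency samples."""
    ranked = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[k]

# Two-way latency samples in milliseconds (made-up values).
rtts = [31, 30, 45, 33, 90, 32, 38, 31, 60, 34]
base_latency = percentile(rtts, 10)   # close to the best-case transport time
typical_latency = median(rtts)        # base latency plus average processing
print(base_latency, typical_latency)
```

The gap between the two values reflects the processing and queuing delays added on top of the base latency under realistic conditions.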

B.2 Latency and Server Position


When defining the scenario and setup for latency and interactivity measurements, it
is important to decide what the measurement is targeting. Is the measurement
focusing on the delay of the mobile link or – as for a user – on the path to a cloud
instance where, e.g., a gaming server is hosted?
The location of the server/reflector becomes important because the further it is from
the network edge and outside the core network, the higher the latency. Therefore the

Test Description 3646.3142.02 ─ 24.0 48


R&S SwissQual AG® Latency, Jitter and Server Location
Latency and Server Position

interactivity will be dominated by the content delivery network to be passed, its techni-
cal means and how it is connected.
If the latency and interactivity measurement should focus on the mobile link, it is
important to place the server or a TWAMP responder as close as possible to the
network edge in the operator's core network.
If the measurement target is a realistic emulation of interactive cloud services, e.g. of
a real-time eGaming experience, it is better to position the server in the cloud or even
in a private network where eGaming service providers are usually located.
When actually setting up the test, the question arises of the influence of the distance
from the client's position to this cloud server. Is there a measurable difference in
latency to a server at 1000 or 5000 km distance?
To answer this question, we conducted a study using the interactivity test. Clients
were located in 3 different regions worldwide, and measurements to 8 servers in 7
countries were conducted.
These example results are shown in the graph below. The mobile access was in all
cases a well-conditioned LTE connection. The two-way latency to a nearby cloud
server connected via this good LTE network and the content delivery network is
around 30 ms. A server at a distance of 4000 km leads to a two-way latency of
around 100 ms, and a server on the other side of the globe to 200 – 300 ms.
As you can see in the left diagram, there is a more or less linear dependency
between the median round-trip transport time measured by the interactivity test and
the geographical distance between the client and the server. Here the individual
packet delay jitters are part of the values. The right diagram reports the 10th
percentile of the two-way latencies instead. This value is closer to the "base latency"
and best-case transport. As a consequence, the two-way latencies are not just a bit
shorter, but primarily less scattered within a narrower margin.


Nevertheless, there are a few outliers where the two-way latency is visibly higher
than expected in comparison. It must be noted that these are measurements to the
same server, just from three different client locations. This strongly indicates that this
server is badly connected to the internet.

B.3 Jitter and Server Position


In addition, there was also an investigation of the packet jitter in relation to the server
distance, to answer the question whether the delay jitter (packet delay variation,
PDV) increases with the distance and the potentially increasing number of hops. The
following diagram illustrates this for four individual mobile operators to the same set
of servers as in the previous diagram.


It is obvious that the delay jitter does not depend on the geographical distance to the
server. It can be seen as constant for a mobile operator, while for each operator a
different packet delay jitter was measured. As a consequence, this means that the
jitter is introduced in the given RAN access and the mobile core network
configuration, and not in the CDN. This also implies that all queuing and scheduling
mechanisms are concentrated in the local configuration on the client side. Neither the
distance to the server nor the number of hops has a considerable influence on the
packet delay variation.

B.4 Conclusion
For interactivity tests with traffic patterns like eGaming and video conferencing,
where the host is placed in the CDN, we propose to use a server within a region of
around 2000 km from the client device location. This leads to a round-trip latency
that is not dominated by the distance and is also realistic, as popular services usually
offer servers in every larger region.
With the emerging URLLC use cases, this kind of server placement will not be
sufficient anymore; the server will have to be very close to the network edge.
