
NETWORK PERFORMANCE SCORE

White paper | Version 01.00 | Dr. Jens Berger, Johanna Sochos, Dr. Marija Stoilkovic
CONTENTS
1 Introduction ..........................................................................................................................4

2  Basic structure .....................................................................................................................5

3  Voice telephony ....................................................................................................................6
3.1  Voice telephony contributors ....................................................................................................... 6
3.1.1  Accessibility and retainability – success ratio ................................................................................... 6
3.1.2  Call setup time ................................................................................................................................... 6
3.1.3  Perceptual objective listening quality analysis (POLQA) in line with ITU-T P.863 ............................. 7
3.2  Contributors transformation to a percentage scale ..................................................................... 8

4  Data services ........................................................................................................................9
4.1  Data transfer services .................................................................................................................. 9
4.1.1  Availability/accessibility – HTTP UL/DL success ratio  ...................................................................... 9
4.1.2  Data transfer performance – HTTP DL/UL throughput  ................................................................... 10
4.1.3  Data transfer services – contribution and weighting ...................................................................... 11
4.2  Video streaming services ........................................................................................................... 11
4.2.1  Typical video streaming service structure ........................................................................................ 12
4.2.2  Video streaming services performance contributors ....................................................................... 12
4.2.3  Video streaming services – contribution and weighting ................................................................. 13
4.3  HTTP browsing and social media .............................................................................................. 13
4.3.1  HTTP browsing ................................................................................................................................. 13
4.3.2  Browsing/social media – contribution and weighting ..................................................................... 15

5  Regions and final aggregation categorization ....................................................................16

6  Point score application .......................................................................................................17

7 Summary ............................................................................................................................20

Appendix A .........................................................................................................................21

A-1 Testing HTTP browsing in benchmarking campaigns ........................................................21


A-1.1 Introduction ............................................................................................................................... 21
A-1.2  Selection criteria for websites ................................................................................................... 21
A-1.3  Best practice for selecting websites ......................................................................................... 23
A-1.4  Websites in benchmarking campaigns – statistical analysis  .................................................. 23

A-2  Measurement scenarios .....................................................................................................25


A-2.1  Job structures ........................................................................................................................... 26
A-2.1.1  Call session ...................................................................................................................................... 26
A-2.1.2  Data session ..................................................................................................................................... 26
A-2.2  Voice test settings ..................................................................................................................... 27
A-2.2.1  Double ended call settings .............................................................................................................. 27
A-2.3  Data test settings ...................................................................................................................... 27
A-2.3.1  Ping testing ...................................................................................................................................... 27
A-2.3.2  Data transfer testing ........................................................................................................................ 28
A-2.3.3  Video streaming service testing ....................................................................................................... 30
A-2.3.4  HTTP browsing and social media service testing ............................................................................ 30
A-2.4  Rohde & Schwarz measurement systems .................................................................................. 31
A-2.5  NPS campaign setup in SmartBenchmarker ............................................................................. 33
A-2.6 Postprocessing .......................................................................................................................... 35

Appendix B  ........................................................................................................................37

B-1  Network performance score (NPS V1.0)  ...........................................................................37


B-1.1  KPIs, weightings and thresholds ............................................................................................... 37
B-1.2  Maximum scoring points per category ...................................................................................... 38
B-1.3  Improvement in points ............................................................................................................... 39
B-1.4 Abbreviations ............................................................................................................................. 41

Customer support ...............................................................................................................42
Technical support – where and when you need it ..................................................................... 42
Up-to-date information and upgrades ....................................................................................... 42
Regional contact ........................................................................................................................ 42

This white paper describes the structure and application of an integrated and transparent
methodology to characterize mobile network performance.



1  INTRODUCTION
The perceived technical performance of a network depends on how satisfied users are
with the quality and availability of telecommunications services. Smartphones are not
only used for telephony and text messaging services; today, they are predominantly used
for over-the-top (OTT) data services.

There is a demand for an efficient method of calculating an overall score that reflects the
perceived technical performance of a network or one of its subsets, such as a region, a
period of time or a technology. The method described here considers and weights the key
performance indicators (KPI) for a wide range of services that are essential for and rep-
resentative of the service quality and combines them into an overall performance score.
This score can be calculated for individual regions such as cities, highways and popular
areas. The scores of the individual regions are then aggregated in an overall network per-
formance score.

There are two approaches to aggregation:


►► Applying a scaling of 0 % to 100 % to each level. This allows direct comparison of
individual regions or service types and is used for benchmarking purposes.
►► Assigning points to each individual subscore instead of applying a scaling. The
maximum number of points for a KPI or a subscore is defined by its weighting in the
overall score based on the weighting of the service category, e.g. telephony or data,
and the weighting of the regional category, e.g. city or road. All points are accumulated
to determine the overall score in points. This point scale (from 0 to 1000) can be
considered a network optimization tool since it makes it easy to identify the best place
to improve the overall performance.

Between these two scales, there is a transformation of the applied weightings for the in-
dividual services. Section 2 Basic structure to section 5 Regions and final aggregation
categorization explain the KPIs used and the structure of the score based on the percent-
age score. The point score is explained separately in section 6 Point score application.

The scoring mechanism allows very efficient comparison of operators in a market, of dif-
ferent measurement campaigns in regions and countries, or before and after deploy-
ment of new technology or software. The transparent structure of the score allows effi-
cient drilldown to the region, service or even the KPIs responsible for a nonoptimal overall
score.

The scoring methodology is in line with the current state of the ETSI TR 103 559 V1.1.1
and is available in Rohde & Schwarz mobile network testing (MNT) products as network
performance score version 1.1.

This score only takes into account the technical performance of the services; other dimen-
sions of user satisfaction such as billing, tariffs and support quality are not considered.

2  BASIC STRUCTURE
The structure of the network performance score is highly transparent and consists of dif-
ferent layers of weighting and accumulation.

On the technical side, the score is based on telephony and data services subscores, each
of which is scaled separately from 0 % to 100 %. Each of these two subscores consists
of a set of comprehensive KPIs or contributors. Today, the subscores have weightings of
40 % telephony and 60 % data services and together form the complete network score.
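Written as a formula (a simple restatement of the weightings above, with S_telephony and
S_data denoting the two subscores on the 0 % to 100 % scale):

$$\text{Network score} = 0.4 \times S_{\text{telephony}} + 0.6 \times S_{\text{data}}$$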

Fig. 1: Network performance score basic structure
[Figure: the telephony subscore (0 % to 100 %, weighting 40 %) and the data services subscore (0 % to 100 %, weighting 60 %) combine into the overall score (0 % to 100 %)]

It is possible to apply an additional intermediate weighting layer that gives different
weightings for individual regions and categories such as cities, connecting roads,
hotspots and rural areas. The weighted and cumulative scores for the subregions again
range from 0 % to 100 %.

Fig. 2: Network performance score subregions
[Figure: for each regional category (cities, connecting roads, ..., spots), the telephony and data services subscores are combined into a regional score from 0 % to 100 %]
The number, categorization and weighting of these regions is flexible and can be defined
to meet regional or national needs. This regional categorization is described in section 5
Regions and final aggregation categorization.



3  VOICE TELEPHONY
The telephony subscore is based on the end-user experience of telephony services.
Service availability, accessibility, retainability and performance are evaluated to determine
the score.

3.1  Voice telephony contributors


The contributors (KPIs) for telephony performance are:
►► Call setup success ratio (CSSR)
►► Call drop ratio (CDR)
►► Call setup time (CST) average
►► CST > 15 s ratio
►► CST 10th percentile
►► Voice mean opinion score (MOS) average
►► Voice MOS < 1.6 ratio
►► Voice MOS 90th percentile

The contributors are described in more detail in the following.

3.1.1  Accessibility and retainability – success ratio


Call setup success ratio (CSSR) is the percentage of all attempts to make a call that
­result in a connection to the dialed number. Typical ratios are between 90 % and 100 %.

CSSR is a straightforward calculation:


CSSR = Number of successfully established calls / Number of attempts

Note:  Successfully established calls include completed calls and established but dropped
calls. Attempts include completed, dropped and failed calls.

Call drop ratio (CDR) is the percentage of telephone calls that were cut off due to tech-
nical reasons before the speaking parties finished their conversation and before one of
them intentionally hung up. This KPI is measured as a percentage of all successfully es-
tablished calls. Typical scores are < 10 %.

The formula only considers completed and dropped calls as successfully established:
CDR = Number of dropped calls / (Number of completed calls + Number of dropped calls)
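As a minimal illustration of these two ratios, the following Python sketch computes them
from call outcome counts (the function name and the example numbers are hypothetical,
not part of any Rohde & Schwarz product):

```python
def call_ratios(completed: int, dropped: int, failed: int) -> tuple[float, float]:
    """Return (CSSR, CDR) in percent from call outcome counts.

    Successfully established calls = completed + dropped calls.
    Attempts = completed + dropped + failed calls.
    """
    established = completed + dropped
    attempts = established + failed
    cssr = 100.0 * established / attempts if attempts else 0.0
    cdr = 100.0 * dropped / established if established else 0.0
    return cssr, cdr

# Hypothetical campaign: 950 completed, 20 dropped and 30 failed calls
cssr, cdr = call_ratios(completed=950, dropped=20, failed=30)
# cssr = 97.0 (%), cdr ≈ 2.06 (%)
```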

3.1.2  Call setup time


In Rohde & Schwarz SwissQual measurement systems, the call setup time (CST) is taken
from the application layer. The CST is the time interval from the point when the user starts
dialing until the call is reported as connected to the user. It takes into account both the
telephone software’s configuration and processing times, and results in a setup time as
perceived by the user.

Note:  There is a difference between the CST described above and the shorter call setup
times measured at the signaling level.

Call setup time (CST) average is the overall average performance of the network or of
the applied selection of measurements. This value is calculated as the average of all mea-
sured CSTs for all completed and dropped calls.

CST > 15 s ratio is a KPI used to identify poor performers. The contribution of this KPI is
usually important due to the fact that users have a very negative perception when some-
thing goes wrong, while they consider excellent service to be the norm.

CST 10th percentile is the threshold below which the shortest 10 % of CST values fall.
This score rewards best performers and gives an indication of the shortest CST reachable
in a technology or region. The 10th percentile value also indicates a good practice expec-
tation of what is possible with respect to the CST for a region, technology or weighting.

3.1.3  Perceptual objective listening quality analysis (POLQA) in line with ITU-T P.863
To assess the speech quality provided by mobile operators, three KPIs are defined based
on the ITU-T P.863 (POLQA) MOS. ITU-T P.863 is used in its super-wideband/fullband
mode to assess HD voice/wideband channels correctly. The applied ITU-T P.863 algorithm
considers the full audio spectrum applied to EVS-SWB and FB codecs as deployed in
VoLTE.

The POLQA scores are evaluated per speech sample, each call delivering multiple individ-
ual scores into the analysis in both directions of the conversation (half-duplex).

The absolute POLQA score depends on the test speech signal used. It is recommended
to use reference signals specified in ITU P.501 Annex D and apply the same signal for all
measurements in a campaign for comparison reasons. A good example is the English test
sample “EN_fm_P501”, which is part of the Rohde & Schwarz system installation.

Voice MOS average measures the overall, average speech quality performance of a
network or a selection of measurements. This KPI is the plain average of all considered
POLQA scores without any further preselection, exclusion or weighting. Typical MOS
scores are around 3.0 for narrowband channels and 3.5 to 3.7 for wideband (HD voice)
channels.

Note:  Silence in the audio channel is not part of the mean opinion score (MOS); it is in-
directly considered, either by failed calls due to silence or by dropped calls where
silence typically occurs shortly before a call drops.

Voice MOS < 1.6 ratio is the ratio of very bad speech samples. Considering the strong
negative perception of low quality, this score explicitly penalizes high ratios of bad
samples.

Voice MOS 90th percentile is the threshold above which the best 10 % of voice MOS
values fall. It rewards good performers, keeping in mind that users perceive very good
performance very positively. It also gives an indication of the MOS scores that are attain-
able with a given setup or technology (based on the applied selection of measurement
data).



3.2  Contributors transformation to a percentage scale
The described contributors are scaled according to their original units, i.e. seconds for the
CST, MOS for speech quality and percentage for all ratios. To ensure the transparency and
comparability of the actual contribution of each contributor/KPI to the voice telephony
subscore, each contributor is also scaled on a percentage scale.

The transformation applies a linear weighting of 0 % to 100 % between a bad and good
threshold. Scores outside of these boundaries stay saturated.

Fig. 3: Linear weighting
[Figure: KPI values are mapped linearly from 0 % at the low ("bad") threshold to 100 % at the high ("good") threshold; values outside this range are saturated]

Generally, 0 % is assigned to the bad threshold and 100 % is assigned to the good thresh-
old. The assignment depends on the contributor: for the call success ratio or the average
speech quality, a high score is used as the good threshold; for the call drop ratio or the
average call setup time, a high score is considered bad and is used as the bad threshold.
The terms bad and good refer to the contribution in terms of quality.

For each contributor, bad and good thresholds are defined, as is its weighting in the
telephony subscore. Table 1 shows these thresholds and weightings as applied in the
initial Rohde & Schwarz SwissQual implementation.

For each contributor, the following formula is applied:


Contributor_Norm = 100 % × Weighting_Contr × (KPI value – Bad threshold) / (Good threshold – Bad threshold)

Table 1: NPS voice contributors (Rohde & Schwarz implementation)

| Contributor | Bad threshold | Good threshold | Weighting in telephony |
|---|---|---|---|
| Call setup success ratio (CSSR) | 85.00 % | 100.00 % | 0.3125 |
| Call drop ratio (CDR) | 10.00 % | 0.00 % | 0.3750 |
| Call setup time (CST) average | 12.00 s | 4.50 s | 0.0625 |
| CST > 15 s ratio | 3.00 % | 0.00 % | 0.0875 |
| CST 10th percentile | 8.00 s | 4.00 s | 0.0375 |
| Voice MOS average | 2.00 | 4.30 | 0.0438 |
| Voice MOS < 1.6 ratio | 10.00 % | 0.00 % | 0.0562 |
| Voice MOS 90th percentile | 4.00 | 4.75 | 0.0250 |
| Sum | | | 1.0 |
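The clamped linear mapping from section 3.2 together with the thresholds and weightings
of Table 1 can be summarized in a few lines of Python. This is a sketch for illustration
only; the data structure and function names are not taken from the Rohde & Schwarz
implementation:

```python
# (bad threshold, good threshold, weighting in telephony) per contributor, see Table 1
VOICE_CONTRIBUTORS = {
    "cssr":             (85.0, 100.0, 0.3125),
    "cdr":              (10.0,   0.0, 0.3750),
    "cst_average_s":    (12.0,   4.5, 0.0625),
    "cst_gt_15s_ratio": ( 3.0,   0.0, 0.0875),
    "cst_p10_s":        ( 8.0,   4.0, 0.0375),
    "mos_average":      ( 2.0,   4.3, 0.0438),
    "mos_lt_1_6_ratio": (10.0,   0.0, 0.0562),
    "mos_p90":          ( 4.0,  4.75, 0.0250),
}

def normalize(value: float, bad: float, good: float) -> float:
    """Linear mapping of a KPI value to 0..1, saturated outside the thresholds."""
    x = (value - bad) / (good - bad)
    return min(max(x, 0.0), 1.0)

def telephony_subscore(kpis: dict[str, float]) -> float:
    """Weighted telephony subscore on the 0 % to 100 % scale."""
    return 100.0 * sum(
        weight * normalize(kpis[name], bad, good)
        for name, (bad, good, weight) in VOICE_CONTRIBUTORS.items()
    )
```

For example, a call drop ratio of 2 % maps to normalize(2.0, 10.0, 0.0) = 0.8 and therefore
contributes 100 % × 0.375 × 0.8 = 30 % to the telephony subscore.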

4  DATA SERVICES
The data services subscore consists of three areas of contributors addressing different
types of services and characterizing different types of requests in a network:
►► (Plain) data transfer (HTTP) (25 %)
►► Video streaming (22 %)
►► HTTP browsing (38 %) and social media (15 %)

Similar to voice telephony, each area consists of a set of individual contributors quantify-
ing the typical midrange, rewarding outstanding performers and giving an extra bonus for
operators with little or no bad performance.

4.1  Data transfer services


The plain data transfer performance is measured by HTTP download and upload tests.
Availability/accessibility and transfer performance are evaluated separately.

For availability/accessibility, a file of fixed size is transferred completely (downloaded or uploaded).

For the transfer performance, multiple connections are opened and the transfer rate is
measured for a given time. This test is also known as the capacity test.

The individual contributors to this area are:


►► HTTP UL/DL success ratio
►► HTTP DL throughput average
►► HTTP DL throughput 10th percentile
►► HTTP DL throughput 90th percentile
►► HTTP UL throughput average
►► HTTP UL throughput 10th percentile
►► HTTP UL throughput 90th percentile

4.1.1  Availability/accessibility – HTTP UL/DL success ratio


The HTTP UL/DL success ratio measures the data service availability/accessibility. It is the
ratio of successfully completed tests (completed downloads and uploads) to all started
tests. It rates both nonavailability and incomplete uploads and downloads in one score.
HTTP UL/DL success ratio = Number of successfully completed tests / Number of attempts

Successfully completed tests are tests with ErrorCode = 0 (state = success). Attempts
include successfully completed tests and tests with ErrorCode ≠ 0 (state = test timeout,
HTTP request timeout, file transfer failed, etc.).

The applied settings for the HTTP UL/DL tests are:


►► Uplink:
–– File size: 2 Mbyte
–– Timeout for completion: 40 s (requires min. 0.4 Mbit/s)
►► Downlink:
–– File size: 5 Mbyte
–– Timeout for completion: 40 s (requires min. 1.0 Mbit/s)
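The minimum rates in parentheses follow from the file sizes and the 40 s timeout (using
1 byte = 8 bit); this is just the arithmetic behind the stated values:

$$\frac{2\ \text{Mbyte} \times 8\ \text{bit/byte}}{40\ \text{s}} = 0.4\ \text{Mbit/s}, \qquad \frac{5\ \text{Mbyte} \times 8\ \text{bit/byte}}{40\ \text{s}} = 1.0\ \text{Mbit/s}$$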



4.1.2  Data transfer performance – HTTP DL/UL throughput
Data throughput is what users experience most of the time while using a data service. It
depends on many protocol factors such as the transmission scheme (slower schemes with
more redundancy are used at longer distances from the access point), packet retrans-
mission and packet size. The typical throughput is often lower due to traffic sharing inside
the same network or cell, interference or even limited fixed line capacity from the base
station onwards.

The total duration of an HTTP transfer test is set to 10 s with a connection lost timeout of
3 s and transfer duration of 7 s (see Table 15: HTTP DL test – multi-connection and Table
16: HTTP UL test – multi-connection). These parameters are the same for both upload
and download tests. The active transfer duration starts only after all configured TCP sock-
ets have been connected.

As an indicator of the average data throughput, the mean data rate (MDR) is calculated.
For an HTTP transfer test, the MDR is calculated as the sum of all transferred bytes during
the test’s active transfer period divided by the transfer time. It is calculated separately for
upload and download data.
Mean data rate per test = ∑ transferred bytes during the active transfer period / Transfer time

In the calculation, all tests with ErrorCode = 0 are considered. Tests where the TCP con-
nection could not be established for any or all sockets, tests for which the server is not re-
sponding (ErrorCode ≠ 0) or tests classified as system release are excluded.

HTTP DL/UL throughput average is the main score and quantifies the average transfer
rate in Mbit/s across a network or an applied selection of measurement data.
HTTP DL/UL throughput average = ∑ all mean data rates per test / Number of tests

HTTP DL/UL throughput 10th percentile is a KPI that measures the poor performance of a
network, i.e. the data rate below which the worst 10 % of transfers fall. It captures the
negative perception caused by very slow transfers, which are not well reflected in the
average throughput.

Example:
There are 100 DL tests, 85 with 120 Mbit/s and 15 with just 0.5 Mbit/s. The average MDR
looks good at 102 Mbit/s even though 15 % of the tests have a much lower value. The
10th percentile MDR reveals this with a score of 0.5 Mbit/s. Consequently, such a net-
work is rated lower than one delivering a consistent 102 Mbit/s in all tests.
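The numbers in this example can be reproduced with a short Python snippet (illustrative
only):

```python
import numpy as np

mdr = np.array([120.0] * 85 + [0.5] * 15)   # Mbit/s, the 100 DL tests from the example

print(round(np.mean(mdr), 1))        # 102.1 -> the average looks good
print(np.percentile(mdr, 10))        # 0.5   -> the 10th percentile exposes the slow 15 %
```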

HTTP DL/UL throughput 90th percentile is a KPI that evaluates the good performance of
the network, i.e. the best 10 % of transfers are above this value. The goal of this KPI is to
find the maximum performance of the network or the selected measurements. The 90th
percentile value is preferred over the absolute maximum (which is just a single test) and
is considered a more reliable KPI for showing the network’s full capacity.

4.1.3  Data transfer services – contribution and weighting
The individual contributors are rescaled on a 0 % to 100 % scale as described in section
3.2 Contributors transformation to a percentage scale.

For each contributor, the following formula is applied:


Contributor_Norm = 100 % × Weighting_Contr × (KPI value – Bad threshold) / (Good threshold – Bad threshold)

The HTTP data transfer performance contributes 25 % to the data services subscore.

Table 2: NPS HTTP data transfer contributors (Rohde & Schwarz implementation)

| Contributor | Bad threshold | Good threshold | Weighting in data services |
|---|---|---|---|
| HTTP UL/DL success ratio | 80 % | 100 % | 0.0550 |
| HTTP DL throughput average | 1.0 Mbit/s | 100.0 Mbit/s | 0.0350 |
| HTTP DL throughput 10th percentile | 1.0 Mbit/s | 40.0 Mbit/s | 0.0450 |
| HTTP DL throughput 90th percentile | 10.0 Mbit/s | 240.0 Mbit/s | 0.0175 |
| HTTP UL throughput average | 0.5 Mbit/s | 50.0 Mbit/s | 0.0350 |
| HTTP UL throughput 10th percentile | 0.5 Mbit/s | 30.0 Mbit/s | 0.0450 |
| HTTP UL throughput 90th percentile | 5.0 Mbit/s | 100.0 Mbit/s | 0.0175 |
| Sum | | | 0.25 |

4.2  Video streaming services


Video services consume the majority of data transferred in mobile networks and there-
fore have to be considered. YouTube is the most common service used for video stream-
ing. Besides the pure access times and perceived quality, these tests are very demand-
ing for the network, particularly in terms of ensuring continuous data delivery without
outages. They are therefore a valuable extension to the plain HTTP data transfer tests,
where a continuous transfer is neither required nor measurable.

In YouTube – and in all video services – there is a basic difference between live video and
video on demand (VoD). In the VoD case, the video is completely stored on the server
and is usually completely – or mostly – downloaded to the device; there are many tech-
niques, progressive download being the most common one. Live video is not available as
a file. It is sent almost in real time to the device, and in practice, in short portions of a few
seconds each. If VoD is used as the test case, the video is largely buffered on the phone
and outages in the connection can easily be bridged. Live video is much more sensitive
since an interruption in the data flow will lead to freezing after a short time. VoD is
the less sensitive test case and leads to average or higher video quality since there is less
freezing. Live video is more sensitive and reflects the continuity of the data flow provided
by the network.

Consideration of live video streams is best practice for network benchmarking today.



4.2.1  Typical video streaming service structure
Each YouTube streaming test can be divided into two phases: the video access phase and
the video playout phase. The video access phase starts with the video request and contin-
ues until the first picture is displayed. This time is measured by the KPI video setup time
or time to first picture (TTFP). Video setup time is the time interval from the point when
a request to play a video is made to the moment when the video starts to play. It mea-
sures the overall performance of the network in terms of accessibility to video streaming
services.

Fig. 4: Typical video streaming service structure
[Figure: timeline of a streaming test – video requested, DNS lookup, connect to server, first video packet arrives, prebuffering, start displaying (end of the access phase), followed by the video playout phase until the video finishes or displaying is stopped by the user]

After the playout starts, the perceived video quality is considered as the main contributor.
The perceived video quality is determined by compression artifacts, rescaling effects, low-
er frame rates and freezing (stalling) during the display. The perceived quality is measured
by ITU J.343.1 and combines all possible degradations into one video MOS on a common
scale from 1 to 5. ITU J.343.1 is especially recommended by ETSI TS 102250-2 for evalu-
ating mobile streaming services. The testing methodology for YouTube and other video
streaming services is described in ETSI TR 101578.

4.2.2  Video streaming services performance contributors


The contributors (KPIs) for video stream performance are:
►► Video success ratio
►► Video setup average
►► Video setup > 10 s ratio
►► Video MOS average
►► Video MOS 10th percentile

Video success ratio considers all tests that achieve the defined display time of the
video. These tests are classified as completed. The typical display time applied for live
YouTube streams is 45 s.
Video success ratio = Number of successfully completed tests / Number of attempts

Attempts include tests with the following states: completed, failed and dropped.
Video success ratio = Number of tests with status completed / Number of tests with status completed, failed or dropped

The status failed or dropped is defined by timeouts because, unlike for telephony, there is
no ongoing signaling information available. A video streaming test is considered failed
if no picture is displayed within a defined timeout (connection timeout). This timeout
therefore defines the maximum length of the video access phase. A timeout value of 30 s
is used. A video streaming test is considered dropped if 15 s of continuous freezing (video
pause) is observed. This stream lost timeout is considered the maximum time a viewer
is willing to wait for the video to resume.
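A minimal sketch of this classification logic, assuming per-test measurements of the time
to first picture (TTFP) and the longest continuous freeze are available (names and
structure are illustrative, not the product implementation):

```python
CONNECTION_TIMEOUT_S = 30.0   # maximum length of the video access phase
STREAM_LOST_TIMEOUT_S = 15.0  # maximum continuous freeze before the test is dropped

def classify_video_test(ttfp_s: float | None, max_freeze_s: float) -> str:
    """Classify a video streaming test as completed, failed or dropped."""
    if ttfp_s is None or ttfp_s > CONNECTION_TIMEOUT_S:
        return "failed"       # no first picture within the connection timeout
    if max_freeze_s >= STREAM_LOST_TIMEOUT_S:
        return "dropped"      # playout interrupted for too long
    return "completed"        # test reached the defined display time

results = [classify_video_test(3.2, 0.0),    # completed
           classify_video_test(None, 0.0),   # failed
           classify_video_test(5.0, 16.0)]   # dropped
video_success_ratio = 100.0 * results.count("completed") / len(results)   # ≈ 33.3 %
```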

Video setup average is the average value of all measured times to first picture (TTFP)
for all completed and dropped tests. It quantifies the average length of the video access
phase.

Video setup > 10 s ratio is the ratio of attempts where the TTFP is longer than 10 s. Such
long setup times are considered to have a negative impact on the user experience and
add to the perceived degradation.

Video MOS average is calculated for all successfully completed tests. It is defined as
the average of already averaged video MOS (ITU J.341) per test. It incorporates all visible
degradations during the video display into a MOS.

Video MOS 10th percentile is the threshold below which the lowest 10 % of video
MOS values fall. This KPI evaluates poor network performance in terms of video quality.
The percentile values focus on bad and very good performers. This KPI is calculated tak-
ing into account only completed tests.

4.2.3  Video streaming services – contribution and weighting


The individual contributors are rescaled on a 0 % to 100 % scale as described in section
3.2 Contributors transformation to a percentage scale.

For each contributor, the following formula is applied:


Contributor_Norm = 100 % × Weighting_Contr × (KPI value – Bad threshold) / (Good threshold – Bad threshold)

The performance of video streaming services contributes 22 % to the data services
subscore.

Table 3: NPS video streaming contributors (Rohde & Schwarz implementation)

| Contributor | Bad threshold | Good threshold | Weighting in data services |
|---|---|---|---|
| Video success ratio | 80.0 % | 100.0 % | 0.1276 |
| Video MOS average | 3.0 | 4.5 | 0.0363 |
| Video MOS 10th percentile | 2.0 | 4.0 | 0.0363 |
| Video setup average | 7.0 s | 2.0 s | 0.0099 |
| Video setup > 10 s ratio | 5.0 % | 0.0 % | 0.0099 |
| Sum | | | 0.22 |

4.3  HTTP browsing and social media


Web browsing and social media apps are heavily used by today’s subscribers. They con-
tribute much less to the amount of transferred data than video, but users spend consid-
erable time using such services and therefore their performance is essential for the per-
ceived network performance.

4.3.1  HTTP browsing


HTTP browsing tests access (public) websites. They can be static like the well-known
Kepler page or dynamic like typical webpages from commercial providers. Static means
pages where all content (text, pictures, videos, etc.) is fixed and stored on one server.
A dynamic page is a website where content changes regularly based on access and is
spread across many locations and servers. A typical dynamic website, e.g. a popular
newspaper, consists of several hundred items to download in individual connections from
(different) servers.



Compared to a plain (multithreaded) HTTP download from a dedicated server, the HTTP
browsing test includes problems in the wider core network and especially the content
delivery network (CDN), and best reflects the user's perception. The performance of HTTP
browsing not only depends on the performance of the mobile network, but also on indi-
vidual constellations of the CDN behind it. For a fair test, many different webpages have
to be tested to obtain a good mix of connections to external servers in the CDN.

The HTTP browsing tests considered in this scoring methodology should access a set of
different pages. This set has to be a mix of static and dynamic pages where the Alexa rat-
ing gives a valid indication of the most popular websites.

No matter what pages are used, all HTTP tests are considered equally by the contributors
(KPIs) for HTTP browsing performance:
►► Browsing success ratio
►► Browsing duration average
►► Browsing duration > 6 s ratio

Browsing success ratio is the ratio of data tests with status OK (ErrorCode = 0) to all
tests. The status OK is given if a website (including all items) is downloaded completely
and does not exceed the time limit (typically set to 15 s). The criterion for failing an HTTP
browsing test is ErrorCode ≠ 0 (status: test timeout, cancelled, etc.).
Browsing success ratio = Number of successful tests / (Number of successful tests + Number of failed tests)

Browsing duration average is a KPI that measures network performance in HTTP
browsing tests. It is the average of the download times for all successfully downloaded
pages. In an HTTP browsing test, multiple DNS lookups are performed, multiple items are
downloaded and typically redirections are required.

Unlike HTTP transfer, browsing is not performed in a controlled environment. The select-
ed live internet pages often result in different transfer times. A well-chosen set of different
pages minimizes this problem by averaging as explained above.

Browsing duration > 6 s ratio is a KPI that measures the percentage of webpages that
were downloaded in a time interval longer than 6 s and shorter than the 15 s test timeout.
It provides a further distinction between operators.

Social media app tests


Facebook and Dropbox are typical examples of social media. However, the principle can
be applied to other services, too.

To mimic a typical Facebook or Dropbox user, the test includes different typical sub-
sequent tasks or actions before leaving the site and finishing the test. A social media
session can involve actions such as opening the home page, opening several posts,
commenting on posts, liking posts and creating posts.

To take into account posting to social media in this scoring methodology, the test in-
cludes creating or uploading a post that includes transferring a 1 Mbyte media file.

Since most social media actions only transfer a minimal amount of data, the throughput
is not an important indicator. Instead, the main results of the test are the durations of the
individual actions and the entire session and the action success rates.

The test success ratio (task completed in a defined time) and the data transfer time for
uploading the 1 Mbyte file are used as contributors.

This test is similar to an HTTP transfer test. Therefore, it is not necessary to extract more
KPIs from this test. The critical part of this test – compared to plain HTTP transfer tests –
is the performance of the connection to the Facebook or Dropbox server and the potential
data rate restrictions imposed by the server/network in case of traffic.

Social media success ratio is the ratio of data tests with status OK (ErrorCode = 0) to
all tests. ErrorCode = 0 is assigned to all tests completed without exceeding the time limit
(defined timeout = 30 s). The criterion for failed tests is ErrorCode ≠ 0 (status: cancelled,
service unavailable; test result: initialization failed, test timeout, service unavailable, etc.).
Social media success ratio = Number of successful tests / (Number of successful tests + Number of failed tests)

Social media duration average measures the network (including the CDN) and server
performance in this application test. It differs from HTTP browsing tests in that only one
DNS request is sent and only a single object is uploaded to one location. It differs from
HTTP transfer upload in that the server is a third-party server and may, like the content
delivery network, apply its own data transfer restrictions.

This KPI is calculated for successfully completed tests only (ErrorCode = 0, status OK).

Social media duration > 15 s ratio is a KPI that measures the percentage of uploads
that were performed in a time interval longer than 15 s and shorter than the 30 s test
timeout. It contributes to more objective scoring by introducing a further distinction
between operators.

4.3.2  Browsing/social media – contribution and weighting


The individual contributors are rescaled on a 0 % to 100 % scale as described in section
3.2 Contributors transformation to a percentage scale.

For each contributor, the following formula is applied:


Contributor_Norm = 100 % × Weighting_Contr × (KPI value – Bad threshold) / (Good threshold – Bad threshold)

The performance of HTTP browsing contributes 38 % and the performance of the social
media applications contributes 15 % to the data services subscore.

Table 4: NPS HTTP browsing and social media contributors

| Contributor | Bad threshold | Good threshold | Weighting in data services |
|---|---|---|---|
| Browsing success ratio | 80.0 % | 100.0 % | 0.25333 |
| Browsing duration average | 6.0 s | 1.0 s | 0.10857 |
| Browsing duration > 6 s ratio | 15.0 % | 0.00 % | 0.01810 |
| Social media success ratio | 80.0 % | 100.0 % | 0.100005 |
| Social media duration average | 15.0 s | 3.0 s | 0.042855 |
| Social media duration > 15 s ratio | 5.00 % | 0.00 % | 0.00714 |
| Sum | | | 0.53 |



5  REGIONS AND FINAL AGGREGATION
CATEGORIZATION
If a drive or walk test for collecting field measurements is performed, the collected data
and its density for individual regions are defined by the measurement time spent in each
region. It is inefficient to adjust the measurement time to exactly reflect the number of
subscribers or the traffic in these regions.

Regional categorization of measurement campaigns in a postprocessing and aggregation
step allows customers to weight the measurement results according to the importance
of each region. This importance can be defined by the traffic in the individual regions, the
number of subscribers, the existence of service level agreements (SLA) with customers or
areas where the law or authorities request mobile coverage.

This regional categorization can be seen as an intermediate weighting layer where the
network performance score is calculated separately for each regional category, weighted
according to the importance of each region and then aggregated to a final score.

Typical examples of such categories are:


►► Big cities
►► Medium and small towns
►► Connecting roads
►► Highways
►► Rural areas
►► Trains and public transport
►► Popular areas (sightseeing, tourism)
►► Hotspots (stadiums, malls, airports, downtown areas)

Depending on the operators or the country, not all categories apply. In addition, the
score’s reliability depends on the available number of measurements: if there are only a
limited number of measurements, using fewer categories leads to more reliable scores.

There are advantages to applying this regional weighting as a separate layer during post-
processing. The first one is the flexible assignment of categories and weightings; different
views can be applied, e.g. one based on traffic and another based on the number of sub-
scribers. Another advantage is that in the case of nonoptimal performance, it is possible
to immediately drill down to the underperforming category or region.

A region or category can be assigned during data collection by assigning campaigns
to predefined categories and/or by reporting markers if regions change while driving.
Regions can also be defined in postprocessing by selecting them based on the date/time,
geolocation, technology or other criteria.

The architecture of the implementation allows different weightings on the lower layers
based on the regional category. This means that an individual weighting for telephony and
data can be applied for each region. The contribution of individual KPIs and their thresh-
olds can be adjusted according to regions to serve special customer needs. For public
use, it is recommended to use the same low-layer weightings for all regional categories
for transparency reasons.
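A sketch of this two-layer aggregation, assuming that each regional category already has
a score on the 0 % to 100 % scale and that the category weightings sum to 1 (the category
names, scores and weightings below are purely illustrative):

```python
# regional category -> (score in %, weighting of the category)
regions = {
    "big cities":       (92.4, 0.40),
    "connecting roads": (85.1, 0.25),
    "rural areas":      (78.8, 0.20),
    "hotspots":         (90.2, 0.15),
}

overall_score = sum(score * weight for score, weight in regions.values())
print(f"Network performance score: {overall_score:.1f} %")   # 87.5 %
```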

6  POINT SCORE APPLICATION
The point score is based on exactly the same KPIs and thresholds as the percentage
score. Instead of scaling the subscores (e.g. of telephony or cities) from 0 % to 100 % on
each level, the number of achievable points is specified and implicitly contains all later ap-
plied weightings.

Example:
Consider a 2 % call drop ratio in the road category (see section 3.2 Contributors transfor-
mation to a percentage scale).

On the percentage scale, this 2 % CDR would be rated as 80 % after applying the thresh-
olds and the linear equation formula. This 80 % would contribute 80 % × 0.375 = 30 % to
the telephony score.

If telephony is weighted by 40 % in the mixed telephony and data score, the CDR contrib-
utes 30 % × 0.4 = 12 % to the overall percentage score for the road category, which is the
next aggregation layer.

If the road category is weighted by 25 % in the overall network performance score, the
2 % CDR in the road category contributes 12 % × 0.25 = 3 % to the overall percentage.

There is a direct dependency between the percentage and the point scale at this final
level. A percentage score of e.g. 91.5 % is equivalent to 915 points. In the example, the
contribution of the 2 % CDR in the road category is equivalent to 30 points (3 %). The
maximum number of points to be reached by CDR in the road category would be 37.5
(0 % CDR ⇒ 100 % × 0.375 × 0.4 × 0.25 = 3.75 % ⇒ 37.5 points).
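The chain of weightings in this example can be generalized in a small helper function
(illustrative code; the signature is not taken from any Rohde & Schwarz tool):

```python
def kpi_points(value, bad, good, w_kpi, w_service, w_region, total_points=1000):
    """Contribution of one KPI to the overall point score (0 to 1000 points)."""
    norm = min(max((value - bad) / (good - bad), 0.0), 1.0)   # saturated linear mapping
    return total_points * norm * w_kpi * w_service * w_region

# 2 % CDR in the road category (weighted 25 %), telephony weighted 40 %:
print(kpi_points(2.0, 10.0, 0.0, 0.375, 0.4, 0.25))   # 30.0 points
# Maximum for the CDR in this category (0 % CDR):
print(kpi_points(0.0, 10.0, 0.0, 0.375, 0.4, 0.25))   # 37.5 points
```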

If individual category weightings are defined, the number of points the categories
contribute to the overall network performance score can be directly calculated and
presented.

The following tables present the contributions of all individual KPIs to the overall net-
work performance score based on the individual weightings for the measured regional
categories.

Table 5: NPS voice contributors category weighting

Maximum points per regional category weighting (weighting for telephony in the overall score: 40 %)

| Contributor | Weighting in telephony | 100 % | 50 % | 40 % | 30 % | 25 % | 20 % | 10 % |
|---|---|---|---|---|---|---|---|---|
| Call setup success ratio (CSSR) | 0.3125 | 125 | 62.5 | 50.0 | 37.5 | 31.25 | 25.0 | 12.50 |
| Call drop ratio (CDR) | 0.3750 | 150 | 75 | 60.0 | 45 | 37.5 | 30.0 | 15.00 |
| Call setup time (CST) average | 0.0625 | 25 | 12.5 | 10.0 | 7.5 | 6.25 | 5.0 | 2.50 |
| CST > 15 s ratio | 0.0875 | 35 | 17.5 | 14.0 | 10.5 | 8.75 | 7.0 | 3.50 |
| CST 10th percentile | 0.0375 | 15 | 7.5 | 6.0 | 4.5 | 3.75 | 3.0 | 1.50 |
| Voice MOS average | 0.0438 | 17.5 | 8.75 | 7.0 | 5.3 | 4.38 | 3.5 | 1.75 |
| Voice MOS < 1.6 ratio | 0.0562 | 22.5 | 11.25 | 9.0 | 6.7 | 5.62 | 4.5 | 2.25 |
| Voice MOS 90th percentile | 0.0250 | 10 | 5.0 | 4.0 | 3.0 | 2.5 | 2.0 | 1.00 |
| Sum telephony | 1.0 | 400 | 200 | 160 | 120 | 100 | 80 | 40 |



Using the previous example, the maximum number of points for the call drop ratio in the
road category is 37.5 (see the 25 % column in Table 5). The table shows that the maximum num-
ber of points to be reached with telephony services in total and across all categories is
400.

The following table shows the maximum scoring points for data services. In total, data
services can contribute up to 600 points; data transfer, for example, can contribute 150 points.

Table 6: NPS data contributors category weighting

Maximum points per regional category weighting (weighting for data services in the overall score: 60 %)

| Contributor | Weighting in data services | 100 % | 50 % | 40 % | 30 % | 25 % | 20 % | 10 % |
|---|---|---|---|---|---|---|---|---|
| Data transfer | | | | | | | | |
| HTTP UL/DL success ratio | 0.0550 | 33.0 | 16.5 | 13.2 | 9.9 | 8.3 | 6.6 | 3.30 |
| HTTP DL throughput average | 0.035 | 21.0 | 10.5 | 8.4 | 6.3 | 5.3 | 4.2 | 2.10 |
| HTTP DL throughput 10th percentile | 0.045 | 27.0 | 13.5 | 10.8 | 8.1 | 6.8 | 5.4 | 2.70 |
| HTTP DL throughput 90th percentile | 0.0175 | 10.5 | 5.3 | 4.2 | 3.2 | 2.6 | 2.1 | 1.05 |
| HTTP UL throughput average | 0.035 | 21.0 | 10.5 | 8.4 | 6.3 | 5.3 | 4.2 | 2.10 |
| HTTP UL throughput 10th percentile | 0.045 | 27.0 | 13.5 | 10.8 | 8.1 | 6.8 | 5.4 | 2.70 |
| HTTP UL throughput 90th percentile | 0.0175 | 10.5 | 5.3 | 4.2 | 3.2 | 2.6 | 2.1 | 1.05 |
| Sum data transfer | 0.25 | 150.0 | 75.0 | 60.0 | 45.0 | 37.5 | 30.0 | 15.0 |
| Video streaming | | | | | | | | |
| Video success ratio | 0.1276 | 76.6 | 38.3 | 30.6 | 23.0 | 19.1 | 15.3 | 7.66 |
| Video MOS average | 0.0363 | 21.8 | 10.9 | 8.7 | 6.5 | 5.4 | 4.4 | 2.18 |
| Video MOS 10th percentile | 0.0363 | 21.8 | 10.9 | 8.7 | 6.5 | 5.4 | 4.4 | 2.18 |
| Video setup average | 0.0099 | 5.9 | 3.0 | 2.4 | 1.8 | 1.5 | 1.2 | 0.59 |
| Video setup > 10 s ratio | 0.0099 | 5.9 | 3.0 | 2.4 | 1.8 | 1.5 | 1.2 | 0.59 |
| Sum video streaming | 0.22 | 132.0 | 66.0 | 52.8 | 39.6 | 33.0 | 26.4 | 13.2 |
| HTTP browsing and social media | | | | | | | | |
| Browsing success ratio | 0.25333 | 152.0 | 76.0 | 60.8 | 45.6 | 38.0 | 30.4 | 15.20 |
| Browsing duration average | 0.10857 | 65.1 | 32.6 | 26.1 | 19.5 | 16.3 | 13.0 | 6.51 |
| Browsing duration > 6 s ratio | 0.0181 | 10.9 | 5.4 | 4.3 | 3.3 | 2.7 | 2.2 | 1.09 |
| Social media success ratio | 0.100005 | 60.0 | 30.0 | 24.1 | 18.0 | 15.0 | 12.1 | 6.00 |
| Social media duration average | 0.042855 | 25.75 | 12.88 | 10.28 | 7.7 | 6.4 | 5.2 | 2.58 |
| Social media duration > 15 s ratio | 0.00714 | 4.25 | 2.12 | 1.62 | 1.30 | 1.10 | 0.8 | 0.43 |
| Sum browsing/social media | 0.53 | 318.0 | 159.0 | 127.2 | 95.4 | 79.5 | 63.6 | 31.80 |
| Sum data services | 1.0 | 600 | 300 | 240 | 180 | 150 | 120 | 60 |

The following tables give an indication of the improvement in points if a KPI is changed
across all regions by a certain margin. It is anticipated that the improvement will stay
between the bad and good limits. An improvement outside the limits will have no effect
due to saturation.

The following formula is applied:
Improvement in points = 1000 points × Weighting in overall × Change in KPI / (Good limit – Bad limit)
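As a quick check of the formula, take the CST average row of the following table (overall
weighting 0.0250, bad limit 12 s, good limit 4.5 s) and a change of –1 s:

$$1000\ \text{points} \times 0.0250 \times \frac{-1\ \text{s}}{4.5\ \text{s} - 12\ \text{s}} \approx 3.3\ \text{points}$$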

Table 7: NPS voice available improvement potential

| Contributor | Weighting in telephony | Weighting in overall | Maximum points | Bad limit | Good limit | Change in KPI | Improvement in points (in overall score) |
|---|---|---|---|---|---|---|---|
| Call setup success ratio (CSSR) | 0.3125 | 0.1250 | 125 | 85 % | 100 % | 0.1 % | 0.8 |
| Call drop ratio (CDR) | 0.3750 | 0.1500 | 150 | 10 % | 0 % | –0.1 % | 1.5 |
| Call setup time (CST) average | 0.0625 | 0.0250 | 25 | 12 s | 4.5 s | –1 s | 3.3 |
| CST > 15 s ratio | 0.0875 | 0.0350 | 35 | 3 % | 0 % | –1 % | 11.7 |
| CST 10th percentile | 0.0375 | 0.0150 | 15 | 8 s | 4 s | –1 s | 3.8 |
| Voice MOS average | 0.0438 | 0.0175 | 17.5 | 2 | 4.3 | 0.1 MOS | 0.8 |
| Voice MOS < 1.6 ratio | 0.0562 | 0.0225 | 22.5 | 10 % | 0 % | –1 % | 2.2 |
| Voice MOS 90th percentile | 0.0250 | 0.0100 | 10 | 4 | 4.75 | 0.1 MOS | 1.3 |

Table 8: NPS data available improvement potential

| Contributor | Weighting in data services | Weighting in overall | Maximum points | Bad limit | Good limit | Change in KPI | Improvement in points (in overall score) |
|---|---|---|---|---|---|---|---|
| HTTP UL/DL success ratio | 0.055 | 0.033 | 33 | 80 % | 100 % | 1.0 % | 1.65 |
| HTTP DL throughput average | 0.035 | 0.021 | 21 | 1 Mbit/s | 100 Mbit/s | 1 Mbit/s | 0.21 |
| HTTP DL throughput 10th percentile | 0.045 | 0.027 | 27 | 1 Mbit/s | 40 Mbit/s | 1 Mbit/s | 0.7 |
| HTTP DL throughput 90th percentile | 0.0175 | 0.011 | 10.5 | 10 Mbit/s | 240 Mbit/s | 1 Mbit/s | 0.05 |
| HTTP UL throughput average | 0.035 | 0.021 | 21 | 0.5 Mbit/s | 50 Mbit/s | 1 Mbit/s | 0.4 |
| HTTP UL throughput 10th percentile | 0.045 | 0.027 | 27 | 0.5 Mbit/s | 30 Mbit/s | 1 Mbit/s | 0.9 |
| HTTP UL throughput 90th percentile | 0.0175 | 0.011 | 10.5 | 5 Mbit/s | 100 Mbit/s | 1 Mbit/s | 0.2 |
| Video success ratio | 0.1276 | 0.077 | 76.6 | 80 % | 100 % | 1.0 % | 3.85 |
| Video MOS average | 0.0363 | 0.022 | 21.8 | 3 | 4.5 | 0.1 MOS | 1.5 |
| Video MOS 10th percentile | 0.0363 | 0.022 | 21.8 | 2 | 4 | 0.1 MOS | 1.1 |
| Video setup average | 0.0099 | 0.006 | 5.9 | 7.0 s | 2.0 s | –1 s | 1.2 |
| Video setup > 10 s ratio | 0.0099 | 0.006 | 5.9 | 5 % | 0 % | –1.0 % | 1.2 |
| Browsing success ratio | 0.25333 | 0.152 | 152 | 80 % | 100 % | 1.0 % | 7.6 |
| Browsing duration average | 0.10857 | 0.065 | 65.1 | 6.0 s | 1.0 s | –1 s | 13 |
| Browsing duration > 6 s ratio | 0.0181 | 0.011 | 10.9 | 15 % | 0 % | –1 % | 0.73 |
| Social media success ratio | 0.100005 | 0.060 | 60.0 | 80 % | 100 % | 1.0 % | 3.0 |
| Social media duration average | 0.042855 | 0.02575 | 25.75 | 15 s | 3 s | –1 s | 2.14 |
| Social media duration > 15 s ratio | 0.00714 | 0.004 | 4.25 | 5 % | 0 % | –1 % | 0.8 |



7  SUMMARY
This white paper reflects the status of the specification for a network performance score
in Rohde & Schwarz mobile network testing postprocessing. It is in line with the published
STQ standard ETSI TR 103 559 V1.1.1 (dated August 22, 2019). It will be updated to in-
clude new scientific findings, field experience and best practice approaches as well as
standardization activities.

From the analysis documented in this white paper, it can be seen that a robust and scal-
able methodology for quantifying a network’s performance on a local, regional or national
scale can be produced by taking into account all factors that affect the delivery of servic-
es to end users and applying appropriate weighting.

The value of such a methodology is that mobile network performance can be expressed
in a single, integrated metric that can be independently and transparently compared on
a national and international level. These results can be used to develop a program of net-
work improvement actions to achieve the desired level of performance.

When defining and executing a benchmarking campaign, careful consideration must be
given to the test sites, routes, test devices and performance test websites to ensure a
truly representative evaluation.

APPENDIX A

A-1  TESTING HTTP BROWSING IN BENCHMARKING CAMPAIGNS
A-1.1  Introduction
HTTP browsing is one of the most used web applications in mobile networks.
Consequently, testing HTTP browsing is a core metric for benchmarking quality of ex-
perience (QoE); it is weighted strongly in the overall network performance score that
describes a network's performance in general.

When testing HTTP browsing, several factors need to be considered, including suc-
cess ratio and access, download times (which depend heavily on the website structure)
and the connections to the content delivery network (CDN). Today’s popular websites
are highly dynamic, which means that content and advertisements change within short
periods of time. Therefore, multiple different websites are included in benchmarking cam-
paigns to diversify and average the sites’ individual behaviors.

Typically, five to eight different websites are used in one benchmarking campaign. They
are continuously observed and can be replaced if the applied rules are no longer met.

A-1.2  Selection criteria for websites


Testing HTTP browsing provides metrics for the user’s QoE. Therefore, highly popular
websites should be selected. Popularity can be determined on a global or on a local mar-
ket level. The public website Alexa regularly reports the 500 most popular global websites
(www.alexa.com/topsites) and country websites (www.alexa.com/topsites/countries).

Depending on the focus of a benchmarking campaign, only global or only local favorites
can be selected for testing. It is best practice to have a mix of global and local favorites.
Examples of global favorites are www.google.com and www.wikipedia.org; examples of
local favorites are local newspapers, newsfeeds, sports and common local services such
as banks.

Websites to be included in a measurement campaign are preferably among the top rank-
ing sites listed on Alexa. Technically, the websites should have an average complexity and
meet the following criteria; otherwise, one of the next highest ranking websites should be
chosen.

Criteria include:
►► Websites of services that are predominantly accessed via a dedicated app on a
smartphone should not be selected. For example, Facebook, YouTube and similar
websites/services are typically not accessed through a mobile browser and should not
be used for mobile benchmarking campaigns.
►► Websites with a very simple structure and a small amount of data should not be
selected. Examples include login and landing pages that offer a service or further
selection only. A user would consider such pages as service access points and not
browsing experiences.



►► Websites with a very large amount of data should not be selected if a large portion of
the downloaded data is unnecessary for readability or perceived completeness, for
example websites with an exceptional number of advertisements. Some KPIs for HTTP
browsing are based on the complete download of the website (download complete as
criterion). This criterion would then not match the user's experience since a user would
perceive the website as complete in a shorter period of time than when the technical
criterion download complete is reached.
►► Websites with embedded (long) videos should be avoided. Videos usually already start
to play after the first seconds of content are downloaded, but the remaining content
continues to be downloaded while the video is being played (progressive download).
A complete download would not reflect the user’s experience since the perceived
readiness is a much shorter time.
►► Websites with embedded live videos must be avoided. Live videos trigger an ongoing,
continuous download of video data. The website cannot be downloaded completely
and the technical criterion would always fail even when the website is perceived as
complete by a user.
►► Websites with perpetual, subsequent replacement of objects must be avoided, e.g. a
succession of pictures or videos. The website cannot be downloaded completely and
the criterion would fail even when the website is perceived as complete by a user.
►► No website should be selected that is a subpage/subsite of another already selected
website, i.e. two websites stored on the same server.
►► No website should be selected where the content is legally suspicious or contains
harmful, racist or sexist material.

Results obtained by using websites that do not meet the above criteria or whose content
changes so that they no longer meet the above criteria are invalidated during postpro-
cessing and used neither in the reporting nor to calculate the network performance score.

Since websites may have to be changed during a measurement campaign, it is not a giv-
en that the same websites will be used throughout the entire campaign. Regular screen-
ing and the potential replacement of websites ensure the use of websites that adhere to
the predefined selection criteria and that measurement results are as close as possible to
the user’s perception while browsing.

For each campaign, we decided to include two of the most popular websites such as
www.google.com and www.wikipedia.org. If available, a Google website with a local do-
main should be chosen. The remaining websites are chosen for their complexity to reflect
the distribution of a large population of websites.

Based on the statistical analysis presented in A-1.4 Websites in benchmarking campaigns –
statistical analysis, the following size and resource based selection criteria apply:
►► When selecting five websites
–– Two (global) websites with a size < 500 kbyte (google.*, wikipedia.org)
–– Two local websites with a size of 1000 kbyte to 2000 kbyte and > 40 resources
–– One local website with a size > 2500 kbyte but < 4000 kbyte and > 60 resources
►► When selecting eight websites
–– Two (global) websites with a size < 500 kbyte (google.*, wikipedia.org)
–– One local website with a size > 500 kbyte but < 1000 kbyte and > 20 resources
–– Three local websites with a size of 1000 kbyte to 2000 kbyte and > 40 resources
–– One local website with a size > 2500 kbyte but < 4000 kbyte and > 60 resources
–– One local website with a size > 4000 kbyte but < 6000 kbyte and > 60 resources

A couple of spare websites of appropriate size and complexity also have to be selected so
that websites can be changed during a campaign in case a chosen website no longer ful-
fills the defined criteria.
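As an illustration of how such size and resource criteria can be checked programmatically,
the sketch below filters a hypothetical candidate list against the five-website rule above
(all names and numbers are invented for the example):

```python
# website -> (page size in kbyte, number of resources); values are invented examples
candidates = {
    "google.com":    (350, 15),
    "wikipedia.org": (420, 22),
    "local_news_1":  (1500, 55),
    "local_news_2":  (1800, 62),
    "local_portal":  (3200, 75),
}

def pick(lo_kb: int, hi_kb: int, min_res: int = 0) -> list[str]:
    """Websites whose size and resource count match one selection criterion."""
    return [name for name, (size, res) in candidates.items()
            if lo_kb <= size <= hi_kb and res > min_res]

buckets = {
    "global, < 500 kbyte":                    pick(0, 500),
    "local, 1000-2000 kbyte, > 40 resources": pick(1000, 2000, 40),
    "local, 2500-4000 kbyte, > 60 resources": pick(2500, 4000, 60),
}
# A valid five-website set takes 2 + 2 + 1 websites from these three buckets.
```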

A-1.3  Best practice for selecting websites
There is an established procedure for the selection and continuous use of websites in
benchmarking campaigns:
►► Definition of test cases and measurement campaigns:
During the definition of a measurement campaign, a set of websites is selected and
proposed by Rohde & Schwarz MNT. These websites must fulfill the selection criteria as
listed above when accessed from the Rohde & Schwarz MNT office in Switzerland.
►► Precampaign local test measurements:
Since content and especially advertisements are delivered individually for the local
market and differently for individual MNOs, prior to starting the measurement
campaign the selected websites have to be checked in the local market to ensure that
they meet the above listed selection criteria by using subscriptions of local providers.
Measurement files have to be provided and the list has to be confirmed by
Rohde & Schwarz MNT experts. If confirmation fails, alternative websites have to be
proposed and tested.
►► In-campaign sanity checks:
Contents of websites are subject to regular change. It can easily happen that a website
suddenly no longer fulfills the selection criteria. The local test team is obliged to
monitor whether there are any severe changes in the measurement results.

A good practice is to monitor the success ratio for the selected websites. In environments with good RF coverage, the success ratio is > 90 % when downloading a website within the defined timeout (typically 15 s). If the success ratio falls below this threshold within an observation period of a few hours, the Rohde & Schwarz MNT experts have to be informed and – if possible – the website will need to be replaced by another one. In addition, measurement files have to be provided on short notice for offline analysis.
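
A minimal sketch of this in-campaign sanity check, assuming that per-test browsing results for the observation period are available as (website, success) pairs; the data layout and helper function are assumptions, while the 90 % threshold comes from the best practice above.

```python
# Sketch: flag websites whose browsing success ratio drops below the threshold
# within the observation period. Input format is a simplified assumption.

from collections import defaultdict

def flag_suspect_websites(results, threshold=0.90, min_tests=20):
    """results: iterable of (website, success: bool) within the observation period."""
    per_site = defaultdict(lambda: [0, 0])   # website -> [successes, total tests]
    for website, success in results:
        per_site[website][1] += 1
        if success:
            per_site[website][0] += 1
    suspects = []
    for website, (ok, total) in per_site.items():
        if total >= min_tests and ok / total < threshold:
            suspects.append((website, ok / total, total))
    return suspects  # candidates for replacement and offline analysis

# Example: siteB drops to 80 % success within the window and is flagged
results = ([("siteA.example", True)] * 50
           + [("siteB.example", True)] * 40
           + [("siteB.example", False)] * 10)
print(flag_suspect_websites(results))
```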

A-1.4  Websites in benchmarking campaigns – statistical analysis


For the previously described reasons, a certain number of websites have to be tested
in the benchmarking campaign in order to provide relevant data for assessing the user
experience.

Based on our statistical analysis of more than 200 Alexa websites in June and July 2018,
the following conclusions have been drawn:
►► The size distribution of the most popular webpages decreases exponentially. 73 % of
all webpages are smaller than 2 Mbyte and 83.5 % are smaller than 3 Mbyte. Most of
the small websites are simply landing pages of search engines, social networks, email
login pages, etc. These landing and login pages will not be used in benchmarking
campaigns in accordance with the criteria defined in A-1.2 Selection criteria for
websites.



Fig. 5: Distribution of website sizes in kbyte (histogram of relative occurrence in % over website size bins in kbyte)

►► A typical website consists of text content and several web resources, such as images, external JavaScript files, external style sheets and other related files. The website is correctly rendered in the web browser only if each of these resources is successfully downloaded and parsed. It is desirable to reduce the number of HTTP requests required to display the website completely, because several often time-consuming processes take place when each web resource is downloaded.

The average number of resources (HTTP requests) is approximately 74 across all tested websites. 75 % of all tested websites have fewer than 100 resources.
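
Statistics of this kind (share of websites below a size limit, share below a resource count, average number of resources) can be reproduced from a list of measured website properties. The sketch below uses placeholder values, so its output will not match the figures quoted above.

```python
# Sketch: cumulative statistics of the kind quoted above, computed from
# (size in kbyte, number of resources) pairs. Placeholder values only,
# not the original Alexa data set.

samples = [(350, 15), (420, 25), (900, 35), (1500, 55), (1800, 48),
           (2100, 70), (2600, 90), (3200, 75), (4100, 110), (5200, 130)]

def share(predicate, items):
    """Fraction of items for which the predicate holds."""
    return sum(1 for item in items if predicate(item)) / len(items)

print(f"smaller than 2 Mbyte: {share(lambda s: s[0] < 2000, samples):.1%}")
print(f"smaller than 3 Mbyte: {share(lambda s: s[0] < 3000, samples):.1%}")
print(f"fewer than 100 resources: {share(lambda s: s[1] < 100, samples):.1%}")
print(f"average number of resources: {sum(r for _, r in samples) / len(samples):.1f}")
```
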
Fig. 6: Distribution of resources in websites (histogram of relative occurrence in % over the number of resources per website)

Fig. 7: Distribution of images in websites (histogram of relative occurrence in % over the number of images per website)

A-2  MEASUREMENT SCENARIOS


The main test scenarios include:
►► Typical call (120 s call duration/180 s call window)
►► Data sessions (different scenarios)
►► Capacity test (voice and data sessions – stress test for all operators)

The methodology used and the measurement setup define the voice and data scenarios
to be followed during data collection to obtain meaningful results for:
►► Accessibility (e.g. call setup time (CST))
►► Retainability (e.g. call drop ratio (CDR))
►► Integrity (e.g. handover success ratio (HOSR))
►► Air interface logging
►► Voice call testing
►► Speech testing POLQA, narrowband and wideband
►► Video streaming (intrusive and non-intrusive) such as YouTube

Many other KPIs are registered during the drive test, and the collected data can be accessed via a convenient interface on the SmartAnalytics report platform.

The settings shown in the following sections are used in the Rohde & Schwarz measurement system.



A-2.1  Job structures

A-2.1.1  Call session


The call session consists of a sync action and a double ended call with speech quality
evaluation.

Fig. 8: Call session

A-2.1.2  Data session


For the data session, it is possible to choose between a simple setup with one session that contains all tests and five browsing URLs, or a more complex setup that interleaves three similar sessions, where the fast converging capacity test is only part of the first session and a larger set of browsing URLs can be tested. In the interleaved mode, session 1 is followed by session 2 and then session 3, after which the overall set is repeated (see the sketch after Table 9). There is a sync action for each subsession.

Table 9: Data session

Session 1 Potential session 2 Potential session 3


sync sync sync
preload ping 800 byte preload ping 800 byte preload ping 800 byte
ping ping ping
HTTP DL test – single connection (FDFS) HTTP DL test – single connection HTTP DL test – single connection
wait (3 s) wait (3 s) wait (3 s)
HTTP UL test – single connection HTTP UL test – single connection HTTP UL test – single connection
wait (3 s) wait (3 s) wait (3 s)
browsing page 1 small (global) browsing page 1 small (global) browsing page 1 small (global)
browsing page 2 small (global) browsing page 2 small (global) browsing page 2 small (global)
browsing page 3 medium (local) browsing page 6 medium (local) browsing page 9 medium (local)
browsing page 4 medium (local) browsing page 7 medium (local) browsing page 10 medium (local)
browsing page 5 large (local) browsing page 8 large (local) browsing page 11 large (local)
wait (3 s) wait (3 s) wait (3 s)
– video streaming live video streaming live
– wait (3 s) wait (3 s)
Dropbox transfer Dropbox transfer Dropbox transfer
wait (3 s)
HTTP UL test – multiple connections – –
wait (3 s) – –
HTTP DL test – multiple connections – –
wait (15 s)
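
A simplified model of the interleaved mode referenced above: session 1 carries the multi-connection capacity tests, sessions 2 and 3 carry live video streaming instead, and the three sessions are repeated in order. The action names are shortened versions of the rows in Table 9; the actual job structure is defined by the campaign template, not by this sketch.

```python
# Sketch: cycling through the three interleaved data sessions of Table 9.

from itertools import cycle

COMMON = ["sync", "preload ping", "ping", "HTTP DL single", "HTTP UL single",
          "browsing (2 small global, 2 medium local, 1 large local)",
          "Dropbox transfer"]

SESSIONS = {
    1: COMMON + ["HTTP UL multi", "HTTP DL multi"],   # capacity test only here
    2: COMMON + ["video streaming live"],
    3: COMMON + ["video streaming live"],
}

session_order = cycle([1, 2, 3])   # session 1 -> 2 -> 3, then repeat
for _ in range(6):
    session_id = next(session_order)
    print(f"session {session_id}: {len(SESSIONS[session_id])} actions")
```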

A-2.2  Voice test settings

A-2.2.1  Double ended call settings

Table 10: Double ended call settings

Parameter Comment Value


Algorithm ITU-T P.863 POLQA in its super-wideband mode as defined in section 3.1.3 POLQA (WB)
Reference file default, standardized voice sample English (US) P 501
Call duration default 120 s
Call direction default A to B
Sample direction default half-duplex
Sample frequency to ensure even A to B versus B to A sample rate 11 s
Save wave files to optimize measurement file size on low quality
Low quality threshold to optimize measurement file size 2.5
Exclude call setup time from call duration to ensure the requested call duration checked
Window duration default 180 s
Pause duration to avoid a second call in the same window 180 s
First call setup delay default 3 s
Maximum call setup time default 30 s
Guard time default 10 s
Call alerting timeout default 0 s
Call answer timeout default 0 s
No call setup time default 30 s
Pause after failed call to avoid a second call in the same window 180 s
Pause after maximum failed calls to avoid a second call in the same window 180 s
No connection timeout default 600 s
Maximum failed calls default 1

A-2.3  Data test settings

A-2.3.1  Ping testing

Table 11: Preload ping

Parameter Comment Value


Host server needs to be defined URL to be defined
Number of pings default 5
Interval default 1000 ms
Packet size large packets for preloading 800 byte
Timeout default 2 s
Minimum test duration default 0
Maximum test duration shorter than default to not waste too much time 15 s



Table 12: Ping

Parameter Comment Value


Host server needs to be defined URL to be defined
Number of pings many small fast pings in a row 25
Interval many small fast pings in a row 10 ms
Packet size default 32 byte
Timeout default 2 s
Minimum test duration default 0
Maximum test duration shorter than default to not waste too much time 30 s

A-2.3.2  Data transfer testing

Table 13: HTTP DL test – single connection

Parameter Comment Value


URL 5 Mbyte as defined in section 4.1.1 5 Mbyte file, URL to be defined
Username default
Password default
Minimum test duration so that the high number of tests under very good network conditions are not given an unfair advantage 5
Maximum test duration 40 s as defined in section 4.1.1 40 s
Maximum setup time default 30 s
Transfer type complete download as defined in section 4.1 full
Fixed duration no need if transfer type = full –
Force DNS lookup allows the calculation of DNS resolution time even if the host name is already cached checked

Table 14: HTTP UL test – single connection

Parameter Explanation Value


URL server needs to be defined URL to be defined
Username default
Password default
Upload size 2 Mbyte as defined in section 4.1.1 2
Upload size unit 2 Mbyte as defined in section 4.1.1 Mbyte
Minimum test duration so that the high number of tests under very good network conditions are not given an unfair advantage 5
Maximum test duration 40 s as defined in section 4.1.1 40 s
Maximum setup time default 30 s
Transfer type complete transfer as defined in section 4.1 full
Fixed duration no need if transfer type = full –
Transfer method default put
Force DNS lookup allows the calculation of DNS resolution time even if the host name is already cached checked

Table 15: HTTP DL test – multi-connection

Parameter Explanation Value


GET with 4 connections the maximum available network capacity can be reliably measured 4×
Transfer duration 7 s as defined in section 4.1.2 7 s
Minimum test duration so that the high number of failed tests under bad network conditions are not given an unfair disadvantage 5 s
Maximum test duration default 20 s
Force DNS lookup allows the calculation of DNS resolution time even if the host name is already cached checked
URL server needs to be defined; file must be big enough to avoid complete download before the transfer duration ends 1 Gbyte file, to be defined

Table 16: HTTP UL test – multi-connection

Parameter Explanation Value


PUT with 4 connections, the maximum available network capacity can be reliably measured 4×
Transfer duration 7 s as defined in section 4.1.2 7 s
Minimum test duration so that the high number of failed tests under bad network conditions are not given an unfair disadvantage 5 s
Maximum test duration default 20 s
Force DNS lookup allows the calculation of DNS resolution time even if the host name is already cached checked
Upload size file must be big enough to avoid complete upload before the transfer duration ends 1
Upload size unit file must be big enough to avoid complete upload before the transfer duration ends Gbyte
URL server needs to be defined to be defined



A-2.3.3  Video streaming service testing

Table 17: YouTube video service test

Parameter Explanation Value


URL YouTube might not be accessible or behaves differently in some countries needs to be defined based on current YouTube behavior; a live channel is recommended
Display duration 45 s display time as defined in section 4.2.2 45 s
Maximum test duration default 135 s
Stream lost timeout 15 s stream lost timeout as defined in section 4.2.2 15 s
Connection timeout 30 s timeout as defined in section 4.2.2 30 s
Run MOS evaluation Video-MOS J.343.1 as defined in section 4.2.2 checked
Force DNS lookup allows the calculation of DNS resolution time even if the host name is already cached checked

A-2.3.4  HTTP browsing and social media service testing

Table 18: HTTP browsing tests (top 5 to 15 websites)

Parameter Explanation Value


URL top 5 to 11 public websites to be defined
Minimum test duration so that the high number of tests under very good network conditions are not given an unfair advantage 5 s
Maximum test duration 15 s as defined in section 4.3.1 15 s
Force DNS lookup allows the calculation of DNS resolution time even if the host name is already cached checked

Table 19: Dropbox test

Parameter Explanation Value


Maximum test duration 30 s as defined in section 4.3.2 30 s

Actions 1 action as defined in section 4.3.2 upload file

Upload size 1
Upload size unit Mbyte
Remote file to be defined

A-2.4  Rohde & Schwarz measurement systems
SmartBenchmarker is a solution for drive test based quality of experience benchmarking
campaigns.

SmartBenchmarker is an innovative benchmarking and network testing tool that delivers precise and accurate measurement data for a vast set of network technologies. Robust and tough by design, it functions under and withstands the most extreme conditions.

SmartBenchmarker’s cutting-edge features enable strategic network investment decisions by allowing operators to fairly compare and assess service quality based on state-of-the-art standards, including ITU/ETSI.

SmartBenchmarker, successor of Diversity Benchmarker II, makes it possible to more efficiently test and compare voice and data services. Fully 4G/LTE MIMO capable and prepared for LTE-Advanced, it offers a wide range of integrated mobile devices and scanners.

Fig. 9: SmartBenchmarker

For more information, see:


https://www.rohde-schwarz.com/us/product/smartbenchmarker-productstart-
page_63493-528256.html



The following Rohde & Schwarz SwissQual tools are available for postprocessing:
►► Automation agent to automate data upload and data import into SQL servers
►► SmartAnalytics scenarios and NQDI for data validation

Fig. 10: SmartAnalytics scenarios

Fig. 11: NQDI for data validation

For more information, see:


https://www.mobile-network-testing.com/en/products/
data-management-analysis-and-reporting-netqual/nqdi-classic/

A-2.5  NPS campaign setup in SmartBenchmarker
To easily set up an NPS measurement campaign with all timings and other settings as recommended, Rohde & Schwarz offers an NPS campaign template. In SmartBenchmarker, in the campaigns section, choose the NPS icon in the top right corner.

Fig. 12: SmartBenchmarker, NPS campaign setup workflow

This will open the first step of the NPS campaign setup workflow: Basic info. Here, you can enter the campaign name and other basic properties as well as create a list of regional categories that will be part of the campaign.

Fig. 13: NPS campaign, basic info

In the second step, all project-dependent parts of the jobs can be configured. These are mainly the URLs for browsing and HTTP transfer. In the top section, it is possible to activate options to reduce data usage (see section A-2.1.2 Data session) and enable packet capture. Both settings are recommended.

Fig. 14: NPS campaign, job config



In the following steps, probes can be assigned to the data and voice jobs. After completing step 5, voice probe pairing, press the FINISH button. All needed campaigns and jobs are created automatically.

Fig. 15: NPS campaign, voice probe pairing

The newly created items appear in the job and campaign lists. It is possible to edit them
manually afterwards, but then they lose their status of being NPS compliant. A warning is
displayed in the edit screen.

Fig. 16: Update campaign

For detailed instructions, see the SmartBenchmarker manual.

A-2.6  Postprocessing
SmartAnalytics calculates the NPS for the overall network and can drill it down by use case, service, technology and other variables. The NPS is not only an integrated score for the overall network quality, it is also the ideal entry point into deeper analysis for network optimization because it makes it obvious at first glance where the most potential for improvement can be found.

SmartAnalytics offers many ways to display the NPS and the contributing KPIs, including the full list of KPIs and the remaining potential on a point scale.

Fig. 17: Network performance score

Fig. 18: Network performance score by category



Fig. 19: Voice KPIs

Fig. 20: Data KPIs

APPENDIX B

B-1  NETWORK PERFORMANCE SCORE (NPS V1.0)


B-1.1  KPIs, weightings and thresholds
In this section, the KPIs of the first version of the NPS (NPS V1.0) are given on both the percentage and the point scale.

The main differences between NPS V1.0 and NPS V1.1 are:
►► New thresholds for HTTP UL/DL throughput KPIs
►► Introduction of two new KPIs as defined by ETSI TR 103559 V1.1.1: activity duration for the HTTP browsing test and the social media test
►► TCP round trip time (RTT) in HTTP browsing test replaced by activity duration KPI,
keeping the same weighting
►► The weightings of the social media KPIs modified to be in line with ETSI TR 103559
V1.1.1

Table 20: NPS V1.0 voice contributors (Rohde & Schwarz implementation)

Bad threshold Good threshold Weighting in telephony
Call setup success ratio (CSSR) 85.00 % 100.00 % 0.3125
Call drop ratio (CDR) 10.00 % 0.00 % 0.3750
Call setup time (CST) average 12.00 4.50 0.0625
CST > 15 s ratio 3.00 % 0.00 % 0.0875
CST 10th percentile 8.00 4.00 0.0375
Voice MOS average 2.00 4.30 0.0438
Voice MOS < 1.6 ratio 10.00 % 0.00 % 0.0562
Voice MOS 90th percentile 4.00 4.75 0.0250
Sum 1.0
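
As an illustration of how the threshold and weighting columns can be read, the sketch below linearly maps a measured KPI value onto the range between its bad and good thresholds, clips it, and weights it within the telephony group. This is a simplified model of one plausible reading of Table 20, not the reference implementation; the measured values are invented.

```python
# Sketch: turning Table 20 entries into a weighted score between 0 and 1.

def kpi_score(value, bad, good):
    """Linear mapping of a KPI value onto 0..1 between the bad and good thresholds."""
    fraction = (value - bad) / (good - bad)
    return min(max(fraction, 0.0), 1.0)   # clip outside the threshold range

VOICE_KPIS = {
    # name: (bad threshold, good threshold, weighting in telephony), from Table 20
    "Call setup success ratio": (85.0, 100.0, 0.3125),
    "Call drop ratio":          (10.0,   0.0, 0.3750),
    "CST average":              (12.0,   4.5, 0.0625),
}

measured = {"Call setup success ratio": 98.5, "Call drop ratio": 1.2, "CST average": 6.0}

partial = sum(kpi_score(measured[name], bad, good) * weight
              for name, (bad, good, weight) in VOICE_KPIS.items())
print(f"weighted contribution of the three KPIs shown: {partial:.3f}")
```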

Table 21: NPS V1.0 HTTP data transfer contributors (Rohde & Schwarz implementation)

Bad threshold Good threshold Weighting in data services
HTTP UL/DL success ratio 80 % 100 % 0.0550
HTTP DL throughput average 2.0 Mbit/s 50.0 Mbit/s 0.0350
HTTP DL throughput 10th percentile 2.0 Mbit/s 10.0 Mbit/s 0.0450
HTTP DL throughput 90th percentile 10.0 Mbit/s 120.0 Mbit/s 0.0175
HTTP UL throughput average 0.5 Mbit/s 35.0 Mbit/s 0.0350
HTTP UL throughput 10th percentile 0.5 Mbit/s 4.0 Mbit/s 0.0450
HTTP UL throughput 90th percentile 5.0 Mbit/s 45 Mbit/s 0.0175
Sum 0.25



Table 22: NPS V1.0 video streaming contributors (Rohde & Schwarz implementation)

Bad threshold Good threshold Weighting in data services
Video success ratio 80.0 % 100.0 % 0.1276
Video MOS average 3.0 4.5 0.0363
Video MOS 10th percentile 2.0 4.0 0.0363
Video setup average 7.0 s 2.0 s 0.0099
Video setup > 10 s ratio 5.0 % 0.0 % 0.0099
Sum 0.22

Table 23: NPS V1.0 HTTP browsing and social media contributors (Rohde & Schwarz implementation)

Bad threshold Good threshold Weighting in data services
Browsing success ratio 80.0 % 100.0 % 0.25333
Browsing duration average 6.0 s 1.0 s 0.10857
Round trip time (RTT) 150 ms 30 ms 0.01810
Social media
Social media success ratio 80.0 % 100.0 % 0.09375
Social media duration average 15.0 s 3.0 s 0.05625
Sum 0.53

B-1.2  Maximum scoring points per category


The following tables present the contributions of all individual KPIs to the overall network
performance score V1.0 by considering the individual weightings for the regions where
they are measured.

Table 24: NPS V1.0 voice contributors category weighting

Weighting in telephony Max. points for regional category weighting in % (weighting for telephony in overall: 40 %)
100 50 40 30 25 20 10
Call setup success ratio (CSSR) 0.3125 125 62.5 50.0 37.5 31.25 25.0 12.50
Call drop ratio (CDR) 0.3750 150 75 60.0 45 37.5 30.0 15.00
Call setup time (CST) average 0.0625 25 12.5 10.0 7.5 6.25 5.0 2.50
CST > 15 s ratio 0.0875 35 17.5 14.0 10.5 8.75 7.0 3.50
CST 10th percentile 0.0375 15 7.5 6.0 4.5 3.75 3.0 1.50
Voice MOS average 0.0438 17.5 8.75 7.0 5.3 4.38 3.5 1.75
Voice MOS < 1.6 ratio 0.0562 22.5 11.25 9.0 6.7 5.62 4.5 2.25
Voice MOS 90th percentile 0.0250 10 5.0 4.0 3.0 2.5 2.0 1.00
Sum telephony 1.0 400 200 160 120 100 80 40

The following table shows the maximum scoring points for data services. In total, data services can contribute up to 600 points, of which, for example, data transfer can contribute up to 150 points.

Table 25: NPS V1.0 data contributors category weighting

Weighting in data services Max. points for regional category weighting in % (weighting for data services in overall 60 %)
100 50 40 30 25 20 10
Data transfer
HTTP UL/DL success ratio 0.0550 33.0 16.5 13.2 9.9 8.3 6.6 3.30
HTTP DL throughput average 0.035 21.0 10.5 8.4 6.3 5.3 4.2 2.10
HTTP DL throughput 10th percentile 0.045 27.0 13.5 10.8 8.1 6.8 5.4 2.70
HTTP DL throughput 90th percentile 0.0175 10.5 5.3 4.2 3.2 2.6 2.1 1.05
HTTP UL throughput average 0.035 21.0 10.5 8.4 6.3 5.3 4.2 2.10
HTTP UL throughput 10th percentile 0.045 27.0 13.5 10.8 8.1 6.8 5.4 2.70
HTTP UL throughput 90th percentile 0.0175 10.5 5.3 4.2 3.2 2.6 2.1 1.05
Sum data transfer 0.25 150.0 75.0 60.0 45.0 37.5 30.0 15.0
Video streaming
Video success ratio 0.1276 76.6 38.3 30.6 23.0 19.1 15.3 7.66
Video MOS average 0.0363 21.8 10.9 8.7 6.5 5.4 4.4 2.18
Video MOS 10th percentile 0.0363 21.8 10.9 8.7 6.5 5.4 4.4 2.18
Video setup average 0.0099 5.9 3.0 2.4 1.8 1.5 1.2 0.59
Video setup > 10 s ratio 0.0099 5.9 3.0 2.4 1.8 1.5 1.2 0.59
Sum video streaming 0.22 132.0 66.0 52.8 39.6 33.0 26.4 13.2
HTTP browsing and social media
Browsing success ratio 0.25333 152.0 76.0 60.8 45.6 38.0 30.4 15.20
Browsing duration average 0.10857 65.1 32.6 26.1 19.5 16.3 13.0 6.51
Round trip time (RTT) 0.0181 10.9 5.4 4.3 3.3 2.7 2.2 1.09
Social media
Success ratio 0.09375 56.3 28.1 22.5 16.9 14.1 11.3 5.63
Social media duration average 0.05625 33.8 16.9 13.5 10.1 8.4 6.8 3.38
Sum browsing/social media 0.53 318.0 159.0 127.2 95.4 79.5 63.6 31.80
Sum data services 1 600 300 240 180 150 120 60
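
The maximum-points columns in Tables 24 and 25 follow directly from the 1000 point scale, the overall weighting of the service group (40 % telephony, 60 % data services), the KPI weighting within its group and the regional category weighting. The sketch below reproduces two table entries and is a worked example only.

```python
# Sketch: reproducing "maximum points" entries of Tables 24 and 25.

TOTAL_POINTS = 1000

def max_points(weight_in_group, group_weight, regional_percent):
    """Maximum points a KPI can contribute in a region of the given weighting."""
    return TOTAL_POINTS * group_weight * weight_in_group * regional_percent / 100

# Browsing success ratio (0.25333 in data services) in a 100 % region -> 152 points
print(round(max_points(0.25333, 0.6, 100), 1))

# Call drop ratio (0.3750 in telephony) in a 30 % region -> 45 points
print(round(max_points(0.3750, 0.4, 30), 1))
```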

B-1.3  Improvement in points


The following tables give an indication of the improvement in points if a KPI is changed across all regions by a certain margin. It is assumed that the KPI stays between the bad and the good limit; an improvement outside these limits will have less or no effect due to saturation.

The following formula is applied:

Improvement in points = 1000 points × weighting in overall × (change in KPI) / (good limit – bad limit)
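
Applied to the call setup time (CST) average row of Table 26 (weighting in overall 0.0250, bad limit 12 s, good limit 4.5 s, change of −1 s), the formula reproduces the tabulated 3.3 points. The short sketch below is only a worked example of this arithmetic.

```python
# Sketch: the improvement formula above, evaluated for the CST average KPI.

def improvement_points(weighting_in_overall, change_in_kpi, good_limit, bad_limit):
    return 1000 * weighting_in_overall * change_in_kpi / (good_limit - bad_limit)

# CST improves by decreasing: the change and the limit difference are both
# negative, so the result is a positive gain of about 3.3 points, as tabulated.
print(round(improvement_points(0.0250, -1.0, 4.5, 12.0), 1))
```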



Table 26: NPS V1.0 voice available improvement potential

Weighting in telephony Weighting in overall Maximum points Bad limit Good limit Change KPI by Improvement in points (in overall score)
Call setup success ratio (CSSR) 0.3125 0.1250 125 85 % 100 % 0.1 % 0.8
Call drop ratio (CDR) 0.3750 0.1500 150 10 % 0 % –0.1 % 1.5
Call setup time (CST) average 0.0625 0.0250 25 12 s 4.5 s –1 s 3.3
CST > 15 s ratio 0.0875 0.0350 35 3 % 0 % –1 % 11.7
CST 10th percentile 0.0375 0.0150 15 8 s 4 s –1 s 3.8
Voice MOS average 0.0438 0.0175 17.5 2 4.3 0.1 MOS 0.8
Voice MOS < 1.6 ratio 0.0562 0.0225 22.5 10 % 0 % –1 % 2.2
Voice MOS 90th percentile 0.0250 0.0100 10 4 4.75 0.1 MOS 1.3

Table 27: NPS V1.0 data available improvement potential

Weighting in data services Weighting in overall Maximum points Bad limit Good limit Change KPI by Improvement in points (in overall score)
HTTP UL/DL success ratio 0.055 0.033 33 80 % 100 % 1.0 % 1.65
HTTP DL throughput average 0.035 0.021 21 2 Mbit/s 50 Mbit/s 1 Mbit/s 0.44
HTTP DL throughput 10th percentile 0.045 0.027 27 2 Mbit/s 10 Mbit/s 1 Mbit/s 3.375
HTTP DL throughput 90th percentile 0.0175 0.011 10.5 10 Mbit/s 120 Mbit/s 1 Mbit/s 0.1
HTTP UL throughput average 0.035 0.021 21 0.5 Mbit/s 35 Mbit/s 1 Mbit/s 0.60
HTTP UL throughput 10th percentile 0.045 0.027 27 0.5 Mbit/s 4 Mbit/s 1 Mbit/s 7.7
HTTP UL throughput 90th percentile 0.0175 0.011 10.5 5 Mbit/s 45 Mbit/s 1 Mbit/s 0.4
Video success ratio 0.1276 0.077 76.6 80 % 100 % 1.0 % 3.85
Video MOS average 0.0363 0.022 21.8 3 4.5 0.1 MOS 1.5
Video MOS 10th percentile 0.0363 0.022 21.8 2 4 0.1 MOS 1.1
Video setup average 0.0099 0.006 5.9 7.0 s 2.0 s –1 s 1.2
Video setup > 10 s ratio 0.0099 0.006 5.9 5 % 0 % –1.0 % 1.2
Browsing success ratio 0.25333 0.152 152 80 % 100 % 1.0 % 7.6
Browsing duration average 0.10857 0.065 65.1 6.0 s 1.0 s –1 s 13
Round trip time (RTT) 0.0181 0.011 10.9 150 ms 30 ms –10 ms 0.73
Social media success ratio 0.09375 0.056 56.3 80 % 100 % 1.0 % 2.8
Social media duration average 0.05625 0.034 33.8 15 s 3 s –1 s 2.8

B-1.4  Abbreviations

Abbreviation Designation
3GPP 3rd generation partnership project: globally recognized specifications and standards for GSM and eventually 3G network deployments
CA carrier aggregation: technology used in LTE to improve data throughput
DC dual carrier: technology used in WCDMA to improve data throughput
CSFB circuit switched fallback: technology that allows LTE devices to fall back to WCDMA networks in order to establish phone calls when VoLTE is not available. CSFB was specified in 3GPP Release 8 and requires a software upgrade of the operator’s core and radio network. CSFB is often seen as an interim solution for LTE operators; voice over LTE (VoLTE) is considered to be the long-term goal for the delivery of voice services on LTE networks.
ETSI European Telecommunications Standards Institute: independent, nonprofit standardization organization in the telecommunications industry with members across five continents
EU end user
GSM global system for mobile communications: normally operating in 900 MHz and 1800 MHz bands
IRAT inter-radio access technology: allows handover and cell change between different technologies, such as 3G and 2G depending on the covered area of each technology.
ITU-T International Telecommunication Union – Telecommunication Standardization Sector
KPI key performance indicator
KQI key quality indicator
MNO mobile network operator
MIMO multiple input, multiple output: antenna technology for wireless communications in which multiple antennas are used at both the transmitter and the destination receiver. The antennas at each end of the communications circuit are combined to minimize errors and optimize data speed.
NGN next generation network
NP network performance
NQDI network quality data investigator (NQDI Classic): postprocessing system that maximizes the potential of data collected by QualiPoc and diversity products for network and service optimization and benchmarking
OpCo operating company related to the customer group
OMG object management group
PM performance monitoring
QoS quality of service
RFS ready for service
RAT radio access technology: the underlying physical connection method for a wireless based communications network. Many UEs support several RATs in one device such as Bluetooth®, Wi-Fi, 2G, 3G or LTE.
SA service availability
SPoC single point of contact
SUA service unavailability
UE user equipment (usually mobile phones, smartphones or modems)
WCDMA wideband code division multiple access. ITU IMT-2000 family of 3G standards.

The Bluetooth® word mark and logos are registered trademarks owned by Bluetooth SIG, Inc. and any use of such marks by Rohde & Schwarz is under license.



CUSTOMER SUPPORT
Technical support – where and when you need it
For quick, expert help with any Rohde & Schwarz product, contact one of our customer support centers. A team of highly qualified engineers will provide telephone support and work with you to find a solution to your query regarding applications or any aspect of operating and programming Rohde & Schwarz equipment.

Up-to-date information and upgrades


To keep your instrument up to date and stay informed about new relevant application
notes, simply send an email to the customer support center stating your instrument and
your request. We will make sure you get the information you need.

Regional contact
Europe, Africa, Middle East
Phone +49 89 4129 12345
customersupport@rohde-schwarz.com

North America
Phone 1-888-TEST-RSA (1-888-837-8772)
customer.support@rsa.rohde-schwarz.com

Latin America
Phone +1-410-910-7988
customersupport.la@rohde-schwarz.com

Asia/Pacific
Phone +65 65 13 04 88
customersupport.asia@rohde-schwarz.com

China
Phone +86-800-810-8228 /+86-400-650-5896
customersupport.china@rohde-schwarz.com

Rohde & Schwarz
The Rohde & Schwarz electronics group offers innovative
solutions in the following business fields: test and mea-
surement, broadcast and media, secure communications,
cybersecurity, monitoring and network testing. Founded
more than 80 years ago, the independent company which
is headquartered in Munich, Germany, has an extensive
sales and service network with locations in more than
70 countries.

www.rohde-schwarz.com

Mobile network testing


The company’s broad and diverse product portfolio for mobile network testing addresses every test scenario in the network lifecycle – from base station installation to network acceptance and network benchmarking, from optimization and troubleshooting to interference hunting and spectrum analysis, from IP application awareness to QoS and QoE of voice, data, video and app-based services.

www.rohde-schwarz.com/mnt

Regional contact
►► Europe, Africa, Middle East | +49 89 4129 12345
customersupport@rohde-schwarz.com
►► North America | 1 888 TEST RSA (1 888 837 87 72)
customer.support@rsa.rohde-schwarz.com

►► Latin America | +1 410 910 79 88


customersupport.la@rohde-schwarz.com
►► Asia Pacific | +65 65 13 04 88
customersupport.asia@rohde-schwarz.com
►► China | +86 800 810 82 28 | +86 400 650 58 96
customersupport.china@rohde-schwarz.com

R&S® is a registered trademark of Rohde & Schwarz GmbH & Co. KG


Trade names are trademarks of the owners
3608.1775.52 01.00 PDP 1 en

PD 3608.1775.52 | Version 01.00 | December 2019 (jr)


White paper | Network Performance Score
Data without tolerance limits is not binding | Subject to change
© 2019 Rohde & Schwarz GmbH & Co. KG | 81671 Munich, Germany
