
Wi-Fi Device Testing

Black Book
Edition 10

Critical Testing for Wireless Client Devices

http://www.ixiacom.com/blackbook September 2015



Your feedback is welcome

Our goal in the preparation of this Black Book was to create high-value, high-quality content. Your
feedback is an important ingredient that will help guide our future books.

If you have any comments regarding how we could improve the quality of this book, or
suggestions for topics to be included in future Black Books, please contact us at
ProductMgmtBooklets@ixiacom.com.

Your feedback is greatly appreciated!

Copyright © 2015 Ixia. All rights reserved.

This publication may not be copied, in whole or in part, without Ixia’s consent.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the U.S. Government is
subject to the restrictions set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and
Computer Software clause at DFARS 252.227-7013 and FAR 52.227-19.

Ixia, the Ixia logo, and all Ixia brand names and product names in this document are either
trademarks or registered trademarks of Ixia in the United States and/or other countries. All other
trademarks belong to their respective owners. The information herein is furnished for informational
use only, is subject to change by Ixia without notice, and should not be construed as a
commitment by Ixia. Ixia assumes no responsibility or liability for any errors or inaccuracies
contained in this publication.


Table of Contents
Table of Contents........................................................................................................................ v
How to Read this Book.............................................................................................................. vii
Dear Reader ............................................................................................................................ viii
Introduction ................................................................................................................................ 1
Wi-Fi Device Issues ................................................................................................................... 2
Wi-Fi Device Test Challenges .................................................................................................... 4
Test Case 1: Throughput Benchmarking Test ...........................................................................11
Test Case 2: Performance Characterization over Packet Sizes ................................................29
Test Case 3: Performance Characterization over Distance – Rate vs Range ............................36
Test Case 4: Cost of Throughput Analysis ................................................................................47
Test Case 5: Roaming Validation ..............................................................................................55
Test Case 6: Security Test ........................................................................................................63
Test Case 7: Ecosystem Test ...................................................................................................71
Test Case 8: Radio Transmitter Quality ....................................................................................77
Test Case 9: Interoperability Testing – Performance Characterization over Distance................80
Contact Ixia ...............................................................................................................................91


How to Read this Book


The book is structured as several standalone sections that discuss test methodologies by type.
Every section starts by introducing the reader to relevant information from a technology and
testing perspective.

Each test case has the following organization structure:

Overview – Provides background information specific to the test case.

Objective – Describes the goal of the test.

Setup – An illustration of the test configuration highlighting the test ports, simulated elements, and other details.

Step-by-Step Instructions – Detailed configuration procedures using Ixia test equipment and applications.

Test Variables – A summary of the key test parameters that affect the test’s performance and scale. These can be modified to construct other tests.

Results Analysis – Provides the background useful for test result analysis, explaining the metrics and providing examples of expected results.

Troubleshooting and Diagnostics – Provides guidance on how to troubleshoot common issues.

Conclusions – Summarizes the result of the test.

Typographic Conventions
In this document, the following conventions are used to indicate items that are selected or typed
by you:

 Bold items are those that you select or click on. Bold is also used to indicate text found on
the current GUI screen.

 Italicized items are those that you type.


Dear Reader
Ixia’s Black Books include a number of IP and wireless test methodologies that will help you
become familiar with new technologies and the key testing issues associated with them.

The Black Books can be considered primers on technology and testing. They include test
methodologies that can be used to verify device and system functionality and performance. The
methodologies are universally applicable to any test equipment. Step-by-step instructions using
Ixia’s test platform and applications are used to demonstrate the test methodology.

This tenth edition of the Black Books includes twenty-five volumes covering key technologies and
test methodologies:

Volume 1 – Higher Speed Ethernet
Volume 2 – QoS Validation
Volume 3 – Advanced MPLS
Volume 4 – LTE Evolved Packet Core
Volume 5 – Application Delivery
Volume 6 – Voice over IP
Volume 7 – Converged Data Center
Volume 8 – Test Automation
Volume 9 – Converged Network Adapters
Volume 10 – Carrier Ethernet
Volume 11 – Ethernet Synchronization
Volume 12 – IPv6 Transition Technologies
Volume 13 – Video over IP
Volume 14 – Network Security
Volume 15 – MPLS-TP
Volume 16 – Ultra Low Latency (ULL) Testing
Volume 17 – Impairments
Volume 18 – LTE Access
Volume 19 – 802.11ac Wi-Fi Benchmarking
Volume 20 – SDN/OpenFlow
Volume 21 – Network Convergence Testing
Volume 22 – Testing Contact Centers
Volume 23 – Automotive Ethernet
Volume 24 – Audio Video Bridging
Volume 25 – Wi-Fi Client Device Testing

A soft copy of each of the chapters of the books and the associated test configurations are
available on Ixia’s Black Book website at http://www.ixiacom.com/blackbook. Registration is required
to access this section of the web site.

Ixia is committed to helping our customers’ networks perform at their highest level, so that end users
get the best application experience. We hope this Black Book series provides valuable insight into
the evolution of our industry, and helps customers deploy applications and network services—in
a physical, virtual, or hybrid network configuration.

Bethany Mayer, Ixia President and CEO


Critical Testing for Wireless Client Devices

Test Methodologies

Wi-Fi has become the de facto standard of communication for wireless local area networks. It’s fast,
flexible, and cheap, which has led to a proliferation of Wi-Fi devices in the market. While this
trend is overall a good thing, a large number of devices still exhibit issues
resulting in a poor experience for the end user.

This booklet aims to address this quality gap by proposing various test methodologies to verify
the performance, functionality and security resiliency of Wi-Fi client devices.


Introduction
Over 10 billion Wi-Fi-enabled devices have been shipped to date, and this number is projected to
grow at 10% in the years to come. Although a majority of Wi-Fi device shipments today are
made up of smart phones, tablets, e-readers, and laptops, there is a growing trend of Wi-Fi
becoming the access technology for several application-specific devices, including:

 Home – security cameras, set-top-boxes and media players, thermostats etc.

 Hospitals – patient monitors, infusion pumps, oxygen monitoring devices etc.

 Industry – machine diagnostics, sensors, smart grids etc.

For most of these devices, good Wi-Fi connectivity is critical to their functioning and to the quality of
experience delivered to the end user.

Figure 1: Wi-Fi Devices


Wi-Fi Device Issues


Wi-Fi devices have been around for over a decade now; however, their use-cases have come a
long way from their initial days. These new use-cases impose several requirements that the
technology was not originally designed to address. Wi-Fi technology has been evolving to keep
up with these new use-cases, but it’s not immune to issues. From a test perspective, it’s these
issues that need to be targeted comprehensively.

 Unlicensed frequencies – As Wi-Fi operates in unlicensed frequencies, devices typically
have to contend with interference issues coming from other devices operating in the same
frequencies. This can come from Bluetooth, microwave, DECT and other Wi-Fi devices.

 Lack of deployment standards – There are no standard deployment models. This leads
to a lot of variation between each deployment. Devices have to cope with these variations
when they operate.

 Legacy devices – Wi-Fi standards evolve pretty fast. This model is possible because the
standard imposes backwards compatibility on devices. Often Wi-Fi devices have to
operate in an ecosystem with several legacy devices. This causes a range of issues,
because legacy devices operate at different PHY rates and usually occupy the medium
for much longer periods.

 Roaming issues – Handover/Roaming is a relatively new concept in Wi-Fi. Originally Wi-Fi
was designed for fixed or nomadic wireless access; however, due to its recent popularity
in the enterprise, several new use-cases now require a seamless handover. Moreover, the
devices are completely responsible for planning and executing the handover. This creates
an extremely challenging scenario for devices.

 AP/Device Interoperability Issues – Interoperability issues exist in any technology; Wi-Fi
is not immune to them either.

 QoS – Wi-Fi networks mostly carried best effort data traffic in the beginning; however, with
its surge in popularity, several new use-cases require QoS. This is extremely challenging
in Wi-Fi networks because of a lack of centralized control.

 Radio Resource Management – Transmitting data and transmitting data efficiently are
different concepts. Efficient transmission is increasingly becoming a focal point as it leads
to overall better network utilization and minimal impact on the battery.

 Battery Performance – Most Wi-Fi devices are battery operated. Devices that don’t
optimize transmission algorithms will find that they spend too many cycles in transmissions
and re-transmissions. Optimizing transmission is key to better battery performance.

 Radar Compliance – Regulatory bodies impose restrictions around usage of certain
frequencies at certain times. This functionality is also referred to as Dynamic Frequency
Selection (DFS). Devices exhibit many issues when it comes to DFS, because it involves
dynamically switching from one channel to another.

 Antenna design – Antenna design is a complex subject: if not done right, device
performance can vary vastly, depending on its orientation or its interaction with the
environment.


Wi-Fi Device Test Challenges

Tools and equipment


Prior to Ixia’s foray into device testing, Wi-Fi device test labs had to use several test tools from
various vendors.

 Real APs – Access Points that the DUT can connect to

 Traffic generation tools – Tools to generate different types of traffic.

 Programmable attenuator – Controls attenuation to set different path loss.

 Channel emulator – Emulates different channel conditions like home, small/large office,
outdoor etc. The channel conditions vary in each of these deployment conditions.

 Packet capture/sniffing and protocol analyzer – Captures wireless traffic and helps
decode and analyze it.

 Signal and Spectrum analyzer – Tools to analyze the transmitted radio frequency signal.

This hodge-podge solution had several issues:

 Complexity – imagine working with 6 vendors!

 Usability – users have to train themselves in all of these different products with different
interfaces

 Predictability – accounting for failure is 6 times harder when dealing with different pieces

 Time – the time-cost to put together and maintain these different pieces

 Cost – and finally $$$

With Ixia’s WaveDevice solution, this hodge-podge approach should now be in the rear view.


Testing Methodology and Focus


Unlike cellular technologies, Wi-Fi has no test-focused specifications for devices.
This whole space is relatively new compared to the cellular world, where there have been several
generations of technologies released, and the market has evolved to standards-based test
approaches.

When testing Wi-Fi Devices, three main technology areas should be assessed:

PHY Layer
This is responsible for converting information into RF signals, and for transmitting them between
the source and the destination. To ensure successful communication, the transmitter and receiver
of the device need to perform well and also conform to all necessary specifications. On the
functional side, the transmitter needs to make sure that the RF energy transmitted is confined to
the allowed spectrum mask within the frequency band of operation. It also needs to ramp up and
ramp down power within spec to ensure any given transmission does not interfere with the next
one. Because ensuring a proper PHY layer proves critical for ensuring end-user quality of
experience on a Wi-Fi device, RF testing is an essential step. On the performance side, the
transmitter needs to have proper transmitter modulation accuracy at different modulation rates,
while the receiver needs to have a low Error Vector Magnitude (EVM) when receiving at various
data rates and power levels.

MAC Layer:
Unlike other wireless access technologies such as LTE, where the base station makes most of
the decisions for the device, the Wi-Fi protocol requires the device to make lots of decisions. The
device must decide:

 When to transmit

 How to contend and acquire the channel

 How to roam between access points

 How and when to rate adapt

 How and when to use power-save mechanisms

A typical Wi-Fi device is expected to be a lot smarter, and needs to implement several complex
algorithms at the MAC layer. Hence the MAC layer on a device needs to be tested thoroughly for
both functionality and performance.

On the functional side, the device needs to be tested to make sure it can roam, rate adapt, connect
to the AP using proper security mechanisms; and that it can only connect to APs with matching
credentials. On the performance side, it’s important to test that the device can optimize its
resources to maximize throughput, implement proper traffic classification under load, and
minimize battery consumption.


Application Layer
This is what the user sees and interacts with. Here, we need to look at several aspects of
application performance, including issues such as seamless LTE to Wi-Fi handover, delay-sensitive
Unified Communications (UC) applications, high-definition video streaming over Wi-Fi,
and the like. One very important point to note is that bad RF performance or bad MAC layer
performance will result in a bad user experience with an application. It’s important to thoroughly
test and harden a device at each one of these layers in isolation, and then test the system as a
whole.

Ixia recommends conducting testing in a staged approach. It is important to
baseline the performance of the Device under Test (DUT) under ideal conditions, and to find and
fix issues. Testing should then progress by introducing one variable at a time, moving from the
most deterministic to the most realistic test conditions.

Testing can be divided into three major stages:


Design/Development/QA Testing – Stage 1


During Stage 1 of testing – Dev Test and QA – configurability, repeatability, stress, and
automation are very important to achieving maximum test coverage in the minimum amount of
time. This stage of testing is best addressed by using a piece of test equipment that can simulate
a Wi-Fi access point.

The key tests that need to be run in Stage 1 include measuring radio performance, validating
device connectivity, measuring raw throughput, and ensuring protocol conformance. Stage 1 also
includes other MAC and PHY protocol-related aspects such as roaming, rate adaptation, power-save
protocols, and security. Extensive test coverage is critical during this stage, and covering
numerous test cases in a small amount of time is essential for an on-time release of a high-quality
product in a highly competitive market.

The WaveDevice Golden AP solution combines hardware and software. The hardware includes
an 802.11a/b/g/ac Golden AP emulator, full line-rate traffic generator, channel and distance
emulator, line-rate real-time protocol sniffer, and line-rate, real-time signal generator and
analyzer.

The test hardware connects to an RF enclosure using RF cables; the device under test is placed
inside the RF enclosure. The DUT runs simple endpoint software – called the “WaveAgent” – that
sits at the transport layer on the device. The WaveAgent receives commands from the Ixia
WaveDevice hardware to send/receive different types of traffic at different rates, making precise
performance measurements.

Interoperability Testing – Stage 2


Stage 2 of testing can begin during the post-production phase of the development life-cycle. The
device is now fully developed and must be tested as a system prior to release. During this stage,
it is important to subject the device to more realistic test conditions, including testing against real
APs and testing over the air.


In Stage 2, the client device under test has to be tested against the most common real APs to
make sure that the device can work well with those APs in the field. The key tests in Stage 2
include TCP/UDP/VOIP Upstream/Downstream performance at different frame sizes and rates,
and on different frequency channels with different settings on the AP and the client.

The client device under test is connected using RF cables to the AP through the IxVeriWave RF
Management Unit. Both the AP and the Client are placed in separate RF enclosures to create an
isolated, fully controllable, and repeatable test environment.

The testbed also includes the IxVeriWave WT-90/92 that houses two 802.11ac wireless cards
that can capture all the traffic between the AP and the Client on the wireless interface and perform
expert analysis to isolate and identify PHY/MAC-level issues with the AP and the Client device.
The WT-90/92 chassis also includes an Ethernet card that is connected to the Ethernet interface
of the AP and acts as one of the endpoints for traffic. The second endpoint is the WaveAgent
software installed on the device under test. The WaveDevice software application can create
TCP/UDP/VOIP traffic in both upstream and downstream directions and measure end-to-end Key
Performance Indicators (KPIs).

The RF Management Unit can also programmatically simulate distance between the AP and the
client and thus allows the software application to run performance over distance tests.

All the components of the testbed are nicely integrated and can be controlled from an easy-to-use
GUI. The user can run automated tests with various combinations of test settings and watch
the results in real-time.

Field Testing – Stage 3


Stage 3 occurs when the device is deployed in a live network. Here, it is important to characterize
the behavior of the device in real-world conditions, and to find and fix the small percentage of
issues that fell through the cracks during lab testing. IxVeriWave users can also validate that the
network into which the client device is being deployed has no major issues and can support the
reliable operation of the device.


In Stage 3 of testing, the goal is to evaluate the device’s performance in the field. Stage 1 and
Stage 2 testing provide an excellent platform for the test engineers to do everything possible in
the lab and ship an excellent product. However, there will almost certainly be some issues that
only show up in the field.

While testing Wi-Fi networks and devices in the field, the common misconception is: good RF
coverage means happy users. A device could be getting excellent signal strength at all the
locations on the floor but still have poor performance. There could be several reasons for this:
maybe the device is getting excellent signal in general but is not connected to the best available
AP; maybe the traffic load is not balanced across all the APs and thus resulting in low throughput
on the device; maybe all the neighboring devices are communicating only on the 2.4GHz band
even though there is ample free bandwidth available on the 5GHz band; or maybe it is not even
a wireless problem, and there is some policy/role based misconfiguration on the wired network
that is causing poor performance.

Ixia’s WaveDeploy test tool allows customers to run active site assessments from real devices
using the same WaveAgent software used in Stage 1 and Stage 2 testing. These assessments
measure the voice, video, and data performance of real devices at various locations on the
deployment floor under very real usage conditions.

From extensive testing conducted in the field over several sites, it becomes clear that the
traditional RSSI-based survey kind of testing is not sufficient. Coverage doesn’t mean capacity. It
is very important to:

 Measure application performance


 Use the real client device in the test
 Test in the actual deployment site.


Only then can the users find the issues in the field that were not found in the Stage 1 and Stage
2 lab testing.

Device Test Methodology – Summary


 Start with a standard solution offering such as a reference design from a silicon
manufacturer, or an embedded Wi-Fi module.
 Customize the generic solution to hit the performance, power, reliability, and physical
requirements of your client device.
 Exhaustively test the behaviors of the device to ensure that the Wi-Fi customizations
have not compromised functionality or performance. Identifying and addressing a
handful of issues at this stage can save a lot of angry phone calls and customer support
trips after deployment.
 Tune the design: test, then adjust, then test again. Continue until you reliably achieve
the behavior you need.
 Once the device is rock-solid on its own, ensure that it interoperates with the network
under all possible realistic environments while testing in the lab.
 Finally, test the end user’s network while performing a major rollout to ensure that the
user’s network can support a high-quality client transaction and that there are no site-specific
issues.


Test Case 1: Throughput Benchmarking Test

Overview
Devices come in various shapes and sizes, but one function common among all of them is the
ability to transmit, receive, and process traffic. The throughput benchmark test provides a concise
sketch of the overall performance of the device. It indirectly validates the radio HW, RF signal
chain, device driver, OS and application level performance, all in a single test.

In Wi-Fi, the overall throughput of a device can be impacted by several factors; some of them are
listed below:

 Transmit power

 Ecosystem traffic

 Packet sizes

 Frame Aggregation

 Number of spatial streams – SISO/MIMO

 TCP/UDP

To be able to make sense of the results, it’s essential to keep the test variables to a minimum and
under control during each trial. A test-bed like Ixia’s WaveDevice Golden AP gives users this
control while enabling them to execute various tests.

Objective
Benchmark the maximum Downstream & Upstream throughput of a given device.

Setup
The setup here is made up of an AP with traffic generation and control capabilities, a chamber to
isolate the RF environment from external RF sources, and a DUT with an agent that can be
controlled to generate data traffic.


Step-by-step Instructions
1. Launch IxVeriwave Golden AP WaveDevice. The workflow for configuring a test is outlined
in the left frame of the GUI – System (chassis and port assignment), Access Points (AP
configuration), Devices (Devices and Tests), Analysis (Results analysis). Please refer to
the user guide to familiarize yourself with the WaveDevice GUI.

2. Enter the IP address of the chassis that hosts the Golden AP card. Click on connect when
done.

3. Select the Golden AP card to be used for simulating the AP and click Reserve at the
bottom of the screen.

4. Set the Channel information for the simulated AP. Note: channel selection can have an
impact on AP configuration parameters like AP Type and Bandwidth.


5. Switch to Access Point configuration to configure the simulated AP. You can leave most
of the parameters in their default values.

 General Tab

o Port - <Make sure this matches what you’ve reserved>

o SSID - Blackbook_Exercise

o Default Tx Power - <leave as default to start with, adjust based on RSSI
feedback from device>

 Data and Beacon PHY rates tab

o This is the screen to set max. supported Data and Mgmt. PHY rates of the
simulated AP. The DUT used for this exercise is an Apple iPhone 6, which is a
SISO device that supports 256QAM. Therefore the AP will be configured to
support MCS 8 and 9.

o Note: Disable Antennas 2, 3 and 4 in the configuration screen, if they are not
connected to the device

o Under OFDM, leave the default settings as is, as it offers ultimate compatibility.
Also set the Beacon PHY Rate to 6 Mbps for maximum compatibility. The
Beacon PHY rate sets the PHY rate for management frames transmitted by
AP.


Note: Setting a low beacon PHY rate will impact the maximum throughput the device under
test can achieve. If you are sure your DUT supports higher management PHY rates, you
can override this setting.

o Under VHT Rates, set

 NSS 1 - MCS 0-9

 NSS 2 - Not Supported

 NSS 3 - Not Supported

 NSS 4 - Not Supported

 Advanced

o Leave all remaining parameters in their default values.

o Aggregation parameters will be discussed in more detail in a separate test


case.

6. Activate the AP and switch to the Devices page

Clicking Activate AP will begin beacon transmission from the AP. The beacons will be
transmitted at the Tx Power level set in the AP config screen (default value 15 dBm).

7. From the DUT, join the wireless network and start WaveAgent.


8. Select the DUT from the devices that show up in the summary screen

9. Pick the “GDPT” option from the Test Type drop-down menu and configure it with the following
parameters to drive maximum throughput. The General Data Plane Test is designed to
characterize the performance of a device by subjecting it to different types of traffic. GDPT
Test is an ideal test to benchmark throughput performance of a device under test.

 Traffic Type: UDP and TCP

 Traffic Direction: Downstream and Upstream

 Frame Size: Set to MTU value 1518, for best results

 Frame rate: Set 100% of the theoretical frame rate. The theoretical frame rate is derived
from several configuration parameters like channel bandwidth, max. data PHY rate,
guard interval, and aggregation settings. For an 80 MHz 802.11ac SISO device with support
for 256-QAM modulation (MCS 9) and short guard interval, the theoretical frame rate works out to
be 433.3 Mbps (a sketch of this calculation follows this list).

 Under Options, set the trial duration to 60 seconds. Trial duration can be increased for
long-duration and stability tests.
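The exact calculation used by WaveDevice is not reproduced here, but a first-order estimate follows from the standard 802.11ac VHT rate formula. The sketch below is plain Python rather than an Ixia API, with subcarrier counts and symbol times taken from the 802.11ac specification; it reproduces the 433.3 Mbps figure quoted above.

```python
# Sketch (plain Python, not an Ixia API) of the 802.11ac VHT data-rate formula:
#   rate = data_subcarriers * bits_per_subcarrier * coding_rate * spatial_streams / symbol_time
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}   # N_SD per channel width (MHz)

def vht_phy_rate_mbps(bw_mhz, nss, bits_per_subcarrier, coding_rate, short_gi=True):
    symbol_time_us = 3.6 if short_gi else 4.0              # 3.2 us symbol + 0.4/0.8 us guard interval
    return (DATA_SUBCARRIERS[bw_mhz] * bits_per_subcarrier * coding_rate * nss
            / symbol_time_us)

# 80 MHz, 1 spatial stream, MCS 9 = 256-QAM (8 bits/subcarrier) with 5/6 coding:
print(round(vht_phy_rate_mbps(80, nss=1, bits_per_subcarrier=8, coding_rate=5/6), 1))  # 433.3
```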


10. Start the test by clicking on the “Start Test” icon in the ribbon on top.

Result Analysis
When the test starts executing, monitoring stats will begin populating simultaneously. Monitoring stats
are retrieved from the IxVeriWave cards (RFA/WBA) as well as from the WaveAgent running on the DUT.
Retrieved stats are presented in the WaveDevice GUI in 3 categories:

 Flow Stats - stats measuring the active traffic flows

 Client Stats - stats pertaining to each client or DUT

 Port Stats - stats measured at the port, which include all clients and APs using the specific
IxVeriWave HW port

Stats are also stored away as CSV files on the hard disk. There is an analysis module called “View
Measurement” that can analyze results by correlating stats and presenting results as bar or line
graphs.

GDPT tests go by trials; for each trial, the set of key configuration parameters is highlighted as
the trial progresses. In this test there are 4 trials:

 UDP Downstream with packet size of 1518 bytes

 UDP Upstream with packet size of 1518 bytes

 TCP Downstream with packet size of 1518 bytes

 TCP Upstream with packet size of 1518 bytes

When each trial starts executing, the first set of stats to look at would be Offered Load and
Forwarding rate. But first, some background into the stats used here:


Statistics Terminology
Intended Load

Intended Load is the throughput intended to be generated. For tests like GDPT and RvR
it’s computed as a percent of theoretical PHY rate, and it is configurable in the test.

Theoretical frame rate

Theoretical frame rate is an estimate of the maximum PHY throughput achievable, based on
the current Golden AP configuration. It is derived from several configuration parameters
like channel bandwidth, maximum data PHY rate, guard interval, and a few more. This value
is an estimate, because the exact value cannot be determined without taking into account
clients and their behavior.

Offered load

Downstream
In the downstream direction, this stat represents the L4 traffic load generated at the
simulated distribution system. As the L4 traffic is generated in the same hardware as
simulated golden AP, this stat also represents the throughput load at L2 of AP. Moreover,
because of the way Wi-Fi MAC works, the system will limit the load value to traffic
successfully ACK’ed by the L2 of DUT. So offered load is the L4 traffic load generated at
the simulated AP that is also successfully ACK’ed at L2 of the DUT. The difference
between offered load and Intended load can be attributed to system overhead
(management frames, contention, retransmission etc) and receiver performance.

Upstream
In the upstream direction, this stat represents the L4 traffic generated by Waveagent. The
WaveAgent relies on the DUT’s operating system TCP/IP stack to send the traffic out. If for
whatever reason there is a bottleneck in the transmission path and the OS is unable to
keep up with the traffic generated by WaveAgent, it will be reflected in a lower offered load.

Forwarding rate

Downstream
In the downstream direction, this stat represents the effective traffic that reaches the
WaveAgent. If any traffic is dropped between L2 and L4 of the DUT, it will be reflected
in the delta between Forwarding Rate and Offered Load.

Upstream
In the upstream direction, this stat represents the traffic received by the simulated AP (L2).
Because the entire simulated AP sub system is in the same HW, there are no resulting
packet losses between layers of the simulated AP. Therefore, this can also be interpreted
as L4 traffic at the distribution system.
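To make the relationship between these statistics concrete, the small helper below (our own naming, not WaveDevice field names) derives approximate loss percentages from intended load, offered load, and forwarding rate. Note that the tool itself counts packet loss per packet, so its reported figures may differ slightly from a throughput-based ratio.

```python
# Illustrative only: how the per-trial statistics relate to one another.
def summarize_trial(intended_mbps, offered_mbps, forwarding_mbps):
    return {
        # Gap between intended and offered load: system overhead (management
        # frames, contention, retransmissions) and receiver performance.
        "intended_vs_offered_pct": 100.0 * (intended_mbps - offered_mbps) / intended_mbps,
        # Gap between offered load and forwarding rate: inter-layer drops on
        # the DUT (downstream) or between the DUT's L4 and the AP's L2 (upstream).
        "offered_vs_forwarded_pct": 100.0 * (offered_mbps - forwarding_mbps) / offered_mbps,
    }

# Round numbers in the ballpark of the downstream UDP trial analyzed below:
print(summarize_trial(intended_mbps=433.3, offered_mbps=376.0, forwarding_mbps=375.1))
```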


Trial 1 - UDP Downstream with packet size of 1518 bytes


To begin with, check the offered load and forwarding rate for the given trial. Also take note of
medium utilization, Failed ACK frames (L2 Frame error in DL) and FCS errors (L2 Frame errors
in UL).

Observation 1 - Throughput
Offered load and forwarding rate are pretty close – 375.983Mbps vs 375.087 Mbps


The difference between offered load and forwarding rate is generally attributed to packet loss
across the various interfaces of the stack. Note that L2 frame errors are all only in 1 direction.
That’s because traffic is only in downstream direction, and even though the client transmits some
management frames in upstream, it doesn’t result in any L2 errors.

Observation 2 – Packet loss


Packet loss and corresponding L2 Frame errors are low – 0.23% and 1.8% respectively. L2 Frame
errors can be related to DUT not being able to keep up with received traffic rates, processing
aggregated frames (AMPDUs), or MCS decoding error. Minimizing L2 frame error will improve
overall throughput performance.


Observation 3 – Aggregation
In the downstream direction the simulated AP will aggregate packets based on negotiations with
the DUT. Aggregating MPDUs (MAC layer protocol data units) results in overall higher throughput, as
there is less overhead in acquiring the medium for transmitting the same amount of data.

As we can see from the result below, aggregation performance was quite good; all Tx Packets
were aggregated in the max aggregation bucket, which is 32-64.

Overall, these results reflect very good performance.


Trial 2 - UDP Upstream with packet size of 1518 bytes


To begin with, check the offered load and forwarding rate for the given trial. Also, take note of
medium utilization, failed ACK frames (L2 frame error in DL), and FCS errors (L2 frame errors in
UL).

Observation 1 - Throughput
Once again, offered load and forwarding rate are pretty close – 326.578 Mbps vs 325.087 Mbps


The difference between offered load and forwarding rate is generally attributed to packet loss
across the various interfaces of the stack. Note that L2 frame errors are only in one direction;
that’s because traffic is only in the upstream direction, and even though the Golden AP transmits some
management frames in the downstream direction, they don’t result in any L2 errors.

Observation 2 – Packet loss


Packet loss and corresponding L2 frame errors are pretty low – 0.31% and 0% respectively. In
the upstream direction, L2 frame errors occur due to transmitter issues or low signal quality.
Packet loss is measured as the number of packets lost between transmitter and receiver; since L2
takes care of retransmission in case of errors, packet loss equals the packets lost between layers
of the DUT (L4-L2).


Observation 3 – Aggregation
In the upstream direction the DUT has to aggregate packets. Aggregating MPDUs (MAC layer
protocol data units) results in overall higher throughput, as there is less overhead in acquiring
the medium for transmitting the same amount of data.

As we can see from the result below, aggregation performance was OK, but it was not maximized.
Each AMPDU had roughly 32 MPDUs aggregated. Effective throughput could have been higher
if more MPDUs were aggregated. In real-life though, DUTs have to balance high throughput with
overheads associated with retransmission; from that angle, aggregating fewer MPDUs could be
seen as a balancing act to optimize throughput.

Overall, these results reflect very good performance.


Trial 3 - TCP Downstream with packet size of 1518 bytes


To begin with, check the offered load and forwarding rate for the given trial. Also take note of
medium utilization, failed ACK frames (L2 frame error in DL) and FCS errors (L2 frame errors in
UL).

Observation 1 - Throughput
Offered load and forwarding rate match - 216.365 Mbps. But the overall throughput is lower
compared to UDP. This is expected for TCP. Note that the L2 errors exist in both directions (unlike
UDP). This is because in TCP the ACKs coming back also take up resources.


Observation 2 – Packet loss


Packet loss and corresponding L2 frame errors are low – 0.23% and 1.8% respectively. L2 frame
errors can be related to DUT not being able to keep up with received traffic rates, processing
aggregated frames (AMPDUs), or MCS decoding error. Minimizing L2 frame error will improve
overall throughput performance.


Observation 3 – Aggregation
In the downstream direction the simulated AP will aggregate packets based on negotiations with
the DUT. Aggregating MPDUs (MAC layer protocol data units) results in overall higher throughput, as
there is less overhead in acquiring the medium for transmitting the same amount of data.

As we can see from the result below, aggregation performance was quite good; all Tx Packets
were aggregated in the max aggregation bucket, which is 32-64.

Overall, these results reflect very good performance.


Trial 4 - TCP Upstream with packet size of 1518 bytes


This trial is not analyzed here, as the same techniques described in prior trials can be applied to
analyze results from this trial.

Troubleshooting and Diagnostics


Symptom: Low throughput (DL)

Diagnosis:
1. Check for L2 frame errors; high L2 frame errors imply any one of the following:
    DUT unable to keep up with high traffic rates
    DUT framing issues with aggregated packets
    DUT unable to decode specific MCS at certain power levels (even though this is a L1
   issue, it is reported as a L2 frame error)
2. Check packet loss; packet loss can be a result of inter-layer processing errors

Comments: Assume testing is done in an RF-isolated chamber, so interference is not an issue.

Symptom: Low throughput (UL)

Diagnosis:
1. Check for L2 frame errors; high L2 frame errors imply any one of the following:
    DUT transmitter issue
    Tx power level below Golden AP’s receiver sensitivity
2. Check packet loss; packet loss can be a result of inter-layer processing errors
3. Check aggregation performance

Comments: Assume testing is done in an RF-isolated chamber, so interference is not an issue.

Test Variables
 Different packet sizes

 Device orientation (TRP/TIS)


Conclusion
Based on the analysis done above, we conclude that the maximum throughput for this device is:

 UDP

o Downstream – 375 Mbps

o Upstream – 325 Mbps

 TCP

o Downstream – 216 Mbps

By using the Golden AP and a conducted (cabled) setup, we were able to get consistent results with
regard to maximum throughput.


Test Case 2: Performance Characterization over Packet Sizes

Overview
Several wireless devices are designed for specific applications like medical devices, point of sale,
VoIP phones, etc. The traffic pattern in such devices is generally pretty deterministic, so these devices
need to be benchmarked with specific traffic profiles. General purpose devices (laptops, tablets,
etc.), on the other hand, deal with wide-ranging traffic patterns, and they too need to be
benchmarked with different traffic profiles. There have been many instances where device
performance deteriorates for specific packet sizes. It’s therefore important to characterize a
device’s performance with different traffic characteristics.

Objective
Characterize device performance over different packet sizes.

Setup

Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.

2. Pick the “GDPT” option from the Test Type drop-down menu and configure it with the following
parameters. The General Data Plane Test is designed to characterize the performance of
a device by subjecting it to different types of traffic. GDPT Test is an ideal test to
benchmark performance of a device under test.

 Traffic Type: UDP and TCP

 Traffic Direction: downstream and upstream

 Frame Size: 128, 256, 512, 1024, and 1518 bytes, for a full sweep of different
packet sizes


 Frame Rate: Set 100% of the theoretical frame rate. The theoretical frame rate is derived
from several configuration parameters like channel bandwidth, max. data PHY
rate, guard interval, and aggregation settings. For an 80 MHz 802.11ac SISO device with support
for 256-QAM modulation (MCS 9) and short guard interval, the theoretical frame rate works
out to be 433.3 Mbps.

Under Options, set the trial duration to 60 secs. Trial duration can be increased for long
duration and stability tests.

3. Start the test and begin monitoring statistics.

Result Analysis
Tests are broken into trials, which basically lock the configuration down during execution. For this
test case there are 10 trials. For the sake of brevity, we will focus our analysis on UDP traffic only.

Observation 1 – Offered Load


From the graph below, the first thing that becomes evident is that the offered load decreases with
decreasing frame size. Offered load represents the traffic successfully transferred between the L2
(MAC) of the AP and the DUT. Refer to the Statistics Terminology section in Test Case 1 for more
information on offered load.

The second point that becomes evident is that offered load is higher in downstream compared to
upstream.
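One simplified way to reason about this trend: if the link tops out at a roughly constant number of MPDUs per second (per-frame processing and medium-access overhead dominate), the achievable L4 load scales almost linearly with frame size. The sketch below is a back-of-the-envelope model with an arbitrary frame-rate ceiling, not a WaveDevice calculation.

```python
# Back-of-the-envelope model only: assume the link is limited to a roughly
# constant number of MPDUs per second rather than a constant bit rate.
def l4_load_mbps(frame_size_bytes, mpdus_per_second):
    return frame_size_bytes * 8 * mpdus_per_second / 1e6

MPDU_RATE = 30_000          # arbitrary illustrative ceiling, frames per second
for size in (128, 256, 512, 1024, 1518):
    print(size, "bytes ->", round(l4_load_mbps(size, MPDU_RATE), 1), "Mbps")
```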


Observation 2 – Forwarding rate


Overall forwarding rate also tracks offered load pretty closely with a small percentage of packet
loss, which we will analyze next.


Observation 3 – L2 Frame Errors and Packet loss


L2 frame errors – represent transmission issues between the L2 of the AP and the DUT. These errors
are an overhead for the system as they increase retries and slow down the overall system. Note:
L2 frame errors don’t directly result in packet loss, as the 802.11 stack takes care of re-
transmissions.

In the run below, it’s evident that L2 errors increase drastically – almost 40X in downstream – as
the frame size decreases.

Packet loss, on the other hand, is nominal. In the downstream direction, packet loss is highest for
the biggest frame size (1518 bytes), and in the upstream direction packet loss is pretty even across
the board.


Observation 4: Aggregation
Aggregation stats represent the number of MAC layer PDUs aggregated by the system. They are
presented in both upstream and downstream directions. From the graph below, it becomes
evident that in the upstream direction the DUT aggregates MPDUs differently for different packet
sizes. In the downstream direction, the simulated AP always aggregates at a constant rate of 64
MPDUs. This could be one of the reasons why the L2 frame errors are high in the downstream
direction, especially for smaller frame sizes.


Troubleshooting and Diagnostics


Symptom: Low throughput (DL)

Diagnosis:
1. Check for L2 frame errors; high L2 frame errors imply any one of the following:
    DUT unable to keep up with high traffic rates
    DUT framing issues with aggregated packets
    DUT unable to decode specific MCS at certain power levels (even though this is a L1
   issue, it is reported as a L2 frame error)

Comments: Assume testing is done in an RF-isolated chamber, so interference is not an issue.

Symptom: Low throughput (UL)

Diagnosis:
1. Check for L2 frame errors; high L2 frame errors imply any one of the following:
    DUT transmitter issue
    Tx power level below Golden AP’s receiver sensitivity
2. Check packet loss; packet loss can be a result of inter-layer processing errors
3. Check aggregation performance

Comments: Assume testing is done in an RF-isolated chamber, so interference is not an issue.


Test Variables
 Frame rates

 More packet sizes

 Longer duration tests

Conclusion
Based on the analysis done above, we conclude that for the given DUT:

 The bigger the frame size, the better the throughput performance

 The number of L2 frame errors is much lower for bigger frame sizes than for smaller frame
sizes. This translates to more overhead for smaller frame sizes

 Bigger frame sizes have a better ratio of forwarding rate to intended load, compared to smaller
frame sizes

 Overall performance is much better for bigger frame sizes


Test Case 3: Performance Characterization over Distance – Rate vs Range

Overview
Wi-Fi technology is very commonly used in residential and enterprise scenarios to carry delay-sensitive
and high-bandwidth real-time voice and video traffic. Since the devices connected to the
Access Point are wireless, they can be located at different distances from the AP. It’s important
to make sure that users get good quality of experience at different distances from the Access
Point.

Objective
Characterize the performance of the device over various distance profiles.

Setup

Step-by-step Instructions
1. Follow steps 1 - 5 of Test Case 1.

2. Switch to Access Point configuration to configure the simulated AP. You can leave most
of the parameters in their default values.

 General Tab

o Port - Select Golden AP port reserved in ports page.

o SSID - Blackbook_Exercise

o Default Tx Power - Set this value to default and adjust if required, depending
on feedback from device.


 Data and Beacon PHY rates tab

o This is the screen to set max. Supported Data and Mgmt. PHY rates of the
simulated AP. The DUT used for this exercise is a Nexus 6, which is a MIMO
2x2 device that supports 256QAM. Therefore the AP will be configured to
support Nss 1, Nss 2 with MCS 0 - 9.

o Disable Antennas 3 and 4 in the configuration screen, if they are not connected
to the device

o Under OFDM, leave the default settings as is, as these are the mandatory
supported rates according to the 802.11 specification. The user can change
these settings if needed.

Note: Setting a low beacon PHY rate will impact the maximum throughput the
device under test can achieve. If you are sure your DUT supports higher
management PHY rates, you can override this setting.

o Under VHT Rates, set

 NSS 1 - MCS 0-9

 NSS 2 - MCS 0-9

 NSS 3 - Not Supported

 NSS 4 - Not Supported


 Advanced

o Leave all remaining parameters in their default values.

o Aggregation parameters will be discussed in more detail in a separate test


case.

3. Activate the AP and switch to the Devices page

Clicking Activate AP will begin beacon transmission from the AP.

4. From the DUT, join the wireless network and start WaveAgent

5. Select the DUT from the devices that show up in the summary screen


Note: The RSSI value shown for each device is reported by the WaveAgent endpoint installed
on the DUT; the initial path loss can be estimated from the Tx Power configured on the Golden AP
and the RSSI value reported by WaveAgent: path loss = TxPower - RSSI

6. Pick the “Rate vs. Range Test” from the “Test Type” drop-down menu and configure the test with
the following parameters. The Rate vs. Range test is designed to characterize the device
receiver performance at different distances from the Access Point while locking the data
rate. This provides a fair idea about receiver performance for the given Modulation and
Coding Scheme/data rate. This test can be configured with multiple data rates so the user can
benchmark the receiver performance for different Modulation and Coding Schemes/data
rates at different distances in a single click.

The Golden AP uses Tx Power to simulate distance. The Tx power can be configured with a
minimum step size of 1 dB.
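To relate the Tx Power sweep to something physical, the programmed attenuation can be loosely mapped to an equivalent free-space distance using the Friis path-loss formula. The helper below is our own illustration rather than part of WaveDevice, and real environments add multipath and obstructions, so treat its output as an order-of-magnitude figure only.

```python
import math

# Our own helper, not a WaveDevice function: equivalent free-space distance for
# a total path loss (fixed setup loss plus programmed attenuation) at 5 GHz.
#   FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
def free_space_distance_m(path_loss_db, freq_mhz=5180.0):
    return 10 ** ((path_loss_db - 20 * math.log10(freq_mhz) + 27.55) / 20)

BASE_LOSS_DB = 44                      # example fixed cable/setup loss
for extra_db in range(0, 41, 10):      # additional programmed attenuation
    d = free_space_distance_m(BASE_LOSS_DB + extra_db)
    print(extra_db, "dB extra ->", round(d, 1), "m equivalent")
```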

RvR Configuration


Test Configuration

 Tx PHY. Rates: Nss 1 MCS 0, Nss 1 MCS 9, Nss2 MCS 3 and Nss2 MCS 6

Note: Ideally this test case should be run with all possible MCS values to fully
sweep and identify any issues. However, to keep the content relevant for this
Black Book, this test has been configured to target certain key MCS indexes.

 Tx Powers: 0 dBm to -40 dBm with step size -2 dBm

Note: The Tx Power value can be configured with a minimum 1 dB step size in the
range of +15 dBm to -50 dBm

 Traffic Type: UDP

 Traffic Direction: Downstream

 Frame Size: Set the Frame Size value to 1518

 Frame Rates: Set 50% of theoretical frame rate. Frame rate has been set as
50% of theoretical, to target reasonable throughput values.

7. Start the test by clicking on the “Start Test” icon in the ribbon on top

Result Analysis
Path loss in setup: It is always recommended to measure the path loss between the AP and the DUT
before measuring receiver performance. The initial estimated path loss can be computed from the
TxPower configured on the Golden AP and the RSSI value reported by WaveAgent. In this example, the
initial estimated path loss is 44 dB.
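A minimal sketch of that arithmetic follows; the RSSI value shown is illustrative (with the default 15 dBm Tx power, a reported RSSI of -29 dBm would produce the 44 dB estimate quoted above).

```python
# Illustrative arithmetic; the RSSI value below is hypothetical.
def estimated_path_loss_db(tx_power_dbm, rssi_dbm):
    return tx_power_dbm - rssi_dbm

print(estimated_path_loss_db(15, -29))   # 44 dB
```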

When the test starts executing, monitoring stats will begin populating simultaneously. Monitoring stats
are retrieved from the IxVeriWave cards (RFA/WBA3601) as well as from the WaveAgent running on the
DUT. Retrieved stats are presented in the WaveDevice GUI in 3 categories:

 Flow Stats - stats measuring the active traffic flows

 Client Stats - stats pertaining to each client or DUT

 Port Stats - stats measured at port which include all clients and APs using
the specific IxVeriWave HW port


Stats are also stored away as CSV files for each trial on the host PC. There is an analysis module
called “View Measurement” that can analyze results by correlating stats and presenting results
as bar or line graphs. Along with other UI graphs, trial results are also available in table format
with a great deal of information.

RvR tests go by trials; for each trial, the set of key configuration parameters is highlighted as the
trial progresses. In this test there are 84 trials. The number of trials is the number of Tx PHY Rates
times the number of Tx Powers times the number of frame rates.
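For reference, enumerating that trial matrix confirms the count; the labels below are ours, not the GUI's.

```python
from itertools import product

# Our own enumeration of the configured trial matrix:
# 4 Tx PHY rates x 21 Tx power steps x 1 frame rate = 84 trials.
tx_phy_rates = ["NSS1-MCS0", "NSS1-MCS9", "NSS2-MCS3", "NSS2-MCS6"]
tx_powers_dbm = list(range(0, -41, -2))     # 0, -2, ..., -40 dBm (21 values)
frame_rates = [0.5]                          # 50% of theoretical frame rate

trials = list(product(tx_phy_rates, tx_powers_dbm, frame_rates))
print(len(trials))                           # 84
```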

For the sake of brevity, we will pick some key trials for analysis. This should be sufficient to give an
idea of how to go about analyzing results from the RvR test.

When each trial starts executing, the first set of stats to look at would be Offered Load and
Forwarding Rate, L2 Frame Errors and Medium Utilization.

The following chart shows the typical receiver sensitivity for 802.11ac modulation and coding schemes
with channel widths of 20/40/80 and 160 MHz. Most receivers today exceed these performance
metrics quite comfortably.

If the DUT does not have any specifications on its receiver sensitivity, this reference chart
should provide some guidance on how to evaluate the device.
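As an additional reference point, the minimum input sensitivity levels required by the IEEE 802.11ac specification can be expressed as below. These are spec minimums rather than the typical device figures discussed above, so verify them against the standard before using them as pass/fail limits; real receivers are usually several dB better.

```python
# Reference only: 802.11ac minimum input sensitivity (dBm) for a 20 MHz channel,
# with +3 dB per bandwidth doubling, per the IEEE specification. Verify against
# the standard before using as pass/fail limits.
MIN_SENSITIVITY_20MHZ_DBM = {0: -82, 1: -79, 2: -77, 3: -74, 4: -70,
                             5: -66, 6: -65, 7: -64, 8: -59, 9: -57}

def min_sensitivity_dbm(mcs, bw_mhz):
    doublings = {20: 0, 40: 1, 80: 2, 160: 3}[bw_mhz]
    return MIN_SENSITIVITY_20MHZ_DBM[mcs] + 3 * doublings

print(min_sensitivity_dbm(9, 80))   # -51 dBm for MCS 9 at 80 MHz
```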

Observation 1 – Throughput Analysis


Note: Please refer to Test Case 1 - Statistics Terminology for more info on terms used here.

Use View Measurement module to plot Offered Load and Forwarding Rate graphs over
distance (represented as decreasing TxPower)

 The first observation is that Offered Load maps closely to Intended Load. This implies the
DUT is receiving the target throughput from an L2 perspective.


 Second, the DUT’s L2 performance at NSS-2-MCS-6 is slightly better than at NSS-1-MCS-9; this
matches our expectation.

 The DUT exhibits the expected waterfall curve when it comes to L2 performance.


 Bingo – key issue noticed. Forwarding rate drops close to 0 for NSS-2-MCS-6. Offered
load for the same modulation was 250 Mbps; this implies that at L2 of the DUT the success
rate was close to 100%. However, between L2 and L4 of the DUT, there was over 99%
packet loss. These packets will not be retransmitted, as the test was using UDP packets.
This translates to poor Quality of Experience for the user.

 At -16dBm Tx power, there is again a dip in Forwarding Rate, for NSS-2-MCS-6.

 NSS 1 MCS 9 and NSS 2 MCS 3 also show drops in forwarding rate at certain Tx
powers, which shoot back up at adjacent lower power values.


 Finally, when comparing results with packet loss graph, it’s once again clear there is an
issue for NSS-2-MCS-6 at certain power levels.

Observation 2 - L2 Frame Errors


L2 Frame errors are related to DUT not being able to keep up with received traffic rates,
processing aggregated frames (AMPDUs), or MCS decoding error at certain Tx power.

 L2 errors are yo-yo’ing up and down for a range of power levels, after which they go all
the way up. This behavior clearly demonstrates that the DUT is not able to
decode/acknowledge frames at specific combinations of Tx Power and Tx PHY rate. This behavior will
result in more Layer 2 retries, and will ultimately increase the cost of throughput by using
additional air time. Since Wi-Fi is a shared medium, this will not only impact the device’s
performance but also overall Wi-Fi network performance.

Observation 3 – Jitter
Increased Path Loss may result in high jitter values. The high variation in jitter across different
power levels can cause degraded Voice/Video quality at different distances from the Access
Point.

 We observed high jitter values with Nss 1 MCS 0, which is the lowest Tx PHY Rate configured,
irrespective of Tx Power

 Higher Tx PHY rates resulted in low jitter values, which shows that the device is receiving
a continuous stream.
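The exact jitter formula WaveAgent uses is not spelled out here; a common definition for this kind of metric is RFC 3550-style smoothed inter-arrival jitter, sketched below purely to make the metric concrete.

```python
# Generic RFC 3550-style smoothed inter-arrival jitter, for illustration only;
# the exact formula WaveAgent uses may differ.
def interarrival_jitter_ms(transit_times_ms):
    jitter = 0.0
    for prev, curr in zip(transit_times_ms, transit_times_ms[1:]):
        jitter += (abs(curr - prev) - jitter) / 16.0
    return jitter

# transit time = receive timestamp - send timestamp, per packet (ms)
print(round(interarrival_jitter_ms([10.0, 12.5, 9.8, 15.2, 10.1]), 3))
```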


Troubleshooting and Diagnostics


Symptom: Low Forwarding Rate

Diagnosis:
1. Check for L2 frame errors; high L2 frame errors imply any one of the following:
    DUT unable to keep up with high traffic rates
    DUT framing issues with aggregated packets
    DUT unable to decode specific MCS at certain power levels (even though this is a L1
   issue, it is reported as a L2 frame error)
2. Check packet loss; packet loss can be a result of inter-layer processing errors

Comments: Assume testing is done in an RF-isolated chamber, so interference is not an issue.

Test Variables
 Modulation and Coding Scheme/Tx PHY Rates

 Tx Power

 Frame Rates


Conclusion
Based on the analysis done above, we can conclude that the DUT clearly exhibited receiver
issues at some modulation rates and power levels. This calls for deeper analysis to understand
and remediate the issues.

The forwarding rate of a device at different distances can be heavily influenced by several
factors:

 Modulation and Coding Scheme/Tx PHY Rate

 Path Loss between AP and Client

 Device receiver sensitivity

 Device ability to process the frames at receive data rate

There can be other factors as well, but assuming the test bed setup is isolated and left in optimum
conditions, the above factors are key influencers.


Test Case 4: Cost of Throughput Analysis

Overview
Wi-Fi devices operate in a shared access medium, and they have to co-operate and co-exist with
several other devices. Wi-Fi, like Ethernet, uses a distributed access scheme with only a small
difference: it uses a CSMA/CA scheme (carrier sense multiple access / collision avoidance)
to control access to the medium. However, collisions still occur. Moreover, there are several other
challenges that the MAC layer has to deal with that add to the overhead.

Cost of throughput is a lesser known but very important metric that represents the performance
of the device in terms of overhead to the DUT, as well as to the overall system. In simple terms,
cost of throughput is a reflection of the proportional use of medium – medium utilization – to
transfer a given amount of data. There will always be some cost associated with throughput: the
aim is to minimize it.

Being a shared medium, a higher cost of throughput implies a burden for all devices using the
medium.
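
To make the metric concrete, the sketch below estimates the fraction of air time consumed to
deliver a given goodput at a given PHY rate. The per-frame overhead and retry ratio are
illustrative assumptions, not WaveDevice internals:

    # Approximate % of air time used to carry a given goodput, including an
    # assumed fixed per-frame overhead (preamble, MAC header, SIFS/ACK,
    # contention) and an assumed retry ratio.
    def medium_utilization_pct(goodput_mbps, phy_rate_mbps, frame_bytes=1518,
                               per_frame_overhead_us=50.0, retry_ratio=0.0):
        frames_per_sec = (goodput_mbps * 1e6) / (frame_bytes * 8)
        data_time_us = (frame_bytes * 8) / phy_rate_mbps        # payload air time per frame
        per_frame_us = (data_time_us + per_frame_overhead_us) * (1 + retry_ratio)
        return frames_per_sec * per_frame_us / 1e6 * 100

    # Example: 50 Mbps of 1518-byte UDP at two hypothetical PHY rates
    for phy_rate in (130, 390):
        print(phy_rate, "Mbps PHY ->", round(medium_utilization_pct(50, phy_rate), 1), "% utilization")

The higher the PHY rate (and the lower the retry ratio), the smaller the slice of the medium needed
for the same 50 Mbps of goodput, which is exactly what this test case sets out to verify.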

Objective
Validate that cost of throughput improves with increasing modulation rate.

Setup

Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.

2. Pick the “Simple” test for this exercise. It supports locking down Tx Power, MCS and
Data rate for a given trial.

3. Configure and collect results for the following 4 trials:

 Trial 1 – MCS=3, Tx Power=5, Data rate=50Mbps


 Trial 2 – MCS=5, Tx Power=5, Data rate=50Mbps


 Trial 3 – MCS=8, Tx Power=5, Data rate=50Mbps

 Trial 4 – MCS=9, Tx Power=5, Data rate=50Mbps


4. The Tx Power setting needs to correspond to an optimum RSSI. In this case, Tx Power = 5
results in an RSSI of 30 dBm at the device.

Result Analysis
Observation 1 – MCS 3 @ 50Mbps
As the test execution starts, begin monitoring real-time stats.

First make sure the forwarding rate matches the target throughput

 Forwarding rate: 49.997 Mbps

Next, check the Tx Flow Medium Utilization

 Tx Flow Medium Utilization %: 79.4

This represents the amount of resources taken up by the DUT to transmit 50 Mbps successfully.
It includes retransmissions; as the retransmission rate is pretty low in this case, the medium
utilization is mostly made up of the cost of transmitting 50 Mbps.

 Tx Failed ACK Frame rate (pps): 34

Observation 2 – MCS 5 @ 50Mbps


Same as above. Note down the Forwarding Rate, Medium Utilization and Tx Failed ACK Frame
rate

 Forwarding rate: 50 Mbps


 Tx Flow Medium Utilization %: 62.6

 Tx Failed ACK Frame rate (pps): 58

Observation 3 – MCS 8 @ 50Mbps


Same as above, note down the Forwarding Rate, Medium Utilization and Tx Failed ACK Frame
rate

 Forwarding rate: 49.992 Mbps

 Tx Flow Medium Utilization %: 56.4

 Tx Failed ACK Frame rate (pps): 95


Observation 4 – MCS 9 @ 50Mbps


Same as above. Note down the Forwarding Rate, Medium Utilization and Tx Failed ACK Frame
rate

 Forwarding rate: 49.997 Mbps

 Tx Flow Medium Utilization %: 73

 Tx Failed ACK Frame rate (pps): 1467


Troubleshooting and Diagnostics


Symptom             Diagnosis                                                    Comments

High Medium          - Check L2 frame errors (or Tx failed ACK rate): if that's   Assume testing is done in an
utilization            high, medium utilization can be impacted                   RF-isolated chamber, so
                     - Check the modulation rate: if that's set low, again        interference is not an issue
                       medium utilization can be impacted
                     - Check target throughput: setting that high can result
                       in high medium utilization

Test Variables
 Check results for different MCS rates

 Check different packet sizes


Conclusion
The expected result, in general, is that medium utilization goes down with higher order modulation
rates, because high MCS results in more efficient encoding at the PHY layer, resulting in better
overall performance.

Looking closer at the results, it becomes clear that medium utilization trends down from MCS 3
to MCS 8. However, for MCS 9 it shoots back up. This is because of the amount of L2 errors and
the associated retransmissions, which is indeed quite high for MCS 9.

 Medium Utilization for MCS 3 = 79.4

 Medium Utilization for MCS 5 = 62.6

 Medium Utilization for MCS 8 = 56.4

 Medium Utilization for MCS 9 = 73

With this data we can conclude that MCS 9 performance for the given Tx Power is not on par with
expectation. The overall cost of throughput is much higher for MCS 9.
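
A simple way to turn this comparison into an automated check is to flag any MCS step where
medium utilization fails to improve. A minimal post-processing sketch using the trial results above
(this is not a built-in WaveDevice report):

    # Medium utilization per MCS from the four trials above; flag regressions.
    results = {3: 79.4, 5: 62.6, 8: 56.4, 9: 73.0}   # MCS -> Tx Flow Medium Utilization %
    mcs_values = sorted(results)
    for lower, higher in zip(mcs_values, mcs_values[1:]):
        if results[higher] >= results[lower]:
            print(f"Regression: MCS {higher} uses {results[higher]}% of the medium "
                  f"vs {results[lower]}% at MCS {lower}")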


Test Case 5: Roaming Validation

Overview
Roaming is the ability of a device to move from one AP to another while keeping an active network
session. Roaming is now very common in most commercial deployments; users typically move
within the campus and expect their connection to stay up.

When it comes to the roaming function, the network only plays a small part (this is changing somewhat
with 802.11r and k). The device makes the key decisions on when to roam and where to roam.
The complexity arises from the fact that the active connection has to be maintained and serviced
in parallel with completing the roaming process. The 802.11 standards don't address roaming, so
every vendor has their own implementation. This makes the roaming function particularly susceptible
to failures and interoperability issues.

Testing roaming should be at the top of a device vendor’s test plan, as it can impact the user’s
experience quite significantly.

Objective
Validate the roaming success rate of a device roaming between APs in channels 48 and 44.

Setup


Step-by-step Instructions
1. Launch IxVeriwave Golden AP WaveDevice. The workflow for configuring a test is outlined
in the left frame of the GUI – System (chassis and port assignment), Access Points (AP
configuration), Devices (Devices and Tests), and Analysis (Results analysis). Please refer
to the user guide to familiarize yourself with WaveDevice GUI

2. Enter the IP address of the chassis that hosts the Golden AP card. Click on connect when
done

3. Select and reserve the Golden AP cards (IxAP) to be used for simulating the roaming test.
For this example, we select 2 IxAP cards.

4. Set the Channel information for the assigned cards. Set them to channels 44 and 48, for
example.
Note: channel selection can have an impact on AP configuration parameters, such as AP
Type and Bandwidth.

5. Switch to Access Point configuration to configure the simulated AP. You can leave most
of the parameters in their default values.


 General Tab

o Count - 5 (for each port)

o Port - <Make sure this matches, what has been reserved>

o SSID - Blackbook_Exercise

o Default Tx Power - <leave as default to start with; adjust based on RSSI feedback from device>

 Data and Beacon PHY rates tab

o This is the screen to set max. supported Data and Mgmt. PHY rates of the
simulated AP. The DUT used for this exercise is an Apple iPhone 6, which is a
SISO device that supports 256QAM. Therefore, the AP will be configured to
support MCS 8 and 9.

o Note: Disable Antennas 2, 3 and 4 in the configuration screen, if they are not
connected to the device


o Under OFDM, leave the default settings as is, as this offers the broadest compatibility.
Also set the Beacon PHY Rate to 6 Mbps for maximum compatibility. The Beacon PHY rate
sets the PHY rate for management frames transmitted by the AP.

Note: Setting a low beacon PHY rate will impact the maximum throughput the device under
test can achieve. If you are sure your DUT supports higher management PHY rates, you
can override this setting.

o Under VHT Rates, set

 NSS 1 - MCS 0-9

 NSS 2 - Not Supported

 NSS 3 - Not Supported

 NSS 4 - Not Supported

 Advanced

o Leave all remaining parameters in their default values.

o Aggregation parameters will be discussed in more detail in a separate test case.


6. Activate the AP and switch to Devices page


Clicking Activate AP will begin beacon transmission from the AP. The Beacons will be
transmitted at the Tx Power level set in the AP config screen (default value 15 dBm).

7. From the DUT, join the wireless network and start WaveAgent

8. Pick the “Roaming” test for this exercise. The Roaming test is designed to simulate various
roaming scenarios such as:

 2-AP roam back and forth

 Multi-AP roam

 Intra-channel

 Inter-channel

 Mix of Intra and Inter-channel roam

 Roam in presence of neighbor APs

Roaming simulation works by adjusting Tx Power levels between a source AP and a target AP;
the test engine will step down the power level in the source AP and step up the power level in
the target AP. These power transitions occur at regular intervals, which can be configured in the
test. When the device roams, the test engine determines if the roam was successful and then
calculates a roam delay.

Select the device and configure roaming with following settings

 Path: Inter-channel Roam. This will automatically create a roam path of APs with
alternating channels.

 Repeat: 3, to get sufficient trials. This will cycle through the roam path 3 times.

 Return to source AP: check

 Continue on fail: check. If roam fails, i.e. device doesn’t end up in the target AP,
this will continue the test and try to recover from the failure for future trials.

 Min Power: Set this to -50dB. This will be the lowest Tx Power setting applied when stepping
down the power level in the source AP. Currently -50 is a hardware limitation as well. External
attenuators can be used if more attenuation is needed.

 Max Power: Set this to +15dB. This will be the maximum Tx Power setting applied
to a target AP when stepping up power. This is also a hardware limitation.

 Power Step: 1 dB.

 Every: 1000 ms. Power Step and Every are best interpreted together. The power program will
step the power up/down in the source/target AP based on these values.

 Neighbor APs Enable: Uncheck. This is an advanced configuration that is designed to validate
the roaming algorithm's performance with regard to picking the right AP. This will be covered
in another test.

 Estimated Attenuation: This setting is only exposed for devices that don't report an RSSI. For
such devices, enter the approximate path loss; the test engine will use this to work out the
estimated RSSI based on the changing Tx Power.

 Start test.
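
The power program described above can be visualized with a small sketch using the settings
configured in this step. This is only an illustration of the ramp, not the WaveDevice test engine:

    # Source AP ramps down while the target AP ramps up, one step per interval.
    def power_program(max_dbm=15, min_dbm=-50, step_db=1, every_ms=1000):
        src, tgt, t = max_dbm, min_dbm, 0
        while src > min_dbm or tgt < max_dbm:
            yield t, src, tgt                      # elapsed ms, source power, target power
            src = max(min_dbm, src - step_db)
            tgt = min(max_dbm, tgt + step_db)
            t += every_ms

    for t, src, tgt in power_program():
        if src <= tgt:                             # crossover: a well-behaved client should roam around here
            print(f"Power crossover near t={t/1000:.0f}s (source {src} dBm, target {tgt} dBm)")
            break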


Result Analysis
When the test starts, the roaming dashboard will begin populating. The roaming dashboard tracks
BSSID, Channel, TxPower, and Estimated RSSI, and it will automatically calculate Roam Delay and
Trial Status. Real-time Monitoring stats are also available in parallel for deeper analysis.

Observation 1 – Roam Summary


The roaming dashboard presents a run-time view of key statistics while the roam simulation is in
progress. TxPower and Est. RSSI are updated every time they change, and they pause right at
the moment the device roams. Soon after the roam, the status column indicates “Passed” or
“Failed” based on a successful roam. A successful roam is when the device switches to the
configured target AP and is able to resume the active traffic session.


Based on the table above, this device roamed successfully in each of the 6 trials. The roam-delay
was somewhere between 30ms and 60ms for each trial.

Observation 2 – Forwarding Rate Vs Packet Loss


Forwarding rate is the effective throughput that a device is able to achieve; Packet loss tracks the
number of packets lost during the last sampled period. These stats give a good insight into the
impact on quality of experience for the user. High packet loss results in high medium utilization,
or in other words increased overhead, to keep up with the same amount of forwarding rate. Based
on the graph below, this device had overall very little impact on forwarding rate during the roam.
This could also be because the intended load was not set high.

[Chart: ForwardingRate and RxFlow1PacketLossNumber per sampling interval during the roam trials]

Observation 3 – TxPHYDataRate
The Client Device's TxPHYDataRate represents the link rate that the device picks for transmission.
A number of factors determine this selection, chief among them the quality of previous
transmissions. Looking at the graph below, the TxPHYDataRate remains mostly steady around
38 Mbps, but it also frequently drops to a lower PHY rate. Before we analyze this further, it helps
to understand how the test system simulates roaming. The Golden AP simulates roaming by
controlling transmit power, which primarily affects traffic in the downstream direction. The roaming
test, however, only works in the upstream direction. Therefore, it shouldn't have much of an impact
on the client's TxPHYDataRate. Moreover, the fluctuating rate doesn't make sense. This is an issue
that requires further investigation.


[Chart: Client TxDataPHYRate over the test run]

Troubleshooting and Diagnostics
Symptom           Diagnosis                                                    Comments

Roaming Failure    - Check the call flow, and see if the device probes all
                     active channels
                   - Check if the device sends out association request
                     messages to the target AP

Test Variables
 Number of roam trials

 Inter channel and Intra channel

 Roam frequency

 Traffic types

 Traffic direction

Conclusion
Overall, the device under test performed well, as all the roams were successful and roam delay
was under 100ms, which is the benchmark for voice traffic. However, the client’s TxPHYDataRate
fluctuated quite a bit, which requires further investigat


Test Case 6: Security Test

Overview
Wireless security covers two key functions, Authentication and Encryption. Today, most wireless
networks operate with some form of 802.1x based authentication scheme and 802.11i based
encryption scheme. It is therefore important to measure the impact of these settings on the
performance of the device. For instance, a device’s performance with or without encryption might
vary quite a bit. The same goes for roaming, power save, and other functionalities.

Objective
Measure the impact of AES-CCMP encryption on effective performance

Setup

Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.

2. Select a GDPT test and set up a simple configuration to benchmark throughput: UDP
traffic with a 1518-byte packet size in both the Upstream and Downstream directions.


3. Start test, collect results.

4. Re-run the same test configuration with Security turned ON. In Access Point configuration,
enable security and set a password

5. Start and collect results.

Result Analysis
To analyze this test case, we will compare the results of the same test configuration with
security turned ON and OFF.


Observation 1 – Security setup (Authentication & Encryption)


When a device associates with the AP, it goes through a security handshake to complete
authentication and set up encryption. This can be analyzed by capturing and viewing a trace in
Wireshark.

WaveDevice supports packet capture in real-time.

Note: this step is not necessary, but it’s always good practice to validate that the security setup
was successful.


Observation 2 – Forwarding Rate


Forwarding rate is the effective throughput a device is able to achieve in the test. The results
below indicate that the throughput is lower in both Upstream and Downstream directions when
security is turned ON.

               Encryption ON    Encryption OFF    Delta %
Upstream       312 Mbps         327 Mbps          5% drop
Downstream     316 Mbps         373 Mbps          16% drop

As you can see, in the upstream direction there is a 5% drop in forwarding rate when security is
turned ON.

In the downstream direction there is a 16% drop in forwarding rate.
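
The delta column above is simply the drop relative to the unencrypted baseline. A quick sketch of
the same computation, using the values copied from the table (the text above quotes them rounded):

    # Percent drop in forwarding rate relative to the encryption-OFF baseline.
    results_mbps = {
        "Upstream":   {"on": 312, "off": 327},
        "Downstream": {"on": 316, "off": 373},
    }
    for direction, r in results_mbps.items():
        drop_pct = (r["off"] - r["on"]) / r["off"] * 100
        print(f"{direction}: {drop_pct:.1f}% drop with encryption ON")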

Figure 2: Security Turned ON
Figure 3: Security Turned OFF

Let’s continue analyzing forwarding rate with packet loss and L2 frame errors.

Observation 3 – L2 frame errors


L2 frame errors indicate the number of transmission errors that occur between the AP and the DUT.
The results below indicate that, in the downstream direction, the L2 frame errors are significantly
higher when encryption is turned ON – 16.24% vs 1.89%.


Figure 4: Security Turned ON
Figure 5: Security Turned OFF

The L2 frame errors are unusually high when security is turned ON; this requires deeper
investigation at the device level.

Observation 4 – Comparison with lower load


In the previous observation we noted that L2 frame errors jumped unusually high – to 16% from
1.89% – when security was turned ON, and this had a direct impact on the end result. One
possible explanation for this outcome is that the DUT was unable to keep up with the frame rate
when encryption was turned ON. To validate this claim, we need to cross-check with a lower
intended load. The earlier test was run with the intended load set to 100% of the theoretical frame
rate, so we cross-check that with 25% of the theoretical frame rate.

Following are results from test run with security turned ON


Following are results from security turned OFF

From the result above:

 When the frame rate is set to 25% of theoretical frame rate, forwarding rate is not
impacted by security (unlike when frame rate is set to 100% of theoretical frame rate).

 When security is turned off, for the frame rate set to 100% of theoretical, the L2 frame errors
drop drastically from 16% to 1.8%. This directly correlates with the higher forwarding rate in the
downstream direction.

 When security is turned off, for the frame rate set to 25% of theoretical, the L2 frame errors
increase drastically from 1.8% to 22%. This is an anomaly: even after re-running this several
times, the results are consistent. This is also something to investigate.


Troubleshooting and Diagnostics


Symptom                   Diagnosis                                              Comments

Authentication failure     - Download the capture file and analyze the call flow
                             between DUT and AP
Poor forwarding rate       - Check for related stats like L2 frame errors and
                             packet loss

Conclusion
Turning ON encryption typically takes up some resources for processing, and it’s bound to have
an impact on the performance of the device. It’s important to understand this impact, and in the
case of this device, we can conclude that a couple of key issues were noticed:

 In the downstream direction, at 100% theoretical frame rate, with security turned ON, the
device is unable to keep up with the generated traffic.

 In the downstream direction, at 25% theoretical frame rate, with security turned OFF, the
L2 errors were much higher than expected.


Test Case 7: Ecosystem Test

Overview
Most devices operate in an ecosystem with several other devices. Performance in this ecosystem
is heavily dependent on the ability to acquire the medium and transmit successfully. This is quite
complex, as Wi-Fi uses a distributed coordination mechanism, and with more devices comes
more complexity. It's therefore important to model these deployment scenarios in the lab to
understand the impact on the performance of the device and also to optimize that performance.

Objective
Determine the performance of DUT in the presence of 10 other devices connected to the same
Access Point

Setup


Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP. In Step 3, instead of reserving
just 1 port, reserve a second IxClient port as well. The IxClient port will be used for
simulating ecosystem clients.

2. Select the “Simple Test” from the Devices page. Currently, the Simple test is the only test that
supports ecosystem client simulation. As the DUT is being tested for its ability to acquire the
medium and transmit, the traffic configuration will target upstream traffic.


 Traffic Direction: Upstream

 Frame Size: 1518 bytes

 Data Rate: 30 Mbps

 Client Count: 10 (number of ecosystem clients simulated by Ixia’s IxClient Cards)

3. Start test

Result Analysis
When the test starts, monitoring stats will automatically begin reporting. Two columns will be
created, one for reporting stats from the DUT and another for reporting stats from the simulated
IxClients.


Observation 1 – Throughput impact


[Chart: Forwarding Rate - DUT vs Forwarding Rate Per - IxClient over the test run]

Based on the configuration, each client simulates 30 Mbps of upstream traffic. The clients have to
contend with each other to acquire the medium and transmit. Based on the graph, we can see
that the performance of the DUT was much lower than the performance of the simulated IxClients.
Each IxClient was able to generate around 20 Mbps of forwarding rate, whereas for the DUT the
forwarding rate was under 10 Mbps. This is certainly an area for optimization.
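
One quick way to frame this result is to compare the DUT against an equal share of the total
delivered traffic. The numbers below are approximated from the graph above, and the per-IxClient
rate is assumed uniform:

    # How far the DUT falls below an equal share of the delivered traffic.
    ixclient_rates_mbps = [20.0] * 10     # ~20 Mbps each for the simulated clients (assumed uniform)
    dut_rate_mbps = 10.0                  # DUT forwarding rate, under 10 Mbps in this run

    total = sum(ixclient_rates_mbps) + dut_rate_mbps
    equal_share = total / (len(ixclient_rates_mbps) + 1)
    print(f"Equal share would be {equal_share:.1f} Mbps; "
          f"DUT achieved {dut_rate_mbps:.1f} Mbps ({dut_rate_mbps / equal_share:.0%} of that share)")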


Observation 2 – DUT Transmit Rate

[Chart: DUT TxDataPHYRate over the test run]

[Chart: DUT TxDataMCSRate over the test run]

The DUT kicks off with a high MCS and TxPHYDataRate, but starts trending downward soon after
and ends up at a low MCS of 1, or a TxPHYDataRate of 27 Mbps, for the majority of the test. This
low rate had a significant impact on the net result – a low forwarding rate. It takes much longer to
transmit the same amount of data at lower rates.

Devices typically adapt their transmit rates to changing channel conditions and frame transmission
quality feedback. In this case, however, the DUT made some aggressive moves while adapting,
which resulted in a poor forwarding rate.


Test Variables
 Add more simulated APs to the mix

 Add more clients

 Add bi-directional traffic

Conclusion
Based on the results, it's clear the DUT has trouble performing well under busy deployment
conditions. The simulated IxClients performed much better in comparison to the DUT in the same
ecosystem. The DUT made some aggressive moves in switching Tx rates while rate adapting,
which impacted the forwarding rate.


Test Case 8: Radio Transmitter Quality

Overview
For any client device to work well and meet expectations, it needs a solid foundation in the form
of a really good transmitter and receiver. The transmitter should be able to transmit high quality
signals when transmitting at different transmit power settings and also at different modulation
rates. Similarly, the receiver should be able to meet or beat the specs in the ability to successfully
receive and decode all the data at different RSSI values and modulation rates. It's very important
that both the transmitter and receiver meet specifications at least under ideal test conditions.

Objective
Validate the quality of the transmitter by measuring Error Vector Magnitude (EVM) at the receiver
when the transmitter is transmitting at different data rates.

Setup

Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.

2. Pick the “Simple” test for this exercise.

3. Configure the test with Upstream UDP traffic.

4. For the first trial, set up a traffic flow from the DUT to the Simulated AP at 1 Mbps.

5. Measure the EVM on the traffic stream on the Simulated AP using the WaveAnalyze
application, as shown in the screenshot below (this functionality is only available on the
RFA L1-7 hardware).


In the example above, the DUT was transmitting at MCS 7 with a 40 MHz channel bandwidth,
and the 5-second moving maximum EVM was measured at 3.9%, which is well within the spec
requirement for MCS 7 of 4.47%.

6. Now repeat the same test, but with a 100 Mbps traffic load from the device under test
to the Golden AP. Make the same EVM measurements. Results can be seen below:

In this case the measured EVM was 6.02%, which is well above the spec limit of 4.47%.
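
EVM is often quoted in dB as well as a percentage; the conversion is 20*log10 of the EVM
fraction. A minimal sketch of the pass/fail check described above, using the 4.47% limit the text
cites for MCS 7:

    import math

    def evm_pct_to_db(evm_pct):
        return 20 * math.log10(evm_pct / 100.0)

    limit_pct = 4.47                                     # MCS 7 limit quoted above
    for label, measured_pct in (("1 Mbps load", 3.9), ("100 Mbps load", 6.02)):
        verdict = "PASS" if measured_pct <= limit_pct else "FAIL"
        print(f"{label}: EVM {measured_pct}% ({evm_pct_to_db(measured_pct):.1f} dB) -> {verdict}")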

Result Analysis
The device's transmitter quality clearly degraded substantially when the data rate increased from
1 Mbps to 100 Mbps. It should be noted that the theoretical throughput of the DUT is much higher
than 100 Mbps, and hence in theory it should be well capable of transmitting at 100 Mbps without
any issues. But as we can see from the results, at a high data rate (which is very common when
devices carry applications like HD video) the transmitter quality degrades substantially, which
results in the receiver not being able to decode the frames properly. The receiver is then unable
to acknowledge the frames, causing the transmitter to retransmit extensively, which substantially
increases the cost of throughput.

Troubleshooting and Diagnostics


Symptom           Diagnosis                                                     Comments

High EVM Values    - Poor transmitter quality in certain test conditions        Testing is done under ideal
                   - Test antenna placement and the quality of the various      test conditions with a fully
                     radio components under various conditions                  cabled and isolated test setup
                   - Look for problems caused by interference from multiple
                     radios and radio technologies like Wi-Fi, LTE and
                     Bluetooth placed too close to each other

Test Variables
 Check results for different Data Rates

Conclusion
Devices should have excellent transmitter quality to avoid a high cost of throughput when
transmitting at high data rates, which happens very commonly with applications like streaming
HD video.


Test Case 9: Interoperability Testing – Performance Characterization over Distance

Overview
Wi-Fi technology is very commonly used in residential and enterprise scenarios to carry delay-
sensitive and high-bandwidth real-time voice and video traffic. Since the devices connected to the
Access Point are wireless, they can be located at different distances from the AP, and it’s
important to make sure that the users of applications like streaming HD video get a good quality
of service at different distances from the Access Point. The previous test was designed to
benchmark performance over distance of a DUT against a Golden AP. This test case deals with
running the same test using real APs.

Objective
Characterize the performance of device under test against a real AP over various distance
profiles.

Setup


Step-by-step Instructions
1. Launch IxVeriWave Interoperability WaveDevice. The workflow for configuring a test is
outlined in the left frame of the GUI – System (chassis and port assignment), End Points
(End Points IP configuration), Devices (Devices and Tests) and Analysis (Results
Analysis). Please refer to the user guide to familiarize yourself with the Interoperability
WaveDevice GUI.

2. Enter the host name or IP address of the chassis that hosts the Interoperability hardware.
Click on connect to establish communication with the chassis. The Interoperability hardware
includes the following components:

 1 Ethernet port

 2 Wi-Fi ports i.e., Access Point and Client expert analysis ports

 1 Programmable RF Management unit

 1 Access Point

 Device under test with WaveAgent software pre-installed

3. After successfully connecting to the chassis, the application will populate the endpoint and
monitoring port information. Select the appropriate hardware ports for the current test.


4. Enter the RFMU IP address and click connect to establish communication with the RFMU.
After a successful connection, the application will retrieve the RFMU Model, RFMU Firmware
Revision, default attenuation value, and available RFMU bank information. You can reserve
RFMU banks by clicking the check-box against the bank number. For a SISO DUT, one
RFMU bank is enough; MIMO testing requires multiple banks.


5. Reserve the endpoint port. Ethernet endpoint will be used to generate or receive traffic
depending on traffic direction.

6. Reserve the Access Point and Client monitoring ports. The AP and Client Wi-Fi monitor ports
will monitor Tx frames from the Access Point and the device, respectively. Reserving the
Access Point monitor port will initiate scan functionality and discover all available Wi-Fi
networks. The available Wi-Fi network information will be shown in table format; choose the
test wireless network.

7. Reserve the client monitor port. The client monitor port will be configured on the same Wi-Fi
channel as selected in the AP monitor port.


8. Switch to Endpoint configuration. Enter the WaveAgent endpoint information by clicking on
the ‘+’ sign. Click “Active Endpoints” to establish communication between the Ethernet and
WaveAgent endpoints.

9. Switch to Devices configuration. Select ‘Rate vs Range Test’ test from Test Type.

 Traffic Type : UDP

 Traffic Direction : Downstream, Upstream

 Attenuations: 0 – 60 dB with a 1 dB step

 Frame Rate : 200


Note: You can estimate path loss from the AP Tx Power and the RSSI reported at the device (a
small sketch follows these steps).

10. Start Test
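
A minimal sketch of that path-loss estimate, assuming a fully cabled and isolated setup so antenna
gains can be ignored; the example values are hypothetical:

    # Path loss is the difference between the AP's transmit power and the RSSI
    # reported at the device.
    def path_loss_db(ap_tx_power_dbm, rssi_dbm):
        return ap_tx_power_dbm - rssi_dbm

    print(path_loss_db(15, -45), "dB of path loss")   # e.g. AP at 15 dBm, device reports -45 dBm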

Result Analysis
Interoperability WaveDevice provides you the ability to validate the DUT rate adaptation algorithm
using upstream traffic and measure receive performance using downstream traffic.

When the test starts executing, monitoring stats will begin populating. Monitoring stats are
retrieved from IxVeriWave cards (RFA/WBA3601) as well as from the WaveAgent running on the
DUT. Retrieved stats are presented in the WaveDevice GUI in two categories:

 Flow Statistics - stats measuring the active traffic flows

 Station Statistics – Tx Stats from Client device to Access Point and vice versa


Stats are also stored as CSV files for each trial on the host PC. There is an analysis module
called “View Measurement” that can analyze results by correlating stats and presenting results
as bar or line graphs. RvR tests go by trials: for each trial, the set of key configuration parameters
is highlighted as the trial progresses. In this test there are 120 trials. The number of trials is
derived from the Traffic Types, Traffic Direction, Attenuations, and Frame Rates.

For the sake of brevity, we will pick some key trials for analysis. This should be sufficient to give
an idea on how to go about analyzing results from RvR test.

When each trial starts executing, the first set of stats to look at would be Offered Load and
Forwarding Rate, Packet Loss and Client Data PHY Rate.

Observation 1 – Forwarding rate

The downstream forwarding rate follows a nice curve of decreasing forwarding rate over simulated
distance, which is the expected result. But in the upstream direction, when the client is transmitting
to the AP, at around 25 dB of attenuation there is a sharp drop in the forwarding rate, and there
are also a number of instances where the measurements could not be made because of lost
connectivity. This indicates that the client device is not performing well after a certain simulated
distance while transmitting.


Observation 2 – Packet Loss

The same effect seen in the forwarding rate chart is also reflected in the packet loss chart. Packet
loss shoots up when attenuation goes beyond a certain value. The increase in the packet loss
means that AP is not able to properly receive frames from the client at those attenuation values.
This could also mean that the client device is not picking the most optimal transmit data rate based
on the channel conditions.

Observation 3 – Average Data PHY Rate


Data PHY rate indicates the transmission rate selected by the DUT. Devices pick the optimal rate
for a successful transmission. Under rate adaptation, devices dynamically update their rates to
increase the rate of successful transmission and minimize retries.

Figure 6: AP Data PHY Rate


Figure 7: Client Data PHY Rate

The above two charts plot the transmit PHY data rate of the AP and the client respectively over
attenuation on the X-axis.

These charts reinforce the point that the AP did very well in rate adapting from over 600 Mbps to
6 Mbps as the attenuation (simulated distance) increased.

The client, however, started from about 300 Mbps PHY rate and went through several ups and
downs when the signal between the AP and the client was attenuated. For example, when the
client transmits at 300 Mbps, it used a high Modulation and Coding (MCS) rate. The disadvantage
of using a high MCS rate is, it requires a high Signal to Noise (SNR) ratio to decode the frames
at the receiver. The advantage is that the transmitter can send more bits per symbol and can
achieve higher data rates and higher spectral efficiency. As the attenuation increases, the SNR
at the receiver decreases, and this causes the receiver (in this case the AP) to start seeing packet
errors when the client is transmitting at high MCS rates. So the AP will not be able to acknowledge
(ACK) some of the frames from the client. If the client continues to send data at the high MCS
rates, the AP will continue to lose frames. So the client needs to drop its MCS rate to adapt to the
changing channel conditions, and always try to find the optimal PHY rate that can minimize the
number of packet errors and maximize the spectral efficiency. In this case, the client doesn't seem
to be doing a proper job of rate adaptation, which is causing it to lose more frames in the upstream.
This in turn results in a lower forwarding rate in the upstream.
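
To make the expected behavior concrete, the toy loop below steps the MCS down after repeated
failures and probes back up after sustained success. This is only an illustrative model with assumed
thresholds, not the DUT's (or any standard) rate adaptation algorithm:

    # Toy rate adaptation: back off on high failure ratios, probe up on clean intervals.
    def adapt(mcs, failed_ratio, up_threshold=0.1, down_threshold=0.4,
              min_mcs=0, max_mcs=9):
        if failed_ratio > down_threshold and mcs > min_mcs:
            return mcs - 1        # too many unacknowledged frames: drop a rate
        if failed_ratio < up_threshold and mcs < max_mcs:
            return mcs + 1        # clean channel: try a faster rate
        return mcs

    mcs = 9
    for failed in (0.05, 0.5, 0.6, 0.3, 0.05):   # hypothetical per-interval failure ratios
        mcs = adapt(mcs, failed)
        print(f"failure ratio {failed:.2f} -> MCS {mcs}")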


Observation 4 - Aggregation

The above chart shows how the client and the AP aggregate frames as the attenuation increases.
The downstream here represents the AP perspective, and the upstream represents the client
perspective.

The AP seems to be very consistent in the way it aggregates. Interestingly, the AP seems to start
with about 30 MPDUs per A-MPDU at low attenuation values. It then seems to increase to 64
MPDUs and then drop back to 30. The increase to 64 in the middle of the test could be because
the AP has a lot of lost frames to retransmit and is therefore trying to build large aggregates.

The client, however, seems to be all over the place when it comes to aggregation. This indicates
that there are a number of potential issues with the way the client device is buffering and reordering
frames to be transmitted.
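
The reason larger A-MPDUs matter is that the fixed per-exchange overhead (preamble, contention,
block ack) is amortized over more MPDUs. A rough sketch with an assumed round-number
overhead, purely for illustration:

    # Fraction of air time that actually carries data, for different A-MPDU sizes.
    def airtime_efficiency(n_mpdus, mpdu_bytes=1518, phy_rate_mbps=300,
                           fixed_overhead_us=100.0):
        payload_us = n_mpdus * mpdu_bytes * 8 / phy_rate_mbps
        return payload_us / (payload_us + fixed_overhead_us)

    for n in (1, 8, 30, 64):
        print(f"{n:>2} MPDUs per A-MPDU -> {airtime_efficiency(n):.0%} of air time carries data")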

Troubleshooting and Diagnostics


Symptom                          Diagnosis                               Comments

Sharp drop in Forwarding Rate     Possibly a problem with the quality    Test run under ideal, isolated
for upstream traffic              of the transmitter signal at high      test conditions, so real-world
                                  attenuation values                     performance may be worse

Client Data PHY Rate very         The client transmit PHY rates are      The rates drop and then go back
inconsistent over distance        possibly not adapting well to          up, which causes a lot of retries
                                  changing error rates as the            and errors
                                  attenuation increases


Test Variables
 Traffic Type

 Traffic Direction

 Attenuation Values

 Frame Rate

Conclusion
Based on the results, the AP performance was on par with expectation; however, the client device
exhibited multiple issues in packet loss, throughput, and data PHY rate.

The ups and downs in the PHY rates and the aggregation sizes of the packets transmitted by the
DUT in the upstream direction indicate that the upstream application performance may not meet
expectations at certain attenuation levels or certain distances from the AP.


Contact Ixia

Corporate Headquarters
Ixia Worldwide Headquarters
26601 W. Agoura Rd.
Calabasas, CA 91302
USA
+1 877 FOR IXIA (877 367 4942)
+1 818 871 1800 (International)
(FAX) +1 818 871 1805
sales@ixiacom.com

Web site: www.ixiacom.com
General: info@ixiacom.com
Investor Relations: ir@ixiacom.com
Training: training@ixiacom.com
Support: support@ixiacom.com
+1 877 367 4942
+1 818 871 1800 Option 1 (outside USA)
Online support form: http://www.ixiacom.com/support/inquiry/

EMEA
Ixia Technologies Europe Limited
Clarion House, Norreys Drive
Maiden Head SL6 4FL
United Kingdom
+44 1628 408750
FAX +44 1628 639916
VAT No. GB502006125
salesemea@ixiacom.com

Renewals: renewals-emea@ixiacom.com
Support: support-emea@ixiacom.com
+44 1628 408750
Online support form: http://www.ixiacom.com/support/inquiry/?location=emea

Ixia Asia Pacific Headquarters
101 Thomson Road,
#29-04/05 United Square
Singapore 307591
+65.6332.0125
FAX +65.6332.0127

Support: Support-Field-Asia-Pacific@ixiacom.com
+1 818 871 1800 (Option 1)
Online support form: http://www.ixiacom.com/support/inquiry/
