Black Book, Edition 10
Wi-Fi Device Testing
Our goal in the preparation of this Black Book was to create high-value, high-quality content. Your
feedback is an important ingredient that will help guide our future books.
If you have any comments regarding how we could improve the quality of this book, or
suggestions for topics to be included in future Black Books, please contact us at
ProductMgmtBooklets@ixiacom.com.
This publication may not be copied, in whole or in part, without Ixia’s consent.
Ixia, the Ixia logo, and all Ixia brand names and product names in this document are either
trademarks or registered trademarks of Ixia in the United States and/or other countries. All other
trademarks belong to their respective owners. The information herein is furnished for informational
use only, is subject to change by Ixia without notice, and should not be construed as a
commitment by Ixia. Ixia assumes no responsibility or liability for any errors or inaccuracies
contained in this publication.
Table of Contents
How to Read this Book
Dear Reader
Introduction
Wi-Fi Device Issues
Wi-Fi Device Test Challenges
Test Case 1: Throughput Benchmarking Test
Test Case 2: Performance Characterization over Packet Sizes
Test Case 3: Performance Characterization over Distance – Rate vs. Range
Test Case 4: Cost of Throughput Analysis
Test Case 5: Roaming Validation
Test Case 6: Security Test
Test Case 7: Ecosystem Test
Test Case 8: Radio Transmitter Quality
Test Case 9: Interoperability Testing – Performance Characterization over Distance
Contact Ixia
Test Variables – A summary of the key test parameters that affect the test's performance and scale. These can be modified to construct other tests.
Results Analysis – Provides the background useful for test result analysis, explaining the metrics and providing examples of expected results.
Typographic Conventions
In this document, the following conventions are used to indicate items that are selected or typed
by you:
Bold indicates items that you select or click. It is also used for text found on the current GUI screen.
Dear Reader
Ixia’s Black Books include a number of IP and wireless test methodologies that will help you
become familiar with new technologies and the key testing issues associated with them.
The Black Books can be considered primers on technology and testing. They include test
methodologies that can be used to verify device and system functionality and performance. The
methodologies are universally applicable to any test equipment. Step-by-step instructions using
Ixia’s test platform and applications are used to demonstrate the test methodology.
This tenth edition of the Black Books includes twenty-five volumes covering key technologies and
test methodologies:
Soft copies of each chapter and the associated test configurations are available on Ixia's Black Book website at http://www.ixiacom.com/blackbook. Registration is required to access this section of the web site.
Ixia is committed to helping our customers' networks perform at their highest level, so that end users
get the best application experience. We hope this Black Book series provides valuable insight into
the evolution of our industry, and helps customers deploy applications and network services—in
a physical, virtual, or hybrid network configuration.
Test Methodologies
Wi-Fi has become the de facto standard of communication for local area networks. It's fast, flexible, and cheap, which has led to a proliferation of Wi-Fi devices in the market. While this trend is overall a good thing for the market, a large number of devices still exhibit issues that result in a poor experience for the end user.
This booklet aims to address this quality gap by proposing various test methodologies to verify
the performance, functionality and security resiliency of Wi-Fi client devices.
Introduction
Over 10 billion Wi-Fi-enabled devices have been shipped to date, and this number is projected to grow at 10% for years to come. Although a majority of Wi-Fi device shipments today are made up of smart phones, tablets, e-readers, and laptops, there is a growing trend of Wi-Fi becoming the access technology for several application-specific devices, including:
For most of these devices, good Wi-Fi connectivity is critical to their functioning and quality of
experience delivered to the end-user.
Lack of deployment standards – There are no standard deployment models. This leads to a lot of variation between deployments. Devices have to cope with these variations when they operate.
Legacy devices – Wi-Fi standards evolve pretty fast. This model is possible because the
standard imposes backwards compatibility on devices. Often Wi-Fi devices have to
operate in an ecosystem with several legacy devices. This causes a range of issues,
because legacy devices operate at different PHY rates and usually occupy the medium
for much longer periods.
QoS – Wi-Fi networks mostly carried best effort data traffic in the beginning; however, with
its surge in popularity, several new use-cases require QoS. This is extremely challenging
in Wi-Fi networks because of a lack of centralized control.
Radio Resource Management – Transmitting data, and transmitting data efficiently are
different concepts. Efficient transmission is increasingly becoming a focal point as it leads
to overall better network utilization and minimal impact on the battery.
Battery Performance – Most Wi-Fi devices are battery operated. Devices that don’t
optimize transmission algorithms will find that they spend too many cycles in transmissions
and re-transmissions. Optimizing transmission is key to better battery performance.
Dynamic Frequency Selection – Devices exhibit many issues when it comes to DFS, because it involves dynamically switching from one channel to another.
Antenna design – Antenna design is a complex subject: if not done right, device
performance can vary vastly, depending on its orientation or its interaction with the
environment.
Channel emulator – Emulates different channel conditions like home, small/large office,
outdoor etc. The channel conditions vary in each of these deployment conditions.
Packet capture/sniffing and protocol analyzer – Captures wireless traffic and helps
decode and analyze it.
Signal and Spectrum analyzer – Tools to analyze the Radio Frequency transmitted.
Usability – users have to train themselves in all of these different products with different
interfaces
Predictability – accounting for failure is much harder when dealing with many different pieces
Time – the time-cost to put together and maintain these different pieces
With Ixia's WaveDevice solution, this hodge-podge approach can be left in the rear-view mirror.
When testing Wi-Fi Devices, three main technology areas should be assessed:
PHY Layer
This is responsible for converting information into RF signals, and for transmitting them between
the source and the destination. To ensure successful communication, the transmitter and receiver
of the device need to perform well and also conform to all necessary specifications. On the
functional side, the transmitter needs to make sure that the RF energy transmitted is confined to
the allowed spectrum mask within the frequency band of operation. It also needs to ramp up and
ramp down power within spec to ensure any given transmission does not interfere with the next
one. Because ensuring a proper PHY layer proves critical for ensuring end-user quality of
experience on a Wi-Fi device, RF testing is an essential step. On the performance side, the
transmitter needs to maintain good modulation accuracy – a low Error Vector Magnitude (EVM) – at different modulation rates, while the receiver needs good sensitivity when receiving at various data rates and power levels.
MAC Layer:
Unlike other wireless access technologies such as LTE, where the base station makes most of
the decisions for the device, the Wi-Fi protocol requires the device to make lots of decisions. The
device must decide:
When to transmit
A typical Wi-Fi device is expected to be a lot smarter, and needs to implement several complex
algorithms at the MAC layer. Hence the MAC layer on a device needs to be tested thoroughly for
both functionality and performance.
On the functional side, the device needs to be tested to make sure it can roam, rate adapt, and connect to the AP using proper security mechanisms, and that it connects only to APs with matching credentials. On the performance side, it's important to test that the device can optimize its
resources to maximize throughput, implement proper traffic classification under load, and
minimize battery consumption.
Application Layer
This is what the user sees and interacts with. Here, we need to look at several aspects of
application performance, including issues such as seamless LTE to Wi-Fi handover, delay-
sensitive Unified Communications (UC) applications, high-definition video streaming over Wi-Fi,
and the like. One very important point to note is that bad RF performance or bad MAC layer
performance will result in a bad user experience with an application. It’s important to thoroughly
test and harden a device at each one of these layers in isolation, and then test the system as a
whole.
Ixia recommends a staged approach to testing. It is important to
baseline the performance of the Device under Test (DUT) under ideal conditions, and to find and
fix issues. Testing should then progress by introducing one variable at a time, moving from the
most deterministic to the most realistic test conditions.
The key tests that need to be run in Stage 1 include measuring radio performance, validating
device connectivity, measuring raw throughput, and ensuring protocol conformance. Stage 1 also
includes other MAC and PHY protocol-related aspects such as roaming, rate adaptation, power-
save protocols, and security. Extensive test coverage is critical during this stage, and covering
numerous test cases in a small amount of time is essential for an on-time release of a high-quality
product in a highly competitive market.
The WaveDevice Golden AP solution combines hardware and software. The hardware includes
an 802.11a/b/g/ac Golden AP emulator, full line-rate traffic generator, channel and distance
emulator, line-rate real-time protocol sniffer, and line-rate, real-time signal generator and
analyzer.
The test hardware connects to an RF enclosure using RF cables; the device under test is placed
inside the RF enclosure. The DUT runs simple endpoint software, called WaveAgent, which
sits at the transport layer on the device. The WaveAgent receives commands from the Ixia
WaveDevice hardware to send/receive different types of traffic at different rates, making precise
performance measurements.
In Stage 2, the client device under test has to be tested against the most common real APs to
make sure that the device can work well with those APs in the field. The key tests in Stage 2
include TCP/UDP/VOIP Upstream/Downstream performance at different frame sizes and rates,
and on different frequency channels with different settings on the AP and the client.
The client device under test is connected using RF cables to the AP through the IxVeriWave RF
Management Unit. Both the AP and the Client are placed in separate RF enclosures to create an
isolated, fully controllable, and repeatable test environment.
The testbed also includes the IxVeriWave WT-90/92 that houses two 802.11ac wireless cards
that can capture all the traffic between the AP and the Client on the wireless interface and perform
expert analysis to isolate and identify PHY/MAC-level issues with the AP and the Client device.
The WT-90/92 chassis also includes an Ethernet card that is connected to the Ethernet interface
of the AP and acts as one of the endpoints for traffic. The second endpoint is the WaveAgent
software installed on the device under test. The WaveDevice software application can create
TCP/UDP/VOIP traffic in both upstream and downstream directions and measure end-to-end Key
Performance Indicators (KPIs).
The RF Management Unit can also programmatically simulate distance between the AP and the client, allowing the software application to run performance-over-distance tests.
All the components of the testbed are nicely integrated and can be controlled from an easy-to-
use GUI. The user can run automated tests with various combinations of test settings and watch
the results in real-time.
In Stage 3 of testing, the goal is to evaluate the device’s performance in the field. Stage 1 and
Stage 2 testing provide an excellent platform for the test engineers to do everything possible in
the lab and ship an excellent product. However, there will almost certainly be some issues that
only show up in the field.
While testing Wi-Fi networks and devices in the field, the common misconception is: good RF
coverage means happy users. A device could be getting excellent signal strength at all the
locations on the floor but still have poor performance. There could be several reasons for this:
maybe the device is getting excellent signal in general but is not connected to the best available
AP; maybe the traffic load is not balanced across all the APs, resulting in low throughput
on the device; maybe all the neighboring devices are communicating only on the 2.4GHz band
even though there is ample free bandwidth available on the 5GHz band; or maybe it is not even
a wireless problem, and there is some policy/role based misconfiguration on the wired network
that is causing poor performance.
Ixia’s WaveDeploy test tool allows customers to run active site assessments from real devices
using the same WaveAgent software used in Stage 1 and Stage 2 testing. These assessments
measure the voice, video, and data performance of real devices at various locations on the
deployment floor under very real usage conditions.
From extensive testing conducted in the field across several sites, it becomes clear that traditional RSSI-based survey testing is not sufficient. Coverage doesn't mean capacity. It
is very important to:
Only then can the users find the issues in the field that were not found in the Stage 1 and Stage
2 lab testing.
Overview
Devices come in various shapes and sizes, but one function common among all of them is the
ability to transmit, receive, and process traffic. The throughput benchmark test provides a concise
sketch of the overall performance of the device. It indirectly validates the radio HW, RF signal
chain, device driver, OS and application level performance, all in a single test.
In Wi-Fi, the overall throughput of a device can be impacted by several factors; some of them are
listed below:
Transmit power
Ecosystem traffic
Packet sizes
Frame Aggregation
TCP/UDP
To be able to make sense of the results, it’s essential to keep the test variables to a minimum and
under control during each trial. A test-bed like Ixia’s WaveDevice Golden AP gives users this
control while enabling them to execute various tests.
Objective
Benchmark the maximum Downstream & Upstream throughput of a given device.
Setup
The setup here is made up of an AP with traffic generation and control capabilities, a chamber to isolate the RF environment from external RF sources, and a DUT with an agent that can be controlled to generate data traffic.
Step-by-step Instructions
1. Launch IxVeriwave Golden AP WaveDevice. The workflow for configuring a test is outlined
in the left frame of the GUI – System (chassis and port assignment), Access Points (AP
configuration), Devices (Devices and Tests), Analysis (Results analysis). Please refer to
the user guide to familiarize yourself with the WaveDevice GUI.
2. Enter the IP address of the chassis that hosts the Golden AP card. Click on connect when
done.
3. Select the Golden AP card to be used for simulating the AP, and click Reserve at the
bottom of the screen
4. Set the Channel information for the simulated AP. Note: channel selection can have an
impact on AP configuration parameters like – AP Type, Bandwidth.
5. Switch to Access Point configuration to configure the simulated AP. You can leave most
of the parameters in their default values.
General Tab
o SSID - Blackbook_Exercise
o This is the screen to set max. supported Data and Mgmt. PHY rates of the
simulated AP. The DUT used for this exercise is an Apple iPhone 6, which is a
SISO device that supports 256QAM. Therefore the AP will be configured to
support MCS 8 and 9.
o Note: Disable Antennas 2, 3 and 4 in the configuration screen, if they are not
connected to the device
o Under OFDM, leave the default settings as is, as they offer the broadest compatibility.
Also set the Beacon PHY Rate to 6 Mbps for maximum compatibility. The
Beacon PHY rate sets the PHY rate for management frames transmitted by
AP.
Note: Setting a low beacon PHY rate will impact the maximum throughput the device under
test can achieve. If you are sure your DUT supports higher management PHY rates, you
can override this setting.
Advanced
Clicking Activate AP will begin beacon transmission from the AP. The beacons will be transmitted at the Tx Power level set in the AP config screen (default value 15 dBm).
8. Select the DUT from the devices that show up in the summary screen
9. Pick the "GDPT" option from the Test Type drop-down menu and configure it with the following parameters to drive maximum throughput. The General Data Plane Test is designed to characterize the performance of a device by subjecting it to different types of traffic, making it an ideal test for benchmarking the throughput performance of a device under test.
Frame Rate: Set to 100% of the theoretical frame rate. The theoretical frame rate is derived from several configuration parameters, such as channel bandwidth, max. data PHY rate, guard interval, and aggregation settings. For an 80 MHz 11ac SISO device with support for 256-QAM modulation (MCS 9) and a short guard interval, the theoretical rate works out to 433.3 Mbps (see the sketch after these steps).
Under Options, set the trial duration to 60 secs. Trial duration can be increased for long-duration and stability tests.
10. Start the test by clicking on the “Start Test” icon in the ribbon on top.
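The 433.3 Mbps figure used above can be reproduced from standard 802.11ac OFDM parameters. The sketch below is a minimal illustration, assuming the usual VHT constants (data subcarriers per channel width, OFDM symbol time with short guard interval); the function and names are ours, not part of the WaveDevice API.

```python
# Minimal sketch: theoretical 802.11ac (VHT) PHY rate, assuming standard
# VHT OFDM constants. Illustrative only; not the WaveDevice API.
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per channel width (MHz)

# MCS index -> (coded bits per subcarrier per stream, coding rate)
MCS_TABLE = {
    0: (1, 1 / 2),  # BPSK, rate 1/2
    8: (8, 3 / 4),  # 256-QAM, rate 3/4
    9: (8, 5 / 6),  # 256-QAM, rate 5/6
}

def vht_phy_rate_mbps(mcs, width_mhz, nss=1, short_gi=True):
    """PHY rate = data subcarriers * bits * coding rate * streams / symbol time."""
    bits, rate = MCS_TABLE[mcs]
    symbol_us = 3.6 if short_gi else 4.0  # OFDM symbol duration incl. guard interval
    return DATA_SUBCARRIERS[width_mhz] * bits * rate * nss / symbol_us

print(round(vht_phy_rate_mbps(9, 80), 1))  # -> 433.3 (MCS 9, 80 MHz, 1 stream, short GI)
print(round(vht_phy_rate_mbps(8, 80), 1))  # -> 390.0 (MCS 8)
```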
Result Analysis
When the test starts executing, monitoring stats will begin populating simultaneously. Monitoring stats are retrieved from the IxVeriWave cards (RFA/WBA) as well as the WaveAgent running on the DUT. Retrieved stats are presented in the WaveDevice GUI in three categories:
Port Stats – stats measured at the port, which include all clients and APs using the specific IxVeriWave HW port
Stats are also stored as CSV files on the hard disk. An analysis module called "View Measurement" can analyze results by correlating stats and presenting them as bar or line graphs.
GDPT tests go by trials; for each trial, the set of key configuration parameters is highlighted as the trial progresses. In this test there are 4 trials:
When each trial starts executing, the first set of stats to look at would be Offered Load and
Forwarding Rate. But first, some background on the stats used here:
Statistics Terminology
Intended Load
Intended Load is the throughput intended to be generated. For tests like GDPT and RvR
it’s computed as a percent of theoretical PHY rate, and it is configurable in the test.
Offered load
Downstream
In the downstream direction, this stat represents the L4 traffic load generated at the simulated distribution system. As the L4 traffic is generated in the same hardware as the simulated Golden AP, this stat also represents the throughput load at L2 of the AP. Moreover, because of the way the Wi-Fi MAC works, the system limits this value to traffic successfully ACK'ed by L2 of the DUT. So offered load is the L4 traffic load generated at the simulated AP that is also successfully ACK'ed at L2 of the DUT. The difference between offered load and intended load can be attributed to system overhead (management frames, contention, retransmissions, etc.) and receiver performance.
Upstream
In the upstream direction, this stat represents the L4 traffic generated by the WaveAgent. The WaveAgent relies on the DUT's operating system TCP/IP stack to send the traffic out. If for whatever reason there is a bottleneck in the transmission path and the OS is unable to keep up with the traffic generated by WaveAgent, this will be reflected in a lower offered load.
Forwarding rate
Downstream
In the downstream direction, this stat represents the effective traffic that reaches the WaveAgent. If any traffic is dropped between L2 and L4 of the DUT, it will be reflected in the delta between Forwarding Rate and Offered Load.
Upstream
In the upstream direction, this stat represents the traffic received by the simulated AP (L2). Because the entire simulated AP subsystem is in the same HW, there are no packet losses between the layers of the simulated AP. Therefore, this can also be interpreted as the L4 traffic at the distribution system.
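To make the relationships between these stats concrete, here is a small illustrative sketch for a downstream trial, using the example values from Observation 1 below. The variable names and the 433.3 Mbps intended load are our assumptions for illustration.

```python
# Illustrative only: relating intended load, offered load, and forwarding rate.
intended_load_mbps = 433.3      # 100% of the theoretical rate configured in the test
offered_load_mbps = 375.983     # L4 load generated at the AP and ACK'ed at L2 of the DUT
forwarding_rate_mbps = 375.087  # traffic actually delivered to the WaveAgent (L4)

overhead = intended_load_mbps - offered_load_mbps      # mgmt frames, contention, retries
stack_loss = offered_load_mbps - forwarding_rate_mbps  # drops between L2 and L4 of the DUT
print(f"system overhead: {overhead:.1f} Mbps, L2-to-L4 loss: {stack_loss:.3f} Mbps")
```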
Observation 1 - Throughput
Offered load and forwarding rate are quite close: 375.983 Mbps vs. 375.087 Mbps.
The difference between offered load and forwarding rate is generally attributed to packet loss across the various interfaces of the stack. Note that L2 frame errors occur in only one direction. That's because traffic flows only downstream; even though the client transmits some management frames upstream, they don't result in any L2 errors.
Observation 3 – Aggregation
In the downstream direction, the simulated AP aggregates packets based on negotiations with the DUT. Aggregating MPDUs (MAC-layer protocol data units) results in overall higher throughput, as there is less overhead in acquiring the medium to transmit the same amount of data.
As we can see from the result below, aggregation performance was quite good; all Tx Packets
were aggregated in the max aggregation bucket, which is 32-64.
Observation 1 - Throughput
Once again, offered load and forwarding rate are quite close: 326.578 Mbps vs. 325.087 Mbps.
The difference between offered load and forwarding rate is generally attributed to packet loss across the various interfaces of the stack. Note that L2 frame errors occur in only one direction; that's because traffic flows only upstream, and even though the Golden AP transmits some management frames downstream, they don't result in any L2 errors.
Observation 3 – Aggregation
In the upstream direction, the DUT has to aggregate packets. Aggregating MPDUs (MAC-layer protocol data units) results in overall higher throughput, as there is less overhead in acquiring the medium to transmit the same amount of data.
As we can see from the result below, aggregation performance was OK, but not maximized. Each A-MPDU had roughly 32 MPDUs aggregated. Effective throughput could have been higher if more MPDUs were aggregated. In real life, though, DUTs have to balance high throughput against the overhead associated with retransmissions; from that angle, aggregating fewer MPDUs can be seen as a balancing act to optimize throughput (see the sketch below).
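As a rough illustration of this trade-off, the sketch below amortizes a fixed per-burst cost (contention, preamble, block ACK) over the MPDUs in an A-MPDU. The airtime figures are assumptions chosen for illustration, not measurements.

```python
# Rough model: payload airtime fraction for one A-MPDU burst, assuming a fixed
# per-burst overhead. Numbers are illustrative assumptions, not measurements.
def efficiency(n_mpdus, mpdu_airtime_us=30.0, fixed_overhead_us=120.0):
    """Fraction of airtime spent on payload for one aggregated burst."""
    payload = n_mpdus * mpdu_airtime_us
    return payload / (payload + fixed_overhead_us)

for n in [1, 8, 32, 64]:
    print(f"{n:>2} MPDUs per A-MPDU -> {efficiency(n):.0%} airtime efficiency")
```

Larger aggregates push efficiency up, but they also put more data at risk per transmission opportunity, which is why a device may deliberately aggregate less under marginal conditions.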
Observation 1 - Throughput
Offered load and forwarding rate match at 216.365 Mbps, but the overall throughput is lower compared to UDP. This is expected for TCP. Note that the L2 errors exist in both directions (unlike UDP); this is because in TCP the returning ACKs also take up resources.
Observation 3 – Aggregation
In the downstream direction, the simulated AP aggregates packets based on negotiations with the DUT. Aggregating MPDUs (MAC-layer protocol data units) results in overall higher throughput, as there is less overhead in acquiring the medium to transmit the same amount of data.
As we can see from the result below, aggregation performance was quite good; all Tx Packets
were aggregated in the max aggregation bucket, which is 32-64.
Test Variables
Different packet sizes
Conclusion
Based on the analysis above, we conclude that the maximum throughput for this device is:
UDP
TCP
By using the Golden AP and a conducted (cabled) setup, we were able to get consistent results for maximum throughput.
Overview
Many wireless devices are designed for specific applications: medical devices, point-of-sale terminals, VoIP phones, etc. The traffic pattern in such devices is generally quite deterministic, so they need to be benchmarked with specific traffic profiles. General-purpose devices (laptops, tablets, etc.), on the other hand, deal with wide-ranging traffic patterns, and they too need to be benchmarked with different traffic profiles. There have been many instances where device performance deteriorates for specific packet sizes. It's therefore important to characterize a device's performance with different traffic characteristics.
Objective
Characterize device performance over different packet sizes.
Setup
Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.
2. Pick the "GDPT" option from the Test Type drop-down menu and configure it with the following parameters. The General Data Plane Test is designed to characterize the performance of a device by subjecting it to different types of traffic, making it an ideal test for benchmarking the performance of a device under test.
Frame Size: Set to MTU values of 128, 256, 512, 1024, and 1518 bytes, for a full sweep of different packet sizes (see the sketch after these steps).
Frame Rate: Set to 100% of the theoretical frame rate. The theoretical frame rate is derived from several configuration parameters, such as channel bandwidth, max. data PHY rate, guard interval, and aggregation settings. For an 80 MHz 11ac SISO device with support for 256-QAM modulation (MCS 9) and a short guard interval, the theoretical rate works out to 433.3 Mbps.
Under Options, set the trial duration to 60 secs. Trial duration can be increased for long
duration and stability tests.
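To get a feel for what these settings imply, the quick arithmetic below converts 100% of the theoretical rate into frames per second at each configured frame size (illustrative only; it ignores MAC and PHY overhead).

```python
# Illustrative arithmetic: frames/s needed to sustain the theoretical rate at
# each configured frame size. Smaller frames mean many more frame exchanges.
THEORETICAL_MBPS = 433.3
for size_bytes in [128, 256, 512, 1024, 1518]:
    fps = THEORETICAL_MBPS * 1e6 / (size_bytes * 8)
    print(f"{size_bytes:>5} B -> {fps:,.0f} frames/s")
```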
Result Analysis
Tests are broken into trials, which lock the configuration down during execution. For this test case there are 10 trials. For the sake of brevity, we will focus our analysis on UDP traffic only.
The second point that becomes evident is that offered load is higher downstream than upstream.
In the run below, it's evident that L2 errors increase drastically – almost 40X in the downstream direction – as the frame size decreases.
Packet loss, on the other hand, is nominal. In the downstream direction, packet loss is highest for the biggest frame size (1518 bytes); in the upstream direction, packet loss is fairly even across the board.
Observation 4: Aggregation
Aggregation stats represent the number of MAC-layer PDUs aggregated by the system, presented in both the upstream and downstream directions. From the graph below, it becomes evident that in the upstream direction the DUT aggregates MPDUs differently for different packet sizes. In the downstream direction, the simulated AP always aggregates at a constant rate of 64 MPDUs. This could be one of the reasons why the L2 frame errors are high in the downstream direction, especially for smaller frame sizes.
Test Variables
Frame rates
Conclusion
Based on the analysis above, we conclude that for the given DUT:
The bigger the frame size, the better the throughput performance
The number of L2 frame errors is much lower for bigger frame sizes than for smaller frame sizes; this translates to more overhead for smaller frame sizes
Bigger frame sizes have a better forwarding-rate-to-intended-load ratio than smaller frame sizes
Overview
Wi-Fi technology is very commonly used in residential and enterprise scenarios to carry delay-sensitive and high-bandwidth real-time voice and video traffic. Since the devices connected to the Access Point are wireless, they can be located at different distances from the AP. It's important to make sure that users get a good quality of experience at different distances from the Access Point.
Objective
Characterize the performance of the device over various distance profiles.
Setup
Step-by-step Instructions
1. Follow steps 1 - 5 of Test Case 1.
2. Switch to Access Point configuration to configure the simulated AP. You can leave most
of the parameters in their default values.
General Tab
o SSID - Blackbook_Exercise
o Default Tx Power - Set this value to default and adjust if required, depending
on feedback from device.
o This is the screen to set the max. supported Data and Mgmt. PHY rates of the simulated AP. The DUT used for this exercise is a Nexus 6, which is a 2x2 MIMO device that supports 256QAM. Therefore, the AP will be configured to support Nss 1 and Nss 2 with MCS 0 - 9.
o Disable Antennas 3 and 4 in the configuration screen, if they are not connected
to the device
o Under OFDM, leave the default settings as is, as these are mandatory supported rates according to the 802.11 specification. You can change these settings if needed.
Note: Setting a low beacon PHY rate will impact the maximum throughput the
device under test can achieve. If you are sure your DUT supports higher
management PHY rates, you can override this setting.
Advanced
5. Select the DUT from the devices that show up in the summary screen
Note: The RSSI value shown for each device is reported by the WaveAgent endpoint installed on the DUT; initial path loss can be estimated from the Tx Power configured in the Golden AP and the RSSI value reported by WaveAgent: path loss = TxPower - RSSI (see the sketch below).
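A minimal sketch of that calculation, assuming the Golden AP's default Tx power and an illustrative RSSI report (the 44 dB result matches the example in the Result Analysis section):

```python
# Illustrative only: estimating initial path loss from Tx power and RSSI.
tx_power_dbm = 15   # Golden AP default Tx power
rssi_dbm = -29      # assumed RSSI reported by WaveAgent on the DUT
path_loss_db = tx_power_dbm - rssi_dbm
print(path_loss_db)  # -> 44 (dB)
```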
6. Pick the "Rate vs. Range Test" from the "Test Type" drop-down menu and configure the test with the following parameters. The Rate vs. Range test is designed to characterize the device's receiver performance at different distances from the Access Point while locking the data rate. This provides a fair idea of receiver performance for a given Modulation and Coding Scheme/data rate. The test can be configured with multiple data rates, so you can benchmark receiver performance for different Modulation and Coding Schemes/data rates at different distances in a single click.
The Golden AP uses Tx Power to simulate distance. Tx power can be configured in steps as small as 1 dB.
RvR Configuration
Test Configuration
Tx PHY Rates: Nss 1 MCS 0, Nss 1 MCS 9, Nss 2 MCS 3, and Nss 2 MCS 6
Note: Ideally this test case should be run with all possible MCS values to fully
sweep and identify any issues. However, to keep the content relevant for this
Black Book, this test has been configured to target certain key MCS indexes.
Note: The Tx Power value can be configured with a minimum step size of 1 dB, in the range of +15 dBm to -50 dBm
Frame Rates: Set to 50% of the theoretical frame rate, to target reasonable throughput values.
7. Start the test by clicking on the “Start Test” icon in the ribbon on top
Result Analysis
Path Loss in setup: It is always recommended to measure the path loss between the AP and the DUT before measuring receiver performance. The initial estimated path loss can be computed from the TxPower configured in the Golden AP and the RSSI value reported by WaveAgent. In this example, the initial estimated path loss is 44 dB.
When the test starts executing, monitoring stats will begin populating simultaneously. Monitoring stats are retrieved from the IxVeriWave cards (RFA/WBA3601) as well as the WaveAgent running on the DUT. Retrieved stats are presented in the WaveDevice GUI in three categories:
Port Stats – stats measured at the port, which include all clients and APs using the specific IxVeriWave HW port
Stats are also stored as CSV files for each trial on the host PC. An analysis module called "View Measurement" can analyze results by correlating stats and presenting them as bar or line graphs. Along with other UI graphs, trial results are also available in table format with a great deal of information.
RvR tests go by trials; for each trial, the set of key configuration parameters is highlighted as the trial progresses. In this test there are 84 trials. The number of trials is derived from the number of Tx PHY rates times the number of Tx powers times the number of frame rates.
For the sake of brevity, we will pick some key trials for analysis. This should be sufficient to give an idea of how to go about analyzing results from the RvR test.
When each trial starts executing, the first set of stats to look at would be Offered Load and
Forwarding Rate, L2 Frame Errors and Medium Utilization.
The following chart shows the typical receiver sensitivity for 802.11ac modulation and coding schemes at channel widths of 20, 40, 80, and 160 MHz. Most receivers today exceed these performance metrics quite comfortably.
If the DUT does not have any specifications on its receiver sensitivity, this reference chart
should provide some guidance on how to evaluate the device.
Use the View Measurement module to plot Offered Load and Forwarding Rate graphs over distance (represented as decreasing TxPower)
The first observation is that Offered Load tracks close to Intended Load. This implies the DUT is receiving the target throughput from an L2 perspective.
Here is the key issue: forwarding rate drops close to 0 for Nss 2 MCS 6. Offered load for the same modulation was 250 Mbps, which implies that at L2 of the DUT the success rate was close to 100%. However, between L2 and L4 of the DUT, there was over 99% packet loss. These packets will not be retransmitted, as the test was using UDP. This translates to poor Quality of Experience for the user.
Nss 1 MCS 9 and Nss 2 MCS 3 also show drops in forwarding rate at certain Tx powers, and then shoot back up at adjacent lower power values.
Finally, comparing these results with the packet loss graph, it's once again clear that there is an issue for Nss 2 MCS 6 at certain power levels.
L2 errors yo-yo up and down for a range of power levels, after which they go all the way up. This behavior clearly demonstrates that the DUT is unable to decode/acknowledge frames at a specific Tx Power and Tx PHY rate. It results in more Layer 2 retries, ultimately increasing the cost of throughput by consuming additional air time. Since Wi-Fi is a shared medium, this impacts not only the device's performance but overall Wi-Fi network performance.
Observation 3 – Jitter
Increased path loss may result in high jitter values. High variation in jitter across different power levels can degrade voice/video quality at different distances from the Access Point.
We observed high jitter values with Nss 1 MCS 0 – the lowest Tx PHY rate configured – irrespective of Tx power.
Higher Tx PHY rates resulted in low jitter values, which shows that the device is receiving a continuous stream.
Test Variables
Modulation and Coding Scheme/Tx PHY Rates
Tx Power
Frame Rates
Conclusion
Based on the analysis done above, we can conclude that the DUT clearly exhibited receiver
issues at some modulation rates and power levels. This calls for deeper analysis to understand
and remediate the issues.
The forwarding rate of a device at different distances can be heavily influenced by several factors:
There can be other factors as well, but assuming the test bed is isolated and kept in optimal condition, the above factors are the key influencers.
Overview
Wi-Fi devices operate in a shared-access medium, and they have to cooperate and coexist with several other devices. Wi-Fi, like Ethernet, uses a distributed access scheme, with one small difference: it uses CSMA/CA (carrier sense multiple access with collision avoidance) to control access to the medium. However, collisions still occur. Moreover, there are several other challenges that the MAC layer has to deal with that add to the overhead.
Cost of throughput is a lesser-known but very important metric that represents the performance of the device in terms of overhead to the DUT, as well as to the overall system. In simple terms, cost of throughput reflects the proportional use of the medium – medium utilization – to transfer a given amount of data. There will always be some cost associated with throughput; the aim is to minimize it (see the sketch below).
Because the medium is shared, a higher cost of throughput burdens all devices using the medium.
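As a first-order illustration of why cost of throughput should fall as the modulation rate rises, the sketch below models medium utilization as offered load divided by PHY rate, ignoring MAC overhead, contention, and retries. The PHY rates assume 80 MHz, Nss 1, short guard interval.

```python
# First-order model only: airtime fraction ~ offered load / PHY rate.
# Ignores MAC overhead, contention, and retransmissions.
def medium_utilization_pct(throughput_mbps, phy_rate_mbps):
    return 100 * throughput_mbps / phy_rate_mbps

# Theoretical PHY rates (Mbps) at 80 MHz, Nss 1, short guard interval
for mcs, phy in [(3, 130.0), (8, 390.0), (9, 433.3)]:
    print(f"MCS {mcs}: ~{medium_utilization_pct(50, phy):.1f}% medium use for 50 Mbps")
```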
Objective
Validate that cost of throughput improves with increasing modulation rate.
Setup
Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.
2. Pick the “Simple” test for this exercise. It supports locking down Tx Power, MCS and
Data rate for a given trial.
4. The Tx Power setting needs to correspond to an optimal RSSI. In this case, Tx Power = 5 dBm results in an RSSI of -30 dBm at the device.
Result Analysis
Observation 1 – MCS 3 @ 50 Mbps
As the test execution starts, begin monitoring real-time stats.
First make sure the forwarding rate matches the target throughput
This represents the amount of resources taken up by the DUT to transmit 50 Mbps successfully. It includes retransmissions; as the retransmission rate is quite low in this case, the medium utilization mostly reflects the cost of transmitting 50 Mbps.
Symptom: High Medium Utilization
Diagnosis:
- Check L2 frame errors (or Tx failed ACK rate): if that's high, medium utilization can be impacted
- Check the modulation rate: if that's set low, medium utilization can be impacted
- Check target throughput: setting that high can result in high medium utilization
Comments: Testing is assumed to be done in an RF-isolated chamber, so interference is not an issue
Test Variables
Check results for different MCS rates
Conclusion
The expected result, in general, is that medium utilization goes down with higher order modulation
rates, because high MCS results in more efficient encoding at the PHY layer, resulting in better
overall performance.
Looking closer at the results, it becomes clear that medium utilization trends down from MCS 3 to MCS 8. However, for MCS 9 it shoots back up. This is because of the number of L2 errors and associated retransmissions, which are indeed quite high for MCS 9.
With this data we can conclude that MCS 9 performance for the given Tx Power is not on par with
expectation. The overall cost of throughput is much higher for MCS 9.
Overview
Roaming is the ability of a device to move from one AP to another while keeping an active network
session. Roaming is now very common in most commercial deployments; users typically move
within the campus and expect their connection to stay up.
When it comes to the roaming function, the network plays only a small part (this is changing somewhat with 802.11r and 802.11k). The device makes the key decisions on when and where to roam. The complexity arises from the fact that the active connection has to be maintained and serviced in parallel with completing the roaming process. The 802.11 standards don't specify roaming behavior, so every vendor has their own implementation. This makes the roaming function particularly susceptible to failures and interoperability issues.
Testing roaming should be at the top of a device vendor’s test plan, as it can impact the user’s
experience quite significantly.
Objective
Validate the roaming success rate of a device roaming between APs in channels 48 and 44.
Setup
Step-by-step Instructions
1. Launch IxVeriwave Golden AP WaveDevice. The workflow for configuring a test is outlined
in the left frame of the GUI – System (chassis and port assignment), Access Points (AP
configuration), Devices (Devices and Tests), and Analysis (Results analysis). Please refer
to the user guide to familiarize yourself with the WaveDevice GUI.
2. Enter the IP address of the chassis that hosts the Golden AP card. Click on connect when
done
3. Select and reserve the Golden AP cards (IxAP) to be used for simulating the roaming test.
For this example, we select 2 IxAP cards.
4. Set the Channel information for the assigned cards. Set them to channels 44 and 48, for example.
Note: channel selection can have an impact on AP configuration parameters, such as AP
Type and Bandwidth.
5. Switch to Access Point configuration to configure the simulated AP. You can leave most
of the parameters in their default values.
General Tab
o SSID - Blackbook_Exercise
o This is the screen to set max. supported Data and Mgmt. PHY rates of the
simulated AP. The DUT used for this exercise is an Apple iPhone 6, which is a
SISO device that supports 256QAM. Therefore, the AP will be configured to
support MCS 8 and 9.
o Note: Disable Antennas 2, 3 and 4 in the configuration screen, if they are not
connected to the device
o Under OFDM, leave the default settings as is, as they offer the broadest compatibility.
Also set the Beacon PHY Rate to 6 Mbps for maximum compatibility. The
Beacon PHY rate sets the PHY rate for management frames transmitted by
AP.
Note: Setting a low beacon PHY rate will impact the maximum throughput the device under
test can achieve. If you are sure your DUT supports higher management PHY rates, you
can override this setting.
Advanced
Toolbar
Clicking Activate AP will begin beacon transmission from AP. The Beacons will be
transmitted at Tx Power level set in AP config screen (default value 15dBm).
8. Pick the "Roaming" test for this exercise. The Roaming test is designed to simulate various roaming scenarios, such as:
Multi-AP roam
Intra-channel
Inter-channel
Roaming simulation works by adjusting Tx Power levels between a source AP and a target AP; the test engine steps down the power level in the source AP and steps up the power level in the target AP. These power transitions occur at regular intervals, which can be configured in the test. When the device roams, the test engine determines whether the roam was successful and then calculates the roam delay. A sketch of this power ramp follows.
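The pseudologic below sketches that power ramp under stated assumptions: the step size is hypothetical, and this is only an illustration of the schedule, not the WaveDevice test engine.

```python
# Illustrative sketch of the roaming power ramp described above.
import time

MIN_POWER_DBM = -50  # "Min Power" test setting (also a hardware limit)
MAX_POWER_DBM = 15   # "Max Power" test setting
STEP_DB = 5          # hypothetical "Power Step" value
EVERY_S = 1.0        # "Every: 1000 ms"

def ramp_power():
    """Step the source AP down and the target AP up until both limits are hit."""
    source, target = MAX_POWER_DBM, MIN_POWER_DBM
    while source > MIN_POWER_DBM or target < MAX_POWER_DBM:
        source = max(source - STEP_DB, MIN_POWER_DBM)
        target = min(target + STEP_DB, MAX_POWER_DBM)
        print(f"source AP: {source:+d} dBm, target AP: {target:+d} dBm")
        time.sleep(EVERY_S)  # power transitions occur at regular intervals
```

As the source AP fades and the target AP strengthens, the device should decide to roam somewhere along the ramp; the test engine then records whether the roam succeeded and its delay.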
Path: Inter-channel Roam. This will automatically create a roam path of APs with
alternating channels.
Repeat: 3, to get sufficient trials. This will cycle through the roam path 3 times.
Continue on fail: check. If a roam fails (i.e., the device doesn't end up on the target AP), this will continue the test and try to recover from the failure for future trials.
Min Power: Set this to -50 dBm. This will be the lowest Tx Power setting applied when stepping down the power level in the source AP. Currently, -50 dBm is also a hardware limitation; external attenuators can be used if more attenuation is needed.
Max Power: Set this to +15 dBm. This will be the maximum Tx Power setting applied to a target AP when stepping up power. This is also a hardware limitation.
Every: 1000 ms. Power Step and Every are best interpreted together: the power program steps the power up/down in the target/source AP by the Power Step value at each interval.
Estimated Attenuation: This setting is only exposed for devices that don't report an RSSI. For such devices, enter the approximate path loss; the test engine will use this to work out the estimated RSSI based on the changing Tx Power.
Start test.
Result Analysis
When the test starts, the roaming dashboard will begin populating. The roaming dashboard tracks BSSID, Channel, TxPower, and Estimated RSSI, and it automatically calculates Roam Delay and Trial Status. Real-time monitoring stats are also available in parallel for deeper analysis.
Based on the table above, this device roamed successfully in each of the 6 trials. The roam delay was between 30 ms and 60 ms for each trial.
[Chart: Forwarding Rate and RxFlow1 Packet Loss Number over time]
Observation 3 – TxPHYDataRate
The client device's TxPHYDataRate represents the link rate that the device picks for transmission. A number of factors determine this selection, chief among them the quality of previous transmissions.
Looking at the graph below, the TxPHYDataRate remains mostly steady around 38 Mbps, but it also drops to a lower PHY rate quite often. Before analyzing this further, it helps to understand how the test system simulates roaming. The Golden AP simulates roaming by controlling transmit power, which primarily affects traffic in the downstream direction. The roaming test, however, sends traffic only in the upstream direction. Therefore, the simulation shouldn't have much of an impact on the client's TxPHYDataRate, and the fluctuating rate doesn't make sense. This is an issue that requires further investigation.
[Chart: TxDataPHYRate (Mbps) over time]
Troubleshooting and Diagnostics
Symptom Diagnosis Comments
Test Variables
Number of roam trials
Roam frequency
Traffic types
Traffic direction
Conclusion
Overall, the device under test performed well: all the roams were successful, and the roam delay was under 100 ms, which is the benchmark for voice traffic. However, the client's TxPHYDataRate fluctuated quite a bit, which requires further investigation.
Overview
Wireless security covers two key functions: authentication and encryption. Today, most wireless networks operate with some form of 802.1X-based authentication scheme and an 802.11i-based encryption scheme. It is therefore important to measure the impact of these settings on the performance of the device. For instance, a device's performance with and without encryption might vary quite a bit. The same goes for roaming, power save, and other functionality.
Objective
Measure the impact of AES-CCMP encryption on effective performance
Setup
Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP.
2. Select a GDPT test and set up a simple configuration to benchmark throughput: UDP traffic with a 1518-byte packet size in both the Upstream and Downstream directions.
4. Re-run the same test configuration with Security turned ON. In Access Point configuration,
enable security and set a password
Result Analysis
To analyze this test case, we will compare the results of the same test configuration with
security turned ON and OFF.
Note: this step is not necessary, but it’s always good practice to validate that the security setup
was successful.
As you can see, in the upstream direction there is a 5% drop in forwarding rate when security is turned ON.
Let’s continue analyzing forwarding rate with packet loss and L2 frame errors.
The L2 frame errors are unusually high when security is turned ON; this requires deeper investigation at the device level.
When the frame rate is set to 25% of theoretical frame rate, forwarding rate is not
impacted by security (unlike when frame rate is set to 100% of theoretical frame rate).
When security is turned off, with the frame rate set to 100% of theoretical, the L2 frame errors drop drastically from 16% to 1.8%. This directly correlates to a higher forwarding rate downstream.
When security is turned off, for frame rate set to 25% of theoretical, the L2 frame errors
increase drastically from 1.8% to 22%. This is an anomaly: even after re-running this
several times the results are consistent. This is also something to investigate.
Symptom: Authentication failure
Diagnosis: Download the capture file and analyze the call flow between the DUT and the AP

Symptom: Poor forwarding rate
Diagnosis: Check related stats like L2 frame errors and packet loss
Conclusion
Turning ON encryption typically takes up some resources for processing, and it’s bound to have
an impact on the performance of the device. It’s important to understand this impact, and in the
case of this device, we can conclude that a couple of key issues were noticed:
In the downstream direction, at 100% theoretical frame rate, with security turned ON, the
device is unable to keep up with the generated traffic.
In the downstream direction, at 25% theoretical frame rate, with security turned OFF, the
L2 errors were much higher than expected.
Overview
Most devices operate in an ecosystem with several other devices. Performance in this ecosystem depends heavily on the ability to acquire the medium and transmit successfully. This is quite complex, as Wi-Fi uses a distributed coordination mechanism, and with more devices comes more complexity. It's therefore important to model these deployment scenarios in the lab, to understand their impact on device performance and to optimize it.
Objective
Determine the performance of the DUT in the presence of 10 other devices connected to the same Access Point
Setup
Step-by-step Instructions
1. Follow steps 1-8 defined in TC1 to configure a simulated AP. In Step 3, instead of reserving just 1 port, reserve a second IxClient port as well. The IxClient port will be used for simulating ecosystem clients.
2. Select the "Simple Test" from the Devices page. Currently, the Simple test is the only test that supports ecosystem client simulation. As the DUT is being tested for its ability to acquire the medium and transmit, the traffic configuration will target upstream traffic.
3. Start test
Result Analysis
When the test starts, monitoring stats will automatically begin reporting. Two columns will be created: one reporting stats from the DUT and another reporting stats from the simulated IxClients.
[Chart: Forwarding Rate – DUT vs. per-IxClient, over time]
Per the configuration, each client is set to generate 30 Mbps of upstream traffic. The clients have to contend with each other to acquire the medium and transmit. Based on the graph, we can see that the performance of the DUT was much lower than the performance of the simulated IxClients. Each IxClient was able to generate around 20 Mbps of forwarding rate, whereas the DUT's forwarding rate was under 10 Mbps. This is certainly an area for optimization (see the fair-share sketch below).
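A rough fair-share expectation helps put these numbers in context. The sketch below assumes equal airtime, equal PHY rates, and an assumed effective channel capacity; all figures are illustrative, not measured.

```python
# Illustrative only: equal-share expectation for 11 contending clients.
n_clients = 11                 # DUT + 10 simulated IxClients
offered_per_client_mbps = 30.0
usable_channel_mbps = 230.0    # assumed effective MAC-layer channel capacity

fair_share = min(offered_per_client_mbps, usable_channel_mbps / n_clients)
print(f"expected per-client forwarding rate: ~{fair_share:.0f} Mbps")
# ~21 Mbps: near what the IxClients achieved, well above the DUT's <10 Mbps
```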
[Chart: TxDataPHYRate (Mbps) over time]
[Chart: TxDataMCSRate over time]
The DUT kicks off with a high MCS and TxPHYDataRate, but starts trending downward soon after, and ends up at a low MCS of 1, or a TxPHYDataRate of 27 Mbps, for the majority of the test. This low rate had a significant impact on the net result – a low forwarding rate. It takes much longer to transmit the same amount of data at lower rates.
Devices typically adapt their transmit rates to changing channel conditions and frame transmission quality feedback; in this case, the DUT made some aggressive moves while adapting, which resulted in a poor forwarding rate.
Test Variables
Add more simulated APs to the mix
Conclusion
Based on the results, it's clear the DUT has trouble performing well under busy deployment conditions. The simulated IxClients performed much better than the DUT in the same ecosystem. The DUT made some aggressive moves in switching Tx rates while rate adapting, which impacted the forwarding rate.
Overview
For any client device to work well and meet expectations, it needs a solid foundation in the form of a really good transmitter and receiver. The transmitter should be able to transmit high-quality signals at different transmit power settings and different modulation rates. Similarly, the receiver should be able to meet or beat the specs in its ability to successfully receive and decode all the data at different RSSI values and modulation rates. It's very important that both the transmitter and receiver meet specifications, at least under ideal test conditions.
Objective
Validate the quality of the transmitter by measuring Error Vector Magnitude (EVM) at the receiver
when the transmitter is transmitting at different data rates.
Setup
Step-by-step Instructions
5. Follow steps 1-8 defined in TC1 to configure a simulated AP.
8. For the first trial, set up a traffic flow from the DUT to the simulated AP at 1 Mbps.
9. Measure the EVM on the traffic stream on the Simulated AP using the WaveAnalyze
application as shown in the screenshot below (This functionality is only available on the
RFA L1-7 Hardware).
In the example above, the DUT was transmitting at MCS 7 with a 40 MHz channel bandwidth, and the 5-second moving maximum EVM was measured at 3.9%, well within the 4.47% the spec requires for MCS 7.
10. Now repeat the same test, but with a 100 Mbps traffic load from the device under test to the Golden AP, and make the same EVM measurements. The results can be seen below:
In this case the measured EVM was 6.02%, way above the 4.47% spec (see the conversion sketch below).
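The spec limits quoted above are usually stated in dB; a relative constellation error of -27 dB corresponds to the 4.47% figure for MCS 7. A small conversion sketch:

```python
# Converting EVM between dB and percent (standard 20*log10 relationship).
import math

def evm_db_to_percent(evm_db):
    return 100 * 10 ** (evm_db / 20)

def evm_percent_to_db(evm_pct):
    return 20 * math.log10(evm_pct / 100)

print(round(evm_db_to_percent(-27), 2))   # -> 4.47 (%), the MCS 7 limit
print(round(evm_percent_to_db(6.02), 1))  # -> -24.4 dB, missing the -27 dB spec
```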
Result Analysis
The device's transmitter quality clearly degraded substantially when the data rate increased from 1 Mbps to 100 Mbps. Note that the theoretical throughput of the DUT is much higher than 100 Mbps; in theory, it should be well capable of transmitting at 100 Mbps without degradation.
Symptom: High EVM Values
Diagnosis: Poor transmitter quality in certain test conditions. Test antenna placement and the quality of the various radio components under various conditions. Look for problems caused by interference from multiple radios and radio technologies (Wi-Fi, LTE, and Bluetooth) placed too close to each other.
Comments: Testing is done under ideal test conditions with a fully cabled and isolated test setup.
Test Variables
Check results for different Data Rates
Conclusion
Devices should have excellent transmitter quality to avoid a high cost of throughput when transmitting at high data rates, which happens very commonly with applications like streaming HD video.
Overview
Wi-Fi technology is very commonly used in residential and enterprise scenarios to carry delay-sensitive and high-bandwidth real-time voice and video traffic. Since the devices connected to the Access Point are wireless, they can be located at different distances from the AP, and it's important to make sure that users of applications like streaming HD video get a good quality of service at different distances from the Access Point. The previous test was designed to benchmark a DUT's performance over distance against a Golden AP. This test case deals with running the same test using real APs.
Objective
Characterize the performance of device under test against a real AP over various distance
profiles.
Setup
Step-by-step Instructions
1. Launch IxVeriWave Interoperability WaveDevice. The workflow for configuring a test is outlined in the left frame of the GUI – System (chassis and port assignment), End Points (End Points IP configuration), Devices (Devices and Tests), and Analysis (Results Analysis). Please refer to the user guide to familiarize yourself with the Interoperability WaveDevice GUI.
2. Enter the host name or IP address of the chassis that hosts the Interoperability hardware. Click Connect to establish communication with the chassis. The Interoperability hardware includes the following components:
1 Ethernet port
2 Wi-Fi ports (Access Point and Client expert-analysis ports)
1 Access Point
3. After successfully connecting to the chassis, the application will populate the endpoint and monitoring port information. Select the appropriate hardware ports for the current test.
4. Enter the RFMU IP address and click Connect to establish communication with the RFMU. After a successful connection, the application will retrieve the RFMU model, RFMU firmware revision, default attenuation value, and available RFMU bank information. You can reserve RFMU banks by clicking the check-box against a bank number. For SISO testing, one RFMU bank is enough; MIMO testing requires multiple banks.
5. Reserve the endpoint port. The Ethernet endpoint will be used to generate or receive traffic, depending on the traffic direction.
6. Reserve the Access Point and Client monitoring ports. The AP and Client Wi-Fi monitor ports will monitor Tx frames from the Access Point and the device, respectively. Reserving the Access Point monitor port will initiate the scan functionality and discover all available Wi-Fi networks. The available Wi-Fi network information will be shown in table format; choose the test wireless network.
7. Reserve the client monitor port. The client monitor port will be configured on the same Wi-Fi channel as selected on the AP monitor port.
9. Switch to the Devices configuration. Select 'Rate vs Range Test' from Test Type.
Note: You can measure path loss using the AP Tx Power and RSSI value
Result Analysis
Interoperability WaveDevice lets you validate the DUT's rate adaptation algorithm using upstream traffic, and measure receive performance using downstream traffic.
When the test starts executing, monitoring stats will begin populating simultaneously. Monitoring stats are retrieved from the IxVeriWave cards (RFA/WBA3601) as well as the WaveAgent running on the DUT. Retrieved stats are presented in the WaveDevice GUI in two categories:
Station Statistics – Tx stats from the client device to the Access Point and vice versa
Stats are also stored as CSV files for each trial on the host PC. An analysis module called "View Measurement" can analyze results by correlating stats and presenting them as bar or line graphs. RvR tests go by trials: for each trial, the set of key configuration parameters is highlighted as the trial progresses. In this test there are 120 trials. The number of trials is derived from the number of traffic types, traffic directions, attenuation values, and frame rates.
For the sake of brevity, we will pick some key trials for analysis. This should be sufficient to give an idea of how to go about analyzing results from the RvR test.
When each trial starts executing, the first set of stats to look at would be Offered Load and
Forwarding Rate, Packet Loss and Client Data PHY Rate.
The downstream forwarding rate follows a nice curve of decreasing forwarding rate over simulated distance, which is the expected result. In the upstream direction, however, when the client is transmitting to the AP, there is a sharp drop in the forwarding rate at around 25 dB of attenuation, and there are also a number of instances where measurements could not be made because of lost connectivity. This indicates that the client device is not performing well beyond a certain simulated distance while transmitting.
The same effect seen in the forwarding rate chart is also reflected in the packet loss chart. Packet loss shoots up when attenuation goes beyond a certain value. The increase in packet loss means that the AP is not able to properly receive frames from the client at those attenuation values. This could also mean that the client device is not picking an optimal transmit data rate based on the channel conditions.
The above two charts plot the transmit PHY data rate of the AP and the client, respectively, with attenuation on the X-axis.
These charts reinforce the point that the AP did very well in rate adapting from over 600 Mbps to
6 Mbps as the attenuation (simulated distance) increased.
The client, however, started from about 300 Mbps PHY rate and went through several ups and downs as the signal between the AP and the client was attenuated. For example, when the client transmits at 300 Mbps, it uses a high Modulation and Coding Scheme (MCS) rate. The disadvantage of a high MCS rate is that it requires a high signal-to-noise ratio (SNR) to decode the frames at the receiver; the advantage is that the transmitter can send more bits per symbol, achieving higher data rates and higher spectral efficiency. As the attenuation increases, the SNR at the receiver decreases, and this causes the receiver (in this case the AP) to start seeing packet errors when the client transmits at high MCS rates. The AP then cannot acknowledge (ACK) some of the frames from the client, and if the client continues to send data at high MCS rates, the AP will continue to lose frames. So the client needs to drop its MCS rate to adapt to the changing channel conditions, always trying to find the optimal PHY rate that minimizes packet errors and maximizes spectral efficiency. In this case, the client doesn't seem to be doing a proper job of rate adaptation, which causes it to lose more frames in the upstream. This in turn results in a lower forwarding rate in the upstream (see the sketch below).
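The sketch below illustrates the rate-adaptation logic just described, using rough rule-of-thumb minimum-SNR figures for a few 802.11ac MCS indexes. These SNR values are approximations we assume for illustration, not spec limits.

```python
# Illustrative rate-adaptation logic: pick the highest MCS the SNR supports.
# Approximate rule-of-thumb minimum SNR (dB) per MCS; assumed, not from the spec.
APPROX_MIN_SNR_DB = {0: 5, 3: 15, 6: 24, 7: 26, 8: 31, 9: 33}

def highest_usable_mcs(snr_db):
    usable = [m for m, s in APPROX_MIN_SNR_DB.items() if snr_db >= s]
    return max(usable) if usable else None

# As attenuation grows, SNR falls and the client should step its MCS down:
for snr in [35, 25, 16, 6]:
    print(f"SNR {snr} dB -> MCS {highest_usable_mcs(snr)}")
```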
Observation 4 - Aggregation
The above chart shows how the client and the AP aggregate frames as the attenuation increases.
The downstream here represents the AP perspective, and the upstream represents the client
perspective.
The AP seems to be very consistent in the way it aggregates. Interestingly, the AP seems to start with about 30 MPDUs per A-MPDU at low attenuation values, then increase to 64 MPDUs, and then drop back to 30. The increase to 64 in the middle of the test could be because the AP has a lot of lost frames to retransmit and is therefore trying to build large aggregates.
The client, however, seems to be all over the place when it comes to aggregation. This indicates that there are a number of potential issues with the way the client device is buffering and reordering frames to be transmitted.
Test Variables
Traffic Type
Traffic Direction
Attenuation Values
Frame Rate
Conclusion
Based on the results, the AP's performance was on par with expectations; however, the client device exhibited multiple issues in packet loss, throughput, and data PHY rate.
The ups and downs in the PHY rates and aggregation sizes of the packets transmitted by the DUT in the upstream direction indicate that upstream application performance may not meet expectations at certain attenuation levels, or certain distances from the AP.
Contact Ixia
Corporate Headquarters
Ixia Worldwide Headquarters
26601 W. Agoura Rd.
Calabasas, CA 91302
USA
+1 818 871 1800 (International)

Web site: www.ixiacom.com
General: info@ixiacom.com
Investor Relations: ir@ixiacom.com
Training: training@ixiacom.com
Support: support@ixiacom.com
+1 818 871 1800 Option 1 (outside USA)

EMEA
salesemea@ixiacom.com

Asia Pacific
Support-Field-Asia-Pacific@ixiacom.com
FAX +65.6332.0127