Closing Thoughts

As mentioned in the introduction, we've long been faced with the problem of seeing subjective differences in RF performance between devices while lacking the data and repeatable tests to back them up. In my time at AnandTech I've always worked to improve our reviews, and RF testing has been one of the major areas where I've sought to improve our benchmarks. While we've had some basic tests, we've never gone into this area with the same depth and breadth that we have with many other components of the stack. As connectivity is arguably the most important thing in computing, it was evident that we had to tackle this subject at some point.

Our first leap into this area is the addition of Ixia's WaveDevice to our test suite. From the start, this system was conceived as an all-in-one chassis that could provide data to prove or disprove subjective observation, bring repeatable testing to seemingly one-off edge cases, and do so at scale for Wi-Fi. It turns out that this system is quite powerful, and it can show how a device performs in tests that directly correlate with user experience. These tests include throughput with respect to range, roaming latency, and ecosystem performance. The rate versus range test shows the quality of the RF front-end and the ability of the modem to properly encode and decode data in the face of decreasing SNR. The roaming latency test shows how well a device can detect and react to changing reception conditions. The ecosystem performance test shows how well a device can acquire the channel without conflicting with other traffic.

In the case of the iPad Pro and Pixel C, we found that WaveDevice was able to surface a number of interesting data points from both an end user perspective and an engineering perspective. With the rate vs range test, it was possible to clearly see how well a device would hold up in response to worsening reception from a user experience perspective. From an engineering perspective, it was possible to gather evidence pointing toward the root cause of the Google Pixel C's poor Wi-Fi performance by using WaveAnalyze and an RF analysis blade in WaveDevice. While conclusively determining the root cause is still beyond what we can do with limited information on the hardware, an OEM would be able to act on the information provided by WaveDevice to improve their product before it reaches mass production.

In addition to the rate vs range test, the roaming latency test was quite illuminating. While root cause analysis is more difficult and best left to actual engineers, it's quite obvious that the iPad Pro passed this test with flying colors while the Pixel C showed some serious deficiencies. If you regularly encounter large Wi-Fi networks with multiple access points under a single SSID, such as eduroam, the Pixel C will be an exercise in frustration if you're hoping to keep a working Wi-Fi connection on the move. Even when the device roams successfully, the time it spends moving from one access point to the next is long enough on average to cause noticeable connection interruptions. When it doesn't roam successfully, it seems to get stuck on a single access point and effectively drops off the network without manual intervention, or it has to re-authenticate and acquire a new IP address, which is guaranteed to cause most in-flight traffic to be dropped.

Of course, while this data is interesting, it's not very helpful without an understanding of how Wi-Fi works. Starting from the physical layer, pretty much every modern radio in a mobile device uses a superheterodyne architecture, which converts the signal to an intermediate frequency before stepping down to baseband, where signal processing is easier. There are a lot of bits and pieces here, but the key component is the local oscillator, which enables the successive mixing stages in which a baseband signal is moved up onto a carrier or a carrier is brought back down to baseband. Everything else is essentially a collection of complicated circuits designed to tune the RF chain to oscillate at the correct frequencies and to boost a signal's power by multiple orders of magnitude from its baseband state.
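
To make the mixing idea a little more concrete, here is a rough numpy sketch of a single downconversion stage. The sample rate, carrier, and local oscillator frequencies are arbitrary illustrative values rather than anything tied to Wi-Fi hardware, and a real receiver would use proper filters instead of a crude moving average, but the principle of multiplying against a local oscillator to shift a signal down to an intermediate frequency is the same.

```python
import numpy as np

fs = 1e6           # sample rate (Hz) -- illustrative value, not from the article
f_carrier = 200e3  # incoming carrier frequency (illustrative)
f_lo = 180e3       # local oscillator frequency (illustrative)
t = np.arange(0, 1e-3, 1 / fs)

# A 2 kHz "baseband" tone riding on the carrier (simple AM for illustration).
baseband = np.cos(2 * np.pi * 2e3 * t)
received = baseband * np.cos(2 * np.pi * f_carrier * t)

# Mixing: multiply by the local oscillator. This produces products at
# f_carrier - f_lo (the 20 kHz intermediate frequency) and f_carrier + f_lo.
mixed = received * np.cos(2 * np.pi * f_lo * t)

# A crude low-pass filter (moving average) keeps mostly the low IF products.
kernel = np.ones(25) / 25
if_signal = np.convolve(mixed, kernel, mode="same")

spectrum = np.abs(np.fft.rfft(if_signal))
freqs = np.fft.rfftfreq(len(if_signal), 1 / fs)
# Expect the dominant energy near the ~20 kHz IF (the 2 kHz tone appears at 18/22 kHz).
print("strongest component near", freqs[np.argmax(spectrum)], "Hz")
```
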

With this radio as the foundation, we can then focus on modulation and coding schemes. Wi-Fi uses two primary methods to push throughput as close to the Shannon limit as possible: Quadrature Amplitude Modulation (QAM) and Orthogonal Frequency Division Multiplexing (OFDM). OFDM is basically a method of slicing up spectrum into many narrow subcarriers. When done correctly, this brings a number of benefits from a design perspective, such as simpler radio design, high spectral efficiency, simpler signal processing algorithms, and improved interference immunity.
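
As a rough sketch of how this works in practice, the snippet below builds a single OFDM symbol with an inverse FFT and recovers the per-subcarrier values with a forward FFT. The 64 subcarriers and 16-sample cyclic prefix are chosen for illustration; real 802.11 symbols add pilots, guard subcarriers, and specific timing that are omitted here.

```python
import numpy as np

n_subcarriers = 64   # illustrative; 20 MHz 802.11a/g/n also happens to use a 64-point FFT
rng = np.random.default_rng(0)

# One complex (QAM) value per subcarrier -- here simple QPSK points.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_subcarriers)

# The IFFT turns the frequency-domain subcarrier values into one time-domain
# OFDM symbol; orthogonality comes from spacing subcarriers by 1 / symbol time.
time_domain = np.fft.ifft(symbols)

# Prepend a cyclic prefix (guard interval) to absorb multipath echoes.
cp_len = 16
tx_symbol = np.concatenate([time_domain[-cp_len:], time_domain])

# Receiver: drop the cyclic prefix and FFT back to per-subcarrier values.
rx = np.fft.fft(tx_symbol[cp_len:])
print("max recovery error:", np.max(np.abs(rx - symbols)))  # ~0 on a clean channel
```
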

If OFDM can be thought of as slicing the channel into frequency slices, QAM is what is applied to each slice. By varying both the phase and the amplitude of a subcarrier, it becomes possible to represent multiple bits on a single slice per symbol. In the case of Wi-Fi and LTE, we're looking at up to 8 bits per "slice", which means that there are 256 potential combinations of phase and amplitude to distinguish. However, noise limits the ability of a receiver to differentiate between these combinations, so depending upon the channel conditions it may be necessary to fall back to a lower-order constellation with more distance between states to avoid data corruption.
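
The sketch below illustrates that trade-off with a 16-QAM constellation (4 bits per symbol, kept smaller than Wi-Fi's 256-QAM so the numbers stay readable): symbols are mapped to amplitude/phase points, noise is added, and the receiver picks the nearest constellation point. As the noise grows, neighboring points get confused, which is exactly why lower SNR forces a fallback to lower-order modulation.

```python
import numpy as np

# 16-QAM: 4 bits per symbol, amplitude levels -3, -1, +1, +3 on each axis.
levels = np.array([-3, -1, 1, 3])
constellation = np.array([complex(i, q) for i in levels for q in levels])

rng = np.random.default_rng(1)
tx = rng.choice(constellation, size=10_000)

def symbol_error_rate(noise_std):
    # Add complex Gaussian noise, then demap by choosing the nearest constellation point.
    noisy = tx + rng.normal(0, noise_std, tx.shape) + 1j * rng.normal(0, noise_std, tx.shape)
    decided = constellation[np.argmin(np.abs(noisy[:, None] - constellation[None, :]), axis=1)]
    return np.mean(decided != tx)

for noise_std in (0.1, 0.5, 1.0):
    print(f"noise sigma {noise_std}: symbol error rate {symbol_error_rate(noise_std):.3f}")
```
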

The final method worth noting that can improve performance is MIMO. At its heart, MIMO is a form of parallelism to improve bandwidth and/or range. By exploiting the fact that signals will often have multiple propagation paths, it becomes possible to use these multiple paths to send multiple streams of data simultaneously. When taken to its logical conclusion of MU-MIMO, it’s possible to see additional throughput advantages as the device and access point can utilize beamforming to focus transmissions to reach a specific location rather than transmitting in all directions.
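
Below is a heavily simplified picture of spatial multiplexing, using a made-up 2x2 channel matrix rather than anything measured from either tablet: two independent streams are transmitted at once, each receive antenna hears a different mixture of them, and the receiver separates the streams by inverting its channel estimate (zero-forcing, the simplest possible MIMO receiver).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent data streams, transmitted from two antennas simultaneously (QPSK points).
streams = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(2, 100))

# A fabricated 2x2 channel: each receive antenna hears a different mix of
# both transmit antennas thanks to distinct propagation paths.
H = np.array([[0.9 + 0.1j, 0.3 - 0.2j],
              [0.2 + 0.4j, 0.8 - 0.1j]])

noise = 0.05 * (rng.normal(size=(2, 100)) + 1j * rng.normal(size=(2, 100)))
received = H @ streams + noise

# Zero-forcing receiver: undo the channel mixing with its (pseudo)inverse.
recovered = np.linalg.pinv(H) @ received
print("mean stream error:", np.mean(np.abs(recovered - streams)))
```
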

All of these aspects taken together form the physical layer, which is best understood as the base hardware mechanics. The next layer up is the data link layer, which abstracts away the underlying mechanics of a given networking technology so that the layers further up the stack don't have to be tailored to any one of them. For the purposes of our reviews and understanding Wi-Fi, the key area of interest here is the method used to emulate a full-duplex network with a half-duplex technology. Full duplex in this case means simultaneous transmission and reception, while half duplex only allows for transmission or reception, not both at the same time.

In the case of Wi-Fi, this emulation method is known as Carrier Sense Multiple Access with Collision Avoidance, or CSMA/CA. At a high level, devices listen to the channel and wait until it's clear before sending the access point a request to send, at which point the access point must respond with a clear to send before the device can transmit on the channel.
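
The toy simulation below captures the listen-before-talk and random backoff behavior at the heart of CSMA/CA; the timing constants and contention window are invented for illustration, and the RTS/CTS handshake, ACKs, and exponential backoff of real 802.11 are left out.

```python
import random

random.seed(3)

SLOT_TIME_US = 9      # illustrative slot time
FRAME_TIME_US = 300   # time the channel stays busy per frame (made up)
N_STATIONS = 4

# Each station starts with a random backoff counter (contention window of 16 slots).
backoff = [random.randrange(16) for _ in range(N_STATIONS)]
transmissions = []
now_us = 0

for _ in range(20):                       # simulate 20 channel-access rounds
    # Channel is idle: every waiting station counts down one slot at a time.
    while min(backoff) > 0:
        backoff = [b - 1 for b in backoff]
        now_us += SLOT_TIME_US

    winners = [i for i, b in enumerate(backoff) if b == 0]
    if len(winners) == 1:
        transmissions.append((now_us, winners[0]))   # successful transmission
    # If two stations reach zero together they collide and both frames are lost.
    now_us += FRAME_TIME_US

    # Winners (and colliders) draw a fresh random backoff for the next round;
    # everyone else resumes counting down from where they froze.
    for i in winners:
        backoff[i] = random.randrange(16)

print(f"{len(transmissions)} successful transmissions in {now_us} microseconds")
```
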

In addition to multiplexing, the MAC layer determines how to control the physical layer, which includes selecting modulation and coding schemes and managing the radio's power states. Due to the nature of Wi-Fi, the device has to rely on a number of heuristics to decide which power save mechanisms and link rates to use, rather than the network-directed algorithms found in traditional cellular networks. Of course, the MAC layer is also responsible for addressing traffic on the local network and transparently handling errors or corruption, but this is beyond the scope of this article.
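
To illustrate what such a link rate heuristic might look like, here is a deliberately simplified toy controller: it drops to a more robust modulation and coding scheme after a run of lost frames and probes a faster one after a run of successes. This is an invented example, not the algorithm used by either device's Wi-Fi driver (real drivers typically use something closer to Minstrel or vendor-specific logic), and the rate table is a hypothetical subset.

```python
# Hypothetical MCS table: (name, PHY rate in Mbps). Real 802.11n/ac tables are much larger.
MCS_TABLE = [("BPSK 1/2", 7.2), ("QPSK 1/2", 14.4), ("16-QAM 1/2", 28.9),
             ("64-QAM 2/3", 57.8), ("256-QAM 3/4", 86.7)]

class ToyRateController:
    """Step down after consecutive failures, probe upward after consecutive successes."""

    def __init__(self):
        self.index = len(MCS_TABLE) - 1   # start optimistic at the top rate
        self.failures = 0
        self.successes = 0

    def on_frame_result(self, acked: bool):
        if acked:
            self.successes += 1
            self.failures = 0
            if self.successes >= 10 and self.index < len(MCS_TABLE) - 1:
                self.index += 1           # conditions look good: try a faster rate
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= 3 and self.index > 0:
                self.index -= 1           # repeated losses: fall back to a sturdier rate
                self.failures = 0
        return MCS_TABLE[self.index]

# Example: a burst of lost ACKs drags the rate down, clean ACKs bring it back up.
ctrl = ToyRateController()
for acked in [False, False, False] + [True] * 10:
    name, rate = ctrl.on_frame_result(acked)
print("current rate:", name, rate, "Mbps")
```
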

Overall, reaching this level of understanding of both theory and practice has been one of the biggest undertakings that we've ever had at AnandTech. As I mentioned earlier, wireless testing has been one of the major frontiers that we had yet to fully explore. It turns out that digital logic and computer science don't help much with understanding RF, and as a result something that might have seemed simple based on our iPerf tests has exploded into months of research and experimentation. I'd like to thank Ixia for providing their significant expertise and equipment. More importantly, I'd like to thank all of our readers who have provided the drive to make all of this possible. I look forward to seeing what we couldn't see before.

Comments

  • Ryan Smith - Friday, March 25, 2016 - link

    Correct, it's the 12.9" iPad Pro.
  • plext0r - Friday, March 25, 2016 - link

    Excellent article! Thanks for the write-up using professional-level WiFi testing.
  • jardows2 - Friday, March 25, 2016 - link

    Very interested to see this in use. In my work, I daily have to deal with people's wi-fi problems, and to see some of the insights this tool can provide will be very enlightening. Trying to fix people's wi-fi over blind phone support is an exercise in frustration!
  • Ravn - Friday, March 25, 2016 - link

    Excellent stuff Anandtech. WiFi is usually described in specifications as: WiFi: Yes, Bands: 2.4/5GHz a/b/c/g/n, Maximum throughput: xxx Mbps. And that says just about nothing about the quality of the WiFi unit. Finally some relevant RF data that describes how the WiFi performs in real life. Thank You!
    An additional test that could broaden the relevance of the WiFi testing could be how the WiFi unit performs with a lot of Bluetooth units in the same area. BT's nasty frequency hopping nature in the WiFi band results in a lot of problems in WiFi setups. How the WiFi unit handles this could be very interesting to include.
  • zodiacfml - Friday, March 25, 2016 - link

    Awesome and powerful testing machine you have there. One "Small" Wi-Fi testing website that I read regularly would be interested too. Yet it's so powerful that only electrical engineers would use most of its functions.

    If I'm correct, you posted before that this device can also test for MU-MIMO performance without too much difficulty. Wi-Fi AP and router reviews on Anandtech in the future wouldn't hurt? :)

    On second thought, I think there is a brighter future for 802.11ad than, say, MU-MIMO. As long as it is line of sight with no obstruction, 1 Gbps is easy for this standard.
  • name99 - Friday, March 25, 2016 - link

    You've left out some interesting aspects of the physical layer.
    An essential part of this layer is the Forward Error Correction (FEC), which augments the transmitted data with additional data in such a way that if a few bits in the stream are in error, they can be recreated from the remaining bits (think parity on steroids).

    These error correcting codes have been improved over the years in successive specs as it's become feasible to throw more computation at them, with the current state of the art being so-called LDPC (low-density parity-check) codes. [These same codes are currently used by a lot of flash vendors, but have a theoretical problem (trapping sets) that limits their effectiveness above certain noise levels, so better alternatives have been proposed (but as far as I know are not yet in production) for flash, and likely will follow in the next big WiFi spec.]

    The specifically interesting thing about these codes, in the context of this article, is that it's not THAT useful to simply say that a chipset implements LDPC (or some other FEC). Implementing the encoding is a fixed algorithm that you can't really get wrong, but there are many ways of implementing a decoder (in other words, ways of attempting to construct the correct data stream from a corrupted data stream). These methods, of course, differ in the power they require, how much computation they utilize, how long they take to correct errors, and how complicated they are to implement.
    The real difference in performance of different chipsets (at the L1 level) is in how well their FEC decoders work. That's where the magic lives.

    At the next level up (the MAC level) it is crazy how much performance is lost because of the decentralized/unco-ordinated nature of the media access protocol. (This is the CSMA/CA that the article mentions.)
    Even in the simplest real world case of one base station and one device, you're losing 35% or so of your goodput to the MAC protocol, and it rapidly drops to 50% and then worse as you add just a few devices. The successive specs have tried various schemes (primarily using the logical equivalent of very long packets) to limit the damage, but all this has done is really keep things standing still so that the situation in each successive spec is not worse than in the previous spec. LTE can be vastly more efficient because it provides for a central intelligence that co-ordinates all devices and so does not have to waste time on guard intervals where everyone is looking around making sure that no-one else is talking or getting ready to talk.

    I don't understand why 802.11 has been so slow to adopt this model; putting the controlling intelligence in the base station (or base station equivalent in a peer network) and having every other device act as a slave. They're going to HAVE to go there at some point anyway --- they've pretty much run out of every other performance option --- and avoiding doing so in 802.11ac just means five more years of sub-optimal performance.

    [You can see this in the iPad Pro numbers. The BCM4355 supports 80MHz channels, and so a maximum PHY rate of 866Mbps. But the best you see is just under 600Mbps (and that performance is only available when transmitting extremely large packets in only one direction); the gap between this and the PHY rate is because of time wasted doing nothing but sitting around following the MAC protocol. This disappointing goodput compared to PHY rate is not due to the mythical "interference" that is blamed for every random radio issue; it is due to the design of the MAC protocol.

    You also see performance fall as the received signal strength falls. This performance drop is ALSO not due to interference, unless you want to make that word meaningless. The correct term for this performance drop is that it is the result of noise, or more precisely a falling SNR (signal to noise ratio). As the signal to noise ratio falls, you can pack fewer bits into each fragment of time (ie you have to switch from using QAM256 down to QAM64 down to QAM16) and you have to use more of those bits as error correction bits rather than as actual data bits.]
  • Denbo1991 - Friday, March 25, 2016 - link

    802.11ad and 802.11ax will have some centralized scheduling features to cut down on the overhead you talk about, especially in the context of many devices on one AP or many overlapping APs.
  • zodiacfml - Saturday, March 26, 2016 - link

    There's a lot more to talk about here than is possible.
  • alanore - Saturday, March 26, 2016 - link

    Definitely some good points that should be covered. It might be worth covering how older low speed devices can consume a large proportion of the air time and thus hurt performance.

    Also in the article it might be worth calling out spatial streams and how they affect performance. In the article it was an apples-to-apples comparison (2x2 vs 2x2), but I guess soon we might see a poorly performing 3x3 laptop getting similar results to the iPad Pro.
  • Ratman6161 - Monday, March 28, 2016 - link

    Interesting, but... for most of us does it really mean anything? So an iPad can achieve 600 Mbps throughput. How does this help me when wireless is used nearly exclusively for accessing the internet and my ISP provides 60 Mbps? For home use, I'm more interested in how well things are working when I have my Android TV streaming an HD Netflix movie while in the other room my wife is doing the same on Amazon and we are also both web surfing on either a tablet or a laptop... and that's more about the router than the individual devices, isn't it?

    Even at the office, no one is doing anything that requires 600 Mbps or even the 300 of the Pixel C (and the connection in/out of our building is only 20 Mbps). It's more a question of how many devices we can get connected simultaneously at a reasonable/usable speed.
