Wi-Fi Performance: iPad Pro and Pixel C

Now that we’ve gone over the basics of how Wi-Fi works, we can get to the true focus of today's article: the results. As this article is something of a preview of what we're working on behind the scenes, I’m still figuring out how I want to condense and present these results for future articles, so for now we’ll be looking at raw data generated by the WaveDevice software. As you might guess, a number of differences in Wi-Fi performance become readily apparent once you start running tests with WaveDevice.

In the interest of setting a baseline for performance, I elected to compare two tablets that I had on hand that could run the WaveAgent software. Apple's 12.9" iPad Pro uses a Broadcom BCM4355 solution with 2x2 802.11ac, while Google's Pixel C also uses a 2x2 802.11ac solution. Judging by the Pixel C's system firmware, its Wi-Fi chip appears to be a BCM4350, so at a high level there really shouldn’t be a huge delta between the iPad Pro and Pixel C.

To start off we can look at the range vs rate test, which is designed to show how the device performs as access point transmit power decreases, simulating increasing distance from the AP. In the case of the iPad Pro, at 15 dBm transmit power the device reported -33 dBm RSSI (received signal strength indicator). It's important to note that the IEEE 802.11 standard doesn't really define RSSI beyond a unitless value, but in the cases we're interested in, RSSI effectively reports received power in dBm. dBm (decibels relative to one milliwatt) is used rather than watts because received power spans many orders of magnitude between good and poor reception. In the interest of focusing on the rate at which throughput decreases, the test swept transmit power from 0 dBm down to -50 dBm in 5 dB steps. With this sort of data we can see the throughput that the device sustains for a given RSSI level and for a given transmit power. There are a number of other statistics that can be examined here as previously discussed, but the main takeaway is that the iPad Pro is capable of sustaining 600 Mbps, with throughput approaching 0 Mbps at -45 dBm transmit power. Given that we’re looking at a ~47 dB path loss from the transmitter to the receiver, this means that the iPad Pro sustains non-zero throughput all the way out to roughly -90 to -95 dBm RSSI.
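To make the dBm arithmetic here concrete, below is a minimal sketch of the relationship between transmit power, path loss, and RSSI. This is illustrative Python, not part of the WaveDevice tooling; the ~47 dB path loss is simply the figure estimated above.

```python
def dbm_to_mw(p_dbm: float) -> float:
    """Convert power in dBm (decibels relative to 1 mW) to milliwatts."""
    return 10 ** (p_dbm / 10)

def expected_rssi(tx_power_dbm: float, path_loss_db: float) -> float:
    """Received power (RSSI) is transmit power minus path loss, in dB terms."""
    return tx_power_dbm - path_loss_db

PATH_LOSS_DB = 47  # approximate chamber path loss estimated above

print(expected_rssi(15, PATH_LOSS_DB))   # -32 dBm, close to the measured -33 dBm
print(expected_rssi(-45, PATH_LOSS_DB))  # -92 dBm, where throughput approaches zero
print(dbm_to_mw(15))                     # ~31.6 mW
print(dbm_to_mw(-92))                    # ~6.3e-10 mW; hence the logarithmic unit
```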

If you think back to the explanation of the physical layer of Wi-Fi, the reason this matters is that received power is not quite the same thing as signal-to-noise ratio (SNR). While high received power does improve your SNR, if your receiver has a great deal of phase noise to begin with from poor amplifier design or some other issue in the chain, your throughput is going to fall flat even if the device can transmit and receive effectively to and from the access point.
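To illustrate why a strong signal can't rescue a noisy radio, the rough sketch below treats the receiver's own distortion (expressed as an EVM percentage, a metric discussed further down) as a noise floor that caps the effective SNR. The specific numbers are assumptions for illustration, not measurements from this test.

```python
import math

def effective_snr_db(rx_power_dbm: float, noise_floor_dbm: float,
                     implementation_evm_pct: float) -> float:
    """Combine channel noise with a receiver's own implementation noise.

    Even with a very strong signal, the receiver cannot resolve the
    constellation better than its own distortion (EVM) allows.
    """
    channel_snr_db = rx_power_dbm - noise_floor_dbm
    channel_noise = 10 ** (-channel_snr_db / 10)      # noise power, with signal power = 1
    impl_noise = (implementation_evm_pct / 100) ** 2  # RMS EVM as relative noise power
    return -10 * math.log10(channel_noise + impl_noise)

# Assumed -95 dBm thermal noise floor, -33 dBm received power:
print(effective_snr_db(-33, -95, 2.0))  # clean radio: ~34 dB effective SNR
print(effective_snr_db(-33, -95, 4.0))  # noisy radio: ~28 dB, despite identical RSSI
```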

For the Pixel C, things aren't quite as rosy. In this case I recorded a -43 dBm RSSI at 15 dBm transmit power, which is already quite concerning. I attempt to maximize RSSI and throughput in my tests, so it's likely that the Pixel C either has a highly directional antenna, insufficient antenna gain, or an impedance mismatch somewhere in the chain leading to significant signal reflection. These are all unquestionably hardware problems that are unlikely to be solved by software changes. The Pixel C is also highly unstable as it approaches the edge of reception: the test above stops at -30 dBm transmit power because attempting to go to -35 or -40 dBm resulted in the device disconnecting from the network. Resolving this required restarting WaveAgent and WaveDevice, deleting all saved SSIDs from the Pixel C, and changing the SSID of the test AP, so to have any usable results at all it was necessary to adjust the test parameters for the Pixel C.

Putting these issues aside, it's obvious that the Pixel C is underperforming here no matter how we slice it. Maximum throughput is well below what the iPad Pro can achieve even at short range, and the same is true when compensating for the delta in RSSI. Equalizing RSSI, -30 dBm transmit power for the Google Pixel C corresponds to -40 dBm transmit power for the iPad Pro, and even then the iPad Pro sustains roughly 50% higher throughput at the extremes of reception. Equalizing RSSI also means we're still ignoring the antenna and other portions of the RF front-end, so it's entirely possible that the delta is even worse given that I couldn't achieve anywhere near -30 dBm RSSI on the Google Pixel C regardless of device orientation within the RF isolation chamber.

As mentioned previously, WaveDevice allows for a deeper look at what’s going on behind higher-level failures. Out of curiosity, I decided to run a simple upload throughput test at 15 dBm transmit power at the Pixel C’s highest possible throughput rate, and I found that it’s basically unable to use the highest-throughput 256 QAM modulation because there’s too much noise to reliably distinguish adjacent points on the constellation. Even when it can use MCS 9 (256 QAM with rate-5/6 coding, i.e. only one redundant bit for every five data bits) the Pixel C averages an EVM (error vector magnitude) of roughly 3-4%, while the iPad Pro was closer to 1-2% at 256 QAM. And though 3-4% might sound like a small value, 256 QAM leaves very little room for error. I regularly saw drops down to MCS 7 (64 QAM, also rate-5/6 coding) even in ideal cases, which resulted in noticeable drops in throughput during this simple test. I'm hesitant to go any further in the analysis here since we don't know enough about the design of the Pixel C's Wi-Fi subsystem, but an OEM would be able to use this information to start searching for potential sources of phase noise. It may be that we're looking at something like improper impedance matching somewhere in the system, amplifiers that are either poorly selected or poorly integrated, and/or a phase-locked loop somewhere that isn’t set up or designed properly for this task.
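To put those EVM figures in perspective, here's a short sketch of a standard result: the spacing between adjacent points of a square QAM constellation, normalized to unit average power. It's purely illustrative arithmetic, not a WaveDevice output.

```python
import math

def qam_decision_distance(m: int) -> float:
    """Half the spacing between adjacent points of a square M-QAM
    constellation with unit average symbol power.

    Adjacent points are sqrt(6 / (M - 1)) apart; noise exceeding half
    that distance pushes a symbol into the wrong decision region.
    """
    return math.sqrt(6 / (m - 1)) / 2

for m in (16, 64, 256):
    print(f"{m:>3}-QAM: error margin ~{qam_decision_distance(m) * 100:.1f}% of RMS signal")

# 16-QAM: ~31.6%   64-QAM: ~15.4%   256-QAM: ~7.7%
# A 3-4% RMS EVM consumes roughly half of 256 QAM's ~7.7% margin, while
# the iPad Pro's 1-2% leaves far more headroom, hence the downshifts to MCS 7.
```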

Moving on to the next test of interest, we can take a look at how these two devices perform in our roaming test. While I'm still experimenting with this for use in full reviews, for now I set up a test with a 10 Mbps load and a starting transmit power of 10 dBm, stepping down 3 dB every second until reaching the -40 dBm noise floor. 64 access points are used for this test, all on the same channel, which should make things easier as the device doesn’t need to scan every channel to find the next access point to jump to. This is a fairly aggressive test; I’ve run it on a few devices and nothing is 100% perfect here, although some devices are clearly better than others.

In the case of the iPad Pro, we see a median roam delay of 42 ms, which is reasonably respectable given the 10 Mbps traffic load and fairly aggressive transmit power changes. The Google Pixel C seriously falls short here despite using a similar Wi-Fi chipset: a number of times the Pixel C dropped from the network entirely and was unable to complete the test. Even when it didn’t fall off the network, the median roam delay was 682 ms, which is pretty much guaranteed to cause some kind of disruption to VoIP calls, video conferencing, and similar latency-sensitive applications. Ultimately the issue here is that roaming is a very common scenario a device will need to handle, as any office, school, or convention center is pretty much guaranteed to have multiple access points with the same SSID and authentication. There’s also the strong possibility that each access point is on a different channel, which would only increase roam latency figures relative to what I was able to test.


[Chart: Pixel C Roam Latency]
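For a back-of-the-envelope sense of what these roam delays mean for a real traffic stream (my arithmetic, not a WaveDevice output), consider how much data arrives during each connectivity gap at this test's 10 Mbps load:

```python
def data_during_gap_kb(load_mbps: float, gap_ms: float) -> float:
    """Kilobytes of data arriving while the device is between access points."""
    return load_mbps * 1e6 * (gap_ms / 1000) / 8 / 1000

print(data_during_gap_kb(10, 42))   # iPad Pro: ~52.5 KB buffered or lost per roam
print(data_during_gap_kb(10, 682))  # Pixel C: ~852.5 KB per roam
# At a typical 20 ms VoIP packet cadence, a 682 ms gap drops ~34 consecutive
# voice frames (clearly audible), while a 42 ms gap drops only ~2.
```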

Needless to say, WaveDevice is an incredibly powerful system when it comes to providing insight into the parts of Wi-Fi that affect the user experience. Observant readers might have noticed that there's no Traffic Mix test here; it turns out that neither of the two devices we're looking at has particular issues with that test. However, given the data Ixia has shared on devices they've tested, it's likely that the need for Traffic Mix testing will appear sooner rather than later, so for full reviews we'll be including this test as well for completeness.

Comments

  • Ryan Smith - Friday, March 25, 2016 - link

    Correct, it's the 12.9" iPad Pro.
  • plext0r - Friday, March 25, 2016 - link

    Excellent article! Thanks for the write-up using professional-level WiFi testing.
  • jardows2 - Friday, March 25, 2016 - link

    Very interested to see this in use. In my work, I daily have to deal with people's wi-fi problems, and to see some of the insights this tool can provide will be very enlightening. Trying to fix people's wi-fi over blind phone support is an exercise in frustration!
  • Ravn - Friday, March 25, 2016 - link

    Excellent stuff Anandtech. WiFi is usually described in specifications as: WiFi: Yes, Bands: 2.4/5GHz a/b/c/g/n, Maximum throughput: xxx Mbps. And that says just about nothing about the quality of the WiFi unit. Finally some relevant RF data that describes how the WiFi performs in real life. Thank you!
    An additional test that could broaden the relevance of the WiFi testing could be how the WiFi unit performs with a lot of Bluetooth units in the same area. BT's nasty frequency-hopping nature across the whole WiFi band results in a lot of problems in WiFi setups. How WiFi units handle this could be very interesting to include.
  • zodiacfml - Friday, March 25, 2016 - link

    Awesome and powerful testing machine you have there. One "small" Wi-Fi testing website that I read regularly would be interested too. Yet it's so powerful that only electrical engineers would use most of its functions.

    If I'm correct, you posted before that this device can also test MU-MIMO performance without too much difficulty. Wi-Fi AP and router reviews on Anandtech in the future wouldn't hurt? :)

    On second thought, I think there is a brighter future for 802.11ad than, say, MU-MIMO. As long as it is line of sight or there is no obstruction, 1 Gbps is easy for this standard.
  • name99 - Friday, March 25, 2016 - link

    You've left out some interesting aspects of the physical layer.
    An essential part of this layer is the Forward Error Correction (FEC), which augments the transmitted data with additional data in such a way that if a few bits in the stream are in error, they can be recreated from the remaining bits (think parity on steroids).

    These error correcting codes have been improved over the years in successive specs as it's become feasible to throw more computation at them, with the current state of the art being so-called LDPC (low-density parity-check) codes. [These same codes are currently used by a lot of flash vendors, but have a theoretical problem (trapping sets) that limits their effectiveness above certain noise levels, so better alternatives have been proposed (but as far as I know are not yet in production) for flash, and will likely follow in the next big WiFi spec.]

    The specifically interesting thing about these codes, in the context of this article, is that it's not THAT useful to simply say that a chipset implements LDPC (or some other FEC). Implementing the encoding is a fixed algorithm that you can't really get wrong, but there are many ways of implementing a decoder (in other words, ways of attempting to construct the correct data stream from a corrupted data stream). These methods, of course, differ in the power they require, how much computation they utilize, how long they take to correct errors, and how complicated they are to implement.
    The real difference in performance of different chipsets (at the L1 level) is in how well their FEC decoders work. That's where the magic lives.

    At the next level up (the MAC level) it is crazy how much performance is lost because of the decentralized/uncoordinated nature of the media access protocol. (This is the CSMA/CA that the article mentions.)
    Even in the simplest real-world case of one base station and one device, you're losing 35% or so of your goodput to the MAC protocol, and that loss rapidly grows to 50% and then worse as you add just a few devices. The successive specs have tried various schemes (primarily using the logical equivalent of very long packets) to limit the damage, but all this has really done is keep things standing still so that the situation in each successive spec is not worse than in the previous spec. LTE can be vastly more efficient because it provides for a central intelligence that coordinates all devices and so does not have to waste time on guard intervals where everyone is looking around making sure that no-one else is talking or getting ready to talk.

    I don't understand why 802.11 has been so slow to adopt this model: putting the controlling intelligence in the base station (or base station equivalent in a peer network) and having every other device act as a slave. They're going to HAVE to go there at some point anyway --- they've pretty much run out of every other performance option --- and avoiding doing so in 802.11ac just means five more years of sub-optimal performance.

    [You can see this in the iPad Pro numbers. The BCM4355 supports 80MHz channels, and so a maximum PHY rate of 866Mbps, but the best you see is just under 600Mbps (and that performance is only available when transmitting extremely large packets in only one direction); the gap between this and the PHY rate is because of time wasted doing nothing but sitting around following the MAC protocol. This disappointing goodput compared to PHY rate is not due to the mythical "interference" that is blamed for every random radio issue; it is due to the design of the MAC protocol.

    You also see performance fall as the received signal strength falls. This performance drop is ALSO not due to interference, unless you want to make that word meaningless. The correct term for this performance drop is that it is the result of noise, or more precisely a falling SNR (signal to noise ratio). As the signal to noise ratio falls, you can pack fewer bits into each fragment of time (i.e. you have to switch from using QAM256 down to QAM64 down to QAM16) and you have to use more of those bits as error correction bits rather than as actual data bits.]
  • Denbo1991 - Friday, March 25, 2016 - link

    802.11ad and 802.11ax will have some centralized scheduling features to cut down on the overhead you talk about, especially in the context of many devices on one AP or many overlapping APs.
  • zodiacfml - Saturday, March 26, 2016 - link

    There's way more to talk about here than is possible to cover.
  • alanore - Saturday, March 26, 2016 - link

    Definitely some good points that should be covered. It might be worth covering how older, low-speed devices can consume a large proportion of the airtime and thus hurt performance.

    Also, in the article it might be worth calling out spatial streams and how they affect performance. In the article it was an apples-to-apples comparison (2x2 vs. 2x2), but I guess soon we might see a poorly performing 3x3 laptop getting similar results to the iPad Pro.
  • Ratman6161 - Monday, March 28, 2016 - link

    Interesting, but... for most of us does it really mean anything? So an iPad can achieve 600 Mbps throughput. How does this help me when wireless is used nearly exclusively for accessing the internet and my ISP provides 60 Mbps? For home use, I'm more interested in how well things are working when I have my Android TV streaming an HD Netflix movie while in the other room my wife is doing the same on Amazon, and we are also both web surfing on either a tablet or a laptop... and that's more about the router than the individual devices, isn't it?

    Even at the office, no one is doing anything that requires 600 Mbps or even the 300 of the Pixel C (and the connection in/out of our building is only 20 Mbps). It's more a question of how many devices we can get connected simultaneously at a reasonable/usable speed.
