Wi-Fi Basics: L2/MAC Layer

The next layer up in the OSI model is the data link layer, which encapsulates raw bits into something more manageable. The key to understanding this subject (and many other technology concepts) is seeing everything in layers of abstraction; otherwise it would be incredibly difficult to discuss and analyze aspects of electronics without getting lost in an excess of mostly irrelevant detail.

In the case of Wi-Fi, the MAC layer, which sits within the OSI data link layer, is where much of the intelligence resides on the device (this in particular sets it apart from LTE). At its most basic, the MAC layer hides the reality of the physical link behind an abstracted network. To the parts of the networking stack operating above the MAC layer, the network appears to be composed solely of endpoints. Furthermore, this abstract network appears to be full duplex, meaning that data can be received and transmitted simultaneously.

Obviously, infrastructure-mode Wi-Fi isn't solely composed of endpoints, nor is Wi-Fi full duplex. Rather, due to the nature of how radios work, Wi-Fi is half duplex (a radio can transmit or receive, but not both at once), as attempting to transmit in full-duplex mode results in self-interference: the receiver picks up the signal from its own transmitter. There is actually a significant amount of research going into solving the duplex problem to improve data rates and spectral efficiency, but from a commercial perspective most consumer devices don't have this kind of technology yet.

In order to enable this kind of abstraction, the Wi-Fi standard puts a lot of machinery in place. While it is relatively simple to abstract away the fact that the network actually has an access point routing communications from one endpoint to another, emulating full-duplex communication over a half-duplex medium is surprisingly complex. In general, only one device on a given channel can transmit at any time; if this rule is broken, the transmissions interfere and whatever data was in flight is as good as lost. One exception is the latest generation of devices supporting MU-MIMO, where multiple antennas create areas of constructive and destructive interference so that "beams" allow multiple simultaneous transmissions.

As a result of these limitations, the MAC layer requires that devices and access points cooperate in sharing the spectrum using a contention-based scheme known as Carrier Sense Multiple Access with Collision Avoidance, or CSMA/CA. A device connected to the AP first listens to the channel with its receiver to verify that the channel is clear, then sends a Request to Send (RTS) packet and waits for a Clear to Send (CTS) packet. Once the CTS is received, the device can transmit its data. Because there's no guarantee that another device didn't also transmit at the same time, the device then listens for an acknowledgement from the access point that the data was received. If an ACK doesn't arrive within a certain period of time, the device has to assume that someone else was also attempting to transmit, and responds by backing off on transmissions for a specified period before trying again.
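
To make that listen/handshake/back-off loop concrete, here is a minimal Python sketch of the flow described above. Everything in it is illustrative rather than taken from the 802.11 standard (which defines slot times and contention windows per PHY): the simulated channel, the probabilities, the contention window bounds, and the retry limit are all assumptions for the sake of the example.

```python
import random

# Toy CSMA/CA simulation. Constants are illustrative, not the
# per-PHY slot times and contention windows from the 802.11 spec.
CW_MIN, CW_MAX = 15, 1023   # contention window bounds, in backoff slots
MAX_RETRIES = 7

class SimChannel:
    """Hypothetical stand-in for the shared medium."""
    def is_busy(self):
        return random.random() < 0.3   # assume 30% chance someone is talking
    def handshake_ok(self):
        return random.random() < 0.9   # RTS sent, CTS came back
    def data_acked(self):
        return random.random() < 0.85  # AP acknowledged the data frame

def transmit_frame(channel):
    """CSMA/CA with RTS/CTS and binary exponential backoff."""
    cw = CW_MIN
    for attempt in range(MAX_RETRIES):
        slots_waited = 0
        # Carrier sense: only proceed once the medium is idle.
        while channel.is_busy():
            slots_waited += 1
        # Random backoff so stations that were all waiting on the same
        # busy period don't transmit the instant the channel clears.
        slots_waited += random.randint(0, cw)
        # RTS/CTS reserves the medium before the (longer) data frame.
        if channel.handshake_ok() and channel.data_acked():
            return attempt, slots_waited   # success
        # No CTS or no ACK: assume a collision and back off harder.
        cw = min(2 * cw + 1, CW_MAX)
    return None  # give up; higher layers will see a lost packet

if __name__ == "__main__":
    print(transmit_frame(SimChannel()))
```

Note how the contention window doubles on every failure: that exponential growth is what spreads competing stations apart in time when the channel gets crowded.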

WaveDevice is able to specifically test this part of the MAC layer with its ecosystem performance test, in which competing clients are simulated by WaveDevice and the device under test is examined to see whether its CSMA/CA algorithms maintain appropriate throughput in the face of competing traffic. It turns out that some devices are too aggressive and collide with other traffic, while others are too passive and spend too much time backing off. Straying too far from ideal in either direction seriously affects throughput, so from a validation standpoint this test matters in environments like a convention center or press conference, where hundreds, if not thousands, of other devices may be in the vicinity on the same few channels.

Another part of the MAC layer that is important to understand for the purposes of Wi-Fi testing is rate adaptation. While WaveDevice allows manual control of the Modulation and Coding Scheme (MCS) used by the device, the number of spatial streams for MIMO, and other settings like guard interval and channel bandwidth, a device must select all of these automatically and correctly. This is necessary to ensure that packet loss and retransmission don't happen at excessive rates higher in the networking stack, and that throughput at higher layers is maximized. Importantly, unlike the cellular world, Wi-Fi lacks channel quality indicators that would let the device and access point directly determine the ideal modulation and coding scheme, so rate adaptation has to happen based upon factors like packet/frame loss rates.
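
As a rough illustration of what loss-driven adaptation looks like, here is a toy sketch. The window size and loss thresholds are invented for this example; production algorithms (for instance Minstrel in the Linux mac80211 stack) are considerably more sophisticated, probing rates statistically rather than stepping one index at a time.

```python
import random

MAX_MCS = 9   # 802.11ac defines MCS indices 0-9 per spatial stream

class LossBasedRateControl:
    """Toy loss-driven rate adaptation: step the MCS up when frames
    get through cleanly, step down when loss gets excessive.
    Thresholds and window size are invented for illustration."""

    WINDOW = 50        # frames per measurement window
    LOSS_DOWN = 0.25   # step down above this loss rate
    LOSS_UP = 0.05     # step up below this loss rate

    def __init__(self):
        self.mcs, self.sent, self.lost = 0, 0, 0

    def on_frame_result(self, acked):
        self.sent += 1
        self.lost += 0 if acked else 1
        if self.sent < self.WINDOW:
            return
        loss_rate = self.lost / self.sent
        if loss_rate > self.LOSS_DOWN and self.mcs > 0:
            self.mcs -= 1   # too many lost frames: pick a more robust rate
        elif loss_rate < self.LOSS_UP and self.mcs < MAX_MCS:
            self.mcs += 1   # channel looks clean: try a faster rate
        self.sent = self.lost = 0   # start a new measurement window

# Feed it simulated ACK results for a fairly clean link (assumed 2% loss)
# and watch the MCS climb toward the top of the table.
rc = LossBasedRateControl()
for _ in range(1000):
    rc.on_frame_result(acked=random.random() < 0.98)
print("settled MCS:", rc.mcs)
```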

Meanwhile, it's also important for the device to avoid transmitting at excessive power levels, as power consumed by the power amplifier directly affects battery life. Given that power amplifiers in modern mobile devices generally have a power-added efficiency of around 40%, it isn't surprising for the power amplifier alone to consume somewhere around 1 W, before even considering other parts of the RF chain or the rest of the device. As a real-world example, our web browsing battery life test is long enough that even an average difference of 200 mW can cause a runtime difference measured in hours, so proper control of transmit power is definitely important. It's equally important for the Wi-Fi chain to enter appropriate sleep states in order to save power. When implemented improperly, there can be serious knock-on effects on idle battery life, because unnecessary wake-ups can end up waking the main CPU, which is relatively enormous in terms of power consumption on a mobile device.
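
For a back-of-the-envelope sense of those numbers, the arithmetic below uses assumed round figures (the battery capacity and baseline system draw are not measurements from our test devices):

```python
# Rough arithmetic behind the battery-life claim above. The battery
# capacity and baseline draw are assumed round numbers for illustration.

pae = 0.40                    # power-added efficiency from the article
pa_draw_w = 1.0               # W consumed by the power amplifier
rf_out_w = pa_draw_w * pae    # ~0.4 W actually radiated; the rest is heat

battery_wh = 10.0             # assumed ~2700 mAh at 3.7 V, phone-sized
baseline_w = 1.0              # assumed average system draw during the test

t_base = battery_wh / baseline_w           # 10.0 h runtime
t_extra = battery_wh / (baseline_w + 0.2)  # ~8.3 h with 200 mW extra draw
print(f"radiated: {rf_out_w:.1f} W of {pa_draw_w:.1f} W consumed")
print(f"{t_base:.1f} h vs {t_extra:.1f} h -> {t_base - t_extra:.1f} h lost")
```

Under these assumptions a constant 200 mW of extra draw costs roughly 1.7 hours of runtime, which is why sloppy transmit power control shows up so clearly in battery life testing.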

From a testing perspective, these aspects can also be examined on WaveDevice by measuring throughput while steadily decreasing the access point's transmit power. This rate-vs-range test also exercises the RF front end/physical layer, though it requires that the test chamber be set up properly so the device receives constant transmit power and multipath propagation regardless of angle, avoiding issues with anisotropy (in the real world, devices vary in their transmit and receive capabilities based on angle and orientation). The test also allows direct measurement of the ability of a device's Wi-Fi chipset to demodulate and decode the signal in the face of decreasing SNR and received power, in addition to its ability to select the ideal MCS to maximize throughput and reduce packet loss.
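
The relationship a rate-vs-range sweep is probing can be thought of as a lookup from SNR to the fastest sustainable MCS. In the sketch below, the PHY rates are the published 802.11ac values for two spatial streams at 80 MHz with the 800 ns guard interval, but the SNR thresholds are illustrative ballparks, not numbers from the standard.

```python
# Simplified view of what a rate-vs-range sweep exercises: as received
# SNR falls, the device must step down to more robust MCS entries.

MCS_TABLE = [
    # (assumed min SNR dB, MCS, modulation/coding, PHY rate Mbps for
    #  2 spatial streams / 80 MHz / 800 ns guard interval)
    (32, 9, "256-QAM 5/6", 780.0),
    (29, 8, "256-QAM 3/4", 702.0),
    (25, 7, "64-QAM 5/6",  585.0),
    (23, 6, "64-QAM 3/4",  526.5),
    (21, 5, "64-QAM 2/3",  468.0),
    (18, 4, "16-QAM 3/4",  351.0),
    (14, 3, "16-QAM 1/2",  234.0),
    (11, 2, "QPSK 3/4",    175.5),
    (8,  1, "QPSK 1/2",    117.0),
    (5,  0, "BPSK 1/2",     58.5),
]

def select_mcs(snr_db):
    """Return the fastest MCS entry the link can plausibly sustain."""
    for min_snr, mcs, modulation, rate in MCS_TABLE:
        if snr_db >= min_snr:
            return mcs, modulation, rate
    return None  # below sensitivity: the link drops entirely

for snr in (35, 26, 15, 6, 2):
    print(snr, "dB ->", select_mcs(snr))
```

A device whose rate adaptation tracks this curve closely will show a smooth throughput staircase as AP transmit power falls; one that lags will show excess packet loss before each step down.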

Comments

  • Ryan Smith - Friday, March 25, 2016 - link

    Correct, it's the 12.9" iPad Pro.
  • plext0r - Friday, March 25, 2016 - link

    Excellent article! Thanks for the write-up using professional-level WiFi testing.
  • jardows2 - Friday, March 25, 2016 - link

    Very interested to see this in use. In my work I deal with people's wi-fi problems daily, and seeing some of the insights this tool can provide will be very enlightening. Trying to fix people's wi-fi over blind phone support is an exercise in frustration!
  • Ravn - Friday, March 25, 2016 - link

    Excellent stuff, Anandtech. WiFi is usually described in specifications as: WiFi: Yes, Bands: 2.4/5GHz a/b/g/n, Maximum throughput: xxx Mbps. And that says just about nothing about the quality of the WiFi unit. Finally some relevant RF data that describes how the WiFi performs in real life. Thank you!
    An additional test that could broaden the relevance of the WiFi testing could be how the WiFi unit performs with a lot of Bluetooth units in the same area. BT's nasty frequency-hopping nature across the whole 2.4GHz WiFi band results in a lot of problems in WiFi setups. How WiFi units handle this could be very interesting to include.
  • zodiacfml - Friday, March 25, 2016 - link

    Awesome and powerful testing machine you have there. One "small" Wi-Fi testing website that I read regularly would be interested in it too. Yet it's so powerful that only electrical engineers would use most of its functions.

    If I'm correct, you posted before that this device can also test MU-MIMO performance without too much difficulty. Wi-Fi AP and router reviews on AnandTech in the future wouldn't hurt? :)

    On second thought, I think there is a brighter future for 802.11ad than, say, MU-MIMO. As long as there is line of sight or no obstruction, 1 Gbps is easy for that standard.
  • name99 - Friday, March 25, 2016 - link

    You've left out some interesting aspects of the physical layer.
    An essential part of this layer is the Forward Error Correction (FEC), which augments the transmitted data with additional data in such a way that if a few bits in the stream are in error, they can be recreated from the remaining bits (think parity on steroids).

    These error-correcting codes have been improved over the years in successive specs as it's become feasible to throw more computation at them, with the current state of the art being so-called LDPC (low-density parity-check) codes. [These same codes are currently used by a lot of flash vendors, but have a theoretical problem (trapping sets) that limits their effectiveness above certain noise levels, so better alternatives have been proposed for flash (but as far as I know are not yet in production), and will likely follow in the next big WiFi spec.]

    The specifically interesting thing about these codes, in the context of this article, is that it's not THAT useful to simply say that a chipset implements LDPC (or some other FEC). Implementing the encoding is a fixed algorithm that you can't really get wrong, but there are many ways of implementing a decoder (in other words, ways of attempting to construct the correct data stream from a corrupted data stream). These methods, of course, differ in the power they require, how much computation they utilize, how long they take to correct errors, and how complicated they are to implement.
    The real difference in performance of different chipsets (at the L1 level) is in how well their FEC decoders work. That's where the magic lives.

    At the next level up (the MAC level) it is crazy how much performance is lost because of the decentralized/uncoordinated nature of the media access protocol. (This is the CSMA/CA that the article mentions.)
    Even in the simplest real-world case of one base station and one device, you're losing 35% or so of your goodput to the MAC protocol, and it rapidly drops to 50% and then worse as you add just a few devices. Successive specs have tried various schemes (primarily using the logical equivalent of very long packets) to limit the damage, but all this has really done is keep things standing still, so that the situation in each successive spec is not worse than in the previous one. LTE can be vastly more efficient because it provides for a central intelligence that coordinates all devices and so does not have to waste time on guard intervals where everyone is looking around making sure that no one else is talking or getting ready to talk.

    I don't understand why 802.11 has been so slow to adopt this model: putting the controlling intelligence in the base station (or base station equivalent in a peer network) and having every other device act as a slave. They're going to HAVE to go there at some point anyway --- they've pretty much run out of every other performance option --- and avoiding doing so in 802.11ac just means five more years of sub-optimal performance.

    [You can see this in the iPad Pro numbers. The BCM4355 supports 80MHz channels, and so a maximum PHY rate of 866Mbps, but the best you see is just under 600Mbps (and that performance is only available when transmitting extremely large packets in only one direction); the gap between this and the PHY rate is because of time wasted doing nothing but sitting around following the MAC protocol. This disappointing goodput compared to PHY rate is not due to the mythical "interference" that is blamed for every random radio issue; it is due to the design of the MAC protocol.

    You also see performance fall as the received signal strength falls. This performance drop is ALSO not due to interference, unless you want to make that word meaningless. The correct term for this performance drop is that it is the result of noise, or more precisely a falling SNR (signal-to-noise ratio). As the signal-to-noise ratio falls, you can pack fewer bits into each fragment of time (i.e. you have to switch from QAM256 down to QAM64 down to QAM16) and you have to use more of those bits as error-correction bits rather than as actual data bits.]
  • Denbo1991 - Friday, March 25, 2016 - link

    802.11ad and 802.11ax will have some centralized scheduling features to cut down on the overhead you talk about, especially in the context of many devices on one AP or many overlapping APs.
  • zodiacfml - Saturday, March 26, 2016 - link

    There's a lot more to talk about here than is possible to cover.
  • alanore - Saturday, March 26, 2016 - link

    Definitely some good points that should be covered. It might be worth covering how older low-speed devices can consume a large proportion of the airtime and thus hurt performance.

    Also, in the article it might be worth calling out spatial streams and how they affect performance. In the article it was an apples-to-apples comparison (2x2 vs 2x2), but I guess soon we might see a poorly performing 3x3 laptop getting similar results to the iPad Pro.
  • Ratman6161 - Monday, March 28, 2016 - link

    Interesting, but... for most of us does it really mean anything? So an iPad can achieve 600 Mbps throughput. How does this help me when wireless is used nearly exclusively for accessing the internet and my ISP provides 60 Mbps? For home use, I'm more interested in how well things work when I have my Android TV streaming an HD Netflix movie while in the other room my wife is doing the same on Amazon and we are also both web surfing on either a tablet or a laptop... and that's more about the router than the individual devices, isn't it?

    Even at the office, no one is doing anything that requires 600 Mbps, or even the 300 of the Pixel C (and the connection in/out of our building is only 20 Mbps). It's more a question of how many devices we can get connected simultaneously at a reasonable/usable speed.
