Recap: 802.11ac Wireless Networking

We’ve had quite a few major wireless networking standards over the years, and while some have certainly been better than others, I have remained a strong adherent of wired networking. I don’t expect I’ll give up the wires completely for a while yet, but Western Digital and Linksys sent me some 802.11ac routers for testing, and for the first time in a long time I’m really excited about wireless.

I’m not a good representative of typical PC users, but it has been a long time, relatively speaking, since we first saw Draft-N wireless options—Gary Key (now with ASUS) wrote about it in what seems like an eternity ago, and in Internet time I suppose seven years is pretty darn close. Granted, 802.11ac has effectively been “done” for about two years now, but the first laptops to arrive with 11ac adapters are less than a month old—up until now, 11ac has been almost exclusively the domain of routers and bridges.

Before I get into a few performance specifics of 802.11ac testing, let me start with what’s wrong with 802.11n. The single biggest issue for me is the lack of quality implementations in so many of our devices. Look at Apple’s MacBook Pro line: it has been 3x3:3 MIMO for several years, offering connection speeds of up to 450Mbps. The problem with that “up to 450Mbps” is that it’s influenced by several factors.

Of course signal quality matters, but by far the bigger issue is this: are you talking about 2.4GHz 802.11n or 5GHz 802.11n? If the former, you can pretty much throw any thoughts of 450Mbps out the window. The bigger problem with “up to 450Mbps” is that the vast majority of laptops and routers don’t offer such support; Apple's 3x3:3 dual-band implementation is better than 99% of Windows laptops (and yes, I just made up that statistic).
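For reference, those peak numbers fall straight out of the PHY arithmetic: data subcarriers × bits per symbol × coding rate × spatial streams, divided by the OFDM symbol time. A quick sketch—the subcarrier counts and modulation settings below are the standard 802.11n/ac maximums, with the short guard interval assumed:

```python
def phy_rate_mbps(subcarriers, bits_per_symbol, coding_rate,
                  streams, symbol_us=3.6):
    """Peak 802.11 PHY rate in Mbps: bits per OFDM symbol across all
    spatial streams, divided by symbol time (3.6us with short GI)."""
    return subcarriers * bits_per_symbol * coding_rate * streams / symbol_us

# 802.11n, 40MHz channel (108 data subcarriers), 64-QAM, 5/6 coding,
# 3 streams -- the familiar 450Mbps figure
n_rate = phy_rate_mbps(108, 6, 5/6, 3)

# 802.11ac, 80MHz channel (234 data subcarriers), 256-QAM, 5/6 coding,
# 3 streams -- 1300Mbps
ac_rate = phy_rate_mbps(234, 8, 5/6, 3)

print(f"3x3:3 802.11n  @ 40MHz: {n_rate:.0f} Mbps")
print(f"3x3:3 802.11ac @ 80MHz: {ac_rate:.0f} Mbps")
```

Drop to a single 2.4GHz stream with a 20MHz channel and the same math lands you at 72Mbps—which is why the typical 1x1:1 laptop never gets anywhere near the number on the router’s box.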

About a year ago, I reviewed a router and repeater from Amped Wireless and found them to be good if not exceptional products. Compared to most of the wireless solutions people end up with, they were a breath of fresh air and I’ve actually been using them for the past year with very few complaints. On the other hand, I’ve had dozens of laptops come and go during the same time frame. Can you guess what the most common configuration is, even on more expensive laptops? If you said “single-band 2.4GHz 1x1:1”, give yourself a cookie.

We’re thankfully starting to see more laptops with dual-band 2x2:2 implementations, but even when you get that there’s still a big difference in actual performance, depending on notebook design, drivers, and other “special sauce”. We’ll see this in the charts on the next page, and it’s often more a statement of a particular laptop’s wireless implementation as opposed to representing what you might get from a particular wireless chipset.

In my opinion, the great thing about 802.11ac is that any product claiming 802.11ac compliance is automatically dual-band. 11ac actually only works on the 5GHz channels, so for 2.4GHz support it’s no better than existing 802.11n solutions, but it’s fully backwards compatible and, as we’ll see in a moment, you really don’t want to use 2.4GHz wireless networking unless you’re primarily concerned with signal range. This is a shorter introductory piece, so don’t expect a full suite of benchmarks; let’s just cut straight to the chase and say that there are a lot of situations in which I’ve found 802.11ac to be substantially faster than 802.11n.


  • thesavvymage - Tuesday, July 9, 2013 - link

    Up until a little while ago, Linksys was a Cisco brand, so including them both as brand examples isn't really correct. They were, however, just sold to Belkin, so Cisco doesn't even make consumer routers anymore.
  • DanNeely - Tuesday, July 9, 2013 - link

    1. 160MHz channels are an optional feature.

    2. Making hardware that can work on that wide a channel is significantly more difficult than narrower options. N only supported 40MHz channels, so they already had to push the tx/rx modules to double their bandwidth.

    3. For mobile devices the wider bandwidth will result in higher power consumption for the WiFi chip. I wouldn't be surprised if 160MHz channels never become common for anything except bridges/etc.

    4. At 160MHz you're down to 2 channels in the US now (possibly 4 in the future), which is worse from a conflict standpoint than the 3 channels we've got at 2.4GHz now.

    The last point is the biggest reason I don't expect to see 160MHz channels any time soon. It's in the spec, but it has major real-world problems. IMO it was added just to let them wave around bigger (theoretical) bandwidth numbers for bragging rights vs. commonly available wired networks (never mind that in real-world situations gigabit wired will be faster anyway).
  • DarkXale - Tuesday, July 9, 2013 - link

    Actually, number 3 is false.

    A higher bandwidth permits using modulation that requires less energy per bit.
  • DanNeely - Tuesday, July 9, 2013 - link

    Unless I'm misunderstanding something, the higher bandwidths are used to pack more bits in; so the wider streams still need the same amount of power/bit but just cram more total bits into the stream at any given time.
  • DarkXale - Tuesday, July 9, 2013 - link

    A higher throughput of course will net a higher power drain (if you're using the bandwidth for that), but a wider bandwidth itself does not cause that.
  • Jaybus - Tuesday, July 9, 2013 - link

    The wider channel width would take a bit more power, but that would be more than made up for by allowing more bits per symbol. Higher throughput will use more power, of course, but does not affect power per bit. Where the power is being increased is in the RF amplifier; it of course takes more power to transmit 3 signals than it does to transmit 2.

    Also, it takes more power for a 5GHz carrier than for a 2.4GHz carrier. This is because the rise and fall times of the RF amplifier are the same regardless of frequency, amplifiers are less efficient during rise and fall, and the higher the frequency, the larger the percentage of time they spend in a rise/fall state. This assumes a class D amplifier design, which it almost certainly is, as that is the most power efficient.
  • name99 - Tuesday, July 9, 2013 - link

    "Making hardware that can work on that wide a channel is significantly more difficult than narrower options"

    Sufficiently hard that that is not the way it is done.
    160MHz support is done through channel bonding, i.e., running essentially two 80MHz channels in parallel. This means duplicating everything, plus logic to synchronize the two. If you want the two 80MHz channels to be discontiguous, it also means a more aggressive (likely duplicated) set of RF components to handle the two disparate frequencies.

    For all these reasons, 160MHz, like MU-MIMO, has been left to the next gen of chips (and who knows if it will be implemented, even there; it's possible all the vendors will conclude that reducing power and area are more important priorities for the immediate future).
  • Modus24 - Tuesday, July 9, 2013 - link

    Seems like Jarred is assuming it's only using 2 streams. It's more likely the lower rates are due to the bad antenna design he mentioned, and the link had to drop to a lower-order modulation (i.e., BPSK, QPSK, 16-QAM, etc.) in order to reduce the bit errors.
  • danstek - Tuesday, July 9, 2013 - link

    To correct a statement in the third paragraph, MacBook Air is traditionally 2x2:2 and only the MacBook Pro has had 3x3:3 WiFi implementations.
  • JarredWalton - Tuesday, July 9, 2013 - link

    Fixed, thanks.
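The bandwidth-versus-power-per-bit debate in the thread above boils down to Shannon capacity: for a fixed data rate, a wider channel needs a lower SNR, so in the idealized AWGN case the transmit power (and thus energy per bit) required actually drops as bandwidth grows. A minimal Python sketch of that trend—the noise-density constant is arbitrary and only the relative ordering matters:

```python
def required_power_w(rate_bps, bandwidth_hz, noise_density=1e-20):
    """Transmit power needed to sustain `rate_bps` over an AWGN channel
    of width `bandwidth_hz`, per the Shannon limit C = B*log2(1 + SNR).
    Solving for power: P = N0 * B * (2^(C/B) - 1)."""
    snr = 2 ** (rate_bps / bandwidth_hz) - 1
    return noise_density * bandwidth_hz * snr

rate = 433e6  # roughly one 802.11ac spatial stream
for bw_mhz in (40, 80, 160):
    p = required_power_w(rate, bw_mhz * 1e6)
    print(f"{bw_mhz:3d} MHz channel: {p:.2e} W to hit {rate/1e6:.0f} Mbps")
```

The required power falls steeply as the channel widens, because the needed SNR shrinks exponentially with spectral efficiency. In practice real radios also pay fixed costs (wider filters, ADCs, RF front-ends), which is the part of DanNeely's point that still holds.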
