Closing Thoughts (for Now)

It’s really up to the notebook manufacturers to make sure that their WiFi implementations are up to snuff, and that means doing more than a quick connectivity check under ideal test conditions. The QA and engineering departments at the very least ought to be testing at 5, 25, 50, and 100 feet, using standard Windows operations (i.e. not just IxChariot or iPerf). If there are issues, they should be ironed out before customers (and reviewers) get the product. With that said, a good foundation for wireless networking can go a long way toward improving the bandwidth and stability of your connection.

Intel’s adapters aren’t always the best, but they’re rarely the worst, provided you get one of the non-budget offerings (i.e. avoid the 1000 and 2000 series parts). Realtek unfortunately comes in near the bottom of my ranking list in many cases, but most notebooks with Realtek WiFi are already cutting corners—they’re the 1x1:1, 2.4GHz-only solutions that are so common. The fact is, whether you're using an adapter from Qualcomm/Atheros, Broadcom, Marvell, Realtek, or Ralink, you can have a good adapter in some cases or a downright awful one in others. Broadly speaking, most solutions with two streams end up being better than any of the single-stream solutions.

Of course, it's not just about spatial streams. Oddly enough for a company that has been at the forefront of wireless technology, Apple's OS X—as Anand detailed in our 2013 MacBook Air review—does not scale the TCP window size beyond 64KB and thus fails to get optimal performance out of 802.11ac. (I assume an OS/driver patch will address this at some point, but that hasn't happened yet AFAIK.) OS and driver issues can definitely put a clamp on WiFi performance, which again is why the notebook makers need to exercise due diligence and test in real-world scenarios to ensure their hardware is working properly.
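To put a rough number on that 64KB limit, the classic bandwidth-delay bound (throughput can't exceed window size divided by round-trip time) shows why the window matters. The sketch below uses an assumed 2ms wireless LAN RTT purely for illustration, not a measured figure:

```python
# Illustrative sketch: why a 64KB TCP receive window caps 802.11ac throughput.
# The upper bound is window_size / round_trip_time (the bandwidth-delay limit).
# The RTT below is an assumed example value, not a measurement.

def tcp_throughput_bound_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum achievable TCP throughput in Mbps for a given window and RTT."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

window = 64 * 1024   # 64KB window, as described for OS X at the time
rtt = 0.002          # assume a 2ms round trip on the local wireless network

print(f"{tcp_throughput_bound_mbps(window, rtt):.0f} Mbps")  # 262 Mbps
```

Even with that optimistic 2ms RTT, a single TCP connection tops out around 262Mbps—well short of the 867Mbps link rate a 2x2 802.11ac connection advertises.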

As I said earlier, one of the best things about 802.11ac wireless is that it raises the bar for wireless adapters. No one can get away with selling you an 11ac adapter without including, at the bare minimum, a dual-band chipset with support for 5GHz and 2.4GHz networks. If you live in a packed subdivision or apartment complex, 5GHz networking is almost required these days. Ideally, though, I want more than just the bare minimum; I want two 80MHz streams on my 802.11ac connections, and three would be even better. Intel’s 7260 provides two streams, and so do most of the current crop of 802.11ac routers. Hopefully, we won’t see as many solutions going for the bottom-of-the-barrel single-stream implementations; they’re not worse than 802.11n, but they’re not much better than two-stream 5GHz 802.11n either.

Consider this a warning shot across the bow of the notebook manufacturers: we’re going to be paying more attention to your wireless implementations going forward. I can understand why a $500 or less budget laptop needs to cut every corner possible to hit that price point, but when we’re looking at $1000+ laptops we don’t want to see such blemishes. It may not always be as painful as using a bad LCD on an otherwise excellent laptop, but a bad WiFi implementation that loses connectivity if you’re more than 40 feet from the router can be even worse in some cases.

We’ll be doing some full reviews of 802.11ac routers in the near future, including the Western Digital AC1300 and Linksys AC1200. The full reviews will better characterize performance as well as other features. Until then, it at least looks like most current 802.11ac routers are using two streams (867Mbps maximum theoretical throughput), which is a nice upgrade over the 300Mbps so many 802.11n routers offer. Meanwhile, Apple's latest AirPort Extreme and Time Capsule go whole hog and give us three streams and up to 1300Mbps. Now if I could just get (Windows) laptops with three 802.11ac streams, I might actually be willing to give up my Gigabit Ethernet and wires!
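For reference, the 867Mbps and 1300Mbps figures fall out of simple arithmetic from the top 802.11ac modulation settings (80MHz channel, 256-QAM with rate-5/6 coding, short guard interval). A quick sketch of that math:

```python
# Where the 867Mbps and 1300Mbps figures come from: 802.11ac at 80MHz,
# MCS 9 (256-QAM, rate-5/6 coding, short guard interval).

DATA_SUBCARRIERS_80MHZ = 234  # usable OFDM data subcarriers in an 80MHz channel
BITS_PER_SUBCARRIER = 8       # 256-QAM carries 8 bits per subcarrier
CODING_RATE = 5 / 6           # rate-5/6 error-correction coding
SYMBOL_TIME_S = 3.6e-6        # OFDM symbol duration with short guard interval

def phy_rate_mbps(spatial_streams: int) -> float:
    """Theoretical 802.11ac PHY rate for a given number of spatial streams."""
    bits_per_symbol = DATA_SUBCARRIERS_80MHZ * BITS_PER_SUBCARRIER * CODING_RATE
    return spatial_streams * bits_per_symbol / SYMBOL_TIME_S / 1e6

for streams in (1, 2, 3):
    print(f"{streams} stream(s): {phy_rate_mbps(streams):.1f} Mbps")
    # 1 stream: 433.3 Mbps, 2 streams: 866.7 Mbps, 3 streams: 1300.0 Mbps
```

Each additional spatial stream adds another 433Mbps, which is why single-stream 802.11ac barely moves the needle over good dual-stream 802.11n.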


A Quick Test of Real-World Wireless Performance

  • DanNeely - Tuesday, July 09, 2013 - link

    1. 160MHz channels are an optional feature.

    2. Making hardware that can work on that wide a channel is significantly more difficult than narrower options. N only supported 40MHz channels, so they already had to push the TX/RX modules to double their bandwidth.

    3. For mobile devices the wider bandwidth will result in higher power consumption for the WiFi chip. I wouldn't be surprised if 160MHz channels never become common for anything except bridges/etc.

    4. At 160MHz you're down to 2 channels in the US now (possibly 4 in the future), which is worse from a conflict standpoint than the 3 channels we've got at 2.4GHz now.

    The last point is the biggest reason I don't expect to see 160MHz channels any time soon. It's in the spec, but it has major real-world problems. IMO it was added just to let them wave around bigger (theoretical) bandwidth numbers for bragging rights vs. commonly available wired networks (never mind that in real-world situations 1Gb wired will be faster anyway).
  • DarkXale - Tuesday, July 09, 2013 - link

    Actually, number 3 is false.

    A higher bandwidth permits using modulation that requires less energy per bit.
  • DanNeely - Tuesday, July 09, 2013 - link

    Unless I'm misunderstanding something, the higher bandwidths are used to pack more bits in, so the wider streams still need the same amount of power per bit but just cram more total bits into the stream at any given time.
  • DarkXale - Tuesday, July 09, 2013 - link

    A higher throughput will of course mean a higher power drain (if you're using the bandwidth for that), but a wider bandwidth by itself does not cause that.
  • Jaybus - Tuesday, July 09, 2013 - link

    The wider channel width would take a bit more power, but that would be more than made up for by allowing more bits per symbol. Higher throughput will use more power, of course, but it does not affect the power per bit. Where the power is being increased is in the RF amplifier; it of course takes more power to transmit 3 signals than it does to transmit 2.

    Also, it takes more power for a 5GHz carrier than for a 2.4GHz carrier. This is because the rise and fall times of the RF amplifier are the same; amplifiers are less efficient during the rise and fall, and the higher the frequency, the larger the percentage of time they spend in a rise/fall state. This assumes a class D amplifier design, which it almost certainly is, as that is the most power efficient.
  • name99 - Tuesday, July 09, 2013 - link

    "Making hardware that can work on that wide a channel is significantly more difficult than narrower options"

    Sufficiently hard that that is not the way it is done.
    160MHz support is done through channel bonding, i.e. running essentially two 80MHz channels in parallel. This means duplicating everything, plus logic to synchronize the two. If you want the two 80MHz channels to be discontiguous, it also means a more aggressive (likely duplicated) set of RF components to handle the two disparate frequencies.

    For all these reasons, 160MHz, like MU-MIMO, has been left to the next gen of chips (and who knows if it will be implemented, even there; it's possible all the vendors will conclude that reducing power and area are more important priorities for the immediate future).
  • Modus24 - Tuesday, July 09, 2013 - link

    Seems like Jarred is assuming it's only using 2 streams. It's more likely the lower rates are due to the bad antenna design he mentioned, and the link had to drop to a lower-order modulation (e.g. BPSK, QPSK, 16-QAM) in order to reduce bit errors.
  • danstek - Tuesday, July 09, 2013 - link

    To correct a statement in the third paragraph, the MacBook Air has traditionally been 2x2:2, and only the MacBook Pro has had 3x3:3 WiFi implementations.
  • JarredWalton - Tuesday, July 09, 2013 - link

    Fixed, thanks.
  • theagentsmith - Tuesday, July 09, 2013 - link

    Sorry to be obnoxious, but if Anandtech.com cares about its international audience, I think a "metric units only" policy should be put in place.
    It's true that it's not so difficult to multiply feet by 30 to get the distance in centimeters, but the only Imperial unit I would accept on a tech site is inches, and only for screen diagonals, as that is the de facto standard for expressing screen sizes.
    You're already testing hardware and measuring its performance in metric units (Mbps, GHz); please continue with millimeters, centimeters, meters, and grams, as is done in the scientific field :)
    You remember why the Mars Climate Orbiter crashed, right?
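DanNeely's channel-count point above can be sanity-checked with simple division over the US 5GHz allocations. The segment boundaries below are simplified assumptions (UNII-1 and UNII-2 treated as one contiguous run) and ignore DFS restrictions, so treat this as a rough sketch rather than a regulatory reference:

```python
# Rough sketch: how many non-overlapping channels of a given width fit in the
# contiguous US 5GHz segments. Boundaries are simplified/assumed, and real
# availability also depends on DFS rules.

SEGMENTS_MHZ = [
    (5170, 5330),  # UNII-1 + UNII-2 (channels 36-64)
    (5490, 5730),  # UNII-2 Extended (channels 100-144, DFS required)
    (5735, 5835),  # UNII-3 (channels 149-165)
]

def count_channels(width_mhz: int) -> int:
    """Count how many full channels of width_mhz fit across all segments."""
    return sum((high - low) // width_mhz for low, high in SEGMENTS_MHZ)

for width in (20, 40, 80, 160):
    print(f"{width}MHz: {count_channels(width)} non-overlapping channels")
    # 20MHz: 25, 40MHz: 12, 80MHz: 6, 160MHz: 2
```

The 160MHz result matches the "2 channels in the US" figure: each doubling of channel width halves the number of neighbors you can avoid.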
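DarkXale's energy-per-bit argument in the thread above follows from the Shannon capacity formula, C = B·log2(1 + P/(N0·B)): for a fixed target data rate, solving for the required signal power shows it falls as bandwidth grows. The noise density and target rate below are arbitrary illustrative values:

```python
# Sketch of the Shannon-capacity argument: for a fixed target rate C, the
# signal power P needed over an AWGN channel of bandwidth B is
# P = N0 * B * (2**(C/B) - 1), which decreases as B increases.
# N0 and TARGET_RATE are arbitrary values chosen for illustration.

N0 = 1e-20           # noise power spectral density, W/Hz (assumed)
TARGET_RATE = 400e6  # target data rate: 400 Mbps (assumed)

def required_power_watts(bandwidth_hz: float) -> float:
    """Signal power needed to hit TARGET_RATE over a channel of width B."""
    return N0 * bandwidth_hz * (2 ** (TARGET_RATE / bandwidth_hz) - 1)

for bw in (80e6, 160e6):
    p = required_power_watts(bw)
    print(f"{bw / 1e6:.0f}MHz: power {p:.3e} W, "
          f"energy/bit {p / TARGET_RATE:.3e} J")
```

Doubling the bandwidth from 80MHz to 160MHz cuts the power (and thus energy per bit) needed for the same 400Mbps by more than a factor of three in this example—though, per the rest of the thread, a radio that instead uses the extra bandwidth to push more bits will still draw more total power.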
