One motherboard upgrade that has been a long time coming is the integration of 10 Gigabit Ethernet on consumer-level motherboards, specifically copper-based 10GBase-T, which is backward compatible with the RJ-45 cabling used in the majority of home networks. While 10G adoption is growing in business and enterprise, cost remains a significant barrier for home and prosumer networking, as well as consumer-focused implementations. We recently posted a news update listing the current 10GBase-T motherboards on the market, and this is the second review from that list: today we are testing ASUS' new high-end LGA2011-3 workstation refresh, the ASUS X99-E-10G WS. The motherboard uses Intel's latest 10GBase-T controller, the X550, which runs as a PCIe 3.0 x4 implementation.

Other AnandTech Reviews for Intel’s LGA2011-3 Platform

The Intel Core i7-6950X, i7-6900K, i7-6850K and i7-6800K Broadwell-E Review
The Intel Core i7-5960X, i7-5930K and i7-5820K Haswell-E Review
The Intel Xeon E5 v3 Fourteen-Core Review (E5-2695 v3, E5-2697 v3)
The Intel Xeon E5 v3 Twelve-Core Review (E5-2650L v3, E5-2690 v3)
The Intel Xeon E5 v3 Ten-Core Review (E5-2650 v3, E5-2687W v3)

X99 Series Motherboard Reviews:
Prices Correct at time of each review

$750: The ASRock X99 WS-E 10G Review [link]
$600: The ASUS X99-E-10G WS Review (this review)
$600: The ASRock X99 Extreme11 Review [link]
$500: The ASUS Rampage V Extreme Review [link]
$400: The ASUS X99-Deluxe Review [link]
$340: The GIGABYTE X99-Gaming G1 WiFi Review [link]
$330: The ASRock X99 OC Formula Review [link]
$323: The ASRock X99 WS Review [link]
$310: The GIGABYTE X99-UD7 WiFi Review [link]
$310: The ASUS X99 Sabertooth Review [link]
$300: The GIGABYTE X99-SOC Champion Review [link]
$300: The ASRock X99E-ITX Review [link]
$300: The MSI X99S MPower Review [link]
$275: The ASUS X99-A Review [link]
$241: The MSI X99S SLI PLUS Review [link]

The State of the 10GBase-T Market

Integrating 10GBase-T onto a motherboard is currently an expensive process. To get full bandwidth, each port needs at minimum either a PCIe 2.0 x4 or a PCIe 3.0 x2 connection, depending on the controller used. This controller traditionally interfaces with the CPU, reducing the PCIe lanes available for other large PCIe devices and co-processors, such as GPUs, storage cards or professional compute cards. Running the controller directly from the larger PCIe allocation on the 100-series consumer chipsets is a potential future play, although the 100-series chipset connects to the CPU via the equivalent of a PCIe 3.0 x4 link, which could itself become a bottleneck.
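The lane requirements above fall out of some quick arithmetic. As a rough sketch (my numbers, not from the article: 5 GT/s with 8b/10b encoding for PCIe 2.0, 8 GT/s with 128b/130b for PCIe 3.0), we can check which link widths clear the 10 Gb/s line rate of one 10GBase-T port:

```python
# Back-of-the-envelope check of why one 10GBase-T port needs at least
# a PCIe 2.0 x4 or PCIe 3.0 x2 link. Figures are nominal per-lane rates
# after encoding overhead, ignoring protocol/packet overhead.
LANE_GBPS = {
    "2.0": 5.0 * 8 / 10,     # 5 GT/s, 8b/10b encoding  -> 4.0 Gb/s per lane
    "3.0": 8.0 * 128 / 130,  # 8 GT/s, 128b/130b coding -> ~7.88 Gb/s per lane
}

def link_gbps(gen: str, lanes: int) -> float:
    """Usable one-direction bandwidth of a PCIe link, in Gb/s."""
    return LANE_GBPS[gen] * lanes

TEN_GBE = 10.0  # line rate of a single 10GBase-T port, Gb/s

for gen, lanes in [("2.0", 2), ("2.0", 4), ("3.0", 2), ("3.0", 4)]:
    bw = link_gbps(gen, lanes)
    verdict = "enough" if bw >= TEN_GBE else "too slow"
    print(f"PCIe {gen} x{lanes}: {bw:5.1f} Gb/s -> {verdict} for one 10GbE port")
```

PCIe 2.0 x2 (8 Gb/s) falls short, while 2.0 x4 (16 Gb/s) and 3.0 x2 (~15.8 Gb/s) both clear the bar, matching the minimums stated above.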

There are three main commercial controllers currently on offer, used in both PCIe cards and motherboard integration. First, and what we’ve seen so far, is the Intel X540 family of controllers, which requires eight lanes and runs at PCIe 2.0 speeds (i.e. even in a PCIe 3.0 environment it still needs x8, as the controller itself is only PCIe 2.0). Its successor, the Intel X550 family, makes the leap to PCIe 3.0 and requires only an x4 link, which makes it easier to integrate into a modern platform but may be a touch more expensive by virtue of being new. Third is the Aquantia / Tehuti Networks solution, which we’ve seen on 10GBase-T PCIe cards bundled with certain motherboard configurations or sold separately by third parties. The Intel X540/X550 parts are families of controllers, offering single- and dual-port designs, and to our knowledge are better supported and use less motherboard area (but are more expensive) than the Tehuti solution. All of these chips can dissipate up to 15W on their own, requiring a motherboard built to disperse the extra heat generated.

As a result, any user looking at an integrated 10GBase-T solution has only a few options, and will have to find a way to justify the cost (which is easier from a business perspective). Aside from the 10GBase-T switch cost (the cheapest being a 2-port unmanaged switch for $250 from ASUS, an 8-port previous-generation Netgear X708 for $700, or a 16-port Netgear for ~$1400), the previous motherboard we reviewed with an integrated X540-T2 controller still runs at $700, over a year after its release. The controller cost is around $100-$200, depending on the motherboard manufacturer's deal with Intel, which leads to a direct bill-of-materials (BOM) increase in the base cost. The PCIe cards with single or dual ports can be purchased for around $250-$400, depending on sales, support, and whether they are new. (For those looking beyond copper, other solutions are available, but they are less likely to be integrated into a home or current SMB setup without prior planning.)

Anyone looking to migrate a home network to 10GBase-T has to be aware of this outlay, and a number of users (myself included) are waiting patiently until the cost of such an ecosystem comes down. I do wonder exactly what the tipping point would be for a number of enthusiasts to make the jump, especially with a number of networking technologies in the works (such as 2.5G/5G, or 802.11ad wireless routers now coming onto the market for consumers, offering gigabit line-of-sight connectivity). I have had some companies ask me what that tipping point is, and to be honest I still think it’s the switch – a 4x10G + 4x1G port managed switch for $250 would sell like hot cakes, regardless of the cost of controllers.

The ASUS X99-E-10G WS Overview

The feature that’s hard to ignore is the pair of 10G ports, and to be honest, buying this motherboard only makes sense if you need them (or want to be ‘futureproof’ when building a 3-5 year system). Adding the capability to also support x16/x16/x16/x16 across its main PCIe slots means that extra, expensive hardware is needed for full bandwidth support.

This ability comes through PCIe switches, namely a pair of Avago PLX8747 switches. These add-ons (~$50 each in final cost) mux/demux sixteen lanes of PCIe 3.0 from the processor into thirty-two downstream lanes arranged as x16/x16. As the main processors for this motherboard, such as the Intel Core i7-6950X, offer 40 lanes of PCIe, taking 32 away for the two switches leaves eight lanes. These final eight lanes are split into four for the 10G controller and four for the U.2/M.2 PCIe 3.0 x4 slot at the bottom of the board. ASUS intends this motherboard to be the single port of call for all your PCIe needs.
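The lane budget described above can be written out as a quick sanity check (my arithmetic based on the description, not an official ASUS block diagram):

```python
# Lane budget for the X99-E-10G WS with a 40-lane CPU (e.g. Core i7-6950X).
CPU_LANES = 40

# Each PLX8747 switch takes an x16 uplink from the CPU and fans it out
# to two x16 downstream slots.
plx_uplinks = 2 * 16        # two switches consume 32 CPU lanes
gpu_slot_lanes = 2 * 2 * 16 # four x16 slots presented to GPUs (64 lanes)

remaining = CPU_LANES - plx_uplinks  # lanes left over from the CPU
x550_lanes = 4              # Intel X550 10GBase-T controller, PCIe 3.0 x4
u2_m2_lanes = 4             # U.2/M.2 PCIe 3.0 x4 slot

# The leftover eight lanes are exactly consumed by the 10G controller
# and the U.2/M.2 slot.
assert remaining == x550_lanes + u2_m2_lanes

print(f"{plx_uplinks} CPU lanes feed {gpu_slot_lanes} lanes of GPU slots; "
      f"{remaining} remain for 10G ({x550_lanes}) and U.2/M.2 ({u2_m2_lanes})")
```

This is also why the x16/x16/x16/x16 layout is impossible without switches: four native x16 slots would need 64 lanes against the CPU's 40.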

One of the benefits of this PCIe configuration is that the board can support a full complement of GPUs for 4-way SLI or 4-way CrossFire (or even more for compute tasks, depending on GPU size or riser cables). One of the main criticisms of using PCIe switches is that there is a small amount of overhead which could reduce peak performance, but in gaming, as we’ve tested before, it is sub-1%. In fact, this is the only way to support 4-way x16, and it allows for faster GPU-to-GPU communication (for adjacent GPUs), which can be required for compute tasks.

As this is a premium motherboard, ASUS didn’t skimp on the ‘regular’ features either. Starting with its OC socket for premium LGA2011-3 platforms, the power delivery is reinforced with ASUS’ high-end chokes as well as an extended heatsink arrangement for the high-powered ICs present. The X99-E-10G WS supports 128GB of DDR4-2133, including ECC registered memory with an appropriate Xeon E5 v4 processor, and has profiles up to DDR4-3333 for non-ECC gaming memory. Aside from the ten SATA ports, U.2 and M.2, ASUS’ WS line is designed to be verified against a longer list of workstation-class hardware, such as RAID cards and FPGAs, to ensure compatibility. Populated with seven 16-way RAID cards, the motherboard makes an interesting storage proposition – or add in more 10G ports.

Due to the 10G ports, ASUS does not include any 1G network ports; however, the 10G ports will also operate at 1G speeds. For audio, ASUS uses its upgraded Realtek ALC1150 solution with filter caps, PCB separation, and additional audio software. On the rear panel, ASUS has removed all USB 2.0 ports, leaving a pair of USB 3.1 ports (one Type-A, one Type-C) and a set of four USB 3.0 ports.

The PCIe slots also get an upgrade here, with the four main GPU slots featuring semi-transparent latches that the user can light up via a DIP switch to indicate which slots are needed to maximize 2-way, 3-way or 4-way GPU use. Each of the seven slots also has extra metallic reinforcement embedded into the slot itself, designed to maintain rigidity when heavy PCIe devices are used or PCIe devices are installed during bumpy transit.

Performance-wise, it is sufficient to say that the idle power of this WS board is higher than that of standard X99 motherboards; however, for consumer CPUs, Multi-Core Turbo is enabled by default, giving a little extra speed (at the expense of a bit of power). Metrics such as DPC latency and audio quality both sit in the better halves of our result tables, but as with most WS boards carrying extra features, POST time is a little longer than normal. We tested the board up to 3-way SLI (I didn’t have a fourth GTX 980, sorry), seeing game-dependent gains at 4K.

Quick Links to Other Pages

In The Box and Visual Inspection
Test Bed and Setup
Benchmark Overview
System Performance (Audio, USB, Power, POST Times on Windows 7, Latency)
CPU Performance, Short Form (Office Tests and Transcoding)
Single GPU Gaming Performance (R7 240, GTX 770, GTX 980)
Testing up to 3xGTX 980 and 10G

Board Features, Visual Inspection


Comments

  • dsumanik - Monday, November 07, 2016 - link

    Agreed, but there is a lot of PCIe lane juggling on this board as is. With the number of modern external and internal interfaces being pushed currently, the days of 'one board to do it all' may be gone forever, sadly.

    Ultimately this board is going to appeal to users who want to free up the PCIe slots taken up by 10G add-in cards in their current rigs.

    IMO the idle power is a bit of a concern, over the life of the board it is going to add up, especially if used for server duties.
  • Notmyusualid - Friday, December 02, 2016 - link

    Yes I noticed that too - hence I just picked up a new ASRock ws-e/10G which has the Thunderbolt header (TB2 I think it is - but that is fine with me). But what I didn't expect, was that I'd need to BUY the pcie card to actually present the interface. I must admit, I expected something like that to be in the box. More expense.

    Just waiting for my E5-2690v4 Broadwell-EP 14-Core 135W 35M CPU to clear customs to check it all out...
  • sorten - Monday, November 07, 2016 - link

    what is the use case for 10G in the home?
  • jkhoward - Monday, November 07, 2016 - link

    People who render using multiple workstations want a super fast network. You can chain multiple systems together to render something faster. Think... home graphic designer/video editor.
  • timbotim - Monday, November 07, 2016 - link

    My primary use case is 30-second transfer of VMs around a network at 10 Gb/s rather than 5 minutes at 1 Gb/s
  • beginner99 - Tuesday, November 08, 2016 - link

    That's a niche use case, and you will need a PCIe SSD to write that much data in such a short time. A 20 GB VM would require a write speed of about 680 MB/s.
  • sorten - Monday, November 07, 2016 - link

    I see, so the average consumer running a render farm in their home office ;-)
  • philehidiot - Tuesday, November 08, 2016 - link

    Personally, I tend to render farts in my home office.

    I do not require quite so many PCIe lanes for this.
  • slyphnier - Wednesday, November 09, 2016 - link

    that's not cost-efficient for a home graphic designer/video editor, because you end up spending more than $15k (depending on how many workstations) for multiple workstations including the switch/router... even if your system/rig lasts you 3-4 years, it will be much cheaper to go the rental rendering server/office route

    i believe availability of this board is limited, with few shops that have it in stock
  • Notmyusualid - Friday, December 02, 2016 - link

    They ARE limited, I cannot find waterblocks for mine... But, I can live with that.

    At least having your own hardware, it's a KNOWN cost, and some provider doesn't contact you to notify you that you owe $7k USD this month in network over-usage due to some redirection error you made...
