Edit: Read our full review here: http://www.anandtech.com/show/8781/

Regular readers of my Twitter feed might have noticed that over the past 12 to 24 months, I have lamented the lack of 10 gigabit Ethernet connectors on any consumer motherboard. My particular gripe was the absence of 10GBase-T, the standard that can be most easily integrated into the home. Despite my wishes, there are several major barriers to introducing this technology. First is cost: a 10GBase-T Ethernet card runs $400-$800 depending on your location (using the Intel X520-T2 as a reference), followed by power consumption, which requires either an active cooler or a passive heatsink with good airflow to dissipate up to 14W. Bandwidth can be just as important (the X540-BT2 calls for PCIe 2.1 x8, though it can work in PCIe 3.0 x8 or x4 mode), and the market is limited to those who need faster internal network routing. Add all these factors together and it does not make for an easy addition to a motherboard. But step forward ASRock.
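
To put the bandwidth requirement in perspective, here is a quick back-of-envelope check (a sketch in Python, using nominal per-lane PCIe throughput after encoding overhead) of why a PCIe 2.1 x8 link, or a PCIe 3.0 x4 link, can comfortably feed two 10GBase-T ports:

    # Rough feasibility check: can a given PCIe link feed dual 10GBase-T ports?
    # Nominal usable throughput per lane (after encoding overhead), in Gb/s:
    #   PCIe 2.x (8b/10b encoding)    -> ~4.0 Gb/s per lane
    #   PCIe 3.0 (128b/130b encoding) -> ~7.9 Gb/s per lane
    PCIE_LANE_GBPS = {"2.0": 4.0, "3.0": 7.9}

    def headroom(gen, lanes, ports=2, port_gbps=10.0):
        """Ratio of link bandwidth to the total Ethernet line rate."""
        return PCIE_LANE_GBPS[gen] * lanes / (ports * port_gbps)

    print(headroom("2.0", 8))  # ~1.6x -> PCIe 2.1 x8 is ample for 2x 10GbE
    print(headroom("3.0", 4))  # ~1.6x -> PCIe 3.0 x4 also works, as noted above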

The concept of the X99 WS-E/10G is simple. This is a workstation-class motherboard aimed at prosumers, which is where 10GBase-T makes the most sense after all: users who have sufficient funds to purchase a minimum $800 Netgear 10GBase-T switch, and who measure their internal networking upgrades in hundreds of dollars per port rather than cents per port. The workstation motherboard is also designed to support server operating systems, and has a low-profile rear I/O for fitting into 1U chassis, similar to other ASRock WS motherboards.

In order to deal with the heat from the Intel X540-BT2 chip being used, the extended XXL heatsink is connected to the top heatsink on the board, with the final chipset heatsink using an active fan. This is because the chipset heatsink arrangement also has to cool the two PLX 8747 chips that enable the x16/x16/x16/x16 functionality. If a prosumer has enough single-slot cards, this can extend to x16/x8/x8/x8/x8/x8/x8 if needed. Extra PCIe power is provided via two Molex ports above and below the PCIe connectors.
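
A quick way to see how two PLX 8747 switches yield those slot layouts (a sketch; the exact lane routing is my assumption for illustration, not something ASRock has detailed): each switch takes an x16 uplink from the CPU and exposes 32 downstream lanes, and the resulting 64 downstream lanes divide evenly into either arrangement.

    # Lane bookkeeping for two PLX 8747 switches (routing assumed for illustration).
    downstream = 2 * 32              # each PLX 8747 exposes 32 downstream lanes
    assert downstream == 4 * 16      # x16/x16/x16/x16
    assert downstream == 16 + 6 * 8  # x16/x8/x8/x8/x8/x8/x8 with single-slot cards
    print("64 downstream lanes cover both slot layouts")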

Aside from the X540-BT2 chip supplying dual 10GBase-T ports, ASRock also includes two Intel I210-AT gigabit Ethernet ports, for a total of four. All four can be teamed with a suitable switch in play. The key point to note here, despite ASRock's video explaining the technology (which sounds perfectly simple to anyone in networking), is that this does not increase your internet speed, only the speed of the internal home/office network.
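
If you want to verify what your local network actually delivers, a crude TCP throughput test works; below is a minimal iperf-style sketch (my own illustration, not ASRock software; the port number is arbitrary and the host addresses are up to you). Run it as a server on one machine and point a client at it from another.

    # Minimal LAN throughput check between two machines (Python 3.8+).
    # Usage: "python bench.py server" on one host,
    #        "python bench.py client <server-ip>" on another.
    import socket, sys, time

    PORT = 50007        # arbitrary unprivileged port
    CHUNK = 1 << 20     # 1 MiB per send
    TOTAL = 4 << 30     # push 4 GiB in total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"~{received * 8 / secs / 1e9:.2f} Gb/s received")

    def client(host):
        payload = b"\x00" * CHUNK
        with socket.create_connection((host, PORT)) as sock:
            sent = 0
            while sent < TOTAL:
                sock.sendall(payload)
                sent += CHUNK

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])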

The rest of the motherboard is filled with ten SATA 6 Gbps ports plus another two from a controller, along with SATA Express and M.2 support. ASRock's video suggests the M.2 slot is PCIe 2.0 x4, although the product image lacks the Turbo M.2 x4 designation and the chipset would not have enough lanes, so it is probably M.2 x2 shared with the SATA Express port. Audio is provided by an enhanced Realtek ALC1150 codec solution, and in the middle of the board is a USB 2.0 Type-A port sticking vertically out of the motherboard, for dongles or easy OS installation outside of a case. There are eight USB 3.0 ports on the board as well.
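
The x2 versus x4 distinction matters for fast M.2 drives; using the same nominal PCIe 2.0 per-lane rate as before (~500 MB/s usable), the ceilings work out as follows:

    # Approximate M.2 bandwidth ceilings on PCIe 2.0 lanes (~500 MB/s usable per lane).
    for lanes in (2, 4):
        print(f"PCIe 2.0 x{lanes}: ~{500 * lanes} MB/s")  # x2 ~1 GB/s, x4 ~2 GB/s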

Like the X99 Extreme11, this motherboard is going to be very expensive. Dual PLX 8747 chips and an Intel X540-BT2 chip on their own would put it past most X99 motherboards on the market. To a first approximation, we could take the Extreme11 design, remove the LSI chip and add the X540-BT2, which still suggests it will be $200-$300 more than the Extreme11. Mark this one down at around $800-$900 as a rough guess, with an estimated release date in December.

Thinking out loud for a moment: 10GBase-T is being used here because it is a prosumer feature, and prosumers already want a lot of other features, hence the combination and the high overall price. The moment 10G is added to a basic motherboard, for example an H97/Z97 build (where it would reduce the PCIe 3.0 x16 slot down to x8), a $100 board becomes a $400+ board, beyond the cost of any other Z97 motherboard. Ultimately, if 10GBase-T is to become a mainstream feature, the controller needs to come down in price.

Comments

  • Samus - Tuesday, November 25, 2014 - link

    I have a very light EE background, but if they moved the VRM array to a vertical riser and used SO-DIMMs, there would be plenty of room for a few integrated components. But you are right, there is physically room for only a single PCIe x16 slot, and many people don't care about 40 lanes. The raw throughput of the memory bus is my main attraction. It's annoying that the X58 chipset has more raw memory bandwidth than all the mainstream platforms today... just because it was triple channel.
  • dabotsonline - Thursday, November 27, 2014 - link

    "Regular readers of my twitter feed might have noticed that over the past 12/24 months, I lamented the lack of 10 gigabit Ethernet connectors on any motherboard. My particular gripe was the lack of 10GBase-T, the standard which can be easily integrated into the home."

    Ian, haven't Supermicro sold a few X540-based motherboards for quite a while now?
  • Pork@III - Monday, November 24, 2014 - link

    Hmm, impractical? 100GbE solutions (100 Gb/s on a single optical line) have been in mass production since 2012, and many ISPs have started refreshing their networks with the new hardware. Don't declare something practical or impractical when it isn't so. If we all followed your way of thinking, humankind would never have left the Stone Age.
  • Lord of the Bored - Monday, November 24, 2014 - link

    I... did you read more than one sentence before replying? Because he's completely right and not just being stubborn and hidebound. Copper wire is a significant bottleneck these days.
    Optical is too delicate for home use, even ignoring the cost of decent fiber.

    "TOSLINK proves it's doable cheap and reliably" I imagine you might say. But TOSLINK is currently limited to a tad over a hundred megabit, and is still quite delicate next to it's copper brethren. Home users don't think twice about tying a knot in a cord to take up excess slack or keep it from wandering off. That breaks even the plastic fibers in TOSLINK immediately, to say nothing of a quality GLASS fiber. And god help the poor soul who runs his chair over a cable.
    That's part of why TOSLINK never saw widespread use, and we run our SPDIF over copper through an HDMI jack these days.

    The equipment you're talking about is VERY expensive, and it's not just because it's business hardware.

    Am I saying there will be no more advances in networking? No, of course not.
    Am I saying that the reign of copper is coming to an end? You better believe it.
    Am I saying optical fiber is too delicate to be trusted to the home users that apparently run cable with a pair of hamhocks in boxing gloves instead of their hands? Ayup.
    Am I saying we'll see 10-gigabit radios in the home before we see 10-Gb cables in the home? God, I hope not.
  • Pork@III - Monday, November 24, 2014 - link

    ASRock is thinking of the near future. You won't need a copper cable to the street switch; copper will only be the connection over a very short run between an optical-to-copper I/O adapter and the PC. Over distances of a few inches, copper cables have no problem with 10 Gb/s speeds.
  • Romeen - Monday, November 24, 2014 - link

    Both Aquantia and Broadcom have been offering 10GbE over more than 100m of copper cable for over five years now. The issue was power, but the new generation of 10GbE PHYs on 28nm technology will bring PHY power down from the current ~7W per port to almost half that. Intel will have 10GbE NICs with Aquantia 28nm PHYs in 2015, less than a year away.
  • DanNeely - Monday, November 24, 2014 - link

    Assuming you're talking about Intel Fortville, it's already launched, but they're quoting the same 7W TDP for the 2x10Gb, 4x10Gb/1x40Gb, and 4x10Gb/40Gb configurations. Unless they're thermal binning, or most of the power is being consumed by components that aren't sensitive to the data rate, the equal TDP for all three parts seems odd to me.

    http://ark.intel.com/compare/82945,82946,82944
  • DanNeely - Monday, November 24, 2014 - link

    "In order to deal with the heat from the Intel X540-BT2 chip being used, the extended XXL heatsink is connected to the two other heatsinks on board, with the final chipset heatsink using an active fan."

    Looking at the pictures, I can't see any connection between the two sinks around the socket and the one at the bottom of the board with a fan.
  • Ian Cutress - Monday, November 24, 2014 - link

    You're right, I thought I saw something there.
  • Jeff7181 - Monday, November 24, 2014 - link

    I'm curious, what is the use case for 10GbE at home? I have a server at home with a 6TB RAID5 array, and it's no big deal if it takes four minutes to transfer a freshly ripped 40 GB MKV file over 1GbE.

    Is it iSCSI? Who is using iSCSI at home anyway?

    As a workstation board for a professional with an iSCSI storage array, this makes some sense... but for even the odd-ball "prosumer" I just don't see the lack of 10GbE being as big a shortcoming as you make it sound. Especially when it doubles the cost of an otherwise very competent workstation motherboard.
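
For reference, the rough math behind the transfer times discussed above (my own quick calculation, assuming ~85% of line rate is achievable in practice):

    # Rough transfer time for a 40 GB file, assuming ~85% of line rate in practice.
    FILE_GB = 40
    for name, gbps in (("1GbE", 1.0), ("10GbE", 10.0)):
        seconds = FILE_GB * 8 / (gbps * 0.85)
        print(f"{name}: ~{seconds / 60:.1f} minutes")  # 1GbE ~6.3 min, 10GbE ~0.6 min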
