Edit: Read our full review here: http://www.anandtech.com/show/8781/

Regular readers of my Twitter feed might have noticed that over the past 12 to 24 months I have lamented the lack of 10 gigabit Ethernet ports on any motherboard. My particular gripe was the lack of 10GBase-T, the standard most easily integrated into the home. Despite my wishes, there are several major barriers to introducing this technology. First is cost: a 10GBase-T Ethernet card runs $400-$800 depending on your location (using the Intel X520-T2 as a reference). Then there is power consumption, which requires either an active cooler or a passive heatsink with good airflow to dissipate up to 14W. Bandwidth matters as well (the X540-BT2 calls for PCIe 2.1 x8, although it can work in PCIe 3.0 x8 or x4 mode), and the market is limited to those who need faster internal network routing. Add all of these factors together and it does not make for an easy addition to a motherboard. But step forward ASRock.
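
To put those bandwidth numbers in context, here is a quick back-of-the-envelope check (a sketch only, using nominal per-lane PCIe throughput rather than anything measured on this board) of why a dual-port 10GBase-T controller is specified for PCIe 2.1 x8 yet can get by on a PCIe 3.0 x4 link:

    # Back-of-the-envelope check: dual 10GBase-T ports vs. PCIe link bandwidth.
    # Nominal effective throughput per lane, one direction, after encoding overhead.
    per_lane_gbs = {"PCIe 2.x (8b/10b)": 0.5, "PCIe 3.0 (128b/130b)": 0.985}

    ports = 2
    line_rate_gbit = 10                        # line rate per 10GBase-T port
    needed_gbs = ports * line_rate_gbit / 8    # ~2.5 GB/s in each direction

    for gen, lane in per_lane_gbs.items():
        for lanes in (4, 8):
            link = lane * lanes
            verdict = "enough" if link >= needed_gbs else "falls short"
            print(f"{gen} x{lanes}: ~{link:.1f} GB/s vs ~{needed_gbs:.1f} GB/s needed -> {verdict}")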

The concept of the X99 WS-E/10G is simple: a workstation-class motherboard aimed at prosumers. This is where 10GBase-T makes the most sense, after all, with users who have sufficient funds to purchase a minimum $800 Netgear 10GBase-T switch and who measure their internal networking upgrades in hundreds of dollars per port rather than cents per port. The workstation motherboard is also designed to support server operating systems, and the rear panel is low profile for fitting into 1U chassis, similar to other ASRock WS motherboards.

In order to deal with the heat from the Intel X540-BT2 chip being used, the extended XXL heatsink is connected to the top heatsink on board, with the final chipset heatsink using an active fan. This is because the chipset heatsink arrangement also has to cool the two PLX 8747 chips that enable the x16/x16/x16/x16 functionality. If a prosumer has enough single-slot cards, this can extend to x16/x8/x8/x8/x8/x8/x8 if needed. Extra PCIe power is provided via two Molex connectors above and below the PCIe slots.
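
As a rough sketch of how that lane budget works out (assuming the usual PEX 8747 arrangement of one x16 upstream port and 32 downstream lanes per switch; the exact routing on this board is not confirmed):

    # Rough lane budget for a 40-lane Haswell-E CPU with two PLX PEX 8747 switches.
    # Assumption: each switch uses a x16 upstream port and offers 32 downstream lanes.
    cpu_lanes = 40
    switches = 2
    upstream_per_switch = 16
    downstream_per_switch = 32

    to_switches = switches * upstream_per_switch     # CPU lanes consumed by the switches
    left_over = cpu_lanes - to_switches               # lanes still available natively
    behind = switches * downstream_per_switch         # lanes presented to the slots

    print(f"CPU lanes feeding the switches: {to_switches}")
    print(f"CPU lanes left for other slots/controllers: {left_over}")
    print(f"Slot lanes behind the switches: {behind} "
          f"(enough for x16/x16/x16/x16, or one x16 plus six x8 links)")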

Aside from the two 10GBase-T ports supplied by the X540-BT2 chip, ASRock also includes two Intel I210-AT gigabit Ethernet ports, for a total of four. All four can be teamed with a suitable switch in play. The key point to note here, despite ASRock's video explaining the technology (which sounds perfectly simple to anyone in networking), is that this does not increase your internet speed, only the speed of the internal home/office network.
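
For readers who do want to experiment with teaming on the operating system side, here is a minimal sketch for inspecting what a Linux bond is actually doing; it assumes the in-kernel bonding driver and an interface named bond0, neither of which is specific to this board:

    # Minimal check of a Linux bonding (teaming) interface. Assumes the in-kernel
    # bonding driver and an interface named "bond0" -- adjust to your own setup.
    from pathlib import Path

    bond = Path("/proc/net/bonding/bond0")
    if not bond.exists():
        print("No bond0 found - is the bonding driver loaded and an interface configured?")
    else:
        mode, slaves, current = None, [], None
        for line in bond.read_text().splitlines():
            if line.startswith("Bonding Mode:"):
                mode = line.split(":", 1)[1].strip()
            elif line.startswith("Slave Interface:"):
                current = {"name": line.split(":", 1)[1].strip()}
                slaves.append(current)
            elif line.startswith("MII Status:") and current is not None:
                current["status"] = line.split(":", 1)[1].strip()
        print(f"Mode: {mode}")
        for s in slaves:
            print(f"  {s['name']}: {s.get('status', 'unknown')}")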

The rest of the motherboard is filled with ten SATA 6 Gbps ports from the chipset plus another two from a controller, along with SATA Express and M.2 support. ASRock's video suggests the M.2 slot is PCIe 2.0 x4, although their image lacks the Turbo M.2 x4 designation and the chipset would not have enough lanes, so it is probably M.2 x2 shared with the SATA Express. Audio is provided by an improved Realtek ALC1150 codec solution, and in the middle of the board is a USB 2.0 Type-A port sticking out of the motherboard, useful for dongles or easy OS installation outside the case. There are also eight USB 3.0 ports on the board.
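
A quick sanity check on what the two options would mean in bandwidth terms (nominal PCIe 2.0 numbers; the shared SATA Express lanes are an assumption rather than confirmed routing):

    # Nominal throughput of the possible M.2 configurations (PCIe 2.0 lanes).
    per_lane_mb = 500   # ~MB/s per PCIe 2.0 lane after 8b/10b overhead
    for lanes, note in ((2, "fits the chipset lane budget, shared with SATA Express"),
                        (4, "would need a dedicated x4 link, i.e. the Turbo M.2 x4 arrangement")):
        print(f"M.2 PCIe 2.0 x{lanes}: ~{lanes * per_lane_mb} MB/s ({note})")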

Like the X99 Extreme11, this motherboard is going to be very expensive. Dual PLX 8747 chips and an Intel X540-BT2 chip on their own would put it past most X99 motherboards on the market. To a certain extent we could consider the Extreme11 design with the LSI chip removed and the X540-BT2 added, which still means it will probably be $200-$300 more than the Extreme11. Mark this one down at around $800-$900 as a rough guess, with an estimated release date in December.

Thinking out loud for a moment: 10GBase-T is being used here because it is a prosumer feature, and prosumers already want a lot of other features, hence the combination and the high overall price. The moment 10G is added to a basic motherboard, for example an H97/Z97 board (which would also cut the PCIe 3.0 x16 slot down to x8), a $100 board becomes a $400+ board, beyond the cost of any other Z97 motherboard. Ultimately, if 10GBase-T is to become a mainstream feature, the controller needs to come down in price.

 

Comments

  • DCide - Tuesday, November 25, 2014 - link

    Thanks for being so specific, Daniel. It all makes sense within your environment.

    However, I must say that this definitely doesn't apply to all environments. eSATA and Thunderbolt may be good for something like DAS, but tricky or impossible to use for peer to peer data transfer, and no good if there's any distance between them. Ethernet infrastructure is already well established down to the OS level, meaning virtually any two devices with Ethernet ports can generally share data and communicate in any way necessary, right out of the box.

    I wouldn't be shocked if you told me 10GBase-T currently tops out around 3-4Gbps in many real-world implementations, because that was the case with Gigabit Ethernet in earlier versions of Windows, for example (before Windows 7, I believe). But then we have to start talking about which OS, hardware, and drivers are involved, because I wasn't seeing the same problem in other OSes at the time. I think a lot of the HW/SW you were using would be prime candidates for non-optimal implementations of 10GbE, or be subject to bottlenecks elsewhere. And I have no doubt your solution was good. But this doesn't mean it's *very* hard to take advantage of the bandwidth, when a simple file copy will do it.

    We've been through this a few times already with Ethernet, Fast Ethernet, and then GbE. With Ethernet, most implementations ran around 300KB/s until 3COM came out with their Etherlink III. Suddenly I was seeing around 900KB/s+, or roughly 80-90% of the theoretical maximum. I saw a similar pattern repeated each time, with each new Ethernet standard starting out performing at only 30-40%, then moving up to perhaps 60-70%, and eventually landing at 80-90% of the maximum. So I'm making a rather educated guess that if you use the right OS you can get at least 6-7Gbps out of the ASRock motherboard when it's released, using a very real-world (not synthetic) test of copying files using the OS' copy command. This will make 10GbE very useful in some real-world situations right now.
  • Daniel Egger - Wednesday, November 26, 2014 - link

    > But this doesn't mean it's *very* hard to take advantage of the bandwidth, when a simple file copy will do it.

    Except that it doesn't. For 1000Base-T you already have to work a bit to get the most out of a single Gbit/s link, especially if you only have a single client; for 10GBase-T it's impossible. But don't take my word for it, you can read it right on AnandTech...
    http://www.anandtech.com/show/7608/netgear-readyna...

    Or you can read the case study from Intel:
    http://download.intel.com/support/network/sb/fedex...

    And those guys definitely know what they're doing...
  • DanNeely - Thursday, November 27, 2014 - link

    The testing setup page of the article mentioned that they got a 10GbE switch for running the test network, but there was no mention of getting any 10GbE cards to run single-client tests with. Looking at where the tests all topped out, I'm almost certain that the bottlenecks were the 1GbE links each client was running on. Since Intel recently launched its new 10GbE cards, maybe they can be convinced to donate a few for review and permanent inclusion in the testbed.
  • DCide - Wednesday, December 3, 2014 - link

    I don't quite understand your reasoning, Daniel. Two current Windows machines connected through a basic gigabit switch is about as simple and easy as it gets. Plug and play 100MB/s+. Doesn't even cost very much!

    No need to try to interpret someone else's tests when you can simply try it for yourself! Once you've established your baseline (that it achieves its full potential, 80%+, in a rudimentary configuration), you can proceed to figure out how to avoid bottlenecks and achieve similar performance in whatever complex target configuration you desire.

    But don't undermine the basic technology as if it doesn't reach its potential, when in fact it does. If you can't achieve similar results your infrastructure is dragging it down.
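
    For anyone who wants to try that baseline themselves, here is a rough sketch of a single-stream TCP test in plain Python (the port, chunk size and total transfer size are arbitrary placeholders, so adjust for your own network):

        # Rough single-stream TCP throughput test (a sketch: port, chunk and total
        # transfer size are placeholders -- adjust for your own network).
        # Usage:  python tcp_baseline.py server
        #         python tcp_baseline.py client <server-ip>
        import socket, sys, time

        PORT = 5001
        CHUNK = 1 << 20           # 1 MiB per send/recv
        TOTAL = 4 * (1 << 30)     # move 4 GiB in total

        def report(verb, nbytes, secs):
            print(f"{verb} {nbytes / 1e9:.1f} GB in {secs:.1f} s: "
                  f"{nbytes / secs / 1e6:.0f} MB/s ({nbytes * 8 / secs / 1e9:.2f} Gbit/s)")

        def server():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            report("Received", received, time.time() - start)

        def client(host):
            payload = b"\0" * CHUNK
            sock = socket.create_connection((host, PORT))
            sent, start = 0, time.time()
            while sent < TOTAL:
                sock.sendall(payload)
                sent += len(payload)
            sock.close()
            report("Sent", sent, time.time() - start)

        if __name__ == "__main__":
            client(sys.argv[2]) if sys.argv[1] == "client" else server()

    If a straight file copy between the two machines lands in the same ballpark as this raw number, the link itself isn't the bottleneck.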
  • Samus - Monday, November 24, 2014 - link

    I agree that 10 gigabit is really impractical over copper, which is why you just pick up an SFP NIC and a switch with an SFP uplink. If your application isn't for raw downstream throughput (like a 10 gigabit backbone with a few dozen 1 gigabit clients) then you can do point-to-point 10 gigabit with SFP NICs and media converters (which are an added cost).

    Most 10 gigabit networks I've built are for the former application. One was for imaging machines at a computer recycling company where downstream throughput was key (I did dual 10-gigabit uplinks from the imaging server to dual 48-port layer 2 switches) and the other was for an office that had a demanding SQL database from 25 or so simultaneous connections, so a single 10 gigabit uplink to a 26 port UNMANAGED gigabit switch was adequate.

    Most of the time a teamed gigabit NIC with auto-failover is adequate for networks of <50 nodes.
  • eanazag - Monday, November 24, 2014 - link

    Both. They have had a hard time producing 10G chipsets that use a reasonable amount of energy. They've been plagued with heat and power issues on both the client side and the switch side. It has taken forever to get Base-T switches; fiber ones have been around for a while.

    The other side is that they can get away with charging a lot, so they do. Competing technologies are fibre channel and infiniband - go price those out with a switch.

    AnandTech showed a few years ago that the performance of a single-port 10G connection was better than four 1Gb ports any way you looked at it.
  • Railgun - Monday, November 24, 2014 - link

    Supply and demand, plain and simple. Not a lot outside of the enterprise/datacenter requires it, and the enterprise vendors know what they can charge for it. In the SOHO market, not many care to provide it, so... the market/supply is small.
  • Kjella - Monday, November 24, 2014 - link

    Because 1000 Mbit/s is way bigger than raw Blu-ray at 54 Mbit/s, so you can do 20 HDTV streams or 4 UHD streams over GigE if your drives can keep up? Because you need two wired machines in close proximity, since what's the point of a 10GigE server if all that connects to it is laptops, tablets and smartphones, mostly over WiFi? The only place I know it's been used is between a server and a SAN; topping out at ~700 MB/s (5.4 GBit/s) of actual performance it's quite neat, but for a very specific niche in number crunching.
  • Samus - Monday, November 24, 2014 - link

    One could alternatively buy a PCIe x4 10-gigabit dual-port SFP card for <$100 (eBay) and a switch with 10-gigabit SFP uplink for the same price. You'd even get the added benefit of layer 2/3 management.

    But I like the idea of having it all integrated too; you're right, though, that when it comes to integration, these OEMs seem to "pull a Samsung" and throw it all in plus the kitchen sink. It's overkill.

    I just want X99 to come to ITX already. They neglected it with X79 even though it's entirely possible, because overclocking is pretty much out of the question, and I guess it's 'suicide' to make something that doesn't overclock...
  • Ian Cutress - Tuesday, November 25, 2014 - link

    X99 + ITX won't happen any time soon. The size of the socket and the DRAM slots leaves little space for anything else, and out of the CPU's PCIe lanes you would only be able to use 16 on a single PCIe slot. Unless there is significant demand, motherboard manufacturers see that as a waste of resources, and users won't want to buy a 40-lane PCIe CPU and not be able to use most of those lanes. I have suggested cutting down to dual-channel memory to save space for the chipset and some controllers, but the same argument applies: users who pay for quad-channel support won't want dual channel. Then find space for the power delivery, SATA ports and so on. As said, if there is *demand* (1000s of units, not 10s) then there might be a compromise.
