The ASUS X99-E-10G WS Motherboard Review: 10GBase-T Networking with Intel’s X550-AT2
by Ian Cutress on November 7, 2016 9:00 AM EST- Posted in
- Motherboards
- Intel
- Asus
- 10G Ethernet
- X99
- 10GBase-T
- X99-E-10G WS
- X550
- X550-AT2
SLI, SLI, SLI
In our last review of a motherboard capable of 4-way GPU gaming, it was eloquently stated that our failure to test the feature was an indication of AnandTech's current path on quality. I would have preferred a discussion of the difficulty of actually sourcing four identical GPUs at the time (and then showing 'wow, it works' in the review). There are plenty of things we don't test as rigorously on each product that some users might like, such as PCIe x1 slot bandwidth latency, or USB 2.0 performance on CentOS, either due to time, relevance, or, as was the case before, a lack of hardware. This is still the case - I do not have four expensive and high-profile GPUs, and I live a whole continent and an ocean away from our GPU testing facility. I have, however, procured a third identical GTX 980, and will show you that at least 3-way GPU works in our test suite.
GPU scaling beyond two cards depends on the game engine and the drivers, or if the game has implemented specific multi-GPU modes to accelerate game features. For some titles, the GPU is not the bottleneck, and it might either be CPU performance, PCIe bandwidth, DRAM, or the fact that the game doesn’t scale and we’re relying on the performance of a single card.
Both GTA5 and Alien Isolation scaled to two cards with our hardware setup, but failed to move going to three. GRID is typically a title that scales with almost anything; however, the jump from two to three cards was only 7%.
Shadow of Mordor gets the best scaling, but only at 4K and not at 1080p. At 1080p the move from 1 GPU (98 FPS) to 2 GPUs (150 FPS) is significant, but the step to 3 GPUs (158 FPS) is not. For 4K, the scaling keeps going from 1 GPU (39 FPS) to 2 GPUs (69 FPS) and 3 GPUs (80 FPS), although not as much for that last jump. At 4K we are running at our Ultra preset, indicating that some other hard compute part of frame rendering might be the limiting factor in AFR modes.
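The scaling numbers above can be made concrete with a quick back-of-envelope calculation. A minimal sketch (the helper name is mine, not from any test suite) that turns the quoted frame rates into per-GPU speedup and efficiency against perfect linear scaling:

```python
# Multi-GPU scaling efficiency for the Shadow of Mordor numbers quoted above.
# Ideal scaling would give a speedup of N with N GPUs (100% efficiency).
def scaling_efficiency(fps_by_gpu_count):
    """Return (gpu_count, speedup vs. 1 GPU, efficiency) for each added GPU."""
    results = []
    for n in range(2, len(fps_by_gpu_count) + 1):
        speedup = fps_by_gpu_count[n - 1] / fps_by_gpu_count[0]
        results.append((n, speedup, speedup / n))
    return results

fps_1080p = [98, 150, 158]   # 1, 2, 3 GPUs at 1080p
fps_4k    = [39, 69, 80]     # 1, 2, 3 GPUs at 4K

for label, fps in (("1080p", fps_1080p), ("4K", fps_4k)):
    for n, speedup, eff in scaling_efficiency(fps):
        print(f"{label}: {n} GPUs -> {speedup:.2f}x speedup, {eff:.0%} efficient")
```

This shows the pattern in the text: at 1080p the third GPU drags efficiency down to roughly half of ideal, while at 4K the GPU-bound workload keeps efficiency noticeably higher across both steps.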
Testing 10GBase-T and X550-AT2
Similar to our GPU testing, we do not have the ideal hardware for Ethernet testing. In our previous 10G motherboard review, we implemented an ESXi 6 platform and used two Windows Server VMs, each with 8 threads, 16GB of DRAM, and one of the 10G ports. As a result, each VM had a direct OS-to-OS 10G connection with custom-configured IPs, and testing was done over that link.
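The review does not name the benchmark used over that OS-to-OS link, so as an illustrative stand-in, here is a minimal point-to-point TCP throughput sketch: run the server on one VM and the client on the other, each bound to one of the 10G ports. The port number and duration are arbitrary assumptions.

```python
# Minimal point-to-point TCP throughput test (hypothetical stand-in for
# whatever benchmark was actually run between the two VMs).
import socket
import sys
import time

PORT = 5201        # arbitrary test port (assumption)
CHUNK = 1 << 20    # 1 MiB per send
DURATION = 2       # seconds to transmit

def server():
    """Accept one connection, count received bytes, return goodput in Gb/s."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total = 0
            start = time.time()
            while (data := conn.recv(CHUNK)):
                total += len(data)
            elapsed = time.time() - start
    gbps = total * 8 / elapsed / 1e9
    print(f"received {gbps:.2f} Gb/s")
    return gbps

def client(host):
    """Blast zero-filled buffers at the server for DURATION seconds."""
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__" and len(sys.argv) > 1:
    client(sys.argv[2]) if sys.argv[1] == "client" else server()
```

In practice a single-threaded Python sender will not saturate 10G; dedicated tools run multiple parallel streams, which is one of the tuning points readers raised in the previous review.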
Testing 10G with ESXi was actually more difficult this time around. The X550 series drivers are not supported in the default image, requiring the admin to install the relevant plug-in. While this enabled the ports to work in the Windows Server 2016 VMs, ESXi would not allow them under VMXNET3 mode, which is typically the high-performance mode. I was unable to find a quick solution, and with the X550 controller being newer, deciphering what needed to be done was also a minefield of frustration.
It is interesting then to note that our results for the ASUS board and X550 are similar to previous results with the ASRock board using X540. This is ultimately because the chips are mostly similar, with the primary difference being in how they communicate with the CPU: the X540 requires PCIe 2.0 x8, while the X550 requires PCIe 3.0 x4. The X550 also introduces some professional-level features, but the 10G copper market remains in Intel's hands without another major player (professional environments instead tend to turn to fiber).
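A quick sanity check shows why the two link configurations perform alike: both provide far more bandwidth than two 10GBase-T ports can consume. The sketch below estimates usable link bandwidth from per-lane transfer rates and line-code efficiency (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0), ignoring other protocol overhead.

```python
# Rough usable bandwidth of each controller's PCIe link, accounting only
# for line encoding. Both comfortably exceed the ~20 Gb/s needed to
# saturate two 10GBase-T ports simultaneously.
GT_PER_LANE = {"2.0": 5.0, "3.0": 8.0}            # GT/s per lane
ENCODING    = {"2.0": 8 / 10, "3.0": 128 / 130}   # line-code efficiency

def link_gbps(gen, lanes):
    """Usable bandwidth in Gb/s for a PCIe link of the given gen and width."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes

x540 = link_gbps("2.0", 8)   # X540: PCIe 2.0 x8
x550 = link_gbps("3.0", 4)   # X550: PCIe 3.0 x4
print(f"X540 (2.0 x8): {x540:.1f} Gb/s, X550 (3.0 x4): {x550:.1f} Gb/s")
```

Both links land around 32 Gb/s of usable bandwidth, so the X550's narrower PCIe 3.0 x4 connection mainly saves lanes rather than changing achievable throughput.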
When we last performed this testing with the ASRock X99 WS-E/10G, a number of our readers were very helpful in describing ways in which 10G network performance (with the right hardware and knowledge) could be improved. As our test is point-to-point without a managed switch, and given the frustration of learning to debug the environment, I highly recommend reading the post by Jammrock back in that review. But both 10G ports do work, I can tell you that.
63 Comments
maglito - Monday, November 7, 2016 - link
Faster large file copies to/from NAS. 1Gbps is a real limitation here.
prisonerX - Monday, November 7, 2016 - link
Who needs a "use case"?
BrokenCrayons - Wednesday, November 9, 2016 - link
Someone who has a lot of stray use laying around and needs a container in which to store them.
Breit - Thursday, November 10, 2016 - link
Thanks, made my day! :D
Notmyusualid - Friday, December 2, 2016 - link
Hilarious!
beginner99 - Tuesday, November 8, 2016 - link
My thought as well. The average user doesn't need it, and that's why it's not marketed in this space. The average user is content with crappy WiFi that can't even fully use a fast internet connection. 1GBit Ethernet is usually fast enough to saturate your average HDD, even for large file transfers.
bcronce - Tuesday, November 8, 2016 - link
40Gb network cards are becoming popular in datacenters. My use case is a simple one. My ISP is so good, they pass these 40Gb micro-bursts through their network right to my connection, giving me transient latency spikes that can last tens of milliseconds before TCP's congestion control starts doing its magic. My 1Gb link can't handle 40Gb hitting it, and TCP takes too long to respond. TCP takes tens of milliseconds because the endpoints are tens of milliseconds away, latency-wise. This is a speed-of-light issue and a latency vs. throughput issue.
Notmyusualid - Friday, December 2, 2016 - link
I bought the ASRock version of this board, and by day actually test / implement 10G & higher networks. So given I'm one who is building a rig for the next 3-5yrs (as the article refers to), I thought I'd jump on 10G to futureproof myself. I also have 2x1G ports on the board too, so I can disable the 10G in the meantime.
zodiacfml - Monday, November 7, 2016 - link
Thunderbolt 3 has more value. It will get popular soon, but price and power consumption have to come down first. The tipping point will be when enterprise Wi-Fi APs start wielding this interface.
pixelstuff - Monday, November 7, 2016 - link
Thunderbolt 3 is harder to run through your walls and terminate wherever needed. It's also not convenient for having multiple users connected at the same time.