SLI, SLI, SLI

In our last review of a motherboard capable of 4-way GPU gaming, it was eloquently stated that our failure to test the feature was an indication of AnandTech’s current path on quality. I would have preferred a discussion of the difficulty of actually sourcing four identical GPUs at the time (and then showing ‘wow, it works’ in the review). There are plenty of things we don’t test as rigorously on each product that some users might like, such as PCIe x1 slot bandwidth and latency, or USB 2.0 performance on CentOS, whether due to time, relevance, or, as was the case here, a lack of hardware. This is still the case: I do not have four expensive, high-profile GPUs, and I live a whole continent and an ocean away from our GPU testing facility. I have, however, procured a third identical GTX 980, and will show that at least 3-way SLI works in our test suite.

Alien Isolation GTX 980 SLI on ASUS X99-E-10G WS

Grand Theft Auto 5 SLI on ASUS X99-E-10G WS

GRID Autosport SLI on ASUS X99-E-10G WS

Shadow of Mordor SLI on ASUS X99-E-10G WS (1080p)

Shadow of Mordor SLI on ASUS X99-E-10G WS (4K)


GPU scaling beyond two cards depends on the game engine and the drivers, and on whether the game implements specific multi-GPU modes to accelerate its features. For some titles the GPU is not the bottleneck: it might be CPU performance, PCIe bandwidth, or DRAM, or the game simply doesn’t scale and we’re relying on the performance of a single card.

Both GTA5 and Alien Isolation scaled to two cards with our hardware setup, but failed to move going to three. GRID is typically a title that scales with almost anything; however, the jump from two to three cards was only 7%.

Shadow of Mordor gets the best scaling, but only at 4K and not at 1080p. At 1080p the move from one GPU (98 FPS) to two GPUs (150 FPS) is significant, but the step to three GPUs (158 FPS) is not. At 4K the scaling keeps going from one GPU (39 FPS) to two GPUs (69 FPS) to three GPUs (80 FPS), although the last jump is smaller. At 4K we are running our Ultra preset, suggesting that some other compute-heavy part of frame rendering might be the limiting factor in AFR modes.
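To put those figures in context, here is a quick back-of-the-envelope script that turns the FPS numbers above into scaling efficiency, i.e. measured FPS as a fraction of perfect linear scaling (the values are taken straight from our charts):

```python
# Scaling efficiency for the Shadow of Mordor numbers quoted above.
# Efficiency = measured FPS / (single-GPU FPS * number of GPUs).

results = {
    "1080p": [98, 150, 158],  # FPS for 1, 2, 3 GPUs
    "4K":    [39, 69, 80],
}

for preset, fps in results.items():
    single = fps[0]
    for n, f in enumerate(fps, start=1):
        eff = f / (single * n)
        print(f"{preset}: {n} GPU(s) = {f} FPS, {eff:.0%} of ideal scaling")
```

Two cards run at roughly 77% of ideal at 1080p but 88% at 4K, and three cards at 54% versus 68%, confirming that the heavier 4K workload scales better.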

Testing 10GBase-T and X550-AT2

Similar to our GPU testing, we do not have the ideal hardware for Ethernet testing. In our previous 10G motherboard review, we implemented an ESXi 6 platform and used two Windows Server VMs, each with 8 threads, 16 GB of DRAM, and one of the 10G ports. As a result, each VM had a direct OS-to-OS 10G connection with a custom-configured IP, over which the testing was done.
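Conceptually, the test is just a point-to-point throughput measurement between the two VMs. A minimal sketch of that idea in Python (the port is a placeholder, and this is an illustration rather than the actual benchmark tool we used) looks like this:

```python
import socket
import sys
import time

PORT = 5201                 # placeholder port, not the one we actually used
BUF = b"\x00" * (1 << 20)   # 1 MiB send buffer

def server() -> None:
    # Receiver side: accept one connection and count bytes until it closes.
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while chunk := conn.recv(1 << 20):
            total += len(chunk)
        secs = time.time() - start
        print(f"received {total * 8 / secs / 1e9:.2f} Gb/s")

def client(host: str, seconds: int = 10) -> None:
    # Sender side: push data as fast as the link allows for a fixed time.
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.time() + seconds
        while time.time() < deadline:
            conn.sendall(BUF)

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])
```

In practice we use a proper benchmark rather than raw sockets, but the topology is exactly this: one VM listens, the other pushes data over the direct link.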

Testing 10G with ESXi was actually more difficult this time around. Drivers for the X550 series are not included in the default image, requiring the admin to install the relevant plug-in. While this enabled the ports to work in the Windows Server 2016 VMs, ESXi would not allow them under VMXNET3 mode, which is typically the high-performance paravirtualized mode. I was unable to find a quick solution, and with the X550 controller being newer, deciphering what needed to be done was also a minefield of frustration.

It is interesting, then, to note that our results for the ASUS board and X550 are similar to our previous results with the ASRock board using the X540. This is ultimately because the chips are broadly similar, with the primary difference being how they communicate with the CPU: the X540 requires PCIe 2.0 x8, while the X550 requires PCIe 3.0 x4. The X550 also introduces some professional-level features, but the 10G copper market remains in Intel’s hands, either because there is no other major player or because professional environments turn to fiber instead.
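As a quick sanity check on those link widths (using the usual rule-of-thumb figures of roughly 500 MB/s per PCIe 2.0 lane and 985 MB/s per PCIe 3.0 lane after encoding overhead), both uplinks comfortably exceed what two 10GbE ports can saturate:

```python
# Back-of-the-envelope uplink bandwidth for the two controllers.
# Approximate per-lane throughput after encoding overhead:
#   PCIe 2.0 ~ 500 MB/s per lane, PCIe 3.0 ~ 985 MB/s per lane.

x540_uplink = 8 * 500          # PCIe 2.0 x8 -> ~4000 MB/s
x550_uplink = 4 * 985          # PCIe 3.0 x4 -> ~3940 MB/s
dual_10gbe  = 2 * 10_000 / 8   # two 10 Gb/s ports -> 2500 MB/s of payload

print(f"X540 uplink: ~{x540_uplink} MB/s")
print(f"X550 uplink: ~{x550_uplink} MB/s")
print(f"Dual 10GbE line rate: ~{dual_10gbe:.0f} MB/s")
```

So the narrower X550 link is not a practical constraint for dual 10G; the move to PCIe 3.0 x4 mostly frees up lanes for the rest of the board.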

When we last performed this test with the ASRock X99 WS-E/10G, a number of our readers were very helpful in describing ways in which 10G network performance could be improved (with the right hardware and know-how). As our test is point-to-point without a managed switch, and given how frustrating the environment is to learn to debug, I highly recommend reading the post by Jammrock back in that review. But both 10G ports do work, I can tell you that.

63 Comments


  • dsumanik - Monday, November 7, 2016 - link

    Agreed, but there is a lot of PCIe lane juggling on this board as it is. With the number of modern external and internal interfaces being pushed currently, the days of 'one board to do it all' may be gone forever, sadly.

    Ultimately, this board is going to appeal to users who want to reclaim the PCIe slots taken up by 10G add-in cards in their current rigs.

    IMO the idle power is a bit of a concern; over the life of the board it is going to add up, especially if used for server duties.
  • Notmyusualid - Friday, December 2, 2016 - link

    Yes, I noticed that too - hence I just picked up a new ASRock X99 WS-E/10G, which has the Thunderbolt header (TB2 I think it is - but that is fine with me). But what I didn't expect was that I'd need to BUY the PCIe card to actually present the interface. I must admit, I expected something like that to be in the box. More expense.

    Just waiting for my E5-2690 v4 Broadwell-EP 14-core 135W 35MB CPU to clear customs so I can check it all out...
  • sorten - Monday, November 7, 2016 - link

    What is the use case for 10G in the home?
  • jkhoward - Monday, November 7, 2016 - link

    People who render using multiple workstations want a super-fast network. You can chain multiple systems together to render something faster. Think: a home graphic designer/video editor.
  • timbotim - Monday, November 7, 2016 - link

    My primary use case is moving VMs around the network in ~30 seconds at 10 Gb/s rather than ~5 minutes at 1 Gb/s.
  • beginner99 - Tuesday, November 8, 2016 - link

    That's a niche use case, and you will need a PCIe SSD to write that much data in such a short time: a 20 GB VM in 30 seconds would require a write speed of about 680 MB/s.
  • sorten - Monday, November 7, 2016 - link

    I see, so the average consumer running a render farm in their home office ;-)
  • philehidiot - Tuesday, November 8, 2016 - link

    Personally, I tend to render farts in my home office.

    I do not require quite so many PCIe lanes for this.
  • slyphnier - Wednesday, November 9, 2016 - link

    That's not cost-efficient for a home graphic designer/video editor, because you end up spending more than $15k (depending on the number of workstations) on multiple workstations plus the switch/router... even if your system/rig lasts 3-4 years, it will be much cheaper to go the rental rendering server/office route.

    I believe availability of this board is limited, both in the shops that carry it and in the quantities available.
  • Notmyusualid - Friday, December 2, 2016 - link

    They ARE limited - I cannot find waterblocks for mine... But I can live with that.

    At least with your own hardware it's a KNOWN cost, and a provider isn't going to contact you to say you owe $7k USD this month in network over-usage due to some redirection error you made...
