SLI, SLI, SLI

In our last review of a motherboard capable of 4-way GPU gaming, it was eloquently stated that our failure to test the feature was an indication of AnandTech’s current path on quality. I would have preferred a discussion about the difficulty of actually sourcing four identical GPUs at the time (and then showing ‘wow, it works’ in the review). There are plenty of things we don’t test as rigorously on each product that some users might like, such as PCIe x1 slot bandwidth and latency, or USB 2.0 performance on CentOS, either due to time, relevance, or, as was the case before, a lack of hardware. This is still the case: I do not have four expensive and high-profile GPUs, and I live a whole continent and an ocean away from our GPU testing facility. I have, however, procured a third identical GTX 980, and will show you that at least 3-way GPU works in our test suite.

Alien Isolation GTX 980 SLI on ASUS X99-E-10G WS

Grand Theft Auto 5 SLI on ASUS X99-E-10G WS

GRID Autosport SLI on ASUS X99-E-10G WS

Shadow of Mordor SLI on ASUS X99-E-10G WS (1080p)

Shadow of Mordor SLI on ASUS X99-E-10G WS (4K)

GPU scaling beyond two cards depends on the game engine and the drivers, or if the game has implemented specific multi-GPU modes to accelerate game features. For some titles, the GPU is not the bottleneck, and it might either be CPU performance, PCIe bandwidth, DRAM, or the fact that the game doesn’t scale and we’re relying on the performance of a single card.

Both GTA5 and Alien Isolation scaled to two cards with our hardware setup, but failed to improve when going to three. GRID is typically a title that scales with almost anything; however, the jump from two to three cards was only 7%.

Shadow of Mordor gets the best scaling, but only at 4K and not at 1080p. At 1080p the move from 1 GPU (98 FPS) to 2 GPUs (150 FPS) is significant, but the jump to 3 GPUs (158 FPS) is not. At 4K, the scaling keeps going from 1 GPU (39 FPS) to 2 GPUs (69 FPS) and 3 GPUs (80 FPS), although not as much for that last jump. At 4K we are running our Ultra preset, indicating that some other compute-heavy part of frame rendering might be the limiting factor in AFR modes.
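As an illustrative aside (not part of the review's methodology), the FPS figures above can be converted into scaling efficiency relative to ideal linear scaling; a quick sketch in Python:

```python
# Multi-GPU scaling efficiency from the Shadow of Mordor figures above.
# Efficiency = measured FPS / (single-GPU FPS * number of GPUs).

def scaling_efficiency(fps_by_gpu_count):
    """Return per-configuration efficiency relative to ideal linear scaling."""
    single = fps_by_gpu_count[1]
    return {n: fps / (single * n) for n, fps in fps_by_gpu_count.items()}

mordor_1080p = {1: 98, 2: 150, 3: 158}
mordor_4k = {1: 39, 2: 69, 3: 80}

for name, data in [("1080p", mordor_1080p), ("4K", mordor_4k)]:
    eff = scaling_efficiency(data)
    print(name, {n: f"{e:.0%}" for n, e in eff.items()})
```

This works out to roughly 77% (2 GPUs) and 54% (3 GPUs) of ideal at 1080p, versus 88% and 68% at 4K, consistent with the observation that the higher resolution scales better.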

Testing 10GBase-T and X550-AT2

Similar to our GPU testing, we do not have the ideal hardware for Ethernet testing. In our previous 10G motherboard review, we implemented an ESXi 6 platform and used two Windows Server VMs, each with 8 threads, 16GB of DRAM, and one of the 10G ports. As a result, each VM had a direct OS-to-OS 10G connection with a custom-configured IP, and testing was done over that link.

Testing 10G with ESXi was actually more difficult this time around. The X550 series drivers are not supported in the default image, requiring the admin to install the relevant plug-in. While this enabled the ports to work in the Windows Server 2016 VMs, ESXi would not allow them under VMXNET3 mode, which is typically the high-performance mode. I was unable to find a quick solution, and with the X550 controller being newer, deciphering what needed to be done was also a minefield of frustration.

It is interesting then to note that our results for the ASUS board and X550 are similar to previous results with the ASRock board using X540. This is ultimately because the chips are mostly similar, with the primary difference being the way they communicate with the CPU: the X540 requires PCIe 2.0 x8, while the X550 requires PCIe 3.0 x4. The X550 also introduces some professional-level features, but the 10GBase-T copper market remains in Intel’s hands without another major player (professional environments tend to turn to fiber instead).

When we last performed this testing with the ASRock X99 WS-E/10G, a number of our readers were very helpful in describing ways in which 10G network performance (with the right hardware and knowledge) could be improved. As our test is point-to-point without a managed switch, and debugging the environment was a frustrating learning experience, I highly recommend reading the post by Jammrock back in that review. But I can tell you that both 10G ports do work.
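For readers curious about the basic shape of an OS-to-OS throughput test like the one described above, here is a hypothetical minimal sketch in Python (the review does not specify the benchmark tool used; a real 10G test would use something purpose-built such as iperf):

```python
import socket
import threading
import time

PAYLOAD = b"\x00" * (1 << 20)   # 1 MiB send buffer
TOTAL_BYTES = 256 * (1 << 20)   # transfer 256 MiB per run

def receiver(listener, results):
    """Accept one connection and count every byte received."""
    conn, _ = listener.accept()
    received = 0
    while True:
        chunk = conn.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    results["received"] = received

def measure_throughput(host="127.0.0.1"):
    """Return throughput in Gbit/s for a bulk TCP transfer to `host`."""
    listener = socket.socket()
    listener.bind((host, 0))
    listener.listen(1)
    results = {}
    t = threading.Thread(target=receiver, args=(listener, results))
    t.start()
    sender = socket.create_connection(listener.getsockname())
    start = time.perf_counter()
    sent = 0
    while sent < TOTAL_BYTES:
        sender.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    sender.close()
    t.join()   # wait until the receiver has drained everything
    elapsed = time.perf_counter() - start
    listener.close()
    return results["received"] * 8 / elapsed / 1e9

print(f"{measure_throughput():.2f} Gbit/s")
```

As written this runs both ends on one machine over loopback, so it mostly measures memory bandwidth; splitting the sender and receiver across the two 10G endpoints and pointing the sender at the other port's IP gives a rough line-rate figure for a point-to-point setup like the one in this review.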

63 Comments
  • kgardas - Tuesday, November 8, 2016 - link

    Looks really nice; ~6W for 10Gbit is good and very low by today's standards. The only drawback in comparison with Intel is PCIe 2.0 support only, so for 10Gbit you need 4 PCIe lanes. Otherwise I'm looking forward to seeing this card here...
  • Notmyusualid - Friday, December 2, 2016 - link

    @ kgardas: You should have seen our 10G DWDM telecom equipment, back in late 1998... more than 6W I can tell you :) , in fact we couldn't get it to work without forced air, each transceiver taking up a whole rack shelf, and we could only fit three shelves / rack space. The electrical complexity / number of boards to make it work was astounding.

    Incredible to see it done on a single card now, and more often now, even multiples of, on a single card.

    So yes, tech moves on...
  • Lolimaster - Tuesday, November 8, 2016 - link

    I think you should divide the PSUs used:

    A high-wattage PSU (850W+) only for multi-GPU tests
    500-650W Titanium for any CPU + single GPU / APU / Intel IGP powered systems
  • ads295 - Wednesday, November 9, 2016 - link

    You know how those clickbait websites show cleavage or a$$?
    The thumbnail for this article led me to open it in the same vein. :O
  • Breit - Thursday, November 10, 2016 - link

    Thanks for this review Ian, very informative.

    While reading the comments here, the single feature that seems to attract the most attention is the inclusion of 10G Ethernet. As it seems rather hard to implement a good performing 10G network compared to 1G, maybe an AnandTech-style in-depth article about 10G networking in general would be appreciated by the readers of this site. Just a suggestion.
    At least I would appreciate it... ;)
  • JlHADJOE - Friday, November 11, 2016 - link

    Didn't think I'd see the day when an ASUS motherboard is both cheaper and has more features than its ASRock counterpart.
  • Notmyusualid - Friday, December 2, 2016 - link

    More features?

    I don't see a SATA DOM port.

    It is missing 2x 1Gb Ethernet ports.

    It is missing 2 SATA ports (12 vs 10)

    It has only a 10-phase power solution, vs 12-phase.

    It has no USB 2.0 ports, did I read correctly?

    It also has no fan on the 10G heatsink, which allows the case temp to equalize with outside temps for some time after shutdown, to avoid condensation building up in the case.

    Can you mount the same range of M.2 SSDs in this? I see only two mounting holes, mine has four...

    It is missing the board-mounted USB port, for a DRM-related stick, or whatever you need connecting / secured on the INSIDE of a case.

    I also believe I have LAN LED headers to put network activity on the front panel, as one does with their hard disks.

    So tell me if I'm wrong, please.

    One thing I'll say, I do find the 6-pin board power connector much more elegant than my 4-pin Molex connector. And I cannot STAND my anodized blue... the black on the ASUS is also more elegant.

    Anybody who needs their PCIe slots lit up to choose the right combo shouldn't be allowed to buy it...
  • Notmyusualid - Friday, December 2, 2016 - link

    also @ Jihadjoe

    Mine has TB header too. Almost forgot about that...
  • Hixbot - Tuesday, November 22, 2016 - link

    Don't understand the move to 10G copper. We should be transitioning towards 10G fiber. Copper can't carry 10G a practical distance: 55 meters for unshielded Cat 6 cable, which is not very far; 100 meters for shielded Cat 6, which is more reasonable. But has anyone priced shielded Cat 6 cable? It's very expensive, and good luck terminating shielded RJ45 yourself to Cat 6 standards. In my workplace, we've had to order pre-terminated lengths of shielded Cat 6. Whenever we use fiber it's easier to terminate, costs are much cheaper, and distance is practically unlimited.

    So what is with the move to 10G copper?
  • Notmyusualid - Friday, December 2, 2016 - link

    As an owner of the asrock, I too would have preferred SFP sockets.

    But SMBs CAN afford $700 for a switch, and many of them have little fiber. My 2c.
