DirectX 12 Multi-GPU Performance

Shifting gears, let’s take a look at multi-GPU performance in the latest Ashes beta. The focus of our previous article, Ashes’ support for DX12 explicit multi-GPU makes it the first game able to pair up RTG and NVIDIA GPUs in an AFR setup. Like traditional same-vendor AFR configurations, Ashes’ AFR setup works best when both GPUs are similar in performance, so although this technology does allow for some unusual cross-vendor comparisons, it does not (yet) benefit from pairing GPUs that differ widely in performance, such as a last-generation video card with a current-generation video card. Nonetheless, running a Radeon and a GeForce card together is an interesting sight, if only for the sheer audacity of it.
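
For readers curious what “explicit multi-GPU” looks like from the application side, the sketch below shows the first step: rather than relying on a vendor driver profile, a DX12 title enumerates every adapter in the system and creates an independent device for each one, which is what allows a Radeon and a GeForce to be driven side by side. This is a minimal illustration using the public DXGI/D3D12 APIs, not Ashes’ actual code, and the helper function name is ours.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every DX12-capable adapter and create an independent device for
// each one -- the basis of "unlinked" explicit multi-adapter, where the
// application (not the driver) decides how to split work across GPUs.
std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // Vendor doesn't matter here: an AMD and an NVIDIA card both
            // simply show up as additional devices.
            devices.push_back(device);
        }
    }
    return devices;
}
```

From there each device gets its own command queues, allocators, and surfaces; nothing in the API requires the two adapters to come from the same vendor, which is why cross-vendor AFR is possible at all.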

Meanwhile, the significant performance optimizations made between the last beta build and this latest build have had an equally significant knock-on effect on multi-GPU performance compared to the last time we looked at the game.

[Chart: Ashes of the Singularity (Beta) - 3840x2160 - High Quality - MGPU]

Even at 4K, a pair of GPUs ends up being almost too much at Ashes’ High quality setting. All four multi-GPU configurations are over 60fps, with the fastest Fury X + 980 Ti configuration nudging past 70fps. Meanwhile the lead over our two fastest single-GPU configurations is not especially great, particularly compared to the Fury X, with the Fury X + 980 Ti configuration coming in only 15fps (27%) faster than a single card. The all-NVIDIA comparison does fare better in this regard, but only because of the GTX 980 Ti’s lower initial performance.

Digging deeper, what we find is that even at 4K we’re actually CPU limited according to the benchmark data. Across all four multi-GPU configurations, our hex-core overclocked Core i7-4960X can only set up frames at roughly 70fps, versus 100fps+ for a single-GPU configuration.


[Benchmark result screenshots - Top: Fury X. Bottom: Fury X + 980 Ti]

The increased CPU load from utilizing multi-GPU is to be expected, as the CPU now needs to spend time synchronizing the GPUs and waiting on them to transfer data between each other. However, dropping to 70fps means that Ashes has become a surprisingly heavy CPU test as well, and that 4K at High quality alone isn’t enough to max out our dual-GPU configurations.
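
To give a rough idea of where that synchronization overhead comes from, the sketch below assumes the unlinked explicit multi-adapter path DX12 exposes: the two devices share a fence (and, not shown, a cross-adapter resource), and every frame the lead GPU’s queue must wait for the secondary GPU to signal that its frame is finished before the copy and composite can proceed. The function names and frame plumbing are hypothetical, not Ashes’ code; only the D3D12 calls themselves are real.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Share one fence between two independent D3D12 devices so the primary GPU
// can wait for the secondary GPU's frame before consuming it.
// Device and queue creation are assumed to have happened elsewhere.
void SetupCrossAdapterFence(ID3D12Device* primaryDevice,
                            ID3D12Device* secondaryDevice,
                            ComPtr<ID3D12Fence>& primaryFence,
                            ComPtr<ID3D12Fence>& secondaryFence)
{
    // Create the fence on the secondary device, marked shareable across adapters.
    secondaryDevice->CreateFence(0,
        D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
        IID_PPV_ARGS(&secondaryFence));

    // Export it as an NT handle and open the same fence on the primary device.
    HANDLE sharedHandle = nullptr;
    secondaryDevice->CreateSharedHandle(secondaryFence.Get(), nullptr,
                                        GENERIC_ALL, nullptr, &sharedHandle);
    primaryDevice->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&primaryFence));
    CloseHandle(sharedHandle);
}

// Per frame: the secondary queue signals when its frame is rendered, and the
// primary queue waits on that value before copying and compositing it.
void SubmitFrame(ID3D12CommandQueue* secondaryQueue, ID3D12CommandQueue* primaryQueue,
                 ID3D12Fence* secondaryFence, ID3D12Fence* primaryFence,
                 UINT64 frameFenceValue)
{
    secondaryQueue->Signal(secondaryFence, frameFenceValue); // "frame N done" on GPU 2
    primaryQueue->Wait(primaryFence, frameFenceValue);       // GPU 1 stalls until then
    // ...the primary queue then executes a command list that copies the
    // cross-adapter resource and composites/presents the finished frame.
}
```

Every one of those Signal/Wait pairs, plus the cross-adapter copy they guard, is extra work the CPU has to record and submit each frame, which is consistent with the lower CPU frame rate the benchmark reports in multi-GPU mode.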

[Chart: Ashes of the Singularity (Beta) - 3840x2160 - Extreme Quality - MGPU]

Cranking the quality setting up to Extreme finally gives our dual-GPU configurations enough of a workload to back off from the CPU performance cap. Once again the fastest configuration is the Fury X + 980 Ti, which lands just short of 60fps, followed by the Fury X + Fury configuration at 55.1fps. In our first look at Ashes’ multi-GPU scaling we found that having a Fury X card as the lead card resulted in better performance, and this has not changed with the newest beta; the Fury continues to be faster at reading data off of other cards. Still, the gap between the Fury X + 980 Ti configuration and the 980 Ti + Fury X configuration has closed somewhat compared to last time, and now stands at 11%.

Backing off from the CPU limit has also put the multi-GPU configurations well ahead of the single-GPU configurations. We’re now looking at upwards of a 65% performance boost versus a single GTX 980 Ti, and a smaller 31% boost versus a single Fury X. These are smaller gains for multi-GPU configurations than we first saw last year, but that is very much a consequence of Ashes’ improved performance across the board. Though we didn’t have time to test it, Ashes does have one higher quality setting – Crazy – which may drive a somewhat larger wedge between the multi-GPU configurations and the Fury X, though the overhead of synchronization will always present a roadblock.

Comments

  • tuxRoller - Friday, February 26, 2016 - link

    It's the simpler drivers which provide less room to hide architectural deficiencies.
    My point was that, across the board, gcn improves its performance a good deal relative to d3d11. That includes cards that are four years old. I don't think Maxwell is older than that.
    I don't think we are really disagreeing, though.
  • RMSe17 - Wednesday, February 24, 2016 - link

    Nowhere near as bad as the DX9 fiasco back in the FX 5xxx days, where a low-end ATi card would demolish the highest-end GeForce.
  • pt2501 - Thursday, February 25, 2016 - link

    Few if any here are going to remember the fiasco when the Radeon 9700 Pro demolished the competition in performance and stability. Even fewer remember NVIDIA "optimizing" games with lower quality textures to compete.
  • dray67 - Thursday, February 25, 2016 - link

    I remember it, and it was the reason I went for the 9700 and later the 9800. At the moment I'm back to NVIDIA; I've had 2 AMD cards die on me due to heat, and as much as I like them, I've had my fingers burnt and moved away from them. If DX12 and dual-GPU support become better established, I'll buy a high-end AMD card in an instant.
  • knightspawn1138 - Thursday, February 25, 2016 - link

    I remember it clearly. My Radeon 9800 was the last ATI card I bought. I loved it for years, and only ended up replacing it with an NVidia card when the Catalyst Control Center started sucking all the cycles out of my CPU. It's funny that half of the comments on this article complain that NVidia's drivers are over-optimized for every specific game, yet ATI and AMD were content to allow the CCC to be a resource hog that ruined even non-gaming performance for years. I'm happy with my NVidia cards. I've been able to easily play all modern games with great performance using a pair of GTX 460's, and recently replaced those with a GTX 970.
  • xenol - Thursday, February 25, 2016 - link

    Considering there aren't any other async shader games in development or even announced, and Pascal is coming within the next year (by which point a game might actually use DX12) and will probably alleviate the situation, your evaluation of NVIDIA's situation is pretty poor.

    It takes more than a generation or a game to make a hardware company go down. NVIDIA suffered plenty during its GeForce FX days, and it got right back on its feet.
  • MattKa - Thursday, February 25, 2016 - link

    No, no, no. An RTS game that probably isn't going to sell very well and seems incredibly lacking is going to destroy Nvidia.
  • gamerk2 - Thursday, February 25, 2016 - link

    AMD has had an async compute engine in their GPUs going back to the 7000 series. NVIDIA has not. Stands to reason AMD would do better in async compute based benchmarking.

    Let's see how Pascal compares, since it's being designed with DX12, and async compute, in mind.
  • agentbb007 - Saturday, February 27, 2016 - link

    "NVIDIA telling us that async shading is not currently enabled in their drivers", yeah this pretty much sums it up. This beta stuff is interesting but just that beta...
  • JlHADJOE - Saturday, February 27, 2016 - link

    The GTX 680 seems to have done well though. I feel like Maxwell is being let down by the compromises Nvidia made optimizing for FP16 only and sacrificing real compute performance.
