Ashes GPU Performance: Single & Mixed High-End GPUs

Since seeing is believing, we’ll start things off with a look at some directly captured recordings of Ashes running the built-in benchmark. These recordings showcase a Radeon R9 Fury X and a GeForce GTX 980 Ti in AFR mode, with each video swapping the primary and secondary video card. Both videos were captured with our 2560x1440 settings and with v-sync on, though YouTube limits 60fps videos to 1080p at this time.

Overall you’d be hard-pressed to find a difference between the two videos. No matter which GPU is primary, both setups render correctly and without issue, showcasing that DirectX 12 explicit multi-adapter and Ashes’ AFR implementation on top of it are working as expected.
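
For readers curious what “unlinked” explicit multi-adapter means at the code level, the sketch below shows the general idea: the application, rather than the driver, enumerates every GPU in the system and creates an independent Direct3D 12 device on each one, which is what allows a Radeon and a GeForce to be paired at all. This is our own illustrative example rather than Oxide’s code, and the helper function name and adapter-selection policy are assumptions.

```cpp
// Sketch (not Oxide's code): enumerate every hardware adapter and create an
// independent ID3D12Device on each. Under unlinked explicit multi-adapter the
// two devices share nothing by default; the application manages them itself.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP and other software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices; // e.g. one device for the Fury X, one for the 980 Ti
}
```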

Diving into our results then, let’s start with performance at 2560x1440.

Ashes of the Singularity (Alpha) - 2560x1440 - High Quality - 2x MSAA

It’s interesting to note that these settings were chosen first and the cards second. So the fact that the GTX 980 Ti and R9 Fury X end up so close in average performance comes as a pleasant surprise. With AFR performance gains dependent in part on how similar the two cards are, this closely matched pairing should give us a better look at multi-GPU scaling than a pairing of cards that differ widely in performance.

In any case, what we find is that in a single-card setup the GTX 980 Ti and R9 Fury X are within 5% of each other with the older driver sets we needed to use for AFR compatibility. Both AMD and NVIDIA do see some performance gains with newer driver sets, with NVIDIA picking up 8% while AMD picks up 5%.

But more importantly, let’s talk about multi-GPU setups. If everything is working correctly and there are no unexpected bottlenecks, then on paper mixed GPU setups should deliver similar results no matter which card is the primary. And indeed that’s exactly what we find here, with only 1.4fps (2%) separating the GeForce + Radeon setups. Using the Radeon Fury X as the primary card gets the best results at 70.8fps, while swapping the order to let the GTX 980 Ti lead gives us 69.4fps.
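
To make the primary/secondary distinction concrete, here’s a toy sketch of how a two-card AFR loop divides the work: even frames render on the primary card, odd frames render on the secondary card and are copied across, and only the primary card (the one driving the display) ever presents. The Gpu struct and stub functions are stand-ins of our own devising, not anything from Ashes or the Nitrous engine.

```cpp
// Toy AFR dispatch loop (our own stand-in code, not Ashes'): even frames go
// to the primary GPU, odd frames to the secondary GPU, and every finished
// secondary frame is copied to the primary before being presented.
#include <cstdint>
#include <cstdio>

struct Gpu { const char* name; };

void RenderFrame(const Gpu& gpu, uint64_t frame)
{
    std::printf("frame %llu rendered on %s\n",
                static_cast<unsigned long long>(frame), gpu.name);
}
void CopyToPrimary(const Gpu& from, const Gpu& to)
{
    std::printf("  copied from %s to %s\n", from.name, to.name);
}
void PresentFrom(const Gpu& primary)
{
    std::printf("  presented by %s\n", primary.name);
}

int main()
{
    Gpu primary   = {"R9 Fury X (primary)"};
    Gpu secondary = {"GTX 980 Ti (secondary)"};

    for (uint64_t frame = 0; frame < 4; ++frame)
    {
        const bool onPrimary = (frame % 2 == 0);
        RenderFrame(onPrimary ? primary : secondary, frame);
        if (!onPrimary)
            CopyToPrimary(secondary, primary); // cross-adapter copy of the finished frame
        PresentFrom(primary);                  // only the primary card drives the display
    }
    return 0;
}
```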

However would you believe that the mixed GPU setups are faster than the homogeneous setups? Trailing the mixed setups is the R9 Fury X + R9 Fury setup, averaging 67.1fps and trailing the slower mixed setup by 3.5%. Slower still – and unexpectedly so – is the GTX 980 Ti + GTX Titan X setup, which averages just 61.6fps, some 12% slower than the GTX 980 Ti + Fury X setup. The card setup is admittedly somewhat unusual here – in order to consistently use the GTX 980 Ti as the primary card we had to make the secondary card a GTX Titan X, seeing as how we don’t have another GTX 980 Ti or a third-tier GM200 card comparable to the R9 Fury – but even so the only impact here should be that the GTX Titan X doesn’t get to stretch its legs quite as much, since it needs to wait on the slightly slower GTX 980 Ti primary in order to stay in sync.

Ashes of the Singularity (Alpha) - 2560x1440 - Multi-GPU Perf. Gains

Looking at things from the perspective of overall performance gains, the extra performance we see here isn’t going to be chart-topping (an optimized SLI/CF setup can get better than 80% gains), but overall the data here confirms our earlier raw results: we’re actually seeing a significant uptick in performance with the mixed GPU setups. R9 Fury X + GTX 980 Ti is some 75% faster than a single R9 Fury X, while GTX 980 Ti + R9 Fury X is 64% faster than a single GTX 980 Ti. Meanwhile the dual AMD setup sees a 66% performance gain, followed by the dual NVIDIA setup at only 46%.
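
For reference, the gains in this chart are simply the multi-GPU average framerate measured against the corresponding single-card result. The snippet below reconstructs the math; the ~40.5fps single-card figure is inferred from the 70.8fps and +75% numbers above rather than quoted from a separate measurement.

```cpp
// Reconstruction of the perf-gain math: gain = multi-GPU fps / single-GPU fps - 1.
// The 40.5fps single R9 Fury X figure below is inferred from the 70.8fps and
// +75% results quoted in the text, not a separately reported measurement.
#include <cstdio>

double ScalingGainPercent(double multiGpuFps, double singleGpuFps)
{
    return (multiGpuFps / singleGpuFps - 1.0) * 100.0;
}

int main()
{
    std::printf("R9 Fury X + GTX 980 Ti vs. single R9 Fury X: +%.0f%%\n",
                ScalingGainPercent(70.8, 40.5)); // prints roughly +75%
    return 0;
}
```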

The most surprising thing about all of this is that the greatest gains are with the mixed GPU setups. It’s not immediately clear why this is, whether there’s something more efficient about having each vendor and their drivers operate one GPU instead of two, or whether AMD and NVIDIA are just more compatible than either company cares to admit. Either way this shows that even with Ashes’ basic AFR implementation, multi-adapter rendering is working and working well. Meanwhile the one outlier, as we briefly discussed before, is the dual NVIDIA setup, which just doesn’t scale quite as well.

Finally let’s take a quick look at the GPU utilization statistics from MSI Afterburner, with our top-performing R9 Fury X + GTX 980 Ti setup. Here we can see the R9 Fury X in its primary card role operate at near-100% load the entire time, while the GTX 980 Ti secondary card is averaging close to 90% utilization. The difference, as best as we can tell, is the fact that the secondary card has to wait on additional synchronization information from the primary card, while the primary card is always either rendering or reading in a frame from the secondary card.
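
While we don’t have visibility into Oxide’s actual code, the synchronization pattern that would produce this behavior is well defined in DirectX 12: a fence created with the cross-adapter sharing flags can be signaled by one device’s queue and waited on by the other’s. The sketch below, with our own variable and function names, shows the general shape of it.

```cpp
// Sketch of DX12 cross-adapter fence synchronization (our own structure, not
// Oxide's code): the fence is created on the secondary device, opened on the
// primary device, and used so the primary queue only reads a shared frame
// after the secondary queue has finished writing it.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateCrossAdapterFence(ID3D12Device* secondaryDevice, ID3D12Device* primaryDevice,
                             ComPtr<ID3D12Fence>& fenceOnSecondary,
                             ComPtr<ID3D12Fence>& fenceOnPrimary)
{
    secondaryDevice->CreateFence(0,
        D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
        IID_PPV_ARGS(&fenceOnSecondary));

    HANDLE handle = nullptr;
    secondaryDevice->CreateSharedHandle(fenceOnSecondary.Get(), nullptr,
                                        GENERIC_ALL, nullptr, &handle);
    primaryDevice->OpenSharedHandle(handle, IID_PPV_ARGS(&fenceOnPrimary));
    CloseHandle(handle);
}

// Per AFR frame rendered on the secondary GPU (command queues not shown):
//   secondaryQueue->Signal(fenceOnSecondary.Get(), frameIndex); // "frame N is ready"
//   primaryQueue->Wait(fenceOnPrimary.Get(), frameIndex);       // primary stalls until then
```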

3840x2160

Now we’ll kick things up a notch by increasing the resolution to 3840x2160.

Ashes of the Singularity (Alpha) - 3840x2160 - High Quality - 2x MSAA

Despite the fact that this was just a resolution increase, the performance landscape has shifted by more than we would expect here. The top configuration is still the mixed GPU configuration, with the R9 Fury X + GTX 980 Ti setup taking the top spot. However the inverse configuration of the GTX 980 Ti + R9 Fury X isn’t neck-and-neck this time, rather it’s some 15% slower. Meanwhile the pokey-at-2560 dual NVIDIA setup is now in second place by a hair, trailing the mixed GPU setup by 10%. Following that at just 0.1fps lower is the dual AMD setup with 46.7fps.

These tests are run multiple times and we can consistently get these results, so we are not looking at a fluke. At 3840x2160 the fastest setup by a respectable 10.5% margin is the R9 Fury X + GTX 980 Ti setup, which for an experimental implementation of DX12 unlinked explicit multi-adapter is quite astonishing. “Why not both” may be more than just a meme here…

From a technical perspective the fact that there’s now a wider divergence between the mixed GPU setups is unexpected, but not necessarily irrational. If the R9 Fury X is better at reading shared resources than the GTX 980 Ti – be it due to hardware differences, driver differences, or both – then that would explain what we’re seeing. Though at the same time we can’t rule out that this is simply an artifact of an early tech demo of this functionality.
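
The “shared resources” in question are DirectX 12’s cross-adapter resources: memory allocated in a heap that both devices can open, which the secondary GPU writes its finished frame into and the primary GPU reads back out of. The sketch below shows how such a resource is typically created; the format, sizes, and function name are illustrative assumptions, and on hardware without cross-adapter row-major texture support a plain buffer would be used instead.

```cpp
// Sketch of a cross-adapter shared texture (our own illustrative code): a heap
// created with the cross-adapter flags on one device is opened on the other,
// and each device places a resource in it. The secondary GPU copies its
// finished AFR frame into this resource; the primary GPU reads it back out.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateCrossAdapterFrame(ID3D12Device* writerDevice, ID3D12Device* readerDevice,
                             UINT width, UINT height,
                             ComPtr<ID3D12Heap>& heapOnWriter, ComPtr<ID3D12Heap>& heapOnReader,
                             ComPtr<ID3D12Resource>& texOnWriter, ComPtr<ID3D12Resource>& texOnReader)
{
    // Cross-adapter textures must use a row-major layout and the cross-adapter
    // resource flag. (Hardware without row-major texture support would use a
    // plain buffer here instead.)
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = width;
    desc.Height           = height;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    const D3D12_RESOURCE_ALLOCATION_INFO info = writerDevice->GetResourceAllocationInfo(0, 1, &desc);

    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes     = info.SizeInBytes;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags           = D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    // Allocate the heap on the writer, then open the same memory on the reader.
    writerDevice->CreateHeap(&heapDesc, IID_PPV_ARGS(&heapOnWriter));

    HANDLE handle = nullptr;
    writerDevice->CreateSharedHandle(heapOnWriter.Get(), nullptr, GENERIC_ALL, nullptr, &handle);
    readerDevice->OpenSharedHandle(handle, IID_PPV_ARGS(&heapOnReader));
    CloseHandle(handle);

    // Each device gets its own resource view of the shared memory.
    writerDevice->CreatePlacedResource(heapOnWriter.Get(), 0, &desc,
        D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&texOnWriter));
    readerDevice->CreatePlacedResource(heapOnReader.Get(), 0, &desc,
        D3D12_RESOURCE_STATE_COPY_SOURCE, nullptr, IID_PPV_ARGS(&texOnReader));
}
```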

Ashes of the Singularity (Alpha) - 3840x2160 - Multi-GPU Perf. Gains

As you’d expect from those raw numbers, the best perf gains are with the R9 Fury X + GTX 980 Ti setup, which picks up 66% over running a single card. After that the dual AMD setup picks up 50%, the dual NVIDIA setup 46%, and finally the mixed GTX 980 Ti + R9 Fury X setup brings up the rear with a 40% performance gain.

Finally, taking a quick look at GPU frametimes as reported by Ashes’ internal performance counters, we can see that across all of the multi-GPU setups there’s a bit of consistent variance going on. Overall there’s at least a few milliseconds of difference between the alternating frames, with the dual NVIDIA setup faring the best while the dual AMD setup fares the worst. Otherwise the performance leader, the R9 Fury X + GTX 980 Ti, only averages a bit more variance than the dual NVIDIA setup. At least in this one instance, there’s some evidence to suggest that NVIDIA secondary cards have a harder time supplying frame data to the primary card (regardless of its make), while NVIDIA and AMD primary cards are similar in read performance.
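
For those who want to reproduce this sort of analysis from their own frametime logs, a simple way to quantify the alternating-frame variance is to average the difference between each frame’s GPU time and the next. The short program below demonstrates the calculation with made-up numbers; it is not taken from our benchmark data.

```cpp
// One way to quantify alternating-frame variance: average the absolute
// difference between consecutive per-frame GPU times (in milliseconds).
// The sample values are illustrative only, not from our benchmark logs.
#include <cmath>
#include <cstdio>
#include <vector>

double AverageAlternationDeltaMs(const std::vector<double>& frameTimesMs)
{
    if (frameTimesMs.size() < 2) return 0.0;
    double total = 0.0;
    for (size_t i = 1; i < frameTimesMs.size(); ++i)
        total += std::fabs(frameTimesMs[i] - frameTimesMs[i - 1]);
    return total / static_cast<double>(frameTimesMs.size() - 1);
}

int main()
{
    // Even frames from GPU A, odd frames from GPU B (made-up numbers).
    std::vector<double> frameTimes = {20.1, 23.8, 19.7, 24.2, 20.4, 23.5};
    std::printf("average frame-to-frame delta: %.1f ms\n",
                AverageAlternationDeltaMs(frameTimes));
    return 0;
}
```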

Comments

  • andrew_pz - Tuesday, October 27, 2015 - link

    Radeon placed in 16x slot, GeForce installed to 4x slot only. WHY?
    It's a cheat!
  • silverblue - Tuesday, October 27, 2015 - link

    There isn't a 4x slot on that board. To quote the specs...

    "- 4 x PCI Express 3.0 x16 slots (PCIE1/PCIE2/PCIE4/PCIE5: x16/8/16/0 mode or x16/8/8/8 mode)"

    Even if the GeForce was in an 8x slot, I really doubt it would've made a difference.
  • Ryan Smith - Wednesday, October 28, 2015 - link

    Aye. And just to be clear here, both cards are in x16 slots (we're not using tri-8 mode).
  • brucek2 - Tuesday, October 27, 2015 - link

    The vast majority of PCs, and 100% of consoles, are single GPU (or less). Therefore developers absolutely must ensure their game can run satisfactorily on one GPU, and have very little to gain from investing extra work in enabling multi-GPU support.

    To me this suggests that moving the burden of enabling multi-GPU support from hardware sellers (who can benefit from selling more cards) to game publishers (who basically have no real way to benefit at all) means that the only sane decision is to not invest any additional development or testing in multi-GPU support, and that therefore multi-GPU support will effectively be dead in the DX12 world.

    What am I missing?
  • willgart - Tuesday, October 27, 2015 - link

    Well... you no longer need to swap your card for a bigger one; you can just upgrade your PC with a low- or mid-range card to get a good boost, and you keep your old one. From a long-term point of view we win, not the hardware resellers.
    Imagine today you have a GTX 970; in 4 years you can get a GTX 2970 and have a stronger system than a single 2980 card... especially since the FPS/$ is very interesting.

    And when you compare an HD 7970 + GTX 680 setup, maybe the cost is $100 today(?), to a single GTX 980 which costs nearly $700...
  • brucek2 - Tuesday, October 27, 2015 - link

    I understand the benefit to the user. What I'm worried is missing is the incentive for the game developer. For them the new arrangement sounds like nothing but extra cost and likely extra technical support hassle to make multi-GPU work. Why would they bother? To use your example of a user with a 7970 + 680, the 680 alone would at least meet the console-equivalent settings, so they'd probably just tell you to use that.
  • prtskg - Wednesday, October 28, 2015 - link

    It would make their game run better and thus improve their brand name.
  • brucek2 - Wednesday, October 28, 2015 - link

    Making it run "better" implies it runs "worse" for the 95%+ of PC users (and 100% of console users) who do not have multi-GPU. That's a non-starter. The publisher has to make it a good experience for the overwhelmingly common case of single gpu or they're not going to be in business for very long. Once they've done that, what they are left with is the option to spend more of their own dollars so that a very tiny fraction of users can play the same game at higher graphics settings. Hard to see how that's going to improve their brand name more than virtually anything else they'd choose to spend that money on, and certainly not for the vast majority of users who will never see or know about it.
  • BrokenCrayons - Wednesday, October 28, 2015 - link

    You're not missing anything at all. Multi-GPU systems, at least in the case of there being more than one discrete GPU, represent a small number of halo desktop computers. Desktops, gaming desktops in particular, are already a shrinking market and even the large majority of such systems contain only a single graphics card. This means there's minimal incentive for a developer of a game to bother soaking up the additional cost of adding support for multi GPU systems. As developers are already cost-sensitive and working in a highly competitive business landscape, it seems highly unlikely that they'll be willing to invest the human resources in the additional code or soak up the risks associated with bugs and/or poor performance. In essence, DX12 seems poised to end multi GPU gaming UNLESS the dGPU + iGPU market is large enough in modern computers AND the performance benefits realized are worth the cost to the developers to write code for it. There are, after all, a lot more computers (even laptops and a very limited number of tablets) that contain an Intel graphics processor and an NV or more rarely an AMD dGPU. Though even then, I'd hazard a guess to say that the performance improvement is minimal and not worth the trouble. Plus most computers sold contain only whatever Intel happens to throw onto the CPU die so even that scenario is of limited benefit in a world of mostly integrated graphics processors.
  • mayankleoboy1 - Wednesday, October 28, 2015 - link

    Any idea what LucidLogix are doing these days?
    Last I remember, they had released some software solutions which reduced battery drain on Samsung devices (by dynamically decreasing the game rendering quality).
