Ashes GPU Performance: Single & Mixed 2012 GPUs

While Ashes’ multi-GPU support sees solid performance gains with current-generation high-end GPUs, we wanted to see if those gains would extend to older DirectX 12 GPUs. To that end we’ve put the GeForce GTX 680 and the Radeon HD 7970 through a similar test, running the Ashes benchmark at 2560x1440 with Medium image quality and no MSAA.

[Benchmark chart: Ashes of the Singularity (Alpha) - 2560x1440 - Medium Quality - 0x MSAA]

First off, unlike our high-end GPUs, there’s a distinct performance difference between our AMD and NVIDIA cards. The Radeon HD 7970 performs 22% better here, averaging just 30fps to the GTX 680’s 24.5fps. So right off the bat we’re entering an AFR setup with a moderately unbalanced set of cards.

Once we do turn on AFR, two very different things happen. The GTX 680 + HD 7970 setup is an outright performance regression, with performance down 40% from the single GTX 680. On the other hand, the HD 7970 + GTX 680 setup sees an unexpectedly good performance gain from AFR, picking up a further 55% to 46.4fps.

As this test involves a smaller number of card combinations it’s not clear where the bottlenecks are, but it’s nonetheless very interesting that we get such widely different results depending on which card is in the lead. In the GTX 680 + HD 7970 setup, either the GTX 680 is a bad leader or the HD 7970 is a bad follower, leaving the pairing spinning its proverbial wheels. Letting the HD 7970 lead and the GTX 680 follow, on the other hand, delivers a bigger performance gain than we would have expected for a moderately unbalanced setup with a pair of cards that were never known for their efficient PCIe data transfers. So long as you let the HD 7970 lead, at least in this case you could absolutely get away with a mixed GPU pairing of older cards.
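
For readers wondering how a game even sees such a mixed pair: under DirectX 12’s explicit multi-adapter model the GPUs are enumerated and driven by the application itself rather than hidden behind the driver, and which card "leads" is the application’s call. Below is a minimal sketch of that enumeration step using the standard DXGI/D3D12 calls; this is our own illustration (the build line and the D3D_FEATURE_LEVEL_11_0 probe are our assumptions), not code from Ashes.

```cpp
// Minimal sketch: enumerating the DX12-capable adapters the way an
// explicit multi-adapter title sees them. Build (MSVC):
//   cl /EHsc enum_adapters.cpp dxgi.lib d3d12.lib
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP/software adapters

        // Probe for DX12 support without actually creating a device.
        const bool dx12 = SUCCEEDED(D3D12CreateDevice(
            adapter.Get(), D3D_FEATURE_LEVEL_11_0,
            __uuidof(ID3D12Device), nullptr));

        wprintf(L"Adapter %u: %s (DX12: %s)\n",
                i, desc.Description, dx12 ? L"yes" : L"no");
        // A mixed AFR setup (e.g. HD 7970 leading, GTX 680 following)
        // is simply two of these adapters with a device created on each.
    }
    return 0;
}
```

From there, assigning leader and follower is an application-level scheduling decision, which is presumably why Ashes can swap the HD 7970 and GTX 680 roles freely.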

Comments

  • TallestJon96 - Monday, October 26, 2015 - link

    Crazy stuff. There’s a 50% chance that either AMD or, more likely, NVIDIA locks this out via drivers, unfortunately.

    To me, the most logical use of this is to have a strong GPU rendering the scene and a weak GPU handling post-processing. This way the strong GPU is freed up, and as long as the weak GPU is powerful enough, you don’t get any slowdown or micro-stutter, only an improvement in performance and the opportunity to increase the quality of post-processing. This has significantly fewer complications than AFR, is simpler than two cards working on a single frame, and is pretty economical. For example, I could have kept my 750 Ti with my new 970, had the 750 Ti handle post-processing, and had the 970 do everything else. No micro-stutter, relatively simple, and inexpensive, all while improving performance and post-processing effects. [A code sketch of this kind of split appears after the comments.]

    Between multi-adapter support, multi-core improvements in DX12, FreeSync and G-Sync, HBM, and possibly XPoint, there is quite a bit going on for PC gaming. All of these new technologies fundamentally improve the user experience and the way we render games. Add in the slow march of Moore’s Law, an overdue die shrink next year for GPUs, and the abandonment of last-generation consoles, and the next 3-5 years are looking pretty damn good.
  • Refuge - Tuesday, October 27, 2015 - link

    I think that would be the dumbest thing either one of them could do.

    Also, if they locked it out, then their cards would no longer be DX12 compliant. Losing that endorsement would be a devastating blow, even for Nvidia.
  • Gigaplex - Tuesday, October 27, 2015 - link

    NVIDIA has a habit of making dumb decisions to intentionally sabotage their own hardware when some competitor kit is detected in the system.
  • tamalero - Monday, October 26, 2015 - link

    The question is: will Nvidia be able to block this feature in their drivers? It wouldn’t be the first time they’ve tried to block anything that isn’t Nvidia (see PhysX, which DOES work fine in AMD + Nvidia combos, but is disabled on purpose).
  • martixy - Monday, October 26, 2015 - link

    What about stacking abstractions? Could you theoretically stack linked mode for the main processing on top of unlinked mode for offloading post-processing to the iGPU?
  • Ryan Smith - Monday, October 26, 2015 - link

    Sure. The unlinked iGPU just shows up as another GPU, separate from the linked adapter. [A short code sketch of this distinction appears after the comments.]
  • lorribot - Monday, October 26, 2015 - link

    From a continuous-upgrade point of view you could buy a new card, shove it in as the primary, and keep the old card as a secondary. It could make smaller, more frequent upgrade steps a possibility rather than having to buy the one big card.

    Would be interesting to see something like an HD 7850 paired with a GTX 780 or R9 290
  • boeush - Monday, October 26, 2015 - link

    In addition to post-processing, I wonder what implications/prospects there might be when it comes to offloading physics (PhysX, Havok, etc.) processing onto, say, the iGPU while the dGPU handles pure rendering... Of course that would require a major upgrade to the physics engines to support DX12 and EMA, but then I imagine they should already be well along on that path.
  • Gigaplex - Tuesday, October 27, 2015 - link

    That was already possible with DirectCompute. I don't think many games made much use of it.
  • nathanddrews - Tuesday, October 27, 2015 - link

    This is my fear - that these hyped features will end up not being used AT ALL in the real world. One tech demo proves that you can use different GPUs together... but how many people with multi-GPU setups will honestly choose to buy one of each flagship instead of going full homogeneous SLI or CF?

    It seems to me that the only relevant use case for heterogeneous rendering/compute is to combine an IGP/APU with a dGPU... and so far only AMD has been pushing that feature with their Dual Graphics setup, despite other solutions being available. If it were realistic, I think it would already exist all over.
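
On TallestJon96’s render/post-process split above: mechanically this maps onto DirectX 12’s cross-adapter shared heaps, where one GPU renders into memory that a second GPU can open and post-process. Below is a rough sketch of just the sharing step, assuming the two device pointers and a suitably aligned heapSize already exist; real code would add placed resources, a shared fence, and full error handling.

```cpp
// Sketch of the cross-adapter sharing step for a render/post split.
// Error paths trimmed; heapSize is assumed to be precomputed (64KB-
// aligned) and large enough for the shared render target.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT ShareHeapAcrossAdapters(ID3D12Device* renderDev, // strong GPU
                                ID3D12Device* postDev,   // weak GPU/iGPU
                                UINT64 heapSize,
                                ComPtr<ID3D12Heap>& heapOnPostDev)
{
    // 1. A heap on the rendering GPU, flagged shareable across adapters.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = heapSize;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags = D3D12_HEAP_FLAG_SHARED |
                     D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    ComPtr<ID3D12Heap> heap;
    HRESULT hr = renderDev->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));
    if (FAILED(hr)) return hr;

    // 2. Export it as an NT handle...
    HANDLE shared = nullptr;
    hr = renderDev->CreateSharedHandle(heap.Get(), nullptr, GENERIC_ALL,
                                       nullptr, &shared);
    if (FAILED(hr)) return hr;

    // 3. ...and open the same memory on the post-processing GPU.
    hr = postDev->OpenSharedHandle(shared, IID_PPV_ARGS(&heapOnPostDev));
    CloseHandle(shared);
    return hr;
    // Each device then places a row-major texture in its view of the
    // heap (CreatePlacedResource with
    // D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER) and synchronizes the
    // handoff with a shared fence.
}
```

Whether the copy and synchronization overhead leaves a net win on any given pairing is exactly the sort of question the benchmark above is probing.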
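
And on the linked-plus-unlinked stacking Ryan confirms above: a linked pair is a single ID3D12Device with multiple nodes, while the unlinked iGPU is a separate device created from its own adapter. A small fragment, assuming adapters obtained from an enumeration loop like the one sketched earlier:

```cpp
// Fragment illustrating the linked vs. unlinked distinction.
// 'linkedAdapter' and 'igpuAdapter' are assumed to come from an
// EnumAdapters1 loop like the one shown after the article body.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateDevices(IDXGIAdapter1* linkedAdapter, IDXGIAdapter1* igpuAdapter)
{
    ComPtr<ID3D12Device> linkedDevice, igpuDevice;
    D3D12CreateDevice(linkedAdapter, D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&linkedDevice));
    D3D12CreateDevice(igpuAdapter, D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&igpuDevice));

    // A linked pair reports its GPUs as nodes on the one device...
    UINT nodes = linkedDevice->GetNodeCount(); // e.g. 2 for a linked pair
    // ...with work routed between them via NodeMask bits (1 << node),
    // while the iGPU is addressed as a wholly separate device. The two
    // modes therefore stack without interfering with each other.
    (void)nodes;
}
```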
