First Thoughts

Wrapping up our first look at Ashes of the Singularity and DirectX 12 Explicit Multi-Adapter: when Microsoft first unveiled the technology back at BUILD 2015, I figured it would only be a matter of time until someone put together a game utilizing it. After all, Epic and Square already had their tech demos up and running. However, with the DirectX 12 ecosystem still coming together here in the final months of 2015 – and that goes for games as well as drivers – I wasn’t expecting something quite this soon.

As it stands, the Ashes of the Singularity multi-GPU tech demo is just that: a tech demo, for a game that is itself still in alpha testing. There are still optimizations to be made and numerous bugs to be squashed. But despite all of that, seeing AMD and NVIDIA video cards working together to render a game is damn impressive.

Seeing as this build of Ashes is a tech demo, I’m hesitant to read too much into the precise benchmark numbers we’re seeing. That said, the fact that the fastest multi-GPU setup was a mixed AMD/NVIDIA setup was something I wasn’t expecting, and it definitely makes the results all the more interesting. DirectX 11 games are going to be around for a while longer yet, so we’re likely still some time away from a mixed GPU gaming setup being truly viable, but it will be interesting to see just what Oxide and other developers can pull off with explicit multi-adapter as they become more familiar with the technology and implement more advanced rendering modes.

Meanwhile it’s interesting to note just how far the industry as a whole has come since 2005 or even 2010. GPU architectures have become increasingly similar and tighter API standards have greatly curtailed the number of implementation differences that would prevent interoperability. And with Explicit Multi-Adapter, Microsoft and the GPU vendors have laid down a solid path for allowing game developers to finally tap the performance of multiple GPUs in a system, both integrated and discrete.
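
To make that concrete, the foundation of unlinked explicit multi-adapter is simply enumerating every adapter in the system and creating an independent Direct3D 12 device on each. Below is a minimal sketch of that step; the function name is my own, and error handling and adapter selection policy are omitted:

    // Minimal sketch: create one independent D3D12 device per hardware
    // adapter, the foundation of unlinked explicit multi-adapter.
    // Link against d3d12.lib and dxgi.lib.
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <vector>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
    {
        ComPtr<IDXGIFactory4> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));

        std::vector<ComPtr<ID3D12Device>> devices;
        ComPtr<IDXGIAdapter1> adapter;

        // Discrete boards and the integrated GPU alike show up in this list.
        for (UINT i = 0;
             factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
             ++i)
        {
            DXGI_ADAPTER_DESC1 desc;
            adapter->GetDesc1(&desc);
            if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
                continue; // skip the WARP software rasterizer

            // Each device is fully independent; dividing work between them
            // and copying results across is the application's job.
            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                            D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                devices.push_back(device);
        }
        return devices;
    }

From there it is entirely up to the application to divide rendering work among those devices and schedule the necessary cross-adapter copies, which is exactly the developer legwork discussed below.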

The timing couldn’t be any better either. As integrated GPUs have consumed the low-end GPU market and both CPU vendors devote more die space than ever to their respective integrated GPUs, using a discrete GPU leaves an increasingly large amount of silicon unused in the modern gaming system. Explicit multi-adapter isn’t a silver bullet for that problem, but it is a means of finally putting the integrated GPU to good use even when it’s not a system’s primary GPU.

With that said, it’s important to note that what happens from here is ultimately more in the hands of game developers than hardware developers. Given the nature of the explicit API, it’s now the game developers who have to do most of the legwork of implementing multi-GPU support, and I’m left to wonder how many of them are up to the challenge. Hardware developers have an obvious interest in promoting and developing multi-GPU technology in order to sell more GPUs – which is how we got SLI and CrossFire in the first place – but software developers don’t have that same incentive.

Ultimately as gamers all we can do is take a wait-and-see approach to the whole matter. But as DirectX 12 game development ramps up, I am cautiously optimistic that positive experiences like Ashes will help encourage other developers to plan for multi-adapter support as well.

Comments

  • TallestJon96 - Monday, October 26, 2015 - link

    Crazy stuff. 50% chance either AMD or more likely NVIDIA locks this out via drivers unfortunately.

    To me, the most logical use of this is to have a strong GPU rendering the scene and a weak GPU handling post-processing. This way the strong GPU is freed up, and as long as the weak GPU is powerful enough, you don't get any slowdown or micro-stutter, only an improvement in performance and the opportunity to increase the quality of post-processing. This has significantly fewer complications than AFR, is simpler than two cards working on a single frame, and is pretty economical. For example, I could have kept my 750 Ti alongside my new 970 and had the 750 Ti handle post-processing while the 970 does everything else. No micro-stutter, relatively simple and inexpensive, all while improving performance and post-processing effects. (A sketch of how such a split might work follows this comment.)

    Between multi-adapter support, multi-core improvements in DX12, FreeSync and G-Sync, HBM, and possibly XPoint, there is quite a bit going on for PC gaming. All of these new technologies fundamentally improve the user experience and the way we render games. Add in the slow march of Moore's Law, an overdue die shrink next year for GPUs, and the abandonment of last-generation consoles, and the next 3-5 years are looking pretty damn good.
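
For what it's worth, the core mechanism for the kind of split TallestJon96 describes already exists in D3D12 in the form of cross-adapter shared heaps. The following is a hypothetical sketch only; the sizes, formats, and function name are placeholders, and fencing and error handling are omitted:

    // Hypothetical sketch of that split: the dGPU renders into a
    // cross-adapter shared texture, which the iGPU then opens for its
    // post-processing pass.
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    void ShareFrameAcrossAdapters(ID3D12Device* renderDevice, // fast dGPU
                                  ID3D12Device* postFxDevice) // iGPU
    {
        // 1. A heap flagged for cross-adapter sharing on the render device.
        D3D12_HEAP_DESC heapDesc = {};
        heapDesc.SizeInBytes = 64 * 1024 * 1024; // placeholder size
        heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
        heapDesc.Flags = D3D12_HEAP_FLAG_SHARED |
                         D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

        ComPtr<ID3D12Heap> sharedHeap;
        renderDevice->CreateHeap(&heapDesc, IID_PPV_ARGS(&sharedHeap));

        // 2. An NT handle lets the second device open the same memory.
        HANDLE heapHandle = nullptr;
        renderDevice->CreateSharedHandle(sharedHeap.Get(), nullptr,
                                         GENERIC_ALL, nullptr, &heapHandle);

        ComPtr<ID3D12Heap> importedHeap;
        postFxDevice->OpenSharedHandle(heapHandle,
                                       IID_PPV_ARGS(&importedHeap));
        CloseHandle(heapHandle);

        // 3. Each device places its own view of the frame texture in the
        //    heap; cross-adapter textures must be row-major and carry the
        //    ALLOW_CROSS_ADAPTER flag.
        D3D12_RESOURCE_DESC texDesc = {};
        texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        texDesc.Width = 1920;  // placeholder resolution
        texDesc.Height = 1080;
        texDesc.DepthOrArraySize = 1;
        texDesc.MipLevels = 1;
        texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        texDesc.SampleDesc.Count = 1;
        texDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
        texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

        ComPtr<ID3D12Resource> dGpuFrame, iGpuFrame;
        renderDevice->CreatePlacedResource(sharedHeap.Get(), 0, &texDesc,
            D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
            IID_PPV_ARGS(&dGpuFrame));
        postFxDevice->CreatePlacedResource(importedHeap.Get(), 0, &texDesc,
            D3D12_RESOURCE_STATE_COPY_SOURCE, nullptr,
            IID_PPV_ARGS(&iGpuFrame));

        // From here the dGPU copies its finished frame into dGpuFrame,
        // signals a shared fence, and the iGPU's post pass reads iGpuFrame.
    }
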
  • Refuge - Tuesday, October 27, 2015 - link

    I think that would be the dumbest thing either one of them could do.

    Also, if they locked it out, then their cards would no longer be DX12 compliant. Losing that endorsement would be a devastating blow, even for Nvidia.
  • Gigaplex - Tuesday, October 27, 2015 - link

    NVIDIA has a habit of making dumb decisions to intentionally sabotage their own hardware when some competitor kit is detected in the system.
  • tamalero - Monday, October 26, 2015 - link

    The question is: will Nvidia be able to block this feature in their drivers? It wouldn't be the first time they've tried to block anything that isn't Nvidia (see PhysX, which does work fine in AMD + Nvidia combos but is disabled on purpose).
  • martixy - Monday, October 26, 2015 - link

    What about stacking abstractions? Could you theoretically stack a set of GPUs in linked mode for main processing on top of unlinked mode for offloading post-processing to the iGPU?
  • Ryan Smith - Monday, October 26, 2015 - link

    Sure. The unlinked iGPU just shows up as another GPU, separate from the linked adapter.
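
In practice that looks something like the hypothetical sketch below, in which the adapter indices and function name are assumptions: the linked dGPUs surface as a single multi-node device addressed through node masks, while the iGPU is created as a second, unlinked device:

    // Minimal sketch of that stacking, assuming adapter 0 is a linked pair
    // of dGPUs and adapter 1 is the iGPU (indices are hypothetical).
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    void EnumerateLinkedPlusUnlinked(IDXGIFactory4* factory)
    {
        ComPtr<IDXGIAdapter1> linkedAdapter, igpuAdapter;
        factory->EnumAdapters1(0, &linkedAdapter); // linked dGPUs (assumed)
        factory->EnumAdapters1(1, &igpuAdapter);   // the iGPU (assumed)

        // One device spans the linked dGPUs...
        ComPtr<ID3D12Device> linkedDevice;
        D3D12CreateDevice(linkedAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                          IID_PPV_ARGS(&linkedDevice));

        // ...whose GPUs are addressed as nodes; 2 for a linked pair.
        UINT nodeCount = linkedDevice->GetNodeCount();

        // Work is steered to a node via the NodeMask on queues, command
        // lists, and resources, e.g. a queue on node 1 (the second dGPU):
        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        queueDesc.NodeMask = nodeCount > 1 ? (1u << 1) : 1u;
        ComPtr<ID3D12CommandQueue> node1Queue;
        linkedDevice->CreateCommandQueue(&queueDesc,
                                         IID_PPV_ARGS(&node1Queue));

        // The iGPU remains a completely separate, unlinked device; data
        // moves between the two via cross-adapter shared resources.
        ComPtr<ID3D12Device> igpuDevice;
        D3D12CreateDevice(igpuAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                          IID_PPV_ARGS(&igpuDevice));
    }
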
  • lorribot - Monday, October 26, 2015 - link

    From a continuous upgrade point of view, you could buy a new card, shove it in as the primary, and keep the old card as a secondary. That could make smaller, more frequent upgrade steps a possibility, rather than having to buy the one big card.

    Would be interesting to see something like an HD 7850 paired with a GTX 780 or R9 290.
  • boeush - Monday, October 26, 2015 - link

    In addition to post-processing, I wonder what implications/prospects there might be when it comes to offloading physics (PhysX, Havok, etc.) processing onto, say, the iGPU while the dGPU handles pure rendering... Of course, that would require a major upgrade to the physics engines to support DX12 and EMA, but I imagine they should already be well along on that path.
  • Gigaplex - Tuesday, October 27, 2015 - link

    That was already possible with DirectCompute. I don't think many games made much use of it.
  • nathanddrews - Tuesday, October 27, 2015 - link

    This is my fear - that these hyped features will end up not being used AT ALL in the real world. One tech demo proves that you can use different GPUs together... but how many people with multi-GPU setups will honestly choose to buy one of each flagship instead of going full homogeneous SLI or CF?

    It seems to me that the only relevant use case for heterogeneous rendering/compute is combining an IGP/APU with a dGPU... and so far only AMD has been pushing that feature with their Dual Graphics setup, despite other solutions being available. If it were realistic, I think it would already exist all over.
