Throughout this year we’ve looked at several previews and technical demos of DirectX 12 technologies, both before and after the launch of Windows 10 in July. As the most significant update to the DirectX API since DirectX 10 in 2007, the release of DirectX 12 marks the beginning of a major overhaul of how developers will program for modern GPUs. So to say there’s quite a bit of interest in it – both from consumers and developers – would be an understatement.

In putting together the DirectX 12 specification, Microsoft and their partners planned for the long haul, present and future. DirectX 12 has a number of immediately useful features that have developers grinning from ear to ear, but given that another transition like this will not happen for many years (if at all), DirectX 12 and the update to the underlying display driver foundation were also meant to be very forward-looking, packing in as many advanced features as was reasonable. Consequently, the first retail games, such as this quarter’s Fable Legends, will only scratch the surface of what the API can do, as developers are still learning the API and writing new engines around it, and GPU driver developers are similarly still hammering out their code and improving their DirectX 12 functionality.

Of everything that has been written about DirectX 12 so far, the bulk of the focus has been on the immediate benefits of the low-level nature of the API, and for good reason. The greatly reduced driver overhead and the ability to spread work submission across multiple CPU cores stand to be extremely useful for game developers, especially as CPU-side work submission is among the greatest bottlenecks facing GPUs today. Even so, taking full advantage of this functionality will take some time, as developers have become accustomed to minimizing the use of draw calls to work around that bottleneck, so it is safe to say that we are at the start of what is going to be a long transition for gamers and game developers alike.
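
To put that in more concrete terms, the pattern below is a rough sketch, our own illustration rather than code from Ashes or any other shipping engine, of what the API enables: command lists recorded on several worker threads at once and then handed to the GPU in a single cheap submission. The RecordSceneChunk placeholder stands in for whatever per-thread draw call recording an application would actually do.

```cpp
// Simplified illustration (not from any shipping engine): recording D3D12
// command lists on several worker threads, then submitting them together.
// Assumes a device and direct command queue already exist; error handling
// is omitted for brevity.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void SubmitFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                           unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        // Each thread gets its own allocator and command list; D3D12 permits
        // concurrent recording as long as threads do not share an allocator.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([i, &lists] {
            // RecordSceneChunk(lists[i].Get(), i);  // hypothetical: this
            // thread's share of the frame's draw calls would go here
            lists[i]->Close();
        });
    }
    for (auto& t : workers)
        t.join();

    // A single, inexpensive call hands all of the recorded work to the GPU.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Under DirectX 11 the equivalent work would largely funnel through a single immediate context on one thread, which is exactly the serialization the new API is designed to avoid.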

A little farther out on the horizon than the driver overhead improvements are DirectX 12’s improvements to multi-GPU functionality. Multi-GPU rendering has traditionally been the domain of the driver, with developers having little control under DirectX 11, but DirectX 12’s explicit controls extend to multi-GPU rendering as well. It is now up to developers to decide whether they want to use multiple GPUs and how they want to use them. And with explicit control over the GPUs, along with the deep understanding that only a game’s developer can have of their own rendering pipeline, DirectX 12 gives developers the freedom to do things that could never be done before.
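
To make the word "explicit" a little more concrete, the snippet below is a simplified illustration, again ours rather than Oxide’s, of how a DirectX 12 application sees the GPUs in a system: it enumerates every adapter itself and decides which ones to create devices on, rather than leaving that choice to the driver.

```cpp
// Simplified illustration: under DirectX 12 the application, not the driver,
// decides which GPUs to use. Every hardware adapter can be given its own
// D3D12 device. Error handling is kept minimal.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllGPUs()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP/software adapters

        // Any adapter supporting feature level 11_0 will do; it makes no
        // difference whether it is a GeForce, a Radeon, or an integrated GPU.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```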

That brings us to today’s article, an initial look into the multi-GPU capabilities of DirectX 12. Developer Oxide Games, responsible for the popular Star Swarm demo we looked at earlier this year, has taken the underlying Nitrous engine and is ramping up for the 2016 release of the first retail game using the engine, Ashes of the Singularity. As part of their ongoing efforts to use Nitrous as a testbed for DirectX 12 technologies, and in conjunction with last week’s Steam Early Access release of the game, Oxide has sent over a very special build of Ashes.

What makes this build so special is that it’s the first game demo of DirectX 12’s Explicit Multi-Adapter (AKA Multi Display Adapter) multi-GPU functionality. We’ll go into more detail on Explicit Multi-Adapter shortly, but in short it is one of DirectX 12’s two multi-GPU modes, and thanks to the explicit controls offered by the API it allows disparate GPUs to be paired up. More than SLI and more than CrossFire, EMA allows dissimilar GPUs to be used in conjunction with each other, and productively at that.
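
The mechanical piece that makes pairing a GeForce with a Radeon possible is that an application can allocate memory which two otherwise independent devices can both access. The sketch below is a heavily simplified illustration of that idea, assuming two devices (deviceA and deviceB) have already been created on different adapters; it is our own example rather than how Oxide’s Nitrous engine actually does it, and it glosses over the cross-adapter fences and copies a real renderer would need.

```cpp
// Heavily simplified sketch: sharing a buffer between two unrelated D3D12
// devices (e.g. a GeForce and a Radeon) through a cross-adapter shared heap.
// deviceA and deviceB are assumed to exist; sizing, error handling, and the
// shared-fence synchronization a real engine needs are all omitted.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void ShareBufferAcrossAdapters(ID3D12Device* deviceA, ID3D12Device* deviceB,
                               UINT64 sizeInBytes)
{
    // 1. Create a heap on GPU A that is flagged as shareable across adapters.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes     = sizeInBytes;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags           = D3D12_HEAP_FLAG_SHARED |
                               D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;
    ComPtr<ID3D12Heap> heapA;
    deviceA->CreateHeap(&heapDesc, IID_PPV_ARGS(&heapA));

    // 2. Export the heap as a shared handle and open it on GPU B.
    HANDLE sharedHandle = nullptr;
    deviceA->CreateSharedHandle(heapA.Get(), nullptr, GENERIC_ALL, nullptr,
                                &sharedHandle);
    ComPtr<ID3D12Heap> heapB;
    deviceB->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&heapB));
    CloseHandle(sharedHandle);

    // 3. Place a cross-adapter buffer in each device's view of the heap.
    //    GPU A can then write into it (say, a rendered frame) and GPU B can
    //    read it back, with shared fences coordinating the hand-off.
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width            = sizeInBytes;
    bufDesc.Height           = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels        = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    bufDesc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    ComPtr<ID3D12Resource> bufferA, bufferB;
    deviceA->CreatePlacedResource(heapA.Get(), 0, &bufDesc,
                                  D3D12_RESOURCE_STATE_COMMON, nullptr,
                                  IID_PPV_ARGS(&bufferA));
    deviceB->CreatePlacedResource(heapB.Get(), 0, &bufDesc,
                                  D3D12_RESOURCE_STATE_COMMON, nullptr,
                                  IID_PPV_ARGS(&bufferB));
}
```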

So in an article only fitting for the week of Halloween, today we will be combining NVIDIA GeForce and AMD Radeon cards into a single system – a single rendering setup – to see how well Oxide’s early implementation of the technology works. It may be unnatural and perhaps even a bit unholy, but there’s something undeniably awesome about watching a single game rendered by two dissimilar cards in this fashion.


180 Comments

  • TallestJon96 - Monday, October 26, 2015 - link

    Crazy stuff. 50% chance either AMD or more likely NVIDIA locks this out via drivers unfortunately.

    To me, the most logical use of this is to have a strong GPU rendering the scene, and a weak GPU handling post processing. This way, the strong GPU is freed up, and as long as the weak GPU is powerful enough, you do not get any slowdown or micro-stutter, only an improvement in performance and the opportunity to increase the quality of post processing. This has significantly fewer complications than AFR, is simpler than two cards working on a single frame, and is pretty economical. For example, I could have kept my 750 Ti alongside my new 970, had the 750 Ti handle post processing, and had the 970 do everything else. No micro-stutter, relatively simple and inexpensive, all while improving performance and post-processing effects.

    Between multi-adapter support, multi-core improvements in DX12, FreeSync and G-Sync, HBM, and possibly XPoint, there is quite a bit going on for PC gaming. All of these new technologies fundamentally improve the user experience and the way we render games. Add in the slow march of Moore's Law, an overdue die shrink next year for GPUs, and the abandonment of last-generation consoles, and the next 3-5 years are looking pretty damn good.
  • Refuge - Tuesday, October 27, 2015 - link

    I think that would be the dumbest thing either one of them could do.

    Also, if they locked it out, then their cards would no longer be DX12 compliant. Losing that endorsement would be a devastating blow, even for Nvidia.
  • Gigaplex - Tuesday, October 27, 2015 - link

    NVIDIA has a habit of making dumb decisions to intentionally sabotage their own hardware when some competitor kit is detected in the system.
  • tamalero - Monday, October 26, 2015 - link

    Question is... will Nvidia be able to block this feature in their drivers? It's not the first time they've tried to block anything that isn't Nvidia (see PhysX, which DOES work fine with AMD + Nvidia combos, but is disabled on purpose).
  • martixy - Monday, October 26, 2015 - link

    What about stacking abstractions? Could you theoretically stack a set of linked-mode GPUs for main processing on top of unlinked mode for offloading post-processing to the iGPU?
  • Ryan Smith - Monday, October 26, 2015 - link

    Sure. The unlinked iGPU just shows up as another GPU, separate from the linked adapter.
  • lorribot - Monday, October 26, 2015 - link

    From a continuous upgrade point of view you could buy a new card, shove it in as the primary, and keep the old card as a secondary. It could make smaller, more frequent upgrade steps a possibility, rather than having to buy the one big card.

    Would be interesting to see something like an HD 7850 paired with a GTX 780 or R9 290.
  • boeush - Monday, October 26, 2015 - link

    In addition to postprocessing, I wonder what implications/prospects there might be when it comes to offloading physics (PhysX, Havok, etc.) processing onto, say, the iGPU while the dGPU handles pure rendering... Of course that would require a major upgrade to the physics engines to support DX12 and EMA, but then I imagine they should already be well along on that path.
  • Gigaplex - Tuesday, October 27, 2015 - link

    That was already possible with DirectCompute. I don't think many games made much use of it.
  • nathanddrews - Tuesday, October 27, 2015 - link

    This is my fear - that these hyped features will end up not being used AT ALL in the real world. One tech demo proves that you can use different GPUs together... but how many people with multi-GPU setups will honestly choose to buy one of each flagship instead of going fully homogeneous SLI or CF?

    It seems to me that the only relevant use case for heterogeneous rendering/compute is to combine an IGP/APU with a dGPU... and so far only AMD has been pushing that with their Dual Graphics setup, despite other solutions being available. If it were realistic, I think it would exist all over already.
