Throughout this year we’ve looked at several previews and technical demos of DirectX 12 technologies, both before and after the launch of Windows 10 in July. As the most significant update to the DirectX API since DirectX 10 in 2007, the release of DirectX 12 marks the beginning of a major overhaul of how developers will program for modern GPUs. So to say there’s quite a bit of interest in it – both from consumers and developers – would be an understatement.

In putting together the DirectX 12 specification, Microsoft and their partners planned for the long haul, present and future. DirectX 12 has a number of immediately useful features that have developers grinning from ear to ear, but at the same time, given that another transition like this will not happen for many years (if at all), DirectX 12 and the update to the underlying display driver foundation were meant to be very forward-looking and to pack in as many advanced features as would be reasonable. Consequently, the first retail games, such as this quarter's Fable Legends, will just scratch the surface of what the API can do, as developers are still in the process of understanding the API and writing new engines around it, and GPU driver developers are similarly still hammering out their code and improving their DirectX 12 functionality.

Of everything that has been written about DirectX 12 so far, the bulk of the focus has been on the immediate benefits of the low-level nature of the API, and for good reason. The greatly reduced driver overhead and the improved ability to spread work submission across multiple CPU cores stand to be extremely useful for game developers, especially as CPU-side submission is among the greatest bottlenecks facing GPUs today. Even so, taking full advantage of this functionality will take some time, as developers have become accustomed to minimizing draw calls to work around the bottleneck, so it is safe to say that we are at the start of what is going to be a long transition for gamers and game developers alike.

A little farther out on the horizon than the driver overhead improvements are DirectX 12’s improvements to multi-GPU functionality. Traditionally the domain of drivers – developers have little control under DirectX 11 – DirectX 12’s explicit controls extend to multi-GPU rendering as well. It is now up to developers to decide if they want to use multiple GPUs and how they want to use them. And with explicit control over the GPUs along with the deep understanding that only a game’s developer can have for the layout of their rendering pipeline, DirectX 12 gives developers the freedom to do things that could never be done before.

That brings us to today's article, an initial look into the multi-GPU capabilities of DirectX 12. Developer Oxide Games, responsible for the popular Star Swarm demo we looked at earlier this year, has taken the underlying Nitrous engine and is ramping up for the 2016 release of the first retail game using the engine, Ashes of the Singularity. As part of their ongoing efforts to use Nitrous as a testbed for DirectX 12 technologies, and in conjunction with last week's Steam Early Access release of the game, Oxide has sent over a very special build of Ashes.

What makes this build so special is that it's the first game demo for DirectX 12's multi-GPU Explicit Multi-Adapter (AKA Multi Display Adapter) functionality. We'll go into more detail on Explicit Multi-Adapter shortly, but in short it is one of DirectX 12's two multi-GPU modes, and thanks to the explicit controls offered by the API, it allows disparate GPUs to be paired up. More than SLI and more than CrossFire, EMA allows dissimilar GPUs to be used in conjunction with each other, and productively at that.

So in an article only fitting for the week of Halloween, today we will be combining NVIDIA GeForce and AMD Radeon cards into a single system – a single rendering setup – to see how well Oxide’s early implementation of the technology works. It may be unnatural and perhaps even a bit unholy, but there’s something undeniably awesome about watching a single game rendered by two dissimilar cards in this fashion.

A Brief History & DirectX 12


Comments

  • Gigaplex - Tuesday, October 27, 2015 - link

    Intel's top iGPUs can beat AMD's top ones, but expect to pay a premium.
  • loguerto - Sunday, November 1, 2015 - link

    I love how Intel managed to implement on-die RAM as a workaround for the huge DDR3 bottleneck. I wonder why AMD did not choose to do the same, as their 7870K is evidently bottlenecked by DDR3. Is there a cost problem, or are they waiting to switch directly to HBM memory?
  • CiccioB - Tuesday, October 27, 2015 - link

    A test with the Titan X as the master card would be interesting. It might show whether the sync problem is hardware- or software-related.
    Tests with low-tier cards should be run at 1080p. The GTX 680 has never been that good at higher resolutions, so testing at Full HD may better level the two cards' performance and show different results with mixed cards.

    BTW, NVIDIA cards/drivers are not optimized for PCIe transfers, as they use proprietary connectors for SLI and synchronization, while AMD cards use PCIe transfers to do all of the above. Maybe the problem is that.
    It would also be interesting to see how these mixes work on slower PCIe lanes. You know, not all PCs have PCIe 3.0 or run at 16x.

    Specific results aside (they will most probably change with driver updates), it is interesting to see that this particular feature works.
  • VarthDaver - Tuesday, October 27, 2015 - link

    Can we also get this? "In conjunction with last week’s Steam Early Access release of the game, Oxide has sent over a very special build of Ashes." I have had access to Ashes for a while but do not see the AFR checkbox in my version to match their special one. I would be happy to provide some 2x TitanX performance numbers if I could get a copy with AFR enabled.
  • Ryan Smith - Tuesday, October 27, 2015 - link

    As I briefly mention elsewhere, AFR support is very much an experimental feature in Ashes at the moment. Oxide has mentioned elsewhere that they will eventually push it out in public builds, but not until the feature is in a better state.
  • silverblue - Tuesday, October 27, 2015 - link

    That's correct as regards the 290, but the 7970 uses a CrossFire bridge.
  • MrPoletski - Tuesday, October 27, 2015 - link

    What about integrated graphics solutions? It'd be nice to see what this does to our potential CPU choice. Can we see a top-of-the-line Intel CPU vs. a top-of-the-line AMD CPU now, and see how each one's iGPU helps out with a 980 Ti/Fury X?
  • CiccioB - Tuesday, October 27, 2015 - link

    I suggest that you, and all the others who keep suggesting such tests or fantasizing about hybrid systems, first understand how AFR works; then you will see for yourself why it is useless to use an iGPU with it.
  • Gigaplex - Tuesday, October 27, 2015 - link

    And perhaps you should read the article, where it explicitly states that AFR isn't the only form of multi-GPU load sharing. The iGPU could do post-processing, such as deferred rendering of lighting. It's not implemented in this particular benchmark yet, but it's been demonstrated in the Unreal engine.
  • Harry Lloyd - Tuesday, October 27, 2015 - link

    I do not see this ever being practical. I would rather see the results of split-frame rendering on two identical GPUs; that seems to have real potential.
