Throughout this year we’ve looked at several previews and technical demos of DirectX 12 technologies, both before and after the launch of Windows 10 in July. As the most significant update to the DirectX API since DirectX 10 in 2007, the release of DirectX 12 marks the beginning of a major overhaul of how developers will program for modern GPUs. So to say there’s quite a bit of interest in it – both from consumers and developers – would be an understatement.

In putting together the DirectX 12 specification, Microsoft and their partners planned for the long haul, present and future. DirectX 12 has a number of immediately useful features that have developers grinning from ear to ear, but at the same time, because another transition like this will not happen for many years (if at all), DirectX 12 and the update to the underlying display driver foundation were designed to be very forward-looking and to pack in as many advanced features as would be reasonable. Consequently the first retail games, such as this quarter’s Fable Legends, will just scratch the surface of what the API can do, as developers are still in the process of understanding the API and writing new engines around it, and GPU driver developers are likewise still hammering out their code and improving their DirectX 12 functionality.

Of everything that has been written about DirectX 12 so far, the bulk of the focus has been on the immediate benefits of the low-level nature of the API, and for good reason. The greatly reduced driver overhead and the improved ability to spread work submission across multiple CPU cores stand to be extremely useful for game developers, especially as CPU-side submission is among the greatest bottlenecks facing GPUs today. Even then, taking full advantage of this functionality will take some time, as developers have become accustomed to minimizing draw call counts to work around that very bottleneck, so it is safe to say that we are at the start of what is going to be a long transition for gamers and game developers.
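For a concrete picture of what spreading work submission across CPU cores looks like, below is a minimal C++ sketch of DirectX 12’s command list model: each worker thread records its own command list against its own allocator, and the main thread submits the whole batch to the GPU in a single call. This is an illustrative example rather than code from any shipping engine; it omits pipeline state, resource barriers, and fence synchronization, and the RecordDrawCalls helper is hypothetical.

```cpp
// Sketch: multi-threaded command recording under DirectX 12.
// Assumes `device` and `queue` were created elsewhere; error handling,
// pipeline state objects, barriers and fencing are omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical per-thread recording function - in a real engine this is
// where the draw calls for one slice of the frame would be issued.
void RecordDrawCalls(ID3D12GraphicsCommandList* /*cmdList*/, unsigned /*threadIndex*/) {}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        // Each thread gets its own allocator and command list, so recording
        // requires no cross-thread synchronization.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([&, i]
        {
            RecordDrawCalls(lists[i].Get(), i); // record in parallel
            lists[i]->Close();                  // finish recording
        });
    }
    for (auto& w : workers) w.join();

    // One submission call covering everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```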

A little farther out on the horizon than the driver overhead improvements are DirectX 12’s improvements to multi-GPU functionality. Multi-GPU rendering has traditionally been the domain of drivers – developers have little control over it under DirectX 11 – but DirectX 12’s explicit controls extend to multi-GPU rendering as well. It is now up to developers to decide whether they want to use multiple GPUs and how they want to use them. And with explicit control over the GPUs, along with the deep understanding that only a game’s developer can have of the layout of their rendering pipeline, DirectX 12 gives developers the freedom to do things that could never be done before.

That brings us to today’s article, an initial look into the multi-GPU capabilities of DirectX 12. Developer Oxide Games, responsible for the popular Star Swarm demo we looked at earlier this year, has taken the underlying Nitrous engine and is ramping up for the 2016 release of the first retail game to use it, Ashes of the Singularity. As part of their ongoing efforts to use Nitrous as a testbed for DirectX 12 technologies, and in conjunction with last week’s Steam Early Access release of the game, Oxide has sent over a very special build of Ashes.

What makes this build so special is that it’s the first game demo for DirectX 12’s multi-GPU Explicit Multi-Adapter (AKA Multi Display Adapter) functionality. We’ll go into more detail on Explicit Multi-Adapter shortly, but in short it is one of DirectX 12’s two multi-GPU modes, and thanks to the explicit controls offered by the API, it allows disparate GPUs to be paired up. More than SLI and more than CrossFire, EMA allows dissimilar GPUs to be used in conjunction with each other, and productively at that.
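To give a rough idea of what “explicit” means in practice, the sketch below (an illustrative example under our own assumptions, not Oxide’s code) enumerates every adapter in the system through DXGI and creates an independent ID3D12Device on each one. That is the starting point of the unlinked explicit multi-adapter mode, where a GeForce and a Radeon can sit side by side; how rendering work is then divided between the devices is left entirely to the engine.

```cpp
// Sketch: enumerating every GPU in the system and creating an independent
// D3D12 device on each one (the unlinked explicit multi-adapter model).
// Cross-adapter resource sharing and workload division are up to the engine.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software rasterizer

        // Vendor does not matter here: a GeForce and a Radeon each get
        // their own device, command queues and memory.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```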

So in an article only fitting for the week of Halloween, today we will be combining NVIDIA GeForce and AMD Radeon cards into a single system – a single rendering setup – to see how well Oxide’s early implementation of the technology works. It may be unnatural and perhaps even a bit unholy, but there’s something undeniably awesome about watching a single game rendered by two dissimilar cards in this fashion.

Comments

  • IKeelU - Monday, October 26, 2015 - link

    We've come a hell of a long way since Voodoo SLI.

    Leaving it up to developers is most definitely a good thing, and I'm not just saying that with hindsight from the article. We'll always be better off not depending on a small cadre of developers in NVIDIA's and AMD's driver departments to determine SLI performance optimizations. Based on what I'm reading here, the field should be much more open. I can't wait to see how different dev houses deal with these challenges.
  • lorribot - Monday, October 26, 2015 - link

    Generally speaking, leaving it up to developers is a bad thing; you will end up with lots of fragmentation, patchy/incomplete implementations and a whole new level of instability. That is why DirectX came about in the first place.
    I just hope this doesn't break more than it can fix.
    We need an old school 50% upgrade to the hardware capability to deliver 4K at a reasonable price point, but I don't see that coming any time soon judging by the last 3 or 4 years of small incremental steps.
    All of this is the industry recognising its inability to deliver hardware and wringing every last drop of performance from the existing equipment/nodes/architecture.
  • McDamon - Tuesday, October 27, 2015 - link

    Really? I'm a developer, so I'm biased, but to me, leaving it up to the developer is what drives the innovation in this space. DirectX, much like OpenGL, was conceived to standardize APIs and devices (Glide and such). In fact, as is obvious, both APIs have moved away from the fixed function pipeline to a programmable model to allow for developer flexibility, not hinder it. Sure, there will be challenges for the first few tries with the new model, but that's why companies hire smart people, right?
  • CiccioB - Tuesday, October 27, 2015 - link

    Slow incremental steps during the last 3-4 years?
    You are probably speaking about AMD only, as NVIDIA has made great progress from the GTX 680 to the GTX 980 Ti, both in terms of performance and power consumption. All of this on the same process node.
  • loguerto - Sunday, November 1, 2015 - link

    You are hugely underestimating the GCN architecture. NVIDIA might have had a jump from Kepler to Maxwell in terms of efficiency (in part by cutting down double precision performance), but still, with the same slightly improved GCN architecture AMD competes in DX11 and often outperforms Maxwell in the latest DX12 benchmarks. And when I say that, I invite everyone to look at the entire GPU lineup and not only the 980 Ti vs. Fury X benchmarks.
  • IKeelU - Tuesday, October 27, 2015 - link

    Your first statement is almost entirely wrong: a) we already have fragmentation in the form of different hardware manufacturers and driver streams, b) common solutions will be created in the form of licensed engines, and c) the people currently solving these problems *are* developers; they just work for NVIDIA and AMD instead of those directly affected by the quality of the end product (game companies).

    Your contention that solutions should be closed off only really works when there's a clearly dominant and common solution to the problem. As we've learned over the last 15 years, there simply isn't. Every game release triggers a barrage of optimizations from the various driver teams. That code is totally out of scope - it should be managed by the concerned game company, not Nvidia/AMD/Intel.
  • callous - Monday, October 26, 2015 - link

    Why not test with an Intel APU + Fury? It's more of a mainstream configuration than 2 video cards.
  • Refuge - Tuesday, October 27, 2015 - link

    I believe it is too large of a performance gap, it would just hamstring the Fury.
  • nagi603 - Monday, October 26, 2015 - link

    NVIDIA already forcefully disabled using an NVIDIA card as a PhysX add-in card with an AMD main GPU. When will they try to disable this extra feature?
  • silverblue - Tuesday, October 27, 2015 - link

    They may already have; then again, there could be a legitimate reason for the less than stellar performance with an AMD card as the slave.
