Throughout this year we’ve looked at several previews and technical demos of DirectX 12 technologies, both before and after the launch of Windows 10 in July. As the most significant update to the DirectX API since DirectX 10 in 2007, the release of DirectX 12 marks the beginning of a major overhaul of how developers will program for modern GPUs. So to say there’s quite a bit of interest in it – both from consumers and developers – would be an understatement.

In putting together the DirectX 12 specification, Microsoft and their partners planned for the long haul, present and future. DirectX 12 has a number of immediately useful features that have developers grinning from ear to ear, but given that another transition like this will not happen for many years (if at all), DirectX 12 and the update to the underlying display driver foundation were also meant to be very forward-looking and to pack in as many advanced features as was reasonable. Consequently the first retail games, such as this quarter’s Fable Legends, will just scratch the surface of what the API can do: developers are still in the process of understanding the API and writing new engines around it, and GPU driver developers are similarly still hammering out their code and improving their DirectX 12 functionality.

Of everything that has been written about DirectX 12 so far, the bulk of the focus has been on the immediate benefits of the low-level nature of the API, and for good reason. The greatly reduced driver overhead and the ability to spread work submission over multiple CPU cores stand to be extremely useful for game developers, especially as CPU-side submission is among the greatest bottlenecks facing GPUs today. Even then, taking full advantage of this functionality will take some time, as developers have become accustomed to minimizing draw calls to work around that bottleneck, so it is safe to say that we are at the start of what is going to be a long transition for gamers and game developers alike.

A little farther out on the horizon than the driver overhead improvements are DirectX 12’s improvements to multi-GPU functionality. Traditionally the domain of drivers – developers have little control under DirectX 11 – DirectX 12’s explicit controls extend to multi-GPU rendering as well. It is now up to developers to decide if they want to use multiple GPUs and how they want to use them. And with explicit control over the GPUs along with the deep understanding that only a game’s developer can have for the layout of their rendering pipeline, DirectX 12 gives developers the freedom to do things that could never be done before.

That brings us to today’s article, an initial look into the multi-GPU capabilities of DirectX 12. Developer Oxide Games, responsible for the popular Star Swarm demo we looked at earlier this year, has taken the underlying Nitrous engine and is ramping up for the 2016 release of the first retail game using it, Ashes of the Singularity. As part of their ongoing efforts to use Nitrous as a testbed for DirectX 12 technologies, and in conjunction with last week’s Steam Early Access release of the game, Oxide has sent over a very special build of Ashes.

What makes this build so special is that it’s the first game demo for DirectX 12’s multi-GPU Explicit Multi-Adapter (aka Multi Display Adapter) functionality. We’ll go into more detail on Explicit Multi-Adapter shortly, but in short it is one of DirectX 12’s two multi-GPU modes, and thanks to the explicit controls offered by the API it allows disparate GPUs to be paired up. Going beyond SLI and beyond CrossFire, EMA allows dissimilar GPUs to be used in conjunction with each other, and productively at that.

So in an article only fitting for the week of Halloween, today we will be combining NVIDIA GeForce and AMD Radeon cards into a single system – a single rendering setup – to see how well Oxide’s early implementation of the technology works. It may be unnatural and perhaps even a bit unholy, but there’s something undeniably awesome about watching a single game rendered by two dissimilar cards in this fashion.


180 Comments


  • AndrewJacksonZA - Tuesday, October 27, 2015 - link

Did you try running three cards, Ryan?
  • BrokenCrayons - Tuesday, October 27, 2015 - link

It's pretty interesting stuff, but I think most of the people who buy computing hardware couldn't care less about the performance of a pair of high end graphics processors that are individually priced well above the cost of a nice laptop. What's more substantial in this is the ability to make use of an iGPU when a dGPU is present in the system to obtain a better overall user experience. Very, very few people are going to bother throwing away money to buy even one graphics card at a cost of over $400, let alone two, and then also soak up the cost of the components necessary to support their operation. I can't imagine developers, those in control of whether or not this new feature gets any support in the industry, are going to invest much money by writing code to support such a small subset of the overall market. In the end, what DX12 might do is have the exact opposite effect of what we're predicting...it could ultimately kill off high end multi GPU setups as impractical and wholly unsupported by game production studios. The most I'd see this ever doing is making the i+dGPU scenario practical. Everything else seems a little too expensive to implement for a limited market of large desktop computers that are rapidly fading away as small form factor and mobile devices replace them.
  • diola - Tuesday, October 27, 2015 - link

Hi, after connecting the cards to the mainboard and installing the drivers, do you need anything else to make it work? I was unable to get the second card recognized when the first is from AMD and the second from Nvidia.
  • Wunkbanok - Tuesday, October 27, 2015 - link

How does this multi-adapter thing work? Can I use a new card with an older card and get any improvement? Which cards are supported?
  • dray67 - Tuesday, October 27, 2015 - link

I'm very tempted to give the Ashes alpha a try. I've recently upgraded from a 680 to a 980 Ti and I like the idea of a linked setup, but there are so many questions, the main one being compatibility with other hardware, motherboards etc.

    I can't help but think that the bigger the disparity between cards the less you'd gain but either way I like where this is going (giggly).
  • Valantar - Tuesday, October 27, 2015 - link

I really wish you would take this one step further and test combinations of new and old cards - Fury X + 7970/GTX 680 and 980 Ti + GTX 680/7970 would be the really interesting combinations here. Also, far more relevant for most gamers looking to upgrade at some point in the next few years.
  • loguerto - Tuesday, October 27, 2015 - link

I would like to mention an off-topic point:
the Kepler GTX 680 in its day was consistently outperforming the GCN 1.0 7970, and was on par with the 7970 GHz Edition. Now we have the 7970 non-GHz edition outperforming the GTX 680 by 20%. It's just a fact that shows how AMD cards have aged compared to Nvidia's.
  • Clone12100 - Tuesday, October 27, 2015 - link

Why no comparison of the older cards with the newer cards, like a 7970 with a 980 Ti?
  • WhisperingEye - Tuesday, October 27, 2015 - link

Because they are using Alternate Frame Rendering. This means that the slowest card drives the set. So your new $400 card will go at the speed of your (now) $150 card.
  • Oxford Guy - Friday, October 30, 2015 - link

Would have liked to have seen the 290X or 390X.
