Throughout this year we’ve looked at several previews and technical demos of DirectX 12 technologies, both before and after the launch of Windows 10 in July. As the most significant update to the DirectX API since DirectX 10 in 2007, the release of DirectX 12 marks the beginning of a major overhaul of how developers will program for modern GPUs. So to say there’s quite a bit of interest in it – both from consumers and developers – would be an understatement.

In putting together the DirectX 12 specification, Microsoft and their partners planned for the long haul, present and future. DirectX 12 has a number of immediately useful features that have developers grinning from ear to ear, but at the same time, given that another transition like this will not happen for many years (if ever), DirectX 12 and the accompanying update to the underlying display driver foundation were meant to be very forward-looking and to pack in as many advanced features as was reasonable. Consequently, the first retail games, such as this quarter's Fable Legends, will just scratch the surface of what the API can do, as developers are still in the process of understanding the API and writing new engines around it, and GPU driver developers are similarly still hammering out their code and improving their DirectX 12 functionality.

Of everything that has been written about DirectX 12 so far, the bulk of the focus has been on the immediate benefits of the low-level nature of the API, and for good reason. The greatly reduced driver overhead and the ability to spread work submission over multiple CPU cores stand to be extremely useful for game developers, especially as the CPU submission bottleneck is among the greatest bottlenecks facing GPUs today. Even so, taking full advantage of this functionality will take some time, as developers have become accustomed to minimizing their use of draw calls to work around that very bottleneck, so it is safe to say that we are at the start of what is going to be a long transition for gamers and game developers alike.
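To illustrate what this looks like in practice, below is a minimal C++ sketch (not Oxide's code) of multi-threaded work submission under Direct3D 12: several worker threads each record their own command list, and the results are then handed to the GPU in a single batch. The RecordSceneChunk helper is hypothetical, standing in for whatever per-thread draw call recording an engine would actually do.

```cpp
// Minimal sketch: spreading command list recording across CPU cores
// under D3D12. Assumes `device` and `queue` were created elsewhere;
// fencing/cleanup is omitted for brevity (a real engine would wait on
// a fence before releasing the allocators).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordSceneChunk(ID3D12GraphicsCommandList* list, int chunk); // hypothetical helper

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, int workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        // Each worker records its slice of the scene independently;
        // this is the part D3D11's submission model could not scale.
        workers.emplace_back([i, &lists] {
            RecordSceneChunk(lists[i].Get(), i);
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One inexpensive submission of everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```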

A little farther out on the horizon than the driver overhead improvements are DirectX 12's improvements to multi-GPU functionality. Multi-GPU rendering has traditionally been the domain of drivers – developers have little control over it under DirectX 11 – but DirectX 12's explicit controls extend to multi-GPU rendering as well. It is now up to developers to decide whether they want to use multiple GPUs and how they want to use them. And with explicit control over the GPUs, along with the deep understanding that only a game's developer can have of the layout of their rendering pipeline, DirectX 12 gives developers the freedom to do things that could never be done before.
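As a concrete illustration of that control, the sketch below shows the very first step a developer would take: enumerating every adapter in the system through DXGI and creating an independent Direct3D 12 device on each one. This is a generic sketch of the standard API calls, not Oxide's implementation.

```cpp
// Generic sketch: enumerating every GPU via DXGI and creating an
// independent D3D12 device on each, regardless of vendor.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateAllAdapterDevices()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software (WARP) adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // one device per physical GPU
    }
    return devices;
}
```

From here it is entirely the application's decision how to divide a frame between the resulting devices; nothing in the API requires the GPUs to match.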

That brings us to today's article, an initial look into the multi-GPU capabilities of DirectX 12. Developer Oxide Games, responsible for the popular Star Swarm demo we looked at earlier this year, has taken the underlying Nitrous engine and is ramping up for the 2016 release of the first retail game using it, Ashes of the Singularity. As part of their ongoing efforts to use Nitrous as a testbed for DirectX 12 technologies, and in conjunction with last week's Steam Early Access release of the game, Oxide has sent over a very special build of Ashes.

What makes this build so special is that it's the first game demo of DirectX 12's multi-GPU Explicit Multi-Adapter (AKA Multi Display Adapter) functionality. We'll go into more detail on Explicit Multi-Adapter shortly, but in short it is one of DirectX 12's two multi-GPU modes, and thanks to the explicit controls offered by the API, it allows disparate GPUs to be paired up. More than SLI and more than CrossFire, EMA allows dissimilar GPUs to be used in conjunction with each other, and productively at that.
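To give a sense of the plumbing involved, here is a heavily simplified sketch of how a resource can be shared between two unlinked adapters: a cross-adapter buffer is created on one device, exported as a shared handle, and opened on the second device. Real implementations also need shared fences for synchronization and face format and layout restrictions that are glossed over here; this is an assumption-laden illustration rather than how Ashes actually does it.

```cpp
// Sketch only: sharing a buffer between two unlinked D3D12 devices
// (e.g. a GeForce and a Radeon). Error handling and shared-fence
// synchronization are deliberately omitted.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> ShareBufferAcrossAdapters(
    ID3D12Device* deviceA, ID3D12Device* deviceB, UINT64 byteSize)
{
    D3D12_HEAP_PROPERTIES props = {};
    props.Type = D3D12_HEAP_TYPE_DEFAULT;

    // Cross-adapter resources must be flagged as such at creation time.
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = byteSize;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    ComPtr<ID3D12Resource> resourceA;
    deviceA->CreateCommittedResource(
        &props,
        D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER,
        &desc, D3D12_RESOURCE_STATE_COMMON, nullptr,
        IID_PPV_ARGS(&resourceA));

    // Export a handle from device A, then import it on device B.
    HANDLE handle = nullptr;
    deviceA->CreateSharedHandle(resourceA.Get(), nullptr, GENERIC_ALL,
                                nullptr, &handle);

    ComPtr<ID3D12Resource> resourceB;
    deviceB->OpenSharedHandle(handle, IID_PPV_ARGS(&resourceB));
    CloseHandle(handle);

    return resourceB; // device B's view of the same memory
}
```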

So in an article only fitting for the week of Halloween, today we will be combining NVIDIA GeForce and AMD Radeon cards into a single system – a single rendering setup – to see how well Oxide’s early implementation of the technology works. It may be unnatural and perhaps even a bit unholy, but there’s something undeniably awesome about watching a single game rendered by two dissimilar cards in this fashion.

Comments

  • mosu - Thursday, October 29, 2015 - link

    Did you ever own or touch an Iris HD 6000? Or at least know someone who did?
  • wiak - Friday, October 30, 2015 - link

    eDRAM...
    What if AMD goes HBM2, like they did in the past with DDR3 sideport memory?

    Just a thought: an AMD Zen APU with 4-8 cores and Radeon graphics (2048+ shaders, 2 or 4GB of HBM2, either as a slot on the motherboard or on-die like Fury).

    I think I read somewhere that there will be a single socket for APUs and CPUs, so AMD's lineup could be a Zen CPU with 8-16 cores for performance systems and a Zen APU with 4-8 cores, 2048+ shaders, and HBM2 for mainstream/laptop computers.
  • Michael Bay - Thursday, October 29, 2015 - link

    If it actually could, we would be able to buy it. No such luck.
  • Revdarian - Thursday, October 29, 2015 - link

    Well, there are currently two offerings: one is called the Xbox One, and the other, more powerful one is called the PlayStation 4.

    Those are technically APUs, developed by AMD, and can be bought right now. Just saying, it is possible.
  • Midwayman - Monday, October 26, 2015 - link

    Seems like it would be great to do post effects and free up the main GPU to work on rendering.
  • Alexvrb - Monday, October 26, 2015 - link

    Agreed, as far as dGPU and iGPU cooperation goes, I think Epic is on to something there. A free 10% performance boost? Why not. As for dGPU + dGPU modes, I am not sold on the idea of unlinked mode. It seems like developers would have their work cut out for them with all the different possible configurations. Linked mode makes the most sense to me for consistency and relative ease of implementation. Plus, anyone using multiple GPUs is already used to running a pair of the same GPU.

    Regardless of whether they go linked or unlinked though... I'd really like them to do something other than AFR. Split-frame, tile-based, something, anything. Blech.
  • Refuge - Monday, October 26, 2015 - link

    For high-end AAA titles, linked mode would be optimal, I agree. It allows for their fast releases and still gives a great performance boost. Their target demographic is already used to jumping through hoops to get the results they want; getting identical GPUs won't bother them.

    For games with extended lifetimes, like MMOs such as WoW, SWTOR, etc., unlinked mode is worth the investment, as it allows your game to reach a MUCH wider customer base with increased graphical performance. These are crowds that are easy to poll for data, so the developers would easily know who they are directing their efforts towards, and the lifespan of the game makes the extra man-hours a worthy investment.
  • Gadgety - Tuesday, October 27, 2015 - link

    @alexvrb And game testers have their work cut out for them as well, testing all sorts of hardware configurations.

    In addition, game developers will likely need new skill sets, and this will likely benefit larger outfits that are better able to cope with developing and tuning their games for various hardware combinations.
  • DanNeely - Tuesday, October 27, 2015 - link

    I suspect most small devs will continue to use their engine in the normal way, not taking advantage of the DX12 multi-GPU features any more than they did SLI/XFire in DX11 or prior. The only exception I can see might be offloading post-processing to the IGP. That looks like a much simpler split to implement, and might be something they could get for free from the next version of their engine.
  • nightbringer57 - Monday, October 26, 2015 - link

    Wow. I didn't expect this to work this well.

    Just out of curiosity... Could you get a few more data points to show how a Titan X + Fury X/Fury X + Titan X would fare?
