First Thoughts

Wrapping up our first look at Ashes of the Singularity and DirectX 12 Explicit Multi-Adapter: when Microsoft first unveiled the technology back at BUILD 2015, I figured it would only be a matter of time until someone put together a game that used it. After all, Epic and Square already had their tech demos up and running. However, with the DirectX 12 ecosystem still coming together here in the final months of 2015 – and that goes for games as well as drivers – I wasn’t expecting something quite this soon.

As it stands the Ashes of the Singularity multi-GPU tech demo is just that, a tech demo for a game that itself is only in Alpha testing. There are still optimizations to be made and numerous bugs to be squashed. But despite all of that, seeing AMD and NVIDIA video cards working together to render a game is damn impressive.

Seeing as this build of Ashes is a tech demo, I’m hesitant to read too much into the precise benchmark numbers we’re seeing. That said, the fact that the fastest multi-GPU configuration was a mixed AMD/NVIDIA setup was something I wasn’t expecting, and it definitely makes these results all the more interesting. DirectX 11 games are going to be around for a while longer yet, so we’re likely still some time away from a mixed-GPU gaming setup being truly viable, but it will be interesting to see just what Oxide and other developers can pull off with explicit multi-adapter as they become more familiar with the technology and implement more advanced rendering modes.

Meanwhile it’s interesting to note just how far the industry as a whole has come since 2005 or even 2010. GPU architectures have become increasingly similar and tighter API standards have greatly curtailed the number of implementation differences that would prevent interoperability. And with Explicit Multi-Adapter, Microsoft and the GPU vendors have laid down a solid path for allowing game developers to finally tap the performance of multiple GPUs in a system, both integrated and discrete.

The timing couldn’t be any better, either. With integrated GPUs having consumed the low-end GPU market and both CPU vendors devoting more die space than ever to their respective integrated GPUs, using a discrete GPU leaves an increasingly large amount of silicon sitting idle in the modern gaming system. Explicit multi-adapter isn’t a silver bullet for that problem, but it is a means of finally putting the integrated GPU to good use even when it’s not a system’s primary GPU.

However with that said, it’s important to note that what happens from here is ultimately more in the hands of game developers than hardware developers. Given the nature of the explicit API, it’s now the game developers that have to do most of the legwork on implementing multi-GPU, and I’m left to wonder how many of them are up to the challenge. Hardware developers have an obvious interest in promoting and developing multi-GPU technology in order to sell more GPUs – which is how we got SLI and Crossfire in the first place – but software developers don’t have that same incentive.
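
To give a rough sense of what that legwork involves, the sketch below shows only the very first step: under DirectX 12, each physical GPU is exposed as its own DXGI adapter, and the application has to enumerate them and create a separate device on each one before it can begin splitting a frame between them. This is a minimal illustration rather than anything taken from Ashes or a shipping engine; the helper function name and the feature level used here are assumptions for the example.

```cpp
// Hypothetical sketch (not Oxide's code): enumerate every hardware adapter in
// the system and create an independent D3D12 device on each one. Under
// explicit multi-adapter, everything after this point is the application's job.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        // Skip the software (WARP) adapter; keep both integrated and discrete GPUs.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        // Each physical GPU gets its own device; the engine then builds command
        // queues and resources per device and divides the frame between them.
        // Feature level 11_0 is just the D3D12 minimum, chosen for illustration.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

Everything beyond that point – deciding which device renders what, copying intermediate results across adapters, and keeping the GPUs synchronized – is likewise left to the engine rather than the driver, which is exactly why the burden now falls on game developers.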

Ultimately as gamers all we can do is take a wait-and-see approach to the whole matter. But as DirectX 12 game development ramps up, I am cautiously optimistic that positive experiences like Ashes will help encourage other developers to plan for multi-adapter support as well.

180 Comments

  • Grimsace - Tuesday, October 27, 2015 - link

    NVIDIA and AMD both have a history of re-branding cards with the same GPUs. NVIDIA SLI requires that the cards are exactly the same model (e.g. a GTX 760 with another GTX 760), while AMD still allows you to Crossfire cards as long as they share the same basic architecture (e.g. a Radeon 7870 with an R9 280X).
  • Ryan Smith - Monday, October 26, 2015 - link

    "What about pairing the IGP with a low end discrete part where the performance gap is much smaller? "

    Right now it would still be too slow. Oxide is only ready to test Ashes on high-end hardware at this point.

    "This pre-beta requires very high-end hardware and should only be joined by people with substantial technical expertise and experience in real-time strategy games.

    These builds are buggy, gameplay is very incomplete and it'll probably kill your pets."

    Which is not to say that I'm not curious as well. But it's one of those matters where Ashes needs some more development work before they're ready to show off any kind of fusion involving an iGPU.
  • JamesDax3 - Monday, October 26, 2015 - link

    Intel iGPUs may not be up to par, but AMD APUs should be. Would love to see this done with an A10-7870K paired with an R7 360/370 or GTX 950/960.
  • naretla - Tuesday, October 27, 2015 - link

    Intel Iris Pro actually outperforms AMD APUs: http://www.tomshardware.com/reviews/intel-core-i7-...
  • silverblue - Tuesday, October 27, 2015 - link

    ...for double the price. Will Intel's parts also benefit from DX12?
  • nathanddrews - Tuesday, October 27, 2015 - link

    It always costs more to get better performance. Why would that suddenly change in the case of Iris Pro vs APU? If you recall, Intel has been showing DX12 demos on Haswell, Broadwell, and Skylake for some time now. Skylake has been confirmed to support feature level 12_1.
  • silverblue - Tuesday, October 27, 2015 - link

    That doesn't necessarily mean it'll perform better in DX12 than in DX11; ask NVIDIA. Then again, NVIDIA's DX11 performance is so good that it's little surprise they're not benefiting from DX12.

    You're right that it does cost more to get better performance; however, until Broadwell, Intel hadn't provided something to challenge AMD on the desktop. Intel's CPUs generally did cost more regardless of the strength of their IGPs.
  • nathanddrews - Wednesday, October 28, 2015 - link

    I'm not sure I follow your post. Intel is more expensive than AMD because they get better CPU performance AND better IGP performance (Iris Pro only). They have also shown - in demos and game engines - that DX12 performance is better than DX11 performance.

    Not sure what NVIDIA has to do with this...
  • looncraz - Wednesday, October 28, 2015 - link

    It better be for twice the money!

    AMD could easily build an APU twice as fast, but memory bandwidth is a real issue.

    We will see what they have up their sleeves in the coming year...
  • patrickjp93 - Wednesday, October 28, 2015 - link

    How can bandwidth be the issue when Intel gets around it so easily? Even without eDRAM, HD 6000 smacks Kaveri upside the head. Maybe Intel's just better at the integrated game...
