First Thoughts

Wrapping up our first look at Ashes of the Singularity and DirectX 12 Explicit Multi-Adapter: when Microsoft first unveiled the technology back at BUILD 2015, I figured it would only be a matter of time until someone put together a game that used it. After all, Epic and Square already had their tech demos up and running. However, with the DirectX 12 ecosystem still coming together here in the final months of 2015 – and that goes for games as well as drivers – I wasn’t expecting something quite this soon.

As it stands, the Ashes of the Singularity multi-GPU tech demo is just that: a tech demo for a game that is itself still in alpha testing. There are still optimizations to be made and numerous bugs to be squashed. But despite all of that, seeing AMD and NVIDIA video cards working together to render a game is damn impressive.

Seeing as this build of Ashes is a tech demo, I’m hesitant to read too much into the precise benchmark numbers we’re seeing. That said, the fact that the fastest multi-GPU setup was a mixed AMD/NVIDIA GPU setup was something I wasn’t expecting and definitely makes it all the more interesting. DirectX 11 games are going to be around for a while longer yet, so we’re likely still some time away from a mixed GPU gaming setup being truly viable, but it will be interesting to see just what Oxide and other developers can pull off with explicit multi-adapter as they become more familiar with the technology and implement more advanced rendering modes.

Meanwhile it’s interesting to note just how far the industry as a whole has come since 2005 or even 2010. GPU architectures have become increasingly similar and tighter API standards have greatly curtailed the number of implementation differences that would prevent interoperability. And with Explicit Multi-Adapter, Microsoft and the GPU vendors have laid down a solid path for allowing game developers to finally tap the performance of multiple GPUs in a system, both integrated and discrete.
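
To put the word "explicit" in more concrete terms, below is a minimal C++ sketch, assuming a Windows 10 machine with the DirectX 12 SDK headers, of the very first step any explicit multi-adapter title has to take: enumerating every adapter in the system through DXGI and creating an independent Direct3D 12 device for each GPU it intends to drive. This is not Oxide's code, just an illustration; everything beyond this point, from splitting the frame to copying results between GPUs, is left to the application rather than the driver.

```cpp
// A minimal sketch of the first step an explicit multi-adapter title takes
// under DirectX 12: enumerating every GPU in the system through DXGI and
// creating an independent D3D12 device for each one. Illustrative only
// (not Oxide's code); all rendering and error handling is omitted.
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // One D3D12 device per usable adapter: an AMD card, an NVIDIA card,
    // and/or the integrated GPU can all coexist in this list.
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        // Skip the software (WARP) adapter; we only want real GPUs here.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Adapter %u: %s (%zu MB dedicated VRAM)\n", i, desc.Description,
                    (size_t)(desc.DedicatedVideoMemory / (1024 * 1024)));
            devices.push_back(device);
        }
    }

    // From here on, everything is up to the application: which GPU renders
    // what, how the work is split, and how results move between adapters.
    return 0;
}
```

Under the old implicit model, this enumeration and the linking of GPUs happened behind the application's back in the driver; with explicit multi-adapter it is all visible to, and managed by, the engine.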

The timing couldn’t be any better either. With integrated GPUs having consumed the low-end GPU market and both CPU vendors devoting more die space than ever to their respective integrated GPUs, using a discrete GPU leaves an increasingly large amount of silicon unused in the modern gaming system. Explicit multi-adapter in turn isn’t a silver bullet for that problem, but it is a means of finally putting the integrated GPU to good use even when it’s not a system’s primary GPU.

However with that said, it’s important to note that what happens from here is ultimately more in the hands of game developers than hardware developers. Given the nature of the explicit API, it’s now the game developers that have to do most of the legwork on implementing multi-GPU, and I’m left to wonder how many of them are up to the challenge. Hardware developers have an obvious interest in promoting and developing multi-GPU technology in order to sell more GPUs – which is how we got SLI and Crossfire in the first place – but software developers don’t have that same incentive.

Ultimately as gamers all we can do is take a wait-and-see approach to the whole matter. But as DirectX 12 game development ramps up, I am cautiously optimistic that positive experiences like Ashes will help encourage other developers to plan for multi-adapter support as well.

Comments

  • IKeelU - Monday, October 26, 2015 - link

    We've come a hell of a long way since Voodoo SLI.

    Leaving it up to developers is most definitely a good thing, and I'm not just saying that as hindsight on the article. We'll always be better off not depending on a small cadre of developers in Nvidia/AMD's driver departments determining SLI performance optimizations. Based on what I'm reading here, the field should be much more open. I can't wait to see how different dev houses deal with these challenges.
  • lorribot - Monday, October 26, 2015 - link

    Generally speaking, leaving it up to developers is a bad thing; you will end up with lots of fragmentation, patchy/incomplete implementations, and a whole new level of instability. That is why DirectX came about in the first place.
    I just hope this doesn't break more than it can fix.
    We need an old-school 50% upgrade in hardware capability to deliver 4K at a reasonable price point, but I don't see that coming any time soon judging by the last 3 or 4 years of small incremental steps.
    All of this is the industry recognising its inability to deliver hardware and wringing every last drop of performance from the existing equipment/nodes/architecture.
  • McDamon - Tuesday, October 27, 2015 - link

    Really? I'm a developer, so I'm biased, but to me, leaving it up to the developer is what drives the innovation in this space. DirectX, much like OpenGL, was conceived to standardize APIs and devices (Glide and such). In fact, as is obvious, both APIs have moved away from the fixed-function pipeline to a programmable model to allow for developer flexibility, not to hinder it. Sure, there will be challenges for the first few tries with the new model, but that's why companies hire smart people, right?
  • CiccioB - Tuesday, October 27, 2015 - link

    Slow incremental steps during the last 3-4 years?
    You are probably speaking about AMD only, as NVIDIA has made great progress from the GTX 680 to the GTX 980 Ti, both in terms of performance and power consumption. All of this on the same production process.
  • loguerto - Sunday, November 1, 2015 - link

    You are hugely underestimating the GCN architecture. NVIDIA might have had a jump from Kepler to Maxwell in terms of efficiency (in part by cutting down double-precision performance), but with the same, only slightly improved, GCN architecture AMD competes in DX11 and often outperforms Maxwell in the latest DX12 benchmarks. And when I say that, I invite everyone to look at the entire GPU lineup and not only the 980 Ti vs. Fury X benchmarks.
  • IKeelU - Tuesday, October 27, 2015 - link

    Your first statement is pretty much entirely wrong: a) we already have fragmentation in the form of different hardware manufacturers and driver streams; b) common solutions will be created in the form of licensed engines; c) the people currently solving these problems *are* developers, they just work for NVIDIA and AMD instead of those directly affected by the quality of the end product (game companies).

    Your contention that solutions should be closed off only really works when there's a clearly dominant and common solution to the problem. As we've learned over the last 15 years, there simply isn't. Every game release triggers a barrage of optimizations from the various driver teams. That code is totally out of scope - it should be managed by the concerned game company, not Nvidia/AMD/Intel.
  • callous - Monday, October 26, 2015 - link

    Why not test with an Intel APU + Fury? It's more of a mainstream configuration than two video cards.
  • Refuge - Tuesday, October 27, 2015 - link

    I believe it is too large of a performance gap; it would just hamstring the Fury.
  • nagi603 - Monday, October 26, 2015 - link

    NVIDIA already forcefully disabled using an NVIDIA card as a PhysX add-in card with an AMD main GPU. When will they try to disable this extra feature?
  • silverblue - Tuesday, October 27, 2015 - link

    They may already have; then again, there could be a legitimate reason for the less than stellar performance with an AMD card as the slave.
