First Thoughts

Wrapping up our first look at Ashes of the Singularity and DirectX 12 Explicit Multi-Adapter: when Microsoft first unveiled the technology back at BUILD 2015, I figured it would only be a matter of time until someone put together a game utilizing it. After all, Epic and Square already had their tech demos up and running. However, with the DirectX 12 ecosystem still coming together here in the final months of 2015 – and that goes for games as well as drivers – I wasn’t expecting something quite this soon.

As it stands, the Ashes of the Singularity multi-GPU tech demo is just that: a tech demo for a game that is itself still in alpha testing. There are still optimizations to be made and numerous bugs to be squashed. But despite all of that, seeing AMD and NVIDIA video cards working together to render a game is damn impressive.

Seeing as this build of Ashes is a tech demo, I’m hesitant to read too much into the precise benchmark numbers we’re seeing. That said, the fact that the fastest multi-GPU setup was a mixed AMD/NVIDIA configuration was something I wasn’t expecting, and it definitely makes things all the more interesting. DirectX 11 games are going to be around for a while longer yet, so we’re likely still some time away from a mixed-GPU gaming setup being truly viable, but it will be interesting to see just what Oxide and other developers can pull off with explicit multi-adapter as they become more familiar with the technology and implement more advanced rendering modes.

Meanwhile it’s interesting to note just how far the industry as a whole has come since 2005 or even 2010. GPU architectures have become increasingly similar and tighter API standards have greatly curtailed the number of implementation differences that would prevent interoperability. And with Explicit Multi-Adapter, Microsoft and the GPU vendors have laid down a solid path for allowing game developers to finally tap the performance of multiple GPUs in a system, both integrated and discrete.

The timing couldn’t be any better either. With integrated GPUs having consumed the low-end GPU market, and with both CPU vendors devoting more die space than ever to their respective integrated GPUs, using a discrete GPU leaves an increasingly large amount of silicon unused in the modern gaming system. Explicit multi-adapter isn’t a silver bullet for that problem, but it is a means of finally putting the integrated GPU to good use even when it’s not a system’s primary GPU.

With that said, it’s important to note that what happens from here is ultimately more in the hands of game developers than hardware developers. Given the nature of the explicit API, it’s now the game developers who have to do most of the legwork of implementing multi-GPU support, and I’m left to wonder how many of them are up to the challenge. Hardware vendors have an obvious interest in promoting and developing multi-GPU technology in order to sell more GPUs – which is how we got SLI and CrossFire in the first place – but software developers don’t have that same incentive.

Ultimately as gamers all we can do is take a wait-and-see approach to the whole matter. But as DirectX 12 game development ramps up, I am cautiously optimistic that positive experiences like Ashes will help encourage other developers to plan for multi-adapter support as well.

Comments

  • Gigaplex - Tuesday, October 27, 2015 - link

Intel's top iGPUs can beat AMD's top ones, but expect to pay a premium.
  • loguerto - Sunday, November 1, 2015 - link

I love how Intel managed to implement on-die RAM as a workaround for the huge DDR3 bottleneck. I wonder why AMD did not choose to do the same, as their 7870K is evidently bottlenecked by DDR3. Is there a cost problem, or are they waiting to switch directly to HBM memory?
  • CiccioB - Tuesday, October 27, 2015 - link

A test with the Titan X as the master card would be interesting. It may show whether the sync problem is HW or SW related.
    Tests with low-tier cards should be run at 1080p. The GTX 680 has never been that good at higher resolutions, so maybe testing at Full HD would better level both graphics cards' performance and show different results with mixed cards.

BTW, NVIDIA cards/drivers are not optimized for PCI-e transfers, as they use proprietary connectors for SLI and synchronization, while AMD cards use PCI-e transfers to do all of the above. Maybe the problem is that.
    It would also be interesting to see how these mixes work when used on slower PCI-e lanes. You know, not all PCs have PCI-e 3.0 or run at 16x.

Specific results apart (they will most probably change with driver updates), it is interesting to see that this particular feature works.
  • VarthDaver - Tuesday, October 27, 2015 - link

    Can we also get this? "In conjunction with last week’s Steam Early Access release of the game, Oxide has sent over a very special build of Ashes." I have had access to Ashes for a while but do not see the AFR checkbox in my version to match their special one. I would be happy to provide some 2x TitanX performance numbers if I could get a copy with AFR enabled.
  • Ryan Smith - Tuesday, October 27, 2015 - link

    As I briefly mention elsewhere, AFR support is very much an experimental feature in Ashes at the moment. Oxide has mentioned elsewhere that they will eventually push it out in public builds, but not until the feature is in a better state.
  • silverblue - Tuesday, October 27, 2015 - link

    That's correct as regards the 290, but the 7970 uses a CrossFire bridge.
  • MrPoletski - Tuesday, October 27, 2015 - link

What about integrated graphics solutions? It'd be nice to see what this does to our potential CPU choice. Can we see a top-of-the-line Intel CPU vs a top-of-the-line AMD CPU now, and see how each one's iGPU helps out with a 980 Ti/Fury X?
  • CiccioB - Tuesday, October 27, 2015 - link

I suggest that you, and all the others who keep suggesting such tests or fantasizing about hybrid systems, first understand how AFR works, so you can see for yourself why it is useless to use an iGPU with it.
  • Gigaplex - Tuesday, October 27, 2015 - link

And perhaps you should read the article, where it explicitly states that AFR isn't the only form of multi-GPU load sharing. The iGPU could do post-processing, such as deferred rendering of lighting. It's not implemented in this particular benchmark yet, but it's been demonstrated in the Unreal engine.
  • Harry Lloyd - Tuesday, October 27, 2015 - link

    I do not see this ever being practical. I would rather see the results of split frame rendering on two identical GPUs, that seems to have real potential.
