Ashes of the Singularity: Unlinked Explicit Multi-Adapter with AFR

Based on Oxide’s Nitrous engine, Ashes of the Singularity will be the first full game released using the engine. Oxide previously used the engine in their Star Swarm technical demo, showcasing the benefits of vastly improved draw call throughput under Mantle and DirectX 12. As one might expect then, for their first retail game Oxide is building around Nitrous’s DX12 capabilities, with an eye towards putting a large number of draw calls to good use and creating something that would not have looked as good under DirectX 11.

That resulting game is Ashes of the Singularity, a massive-scale real time strategy game. Ashes is a spiritual successor of sorts to 2007’s Supreme Commander, a game with a reputation for its technical ambition. Similar to Supreme Commander, Oxide is aiming high with Ashes, and while the current alpha is far from optimized, they have made it clear that even the final version of the game will push CPUs and GPUs hard. Between a complex game simulation (including ballistic and line of sight checks for individual units) and the rendering resources needed to draw all of those units and their weapons effects in detail over a large playing field, I’m expecting that the final version of Ashes will be the most demanding RTS we’ve seen in years.

Because of its high resource requirements Ashes is also a good candidate for multi-GPU scaling, and for this reason Oxide is working on implementing DirectX 12 explicit multi-adapter support into the game. For Ashes, Oxide has opted to start by implementing support for unlinked mode, both because this is a building block for implementing linked mode later on and because from a tech demo point of view this allows Oxide to demonstrate unlinked mode’s most nifty feature: the ability to utilize multiple dissimilar (non-homogeneous) GPUs within a single game. EMA with dissimilar GPUs has been shown off in bits and pieces at developer events like Microsoft’s BUILD, but this is the first time an in-game demo has been made available outside of those conferences.

In order to demonstrate EMA and explicit synchronization in action, Oxide has started things off by using a basic alternate frame rendering implementation for the game. As we briefly mentioned in our technical overview of DX12 explicit multi-adapter, EMA puts developers in full control of the rendering process, which for Oxide meant implementing AFR from scratch. This includes assigning frames to each GPU, handling frame buffer transfers from the secondary GPU to the primary GPU, and most importantly of all controlling frame pacing, which is typically the hardest part of AFR to get right.
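Though Oxide hasn't shared their code, the division of labor described above can be sketched in the abstract. The toy model below is purely illustrative and is not Oxide's implementation: the GPU names, frame times, and pacing rule are all invented. It shows the two core pieces of an AFR scheduler: round-robin frame assignment, and a minimum-interval pacing rule that holds back frames which finish too soon after the previous one.

```python
# Toy model of alternate frame rendering (AFR) scheduling under explicit
# multi-adapter. Illustrative only; not Oxide's code.

def assign_frames(num_frames, gpus):
    """Round-robin frame assignment: frame i goes to GPU i % N."""
    return [gpus[i % len(gpus)] for i in range(num_frames)]

def pace_frames(finish_times_ms, min_interval_ms):
    """Naive frame pacing: never present a frame sooner than
    min_interval_ms after the previous present, even if it finished early.
    This is what smooths out the uneven cadence AFR tends to produce."""
    presents = []
    last = float("-inf")
    for t in sorted(finish_times_ms):
        present = max(t, last + min_interval_ms)
        presents.append(present)
        last = present
    return presents

# Two GPUs alternating frames; hypothetical completion times in ms.
schedule = assign_frames(6, ["gpu0", "gpu1"])
paced = pace_frames([20, 22, 40, 42, 60, 62], min_interval_ms=10)
print(schedule)
print(paced)
```

With the hypothetical uneven finish times above, the pacing rule delays the early frames so presents land at a steady 10 ms cadence rather than in bursts.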

Because Oxide is using a DX12 EMA AFR implementation here, Ashes has quite a bit of flexibility as far as GPU compatibility goes. From a performance standpoint the basic limitations of AFR are still in place – since each GPU is tasked with rendering a whole frame, all utilized GPUs need to be close in performance for best results – but otherwise Oxide is able to support a wide variety of GPUs with one generic implementation. This includes not only AMD/AMD and NVIDIA/NVIDIA pairings, but GPU pairings that wouldn’t typically work for Crossfire and SLI (e.g. GTX Titan X + GTX 980 Ti). Most importantly of course, this allows Ashes to support using an AMD video card and an NVIDIA video card together as well. In fact, beyond the aforementioned performance limitations, Ashes’ AFR mode should work on any two DX12-compliant GPUs.
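The "close in performance" requirement follows directly from how AFR divides work. A quick back-of-the-envelope model (with hypothetical frame times, not measured Ashes numbers) shows why: the slower card sets the pace, so pairing a fast GPU with a much slower one erodes the benefit.

```python
# Back-of-the-envelope model of why AFR wants similarly fast GPUs.
# Frame times are hypothetical, not measured Ashes results.

def afr_scaling(frame_time_fast_ms, frame_time_slow_ms):
    """With two GPUs alternating frames, steady-state throughput is set by
    the slower card: it delivers its frame every frame_time_slow ms, so the
    pair averages one frame per max(fast, slow) / 2 ms. Returns the speedup
    relative to running the fast GPU alone."""
    paired_interval = max(frame_time_fast_ms, frame_time_slow_ms) / 2
    single_interval = frame_time_fast_ms
    return single_interval / paired_interval

print(afr_scaling(20, 20))  # matched cards: 2.0x
print(afr_scaling(20, 40))  # slow card half as fast: 1.0x, no benefit at all
```

In this simplified model a card that is 50% slower (20 ms vs 30 ms) already cuts the theoretical 2.0x scaling down to about 1.33x, before any synchronization or transfer overhead.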

From a technical standpoint, Oxide tells us that they’ve had a bit of a learning curve in getting EMA working for Ashes – particularly since they’re the first to do so – but that they’re happy with the results. Obviously the fact that this even works is itself a major accomplishment, and in our experience frame pacing with v-sync disabled and tearing enabled feels smooth on the latest generation of high-end cards. Otherwise Oxide is still experimenting with the limits of the hardware and the API; they’ve told us that so far they’ve found that there’s plenty of bandwidth over PCIe for shared textures, and meanwhile they’re incurring a roughly 2ms penalty when transferring data between GPUs.
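That ~2ms figure passes a quick sanity check against PCIe bandwidth. In the sketch below the resolution, pixel format, and effective bus bandwidth are our own assumptions, not numbers from Oxide:

```python
# Rough sanity check on the ~2 ms cross-GPU transfer cost.
# Resolution, bytes per pixel, and effective PCIe bandwidth are assumptions,
# not figures provided by Oxide.

def transfer_ms(width, height, bytes_per_pixel, effective_gb_per_s):
    """Time to copy one uncompressed frame buffer over the bus, in ms."""
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes / (effective_gb_per_s * 1e9) * 1e3

# 2560x1440 and 3840x2160 RGBA frames over ~12 GB/s effective PCIe 3.0 x16:
print(round(transfer_ms(2560, 1440, 4, 12.0), 2))  # ~1.23 ms
print(round(transfer_ms(3840, 2160, 4, 12.0), 2))  # ~2.76 ms
```

Either way the result lands in the low single-digit milliseconds, which is consistent with the penalty Oxide reports for shuttling a completed frame from the secondary GPU to the primary one.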

With that said, and to be very clear here: the game itself is still in its alpha state, and the multi-adapter support is not even at alpha (ed: nor is it in the public release at this time). So Ashes’ explicit multi-adapter support is a true tech demo, intended first and foremost to show off the capabilities of EMA rather than what performance will be like in the retail game. As it stands, the current build of Ashes occasionally crashes at load time for no obvious reason when AFR is enabled. Furthermore there are stability/corruption issues with newer AMD and NVIDIA drivers, which has required us to use slightly older drivers that have been validated to work. Overall, while AMD and NVIDIA have their DirectX 12 drivers up and running, as has been the case with past API launches it’s going to take some time for the two GPU firms to lock down every new feature of the API and driver model and to fully knock out all of their driver bugs.

Finally, Oxide tells us that going forward they will be developing support for additional EMA modes in Ashes. As the current unlinked EMA implementation is stabilized, the next thing on their list will be to add support for linked EMA for better performance on similar GPUs. Oxide is still exploring linked EMA, but somewhat surprisingly they tell us that unlinked EMA already unlocks much of the performance of their AFR implementation. A linked EMA implementation in turn may only improve multi-GPU scaling by a further 5-10%. Beyond that, they will also be looking into alternative implementations of multi-GPU rendering (e.g. work sharing of individual frames), though that is farther off and will likely hinge on other factors such as hardware capabilities and the state of DX12 drivers from each vendor.

The Test

For our look at Ashes’ multi-adapter performance, we’re using Windows 10 with the latest updates on our GPU testbed. This provides plenty of CPU power for the game, and we’ve selected sufficiently high settings to ensure that we’re GPU-bound at all times.

For GPUs we’re using NVIDIA’s GeForce GTX Titan X and GTX 980 Ti, along with AMD’s Radeon R9 Fury X and R9 Fury for the bulk of our testing. As roughly comparable cards in price and performance, the GTX 980 Ti and R9 Fury X are our core comparison cards, with the additional GTX and Fury cards to back them up. Meanwhile we’ve also done a limited amount of testing with the GeForce GTX 680 and Radeon HD 7970 to showcase how well Ashes’ multi-adapter support works on older cards.

Finally, on the driver side of matters we’re using the most recent drivers from AMD and NVIDIA that work correctly in multi-adapter mode with this build of Ashes. For AMD that’s Catalyst 15.8 and for NVIDIA that’s release 355.98. We’ve also thrown in single-GPU results with the latest drivers (Catalyst 15.10 and release 358.50 respectively) to quickly showcase where single-GPU performance stands with these newest drivers.

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 Fury X
ASUS STRIX R9 Fury
AMD Radeon HD 7970
NVIDIA GeForce GTX Titan X
NVIDIA GeForce GTX 980 Ti
NVIDIA GeForce GTX 680
Video Drivers: NVIDIA Release 355.98
NVIDIA Release 358.50
AMD Catalyst 15.8 Beta
AMD Catalyst 15.10 Beta
OS: Windows 10 Pro
Comments

  • IKeelU - Monday, October 26, 2015 - link

    We've come a hell of a long way since Voodoo SLI.

    Leaving it up to developers is most definitely a good thing, and I'm not just saying that as hindsight on the article. We'll always be better off not depending on a small cadre of developers in Nvidia/AMD's driver departments determining SLI performance optimizations. Based on what I'm reading here, the field should be much more open. I can't wait to see how different dev houses deal with these challenges.
  • lorribot - Monday, October 26, 2015 - link

    Generally speaking leaving it up to developers is a bad thing, you will end up with lots of fragmentation, patchy/incomplete implementation and a whole new level of instability, that is why DirectX came about in the first place.
    I just hope this doesn't break more than it can fix.
    We need an old school 50% upgrade to the hardware capability to deliver 4K at reasonable price point, but I don't see that coming any time soon judging by the last 3 or 4 years of small incremental steps.
    All of this is the industry recognising its inability to deliver hardware and wringing every last drop of performance from the existing equipment/nodes/architecture.
  • McDamon - Tuesday, October 27, 2015 - link

    Really? I'm a developer, so I'm biased, but to me, leaving it up to the developer is what drives the innovation in this space. DirectX, much like OpenGL, was conceived to standardize APIs across devices – Glide and such. In fact, as is obvious, both APIs have moved away from the fixed function pipeline to a programmable model to allow for developer flexibility, not hinder it. Sure, there will be challenges for the first few tries with the new model, but that's why companies hire smart people, right?
  • CiccioB - Tuesday, October 27, 2015 - link

    Slow incremental steps during the last 3-4 years?
    You must be speaking about AMD only, as NVIDIA has made great progress from the GTX 680 to the GTX 980 Ti, both in terms of performance and power consumption. All of this on the same process node.
  • loguerto - Sunday, November 1, 2015 - link

    You are hugely underestimating the GCN architecture. NVIDIA might have had a jump from Kepler to Maxwell in terms of efficiency (in part by cutting down double precision performance), but even with only a slightly improved GCN architecture AMD competes in DX11 and often outperforms Maxwell in the latest DX12 benchmarks. And when I say that, I invite everyone to look at the entire GPU lineup and not only the 980 Ti vs Fury X benchmarks.
  • IKeelU - Tuesday, October 27, 2015 - link

    Your first statement is almost entirely wrong: a) we already have fragmentation in the form of different hardware manufacturers and driver streams. b) common solutions will be created in the form of licensed engines, c) the people currently solving these problems *are* developers, they just work for Nvidia and AMD, instead of those directly affected by the quality of end-product (game companies).

    Your contention that solutions should be closed off only really works when there's a clearly dominant and common solution to the problem. As we've learned over the last 15 years, there simply isn't. Every game release triggers a barrage of optimizations from the various driver teams. That code is totally out of scope - it should be managed by the concerned game company, not Nvidia/AMD/Intel.
  • callous - Monday, October 26, 2015 - link

    Why not test with an Intel iGPU + Fury? It's more of a mainstream configuration than 2 video cards
  • Refuge - Tuesday, October 27, 2015 - link

    I believe it is too large of a performance gap, it would just hamstring the Fury.
  • nagi603 - Monday, October 26, 2015 - link

    NVIDIA already forcefully disabled using one of their cards as a PhysX add-in card alongside an AMD main GPU. When will they try to disable this extra feature?
  • silverblue - Tuesday, October 27, 2015 - link

    They may already have; then again, there could be a legitimate reason for the less than stellar performance with an AMD card as the slave.
