Ashes of the Singularity: Unlinked Explicit Multi-Adapter with AFR

Based on Oxide’s Nitrous engine, Ashes of the Singularity will be the first full game released using the engine. Oxide previously used Nitrous in their Star Swarm technical demo, which showcased the benefits of vastly improved draw call throughput under Mantle and DirectX 12. As one might expect then, for their first retail game Oxide is building around Nitrous’s DX12 capabilities, with an eye towards putting a large number of draw calls to good use and developing something that might not have looked as good under DirectX 11.

That resulting game is Ashes of the Singularity, a massive-scale real-time strategy game. Ashes is a spiritual successor of sorts to 2007’s Supreme Commander, a game with a reputation for its technical ambition. As with Supreme Commander, Oxide is aiming high with Ashes, and while the current alpha is far from optimized, they have made it clear that even the final version of the game will push CPUs and GPUs hard. Between a complex game simulation (including ballistic and line of sight checks for individual units) and the rendering resources needed to draw all of those units and their weapons effects in detail over a large playing field, I’m expecting the final version of Ashes to be the most demanding RTS we’ve seen in years.

Because of its high resource requirements Ashes is also a good candidate for multi-GPU scaling, and for this reason Oxide is working on implementing DirectX 12 explicit multi-adapter support in the game. For Ashes, Oxide has opted to start by implementing support for unlinked mode, both because this is a building block for implementing linked mode later on and because, from a tech demo point of view, it allows Oxide to demonstrate unlinked mode’s most nifty feature: the ability to utilize multiple dissimilar (heterogeneous) GPUs within a single game. EMA with dissimilar GPUs has been shown off in bits and pieces at developer events like Microsoft’s BUILD, but this is the first time an in-game demo has been made available outside of those conferences.
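
To put the unlinked model in more concrete terms, below is a minimal sketch (our own illustration, not Oxide’s code) of how a DirectX 12 application enumerates every adapter in a system and creates an independent device on each one. In unlinked mode nothing is shared implicitly between these devices, which is precisely why any mix of DX12-capable GPUs, regardless of vendor, can participate.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every DXGI adapter and create an independent D3D12 device on each.
// Under unlinked explicit multi-adapter the devices share nothing implicitly;
// the application alone decides how work and resources are split between them.
std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // Any DX12-capable GPU qualifies, which is what makes
            // AMD + NVIDIA pairings possible in the first place.
            devices.push_back(device);
        }
    }
    return devices;
}
```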

In order to demonstrate EMA and explicit synchronization in action, Oxide has started things off with a basic alternate frame rendering (AFR) implementation for the game. As we briefly mentioned in our technical overview of DX12 explicit multi-adapter, EMA puts developers in full control of the rendering process, which for Oxide meant implementing AFR from scratch. This includes assigning frames to each GPU, handling frame buffer transfers from the secondary GPU to the primary GPU, and most important of all, controlling frame pacing, which is typically the hardest part of AFR to get right.
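
At a high level, an EMA-based AFR loop looks something like the sketch below. This is strictly illustrative and not Oxide’s implementation: the helpers (SubmitFrame, CopyFrameToPrimary, PaceAndPresent) are hypothetical stand-ins for much larger engine code. The important parts are that the application itself picks which GPU renders each frame, uses a shared fence to synchronize the two command queues, and explicitly copies frames rendered on the secondary GPU over to the primary GPU for presentation.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// One rendering context per physical GPU. The shared fence is created once
// (with D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER) and
// opened on the other device via CreateSharedHandle/OpenSharedHandle, so both
// command queues see the same synchronization object.
struct GpuContext
{
    ComPtr<ID3D12Device>       device;
    ComPtr<ID3D12CommandQueue> queue;
    ComPtr<ID3D12Fence>        sharedFence;    // same underlying fence on both devices
    UINT64                     fenceValue = 0;
};

// Hypothetical helpers standing in for the real engine code.
void SubmitFrame(GpuContext& gpu, UINT64 frame);             // record + execute command lists
void CopyFrameToPrimary(GpuContext& primary, UINT64 frame);  // copy shared render target to the back buffer
void PaceAndPresent(GpuContext& primary, UINT64 frame);      // meter presents so frames are evenly spaced

void RenderLoopAFR(GpuContext& primary, GpuContext& secondary)
{
    for (UINT64 frame = 0; ; ++frame)
    {
        // The application, not the driver, decides which GPU owns this frame.
        GpuContext& renderer = (frame & 1) ? secondary : primary;

        SubmitFrame(renderer, frame);

        if (&renderer == &secondary)
        {
            // Explicit synchronization: the secondary queue signals when its
            // frame is finished, and the primary queue waits on that value
            // before copying the frame across PCIe for presentation.
            const UINT64 done = ++secondary.fenceValue;
            secondary.queue->Signal(secondary.sharedFence.Get(), done);
            primary.queue->Wait(primary.sharedFence.Get(), done);
            CopyFrameToPrimary(primary, frame);
        }

        // Frame pacing is also the application's job under EMA.
        PaceAndPresent(primary, frame);
    }
}
```

Frame pacing lives in that last step: because the API does nothing to space out presents from the two GPUs, the application has to meter them itself, which is exactly the part Oxide describes as hardest to get right.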

Because Oxide is using a DX12 EMA AFR implementation here, Ashes gets quite a bit of flexibility as far as GPU compatibility goes. From a performance standpoint the basic limitations of AFR are still in place: because each GPU is tasked with rendering a whole frame, all of the GPUs in use need to be close in performance for the best results. Otherwise, Oxide is able to support a wide variety of GPUs with one generic implementation. This includes not only AMD/AMD and NVIDIA/NVIDIA pairings, but also GPU pairings that wouldn’t typically work for CrossFire and SLI (e.g. GTX Titan X + GTX 980 Ti). Most importantly of course, it also allows Ashes to use an AMD video card and an NVIDIA video card together. In fact, beyond the aforementioned performance limitations, Ashes’ AFR mode should work on any two DX12-compliant GPUs.

From a technical standpoint, Oxide tells us that they’ve had a bit of a learning curve in getting EMA working for Ashes – particularly since they’re the first to do so – but that they’re happy with the results. Obviously the fact that this even works is itself a major accomplishment, and in our experience frame pacing with v-sync disabled and tearing enabled feels smooth on the latest generation of high-end cards. Otherwise Oxide is still experimenting with the limits of the hardware and the API; so far they’ve found that there’s plenty of bandwidth over PCIe for shared textures, though they’re incurring a roughly 2ms penalty when transferring data between GPUs.
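
For a rough sense of scale, here is our own back-of-the-envelope math (the resolution and effective PCIe bandwidth are our assumptions, not figures from Oxide): pushing a full 4K frame buffer over a PCIe 3.0 x16 link lands in the same ballpark as the roughly 2ms penalty Oxide is reporting.

```cpp
// Back-of-the-envelope estimate (our assumptions, not Oxide's measurements):
// copying one 3840x2160 RGBA8 frame buffer across a PCIe 3.0 x16 link.
constexpr double kFrameBytes      = 3840.0 * 2160.0 * 4.0;  // ~33.2 MB per frame
constexpr double kPcieBytesPerSec = 12.0e9;                 // ~12 GB/s usable of the ~15.75 GB/s theoretical peak
constexpr double kTransferMs      = kFrameBytes / kPcieBytesPerSec * 1000.0;  // ~2.8 ms, the same order as the ~2ms figure
```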

With that said, and to be very clear here, the game itself is still in an alpha state, and the multi-adapter support is not even at alpha (ed: nor is it in the public release at this time). So Ashes’ explicit multi-adapter support is a true tech demo, intended first and foremost to show off the capabilities of EMA rather than what performance will be like in the retail game. As it stands, the AFR-enabled build of Ashes occasionally crashes at load time for no obvious reason. Furthermore there are stability/corruption issues with newer AMD and NVIDIA drivers, which have required us to use slightly older drivers that have been validated to work. Overall, while AMD and NVIDIA have their DirectX 12 drivers up and running, as has been the case with past API launches it’s going to take some time for the two GPU firms to lock down every new feature of the API and driver model and to fully knock out all of their driver bugs.

Finally, Oxide tells us that going forward they will be developing support for additional EMA modes in Ashes. Once the current unlinked EMA implementation is stabilized, the next item on their list will be adding support for linked EMA for better performance on similar GPUs. Oxide is still exploring linked EMA, but somewhat surprisingly they tell us that unlinked EMA already unlocks much of the performance of their AFR implementation; a linked EMA implementation may only improve multi-GPU scaling by a further 5-10%. Beyond that, they will also be looking into alternative implementations of multi-GPU rendering (e.g. work sharing of individual frames), though that is farther off and will likely hinge on other factors such as hardware capabilities and the state of DX12 drivers from each vendor.
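
For reference, linked mode changes the shape of the code more than the amount of work: a linked pair is exposed as a single device with multiple nodes, and work is steered to individual GPUs through node masks. Below is a minimal sketch of the idea; again this is our own illustration, not Ashes’ linked-mode path, which does not yet exist.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Under linked explicit multi-adapter an SLI/CrossFire-style pairing appears
// as one ID3D12Device with multiple nodes. This creates one direct command
// queue per node (physical GPU); resources can similarly be created on one
// node and made visible to others via CreationNodeMask / VisibleNodeMask.
std::vector<ComPtr<ID3D12CommandQueue>> CreatePerNodeQueues(ID3D12Device* device)
{
    std::vector<ComPtr<ID3D12CommandQueue>> queues;
    const UINT nodeCount = device->GetNodeCount();  // 2 for a linked pair

    for (UINT node = 0; node < nodeCount; ++node)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node;                 // bit N targets physical GPU N

        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        queues.push_back(queue);
    }
    return queues;
}
```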

The Test

For our look at Ashes’ multi-adapter performance, we’re using Windows 10 with the latest updates on our GPU testbed. This provides plenty of CPU power for the game, and we’ve selected sufficiently high settings to ensure that we’re GPU-bound at all times.

For GPUs we’re using NVIDIA’s GeForce GTX Titan X and GTX 980 Ti, along with AMD’s Radeon R9 Fury X and R9 Fury for the bulk of our testing. As roughly comparable cards in price and performance, the GTX 980 Ti and R9 Fury X are our core comparison cards, with the additional GTX and Fury cards to back them up. Meanwhile we’ve also done a limited amount of testing with the GeForce GTX 680 and Radeon HD 7970 to showcase how well Ashes’ multi-adapter support works on older cards.

Finally, on the driver side of matters we’re using the most recent drivers from AMD and NVIDIA that work correctly in multi-adapter mode with this build of Ashes. For AMD that’s Catalyst 15.8 and for NVIDIA that’s release 355.98. We’ve also thrown in single-GPU results with the latest drivers (Catalyst 15.10 and release 358.50 respectively) to quickly showcase where single-GPU performance stands with these newest drivers.

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: ASUS PQ321
Video Cards: AMD Radeon R9 Fury X
             ASUS STRIX R9 Fury
             AMD Radeon HD 7970
             NVIDIA GeForce GTX Titan X
             NVIDIA GeForce GTX 980 Ti
             NVIDIA GeForce GTX 680
Video Drivers: NVIDIA Release 355.98
               NVIDIA Release 358.50
               AMD Catalyst 15.8 Beta
               AMD Catalyst 15.10 Beta
OS: Windows 10 Pro
Comments

  • jimjamjamie - Tuesday, October 27, 2015 - link

    [pizza-making intensifies]
  • geniekid - Monday, October 26, 2015 - link

    On one hand the idea of unlinked EMA is awesome. On the other hand, I have to believe 95% of developers will shy away from implementing anything other than AFR in their game due to the sheer amount of effort the complexity would add to their QA/debugging process. If Epic manages to pull off their post-processing offloading I would be very impressed.
  • DanNeely - Monday, October 26, 2015 - link

    I'd guess it'd be the other way around. SLI/XFire AFR is complicated enough that it's normally only done for big budget AAA games. Other than replacing two vendor APIs with a single OS API DX12 doesn't seem to offer a whole lot of help there; so I don't expect to see a lot change.

    Handing off the tail end of every frame seems simpler; especially since the frame pacing difficulties that make AFR so hard and require a large amount of per game work won't be a factor. This sounds like something that could be baked into the engines themselves, and that shouldn't require a lot of extra work on the game devs part. Even if it ends up only being a modest gain for those of us with mid/high end GPUs; it seems like it could end up being an almost free gift.
  • nightbringer57 - Monday, October 26, 2015 - link

    That's only half relevant.
    I wonder how much can be implemented at the engine level. This kind of thing may be at least partially transparent to devs if, say, Unreal Engine and Unity get compatibility for it... I don't know how much it can do, though.
  • andrewaggb - Monday, October 26, 2015 - link

    Agreed, I would hope that if the Unreal Engine, Unity, Frostbite etc support it that maybe 50% or more of new games will support it.

    We'll have to see though. The idea of having both an AMD and an NVIDIA card in the same machine is both appealing and terrifying. Occasionally games work better on one than the other, so you might avoid some pain sometimes, but I'm sure you'd get a whole new set of problems sometimes as well.

    I think making use of the iGPU and discrete cards is probably the better scenario to optimize for. (Like Epic is apparently doing)
  • Gigaplex - Monday, October 26, 2015 - link

    Problems such as NVIDIA intentionally disabling PhysX when an AMD GPU is detected in the system, even if it's not actively being used.
  • Friendly0Fire - Monday, October 26, 2015 - link

    It really depends on a lot of factors I think, namely how complex the API ends up being.

    For instance, I could really see shadow rendering being offloaded to one GPU. There's minimal crosstalk between the two GPUs, the shadow renderer only needs geometry and camera information (quick to transfer/update) and only outputs a single frame buffer (also very quick to transfer), yet the process of shadow rendering is slow and complex and requires extremely high bandwidth internally, so it'd be a great candidate for splitting off.

    Then you can also split off the post-processing to the iGPU and you've suddenly shaved maybe 6-8ms off your frame time.
  • Oogle - Monday, October 26, 2015 - link

    Yikes. Just one more exponential factor to add when doing benchmarks. More choice is great for us consumers. But reviews and comparisons are going to start looking more complicated. I'll be interested to see how guys will make recommendations when it comes to multi-gpu setups.
  • tipoo - Monday, October 26, 2015 - link

    Wow, seems like a bigger boost than I had anticipated. Will be nice to see all that unused silicon (in dGPU environments) getting used.
  • gamerk2 - Monday, October 26, 2015 - link

    As this test is a smaller number of combinations it’s not clear where the bottlenecks are, but it’s none the less very interesting how we get such widely different results depending on which card is in the lead. In the GTX 680 + HD 7970 setup, either the GTX 680 is a bad leader or the HD 7970 is a bad follower, and this leads to this setup spinning its proverbial wheels. Otherwise letting the HD 7970 lead and GTX 680 follow sees a bigger performance gain than we would have expected for a moderately unbalanced setup with a pair of cards that were never known for their efficient PCIe data transfers. So long as you let the HD 7970 lead, at least in this case you could absolutely get away with a mixed GPU pairing of older GPUs.


    Drivers. Pretty much that simple. Odds are, the NVIDIA drivers are treating the HD 7970 the same way they're treating the 680 GTX, which will result in performance problems. AMD and NVIDIA use very different GPU architectures, and you're seeing it here. NVIDIA is probably attempting to utilize the 7970 in a way it just can't handle.

    I'd be very interested to see something like 680/Titan, or some form of lower/newer setup, which is what most people would actually use this for (GPU upgrade).
