Closing Thoughts

Wrapping up our second look at Ashes of the Singularity and our third overall look at Oxide's Nitrous engine, it's interesting to see where things have changed and where they have stayed the same.

Thanks to the general performance optimizations made since our initial look at Ashes, the situation for multi-GPU via DirectX 12 explicit multi-adapter is both very different and very similar. On an absolute basis it's now a lot harder to max out a multi-GPU configuration; with reasonable quality settings we're CPU limited even at 4K, requiring that we increase the rendering quality further. This more than anything else handily illustrates just how much performance has improved since the last beta. On the other hand, it's still the most unusual pairing – a Radeon R9 Fury X with a GeForce GTX 980 Ti – that delivers the best multi-GPU performance, which just goes to show what RTG and NVIDIA can accomplish when working together.
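For readers curious what this looks like on the developer side, the gist of DirectX 12's unlinked explicit multi-adapter mode is that the application enumerates every DXGI adapter in the system and creates an independent D3D12 device on each one, regardless of vendor; dividing the frame and moving data between GPUs is then entirely the application's job. Below is a minimal sketch of that setup – our own illustrative helper, not Oxide's actual code:

    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    // Create an independent D3D12 device on every hardware adapter in the
    // system. Under unlinked explicit multi-adapter the application owns each
    // device separately and must divide work and copy results between them.
    std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
    {
        std::vector<ComPtr<ID3D12Device>> devices;

        ComPtr<IDXGIFactory4> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return devices;

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            DXGI_ADAPTER_DESC1 desc = {};
            adapter->GetDesc1(&desc);
            if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
                continue; // skip WARP and other software adapters

            // Vendor is irrelevant here: a Fury X and a 980 Ti each simply
            // become another device for the application to feed.
            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                devices.push_back(device);
        }
        return devices;
    }

The key point is that under explicit multi-adapter the division of labor is the application's responsibility rather than the driver's, which is exactly why heterogeneous pairings like the Fury X and 980 Ti are possible at all.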

As for the single GPU configurations, I’m not sure things as they currently stand could be any more different. NVIDIA cards have very good baseline DX11 performance in Ashes of the Singularity, but they mostly gain nothing from Ashes’ DX12 rendering path. RTG cards on the other hand have poorer DX11 performance, but they gain a significant amount of performance from the DX12 rendering path. In fact they gain so much performance that against traditional competitive lineups (e.g. Fury X vs. 980 Ti), the RTG cards are well in the lead, which isn’t usually the case elsewhere.

Going hand-in-hand with DX12, RTG's cards are the only products to consistently benefit from Ashes' improved asynchronous shading implementation. Whereas our NVIDIA cards see a very slight regression (with NVIDIA telling us that async shading is not currently enabled in their drivers), the Radeons improve in performance, especially the top-tier Fury X. This by itself isn't wholly surprising given some of our theories about the Fury X's strengths and weaknesses, but for Ashes of the Singularity it further compounds the other DX12 performance gains for RTG.
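To put asynchronous shading in concrete terms: under DirectX 12 an application can create a compute queue alongside its graphics queue and submit work to both, and whether that work actually executes concurrently is up to the hardware and driver – which is precisely where GCN and Maxwell 2 part ways today. A minimal sketch of the queue setup (our own hypothetical helper, not Oxide's code):

    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Create a graphics (direct) queue and a separate compute queue on the
    // same device. D3D12 only promises that the two queues can be fed
    // independently; whether compute work actually overlaps with graphics
    // work is decided by the hardware and driver, not by the API.
    HRESULT CreateQueues(ID3D12Device* device,
                         ComPtr<ID3D12CommandQueue>& graphicsQueue,
                         ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
        HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
        if (FAILED(hr))
            return hr;

        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
        return device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    }

Cross-queue dependencies are then expressed with ID3D12Fence signals and waits, so a driver that cannot overlap the two workloads may legally serialize them – consistent with the slight regression we measure on NVIDIA hardware.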

Ultimately Ashes gives us a very interesting look at the state of DirectX 12 performance for both RTG and NVIDIA cards, though no more and no less. As we stated at the start of this article, this is beta software and performance is subject to change – not to mention the overall sample size is just one game – but it is a start. For RTG this certainly lends support to their promotion of and expectations for DirectX 12, and it should be interesting to see how things shape up in March and beyond, once the gold version of Ashes is released and, past that, even more DirectX 12 games arrive.

Comments

  • BurntMyBacon - Thursday, February 25, 2016

    @anubis44: "nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon."

    I do believe you are correct. Given nVidia's proficiency at throwing driver optimizations at games and the reduced ability to do so on the DX12 code path, I'd say this will be quite damaging. They've lost one clear advantage they held (at least in DX11).

    @anubis44: "It's beginning to look like nVidia's been check-mated by AMD here."

    I wouldn't go that far. They probably won't have the necessary hardware in Pascal, but you can be sure Volta will have what it needs. Besides, most games will likely have a DX11 code path for the foreseeable future, as developers won't want to lock themselves out of an entire market. Also, nVidia can still play DX12 fine at the moment; they just don't appear to have the advantage, given the small sample set of available data points.

    In conclusion, it is more like they have lost a rook or queen. Of course, they've taken a few of ATi's pieces as well, so let's just wait and see who plays their remaining pieces better.
  • rhysiam - Thursday, February 25, 2016

    The other thing I would add to this is that it's not like Nvidia have nowhere to go here. Take the GTX 970 vs. the R9 390, for example... they're in a similar price and performance tier. Yet the 970 is smaller with fewer transistors (usually meaning it's cheaper to produce) and generally has much higher overclocking headroom (because Nvidia wasn't under pressure to clock the card closer to its limit to reach relevant performance). So it's reasonable to expect Nvidia could both lower the price and clock it higher to get a significantly better value card with basically no substantive engineering/architectural changes.

    I'm not suggesting Nvidia will do that with the 970 specifically. Rather, what I'm saying is that if they find Pascal is similarly behind AMD, they've got plenty of room to tweak performance and price before we can start calling them "check-mated". But it's certainly good news for us if DX12 performance like this continues and AMD essentially forces Nvidia to lower its margins.
  • CiccioB - Sunday, February 28, 2016

    They can do exactly as AMD has done with GCN: they can just start using 30% or 50% bigger GPUs to close the performance gap if they really need to.
  • The_Countess - Thursday, February 25, 2016

    nvidia's entire performance advantage in DX11 is based on game-specific driver optimizations. they have a virtual army of developers slaving away on those (and coming up with ways to hurt everyone's performance, as long as it hurts AMD the most or makes their own latest-gen cards look better... but that's a different matter)

    with DX12 however the driver becomes MUCH thinner and doesn't have nearly as much influence. so basically nvidia's main competitive advantage is gone with dx12 and vulkan.

    as for being relevant: this year pretty much every game where performance matters will have either a DX12 or Vulkan render option. add in the fact that AMD cards generally age better than nvidia's (those game-specific optimizations focus pretty much exclusively on their latest generation of cards) and i would say that yes, it is very relevant.
  • BurntMyBacon - Thursday, February 25, 2016

    @The_Countess: "nvidia's entire performance advantage in DX11 is based on game specific driver optimizations. they have a virtual army of developers slaving away on those ..."

    True, they have lost a large advantage. Keep in mind, though, that nVidia's developer relations are still in play. What they once achieved through driver optimizations may still be accomplished through code path optimization and design guidance tailored to nVidia's architecture. The first beta for Vulkan (The Talos Principle) showed that merely replacing a high-level API (OpenGL/DX11) with a low-level one (Vulkan/DX12) does not automatically improve the experience. If nVidia can convince developers to avoid certain non-optimal features, or to program in such a way as to take better advantage of nVidia hardware in their titles (for the sake of performance on the majority of discrete card owners out there, of course), then ATi will be in the same position as they are now: better hardware, worse software support. Then again, low-level API cross-platform titles will most assuredly program to take advantage of the console architectures, which happen to be ATi's at the moment.
  • nevcairiel - Wednesday, February 24, 2016

    Considering the Fury X has just a tad more raw power than an (older) 980 Ti, I would say the DX12 numbers are fine, and what this is really showing is AMD's lack of performance in DX11?
  • tuxRoller - Wednesday, February 24, 2016

    I don't agree with this. I think this is more a case of nvidia not being able to rely so much on the ENORMOUS number of special cases in their driver.
    IOW, this is about two things: hardware and game design. The drivers are trivial next to d3d11/ogl.
  • jasonelmore - Wednesday, February 24, 2016

    The Fury X's architecture is much newer than Maxwell 2's. Let's see what the true DX12 cards can do this summer.
  • tuxRoller - Wednesday, February 24, 2016

    Did you not notice the across-the-board improvements for all GCN cards?
    The point I was making, and that others have made for some time, is that AMD makes really good hardware, but this is typically masked by poor drivers.
    You can see this by looking at their excellent performance in compute workloads where the code in the driver is more recent and doesn't have the legacy cruft of their d3d/ogl code.
  • Despoiler - Thursday, February 25, 2016

    It's not their drivers. It's purely architectural. GCN moved its schedulers into hardware, so GCN requires the API to be able to feed it enough work. What people have been calling "driver overhead" is nothing of the sort: DX11 is simply not capable of fully utilizing AMD hardware. DX12 is, and that is why AMD created Mantle. Mantle forced MS to create DX12, and that set off the creation of Vulkan. All of the next-gen APIs are tailored to exploit the hardware AMD is already selling.
