The Performance Impact of Asynchronous Shading

Finally, let’s take a look at Ashes’ latest addition to its stable of DX12 headlining features: asynchronous shading/compute. While earlier betas of the game implemented a very limited form of async shading, this latest beta contains a newer, more complex implementation of the technology, inspired in part by Oxide’s experiences with multi-GPU. As a result, async shading will potentially have a greater impact on performance than in earlier betas.
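
For readers unfamiliar with what the feature means at the API level, here is a minimal, hypothetical D3D12 sketch (the names and structure are ours, not Oxide’s code): asynchronous shading amounts to creating a second, compute-only command queue alongside the usual graphics queue and submitting shader work to it, with the driver and hardware deciding whether the two streams actually execute concurrently.

```cpp
// Hypothetical sketch: the two D3D12 command queues behind "async shading".
// Error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute, and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Compute queue: compute and copy work only. Submitting shader work here
    // is what "asynchronous shading" means at the API level; whether it
    // actually overlaps with graphics is up to the driver and hardware.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```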

Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.

[Chart: Ashes of the Singularity (Beta) - High Quality - Async Shader Performance]

Since async shading is turned on by default in Ashes, what we’re essentially doing here is measuring the penalty for turning it off. Not unlike the DirectX 12 vs. DirectX 11 situation – and possibly even contributing to it – what we find depends heavily on the GPU vendor.
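
To make concrete what that toggle amounts to, a hypothetical sketch follows (our own simplification, not Ashes’ actual code): with async shading enabled, the frame’s compute passes are submitted to the separate compute queue; with it disabled, the same dispatches are recorded into a direct-type command list and executed in line with the graphics work on the main queue.

```cpp
// Hypothetical sketch of an async shading on/off switch. The two command
// lists are assumed to contain the same compute dispatches, one recorded as
// D3D12_COMMAND_LIST_TYPE_COMPUTE and one as D3D12_COMMAND_LIST_TYPE_DIRECT,
// since a command list can only execute on a queue of a matching type.
#include <d3d12.h>

void SubmitComputePasses(bool asyncShadingEnabled,
                         ID3D12CommandQueue* graphicsQueue,
                         ID3D12CommandQueue* computeQueue,
                         ID3D12GraphicsCommandList* asyncComputeList,
                         ID3D12GraphicsCommandList* inlineComputeList)
{
    if (asyncShadingEnabled)
    {
        // Compute queue submission: the GPU may run this concurrently
        // with whatever the graphics queue is doing.
        ID3D12CommandList* lists[] = { asyncComputeList };
        computeQueue->ExecuteCommandLists(1, lists);
    }
    else
    {
        // Direct queue submission: the same work is serialized with the
        // graphics passes, which is the penalty being measured here.
        ID3D12CommandList* lists[] = { inlineComputeList };
        graphicsQueue->ExecuteCommandLists(1, lists);
    }
}
```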

[Chart: Ashes of the Singularity (Beta) - High Quality - Async Shading Perf. Gain]

All NVIDIA cards suffer a minor regression in performance with async shading turned on. At a maximum of -4% it’s really not enough to justify disabling async shading, but at the same time it means that async shading is not providing NVIDIA with any benefit. With RTG cards, on the other hand, it’s almost always beneficial, with the benefit increasing with the overall performance of the card. In the case of the Fury X this means a 10% gain at 1440p, and though not plotted here, a similar gain at 4K.

These findings do go hand-in-hand with one of the basic performance goals of async shading: improving GPU utilization. At 4096 stream processors the Fury X has the most ALUs of any card on these charts, and given its performance in other games, the numbers we see here lend credence to the theory that RTG isn’t always able to reach full utilization of those ALUs, particularly in Ashes, in which case async shading could be a big benefit going forward.
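
As a rough illustration of how that idle ALU headroom gets filled, the following hypothetical snippet (simplified, with names of our own invention) submits a frame’s compute passes on the compute queue while independent graphics work proceeds on the direct queue, with a fence ensuring the graphics queue only consumes the compute results once they are ready.

```cpp
// Hypothetical sketch: overlapping compute with graphics, synchronized by a fence.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* computeList,   // e.g. lighting/post passes
                 ID3D12GraphicsCommandList* graphicsList,  // work independent of the compute output
                 ID3D12Fence* fence,
                 UINT64& fenceValue)
{
    // Kick off the compute passes on their own queue and signal when done.
    ID3D12CommandList* compute[] = { computeList };
    computeQueue->ExecuteCommandLists(1, compute);
    computeQueue->Signal(fence, ++fenceValue);

    // Graphics work that does not read the compute output can execute at
    // the same time, soaking up ALUs the graphics passes leave idle.
    ID3D12CommandList* graphics[] = { graphicsList };
    graphicsQueue->ExecuteCommandLists(1, graphics);

    // GPU-side wait (no CPU stall): anything submitted to the graphics
    // queue after this point sees the finished compute results.
    graphicsQueue->Wait(fence, fenceValue);
}
```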

As for the NVIDIA cards, that’s a harder read. Is it that NVIDIA already has good ALU utilization? Or is it that their architectures can’t do enough with asynchronous execution to offset the scheduling penalty for using it? Either way, when it comes to Ashes, NVIDIA isn’t gaining anything from async shading at this time.

[Chart: Ashes of the Singularity (Beta) - Extreme Quality - Async Shading Perf. Gain]

Meanwhile pushing our fastest GPUs to their limit at Extreme quality only widens the gap. At 4K the Fury X picks up nearly 20% from async shading – though a much smaller 6% at 1440p – while the GTX 980 Ti continues to lose a couple of percent from enabling it. This outcome is somewhat surprising since at 4K we’d already expect the Fury X to be rather taxed, but clearly there’s quite a bit of shader headroom left unused.

Comments

  • Kouin325 - Friday, February 26, 2016

    Yes, indeed they will be patching DX12 into the game, AFTER all the PR damage from the low benchmark scores is done. Nvidia waved some cash at the publisher/dev to make it a GameWorks title, make it DX11, and to lock AMD out of making a day 1 patch.

    This was done to keep the general gaming public from learning that the Nvidia performance crown will all but disappear or worse under DX12. So they can keep selling their cards like hotcakes for another month or two.

    Also, the Xbox hasn't been moved over to DX12 proper YET, but the DX11.x that the Xbox One has always used is far closer to DX12 than to DX11 for the PC. I think we'll know for sure what the game was developed for after the patch comes out. If the game gets a big performance increase after the DX12 patch then it was developed for DX12, and NV possibly had a hand in the DX11-for-PC release. If the increase is small then it was developed for DX11.

    Reason being that getting the true performance of DX12 takes a major refactor of how assets are handled and pretty major changes to the rendering pipeline. Things that CANNOT be done in a month or two, or however long this patch is taking to come out after release.

    Saying "we support DirectX12" is fairly ease and only takes changing a few lines of code, but you won't get the performance increases that DX12 can bring.
  • Kouin325 - Friday, February 26, 2016

    ugh, I think Firefox had a brainfart, sorry for the TRIPLE post.... *facepalm*
  • Gothmoth - Friday, February 26, 2016

    it's a crap game anyway so who cares?

    Honestly, even if Nvidia were 20% worse I would not buy ATI.
    Not because I'm a fanboy... but I use my GPUs for more than games, and ATI GPUs suck big time when it comes to driver stability in pro applications.
  • D. Lister - Friday, February 26, 2016

    Oxide and their so-called "benchmarks" are a joke. Anyone who takes the aforementioned seriously is just another unwitting victim of AMD's typical underhanded marketing.

    https://scalibq.wordpress.com/2015/09/02/directx-1...
    "And don’t get me started on Oxide… First they had their Star Swarm benchmark, which was made only to promote Mantle (AMD sponsors them via the Gaming Evolved program). By showing that bad DX11 code is bad. Really, they show DX11 code which runs single-digit framerates on most systems, while not exactly producing world-class graphics. Why isn’t the first response of most people as sane as: “But wait, we’ve seen tons of games doing similar stuff in DX11 or even older APIs, running much faster than this. You must be doing it wrong!”?

    But here Oxide is again, in the news… This time they have another ‘benchmark’ (do these guys actually ever make any actual games?), namely “Ashes of the Singularity”.
    And, surprise surprise, again it performs like a dog on nVidia hardware. Again, in a way that doesn’t make sense at all… The figures show it is actually *slower* in DX12 than in DX11. But somehow this is spun into a DX12 hardware deficiency on nVidia’s side. Now, if the game can get a certain level of performance in DX11, clearly that is the baseline of performance that you should also get in DX12, because that is simply what the hardware is capable of, using only DX11-level features. Using the newer API, and optionally using new features should only make things faster, never slower. That’s just common sense."
  • Th-z - Saturday, February 27, 2016

    “But wait, we’ve seen tons of games doing similar stuff in DX11 or even older APIs..."

    Doing similar stuff in DX11? What stuff and what games?

    "The figures show it is actually *slower* in DX12 than in DX11. But somehow this is spun into a DX12 hardware deficiency on nVidia’s side."

    Which figure?

    This is AnandTech; you need to be more specific and provide solid evidence to back up your claims in order to avoid sounding like an astroturfer.
  • D. Lister - Saturday, February 27, 2016

    You see my post? You see that there is this underlined text in blue? Well my friend, it is called a URL, which is an acronym for "Uniform Resource Locator", long story short it is this internet thingy that you go clickity-clickity with your mouse and it opens another page, where you can find the rest of the information.

    Don't worry, the process of opening a new webpage by using a URL may APPEAR quite daunting at first, but with very little practice you could be clicking away like a pro. This is after all "The AnandTech", and everybody is here to help. Heck, who knows if there are more like you out there, I might even make a video tutorial - "Open new webpages in 3 easy steps", or something.

    PS: Another pro tip, there is no such thing as "solid evidence" outside of a court of law. On the internet, you have information resources and reference material, and you have to use your own first-hand knowledge, experience and common sense to differentiate right from wrong.
  • Th-z - Sunday, May 29, 2016

    Your blabbering is as useful as your link. I have a pro tip for you: you gave yourself away.
  • EugenM - Tuesday, June 7, 2016

    @Th-z Don't feed the troll.
  • GeneralTom - Saturday, February 27, 2016

    I hope Metal will be supported, too.
  • HollyDOL - Monday, February 29, 2016

    Hm, from the screenshots posted I honestly can't see why there would be a need to run Dx12 with such "low performance" even on the most elite cards. While I give these guys credit for having the guts to go and develop in a completely new API, the graphics look more like early Dx9 games.
    Just a note, this opinion is based on screenshots, not an actual live render, but still, from what I see there I'd expect FPS hitting 120+ with Dx11...
