DirectX 12 vs. DirectX 11

Now that we’ve had the chance to look at DirectX 12 performance, let’s take a look at things with DirectX 11 thrown into the mix. As a reminder, while the two rendering paths are graphically identical, the DirectX 12 path introduces that API’s multi-core scalability along with asynchronous shading functionality. The game and the underlying Nitrous engine are designed to take advantage of both, but particularly the multi-core functionality, as the game pushes some very high batch counts.
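To illustrate what that multi-core scalability means at the API level, below is a minimal C++ sketch of the Direct3D 12 pattern involved. This is our own illustrative code, not anything from the Nitrous engine; it assumes a device and command queue already exist, records no real draw calls, and omits fence synchronization and error handling. The key point is that each worker thread records batches into its own command list, whereas DirectX 11 funnels submission through a single immediate context.

```cpp
// Minimal sketch of D3D12 multi-threaded command recording (illustrative
// only, not Nitrous engine code). Assumes a valid device and queue; error
// handling, pipeline setup, and fence synchronization are omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmitInParallel(ID3D12Device* device,
                               ID3D12CommandQueue* queue,
                               unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i) {
        // One allocator + command list per thread; recording is the
        // CPU-heavy part when batch counts run into the tens of thousands.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([cl = lists[i].Get()] {
            // A real renderer would set state and issue draws here, e.g.
            // cl->SetPipelineState(...); cl->DrawIndexedInstanced(...);
            cl->Close(); // finish recording on this worker thread
        });
    }
    for (auto& w : workers) w.join();

    // Single submission point: the closed lists execute in order on the
    // GPU queue. A fence would normally guard reuse of the allocators.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

There is no equivalent to this under DX11, where command submission is serialized through the immediate context; that is also why driver-side CPU optimizations matter so much more for the older API.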

Ashes of the Singularity (Beta) - High Quality - DirectX 11 vs. DirectX 12

As we had never benchmarked Ashes under DirectX 11 before, we had been expecting a significant performance regression when switching to it. What we found instead was far more surprising.

On the RTG side of matters, there is a large performance gap between DX11 and DX12 at all resolutions, one that increases with the overall performance of the video card being tested. Even on the R9 290X and the 7970, using DX12 is a no-brainer, as it improves performance by 20% or more.

The big surprise however is with the NVIDIA cards. For the more powerful GTX 980 Ti and GTX 780 Ti, NVIDIA doesn’t gain anything from the DX12 rendering path; in fact they lose a few percent of performance. This means that they have very good performance under DX11 (particularly the GTX 980 Ti), but it’s not doing them any favors under DX12, where, as we’ve seen, RTG has a rather consistent performance lead. In the past NVIDIA has gone to some pretty extreme lengths to optimize the CPU usage of their DX11 driver, so this may be the payoff from general optimizations, or even a round of Ashes-specific optimizations.

Ashes of the Singularity (Beta) - High Quality 1920x1080 - DirectX 12 Perf. Gain

Breaking down the gains on a percentage basis at 1080p, the most CPU-demanding resolution, we find that the Fury X picks up a full 50% from DX12, followed by 29% and 23% for the R9 290X and 7970 respectively. Meanwhile at the opposite end of the spectrum are the GTX 980 Ti and GTX 780 Ti, which lose 1% and 3% respectively.
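For those following along at home, these gain percentages are simply the ratio of average frame rates under the two APIs. A quick sketch with hypothetical numbers (not our actual benchmark data) shows the math:

```cpp
#include <cstdio>

int main() {
    // Hypothetical frame rates for illustration only. The quoted gain is
    // (DX12 fps / DX11 fps - 1) * 100.
    const double dx11Fps = 40.0;
    const double dx12Fps = 60.0;
    const double gainPct = (dx12Fps / dx11Fps - 1.0) * 100.0;
    std::printf("DX12 perf. gain: %.0f%%\n", gainPct); // prints "50%"
    return 0;
}
```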

Finally, right in the middle of all of this is the GTX 680. Given what happens to the architecturally similar GTX 780 Ti, this may be a case of GPU memory limitations (this is the only 2GB NVIDIA card in this set), as there’s otherwise no reason to expect the weakest NVIDIA GPU to benefit the most from DX12.

Overall then this neatly illustrates why RTG in particular has been so gung-ho about DX12, as Ashes’ DX12 path has netted them a very significant increase in performance. To some degree however this is a glass half full/half empty situation; RTG gains so much from DX12 in large part because of their poorer DX11 performance (especially on the faster cards), but on the other hand a “simple” API change has unlocked a great deal of GPU power that wasn’t otherwise being used and vaulted them well into the lead. As for NVIDIA, is it that their cards don’t benefit from DX12, or is it that their DX11 driver stack is that good to begin with? At the end of the day Ashes is just a single game – and a beta game at that – but it will be interesting to see whether this is a one-off situation or the start of a recurring pattern.

Comments

  • Friendly0Fire - Wednesday, February 24, 2016 - link

    You don't have enough data to know this.

    Once the second generation of DX12 cards comes out, then you can analyze the jumps and get a better idea. Ideally you'd wait for three generations of post-DX12 GPUs to get the full picture. As it is, all we know is that AMD's DX12 driver is better than their DX11 driver... which ain't saying much.
  • The_Countess - Thursday, February 25, 2016 - link

    Except we have 3 generations of DX12 cards already on AMD's side, starting with the HD 7970, which still holds its own quite well.

    And we've had multiple DX12 and Vulkan benchmarks already, and in every one of them the 290 and 390 in particular beat the crap out of NVIDIA's direct competition. In fact they often beat or match the card above them as well.

    As for drivers: AMD's DX11 drivers are fine. They just didn't invest bucketloads of money in game-specific optimizations like NVIDIA did, but instead focused on fixing the need for those optimizations in the first place. NVIDIA's investment doesn't offer long-term benefits (a few months, then people move on to the next game), and that level of optimization in the drivers is impossible and even unwanted in low-level APIs.

    Basically NVIDIA will be losing its main competitive advantage this year.
  • hero4hire - Friday, February 26, 2016 - link

    I think what he meant was that we don't have enough test cases to conclude anything about mature DX12 performance. The odds are pointing to AMD having faster GPUs for DX12. But until multiple games are out, and preferably one or two "DX12"-noted drivers, we're speculating. I thought this was clear from the article?

    It's a stretch to call those 3 generations of DX12 cards, too. I guess if we add up draft revisions there are 50 generations of AC wireless.

    You could state that because AMD's arch is targeting DX12, it looks to give an across-the-board performance win in next-gen DX12 games. But again we only have 1 beta game as a test case. Just wait and it will be a fact or not. No need to backfill the why.
  • CiccioB - Sunday, February 28, 2016 - link

    Right, they didn't invest bucketloads in optimizing current games; they just paid a single company to make a benchmark game built around their strongest point in DX12, a super mega-threaded (useless) engine. No different than NVIDIA using super mega complex geometry (uselessly) helped by tessellation.
    Perfect marketing: maximum return for minimum investment.

    Unfortunately a single game with a bunch of async compute threads added just for the joy of it is not a complete DX12 trend: what about games that are going to support voxel global illumination, which AMD hardware cannot handle?

    We'll see where game engines end up pointing. And whether this is another flash in the pan that AMD has started up these past years, seeing as they are in big trouble.

    BTW: it is silly to say the 390 "beats the crap out of anything else" when it is using a different API. All you can see is that a beefed-up GPU like Hawaii, consuming 80+W more than the competition, finally manages to pass it, as it should have done at day one. And that happened only because of a different API with capabilities the other GPU could not benefit from.
    You can't say it is better if, with the current standard API (DX11), that beefed-up GPU can't really do better.

    If you are so excited by the fact that a GPU 33% bigger than another is able to get almost 20% more performance with a future API under best-case conditions, right as complete new architectures are about to be launched by both the red and green teams, well, you really demonstrate how biased you are. Whoever bought a 290 (then 390) card back in the old days has been eating dust (and losing watts) all these months, and a small boost at the end of these cards' life is really a shallow thing to be excited about.
  • lilmoe - Wednesday, February 24, 2016 - link

    I like what AMD has done with "future proofing" their cards and drivers for DirectX 12. But people buy graphics cards to play games TODAY. I'd rather get a graphics card with solid performance in what we have now than get one and sit around playing the waiting game.

    1) It's not like NVIDIA's DX12 performance is "awful"; you'll still get to play future games with relatively good performance.
    2) The games you play now won't be obsolete for years.
    3) I agree with what others have said; AOS is just one game. We DON'T know that NVIDIA cards won't get any performance gains from DX12 in other games/engines.
  • ppi - Wednesday, February 24, 2016 - link

    You do not buy a new gfx card to play games TODAY, but to play them TOMORROW, next month, next quarter, and then for a few years (few being ~two), until performance in new games regresses to the point where you bite the bullet and buy a new one.

    Most people do not have unlimited budget to upgrade every six months when a new card claims performance crown.
  • Friendly0Fire - Wednesday, February 24, 2016 - link

    It's unlikely that the gaming market will be flooded by DX12 games within six months. It's unlikely to happen within a few years, even. Look at how slow DX10 adoption was.
  • anubis44 - Thursday, February 25, 2016 - link

    I think you're quite wrong about this. Windows 10 adoption is spreading like wildfire compared to the Windows XP --> Vista transition. And DX10 required a paid upgrade to Vista, whereas DX12 comes with Windows 10 as a free upgrade.
  • Despoiler - Thursday, February 25, 2016 - link

    Just about every title announced for 2016 is DX12, and some are DX12-only. There are many already-released games with DX12 upgrades in the works.
  • Space Jam - Wednesday, February 24, 2016 - link

    Nvidia leading is always irrelevant. Get with the program :p

    Nvidia's GPUs lead for two years? Doesn't matter, buy AMD based on future performance!

    DX11 the only API with real titles in play? Doesn't matter, the minuscule DX12/Vulkan sample size says buy AMD!
