DirectX 12 Single-GPU Performance

We’ll start things off with a look at single-GPU performance. For this we’ve grabbed a collection of RTG and NVIDIA GPUs covering the entire DX12 generation, from GCN 1.0 and Kepler to GCN 1.2 and Maxwell. This should give us a good idea of both how the game performs across a wide span of GPU performance levels, and how (if at all) the various generational architecture changes play a role.

Meanwhile, unless otherwise noted, we’re using Ashes’ High quality setting, which turns up a number of graphical features and also utilizes 2x MSAA. It’s also worth mentioning that while Ashes does allow async shading to be turned on and off, the option is enabled by default unless disabled in the game’s INI file.
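For readers unfamiliar with the feature, async shading maps to D3D12’s multiple queue types: alongside the normal graphics (DIRECT) queue, an application creates a COMPUTE queue whose work the GPU may execute concurrently where the hardware supports it. The snippet below is a generic, minimal sketch of that setup, not Oxide’s actual code, and we won’t guess at the game’s exact INI key.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a dedicated compute queue alongside the usual graphics queue.
// Work submitted here may overlap with graphics work on GPUs that
// support concurrent execution (the basis of "async shading").
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE; // separate from DIRECT
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```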

Ashes of the Singularity (Beta) - 3840x2160 - High Quality

Starting at 4K, we have the GeForce GTX 980 Ti and Radeon R9 Fury X. On the latest beta the Fury X has a strong lead over the normally faster GTX 980 Ti, beating it by 20% and coming close to hitting 60fps.

Ashes of the Singularity (Beta) - 2560x1440 - High Quality

When we drop down to 1440p and introduce the last generation’s flagship video cards, the GeForce GTX 780 Ti and Radeon R9 290X, the story is much the same. The Fury X continues to hold a 10fps advantage over the GTX 980 Ti, giving it an 18% lead. Similarly, the R9 290X enjoys an 8fps advantage over the GTX 780 Ti, translating into a 19% performance lead. This is a significant turnabout from where we normally see these cards, as the GTX 780 Ti traditionally holds a lead over the R9 290X.
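For clarity, the percentage figures here are simply the framerate gap expressed relative to the slower card. A quick sketch of the calculation, using placeholder values rather than our exact benchmark results:

```cpp
#include <cstdio>

// Relative performance lead of the faster card over the slower one.
double LeadPercent(double fasterFps, double slowerFps)
{
    return (fasterFps - slowerFps) / slowerFps * 100.0;
}

int main()
{
    // Placeholder values: a ~10fps gap over a ~55fps baseline is ~18%.
    std::printf("%.1f%%\n", LeadPercent(65.0, 55.0)); // prints 18.2%
}
```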

Meanwhile, looking at the average framerates at the different batch count intensities, there admittedly isn’t much remarkable here. All of the cards take roughly the same performance hit as batch counts grow larger.

Ashes of the Singularity (Beta) - 1920x1080 - High Quality

Finally, at 1080p with our full lineup of cards, we can see that RTG’s lead in this latest beta is nearly absolute. The 2012 flagship battle between the Radeon HD 7970 and the GeForce GTX 680 puts the 7970 in the lead by 12%, or just shy of 4fps. Elsewhere the GTX 980 Ti closes the gap with the Fury X, but RTG’s current-generation flagship remains in the lead.

The one outlier here is the Radeon R9 285, which is the only 2GB RTG card in our collection. At this point we suspect it’s VRAM-limited, but confirming that would require further investigation.

Comments

  • BurntMyBacon - Thursday, February 25, 2016 - link

    @anubis44: "nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon."

    I do believe you are correct. Given nVidia's proficiency at throwing driver optimizations at a game, and how little room the DX12 code path leaves for doing so, I'd say this will be quite damaging. They've lost one clear advantage they held (at least in DX11).

    @anubis44: "It's beginning to look like nVidia's been check-mated by AMD here."

    I wouldn't go that far. They probably won't have the necessary hardware in Pascal, but you can be sure Volta will have what it needs. Besides, most games will likely have a DX11 code path for the foreseeable future, as developers won't want to lock themselves out of an entire market. Also, nVidia can still play DX12 fine; they just don't appear to have the advantage at the moment, given the small sample of available data points.

    In conclusion, it is more like they have lost a rook or queen. Of course, they've taken a few of ATi's pieces as well, so let's just wait and see who plays their remaining pieces better.
  • rhysiam - Thursday, February 25, 2016 - link

    The other thing I would add to this is that it's not like Nvidia have nowhere to go here. Take the GTX 970 vs the R9 390, for example... they're in a similar price & performance tier. Yet the 970 is smaller with fewer transistors (usually meaning it's cheaper to produce) and generally has much higher overclocking headroom (because Nvidia wasn't under pressure to clock the card closer to its limit to reach relevant performance). So it's reasonable to expect Nvidia could both lower the price and clock it higher to get a significantly better value card with basically no substantive engineering/architectural changes.

    I'm not suggesting Nvidia will do that with the 970 specifically. Rather, what I'm saying is that if they find Pascal is similarly behind AMD, they've got plenty of room to tweak performance and price before we can start calling them "check-mated". But it's certainly good news for us if DX12 performance like this continues and AMD essentially forces Nvidia to lower its margins.
  • CiccioB - Sunday, February 28, 2016 - link

    They can do exactly as AMD has done with GCN: they can simply start using 30-50% bigger GPUs to close the performance gap, if they really need to.
  • The_Countess - Thursday, February 25, 2016 - link

    Nvidia's entire performance advantage in DX11 is based on game-specific driver optimizations. They have a virtual army of developers slaving away on those (and coming up with ways to hurt everyone's performance, so long as it hurts AMD the most or makes their own latest-gen cards look better... but that's a different matter).

    With DX12, however, the driver becomes MUCH thinner and doesn't have nearly as much influence. So basically Nvidia's main competitive advantage is gone with DX12 and Vulkan. (A sketch of what the thinner driver model looks like follows this comment.)

    As for being relevant: this year pretty much every game where performance matters will have either a DX12 or Vulkan render option. Add in the fact that AMD cards generally age better than Nvidia's (those game-specific optimizations focus almost exclusively on their latest generation of cards), and I would say that yes, it is very relevant.
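A minimal sketch of the "thinner driver" point above: under D3D12 the application itself records and submits command lists, leaving the driver far less room to reorder or rewrite work behind the scenes than under DX11. This is generic D3D12 usage for illustration, not any particular engine's or driver's code, with error handling omitted.

```cpp
#include <d3d12.h>

// Record a frame's commands and hand them to the GPU queue explicitly.
// Under DX11 the driver performed much of this management internally;
// under DX12 the application is responsible for it.
void RecordAndSubmit(ID3D12CommandAllocator* allocator,
                     ID3D12GraphicsCommandList* cmdList,
                     ID3D12CommandQueue* queue)
{
    allocator->Reset();                   // the app, not the driver, recycles memory
    cmdList->Reset(allocator, nullptr);   // begin recording (no initial PSO)

    // ... record draw/dispatch/barrier commands here ...

    cmdList->Close();                     // finish recording
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists); // explicit submission by the app
}
```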
  • BurntMyBacon - Thursday, February 25, 2016 - link

    @The_Countess: "nvidia's entire performance advantage in DX11 is based on game specific driver optimizations. they have a virtual army of developers slaving away on those ..."

    True, they have lost a large advantage. Keep in mind, though, that nVidia's developer relations are still in play. What they once achieved through driver optimizations may still be accomplished through code path optimization and design guidance for nVidia architectures. The first beta for Vulkan (The Talos Principle) showed that merely replacing a high level API (OpenGL/DX11) with a low level one (Vulkan/DX12) does not automatically improve the experience. If nVidia can convince developers to avoid certain non-optimal features, or to program in ways that take better advantage of nVidia hardware (for the sake of performance on the majority of discrete card owners out there, of course), then ATi will be in the same position they are in now: better hardware, worse software support. Then again, low level API cross-platform titles will almost assuredly be programmed to take advantage of the console architectures, which happen to be ATi's at the moment. (A sketch of what a vendor-specific code path might look like follows this comment.)
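One hypothetical form that kind of per-vendor code path can take is sketched below: the engine queries the adapter's PCI vendor ID through DXGI and branches to a vendor-tuned path. The function and enum names are invented for illustration; the vendor IDs themselves are the real PCI IDs for NVIDIA and AMD/ATi.

```cpp
#include <dxgi1_4.h>

// Hypothetical vendor check an engine might use to pick a tuned code path.
enum class GpuVendor { NVidia, AMD, Other };

GpuVendor IdentifyVendor(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    if (desc.VendorId == 0x10DE) return GpuVendor::NVidia; // NVIDIA's PCI vendor ID
    if (desc.VendorId == 0x1002) return GpuVendor::AMD;    // AMD/ATI's PCI vendor ID
    return GpuVendor::Other;
}
```

An engine could then select shader variants or scheduling behavior per vendor, which is how developer-relations advantages can outlive driver-side optimization.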
  • nevcairiel - Wednesday, February 24, 2016 - link

    Considering the Fury X has just a tad more raw power than the (older) 980 Ti, I would say the DX12 numbers are fine, and that what's really on display is AMD's lack of performance in DX11.
  • tuxRoller - Wednesday, February 24, 2016 - link

    I don't agree with this. I think this is more a case of nvidia not being able to rely so much on the ENORMOUS number of special cases in their driver.
    IOW, this is about two things: hardware and game design. The drivers are trivial next to d3d11/ogl.
  • jasonelmore - Wednesday, February 24, 2016 - link

    The Fury X's architecture is much newer than Maxwell 2's. Let's see what the true DX12 cards can do this summer.
  • tuxRoller - Wednesday, February 24, 2016 - link

    Did you not notice the across-the-board improvements for all GCN cards?
    The point I was making, and that others have made for some time, is that AMD makes really good hardware, but this is typically masked by poor drivers.
    You can see this by looking at their excellent performance in compute workloads, where the driver code is more recent and doesn't carry the legacy cruft of their d3d/ogl code.
  • Despoiler - Thursday, February 25, 2016 - link

    It's not their drivers; it's purely architectural. GCN moved scheduling into hardware, and it requires the API to be able to feed it enough work. What people have been calling "driver overhead" is nothing of the sort: DX11 is simply not capable of fully utilizing AMD's hardware, which is why AMD created Mantle. Mantle forced MS to create DX12, and that in turn set off the creation of Vulkan. All of the next-gen APIs are tailored to exploit hardware AMD is already selling. (A sketch of what "feeding the hardware" looks like follows this comment.)
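To illustrate the "feeding it enough work" point, here is a rough sketch, not AMD's or any shipping game's code, of the multi-threaded submission model DX12 enables: several threads record command lists in parallel, and the frame is submitted as one batch, something DX11's single immediate context could not do nearly as well.

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>

// Hypothetical frame submission: each worker thread records its own
// command list (backed by its own allocator), then everything is
// submitted in a single batch to keep the GPU's schedulers fed.
void SubmitFrame(std::vector<ID3D12GraphicsCommandList*>& lists,
                 ID3D12CommandQueue* queue)
{
    std::vector<std::thread> workers;
    for (ID3D12GraphicsCommandList* list : lists)
    {
        workers.emplace_back([list] {
            // ... record this thread's share of the frame's draws ...
            list->Close(); // finish recording on this thread
        });
    }
    for (std::thread& t : workers) t.join();

    // One batched, explicit submission hands all the work to the GPU.
    std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
}
```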
