Discrete GPU Testing

We’ll kick things off with our discrete GPUs, which should present us with a best case scenario for DirectX 12 from a hardware standpoint. With the most powerful CPUs feeding the most powerful GPUs – one side generating a massive number of draw calls, the other consuming them in equally large numbers – this is where DirectX 12 will be at its best.

We’ll start with a look at CPU scaling on our discrete GPUs. How much benefit do we see going from 2 to 4 and finally 6 CPU cores?

3DMark API Overhead D3D12 CPU Scaling

The answer on the CPU side is quite a lot. Whereas Star Swarm generally topped out at 4 cores – after which it was often GPU limited – we see gains all the way up to 6 cores on our most powerful cards. This is a simple but important reminder that the API Overhead Feature Test (AOFT) is a synthetic test designed specifically to push draw calls and avoid all other bottlenecks as much as possible, leading to increased CPU scalability.

With that said, it’s clear that we’re reaching the limits of our GPUs with 6 cores. While the gains from 2 to 4 cores are rather significant, increasing from 4 to 6 (and with a slight bump in clockspeed) is much more muted, even with our most powerful cards. Meanwhile anything slower than a Radeon R9 285 is showing no real scaling from 4 to 6 cores, indicating a rough cutoff right now for how powerful a card needs to be to take advantage of more than 4 cores.
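
To put that cutoff in more concrete terms, a quick way to spot it in scaling data like ours is to compare throughput ratios as cores are added: scaling well below the ideal ratio means the GPU’s command processor, not the CPU, has become the limit. Below is a minimal sketch of that check; the throughput figures in it are illustrative placeholders, not our exact chart data.

```cpp
#include <cstdio>

int main() {
    // Illustrative placeholder numbers (millions of draw calls/sec
    // at 2, 4, and 6 cores); not our exact chart data.
    struct Card { const char* name; double at2, at4, at6; };
    const Card cards[] = {
        {"high-end card",  10.0, 16.0, 18.5},
        {"mid-range card",  6.0,  7.0,  7.1},
    };

    for (const Card& c : cards) {
        double gain24 = c.at4 / c.at2;  // ideal scaling would approach 2.0x
        double gain46 = c.at6 / c.at4;  // ideal scaling would approach 1.5x
        // Near-flat scaling from 4 to 6 cores indicates the GPU's command
        // processor, rather than the CPU, is the bottleneck.
        std::printf("%s: 2->4 cores %.2fx, 4->6 cores %.2fx (%s)\n",
                    c.name, gain24, gain46,
                    gain46 < 1.1 ? "GPU-limited" : "still CPU-limited");
    }
    return 0;
}
```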

Moving on, let’s take a look at the actual API performance scaling characteristics at 6, 4, and 2 cores.

3DMark API Overhead GPU Scaling - 6 Cores

6 cores of course is a best case scenario for DirectX 12 – it’s the least likely to be CPU-bound – and we see first-hand the incredible increase in draw call throughput by switching from DirectX 11 to DirectX 12 or Mantle.

Somewhat unexpectedly, the greatest gains and the highest absolute performance are achieved by AMD’s Radeon R9 290X. As we saw in Star Swarm and continue to see here, AMD’s DirectX 11 throughput is relatively poor, topping out at 1.1 million draw calls per second for both DX11ST and DX11MT. AMD simply isn’t able to push many more calls than that through their drivers, and without real support for DX11 multi-threading (e.g. DX11 Driver Command Lists), they gain nothing from the DX11MT test.
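
For context, DX11 multi-threading works by having worker threads record commands into deferred contexts, which are turned into command lists and played back on the immediate context. A minimal sketch of that pattern follows; it assumes an already-created device and immediate context, and omits all actual draw state for brevity. Without genuine driver command list support, the driver serializes this work internally, which is why AMD sees no gain in the DX11MT test.

```cpp
#include <d3d11.h>
#include <thread>
#include <vector>

void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    const int kWorkers = 4;
    std::vector<ID3D11CommandList*> lists(kWorkers, nullptr);
    std::vector<std::thread> threads;

    // Each worker thread records draw calls into its own deferred context.
    for (int i = 0; i < kWorkers; ++i) {
        threads.emplace_back([&, i] {
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);
            // ... record state changes and Draw*() calls here ...
            deferred->FinishCommandList(FALSE, &lists[i]);
            deferred->Release();
        });
    }
    for (auto& t : threads) t.join();

    // Playback is still serialized on the immediate context; if the driver
    // lacks real command list support, the recording step above is also
    // effectively serialized inside the driver.
    for (auto* list : lists) {
        immediate->ExecuteCommandList(list, FALSE);
        list->Release();
    }
}
```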

But on the opposite side of the coin, this means they have the most to gain from DirectX 12. The R9 290X sees a 16.8x increase in draw call throughput switching from DX11 to DX12. At 18.5 million draw calls per second this is the highest draw call rate out of any of our cards, and we have good reason to suspect that we’re GPU command processor limited at this point. Which is to say that our CPU could push yet more draw calls if only a GPU existed that could consume that many calls. On a side note, 18.5M calls breaks down to just over 300K calls per frame at 60fps, an insane number by today’s standards, where draw calls per frame in most games rarely exceed 10K.
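
For reference, here’s the back-of-the-envelope math behind those figures, using the numbers from our charts; the 60fps frame rate target is the only assumption.

```cpp
#include <cstdio>

int main() {
    // R9 290X throughput from our charts.
    const double dx11Rate = 1.1e6;   // draw calls/sec under DX11ST/MT
    const double dx12Rate = 18.5e6;  // draw calls/sec under DX12

    std::printf("DX11 -> DX12 speedup: %.1fx\n",
                dx12Rate / dx11Rate);            // ~16.8x
    std::printf("Per-frame budget at 60fps: %.0f draw calls\n",
                dx12Rate / 60.0);                // ~308,333
    return 0;
}
```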

Meanwhile we see a reduction in gains going from the 290X to the 285 and finally to the 7970. As we mentioned earlier we appear to be command processor limited, and each one of these progressively weaker GPUs appears to contain a similarly weaker command processor. Still, even the “lowly” 7970 can push 11.6M draw calls per second, which is a 10.5x (order of magnitude) increase in draw call performance over DirectX 11.

Mantle, as AMD’s in-house API (and the forerunner to Vulkan), presents an interesting aside: the AMD cards do even better on it than they do on DirectX 12. At this point the difference is somewhat academic – what are you going to do with 20.3M draw calls that you couldn’t do with 18.5M – but it goes to show that Mantle can still squeeze out a bit more at times. It will be interesting to see whether this holds as Windows 10 and the drivers are finalized, and in the longer term whether these benefits carry over to Vulkan.

As for the NVIDIA cards, NVIDIA sees neither quite the same relative performance gains from DirectX 12 nor enough absolute performance to top the charts, but here too we see the benefits of DirectX 12 in full force. At 1.9M draw calls per second in DX11ST and 2.2M draw calls per second in DX11MT, NVIDIA starts out in a much better position than AMD does; in the latter case NVIDIA essentially doubles AMD’s DX11MT throughput (or alternatively, has half the API overhead).

Once DX12 comes into play though, NVIDIA’s throughput rockets through the roof as well. The GTX 980 sees an 8.2x increase over DX11ST, and a 7x increase over DX11MT. On an absolute basis the GTX 980 is consuming 15.5M draw calls per second (or about 250K per frame at 60fps), showing that even the best DX11 implementation can’t hold a candle to this early DirectX 12 implementation. The benefits of DirectX 12 really are that great for draw call performance.

Like AMD, NVIDIA seems to be command processor limited here. GPU-Z reports 100% GPU usage in the DX12 test, indicating that by NVIDIA’s internal metrics the card is working as hard as it can. Meanwhile, though not charted, I also tested a GTX Titan X here, which achieved virtually the exact same results as the GTX 980. Absent more evidence to support being CPU bound, I have to assume that the GM200 GPU uses a command processor similar to that of the GM204-based GTX 980, leading to a similar bottleneck. Which would make some sense, as GM200 is by all practical measurements a supersized version of GM204.

Moving down the NVIDIA lineup, we see performance decrease as we work towards the GTX 680 and GTX 750 Ti. The latter is a newer product, based on the GM107 GPU, but it is ultimately a smaller and lower-performing GPU than the GTX 680. Regardless, we are hitting the lower command processor throughput limits of these cards, and seeing the maximum DX12 throughput decrease accordingly. This means that the relative gains are smaller – DX11 performance is virtually the same as on the GTX 980, since the CPU is the limit there – but even the GTX 750 Ti sees a 3.8x increase in throughput over DX11ST.

Finally, it’s here that we’re seeing a distinct case of the DX11 test producing variable results. For the NVIDIA cards we have seen our results fluctuate between 1.4M and 1.9M draw calls per second. Across all of our runs 1.9M is more common – not to mention it’s close to the score we get on NVIDIA’s public WDDM 1.3 drivers – so it’s what we’re publishing here. However, for whatever reason, 1.4M becomes more common with fewer cores, even though the bottleneck was (and remains) single-core performance.

3DMark API Overhead GPU Scaling - 4 Cores

As for performance scaling with 4 cores, it’s very similar to what we saw with 6 cores. As we noted in our CPU-centric look at our data, only the fastest cards benefit from 6 cores, so the performance we see with 4 cores is quite similar to what we saw before. AMD of course still sees the greatest gains, while overall the gap between AMD and NVIDIA is compressed some.

Interestingly, Mantle’s performance advantage melts away here. DirectX 12 is now the fastest API for all AMD cards, indicating that DX12 holds up better than Mantle at 4 cores, even though Mantle pulled ahead at 6.

3DMark API Overhead GPU Scaling - 2 Cores

Finally, with 2 cores many of our configurations are CPU limited. The baseline changes a bit – DX11MT ceases to be effective since 1 core must be reserved for the display driver – and the fastest cards have lost quite a bit of performance here. Nonetheless, the AMD cards can still hit 10M+ draw calls per second with just 2 cores, and the GTX 980/680 are close behind at 9.4M draw calls per second. That is still a minimum 6.7x increase in draw call throughput versus DirectX 11, showing that even on relatively low performance CPUs the draw call gains from DirectX 12 are substantial.

Overall then, with 6 CPU cores in play AMD appears to have an edge in command processor performance, allowing them to sustain a higher draw call throughput than NVIDIA. That said, as we know the real-world performance of the GTX 980 easily surpasses the R9 290X, which is why it’s important to remember that this is a synthetic benchmark. Meanwhile at 2 cores, where we become distinctly CPU limited, AMD appears to still have an edge in DirectX 12 throughput, an interesting role reversal given their poorer DirectX 11 performance.
