Closing Thoughts

Wrapping things up, Futuremark’s latest benchmark certainly gives us a fresh look at DirectX 12, and of course another data point on the performance of the forthcoming API.

Since being announced last year – and really, since Mantle was announced in 2013 – the initial focus of low-level APIs has been on draw call throughput, and for good reason. The current high-level API paradigm carries significant CPU overhead and at the same time fails to scale well across multiple CPU cores, making for a sort of worst-case scenario for pushing draw calls. Console developers, meanwhile, have long enjoyed lower-level access and the accompanying improvement in draw call throughput, an advantage the PC has gone without, and one that stings all the more in the age of so many multiplatform titles.

DirectX 12, then, will be a radical overhaul of how GPU programming works, but at its most basic level it’s a fix for the draw call problem. And as we’ve seen in Star Swarm and now the 3DMark API Overhead Feature Test, the results are nothing short of dramatic. With the low-level API offering a 10x-20x increase in draw call throughput, any draw call problems the PC was facing with high-level APIs are thoroughly put to rest by the new API. With the ability to push upwards of 20 million draw calls per second, PC developers should finally be able to break away from tricks to minimize draw calls in the name of performance and focus on other aspects of game design.
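To put those throughput figures in per-frame terms, a quick back-of-the-envelope calculation is illustrative. The sketch below assumes the roughly 20 million draw calls per second cited for the low-level API and the roughly 2.2 million per second DX11 figure mentioned in the comments; both are taken as round numbers for illustration, not measurements.

```python
# Rough per-frame draw call budgets at 60 fps, using the throughput
# figures discussed in the article (round numbers, illustrative only).
DX11_CALLS_PER_SEC = 2_200_000   # approximate high-level API ceiling
DX12_CALLS_PER_SEC = 20_000_000  # approximate low-level API throughput
FPS = 60

def draws_per_frame(calls_per_sec: int, fps: int = FPS) -> int:
    """Draw calls available per frame if the CPU did nothing but submit."""
    return calls_per_sec // fps

dx11_budget = draws_per_frame(DX11_CALLS_PER_SEC)
dx12_budget = draws_per_frame(DX12_CALLS_PER_SEC)
print(f"DX11: ~{dx11_budget:,} draws/frame, DX12: ~{dx12_budget:,} draws/frame")
```

At 60 fps this works out to a jump from tens of thousands of draws per frame to hundreds of thousands, which is why batching tricks become far less necessary.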


[Image: GDC 2014 - DirectX 12 Unveiled: 3DMark 2011 CPU Time, Direct3D 11 vs. Direct3D 12]

Of course, at the same time we need to be clear that 3DMark’s API Overhead Feature Test is a synthetic test – and deliberately so – which means the performance we’re looking at today is just one small slice of the overall performance picture. Real-world game performance gains will undoubtedly be much smaller, especially if games aren’t using a large number of draw calls in the first place. But the important part is that it sets the stage for future games to use far more draw calls and/or spend less time trying to minimize them. And we can’t ignore the multi-threading benefits of DirectX 12 either: while multi-threaded games are nothing new at this point, the inability to scale draw call submission with additional cores has always been an issue, and one that DirectX 12 will help to solve.
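The multi-threading point can be sketched with a toy model: under the DX11-style model, submission is effectively serialized on one thread, so extra cores add little, while DX12-style command list recording lets each core contribute. The per-core rate below is a hypothetical number chosen purely for illustration.

```python
# Toy model of draw call submission scaling across CPU cores.
# DX11-style: submission bottlenecked on a single thread.
# DX12-style: each core records its own command list in parallel.
PER_CORE_CALLS_PER_SEC = 2_000_000  # hypothetical single-core rate

def dx11_style_throughput(cores: int) -> int:
    # Extra cores don't help: one thread does all the submission work.
    return PER_CORE_CALLS_PER_SEC

def dx12_style_throughput(cores: int) -> int:
    # Idealized linear scaling: every core records commands concurrently.
    return PER_CORE_CALLS_PER_SEC * cores

for cores in (1, 2, 4, 6):
    print(cores, dx11_style_throughput(cores), dx12_style_throughput(cores))
```

Real scaling is of course sub-linear, but the basic shape matches what the feature test shows: DX11 throughput is flat as cores are added, while DX12 throughput climbs.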

Ultimately we’re looking at just one test, and a synthetic test at that, but as gamers, if we want to better understand why game developers such as Johan Andersson have been pushing so hard for low-level APIs, the results of this benchmark are exactly why. From discrete to integrated, top to bottom, every performance tier of PC stands to gain from DirectX 12, and for virtually all of them the draw call gains are going to be immense. DirectX 12 won’t change the world, but it will change the face of game programming for the better, and it will be very interesting to see just what developers can do with the API starting later this year.

113 Comments

  • Mannymal - Sunday, March 29, 2015 - link

    The article fails to address, for the layman, how exactly this will impact gameplay. Will games simply look better? Will AI get better? Will maps be larger and more complex? All of the above? And how much?
  • Ryan Smith - Sunday, March 29, 2015 - link

    It's up to the developers. Ultimately DX12 frees up resources and removes bottlenecks; it's up to the developers to decide how they want to spend that performance. They could do relatively low draw calls and get some more CPU performance for AI, or they could try to do more expansive environments, etc.
  • jabber - Monday, March 30, 2015 - link

    Yeah, seems to me that DX12 isn't so much about adding new eye-candy; it's about a long-overdue total back-end refresh to get rid of the old DX crap and bring it more up to speed with modern hardware.
  • AleXopf - Sunday, March 29, 2015 - link

    I would love to see what effect DirectX 12 has on the CPU side. All the articles so far have been about CPU scaling with different GPUs. Would be nice to see how AMD compares to Intel with a better use of their higher core count.
  • Netmsm - Monday, March 30, 2015 - link

    AMD is the tech's hero ^_^. Always been.
  • JonnyDough - Tuesday, March 31, 2015 - link

    Great! Now all we need are driver hacks to make our overpriced non-DX12 video cards worth their money!
  • loguerto - Friday, April 3, 2015 - link

    AMD masterpiece. Does this superiority have something to do with AMD's Asynchronous Shaders? I know that nvidia's Kepler and Maxwell asynchronous pipeline engine is not as powerful as the one in the GCN architecture.
  • Clorex - Wednesday, April 22, 2015 - link

    On page 4:
    "Intel does end up seeing the smallest gains here, but again even in this sort of worst case scenario of a powerful CPU paired with a weak CPU, DX12 still improved draw call performance by over 3.2x."

    Should be "powerful CPU paired with a weak GPU".
  • akamateau - Thursday, April 30, 2015 - link

    FINALLY THE TRUTH IS REVEALED!!!

    AMD A6-7400 K CRUSHES INTEL i7 IGP by better than 100%!!!

    But Anand is also guilty of a WHOPPER of a LIE!

    Anand uses an Intel i7-4960X. NOBODY uses RADEON with an Intel i7 CPU. But rather than use either an AMD FX CPU or an AMD A10 CPU, they decided to degrade AMD's scores substantially by using an Intel product which is not optimized to work with Radeon. Intel i7 also is not GCN or HSA compatible, nor can it take advantage of Asynchronous Shader Pipelines either. Only an IDIOT would feed a Radeon GPU with an Intel CPU.

    In short Anand's journalistic integrity is called into question here.

    Basically RADEON WOULD HAVE DESTROYED ALL nVIDIA AND INTEL COMBINATIONS if Anand benchmarked Radeon dGPU with AMD silicon. By Itself A6 is staggeringly superior to Intel i3, i5, AND i7.

    Ryan Smith & Ian Cutress have lied.

    As it stands A10-7700k produces 4.4 MILLION drawcalls per second. At 6 cores the GTX 980 in DX11 only produces 2.2 MILLION draw calls.

    DX12 enables a $150 AMD APU to CRUSH a $1500.00 Intel/nVidia gaming setup that runs DX11.

    Here is the second lie.

    AMD Asynchronous Shader Pipelines allow for 100% multithreaded processing in the CPU feeding the GPU, whether it is an integrated APU or an 8-core FX feeding a GPU. What Anand should also show is 8-core scaling using an AMD FX processor.

    Anand will say that they are too poor to use an AMD CPU or APU set up. Somehow I think that they are being disingenuous.

    NO INTEL/nVidia combination can compete with AMD using DX12.
