About a year and a half ago AMD kicked off the public half of a race to improve the state of graphics APIs. Dubbed "Mantle", AMD’s in-house API for their Radeon cards stripped away the abstraction and inefficiencies of traditional high-level APIs like DirectX 11 and OpenGL 4, and instead gave developers a means to access the GPU in a low-level, game console-like manner. The impetus: with a low-level API, engine developers could achieve better performance than with a high-level API, sometimes vastly exceeding what DirectX and OpenGL could offer.

While AMD was the first such company to publicly announce their low-level API, they were not the last. 2014 saw the announcement of APIs such as DirectX 12, OpenGL Next, and Apple’s Metal, all of which would implement similar ideas for similar performance reasons. It was a renaissance in the graphics API space after many years of slow progress, and one desperately needed to keep pace with the evolution of both GPUs and CPUs.

In the PC graphics space we’ve already seen how early versions of Mantle perform, offering some substantial boosts in performance, especially in CPU-bound scenarios. As awesome as Mantle is though, it is currently a de facto proprietary AMD API, which means it can only be used with AMD GPUs; what about NVIDIA and Intel GPUs? For that we turn to DirectX, Microsoft’s traditional cross-vendor API, which will be making the same jump as Mantle but as a common interface for the benefit of every vendor in the Windows ecosystem.

DirectX 12 was first announced at GDC 2014, where Microsoft unveiled the existence of the new API along with their planned goals, a brief demonstration of very early code, and limited technical details about how the API would work. Since then Microsoft has been hard at work on DirectX 12 as part of the larger Windows 10 development effort, culminating in the release of the latest Windows 10 Technical Preview, Build 9926, which is shipping with an early preview version of DirectX 12.


GDC 2014 - DirectX 12 Unveiled: 3DMark 2011 CPU Time: Direct3D 11 vs. Direct3D 12

With the various pieces of Microsoft’s latest API finally coming together, today we will be taking our first look at the performance future of DirectX. The API is stabilizing, video card drivers are improving, and the first DirectX 12 application has been written; Microsoft and their partners are finally ready to show off DirectX 12. To that end, we’ll be looking at DirectX 12 through Oxide Games’ Star Swarm benchmark, our first DirectX 12 application and a true API efficiency torture test.

Does DirectX 12 bring the same kind of performance benefits we saw with Mantle? Can it resolve the CPU bottlenecking that DirectX 11 struggles with? How well does the concept of a low-level API work for a common API with disparate hardware? Let’s find out!

Comments (245)

  • junky77 - Friday, February 6, 2015 - link

    Looking at the CPU scaling graphs and CPU/GPU usage, it doesn't look like the situation in other games where the CPU can be maxed out. It does seem like this engine and test might be tailored specifically to this DX12 and Mantle use case.

    The interesting thing is to understand whether the DX11 performance shown here is optimal. The CPU usage is way below max, even for the one core supposedly taking all the load. Something is bottlenecking the performance and it's not the number of cores, threads or clocks.
  • eRacer1 - Friday, February 6, 2015 - link

    So the GTX 980 is using less power than the 290X while performing ~50% better, and somehow NVIDIA is the one with the problem here? The data is clear. The GTX 980 has a massive DX12 (and DX11) performance lead and performance/watt lead over 290X.
  • The_Countess666 - Thursday, February 19, 2015 - link

    it also costs twice as much.

    and this is the first time in roughly 4 generations that nvidia's managed to release a new generation first. it would be shocking if there weren't a huge performance difference between AMD and nvidia at the moment.
  • bebimbap - Friday, February 6, 2015 - link

    TDP and power consumption are not the same thing, but are related
    if i had to write a simple equation it would be something to the effect of

    TDP(wasted heat) = (Power Consumption) X (process node coeff) X (temperature of silicon coeff) X (Architecture coeff)

    so basically TDP or "wasted heat" is related to power consumption but not the same thing
    Since they are on the same process node by the same foundry, the difference in TDP vs power consumed would be because Nvidia currently has the more efficient architecture, and that also leads to their chips being cooler, both of which lead to less "wasted heat"

    A perfect conductor would have 0 TDP and infinite power consumption.
  • Mr Perfect - Saturday, February 7, 2015 - link

    Erm, I don't think you've got the right term there with TDP. TDP is not defined as "wasted heat", but as the typical power draw of the board. So if TDP for the GTX 980 is 165 watts, that just means that in normal gaming use it's drawing 165 watts.

    Besides, if a card is drawing 165 watts, it's all going to become heat somewhere along the line. I'm not sure you can really decide how many of those watts are "wasted" and how many are actually doing "work".
  • Wwhat - Saturday, February 7, 2015 - link

    No, he's right. TDP means thermal design power and defines the cooling a system needs to run at full power.
  • Strunf - Saturday, February 7, 2015 - link

    It's the same... if a GC draws 165W it needs a 165W cooler... do you see anything moving on your card except the fans? No, so all power will be transformed into heat.
  • wetwareinterface - Saturday, February 7, 2015 - link

    no it's not the same. 165w tdp means the cooler has to dump 165w worth of heat.
    165w power draw means the card needs to have 165w of power available to it.

    if the card draws 300w of power and has 200w of heat output that means the card is dumping 200w of that 300w into the cooler.
  • Strunf - Sunday, February 8, 2015 - link

    It's impossible for the card to draw 300W and only output 200W of heat... unless of course GCs now defy the laws of physics.
  • grogi - Sunday, April 5, 2015 - link

    What is it doing with the remaining 100W?
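
For reference, the energy-balance argument in the exchange above can be written out with the thread's own numbers; this is a simplified steady-state sketch that treats everything other than heat as negligible. A card's only non-heat outputs (indicator LEDs, the kinetic energy of the air its fans move, signalling over the display and PCIe connectors) add up to a few watts at most:

    P_drawn = P_heat + P_other
    P_other ≈ a few watts (LEDs, fan airflow, signalling)
    P_heat  ≈ P_drawn    (e.g. ~165 W drawn -> ~165 W of heat for the cooler to remove)

So a hypothetical card drawing 300 W really would hand roughly 300 W of heat to its cooler and case airflow; there is no mechanism for 100 W of it to quietly disappear.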
