About a year and a half ago AMD kicked off the public half of a race to improve the state of graphics APIs. Dubbed "Mantle", AMD’s in-house API for their Radeon cards stripped away the abstraction and inefficiencies of traditional high-level APIs like DirectX 11 and OpenGL 4, and instead gave developers a means to access the GPU in a low-level, game console-like manner. The impetus: with a low-level API, engine developers could achieve better performance than with a high-level API, sometimes vastly exceeding what DirectX and OpenGL could offer.
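
To make the low-level concept concrete, below is a minimal, hypothetical C++ sketch of the explicit command submission model that Mantle pioneered and that Direct3D 12 (covered in this article) adopts. The device, queue, allocator, and pipeline state are assumed to already exist, and a real program would also need a root signature, render targets, viewports, and fence-based synchronization; this is an illustration of the model, not working rendering code.

```cpp
// Hypothetical sketch: the explicit submission model shared by low-level
// APIs, expressed in Direct3D 12 terms. Assumes 'device', 'queue',
// 'allocator', and 'pso' were created elsewhere; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12CommandAllocator* allocator,
                     ID3D12PipelineState* pso)
{
    // The application, not the driver, owns the command buffer memory and
    // decides on which thread work gets recorded.
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocator,
                              pso, IID_PPV_ARGS(&list));

    // Draw calls are recorded into the list with minimal driver-side
    // validation and state tracking. Each list is recorded by one thread,
    // but many lists can be recorded in parallel, which is where the CPU
    // savings over an immediate-context API like Direct3D 11 come from.
    list->DrawInstanced(3, 1, 0, 0);  // e.g. a single triangle
    list->Close();

    // Submission to the GPU is a single, cheap queue operation.
    ID3D12CommandList* lists[] = { list.Get() };
    queue->ExecuteCommandLists(1, lists);
}
```

Because recording is cheap and can be spread across every available core, an engine can issue tens of thousands of draw calls per frame without choking on the driver, which is exactly the scenario the Star Swarm benchmark discussed later stresses.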

While AMD was the first such company to publicly announce their low-level API, they were not the last. 2014 saw the announcement of APIs such as DirectX 12, OpenGL Next, and Apple’s Metal, all of which would implement similar ideas for similar performance reasons. It was a renaissance in the graphics API space after many years of slow progress, and one desperately needed to keep pace with the evolution of both GPUs and CPUs.

In the PC graphics space we’ve already seen how early versions of Mantle perform, with Mantle offering some substantial boosts in performance, especially in CPU-bound scenarios. As awesome as Mantle is though, it is currently a de facto proprietary AMD API, which means it can only be used with AMD GPUs; what about NVIDIA and Intel GPUs? For that we turn towards DirectX, Microsoft’s traditional cross-vendor API, which will be making the same jump as Mantle, but using a common API for the benefit of every vendor in the Windows ecosystem.

DirectX 12 was first announced at GDC 2014, where Microsoft unveiled the existence of the new API along with their planned goals, a brief demonstration of very early code, and limited technical details about how the API would work. Since then Microsoft has been hard at work on DirectX 12 as part of the larger Windows 10 development effort, culminating in the release of the latest Windows 10 Technical Preview, Build 9926, which is shipping with an early preview version of DirectX 12.


[Image: GDC 2014 - DirectX 12 Unveiled: 3DMark 2011 CPU Time, Direct3D 11 vs. Direct3D 12]

With the various pieces of Microsoft’s latest API finally coming together, today we will be taking our first look at the performance future of DirectX. The API is stabilizing, video card drivers are improving, and the first DirectX 12 application has been written; Microsoft and their partners are finally ready to show off DirectX 12. To that end, we’ll be looking at DirectX 12 through Oxide Games’ Star Swarm benchmark, our first DirectX 12 application and a true API efficiency torture test.

Does DirectX 12 bring the same kind of performance benefits we saw with Mantle? Can it resolve the CPU bottlenecking that DirectX 11 struggles with? How well does the concept of a low-level API work for a common API with disparate hardware? Let’s find out!

Comments

  • Ryan Smith - Friday, February 6, 2015

    Gen 7.5 graphics and up will support DX12. So that's Haswell/4th Gen Core and newer.
  • Murloc - Saturday, February 7, 2015

    As you said, it's unlikely to bring much advantage. Just the usual new features from one version to the next.
  • inighthawki - Sunday, February 8, 2015

    Remember that lower CPU requirements means that there is more power available for the integrated GPU. There was an article that sort of described the behavior a while back:
    http://www.extremetech.com/gaming/187970-directx-1...
  • tipoo - Friday, February 6, 2015

    I'm confused by this:

    "What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, is not CPU bound even with 2 cores."

    I'm not sure how the first part of that proves the second. Wouldn't greater CPU efficiency be more likely to show up as being GPU bound, not CPU bound?
  • tipoo - Friday, February 6, 2015

    Yeah, having read the next few pages, I think that should say "is CPU bound" rather than "not CPU bound", as the rest of the cards can be fed with just a 2-core CPU, while the 980 has headroom for more performance.
  • OrphanageExplosion - Friday, February 6, 2015

    There are some pretty big differences in the CPU utilisation of the DX11 NVIDIA and AMD drivers. Maybe reviewing all GPUs with a high-end i7 isn't such a good idea, particularly for the lower-end and mainstream cards, which aren't likely to be paired with top-end processors?
  • yannigr2 - Friday, February 6, 2015

    That's a very interesting article and a big victory for the Maxwell architecture. I hope AMD's 300 series will be more mature under DX12 and Mantle and perform much better than the 200 series.

    It will be extremely interesting to see an AMD FX in this test. Maybe the ugly duckling could transform into a swan?
  • 200380051 - Friday, February 6, 2015

    Your comment sums it up well. FX test, great idea.
  • zmeul - Friday, February 6, 2015

    Quick question: why hasn't VRAM usage been taken into account?
  • Ryan Smith - Saturday, February 7, 2015

    The short answer is that all of these cards have enough VRAM that it's not a real issue.
