First Thoughts

Bringing our preview of DirectX 12 to a close, what we’re seeing today is both a promising sign of what has been accomplished so far and a reminder of what is left to do. As it stands, much of DirectX 12’s story remains to be told – features, feature levels, developer support, and more will not be fully unveiled by Microsoft until GDC 2015 next month. So today’s preview is much more of a beginning than an end when it comes to sizing up the future of DirectX.

But for the time being we’re at a point where we can say the pieces are coming together, and we can finally see parts of the bigger picture. Drivers, APIs, and applications are starting to arrive, giving us our first look at DirectX 12’s performance. And we have to say we like what we’ve seen so far.

With DirectX 12, Microsoft and its partners set out to create a cross-vendor but still low-level API, and while there was admittedly little doubt they could pull it off, there has always been the question of how well they could do it. What kind of improvements and performance could you truly wring out of a new API when it has to work across different products and can never entirely avoid abstraction? The answer, as it turns out, is that you can still enjoy all of the major benefits of a low-level API, not the least of which are the incredible improvements in CPU efficiency and multi-threading.
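
To make that abstraction level concrete, here is a minimal sketch of our own (not code from the article): under Direct3D 12 the application itself allocates command memory, records commands into a command list, and hands the finished batch to a queue – work a DX11 driver would have done behind the curtain. The `device` and `queue` objects are assumed to have been created elsewhere, and pipeline state setup is omitted.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch of D3D12's explicit submission model. Assumes `device` and
// `queue` were created elsewhere; pipeline state and render target setup are
// omitted, so the draw below is a placeholder rather than a working frame.
void SubmitOneBatch(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    // The application, not the driver, owns command memory via an allocator.
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));

    // Commands are recorded up front into a command list...
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));
    cmdList->DrawInstanced(3, 1, 0, 0);  // placeholder draw call
    cmdList->Close();

    // ...and explicitly submitted to the GPU as a batch.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);
}
```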

That said, any time we’re looking at an early preview it’s important to keep our expectations in check, and that is especially the case with DirectX 12. Star Swarm is a best-case scenario, and was designed to be one; it isn’t so much a measure of real-world performance as it is of technological potential.

But with that in mind, it’s clear that DirectX 12 has a lot of potential in the right hands and under the right circumstances. It isn’t going to be easy to master, and I suspect the transition won’t be a quick one, but I am very interested in seeing what developers can do with this API. With the reduced overhead, the better threading, and ultimately a vastly more efficient means of submitting draw calls, there’s a lot of potential waiting to be exploited.
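
Extending the earlier sketch to the threading point (again our illustration, with hypothetical names; GPU synchronization and pipeline setup are still omitted): each worker thread records its slice of the scene into its own command list in parallel, and everything is then submitted in one call from a single thread, which is where the draw call efficiency gains come from.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Hypothetical sketch: numThreads workers record command lists in parallel,
// then one thread submits them all at once. Each worker has its own
// allocator and list, so no locking is needed during recording.
void RecordAndSubmitParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                             unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> cmdLists(numThreads);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&cmdLists[i]));
        workers.emplace_back([&, i] {
            // Record this thread's share of the frame's draw calls.
            cmdLists[i]->DrawInstanced(3, 1, 0, 0);  // placeholder draws
            cmdLists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // A single, cheap submission covers all of the recorded work.
    std::vector<ID3D12CommandList*> raw;
    for (auto& cl : cmdLists) raw.push_back(cl.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```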


245 Comments


  • loguerto - Saturday, February 7, 2015 - link

    Microsoft is on the right track, but still, Mantle is the boss!
  • FXi - Saturday, February 7, 2015 - link

    I'm sadly more curious as to whether the 6-core chips prove their worth. A lot of the speculation out there seems to think that DX12 might finally show that a 6-core chip matters, but nothing here shows that. That's a very key issue when it comes to whether to go for a higher-end chip or stick with a 4-core CPU.
  • GMAR - Saturday, February 7, 2015 - link

    Excellent article. Thank you!
  • Shahnewaz - Saturday, February 7, 2015 - link

    Wait a minute, isn't the GTX 980 a 165W TDP card? Then how is it pulling over 200 watts?
  • eRacer1 - Sunday, February 8, 2015 - link

    The GTX 980 isn't pulling over 200W. The numbers shown are system power consumption, not video card power consumption. The GTX 980 system's power consumption isn't unusually high.

    Also, the system power consumption numbers are understating the power difference between the GTX 980 and Radeon 290X cards themselves under DX12. The GTX 980 has such a large performance advantage over the 290X in DX12 that the CPU is also using more power in the GTX 980 system to keep up with the video card.

    If anything the 290X power consumption is "too low", especially under DX12. To me it looks like the GPU is being underutilized, which seems to be the case based on the low FPS results and power consumption numbers. That could be due to many reasons: poor driver optimization, 290X architectural limitations, benchmark bug or design choice, Windows 10 issue, 290X throttling problem, etc. Hopefully, for AMD's sake, those issues can be worked out before the Windows 10 launch.
  • Shahnewaz - Sunday, February 8, 2015 - link

    That doesn't explain the <20W difference in both systems.
    And it's not like the CPU usage is also radically different.
    Remember, the TDP difference between the GPUs is a massive 125W (290W vs 165W).
  • eRacer1 - Sunday, February 8, 2015 - link

    "That doesn't explain the <20W difference in both systems. And it's not like the CPU usage is also radically different."

    Looking at the CPU usage graphs in the review, the GTX 980 DX12 CPU average across all four cores is about 80% while the 290X average is only about 50%, so the GTX 980's CPU is doing roughly 60% more work. That alone could easily account for 20+ watts of extra power consumption on the CPU in the GTX 980 system. The ~60% higher CPU usage in the GTX 980 system makes sense, as the frame rate is 56% higher as well. So what looks like a 14W difference is probably more like a 35W difference between the GTX 980 and 290X video cards.

    But the 35W difference doesn't tell the whole story because the GTX 980 is also 56% faster while using less power. So the GTX 980 has a MASSIVE efficiency advantage under these benchmark conditions. And it is doing it within a reasonable TDP because by the time you back out all of the non-GPU power consumption (CPU, memory, motherboard, hard drive, fans, etc.) and PSU inefficiency losses from the 271W system power consumption you'd likely find that the GTX 980 is under 200W.

    So the question we're left with is: why does a system with a 290W-TDP 290X consume only 285W under DX12? By the time you subtract the CPU power consumption (which is somewhat less than in the GTX 980 test, due to only being at 50% load instead of 80%), the motherboard, memory, and other components, the 290X is probably using only 200-220W. To me it looks like the 290X is being bottlenecked and as a result isn't using as much power as one would expect. What the source of the bottleneck is, and whether it is correctable, remains a mystery. (A back-of-envelope version of this arithmetic appears after the comment thread below.)
  • Shahnewaz - Saturday, February 7, 2015 - link

    It looks like AMD GPUs will get some 400%+ performance improvements! Sick!
  • ET - Sunday, February 8, 2015 - link

    My main takeaway from the article is that NVIDIA has done a much better job of optimising its DX11 drivers. AMD needs low level badly.
  • bloodypulp - Sunday, February 8, 2015 - link

    They already have it: Mantle.
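
For what it's worth, eRacer1's estimate above can be written out as simple arithmetic. This is a hedged sketch: the 271W and 285W system figures come from the review, but the ~20W CPU power delta is the commenter's assumption rather than a measurement, so the result is only as good as that input.

```cpp
#include <cstdio>

// Back-of-envelope version of the power estimate discussed in the comments.
// Inputs: measured system (wall) power from the review, plus an assumed
// extra CPU draw for the GTX 980 system running at ~80% vs ~50% load.
int main()
{
    const double sysGtx980 = 271.0;  // W, GTX 980 system under DX12 (review)
    const double sysR290X  = 285.0;  // W, 290X system under DX12 (review)
    const double cpuDelta  = 20.0;   // W, assumed extra CPU draw, GTX 980 side

    const double wallGap = sysR290X - sysGtx980;   // ~14 W at the wall
    const double cardGap = wallGap + cpuDelta;     // ~34-35 W card-to-card

    std::printf("wall gap: %.0f W, estimated card-to-card gap: %.0f W\n",
                wallGap, cardGap);
    return 0;
}
```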
