First Thoughts

Bringing our preview of DirectX 12 to a close, what we’re seeing today is both a promising sign of what has been accomplished so far and a reminder of what is left to do. As it stands, much of DirectX 12’s story remains to be told – features, feature levels, developer support, and more will only be fully unveiled by Microsoft next month at GDC 2015. So today’s preview is much more of a beginning than an end when it comes to sizing up the future of DirectX.

But for the time being we’re at a point where we can say the pieces are coming together, and we can finally see parts of the bigger picture. Drivers, APIs, and applications are starting to arrive, giving us our first look at DirectX 12’s performance. And we have to say we like what we’ve seen so far.

With DirectX 12, Microsoft and its partners set out to create a cross-vendor but still low-level API, and while there was admittedly little doubt they could pull it off, there has always been the question of how well they could do it. What kind of improvements and performance could you truly wring out of a new API when it has to work across different products and can never entirely avoid abstraction? The answer, as it turns out, is that you can still enjoy all of the major benefits of a low-level API, not the least of which are the incredible improvements in CPU efficiency and multi-threading.

That said, any time we’re looking at an early preview it’s important to keep our expectations in check, and that is especially the case with DirectX 12. Star Swarm is a best-case scenario, and was designed to be one; it isn’t so much a measure of real-world performance as it is of technological potential.

But to that end, it’s clear that DirectX 12 has a lot of potential in the right hands and the right circumstances. It isn’t going to be easy to master, and I suspect it won’t be a quick transition, but I am very interested in seeing what developers can do with this API. With the reduced overhead, the better threading, and ultimately a vastly more efficient means of submitting draw calls, there’s a lot of potential waiting to be exploited.
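
To make the threading point a bit more concrete, below is a rough sketch of the kind of multi-threaded command recording and submission DirectX 12 is built around, based on the publicly documented D3D12 API rather than anything specific to Star Swarm; the thread count, function name, and the omitted pipeline/resource setup are purely illustrative.

// A minimal sketch, assuming a D3D12 device and command queue have already
// been created elsewhere; error checking is omitted for brevity.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    const unsigned kThreads = 4; // illustrative worker count

    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < kThreads; ++i)
    {
        // Command allocators are not thread-safe, so each recording
        // thread gets its own allocator and command list.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([&, i]
        {
            // Each thread records its own share of the frame's draw calls.
            // (Pipeline state, root signature, and resource bindings are
            // omitted here.)
            lists[i]->Close();
        });
    }

    for (auto& w : workers)
        w.join();

    // All recorded lists are then handed to the GPU queue together.
    std::vector<ID3D12CommandList*> submit;
    for (auto& l : lists)
        submit.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
}

Under DirectX 11 the equivalent work would funnel through a single immediate context on one thread, which is exactly the serialization the Star Swarm CPU results illustrate.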

Comments (245)

  • lilmoe - Friday, February 6, 2015 - link

    These tests should totally NOT be done on a Core i7....
  • tipoo - Friday, February 6, 2015 - link

    He also has an i5 and i3 in there...?
  • Gigaplex - Sunday, February 8, 2015 - link

    He has an i7 with some cores disabled to simulate i5 and i3.
  • ColdSnowden - Friday, February 6, 2015 - link

    Why do AMD radeons have a much slower batch submission time? Does that mean that using an Nvidia card with a faster batch submission time can lessen cpu bottlenecking, even if DX11 is held constant?
  • junky77 - Friday, February 6, 2015 - link

    Also, what about testing with AMD APUs and/or CPUs.. this could make a change in this area
  • WaltC - Friday, February 6, 2015 - link

    Very good write-up! My own thought about Mantle is that AMD pushed it to light a fire under Microsoft and get the company stimulated again as to D3d and the Windows desktop--among other things. Prior to AMD's Mantle announcement it wasn't certain if Microsoft was ever going to do another version of D3d--the scuttlebutt was "No" from some quarters, as hard as that was to believe. At any rate, as a stimulus it seems to have worked as a couple of months after the Mantle announcement Microsoft announces D3d/DX12, the description of which sounded suspiciously like Mantle. I think that as D3d moves ahead and continues to develop that Mantle will sort of hit the back burners @ AMD and then, like an old soldier, just fade away...;) Microsoft needs to invest heavily in DX/D3d development in the direction they are moving now, and I think as the tablet fad continues to wane and desktops continue to rebound that Microsoft is unlikely to forget its core strengths again--which means robust development for D3d for many years to come. Maximizing hardware efficiencies is not just great for lower-end PC gpus & cpus, it's also great for xBone & Microsoft's continued push into mobile devices. Looks like clear sailing ahead...;)
  • Ryan Smith - Saturday, February 7, 2015 - link

    "Prior to AMD's Mantle announcement it wasn't certain if Microsoft was ever going to do another version of D3d"

    Although I don't have access to the specific timelines, I would strongly advise against conflating the public API announcements with development milestones.

    Mike Mantor (AMD) and Max McMullen (MS) may not go out and get microbrews together, but all of the major players in this industry are on roughly the same page. DX12 would have been on the drawing board before the public announcement of Mantle, though I couldn't say which API was on the drawing board first.
  • at80eighty - Friday, February 6, 2015 - link

    Great article. Loved the "first thoughts" ending
  • HisDivineOrder - Friday, February 6, 2015 - link

    I have to say, I expected Mantle to do a LOT better for AMD than DX12 would simply because it would be more closely tailored to the architecture. I mean, that's what AMD's been saying forever and a day now. It just doesn't look true, though.

    Perhaps it's that AMD sucks at optimization, because judging by their overall gains, Mantle looks about as optimized for AMD's architecture as DX11 is. Meanwhile, nVidia looks like they've been using all this time waiting for DX12 to show up to really hone their drivers into fighting shape.

    I guess this is what happens when you fire large numbers of your employees without doing much about the people directing them, whose mistakes are most reflected in flagging sales. Then you go and hire a few big names, trumpet those names, but you don't have the "little people" to back them up and do incredible feats.

    Then again, it probably has to do with the fact that nVidia's released a whole new generation of product while AMD's still using the same product from two years ago that they've stretched thin across multiple quarters via delayed releases and LOTS of rebrands and rebundling.
  • Jumangi - Saturday, February 7, 2015 - link

    No, it was because Mantle only worked on AMD cards, and NVidia has about 2/3 of the discrete graphics card market, so most developers never bothered. Mantle never had a chance for widespread adoption.
