First Thoughts

Bringing our preview of DirectX 12 to a close, what we’re seeing today is both a promising sign of what has been accomplished so far and a reminder of what is left to do. As it stands, much of DirectX 12’s story remains to be told – features, feature levels, developer support, and more will only be fully unveiled by Microsoft next month at GDC 2015. So today’s preview is much more of a beginning than an end when it comes to sizing up the future of DirectX.

But for the time being we’re finally at a point where we can say the pieces are coming together, and we can finally see parts of the bigger picture. Drivers, APIs, and applications are starting to arrive, giving us our first look at DirectX 12’s performance. And we have to say we like what we’ve seen so far.

With DirectX 12 Microsoft and its partners set out to create a cross-vendor but still low-level API, and while there was admittedly little doubt they could pull it off, there has always been the question of how well they could do it. What kind of improvements and performance could you truly wring out of a new API when it has to work across different products and can never entirely avoid abstraction? The answer as it turns out is that you can still enjoy all of the major benefits of a low-level API, not the least of which are the incredible improvements in CPU efficiency and multi-threading.

That said, any time we’re looking at an early preview it’s important to keep our expectations in check, and that is especially the case with DirectX 12. Star Swarm is a best-case scenario, and it is designed to be one; it isn’t so much a measure of real-world performance as it is of technological potential.

But to that end, it’s clear that DirectX 12 has a lot of potential in the right hands and the right circumstances. It isn’t going to be easy to master, and I suspect it won’t be a quick transition, but I am very interested in seeing what developers can do with this API. With the reduced overhead, the better threading, and ultimately a vastly more efficient means of submitting draw calls, there’s a lot of potential waiting to be exploited.
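To make the draw call submission point a little more concrete, below is a minimal C++ sketch of the pattern DirectX 12 makes possible: each worker thread records its share of the frame into its own command list, and the finished lists are then handed to the GPU together. This is purely illustrative and not code from Star Swarm or this article; the SubmitFrame and RecordDraws names are hypothetical, an initialized device, command queue, and pipeline state are assumed to already exist, and error handling is omitted for brevity.

```cpp
// Illustrative sketch: multi-threaded command list recording in Direct3D 12.
// Assumes an already-initialized device, command queue, and pipeline state;
// RecordDraws() is a hypothetical stand-in for an engine's scene traversal.
// (HRESULT checks omitted for brevity.)
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

// Hypothetical per-thread recording routine: a real engine would walk its
// slice of the scene here and issue the actual state changes and draw calls.
void RecordDraws(ID3D12GraphicsCommandList* /*list*/,
                 unsigned /*threadIndex*/, unsigned /*threadCount*/)
{
    // e.g. list->SetGraphicsRootSignature(...); list->DrawIndexedInstanced(...);
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 ID3D12PipelineState* pso, unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        // One allocator and one command list per thread: no shared state,
        // no driver-side lock serializing draw call submission.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso,
                                  IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([&lists, i, threadCount]
        {
            RecordDraws(lists[i].Get(), i, threadCount);
            lists[i]->Close();   // finish recording on the worker thread
        });
    }

    for (auto& w : workers)
        w.join();

    // All of the recorded work is handed to the GPU in one inexpensive call.
    std::vector<ID3D12CommandList*> raw(threadCount);
    for (unsigned i = 0; i < threadCount; ++i)
        raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(threadCount, raw.data());
}
```

The key design point is that every thread owns its allocator and command list, so recording scales across cores without locks; that is exactly the kind of CPU-side win a draw-call-heavy test like Star Swarm is built to expose.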

245 Comments

  • silverblue - Saturday, February 7, 2015 - link

    This is but one use case. There does need to be an investigation into why AMD performs so poorly here with all three APIs, however - either it's a hardware deficiency exposed by the test, or NVIDIA's drivers just handle it better. I'll hold off on the conspiracy theories for now; this isn't Ubisoft, after all.
  • AnnonymousCoward - Friday, February 6, 2015 - link

    Finally a reason to move on from XP!
  • BadThad - Saturday, February 7, 2015 - link

    Great article! Thanks Ryan
  • PanzerEagle - Saturday, February 7, 2015 - link

    Great article! Would like to see a follow-up/add-on article with a system that mimics an Xbox One. Since Windows 10 is coming to the One, it would be great to see the performance delta for the Xbox.
  • thunderising - Saturday, February 7, 2015 - link

    So.

    Mantle is better, eh.
  • Nuno Simões - Tuesday, February 10, 2015 - link

    How did you come to that conclusion after this article?
  • randomguy1 - Saturday, February 7, 2015 - link

    As per the GPU scaling results, the percentage gain for Radeon cards is MUCH higher than for Nvidia's cards. Nvidia had been ahead of AMD in terms of optimising its software for its hardware, but with the release of DX12 they are almost on level ground. What this means is that similarly priced Radeon cards will get a huge boost in performance compared to their Nvidia counterparts. This is a BIG win for AMD.
  • nulian - Saturday, February 7, 2015 - link

    If you read the article, it said we don't know why AMD's CPU performance is so poor; it might be because NVIDIA's drivers handle multithreading on DX11 better than AMD's, and the benchmark was originally written for AMD.
  • calzone964 - Saturday, February 7, 2015 - link

    I hope Assassin's Creed Unity is patched with this when Windows 10 releases. That game needs this so much...
  • althaz - Saturday, February 7, 2015 - link

    "AMD is banking heavily on low-level APIs like Mantle to help level the CPU playing field with Intel, so if Mantle needs 4 CPU cores to fully spread its wings with faster cards, that might be a problem."

    Actually - this is an extra point helping AMD out. Their CPUs get utterly crushed in single-threaded performance, but by and large at similar prices their CPUs will nearly always have more cores. Likely AMD simply don't care about dual-core chips - because they don't sell any.
