The DirectX 12 Performance Preview: AMD, NVIDIA, & Star Swarm
by Ryan Smith on February 6, 2015 2:00 PM EST
Posted in: DirectX 12
Bringing our preview of DirectX 12 to a close, what we’re seeing today is both a promising sign of what has been accomplished so far and a reminder of what is left to do. As it stands, much of DirectX 12’s story remains to be told – features, feature levels, developer support, and more will only be unveiled by Microsoft next month at GDC 2015. So today’s preview is much more of a beginning than an end when it comes to sizing up the future of DirectX.
But for the time being we’re finally at a point where we can say the pieces are coming together, and we can see parts of the bigger picture. Drivers, APIs, and applications are starting to arrive, giving us our first look at DirectX 12’s performance. And we have to say we like what we’ve seen so far.
With DirectX 12 Microsoft and its partners set out to create a cross-vendor but still low-level API, and while there was admittedly little doubt they could pull it off, there has always been the question of how well they could do it. What kind of improvements and performance could you truly wring out of a new API when it has to work across different products and can never entirely avoid abstraction? The answer as it turns out is that you can still enjoy all of the major benefits of a low-level API, not the least of which are the incredible improvements in CPU efficiency and multi-threading.
That said, any time we’re looking at an early preview it’s important to keep our expectations in check, and that is especially the case with DirectX 12. Star Swarm is a best case scenario and designed to be a best case scenario; it isn’t so much a measure of real world performance as it is technological potential.
But to that end, it’s clear that DirectX 12 has a lot of potential in the right hands and the right circumstances. It isn’t going to be easy to master, and I suspect it won’t be a quick transition, but I am very interested in seeing what developers can do with this API. With the reduced overhead, the better threading, and ultimately a vastly more efficient means of submitting draw calls, there’s a lot of potential waiting to be exploited.
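To illustrate why multi-threaded command submission matters, consider the recording model Direct3D 12 exposes: each CPU thread builds its own command list independently, and the finished lists are handed to the GPU queue together. The sketch below simulates that structure with plain `std::thread` and a hypothetical `DrawCall` struct; it is an analogy for the submission pattern, not actual Direct3D 12 API code.

```cpp
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical stand-in for a recorded GPU draw command; real D3D12
// command lists are opaque driver objects, not plain structs.
struct DrawCall {
    uint32_t mesh_id;
    uint32_t instance_count;
};

// One worker records its own command list with no shared state --
// the core idea behind D3D12's multi-threaded recording model.
std::vector<DrawCall> RecordCommandList(uint32_t first_mesh, uint32_t count) {
    std::vector<DrawCall> list;
    list.reserve(count);
    for (uint32_t i = 0; i < count; ++i)
        list.push_back(DrawCall{first_mesh + i, 1});
    return list;
}

// Record `total` draws across `threads` workers, then "submit" by
// concatenating the per-thread lists into one queue, loosely
// mimicking ExecuteCommandLists accepting several lists at once.
std::vector<DrawCall> RecordInParallel(uint32_t total, uint32_t threads) {
    std::vector<std::vector<DrawCall>> lists(threads);
    std::vector<std::thread> workers;
    const uint32_t per_thread = total / threads;
    for (uint32_t t = 0; t < threads; ++t)
        workers.emplace_back([&lists, t, per_thread] {
            lists[t] = RecordCommandList(t * per_thread, per_thread);
        });
    for (auto& w : workers) w.join();

    std::vector<DrawCall> queue;
    for (auto& l : lists)
        queue.insert(queue.end(), l.begin(), l.end());
    return queue;
}
```

In actual Direct3D 12 the corresponding pieces would be per-thread `ID3D12GraphicsCommandList` objects and a single `ExecuteCommandLists` call, whereas under DirectX 11 the driver funnels most of this work through one thread, which is roughly the overhead Star Swarm is designed to expose.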
junky77 - Friday, February 6, 2015
Looking at the CPU scaling graphs and CPU/GPU usage, it doesn't look like the situation in other games, where the CPU can be maxed out. It does seem like this engine and test might be tailored specifically to this DX12 and Mantle use case.
The interesting thing is to understand whether the DX11 performance shown here is optimal. The CPU usage is way below max, even for the one core supposedly taking all the load. Something is bottlenecking the performance, and it's not the number of cores, threads, or clocks.
eRacer1 - Friday, February 6, 2015
So the GTX 980 is using less power than the 290X while performing ~50% better, and somehow NVIDIA is the one with the problem here? The data is clear: the GTX 980 has a massive DX12 (and DX11) performance lead and performance-per-watt lead over the 290X.
The_Countess666 - Thursday, February 19, 2015
It also costs twice as much.
And this is the first time in roughly four generations that NVIDIA has managed to release a new generation first. It would be shocking if there weren't a huge performance difference between AMD and NVIDIA at the moment.
bebimbap - Friday, February 6, 2015
TDP and power consumption are not the same thing, but they are related. If I had to write a simple equation, it would be something to the effect of:
TDP (wasted heat) = (power consumption) × (process node coeff) × (silicon temperature coeff) × (architecture coeff)
So basically TDP, or "wasted heat," is related to power consumption but is not the same thing. Since both chips are on the same process node at the same foundry, the difference in TDP vs. power consumed would be because NVIDIA currently has the more efficient architecture, which also leads to their chips running cooler; both of these lead to less "wasted heat."
A perfect conductor would have 0 TDP and infinite power consumption.
Mr Perfect - Saturday, February 7, 2015
Erm, I don't think you've got the right term there with TDP. TDP is not defined as "wasted heat," but as the typical power draw of the board. So if the TDP of the GTX 980 is 165 watts, that just means that in normal gaming use it draws 165 watts.
Besides, if a card is drawing 165 watts, it's all going to become heat somewhere along the line. I'm not sure you can really decide how many of those watts are "wasted" and how many are actually doing "work."
Wwhat - Saturday, February 7, 2015
No, he's right. TDP means thermal design power, and it defines the cooling a system needs to run at full power.
Strunf - Saturday, February 7, 2015
It's the same... if a graphics card draws 165W it needs a 165W cooler. Do you see anything moving on your card except the fans? No, so all of the power will be transformed into heat.
wetwareinterface - Saturday, February 7, 2015
No, it's not the same. A 165W TDP means the cooler has to dump 165W worth of heat. A 165W power draw means the card needs to have 165W of power available to it. If the card draws 300W of power and has 200W of heat output, that means the card is dumping 200W of that 300W into the cooler.
Strunf - Sunday, February 8, 2015
It's impossible for the card to draw 300W and only output 200W of heat... unless of course graphics cards now defy the laws of physics.
grogi - Sunday, April 5, 2015
What is it doing with the remaining 100W?