DirectX 12 vs. Mantle, Power Consumption

Although the bulk of our coverage today is focused on DirectX 12 versus DirectX 11, we also wanted to take a moment to look at how DirectX 12 compares to AMD’s Mantle. Mantle offers an interesting point of contrast, both because it has been in beta longer than DirectX 12 and because it’s an even lower-level API. Since Mantle only needs to work on AMD’s GPUs and can be tweaked for AMD’s architectures, it gives AMD the chance to exploit their GPUs in a few additional ways that a common, cross-vendor API like DirectX 12 cannot.

Star Swarm - Direct3D 12 vs. Mantle (4 Cores) - Extreme Quality

With 4 cores we find that AMD achieves better results with Mantle than DirectX 12 across the board. The gains are never very great – a few percent here and there – but they are consistent and just outside our window of variability for the Star Swarm benchmark. With such a small gain there are a number of factors that can possibly explain this outcome – better developed drivers, better developed application, further benefits of working with a known hardware platform – so we can’t credit any one factor. But it’s safe to say that at least in this one instance, at this time, Star Swarm’s Mantle rendering path produces even better results than its DirectX 12 path on AMD cards.

Star Swarm - Direct3D 12 vs. Mantle (2 Cores) - Extreme Quality

On the other hand, Mantle doesn’t seem to accommodate a two-core situation as well, with the 290X seeing a small but distinct performance regression when switching from DirectX 12 to Mantle. Though we didn’t have time to look at an AMD APU for this article, it would be interesting to see whether this regression occurs on their 2M/4C parts as well; AMD is banking heavily on low-level APIs like Mantle to help level the CPU playing field with Intel, so if Mantle needs 4 CPU cores to fully spread its wings with faster cards, that might be a problem.

Star Swarm CPU Batch Submission Time (4 Cores) - D3D vs. Mantle - Extreme Quality

Diving deeper, we can see that part of the explanation for our Mantle performance regression may come from the batch submission process. DirectX 12 is unexpectedly well ahead of Mantle here, with batch submission taking on average a bit more than half as long as it does under Mantle. As batch submission times are highly correlated to CPU bottlenecking on Star Swarm, this would imply that DirectX 12 would bottleneck later than Mantle in this instance. That said, since we’re so strongly GPU-bound right now it’s not at all clear if either API would be CPU bottlenecked any time soon.
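For reference, the batch submission figure here is simply the CPU-side wall-clock cost of pushing a frame’s batches to the API. The sketch below shows the general shape of that measurement; submit_batch is a hypothetical stand-in for the real API call, not anything from Star Swarm, Direct3D 12, or Mantle.

```python
# Rough sketch: time the CPU-side batch submission loop for one frame, then
# average over many frames since any single frame is noisy.
# submit_batch() is a hypothetical placeholder for the real API submission call.
import time

def time_batch_submission_ms(batches, submit_batch):
    start = time.perf_counter()
    for batch in batches:
        submit_batch(batch)
    return (time.perf_counter() - start) * 1000.0  # milliseconds

def average_submission_ms(frames, submit_batch):
    times = [time_batch_submission_ms(frame, submit_batch) for frame in frames]
    return sum(times) / len(times)
```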

Update: Oxide Games has emailed us this evening with a bit more detail about what's going on under the hood, and why Mantle batch submission times are higher. When working with large numbers of very small batches, Star Swarm is capable of throwing enough work at the GPU such that the GPU's command processor becomes the bottleneck. For this reason the Mantle path includes an optimization routine for small batches (OptimizeSmallBatch=1), which trades GPU power for CPU power, doing a second pass on the batches in the CPU to combine some of them before submitting them to the GPU. This bypasses the command processor bottleneck, but it increases the amount of work the CPU needs to do (though note that in AMD's case, it's still several times faster than DX11).

This feature is enabled by default in our build, and the combining of those small batches is the likely reason that the Mantle path holds a slight performance edge over the DX12 path on our AMD cards. The tradeoff is that in a 2 core configuration, the extra CPU workload from the optimization pass is just enough to cause Star Swarm to start bottlenecking at the CPU again. For the time being this is a user-adjustable feature in Star Swarm, and Oxide notes that in any shipping game the small batch feature would likely be turned off by default on slower CPUs.
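To make the mechanism a bit more concrete, below is a minimal sketch of a small-batch combining pass of the sort Oxide describes: a second CPU loop that merges runs of small batches sharing the same state before submission. The names, threshold, and Batch structure are all hypothetical illustrations, not Star Swarm or Mantle code.

```python
# Minimal sketch of a small-batch combining pass: merge adjacent small batches
# that share pipeline state, so fewer submissions reach the GPU's command
# processor, at the cost of extra CPU work in this loop.
from dataclasses import dataclass

@dataclass
class Batch:
    draw_calls: int
    state_key: int  # batches can only be merged if they share pipeline state

SMALL_BATCH_THRESHOLD = 8    # assumed cutoff for a "small" batch
OPTIMIZE_SMALL_BATCH = True  # analogous in spirit to OptimizeSmallBatch=1

def combine_small_batches(batches):
    combined = []
    for b in batches:
        prev = combined[-1] if combined else None
        if (OPTIMIZE_SMALL_BATCH and prev is not None
                and b.draw_calls < SMALL_BATCH_THRESHOLD
                and prev.draw_calls < SMALL_BATCH_THRESHOLD
                and prev.state_key == b.state_key):
            combined[-1] = Batch(prev.draw_calls + b.draw_calls, b.state_key)
        else:
            combined.append(b)
    return combined

if __name__ == "__main__":
    work = [Batch(2, 0), Batch(3, 0), Batch(50, 1), Batch(1, 1), Batch(2, 1)]
    print(len(work), "batches ->", len(combine_small_batches(work)))  # 5 -> 3
```

Every merge in a pass like this saves the command processor a submission but costs CPU time, which is exactly the tradeoff that bites in the 2-core configuration.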

Star Swarm CPU Batch Submission Time (4 Cores) - Small Batch Optimization

Star Swarm - Direct3D 12 vs. Mantle (4 Cores) - Small Batch Optimization

If we turn off the small batch optimization feature, what we find is that Mantle's batch submission time drops nearly in half, to an average of 4.4ms. With the second pass removed, Mantle and DirectX 12 take roughly the same amount of time to submit batches in a single pass. However, as Oxide noted, there is a performance hit; the Mantle rendering path's performance goes from being ahead of DirectX 12 to trailing it. So given sufficient CPU power to pay the price for batch optimization, it can have a significant impact (16%) on improving performance under Mantle.

Star Swarm System Power Consumption (6 Cores)

Finally, we wanted to take a quick look at power consumption among cards and APIs. To repeat what we said earlier, Star Swarm is an imperfect, non-deterministic benchmark, and coupled with the in-development status of DirectX 12, everything here is subject to change. However, we thought this was interesting enough to include in our evaluation.

As expected, the increased throughput from DirectX 12 and Mantle drives up system power consumption. With the CPU no longer the bottleneck, the GPU never gets a chance to idle and video card power consumption ramps up to full load.

Comments

  • Mr Perfect - Sunday, February 8, 2015 - link

    That's not what he's saying though; he said TDP is some measure of how much of the power is 'wasted' as heat. As if there's some way to figure out what part of the 165 watts is doing computational work and what part is just turning into heat without doing any computational work. That's not what TDP measures.

    Also, CPUs and GPUs can routinely go past TDP, so I'm not sure where people keep getting the idea that TDP is maximum power draw. It's seen regularly in the benchmarks here at AnandTech. That's usually one of the goals of the power section of reviews: seeing if the manufacturer's TDP calculation of typical power draw holds up in the real world.
  • Mr Perfect - Sunday, February 8, 2015 - link

    Although, now that I think about it, I do remember a time when TDP actually was pretty close to maximum power draw. But then Intel came out with the NetBurst architecture and started defining TDP as the typical power used by the part in real-world use, since the maximum power draw was so ugly. After a lot of outrage from the other companies, they picked up the same practice so they wouldn't seem to be at a disadvantage in regard to power draw. That was ages ago though; TDP hasn't meant maximum power draw for years.
  • Strunf - Sunday, February 8, 2015 - link

    TDP essentially means your GPU can work at that power level for a long time. In the past the CPU/GPU sat close to it because they didn't have throttling, idle states and whatnot; today they do, and they can go past the TDP for "short" periods of time. With the help of thermal sensors they can adjust the power as needed without risking burning out the CPU/GPU.
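For readers who want to check the "can go past TDP" claim on their own hardware, here is a minimal sketch, assuming an NVIDIA card and nvidia-smi on the PATH; the 165 W value is simply the TDP figure cited in the comments above, not a measurement.

```python
# Minimal sketch: sample the board's reported power draw via nvidia-smi and
# note when it momentarily sits above the rated TDP. Assumes an NVIDIA GPU
# and nvidia-smi on the PATH; 165 W is just the TDP figure cited above.
import subprocess
import time

RATED_TDP_W = 165.0

def sample_power_draw_watts():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True)
    return float(out.strip().splitlines()[0])

if __name__ == "__main__":
    for _ in range(10):
        draw = sample_power_draw_watts()
        status = "above" if draw > RATED_TDP_W else "within"
        print(f"{draw:6.1f} W ({status} rated TDP)")
        time.sleep(1.0)
```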
  • YazX_ - Friday, February 6, 2015 - link

    Dude, it's total system power consumption, not video card only.
  • Morawka - Friday, February 6, 2015 - link

    Are you sure you're not looking at factory overclocked cards? The 980 has an 8-pin and a 6-pin connector. You've got to subtract the CPU and motherboard power.

    Check any reference review on power consumption

    http://www.guru3d.com/articles_pages/nvidia_geforc...
  • Yojimbo - Friday, February 6, 2015 - link

    Did you notice the 56% greater performance? The rest of the system is going to be drawing more power to keep up with the greater GPU performance. NVIDIA is getting much greater benefit from having 4 cores than 2, for instance. And who knows, maybe the GPU itself was able to run closer to full load. Also, the benchmark is not deterministic, as mentioned several times in the article. It is the wrong sort of benchmark to be using to compare two different GPUs in power consumption, unless the test is run a significant number of times. Finally, you said the R9 290X-powered system consumed 14W more in the DX12 test than the GTX 980-powered system, but the list shows it consumed 24W more. Let's not even compare DX11 power consumption using this benchmark, since NVIDIA's performance is 222% higher.
  • MrPete123 - Friday, February 6, 2015 - link

    Win7 will be dominant in businesses for some time, but not on gaming PCs, where this will be of more benefit.
  • Yojimbo - Friday, February 6, 2015 - link

    Most likely the main reasons for consumers not upgrading to Windows 10 will be laziness, comfort, and ignorance.
  • Murloc - Saturday, February 7, 2015 - link

    People who are CPU bottlenecked are not that kind of people, given the amount of money they spend on GPUs.
  • Frenetic Pony - Friday, February 6, 2015 - link

    FREE. Ok. FREE. F and then R and then E and then another E.
