DirectX 12 vs. Mantle, Power Consumption

Although the bulk of our coverage today is focused on DirectX 12 versus DirectX 11, we also wanted to take a moment to look at how DirectX 12 compares to AMD's Mantle. Mantle offers an interesting point of contrast, both because it has been in beta longer than DirectX 12 and because it is an even lower-level API. Since Mantle only needs to work on AMD's GPUs and can be tweaked for AMD's architectures, it offers AMD the chance to exploit their GPUs in a few additional ways that a common, cross-vendor API like DirectX 12 cannot.

Star Swarm - Direct3D 12 vs. Mantle (4 Cores) - Extreme Quality

With 4 cores we find that AMD achieves better results with Mantle than with DirectX 12 across the board. The gains are never very great – a few percent here and there – but they are consistent and just outside our window of variability for the Star Swarm benchmark. With such a small gain there are a number of factors that could explain this outcome – better developed drivers, a better developed application, the further benefits of working with a known hardware platform – so we can't credit any one factor. But it's safe to say that at least in this one instance, at this time, Star Swarm's Mantle rendering path produces even better results than its DirectX 12 path on AMD cards.

Star Swarm - Direct3D 12 vs. Mantle (2 Cores) - Extreme Quality

On the other hand, Mantle doesn't seem to accommodate a two-core situation as well, with the 290X seeing a small but distinct performance regression when switching from DirectX 12 to Mantle. Though we didn't have time to look at an AMD APU for this article, it would be interesting to see whether this regression occurs on their 2M/4C parts as well; AMD is banking heavily on low-level APIs like Mantle to help level the CPU playing field with Intel, so if Mantle needs 4 CPU cores to fully spread its wings with faster cards, that could be a problem.

Star Swarm CPU Batch Submission Time (4 Cores) - D3D vs. Mantle - Extreme Quality

Diving deeper, we can see that part of the explanation for our Mantle performance regression may come from the batch submission process. DirectX 12 is unexpectedly well ahead of Mantle here, with batch submission taking on average a bit more than half as long as it does under Mantle. As batch submission times are highly correlated with CPU bottlenecking in Star Swarm, this implies that DirectX 12 would bottleneck later than Mantle in this instance. That said, since we're so strongly GPU-bound right now, it's not at all clear whether either API would be CPU bottlenecked any time soon.
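
For context on what the "batch submission" measurement covers: under Direct3D 12 the CPU records draw commands into command lists and then hands them to the GPU in bulk. The sketch below shows the general shape of this in the D3D12 API; it is a minimal illustration, not Star Swarm's actual renderer, and the function name and batch count are invented for the example.

```cpp
// Illustrative sketch of Direct3D 12 batch submission -- not Star Swarm's
// actual renderer. Error handling, root signatures, and resource barriers
// are omitted for brevity.
#include <d3d12.h>

void SubmitBatches(ID3D12CommandQueue* queue,
                   ID3D12CommandAllocator* allocator,
                   ID3D12GraphicsCommandList* cmdList,
                   ID3D12PipelineState* pso)
{
    // Recording commands is CPU-side work, and in D3D12 it can be spread
    // across threads; this is what "batch submission time" measures.
    allocator->Reset();
    cmdList->Reset(allocator, pso);

    // ... bind root signature, viewport, and vertex buffers here ...

    for (int batch = 0; batch < 1000; ++batch)
    {
        // Each small batch ultimately becomes a draw in the command list.
        cmdList->DrawInstanced(/*VertexCountPerInstance*/ 36,
                               /*InstanceCount*/ 1,
                               /*StartVertexLocation*/ 0,
                               /*StartInstanceLocation*/ 0);
    }

    cmdList->Close();

    // A single cheap call hands the entire list to the GPU's command
    // processor, which then churns through the batches.
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);
}
```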

Update: Oxide Games has emailed us this evening with a bit more detail about what's going on under the hood, and why Mantle batch submission times are higher. When working with large numbers of very small batches, Star Swarm is capable of throwing enough work at the GPU that the GPU's command processor becomes the bottleneck. For this reason the Mantle path includes an optimization routine for small batches (OptimizeSmallBatch=1), which trades extra CPU work for reduced command processor load: it does a second pass on the batches on the CPU to combine some of them before submitting them to the GPU. This bypasses the command processor bottleneck, but it increases the amount of work the CPU needs to do (though note that in AMD's case, it's still several times faster than DX11).
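
Oxide hasn't published the code for this pass, but the general idea can be sketched as follows. Everything here is a hypothetical stand-in: the Batch structure and CanMerge rule are invented for illustration, not taken from Star Swarm.

```cpp
// Hypothetical sketch of a small-batch combining pass. The idea: spend
// extra CPU time merging consecutive draws that share state, so the GPU's
// command processor sees fewer, larger batches.
#include <vector>

struct Batch {
    int pipelineStateId;  // draws can only merge if their state matches
    int vertexCount;
};

static bool CanMerge(const Batch& a, const Batch& b) {
    return a.pipelineStateId == b.pipelineStateId;
}

std::vector<Batch> CombineSmallBatches(const std::vector<Batch>& in)
{
    std::vector<Batch> out;
    for (const Batch& b : in) {
        if (!out.empty() && CanMerge(out.back(), b)) {
            out.back().vertexCount += b.vertexCount;  // fold into previous batch
        } else {
            out.push_back(b);  // state change: start a new batch
        }
    }
    // Fewer batches reach the command processor, but this loop is the
    // extra CPU work that bites on 2-core configurations.
    return out;
}
```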

This feature is enabled by default in our build, and the combining of those small batches is the likely reason that the Mantle path holds a slight performance edge over the DX12 path on our AMD cards. The tradeoff is that in a 2-core configuration, the extra CPU workload from the optimization pass is just enough to cause Star Swarm to start bottlenecking at the CPU again. For the time being this is a user-adjustable feature in Star Swarm, and Oxide notes that in any shipping game the small batch optimization would likely be turned off by default on slower CPUs.

Star Swarm CPU Batch Submission Time (4 Cores) - Small Batch Optimization

Star Swarm - Direct3D 12 vs. Mantle (4 Cores) - Small Batch Optimization

If we turn off the small batch optimization feature, what we find is that Mantle's batch submission time drops by nearly half, to an average of 4.4ms. With the second pass removed, Mantle and DirectX 12 take roughly the same amount of time to submit batches in a single pass. However, as Oxide noted, there is a performance cost: the Mantle rendering path goes from being ahead of DirectX 12 to trailing it. So given sufficient CPU power to pay for the optimization pass, batch optimization can have a significant impact (16%) on Mantle performance.
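
For anyone experimenting with the benchmark themselves, the toggle takes the key=value form Oxide cites. Only the OptimizeSmallBatch name comes from Oxide; the surrounding excerpt below is assumed for illustration.

```ini
; OptimizeSmallBatch is the setting name Oxide cites; the rest of this
; excerpt is illustrative, not Star Swarm's actual settings file.
OptimizeSmallBatch=1   ; 1 = extra CPU pass merges small batches (default)
;OptimizeSmallBatch=0  ; 0 = submit batches as-is (lighter CPU load)
```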

Star Swarm System Power Consumption (6 Cores)

Finally, we wanted to take a quick look at power consumption across cards and APIs. To repeat what we said earlier, Star Swarm is an imperfect, non-deterministic benchmark, and coupled with the in-development status of DirectX 12, everything here is subject to change. However we thought this was interesting enough to include in our evaluation.

As expected, the increased throughput from DirectX 12 and Mantle drives up system power consumption. With the CPU no longer the bottleneck, the GPU never gets a chance to idle, and video card power consumption ramps up to full load.

Comments

  • loguerto - Saturday, February 7, 2015

    Microsoft is on the right track, but still, Mantle is the boss!
  • FXi - Saturday, February 7, 2015

    I'm sadly more curious as to whether the 6-core chips prove their worth. A lot of speculation suggests that DX12 might finally show that a 6-core matters, but nothing here shows that. That's a key issue when deciding whether to go for a higher end chip or stick with 4-core CPUs.
  • GMAR - Saturday, February 7, 2015

    Excellent article. Thank you!
  • Shahnewaz - Saturday, February 7, 2015

    Wait a minute, isn't the GTX 980 a 165W TDP card? Then how is it pulling over 200 watts?
  • eRacer1 - Sunday, February 8, 2015

    The GTX 980 isn't pulling over 200W. The numbers shown are system power consumption, not video card power consumption. The GTX 980 system power consumption isn't unusually high.

    Also, the system power consumption numbers are understating the power difference between the GTX 980 and Radeon 290X cards themselves under DX12. The GTX 980 has such a large performance advantage over the 290X in DX12 that the CPU is also using more power in the GTX 980 system to keep up with the video card.

    If anything the 290X power consumption is "too low", especially under DX12. To me it looks like the GPU is being underutilized, which seems to be the case based on the low FPS results and power consumption numbers. That could be due to many reasons: poor driver optimization, 290X architectural limitations, benchmark bug or design choice, Windows 10 issue, 290X throttling problem, etc. Hopefully, for AMD's sake, those issues can be worked out before the Windows 10 launch.
  • Shahnewaz - Sunday, February 8, 2015

    That doesn't explain the <20W difference between the two systems.
    And it's not like the CPU usage is radically different.
    Remember, the TDP difference between the GPUs is a massive 125W (290W vs 165W).
  • eRacer1 - Sunday, February 8, 2015

    "That doesn't explain the <20W difference in both systems. And it's not like the CPU usage is also radically different."

    Looking at the CPU usage graphs in the review, the GTX 980 DX12 CPU average across all four cores is about 80%, while the 290X average is only about 50%. So the GTX 980's CPU is doing 60% more work. That alone could easily account for 20+ watts of extra CPU power consumption in the GTX 980 system. The ~60% higher CPU usage in the GTX 980 system makes sense, as the frame rate is 56% higher as well. So what looks like a 14W difference is probably more like a 35W difference between the GTX 980 and 290X video cards.

    But the 35W difference doesn't tell the whole story, because the GTX 980 is also 56% faster while using less power. So the GTX 980 has a MASSIVE efficiency advantage under these benchmark conditions. And it is doing it within a reasonable TDP, because by the time you back out all of the non-GPU power consumption (CPU, memory, motherboard, hard drive, fans, etc.) and PSU inefficiency losses from the 271W system power consumption, you'd likely find that the GTX 980 is under 200W.

    So the question we're left with is why a system with a 290W TDP 290X only draws 285W under DX12. By the time you subtract the CPU power consumption (which is somewhat less than that of the GTX 980 test due to only being at 50% load instead of 80%), motherboard, memory, and other components, the 290X is probably using only 200-220W. To me it looks like the 290X is being bottlenecked and as a result isn't using as much power as one would expect. What the source of the bottleneck is, and whether it is correctable, remains a mystery. (See the back-of-envelope sketch after these comments for the shape of this estimate.)
  • Shahnewaz - Saturday, February 7, 2015

    It looks like AMD GPUs will get some 400%+ performance improvements! Sick!
  • ET - Sunday, February 8, 2015

    My main takeaway from the article is that NVIDIA has done a much better job of optimising its DX11 drivers. AMD needs low-level APIs badly.
  • bloodypulp - Sunday, February 8, 2015

    They already have it: Mantle.
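
A back-of-envelope version of the estimate eRacer1 describes above is shown below. Only the 271W wall figure comes from the review; the PSU efficiency, CPU, and "other components" numbers are assumptions chosen for illustration, not measurements.

```cpp
// Back-of-envelope estimate of GPU-only power from measured wall power.
// Only the 271W wall figure comes from the review; the PSU efficiency,
// CPU, and "other components" numbers are assumptions for illustration.
#include <cstdio>

int main()
{
    const double wallPowerW  = 271.0;  // measured GTX 980 system draw at the wall
    const double psuEff      = 0.90;   // assumed PSU efficiency at this load
    const double cpuPowerW   = 50.0;   // assumed CPU draw at ~80% load
    const double otherPowerW = 35.0;   // assumed motherboard/RAM/drives/fans

    // Convert AC wall draw to DC power delivered, then back out the
    // non-GPU components to isolate the video card.
    const double dcPowerW  = wallPowerW * psuEff;
    const double gpuPowerW = dcPowerW - cpuPowerW - otherPowerW;

    std::printf("Estimated GPU draw: ~%.0f W\n", gpuPowerW);  // ~159 W
    return 0;
}
```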
