CPU Scaling

Diving into our look at DirectX 12, let’s start with what is going to be the most critical component for a benchmark like Star Swarm: CPU scaling.

Because Star Swarm is designed to exploit the threading inefficiencies of DirectX 11, the biggest gains from switching to DirectX 12 on Star Swarm come from removing the CPU bottleneck. Under DirectX 11 the bulk of Star Swarm’s batch submission work happens under a single thread, and as a result the benchmark is effectively bottlenecked by single-threaded performance, unable to scale out with multiple CPU cores. This is one of the issues DirectX 12 sets out to resolve, with the low-level API allowing Oxide to more directly control how work is submitted, and as such better balance it over multiple CPU cores.
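To make the difference concrete, below is a minimal, hypothetical C++ sketch of the multi-threaded submission model DirectX 12 exposes. This is not Oxide’s engine code; the device, queue, pipeline state, resource bindings, error handling, and GPU synchronization are all assumed to be handled elsewhere, and the batchesForWorker helper is invented for illustration. The idea is simply that each worker thread records its own share of the frame’s draw calls into its own command list, and the main thread then submits everything with one call.

```cpp
// Hypothetical sketch: one command allocator + command list per worker thread.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                          ID3D12PipelineState* pso, unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);

    for (unsigned i = 0; i < workerCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records its slice of the scene's batches independently.
    // Under DirectX 11 this recording would serialize onto one thread.
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < workerCount; ++i)
    {
        workers.emplace_back([&, i]
        {
            // ... set root signature, descriptor heaps, vertex/index buffers ...
            // for (const auto& batch : batchesForWorker(i))   // hypothetical helper
            //     lists[i]->DrawIndexedInstanced(batch.indexCount, 1, 0, 0, 0);
            lists[i]->Close();   // finish recording on this thread
        });
    }
    for (auto& t : workers)
        t.join();

    // Submission itself is a single, cheap call on the main thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    // (Fence-based synchronization before reusing the allocators is omitted.)
}
```

Under DirectX 11 the same draws would typically all be issued through a single immediate context, which is the single-threaded bottleneck this benchmark is built to expose.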

Star Swarm CPU Scaling - Extreme Quality - GeForce GTX 980

Star Swarm CPU Scaling - Extreme Quality - Radeon R9 290X

Starting with a look at CPU scaling on our fastest cards, what we find is that beyond the absurd performance difference between DirectX 11 and DirectX 12, performance scales roughly as we’d expect among our CPU configurations. Star Swarm’s DirectX 11 path, being bound by single-threaded performance, scales only slightly with clockspeed and core count increases. The DirectX 12 path on the other hand scales moderately well from 2 to 4 cores, but doesn’t scale beyond that. This is because at these settings, even pushing over 100K draw calls, both GPUs are solidly GPU limited; anything more than 4 cores goes to waste as we’re no longer CPU-bound. This means we don’t even need a highly threaded processor to take advantage of DirectX 12’s strengths in this scenario, as even a 4-core processor provides plenty of kick.

Meanwhile this setup also highlights the fact that under DirectX 11 there is a massive difference in performance between AMD and NVIDIA. In both cases we are completely CPU bound, with AMD’s drivers only able to deliver a third of the performance of NVIDIA’s. Given that this is the original Mantle benchmark, I’m not sure we should read too much into the DirectX 11 situation, since AMD has little incentive to optimize for this game; but there is clearly a massive difference in CPU efficiency under DirectX 11 in this case.

Star Swarm D3D12 CPU Scaling - Extreme Quality

Having effectively ruled out the need for 6 core CPUs for Star Swarm, let’s take a look at a breakdown across all of our cards for performance with 2 and 4 cores. What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, finds itself CPU-bound with just 2 cores. For the AMD cards and other NVIDIA cards we can get GPU bound with the equivalent of an Intel Core i3 processor, showcasing just how effective DirectX 12’s improved batch submission process can be. In fact it’s so efficient that Oxide is running both batch submission and a complete AI simulation over just 2 cores.

Star Swarm CPU Batch Submission Time (4 Cores)

Speaking of batch submission, if we look at Star Swarm’s statistics we can find out just what’s going on under the hood. The results are nothing short of incredible, particularly in the case of AMD. Batch submission time is down from dozens of milliseconds or more to just 3-5ms for our fastest cards, an improvement of just over an order of magnitude. For all practical purposes the need to spend CPU time submitting batches has been eliminated entirely, with upwards of 120K draw calls being submitted in a handful of milliseconds. It is this optimization that is at the core of Star Swarm’s DirectX 12 performance improvements, and going forward it could potentially benefit many other games as well.
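As a rough back-of-the-envelope illustration using the figures above (an estimate, not a measured value): submitting on the order of 120K draw calls in roughly 4ms of batch submission time works out to about 30,000 draw calls per millisecond, or around 30-35 nanoseconds of CPU submission time per draw call on average. If a comparable number of batches took dozens of milliseconds to submit on a single DirectX 11 thread, as the DirectX 11 results suggest, the per-call cost would instead be in the hundreds of nanoseconds, which is the order-of-magnitude gap the chart above reflects.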


Another metric we can look at is actual CPU usage as reported by the OS, as shown above. In this case CPU usage more or less perfectly matches our earlier expectations: with DirectX 11 both the GTX 980 and R9 290X show very uneven usage with 1-2 cores doing the bulk of the work, whereas with DirectX 12 CPU usage is spread out evenly over all 4 CPU cores.

At the risk of belaboring the point, what we’re seeing here is exactly why Mantle, DirectX 12, OpenGL Next, and other low-level APIs have been created. With single-threaded performance struggling to increase while GPUs continue to improve by leaps and bounds with each generation, something must be done to allow games to better spread their rendering and submission workloads over multiple cores. The solution to that problem is to eliminate the abstraction and let developers do it themselves through APIs like DirectX 12.

Comments

  • loguerto - Saturday, February 7, 2015 - link

    Microsoft is on the right way, but still, Mantle is the boss!
  • FXi - Saturday, February 7, 2015 - link

    I'm sadly more curious as to whether the 6 core chips prove their worth. A lot of rumor guessing seems to think that DX12 might finally show that a 6 core matters, but nothing here shows that. That's a very key issue when it comes to whether to go for a higher end chip or stick with the 4 core cpu's.
  • GMAR - Saturday, February 7, 2015 - link

    Excellent article. Thank you!
  • Shahnewaz - Saturday, February 7, 2015 - link

    Wait a minute, isn't the GTX 980 a 165W TDP card? Then how is it pulling over 200 watts?
  • eRacer1 - Sunday, February 8, 2015 - link

    The GTX 980 isn't pulling over 200W. The numbers shown are system power consumption not video card power consumption. The GTX 980 system power consumption isn't unusually high.

    Also, the system power consumption numbers are understating the power difference between the GTX 980 and Radeon 290X cards themselves under DX12. The GTX 980 has such a large performance advantage over the 290X in DX12 that the CPU is also using more power in the GTX 980 system to keep up with the video card.

    If anything the 290X power consumption is "too low", especially under DX12. To me it looks like the GPU is being underutilized, which seems to be the case based on the low FPS results and power consumption numbers. That could be due to many reasons: poor driver optimization, 290X architectural limitations, benchmark bug or design choice, Windows 10 issue, 290X throttling problem, etc. Hopefully, for AMD's sake, those issues can be worked out before the Windows 10 launch.
  • Shahnewaz - Sunday, February 8, 2015 - link

    That doesn't explain the <20W difference in both systems.
    And it's not like the CPU usage is also radically different.
    Remember, the TDP difference between the GPUs is a massive 165W (290W vs 165W).
  • eRacer1 - Sunday, February 8, 2015 - link

    "That doesn't explain the <20W difference in both systems. And it's not like the CPU usage is also radically different."

    Looking at the CPU usage graphs in the review the GTX 980 DX12 CPU average across all four cores is about 80% while the 290X average is only about 50%. So the GTX 980 CPU is doing 60% more work. That alone could easily account for 20+W watts of extra power consumption on CPU in the GTX 980 system. The ~60% CPU higher usage in the GTX 980 system makes sense as the frame rate is 56% higher as well. So what looks like a 14W difference is probably more like a 35W difference between the GTX 980 and 290X video cards.

    But the 35W difference doesn't tell the whole story because the GTX 980 is also 56% faster while using less power. So the GTX 980 has a MASSIVE efficiency advantage under these benchmark conditions. And it is doing it within a reasonable TDP because by the time you back out all of the non-GPU power consumption (CPU, memory, motherboard, hard drive, fans, etc.) and PSU inefficiency losses from the 271W system power consumption you'd likely find that the GTX 980 is under 200W.

    So the question we are left with is why is a 290W TDP 290X system power consumption only 285W under DX12? By the time you subtract the CPU power consumption (which is somewhat less than that of the GTX 980 test due to only being at 50% load instead of 80%), motherboard, memory and other components the 290X is probably using only 200-220W. To me it looks like the 290X is being bottlenecked and as a result isn't using as much power as one would expect. What the source of the bottleneck is, and if it is correctable, remains a mystery.
  • Shahnewaz - Saturday, February 7, 2015 - link

    It looks like AMD GPUs will get some 400%+ performance improvements! Sick!
  • ET - Sunday, February 8, 2015 - link

    My main takeaway from the article is that NVIDIA has done a much better job of optimising its DX11 drivers. AMD needs low level badly.
  • bloodypulp - Sunday, February 8, 2015 - link

    They already have it: Mantle.
