CPU Scaling

Diving into our look at DirectX 12, let’s start with what is going to be the most critical component for a benchmark like Star Swarm: CPU scaling.

Because Star Swarm is designed to exploit the threading inefficiencies of DirectX 11, the biggest gains from switching to DirectX 12 on Star Swarm come from removing the CPU bottleneck. Under DirectX 11 the bulk of Star Swarm’s batch submission work happens under a single thread, and as a result the benchmark is effectively bottlenecked by single-threaded performance, unable to scale out with multiple CPU cores. This is one of the issues DirectX 12 sets out to resolve, with the low-level API allowing Oxide to more directly control how work is submitted, and as such better balance it over multiple CPU cores.
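
To make the difference in submission models concrete, below is a minimal C++ sketch of multi-threaded command list recording under Direct3D 12. This is not Oxide’s code: the RecordDraws() helper and the worker count are placeholders, error handling is omitted, and real code would wait on a fence before releasing the lists. The point is simply that each core records its own command list, and the finished lists are handed to the GPU in a single submission rather than being funneled through one driver thread.

    // Minimal sketch of multi-threaded command list recording in Direct3D 12.
    // Error handling and GPU/CPU synchronization (fences) are omitted.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    // Placeholder: the engine would record this worker's share of the frame's draws here.
    void RecordDraws(ID3D12GraphicsCommandList* list, unsigned worker, unsigned workerCount);

    void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso, unsigned workerCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
        std::vector<std::thread> workers;

        for (unsigned i = 0; i < workerCount; ++i)
        {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), pso,
                                      IID_PPV_ARGS(&lists[i]));

            // Each worker records its own command list in parallel, instead of
            // funneling every draw through a single driver thread as in D3D11.
            workers.emplace_back([&, i] {
                RecordDraws(lists[i].Get(), i, workerCount);
                lists[i]->Close();
            });
        }

        for (auto& w : workers)
            w.join();

        // One call hands all of the recorded work to the GPU at once.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists)
            raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }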

Star Swarm CPU Scaling - Extreme Quality - GeForce GTX 980

Star Swarm CPU Scaling - Extreme Quality - Radeon R9 290X

Starting with a look at CPU scaling on our fastest cards, what we find is that besides the absurd performance difference between DirectX 11 and DirectX 12, performance scales roughly as we’d expect among our CPU configurations. Star Swarm's DirectX 11 path, being single-threaded bound, scales very slightly with clockspeed and core count increases. The DirectX 12 path on the other hand scales up moderately well from 2 to 4 cores, but doesn’t scale up beyond that. This is due to the fact that at these settings, even pushing over 100K draw calls, both GPUs are solidly GPU limited. Anything more than 4 cores goes to waste as we’re no longer CPU-bound. Which means that we don’t even need a highly threaded processor to take advantage of DirectX 12’s strengths in this scenario, as even a 4 core processor provides plenty of kick.
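
The plateau is easy to see with a simple back-of-the-envelope model (the numbers below are purely illustrative, not measurements from the benchmark): if batch submission work splits evenly across cores while the GPU’s own render time stays fixed, frame time is set by whichever of the two is longer, and once the divided CPU cost drops below the GPU render time, extra cores buy nothing.

    // Toy model: frame time when CPU submission work scales with core count
    // but GPU render time does not. All figures are illustrative assumptions.
    #include <algorithm>
    #include <cstdio>

    int main()
    {
        const double cpuSubmitMs = 40.0; // assumed single-threaded submission cost per frame
        const double gpuRenderMs = 15.0; // assumed GPU-limited render time per frame

        for (int cores = 1; cores <= 6; ++cores)
        {
            double frameMs = std::max(cpuSubmitMs / cores, gpuRenderMs);
            std::printf("%d cores: ~%.1f ms (~%.0f fps)\n",
                        cores, frameMs, 1000.0 / frameMs);
        }
        // Once the core count exceeds cpuSubmitMs / gpuRenderMs (about 2.7 here),
        // the GPU is the bottleneck and additional cores no longer improve frame time.
        return 0;
    }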

Meanwhile this setup also highlights the fact that under DirectX 11, there is a massive difference in performance between AMD and NVIDIA. In both cases we are completely CPU bound, with AMD’s drivers only able to deliver 1/3rd the performance of NVIDIA’s. Given that this is the original Mantle benchmark I’m not sure we should read into the DirectX 11 situation too much since AMD has little incentive to optimize for this game, but there is clearly a massive difference in CPU efficiency under DirectX 11 in this case.

Star Swarm D3D12 CPU Scaling - Extreme Quality

Having effectively ruled out the need for 6 core CPUs for Star Swarm, let’s take a look at a breakdown across all of our cards for performance with 2 and 4 cores. What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, finds itself CPU-bound with just 2 cores. For the AMD cards and other NVIDIA cards we can get GPU bound with the equivalent of an Intel Core i3 processor, showcasing just how effective DirectX 12’s improved batch submission process can be. In fact it’s so efficient that Oxide is running both batch submission and a complete AI simulation over just 2 cores.

Star Swarm CPU Batch Submission Time (4 Cores)

Speaking of batch submission, if we look at Star Swarm’s internal statistics we can see just what’s going on with batch submission. The results are nothing short of incredible, particularly in the case of AMD. Batch submission time is down from dozens of milliseconds or more to just 3-5ms for our fastest cards, an improvement of just over an order of magnitude. For all practical purposes the need to spend CPU time to submit batches has been eliminated entirely, with upwards of 120K draw calls being submitted in a handful of milliseconds. It is this optimization that is at the core of Star Swarm’s DirectX 12 performance improvements, and going forward it could potentially benefit many other games as well.
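
To put those figures in perspective, a quick bit of arithmetic shows the per-draw-call CPU cost they imply; the DirectX 11 number below is an assumed midpoint of “dozens of milliseconds,” not a measured value.

    // Back-of-the-envelope per-draw-call submission cost implied by the numbers above.
    #include <cstdio>

    int main()
    {
        const double drawCalls   = 120000.0; // ~120K draws per frame
        const double d3d11TimeMs = 50.0;     // "dozens of milliseconds" (assumed midpoint)
        const double d3d12TimeMs = 4.0;      // 3-5 ms on the fastest cards

        std::printf("D3D11: ~%.0f ns of CPU time per draw call\n", d3d11TimeMs * 1e6 / drawCalls);
        std::printf("D3D12: ~%.0f ns of CPU time per draw call\n", d3d12TimeMs * 1e6 / drawCalls);
        // Roughly 417 ns vs. 33 ns: a little over an order of magnitude less CPU
        // time spent on submission per draw call.
        return 0;
    }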


Another metric we can look at is actual CPU usage as reported by the OS, as shown above. In this case CPU usage more or less perfectly matches our earlier expectations: with DirectX 11 both the GTX 980 and R9 290X show very uneven usage with 1-2 cores doing the bulk of the work, whereas with DirectX 12 CPU usage is spread out evenly over all 4 CPU cores.

At the risk of belaboring the point, what we’re seeing here is exactly why Mantle, DirectX 12, OpenGL Next, and other low-level APIs have been created. With single-threaded performance struggling to increase while GPUs continue to improve by leaps and bounds with each generation, something must be done to allow games to better spread out their rendering & submission workloads over multiple cores. The solution to that problem is to eliminate the abstraction and let developers do it themselves through APIs like DirectX 12.

Comments

  • tipoo - Friday, February 6, 2015 - link

    They'd still be relatively slower than the i3, but just higher in absolute terms relative to themselves.
  • Sivar - Friday, February 6, 2015 - link

    Has no one informed Microsoft?
    They will never actually release DirectX 12:
    http://tech.slashdot.org/story/13/04/12/1847250/am...
  • HighTech4US - Thursday, February 26, 2015 - link

    WOW, who would have thought that NEVER would only last 2 years.
  • jwcalla - Friday, February 6, 2015 - link

    So it took MS three years to catch up to OGL? I guess better late than never. ;-P
  • tipoo - Friday, February 6, 2015 - link

    Oh, so that's why the consortium is dropping OpenGL in favor of a from-the-ground-up API called GLNext? OpenGL hasn't been better than DX in many years. Even OpenGL holdouts like Carmack have said so themselves: they only still use it because of inertia, and DX has been better in recent years.
  • jwcalla - Friday, February 6, 2015 - link

    Yeah, whatever. BMDI has been available in OGL for years now and now DX is finally getting it.

    glNext will likely just be the AZDO stuff w/ a more developer-friendly API + some marketing hype.
  • tipoo - Friday, February 6, 2015 - link

    That's one feature. Total API performance was still not in OGL's favour.
  • jwcalla - Friday, February 6, 2015 - link

    Sure it was. Look at Valve's comparison in their L4D2 ports, and that's with OpenGL 2.x/3.x vs. D3D9.
  • nulian - Saturday, February 7, 2015 - link

    That was vs. DX9, and even Valve said they could have improved DX9 performance if they wanted to. DX10+ is very different.
  • killeak - Sunday, February 8, 2015 - link

    As a developer who has shipped all of my games on both D3D and OpenGL (some also on OpenGL ES), I think OpenGL's issues are far more complex than a feature or two.

    What Khronos does with OpenGL itself is just define the interface and its ideal behavior. The issue is that the actual implementation is not done by Khronos but by the IHVs in their drivers, which means that every combination of OS + GPU + driver can behave differently, and in practice this happens a lot!

    With D3D, you have one single implementation: the one done by MS in their OS. Sure, each Windows version has its own implementation, and each GPU has different drivers, but the D3D runtime handles almost everything. Which in practice means that if a game runs on a GPU in a particular version of Windows, it will run on every other GPU with that version of Windows (as long as the GPU has the required features). With advanced features and back doors/extensions this may not hold, but that is the exception in the world of DX; in OpenGL it's always like that.

    So, sure, I want OpenGL to be at feature parity with D3D (and if it's more advanced, even better), but I am more worried about things like shader compilation, where you have hundreds of different compilers in the market, since each GPU + OS + driver combination has its own. On PC it's not that bad (it's still bad), but on mobile it's a disaster.

    Also, it doesn't help that IHVs want to sell HW, not software. In fact, they only make software (drivers) because they have to; it's not their business, so they optimize their drivers for benchmarks and the like, but the actual implementation is never solid. In that regard, I am happy to see that the glNext presentation at GDC is being done by developers and not IHVs.

    To be honest, I would be more pleased if Valve made their own runtime for SteamOS, and Google for Android, and let the drivers handle just the interface with the HW, nothing else. After all, Apple does that for Macs (though it's also true that they control the HW). Maybe they could at least have something like Windows WHQL in order to keep all the implementations for their OS in line.

    And just as a note, I have never shipped a game on Windows that ran better on OpenGL than on Direct3D, ever, even when I invested more time in the OpenGL implementation.
