CPU Scaling

Diving into our look at DirectX 12, let's start with what is going to be the most critical component for a benchmark like Star Swarm: CPU scaling.

Because Star Swarm is designed to exploit the threading inefficiencies of DirectX 11, the biggest gains from switching to DirectX 12 on Star Swarm come from removing the CPU bottleneck. Under DirectX 11 the bulk of Star Swarm’s batch submission work happens under a single thread, and as a result the benchmark is effectively bottlenecked by single-threaded performance, unable to scale out with multiple CPU cores. This is one of the issues DirectX 12 sets out to resolve, with the low-level API allowing Oxide to more directly control how work is submitted, and as such better balance it over multiple CPU cores.
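
To illustrate the difference, here is a minimal sketch of the kind of multi-threaded submission Direct3D 12 makes possible; this is not Oxide's actual code, and the RecordDraws helper plus all of the surrounding setup are hypothetical placeholders. Each worker thread records draw calls into its own command list, and the main thread then hands everything to the GPU at once.

    // A minimal sketch of D3D12 multi-threaded command list recording.
    // Device, queue, PSO, and synchronization setup are omitted, and
    // RecordDraws() is a hypothetical app-side helper, not a D3D12 API.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    void RecordDraws(ID3D12GraphicsCommandList* list,
                     unsigned worker, unsigned workerCount); // hypothetical

    void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                     unsigned threadCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
        std::vector<std::thread>                       workers;

        // Command lists are not free-threaded, so each worker gets its own
        // allocator and list to record into.
        for (unsigned i = 0; i < threadCount; ++i) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
        }

        // Record each worker's share of the frame's draw calls in parallel.
        for (unsigned i = 0; i < threadCount; ++i) {
            workers.emplace_back([&, i] {
                RecordDraws(lists[i].Get(), i, threadCount);
                lists[i]->Close();
            });
        }
        for (auto& w : workers) w.join();

        // One submission hands all of the recorded work to the GPU.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }

The key point is the final call: all of the draw calls recorded by the workers reach the GPU through a single ExecuteCommandLists, whereas under DirectX 11 every draw ultimately funnels through the API's single immediate context.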

Star Swarm CPU Scaling - Extreme Quality - GeForce GTX 980

Star Swarm CPU Scaling - Extreme Quality - Radeon R9 290X

Starting with a look at CPU scaling on our fastest cards, what we find is that aside from the absurd performance difference between DirectX 11 and DirectX 12, performance scales roughly as we'd expect among our CPU configurations. Star Swarm's DirectX 11 path, bound as it is by single-threaded performance, scales only slightly with increases in clockspeed and core count. The DirectX 12 path, on the other hand, scales moderately well from 2 to 4 cores, but not beyond that; at these settings, even pushing over 100K draw calls, both GPUs are solidly GPU limited, so anything more than 4 cores goes to waste. This means we don't even need a highly threaded processor to take advantage of DirectX 12's strengths in this scenario, as even a 4-core processor provides plenty of kick.

Meanwhile this setup also highlights the fact that under DirectX 11, there is a massive difference in performance between AMD and NVIDIA. In both cases we are completely CPU bound, with AMD's drivers only able to deliver a third of the performance of NVIDIA's. Given that this is the original Mantle benchmark, I'm not sure we should read too much into the DirectX 11 situation, since AMD has little incentive to optimize for this game; but there is clearly a massive difference in CPU efficiency under DirectX 11 in this case.

Star Swarm D3D12 CPU Scaling - Extreme Quality

Having effectively ruled out the need for 6-core CPUs for Star Swarm, let's take a look at a breakdown across all of our cards for performance with 2 and 4 cores. What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, finds itself CPU-bound with just 2 cores. The AMD cards and the other NVIDIA cards become GPU-bound with the equivalent of an Intel Core i3 processor, showcasing just how effective DirectX 12's improved batch submission process can be. In fact it's so efficient that Oxide is running both batch submission and a complete AI simulation over just 2 cores.

Star Swarm CPU Batch Submission Time (4 Cores)

Speaking of batch submission, if we look at Star Swarm's statistics we can see just what's going on with batch submission times. The results are nothing short of incredible, particularly in the case of AMD: batch submission time is down from dozens of milliseconds or more to just 3-5ms for our fastest cards, an improvement of just over a whole order of magnitude. For all practical purposes the need to spend CPU time to submit batches has been eliminated entirely, with upwards of 120K draw calls being submitted in a handful of milliseconds. It is this optimization that is at the core of Star Swarm's DirectX 12 performance improvements, and going forward it could potentially benefit many other games as well.
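
For a sense of scale, 120K draw calls submitted in roughly 4ms works out to about 30,000 draw calls per millisecond of CPU time, where dozens of milliseconds for the same workload under DirectX 11 implies only a few thousand. Measuring this kind of thing is straightforward; below is a hypothetical sketch, with BuildAndSubmitBatches standing in for an engine's real record-and-submit work rather than any actual Star Swarm function.

    // Hypothetical sketch: timing a frame's batch submission phase.
    // BuildAndSubmitBatches() stands in for the engine's real
    // record-and-submit work; it is not an actual Star Swarm call.
    #include <chrono>
    #include <cstdio>

    void BuildAndSubmitBatches(); // app-specific placeholder

    void TimedSubmit()
    {
        using clock = std::chrono::steady_clock;

        auto start = clock::now();
        BuildAndSubmitBatches();
        auto end = clock::now();

        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        std::printf("batch submission: %.2f ms\n", ms);
    }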


Another metric we can look at is actual CPU usage as reported by the OS. In this case CPU usage more or less perfectly matches our earlier expectations: with DirectX 11 both the GTX 980 and R9 290X show very uneven usage, with 1-2 cores doing the bulk of the work, whereas with DirectX 12 CPU usage is spread out evenly over all 4 CPU cores.

At the risk of belaboring the point, what we're seeing here is exactly why Mantle, DirectX 12, OpenGL Next, and other low-level APIs have been created. With single-threaded performance struggling to increase while GPUs continue to improve by leaps and bounds with each generation, something must be done to allow games to better spread out their rendering & submission workloads over multiple cores. The solution to that problem is to eliminate the abstraction and let developers do it themselves through APIs like DirectX 12.

Comments

  • OrphanageExplosion - Tuesday, February 10, 2015 - link

    Sony already has its low-level API, GNM, plus its DX11-alike GNMX for those who need faster results, easier debugging, state handling, etc.
  • Gigaplex - Monday, February 9, 2015 - link

    The consoles already use bare metal APIs. This will have minimal impact for consoles.
  • Bill McGann - Tuesday, February 10, 2015 - link

    D3D11.x is garbage compared to GNM (plus the XbOne HW is garbage compared to the PS4). The sad fact is that the XbOne desperately needs D3D12(.x) in order to catch up to the PS4.
  • Notmyusualid - Monday, February 9, 2015 - link

    Hello all... couldn't find any MULTI-GPU results...

    So here are mine:

    System: M18x R2 3920XM (16GB RAM 1866MHz CAS10), Crossfire 7970Ms.
    Test: Star Swarm Perf demo RTS, all Extreme setting.
    Ambient temp 31C, Skype idling in background, and with CPU ALL at stock settings:

    Mantle: fps max 99.03 min 9.65 AVG 26.5
    DX11: fps max 141.56 min 3.26 AVG 11.6

    CPU at 4.4GHz x 4 cores, fans 100%, Skype in background again:

    Mantle: fps max 51.03 min 7.26 AVG 26.2 - max temp 94C
    DX11: fps max 229.06 min 3.6 AVG 12.3 - max temp 104C, thus CPU throttled at 23%.

    So I guess there are good things coming to PC gamers with multi-GPU setups then. Alas, being a mere mortal, I've no access to DX12 to play with...

    Anyone care to offer some SLI results in return?
  • lamebot - Monday, February 9, 2015 - link

    I have been reading this article page by page over the weekend, giving myself time to take it in. The frame times from each vendor under DX11 are very interesting. Thank you for a great, in-depth article! This is what keeps me coming back.
  • ChristTheGreat - Monday, February 9, 2015 - link

    Could it be just a Windows 10 driver problem or optimization causing these results with the R9 290/X? On Windows 8.1, with a 4770K at 4.3GHz and an R9 290 at 1100/1400, I do 40 avg D3D11 and 66 avg Mantle... Scenario: Follow, Detail: Extreme...
  • ChristTheGreat - Monday, February 9, 2015 - link

    Oh well, I saw that you guys use the Asus PQ321, so running in 3840 x 2160?
  • Wolfpup - Monday, February 9, 2015 - link

    Good, can Mantle go away now? The idea of moving BACK to proprietary APIs is a terrible one, unless the only point of it was to push Microsoft to make these fixes in DirectX.

    "Stardock is using the Nitrous engine for their forthcoming Star Control game"

    I didn't know! This is freaking awesome news. I love love love Star Control 2 on the 3DO. Liked 1 on the Genesis too, but SC2 on the 3DO (which I think is better than the PC version) is not unlike Starflight, and somewhat like Mass Effect.
  • Wolfpup - Monday, February 9, 2015 - link

    Forgot to mention... I also think AMD needs to worry about making sure their DirectX/OpenGL drivers are flawless across years of hardware before wasting time on a proprietary API too...
  • lordken - Monday, February 9, 2015 - link

    Sorry, but are you just an AMD hater or something?
    imho AMD did a great thing with Mantle, and possibly pushed M$ to come out with DX12 (or not, we will never know). But that aside, how about you think about some of the advantages Mantle has?
    I think that when they bring it to Linux (they said they would, not sure about the current status) that will be a nice advantage, as I guess a native API would work better than Wine etc. More recent games (thanks to new engines) coming to Linux would probably benefit more if they could run on Mantle.

    And nonetheless, how about ppl that won't move to Win10 (for whatever reason)? Mantle on Win7/8 would still be a huge benefit for such ppl. Not to mention that there will probably be more games with Mantle than with DX12 in the very near future.
    Plus, if they work out the console & PC Mantle aspect it could bring better console ports to PC; even if game devs are too lazy to do proper optimization, Mantle should pretty much eliminate this problem (though only for AMD GPUs).
    But either way I don't see much reason why Mantle should go. I mean, once it is included in all/most bigger engines (already in CryEngine, Frostbite, Unreal) and it works, what reason would there be to trash something that is useful?

    btw, funny how "M$ would need to do a huge kernel rework to bring DX12 to Win7/8" while Mantle, which does a similar thing, is easily capable of being "OS version independent" (sure, it is AMD-specific, but still).

    PS: What about AMD drivers? They are fine imho, never had a problem in recent years. (You see, personal experience is not a valid argument.)
