CPU Scaling

Diving into our look at DirectX 12, let’s start with what is going to be the most critical component for a benchmark like Star Swarm: CPU scaling.

Because Star Swarm is designed to exploit the threading inefficiencies of DirectX 11, the biggest gains from switching to DirectX 12 on Star Swarm come from removing the CPU bottleneck. Under DirectX 11 the bulk of Star Swarm’s batch submission work happens under a single thread, and as a result the benchmark is effectively bottlenecked by single-threaded performance, unable to scale out with multiple CPU cores. This is one of the issues DirectX 12 sets out to resolve, with the low-level API allowing Oxide to more directly control how work is submitted, and as such better balance it over multiple CPU cores.
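
As a rough illustration of what this looks like from the developer’s side, the C++ sketch below uses the public Direct3D 12 API to record command lists on several worker threads, each with its own command allocator, and then hands all of them to the GPU queue in a single ExecuteCommandLists call. This is not Oxide’s engine code; the thread count, the empty recording body, and the omission of error handling, fences, and an actual swap chain are simplifications for illustration.

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create the device and a direct command queue.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One allocator + command list per worker thread, so recording needs no locks.
    const unsigned threadCount = 4;  // e.g. one worker per CPU core
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // A real engine would record this thread's share of the frame's
            // draw calls here (set pipeline state, root arguments, draws, ...).
            lists[i]->Close();
        });
    }
    for (auto& t : workers) {
        t.join();
    }

    // Submission itself is a single, cheap call on one thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& list : lists) {
        raw.push_back(list.Get());
    }
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}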

Star Swarm CPU Scaling - Extreme Quality - GeForce GTX 980

Star Swarm CPU Scaling - Extreme Quality - Radeon R9 290X

Starting with a look at CPU scaling on our fastest cards, what we find is that besides the absurd performance difference between DirectX 11 and DirectX 12, performance scales roughly as we’d expect among our CPU configurations. Star Swarm's DirectX 11 path, being bound by a single thread, scales only slightly with clockspeed and core count increases. The DirectX 12 path on the other hand scales up moderately well from 2 to 4 cores, but doesn’t scale beyond that. This is because at these settings, even pushing over 100K draw calls, both GPUs are solidly GPU limited, so anything more than 4 cores goes to waste as we’re no longer CPU-bound. This means that we don’t even need a highly threaded processor to take advantage of DirectX 12’s strengths in this scenario, as even a 4 core processor provides plenty of kick.

Meanwhile this setup also highlights the fact that under DirectX 11 there is a massive difference in performance between AMD and NVIDIA. In both cases we are completely CPU bound, with AMD’s drivers only able to deliver one-third the performance of NVIDIA’s. Given that this is the original Mantle benchmark, I’m not sure we should read too much into the DirectX 11 situation, since AMD has little incentive to optimize for this game, but there is clearly a massive difference in CPU efficiency under DirectX 11 in this case.

Star Swarm D3D12 CPU Scaling - Extreme Quality

Having effectively ruled out the need for 6 core CPUs for Star Swarm, let’s take a look at a breakdown across all of our cards for performance with 2 and 4 cores. What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, finds itself CPU-bound with just 2 cores. For the AMD cards and other NVIDIA cards we can get GPU bound with the equivalent of an Intel Core i3 processor, showcasing just how effective DirectX 12’s improved batch submission process can be. In fact it’s so efficient that Oxide is running both batch submission and a complete AI simulation over just 2 cores.

Star Swarm CPU Batch Submission Time (4 Cores)

Speaking of batch submission, if we look at Star Swarm’s statistics we can find out just what’s going on with batch submission. The results are nothing short of incredible, particularly in the case of AMD. Batch submission time is down from dozens of milliseconds or more to just 3-5ms for our fastest cards, an improvement of just over an order of magnitude. For all practical purposes the need to spend CPU time to submit batches has been eliminated entirely, with upwards of 120K draw calls being submitted in a handful of milliseconds. It is this optimization that is at the core of Star Swarm’s DirectX 12 performance improvements, and going forward it could potentially benefit many other games as well.


Another metric we can look at is actual CPU usage as reported by the OS, as shown above. In this case CPU usage more or less perfectly matches our earlier expectations: with DirectX 11 both the GTX 980 and R9 290X show very uneven usage with 1-2 cores doing the bulk of the work, whereas with DirectX 12 CPU usage is spread out evenly over all 4 CPU cores.
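
The per-core figures themselves are straightforward to gather: on Windows they can be sampled through the PDH performance counter API, as in the short C++ sketch below. The hard-coded 4 logical cores and the one-second sampling interval are assumptions for illustration, and this is simply one way to read the same counters the OS exposes to Task Manager and Performance Monitor, not the specific tooling used for the article.

#include <windows.h>
#include <pdh.h>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>
#pragma comment(lib, "pdh.lib")

int main() {
    const int coreCount = 4;  // assumed core count for this sketch
    PDH_HQUERY query = nullptr;
    PdhOpenQueryW(nullptr, 0, &query);

    // One "% Processor Time" counter per logical core.
    std::vector<PDH_HCOUNTER> counters(coreCount);
    for (int i = 0; i < coreCount; ++i) {
        wchar_t path[64];
        swprintf_s(path, L"\\Processor(%d)\\%% Processor Time", i);
        PdhAddEnglishCounterW(query, path, 0, &counters[i]);
    }

    PdhCollectQueryData(query);  // baseline sample
    std::this_thread::sleep_for(std::chrono::seconds(1));
    PdhCollectQueryData(query);  // second sample, one second later

    for (int i = 0; i < coreCount; ++i) {
        PDH_FMT_COUNTERVALUE value = {};
        PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE, nullptr, &value);
        wprintf(L"Core %d: %.1f%%\n", i, value.doubleValue);
    }

    PdhCloseQuery(query);
    return 0;
}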

At the risk of belaboring the point, what we’re seeing here is exactly why Mantle, DirectX 12, OpenGL Next, and other low-level APIs have been created. With single-threaded performance struggling to increase while GPUs continue to improve by leaps and bounds with each generation, something must be done to allow games to better spread out their rendering & submission workloads over multiple cores. The solution to that problem is to eliminate the abstraction and let the developers do it themselves through APIs like DirectX 12.

Comments

  • inighthawki - Monday, February 9, 2015 - link

    >> btw funny how "M$ would need to do huge kernel rework to bring DX12 to Win7/8" while mantle, which does similar thing, is easily capable to be "OS version independent" (sure it is amd specific but still)

    How do you know that DX12 will not support a number of features that Mantle will not? For example, DX12 is expected to provide the application with manual memory management, a feature not available in Mantle while running on WDDM 1.3 or below.
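
    As a rough sketch of what application-controlled memory management looks like against the D3D12 API as it has been publicly documented (the heap size, the flags, and placing a single buffer at offset 0 are purely illustrative, and error handling is omitted):

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        // The application reserves a 64 MB GPU-local heap itself...
        D3D12_HEAP_DESC heapDesc = {};
        heapDesc.SizeInBytes = 64ull * 1024 * 1024;
        heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
        heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;
        ComPtr<ID3D12Heap> heap;
        device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

        // ...and decides where inside that heap each resource lives,
        // instead of leaving placement to the driver.
        D3D12_RESOURCE_DESC bufferDesc = {};
        bufferDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
        bufferDesc.Width = 1024 * 1024;  // 1 MB buffer placed at offset 0
        bufferDesc.Height = 1;
        bufferDesc.DepthOrArraySize = 1;
        bufferDesc.MipLevels = 1;
        bufferDesc.Format = DXGI_FORMAT_UNKNOWN;
        bufferDesc.SampleDesc.Count = 1;
        bufferDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
        ComPtr<ID3D12Resource> buffer;
        device->CreatePlacedResource(heap.Get(), 0, &bufferDesc,
                                     D3D12_RESOURCE_STATE_COMMON, nullptr,
                                     IID_PPV_ARGS(&buffer));
        return 0;
    }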
  • lordken - Tuesday, February 10, 2015 - link

    What I meant is in performance terms: Mantle is able to deliver roughly the same performance boost as DX12, but still on the old Windows kernel.
    I'm not saying DX12 won't support things that Mantle can't do on the old Windows kernel. I merely tried to highlight that the same performance boost can be achieved on the current OS, without the need for M$ taunting gamers with a (forced) Win10 upgrade for DX12.
  • killeak - Tuesday, February 10, 2015 - link

    "btw funny how "M$ would need to do huge kernel rework to bring DX12 to Win7/8" while mantle, which does similar thing, is easily capable to be "OS version independent" (sure it is amd specific but still)"

    Direct3D has a very different design. While APIs like OpenGL or Mantle are implemented in the drivers, Direct3D's runtime is implemented in the OS. That means that no matter what hardware you have, the code that executes under the API is, for the most part, always the same. Sure, the driver still needs to expose and abstract the hardware (following another API, in this case WDDM 2.0), but that implementation is much slimmer, which means it is much more solid and reliable.

    Now, OpenGL is implemented in the driver; the OS only exposes the basic C functions to create the context and the like (there's a rough sketch of that split at the end of this comment). A good driver can make OpenGL work as fast as, or even faster than, D3D, but in reality, 90% of the time OpenGL works worse. Not just in terms of performance: because each driver for each OS and each GPU is a different implementation, things usually don't work as you expect.

    After years of working with OpenGL and D3D, the thing I miss most about D3D when I am coding for OpenGL platforms is the single runtime. Program once, run everywhere (well, on every Windows) works on D3D but not on OpenGL; hell, it's even harder on mobile with OpenGL ES and the broken drivers from Mali, Qualcomm, etc. Sure, if your app is simple, OpenGL works, but for AAA it just doesn't cut it...

    The truth is, IHVs are here to sell hardware, not software, so they invest the minimum time and money in it (most of the time they optimize drivers for big AAA titles and benchmarks). On mobile, where SoCs are replaced every year, it's even worse, since drivers never get mature enough. Heck, Mali, for example, doesn't have devices with the 700 series on the market and they have already announced the 800 series, while their OpenGL ES drivers for the 600 series are really bad.

    Going back to Mantle and Win7/8: in the driver you can do whatever you want, so yes, you can make your own API and make it work wherever you want. That is why Mantle can do low-level things without WDDM 2.0; it doesn't need to be common with or compatible with other drivers/vendors.
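
    To put the OpenGL side of that in concrete terms, here is a rough Windows/C++ sketch of the split described above: the OS-provided surface is little more than pixel format selection and wglCreateContext, and everything beyond GL 1.1 is fetched at runtime from whatever the vendor's driver provides via wglGetProcAddress (the glDispatchCompute lookup is just an arbitrary example):

    #include <windows.h>
    #include <GL/gl.h>
    #pragma comment(lib, "opengl32.lib")
    #pragma comment(lib, "gdi32.lib")
    #pragma comment(lib, "user32.lib")

    int main() {
        // A throwaway window/DC just to get a GL context.
        HWND hwnd = CreateWindowA("STATIC", "gl", WS_OVERLAPPEDWINDOW, 0, 0, 64, 64,
                                  nullptr, nullptr, nullptr, nullptr);
        HDC hdc = GetDC(hwnd);

        PIXELFORMATDESCRIPTOR pfd = {};
        pfd.nSize = sizeof(pfd);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

        // This is roughly where the OS hands control over to the vendor driver.
        HGLRC ctx = wglCreateContext(hdc);
        wglMakeCurrent(hdc, ctx);

        // Everything past GL 1.1 comes straight from that driver at runtime;
        // its behavior (and its bugs) are the vendor's implementation.
        typedef void (APIENTRY* PFNGLDISPATCHCOMPUTE)(GLuint, GLuint, GLuint);
        auto glDispatchCompute =
            (PFNGLDISPATCHCOMPUTE)wglGetProcAddress("glDispatchCompute");
        (void)glDispatchCompute;  // not called in this sketch

        wglMakeCurrent(nullptr, nullptr);
        wglDeleteContext(ctx);
        ReleaseDC(hwnd, hdc);
        return 0;
    }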
  • Bill McGann - Tuesday, February 10, 2015 - link

    Yeah, this is a huge reason why GL is largely ignored by Windows devs. D3D is extremely stable thanks to it largely being implemented by MS, and them having the power to test/certify the vendors' back-ends.
    GL on the other hand is the wild west, with every vendor doing whatever they like... You even have them stealing MS's terrible '90s browser-war strategy of deliberately supporting broken behavior, hoping devs will use it, so that games will break on other vendors' (strictly compliant) drivers. Any situation where vendors are abusing devs like this is pretty f'ed up.
  • tobi1449 - Thursday, February 12, 2015 - link

    The console & PC aspect isn't going anywhere and was never meant to. AMD formulated their early press releases in a way that made some people jump on the hype train before it was even built, but AMD was shut down by Microsoft and Sony pretty quickly about that.
  • Bill McGann - Tuesday, February 10, 2015 - link

    FYI mantle is very carefully specified as a vendor-agnostic API, like GL, with extensions for vendor-specific behavior.

    If AMD even bother launching Mantle after D3D12/GLNext appear, and if it remains AMD-only, it's because nVidia/Intel have chosen not to adopt the spec, not because AMD have deliberately made it AMD-only.
  • tobi1449 - Thursday, February 12, 2015 - link

    a) I can see why there's resistance against adopting a competitor's API.
    b) AFAIK AMD hasn't released anything needed to implement Mantle for other hardware yet. Sure, they've often talked about it, and this pops up most of the time Mantle is mentioned, but in reality (if this is still correct) it is as locked down as, say, G-Sync or PhysX.
  • Arbie - Tuesday, February 10, 2015 - link

    I closely followed graphics board technology and performance for many years. But after a certain point I realized that there are actually very few - count 'em on one hand - games that I even enjoy playing. Three of those start with "Crysis" (and the other two with "Peggle"). The Battlefield series might have the same replay interest; don't know.

    So unless and until there are really startling ~3x gains for the same $$, my interest in desktop graphics card performance is much more constrained by game quality than by technology. I don't want to run "Borderlands" 50% faster because... I don't want to run it at all. Or any other of the lousy console ports out there.
  • computertech82 - Wednesday, February 11, 2015 - link

    SLIGHT PROBLEM. I think it's safe to say the dx11 vs dx12 comparison was run on the SAME OS, win10. That probably just means dx11 runs crappy on win10, not that dx12 is so much better. I bet it would be different with win7/8 dx11 vs win10 dx12 (meaning very little difference).
  • Notmyusualid - Thursday, February 12, 2015 - link

    Good point - hadn't considered it until you mentioned it.

    Then the comparison should really have been dx11 - Win 7/8, dx12 - Win 10, Mantle - both (if poss).
