CPU Scaling

Diving into our look at DirectX 12, let’s start with what is going to be the most critical component for a benchmark like Star Swarm: CPU scaling.

Because Star Swarm is designed to exploit the threading inefficiencies of DirectX 11, the biggest gains from switching to DirectX 12 on Star Swarm come from removing the CPU bottleneck. Under DirectX 11 the bulk of Star Swarm’s batch submission work happens under a single thread, and as a result the benchmark is effectively bottlenecked by single-threaded performance, unable to scale out with multiple CPU cores. This is one of the issues DirectX 12 sets out to resolve, with the low-level API allowing Oxide to more directly control how work is submitted, and as such better balance it over multiple CPU cores.
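
To make the contrast concrete, here is a minimal, hypothetical C++ sketch of the two submission models. It is not Oxide's code, and the CommandList type and record_draw call are illustrative stand-ins rather than real Direct3D interfaces; the point is simply that in a DirectX 12-style model each worker thread can record its own command list, whereas a DirectX 11-style model funnels every batch through a single thread.

    // Toy model of single-threaded vs. multi-threaded batch submission.
    // "CommandList" and "record_draw" are illustrative stand-ins, not D3D types.
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct CommandList {
        std::vector<int> draws;                     // pretend-encoded draw calls
        void record_draw(int batch) { draws.push_back(batch); }
    };

    int main() {
        const int kDraws   = 100000;                // ~100K batches, as in Star Swarm
        const int kThreads = 4;

        // "DirectX 11 style": one thread records every batch into one list.
        auto t0 = std::chrono::steady_clock::now();
        CommandList single;
        for (int i = 0; i < kDraws; ++i) single.record_draw(i);
        auto t1 = std::chrono::steady_clock::now();

        // "DirectX 12 style": each worker records its own command list in
        // parallel; the main thread would then submit all lists to the GPU queue.
        std::vector<CommandList> lists(kThreads);
        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t)
            workers.emplace_back([&, t] {
                for (int i = t; i < kDraws; i += kThreads) lists[t].record_draw(i);
            });
        for (auto& w : workers) w.join();
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::printf("single-threaded recording: %.2f ms\n", ms(t1 - t0).count());
        std::printf("4-thread recording:        %.2f ms\n", ms(t2 - t1).count());
    }

In the real API the per-thread lists are then handed to the GPU's command queue together (via ExecuteCommandLists), so the CPU cost of preparing a frame scales with core count instead of piling up on one submission thread.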

Star Swarm CPU Scaling - Extreme Quality - GeForce GTX 980

Star Swarm CPU Scaling - Extreme Quality - Radeon R9 290X

Starting with a look at CPU scaling on our fastest cards, what we find is that besides the absurd performance difference between DirectX 11 and DirectX 12, performance scales roughly as we’d expect among our CPU configurations. Star Swarm's DirectX 11 path, being bound by single-threaded performance, scales only slightly with clockspeed and core count increases. The DirectX 12 path on the other hand scales up moderately well from 2 to 4 cores, but doesn’t scale up beyond that. This is because at these settings, even pushing over 100K draw calls, both GPUs are solidly GPU limited; anything more than 4 cores goes to waste, as we’re no longer CPU-bound. This means we don’t even need a highly threaded processor to take advantage of DirectX 12’s strengths in this scenario, as even a 4-core processor provides plenty of kick.

Meanwhile this setup also highlights the fact that under DirectX 11, there is a massive difference in performance between AMD and NVIDIA. In both cases we are completely CPU bound, with AMD’s drivers only able to deliver one-third the performance of NVIDIA’s. Given that this is the original Mantle benchmark, I’m not sure we should read too much into the DirectX 11 situation, since AMD has little incentive to optimize for this game, but there is clearly a sizable gap in CPU efficiency under DirectX 11 in this case.

Star Swarm D3D12 CPU Scaling - Extreme Quality

Having effectively ruled out the need for 6-core CPUs for Star Swarm, let’s take a look at a breakdown across all of our cards for performance with 2 and 4 cores. What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, finds itself CPU-bound with just 2 cores. The AMD cards and the other NVIDIA cards become GPU-bound with the equivalent of an Intel Core i3 processor, showcasing just how effective DirectX 12’s improved batch submission process can be. In fact it’s so efficient that Oxide is running both batch submission and a complete AI simulation over just 2 cores.

Star Swarm CPU Batch Submission Time (4 Cores)

Speaking of batch submission, if we look at Star Swarm’s statistics we can find out just what’s going on under the hood. The results are nothing short of incredible, particularly in the case of AMD. Batch submission time is down from dozens of milliseconds or more to just 3-5ms for our fastest cards, an improvement of just over a whole order of magnitude. For all practical purposes the need to spend CPU time to submit batches has been eliminated entirely, with upwards of 120K draw calls being submitted in a handful of milliseconds. It is this optimization that is at the core of Star Swarm’s DirectX 12 performance improvements, and going forward it could potentially benefit many other games as well.
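
As a quick back-of-the-envelope check, taking those figures at face value, roughly 120,000 draw calls submitted in about 4ms works out to around 30 draw calls per microsecond, or on the order of 30-40 nanoseconds of CPU time per draw, which is why batch submission effectively vanishes from the frame budget.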


Another metric we can look at is actual CPU usage as reported by the OS, as shown above. In this case CPU usage more or less perfectly matches our earlier expectations: with DirectX 11 both the GTX 980 and R9 290X show very uneven usage with 1-2 cores doing the bulk of the work, whereas with DirectX 12 CPU usage is spread out evenly over all 4 CPU cores.

At the risk of belaboring the point, what we’re seeing here is exactly why Mantle, DirectX 12, OpenGL Next, and other low-level APIs have been created. With single-threaded performance struggling to increase while GPUs continue to improve by leaps and bounds with each generation, something must be done to allow games to better spread out their rendering & submission workloads over multiple cores. The solution to that problem is to eliminate the abstraction and let the developers do it themselves through APIs like DirectX 12.

Comments

  • B3an - Saturday, February 7, 2015 - link

    Thanks for posting this. This is the kind of thing I come to AT for.
  • Notmyusualid - Sunday, February 8, 2015 - link

    And me.
  • dragosmp - Saturday, February 7, 2015 - link

    Looking at the DX11 vs DX12 per-core load, it looks like per-core performance is the limiting factor in DX11, not the number of cores. As such, CPUs like AMD's AM1 & FM2 platforms with low per-core performance would benefit from DX12 more than Intel's CPUs, which already have high IPC. (It may be that even the FX chips become decent gaming CPUs, with their 8 integer cores no longer held back by 1-core turbo.)
  • guskline - Saturday, February 7, 2015 - link

    Thank you for a great article Ryan.
  • okp247 - Saturday, February 7, 2015 - link

    Cheers for the article, Ryan. Very interesting subject and a good read.

    There seem to be issues with the AMD cards though, especially under DX11. Other testers report FPS in the mid 20s to early 30s at 1080p extreme settings even with the old 7970 under Win7/DX11.
    Power consumption is also quite low: 241 watts for the 290X with a 6-core i7, when Crysis 3 pulls 375 with the same 6-core i7 in your original review of the card. The card seems positively starved ;-)

    This could be the OS, the graphics API or the game. Possibly all three. Whatever it is, it looks like a big issue that's undermining your test.

    On a completely different note: maybe you could get the developer to spill the beans about their work with both APIs? (Mantle and DX12). I think that would also be a very interesting read.
  • OrphanageExplosion - Saturday, February 7, 2015 - link

    Yup, this is the big takeaway from this article - http://images.anandtech.com/graphs/graph8962/71451...

    AMD seems to have big issues with CPU load on DX11 - the gulf between NVIDIA and AMD is colossal. Probably not an issue when all reviews use i7s to test GPUs, but think of the more budget-oriented gamer with his i3 or Athlon X4. This is the area where reviews will say that AMD dominates, but NOT if the CPU can't run the GPU effectively.
  • ColdSnowden - Saturday, February 7, 2015 - link

    This reflects what I said above. AMD Radeons have a much slower batch submission time. Does that mean that using an NVIDIA card with a faster batch submission time can lessen CPU bottlenecking? Perhaps Guild Wars 2 would run better with an NVIDIA GPU, as my FX 4170 would be less likely to bottleneck it.
  • ObscureAngel - Saturday, February 7, 2015 - link

    Basically AMD now requires a much better CPU than NVIDIA to push the same draw calls.
    I recently benchmarked my Phenom II X4 945 OC 3.7GHz with my HD 7850 against a GTX 770.

    Obviously the GTX 770 outperforms my HD 7850.
    But I benchmarked Star Swarm and games where my GPU usage was well below 90%, which means I was bottlenecked by the CPU.

    Guess what:
    Star Swarm: AMD DX11: 7fps, Nvidia DX11: 17fps, AMD Mantle: 24fps.

    I tested Saints Row IV, where my AMD card is CPU-bottlenecked all the time and frame rates sit closer to 30 than 60, while with the GTX 770 I get 60 more often than 30.

    Even in NFS Rivals my GPU usage drops to 50% in some locations, which causes drops to 24fps.
    With NVIDIA again I get a rock-stable 30; unlocking the framerate I get 60 most of the time, and where I drop to 24fps because of my CPU, with NVIDIA I still get 48fps.

    It's not a perfect comparison since the GTX 770 is far more powerful, but there is plenty of other evidence that weaker NVIDIA GPUs on low-end CPUs really do improve performance compared to AMD cards, which seem to need more CPU power.

    I tried to contact AMD but nobody ever replied. I even registered at GURU3D since there is a guy there who works at AMD, and he never replied either; many people there are just fanboys who attacked me instead of putting pressure on AMD to fix this.

    I'm seriously worried about this problem, because my CPU is old and weak, and the extra frames NVIDIA offers in DX11 are really significant.
    Despite DX12 being very close to release, I'm pretty sure many games will continue to be released on DX11, and the number of games with Mantle fits in one hand.
    So I'm thinking of selling my HD 7850 and buying the upcoming 950 Ti just because of that; it's far more economical than buying a new CPU and motherboard.
    I've known about this problem for more than 6 months and have tried to convince everybody and to contact AMD, but I'm always attacked by fanboys or ignored by AMD.
    So if AMD never replies to me, maybe they don't like my money.

    Nothing is free, though: NVIDIA's DX11 optimizations eat more VRAM, and in some games like Dying Light and Ryse I notice more stuttering and sometimes longer texture load times.
    The same goes if you use Mantle, it eats more VRAM too.
    I expect DX12 will need more VRAM as well.

    If 2GB is getting tight lately, prepare for it to get tighter if DX12 eats more VRAM like NVIDIA's DX11 path and AMD's Mantle.

    Regards.
  • okp247 - Saturday, February 7, 2015 - link

    I think the nVidia cards are actually being gimped as well. On Win7/DX11 people are reporting 70-80 FPS at extreme settings, 1080p with the two top 900-series cards on everything from old i5's to FX's.
    They are just not being hurt as much as AMD's, maybe because of more mature drivers and/or a different architecture.
  • Ryan Smith - Saturday, February 7, 2015 - link

    Please note that we're using the RTS demo. If you're getting scores that high, you're probably using the Follow demo, which varies entirely from run to run.
