GPU Scaling

Switching gears, let’s take a look at performance from a GPU standpoint, including how well Star Swarm performance scales with more powerful GPUs now that we have eliminated the CPU bottleneck. Up until now Star Swarm has never been GPU bottlenecked on high-end NVIDIA cards, so this is our first look at just how much faster Star Swarm can get before it runs into the limits of the GPU itself.

Star Swarm GPU Scaling - Extreme Quality (4 Cores)

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm currently favors NVIDIA GPUs. Even accounting for the overall performance differences between these cards, NVIDIA ends up coming out well ahead here, with the GTX 980 beating the R9 290X by over 50% and the GTX 680 some 25% ahead of the R9 285, both margins well ahead of their average leads in real-world games. With virtually every aspect of this test still under development – OS, drivers, and Star Swarm itself – we would advise against reading too much into this right now, but it will be interesting to see whether this trend holds with the final release of DirectX 12.

Meanwhile it’s interesting to note that largely due to their poor DirectX 11 performance in this benchmark, AMD sees the greatest gains from DirectX 12 on a relative basis and comes close to seeing the greatest gains on an absolute basis as well. The GTX 980’s performance improves by 150% and 40.1fps when switching APIs; the R9 290X improves by 416% and 34.6fps. As for AMD’s Mantle, we’ll get back to that in a bit.
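
To put the relative and absolute figures on common footing, here is a minimal back-of-the-envelope sketch (Python) of how the two are derived from the DX11 and DX12 frame rates; the example frame rates are back-calculated from the GTX 980 gains quoted above and are purely illustrative:

    # Minimal sketch: relative (%) vs. absolute (fps) gains from an API switch.
    # The example frame rates are back-calculated from the GTX 980 figures
    # quoted above (150%, 40.1 fps) and are illustrative only.

    def api_gain(dx11_fps, dx12_fps):
        """Return (relative gain in percent, absolute gain in fps)."""
        absolute = dx12_fps - dx11_fps
        relative = absolute / dx11_fps * 100.0
        return relative, absolute

    rel_pct, abs_fps = api_gain(26.7, 66.8)   # ~150% relative, ~40.1 fps absolute
    print(f"relative: {rel_pct:.0f}%  absolute: {abs_fps:.1f} fps")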

Star Swarm GPU Scaling - Extreme Quality (2 Cores)

Having already established that even 2 CPU cores are enough to keep Star Swarm fed on anything less than a GTX 980, the results here for our 2 core configuration are much the same. Other than the GTX 980 now being CPU limited, the gains from enabling DirectX 12 are consistent with what we saw for the 4 core configuration, which is to say that even a relatively weak CPU can benefit from DirectX 12, at least when paired with a strong GPU.

However the GTX 750 Ti result in particular also highlights the fact that until a powerful GPU comes into play, the benefits of DirectX 12 today aren’t nearly as great. Though the GTX 750 Ti does improve in performance by 26%, this is a far cry from the 150% of the GTX 980, or even the gains of the GTX 680. While AMD is terminally CPU limited here, NVIDIA can get just enough out of DirectX 11 that a 2 core configuration can almost keep the GTX 750 Ti fed. Consequently, in the NVIDIA case a weak CPU paired with a weak GPU does not currently see the same benefits that we get elsewhere. However, as DirectX 12 is meant to be forward looking – released before it’s too late – the benefits even for low-end configurations should continue to grow as GPU performance gains continue to outstrip CPU performance gains.

Comments

  • junky77 - Friday, February 6, 2015 - link

    Looking at the CPU scaling graphs and CPU/GPU usage, it doesn't look like the situation in other games where the CPU can be maxed out. It does seem like this engine and test might be tailored specifically to this DX12 and Mantle use case.

    The interesting thing is to understand whether the DX11 performance shown here is optimal. The CPU usage is way below max, even for the one core supposedly taking all the load. Something is bottlenecking the performance and it's not the number of cores, threads or clocks.
  • eRacer1 - Friday, February 6, 2015 - link

    So the GTX 980 is using less power than the 290X while performing ~50% better, and somehow NVIDIA is the one with the problem here? The data is clear. The GTX 980 has a massive DX12 (and DX11) performance lead and performance/watt lead over 290X.
  • The_Countess666 - Thursday, February 19, 2015 - link

    it also costs twice as much.

    and this is the first time in roughly 4 generations that nvidia's managed to release a new generation first. it would be shocking if there wasn't a huge performance difference between AMD and nvidia at the moment.
  • bebimbap - Friday, February 6, 2015 - link

    TDP and power consumption are not the same thing, but are related
    if i had to write a simple equation it would be something to the effect of

    TDP(wasted heat) = (Power Consumption) X (process node coeff) X (temperature of silicon coeff) X (Architecture coeff)

    so basically TDP or "wasted heat" is related to power consumption but not the same thing
    Since they are on the same process node at the same foundry, the difference in TDP vs power consumed would be because Nvidia currently has the more efficient architecture, which also leads to their chips running cooler, both of which lead to less "wasted heat"

    A perfect conductor would have 0 TDP and infinite power consumption.
  • Mr Perfect - Saturday, February 7, 2015 - link

    Erm, I don't think you've got the right term there with TDP. TDP is not defined as "wasted heat", but as the typical power draw of the board. So if TDP for the GTX 980 is 165 watts, that just means that in normal gaming use it's drawing 165 watts.

    Besides, if a card is drawing 165 watts, it's all going to become heat somewhere along the line. I'm not sure you can really decide how many of those watts are "wasted" and how many are actually doing "work".
  • Wwhat - Saturday, February 7, 2015 - link

    No, he's right TDP means Thermal design power and defines the cooling a system needs to run at full power.
  • Strunf - Saturday, February 7, 2015 - link

    It's the same... if a GC draws 165W it needs a 165W cooler... do you see anything moving on your card exept the fans? no, so all power will be transformed into heat.
  • wetwareinterface - Saturday, February 7, 2015 - link

    no it's not the same. 165w tdp means the cooler has to dump 165w worth of heat.
    165w power draw means the card needs to have 165w of power available to it.

    if the card draws 300w of power and has 200w of heat output that means the card is dumping 200w of that 300w into the cooler.
  • Strunf - Sunday, February 8, 2015 - link

    It's impossible for the card to draw 300W and only output 200W of heat... unless of course now GC defy the laws of physics.
  • grogi - Sunday, April 5, 2015 - link

    What is it doing with the remaining 100W?
