GPU Scaling

Switching gears, let’s take a look at performance from a GPU standpoint, including how well Star Swarm performance scales with more powerful GPUs now that we have eliminated the CPU bottleneck. Until now Star Swarm has never been GPU bottlenecked on high-end NVIDIA cards, so this is our first time seeing just how much faster Star Swarm can get before it runs into the limits of the GPU itself.

Star Swarm GPU Scaling - Extreme Quality (4 Cores)

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm currently favors NVIDIA GPUs. Even accounting for performance differences, NVIDIA ends up coming out well ahead here, with the GTX 980 beating the R9 290X by over 50%, and the GTX 680 some 25% ahead of the R9 285, both values well ahead of their average lead in real-world games. With virtually every aspect of this test still under development – OS, drivers, and Star Swarm – we would advise against reading too much into this right now, but it will be interesting to see if this trend holds with the final release of DirectX 12.

Meanwhile it’s interesting to note that largely due to their poor DirectX 11 performance in this benchmark, AMD sees the greatest gains from DirectX 12 on a relative basis and comes close to seeing the greatest gains on an absolute basis as well. The GTX 980’s performance improves by 150% and 40.1fps when switching APIs; the R9 290X improves by 416% and 34.6fps. As for AMD’s Mantle, we’ll get back to that in a bit.
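To make the relative versus absolute comparison concrete, here is a minimal sketch of how those two figures relate. The DirectX 11 and DirectX 12 frame rates used below are back-calculated from the gains quoted above, so treat them as approximations rather than exact chart values.

```python
# Minimal sketch: how the relative (%) and absolute (fps) gains quoted above
# relate to each other. The DX11/DX12 frame rates are back-calculated from the
# stated gains and are therefore approximate, not exact chart values.

def gains(dx11_fps, dx12_fps):
    absolute = dx12_fps - dx11_fps            # fps gained by switching APIs
    relative = absolute / dx11_fps * 100.0    # percentage improvement over DX11
    return absolute, relative

results = {
    "GTX 980": (26.7, 66.8),   # ~150% relative, ~40.1 fps absolute
    "R9 290X": (8.3, 42.9),    # ~416% relative, ~34.6 fps absolute
}

for card, (dx11, dx12) in results.items():
    absolute, relative = gains(dx11, dx12)
    print(f"{card}: +{absolute:.1f} fps ({relative:.0f}%)")
```

The sketch illustrates why a card with weak DirectX 11 performance can post a much larger relative gain while still trailing slightly on an absolute basis: the percentage is measured against a much smaller starting point.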

Star Swarm GPU Scaling - Extreme Quality (2 Cores)

Having already established that even 2 CPU cores is enough to keep Star Swarm fed on anything less than a GTX 980, the results for our 2 core configuration are much the same here. Other than the GTX 980 being CPU limited, the gains from enabling DirectX 12 are consistent with what we saw for the 4 core configuration, which is to say that even a relatively weak CPU can benefit from DirectX 12, at least when paired with a strong GPU.

However the GTX 750 Ti result in particular also highlights the fact that until a powerful GPU comes into play, the benefits today from DirectX 12 aren’t nearly as great. Though the GTX 750 Ti does improve in performance by 26%, this is a far cry from the 150% of the GTX 980, or even the gains for the GTX 680. While AMD is terminally CPU limited here, NVIDIA can get just enough out of DirectX 11 that a 2 core configuration can almost feed the GTX 750 Ti. Consequently in the NVIDIA case, a weak CPU paired with a weak GPU does not currently see the same benefits that we get elsewhere. However as DirectX 12 is meant to be forward looking – to be out before it’s too late – the benefits even for low-end configurations will continue to increase as GPU performance gains continue to outstrip CPU performance gains.

Comments

  • Mr Perfect - Sunday, February 8, 2015 - link

That's not what he's saying though, he said TDP is some measure of how much of the heat is "wasted" heat. As if there's some way to figure out what part of the 165 watts is doing computational work, and what is just turning into heat without doing any computational work. That's not what TDP measures.

    Also, CPUs and GPUs can routinely go past TDP, so I'm not sure where people keep getting the idea that TDP is maximum power draw. It's seen regularly in the benchmarks here at Anandtech. That's usually one of the goals of the power section of reviews, seeing if the manufacturer's TDP calculation of typical power draw holds up in the real world.
  • Mr Perfect - Sunday, February 8, 2015 - link

    Although, now that I think about it, I do remember a time when TDP actually was pretty close to maximum power draw. But then Intel came out with the Netburst architecture and started defining TDP as the typical power used by the part in real world use, since the maximum power draw was so ugly. After a lot of outrage from the other companies, they picked up the same practice so they wouldn't seem to be at a disadvantage in regard to power draw. That was ages ago though, TDP hasn't meant maximum power draw for years.
  • Strunf - Sunday, February 8, 2015 - link

    TDP essentially means your GPU can work at that power input for a long time. In the past the CPU/GPU stayed close to it because they didn't have throttling, idle states, and whatnot. Today they do, and they can go past the TDP for "short" periods of time; with the help of thermal sensors they can adjust the power as needed without risking burning down the CPU/GPU.
  • YazX_ - Friday, February 6, 2015 - link

    Dude, it's total system power consumption, not video card only.
  • Morawka - Friday, February 6, 2015 - link

    Are you sure you're not looking at factory overclocked cards? The 980 has an 8-pin and a 6-pin connector. You gotta subtract the CPU and motherboard power.

    Check any reference review on power consumption

    http://www.guru3d.com/articles_pages/nvidia_geforc...
  • Yojimbo - Friday, February 6, 2015 - link

    Did you notice the 56% greater performance? The rest of the system is going to be drawing more power to keep up with the greater GPU performance. NVIDIA is getting much greater benefit from having 4 cores than 2, for instance. And who knows, maybe the GPU itself was able to run closer to full load. Also, the benchmark is not deterministic, as mentioned several times in the article. It is the wrong sort of benchmark to be using to compare two different GPUs in power consumption, unless the test is run a significant number of times. Finally, you said the R9 290X-powered system consumed 14W more in the DX12 test than the GTX 980-powered system, but the list shows it consumed 24W more. Let's not even compare DX11 power consumption using this benchmark, since NVIDIA's performance is 222% higher.
  • MrPete123 - Friday, February 6, 2015 - link

    Win7 will be dominant in businesses for some time, but not on gaming PCs, where this will be of more benefit.
  • Yojimbo - Friday, February 6, 2015 - link

    Most likely the main reasons for consumers not upgrading to Windows 10 will be laziness, comfort, and ignorance.
  • Murloc - Saturday, February 7, 2015 - link

    people who are CPU bottlenecked are not that kind of people given the amount of money they spend on GPUs.
  • Frenetic Pony - Friday, February 6, 2015 - link

    FREE. Ok. FREE. F and then R and then E and then another E.
