GPU Scaling

Switching gears, let’s take a look at performance from a GPU standpoint, including how well Star Swarm performance scales with more powerful GPUs now that we have eliminated the CPU bottleneck. Until now Star Swarm has never been GPU bottlenecked on high-end NVIDIA cards, so this is our first chance to see just how much faster Star Swarm can get before it runs into the limits of the GPU itself.

Star Swarm GPU Scaling - Extreme Quality (4 Cores)

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm currently favors NVIDIA GPUs. Even accounting for the usual performance differences between these cards, NVIDIA ends up coming out well ahead here, with the GTX 980 beating the R9 290X by over 50%, and the GTX 680 some 25% ahead of the R9 285, both margins well ahead of their average leads in real-world games. With virtually every aspect of this test still under development – OS, drivers, and Star Swarm itself – we would advise against reading too much into this right now, but it will be interesting to see whether this trend holds with the final release of DirectX 12.

Meanwhile it’s interesting to note that largely due to their poor DirectX 11 performance in this benchmark, AMD sees the greatest gains from DirectX 12 on a relative basis and comes close to seeing the greatest gains on an absolute basis as well. The GTX 980’s performance improves by 150% and 40.1fps when switching APIs; the R9 290X improves by 416% and 34.6fps. As for AMD’s Mantle, we’ll get back to that in a bit.
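
To make the relative/absolute comparison concrete, those two quoted gains pin down the implied DirectX 11 baselines. The short sketch below backs them out; note that the computed baselines are derived from the rounded figures above rather than measured directly, so treat them as approximate.

```cpp
// Sanity-check sketch: if a card gains R percent and D fps when moving from
// DirectX 11 to DirectX 12, its implied DX11 baseline is D / (R / 100) and
// its DX12 result is baseline + D. Figures are taken from the text above.
#include <cstdio>

static void impliedBaseline(const char* gpu, double gainPct, double gainFps) {
    double dx11 = gainFps / (gainPct / 100.0);
    std::printf("%s: ~%.1f fps (DX11) -> ~%.1f fps (DX12)\n",
                gpu, dx11, dx11 + gainFps);
}

int main() {
    impliedBaseline("GTX 980", 150.0, 40.1);  // ~26.7 -> ~66.8 fps
    impliedBaseline("R9 290X", 416.0, 34.6);  // ~8.3 -> ~42.9 fps
    return 0;
}
```

In other words, AMD's headline-grabbing relative gain is largely a product of a very low DirectX 11 starting point; in absolute frames per second, the two cards improve by similar amounts.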

Star Swarm GPU Scaling - Extreme Quality (2 Cores)

Having already established that even 2 CPU cores are enough to keep Star Swarm fed on anything less than a GTX 980, the results for our 2 core configuration are much the same. Other than the GTX 980 being CPU limited, the gains from enabling DirectX 12 are consistent with what we saw for the 4 core configuration, which is to say that even a relatively weak CPU can benefit from DirectX 12, at least when paired with a strong GPU.

However the GTX 750 Ti result in particular also highlights the fact that until a powerful GPU comes into play, the benefits today from DirectX 12 aren’t nearly as great. Though the GTX 750 Ti does improve in performance by 26%, this is a far cry from the 150% of the GTX 980, or even the gains for the GTX 680. While AMD is terminally CPU limited here, NVIDIA can get just enough out of DirectX 11 that a 2 core configuration can almost feed the GTX 750 Ti. Consequently, in the NVIDIA case, a weak CPU paired with a weak GPU does not currently see the same benefits that we get elsewhere. However, DirectX 12 is meant to be forward looking – to be out before it’s too late – and as GPU performance gains continue to outstrip CPU performance gains, the benefits even for low-end configurations will continue to increase.
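
To ground the core-count discussion, here is a minimal sketch of the submission model DirectX 12 enables: each worker thread records draw commands into its own command list, and the main thread submits everything in one batch, which is why adding CPU cores helps feed a fast GPU. Interface names follow the public D3D12 headers and may differ in the final API; error handling, fence synchronization, and the actual draw recording (the hypothetical RecordDraws()) are omitted.

```cpp
// Sketch: multithreaded command list recording under Direct3D 12. Each worker
// thread gets its own allocator/list pair, so draw submission scales with
// available CPU cores instead of funneling through one immediate context.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

struct Worker {
    ComPtr<ID3D12CommandAllocator> allocator;
    ComPtr<ID3D12GraphicsCommandList> list;
};

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    unsigned workerCount = std::thread::hardware_concurrency();
    if (workerCount == 0) workerCount = 2;

    std::vector<Worker> workers(workerCount);
    for (auto& w : workers) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&w.allocator));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  w.allocator.Get(), nullptr,
                                  IID_PPV_ARGS(&w.list));
    }

    // Each thread records into its own command list: no shared state and
    // no per-draw driver serialization on a single thread.
    std::vector<std::thread> threads;
    for (auto& w : workers)
        threads.emplace_back([&w] {
            // RecordDraws(w.list.Get());  // hypothetical per-thread recording
            w.list->Close();
        });
    for (auto& t : threads) t.join();

    // One cheap, batched submission from the main thread.
    std::vector<ID3D12CommandList*> lists;
    for (auto& w : workers) lists.push_back(w.list.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());

    // A real application would wait on an ID3D12Fence here before teardown.
    return 0;
}
```

Contrast this with DirectX 11, where draw calls funnel through a single immediate context and the driver thread becomes the bottleneck that our 2 core results illustrate.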

Comments

  • jeffkibuule - Saturday, February 7, 2015 - link

    Right, it's a secret to the public, not so much to the engineers.
  • Jumangi - Friday, February 6, 2015 - link

    MS is a much bigger company, so the resources they have give them a big edge. I also suspect that while AMD publicly still supports Mantle, they probably aren't doing much for its future, as they are smart enough, or I hope they are smart enough, to realize DX12 makes Mantle irrelevant.
  • toffty - Friday, February 6, 2015 - link

    I wouldn't say Mantle is irrelevant, since DirectX 12 is Windows 10 and Xbox One only. Mantle is for Windows < 10, Linux, and OS X. It will be competing against OpenGL in that space, true, but if it's easier to port applications made with Mantle to Xbox One, Mantle will have a leg up on OpenGL. Mantle also needs to become open, and NVIDIA needs to support it too.
  • Penti - Saturday, February 7, 2015 - link

    There is no Mantle for anything other than Windows. The PS4 uses its own APIs, Nintendo uses its own APIs, and the X1 doesn't straight up run the same runtime or API as DX on Windows and already has low-level features; D3D11.X is a superset of D3D/DX11. Games are normally DX/HLSL to begin with, so you don't need to mess around much with shaders or with converting formats. Converting GLSL to HLSL is practical though. Many engines also have their own shader languages and systems, making things like Mantle's shading language similarities irrelevant. Most will design for something else as their first path rather than Mantle.
  • Jumangi - Saturday, February 7, 2015 - link

    Fanboys can get mad, but Linux and OS X are irrelevant for mainstream gaming. So yes, Mantle has no real future and will be forgotten in a couple of years.
  • bloodypulp - Sunday, February 8, 2015 - link

    Keep on dreaming, Nvidia fangirl. Mantle is coming for SteamOS (Linux). Count on it.
  • yannigr2 - Friday, February 6, 2015 - link

    Mantle was the catalyst that brought DX12 closer. No reason for AMD to spend much on Mantle now. The job is done.
  • Notmyusualid - Friday, February 6, 2015 - link

    More or less my thinking too. Best to keep us all on the same page, so to speak.

    Respect to AMD, but let us all join the DX12 train together....
  • AnnonymousCoward - Saturday, February 7, 2015 - link

    Not necessarily. How do you know it wasn't MS's idea even? Look at Khato's post above.
  • tipoo - Friday, February 6, 2015 - link

    What's the support status of the Intel chips that are going to get it? I think all Iris models will, and a few of the higher-end HD Graphics series parts will. Are they currently supported?

    It would be interesting to see if this could push the Iris Pro 5200 any further, though this is more about the flip situation of a weak CPU with a strong GPU rather than the Iris Pro's strong CPU with a modest GPU.
