GPU Scaling

Switching gears, let’s take a look at performance from a GPU standpoint, including how well Star Swarm scales with more powerful GPUs now that we have eliminated the CPU bottleneck. Until now, Star Swarm has never been GPU-bottlenecked on high-end NVIDIA cards, so this is our first chance to see just how much faster Star Swarm can get before it runs into the limits of the GPU itself.

[Chart: Star Swarm GPU Scaling - Extreme Quality (4 Cores)]

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm currently favors NVIDIA GPUs. Even after accounting for the cards’ usual performance differences, NVIDIA comes out well ahead here: the GTX 980 beats the R9 290X by over 50%, and the GTX 680 leads the R9 285 by some 25%, both margins well ahead of their average leads in real-world games. With virtually every aspect of this test still under development – the OS, the drivers, and Star Swarm itself – we would advise against reading too much into this right now, but it will be interesting to see whether this trend holds with the final release of DirectX 12.

Meanwhile it’s interesting to note that, largely due to their poor DirectX 11 performance in this benchmark, AMD’s cards see the greatest gains from DirectX 12 on a relative basis, and come close to seeing the greatest gains on an absolute basis as well. The GTX 980’s performance improves by 150% (40.1fps) when switching APIs, while the R9 290X improves by 416% (34.6fps). As for AMD’s Mantle, we’ll get back to that in a bit.
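
Working backwards from those figures, the R9 290X’s DirectX 11 baseline comes out to roughly 34.6 ÷ 4.16 ≈ 8.3fps, versus roughly 40.1 ÷ 1.50 ≈ 26.7fps for the GTX 980. Starting from such a low baseline is what makes AMD’s relative gain so much larger even though its absolute gain is slightly smaller.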

[Chart: Star Swarm GPU Scaling - Extreme Quality (2 Cores)]

Having already established that even 2 CPU cores are enough to keep Star Swarm fed on anything less than a GTX 980, the results here for our 2-core configuration are much the same. Other than the GTX 980 being CPU-limited, the gains from enabling DirectX 12 are consistent with what we saw for the 4-core configuration, which is to say that even a relatively weak CPU can benefit from DirectX 12, at least when paired with a strong GPU.

However the GTX 750 Ti result in particular also highlights the fact that until a powerful GPU comes into play, the benefits of DirectX 12 today aren’t nearly as great. Though the GTX 750 Ti does improve in performance by 26%, this is a far cry from the GTX 980’s 150%, or even the gains for the GTX 680. While AMD is terminally CPU-limited here, NVIDIA can get just enough out of DirectX 11 that a 2-core configuration can almost feed the GTX 750 Ti. Consequently, in the NVIDIA case, a weak CPU paired with a weak GPU does not currently see the same benefits that we see elsewhere. However, DirectX 12 is meant to be forward-looking – to be out before it’s too late – and as GPU performance gains continue to outstrip CPU performance gains, the benefits even for low-end configurations will continue to increase.

Comments

  • zmeul - Saturday, February 7, 2015

    I wanted to see the difference in VRAM usage between DX11 and DX12, because from my own testing Mantle uses around 600MB more than DX11 at the same settings.
    Tested in Star Swarm and Sniper Elite 3.

    Enough VRAM? No, I don't think so.
    Sniper Elite 3 at maximum settings, 1080p, no super-sampling, used around 2.6GB in Mantle - if I recall.
    That makes the Radeon 285 and the GTX 960 obsolete right off the bat - if VRAM usage in DX12 is anything like Mantle's.
  • Ryan Smith - Saturday, February 7, 2015

    At this point it's much too early to compare VRAM consumption. That's just about the last thing that will be optimized at both the driver level and the application level.
  • zmeul - Saturday, February 7, 2015

    Then why make this preview in the first place, if not to cover all aspects of DX11 vs. DX12 vs. Mantle?
    VRAM usage is a point of interest to many people, especially now with AMD's 300 series on the horizon.
  • jeffkibuule - Saturday, February 7, 2015

    Then this article wouldn't exist until the fall.
  • Gigaplex - Sunday, February 8, 2015

    Because it's a PREVIEW, not a final in-depth analysis.
  • killeak - Sunday, February 8, 2015

    D3D12 can lower memory requirements, since it adds a number of features that allow the application to take tighter control of memory in a way that wasn't possible before - but it's the application's responsibility to actually do so. (See the sketch after the comments for an illustration.)
  • powerarmour - Friday, February 6, 2015

    RIP Mantle, I've got cheese in my fridge that's lasted longer!
  • tipoo - Friday, February 6, 2015

    It will live on in GLNext.
  • Shadowmaster625 - Friday, February 6, 2015

    G3258 just got a buff!
  • ppi - Friday, February 6, 2015

    I wonder how AMD CPUs would fare in this comparison. Currently they are slower than an i3, but this could change the picture a bit.
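
Picking up on killeak's comment above: under D3D12, explicit heaps and placed resources are the features that hand this memory control to the application. Below is a minimal sketch of placed-resource aliasing, in which two transient render targets share one block of heap memory because they are never live at the same time. The function and resource names are illustrative only, and error handling is omitted.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Reserve one 64MB heap up front. The application, not the runtime,
    // now decides how this memory gets used and reused.
    void CreateAliasedTargets(ID3D12Device* device)
    {
        D3D12_HEAP_DESC heapDesc = {};
        heapDesc.SizeInBytes = 64 * 1024 * 1024;
        heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
        heapDesc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
        heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;

        ComPtr<ID3D12Heap> heap;
        device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

        // A 1080p render target; both placed resources use this layout.
        D3D12_RESOURCE_DESC rtDesc = {};
        rtDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        rtDesc.Width = 1920;
        rtDesc.Height = 1080;
        rtDesc.DepthOrArraySize = 1;
        rtDesc.MipLevels = 1;
        rtDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        rtDesc.SampleDesc.Count = 1;
        rtDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
        rtDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

        // Both transient targets are placed at offset 0 in the same heap,
        // so they alias the same physical memory. Under DX11 the runtime
        // would allocate separate memory for each behind the app's back.
        ComPtr<ID3D12Resource> targetA, targetB;
        device->CreatePlacedResource(heap.Get(), 0, &rtDesc,
            D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr,
            IID_PPV_ARGS(&targetA));
        device->CreatePlacedResource(heap.Get(), 0, &rtDesc,
            D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr,
            IID_PPV_ARGS(&targetB));

        // The app must issue an aliasing barrier in its command list when
        // switching between the two targets within a frame.
    }

Whether any given engine actually does this is up to its developers, which is killeak's point: D3D12 provides the control, not the savings themselves.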
