GPU Scaling

Switching gears, let’s take a look at performance from a GPU standpoint, including how well Star Swarm performance scales with more powerful GPUs now that we have eliminated the CPU bottleneck. Until now Star Swarm has never been GPU bottlenecked on high-end NVIDIA cards, so this is our first time seeing just how much faster Star Swarm can get before it runs into the limits of the GPU itself.

Star Swarm GPU Scaling - Extreme Quality (4 Cores)

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm now favors NVIDIA GPUs. Even accounting for performance differences, NVIDIA ends up coming out well ahead here, with the GTX 980 beating the R9 290X by over 50%, and the GTX 680 some 25% ahead of the R9 285, both margins well ahead of their average lead in real-world games. With virtually every aspect of this test still being under development – OS, drivers, and Star Swarm itself – we would advise against reading too much into this right now, but it will be interesting to see whether this trend holds with the final release of DirectX 12.

Meanwhile, it’s interesting to note that, largely due to their poor DirectX 11 performance in this benchmark, AMD sees the greatest gains from DirectX 12 on a relative basis and comes close to seeing the greatest gains on an absolute basis as well. The GTX 980’s performance improves by 150% and 40.1fps when switching APIs; the R9 290X improves by 416% and 34.6fps. As for AMD’s Mantle, we’ll get back to that in a bit.
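As a quick sanity check of those figures, the implied DirectX 11 and DirectX 12 frame rates can be backed out from the stated relative and absolute gains. The short Python sketch below does the arithmetic; the resulting frame rates are inferred from the percentages above rather than read off our charts.

    # Back out implied DX11/DX12 frame rates from a relative gain (%)
    # and an absolute gain (fps). If DX12 = DX11 * (1 + rel) and
    # DX12 - DX11 = abs_gain, then DX11 = abs_gain / rel.
    def implied_fps(rel_gain_pct, abs_gain_fps):
        rel = rel_gain_pct / 100.0
        dx11 = abs_gain_fps / rel      # baseline frame rate under DX11
        dx12 = dx11 + abs_gain_fps     # frame rate after switching to DX12
        return dx11, dx12

    for name, rel, gain in [("GTX 980", 150, 40.1), ("R9 290X", 416, 34.6)]:
        dx11, dx12 = implied_fps(rel, gain)
        print(f"{name}: ~{dx11:.1f} fps (DX11) -> ~{dx12:.1f} fps (DX12)")

Run as-is, that works out to roughly 27fps to 67fps for the GTX 980 and roughly 8fps to 43fps for the R9 290X, consistent with the DirectX 11 deficit AMD shows in this benchmark.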

Star Swarm GPU Scaling - Extreme Quality (2 Cores)

Having already established that even 2 CPU cores are enough to keep Star Swarm fed on anything less than a GTX 980, we find the results are much the same here for our 2 core configuration. Other than the GTX 980 being CPU limited, the gains from enabling DirectX 12 are consistent with what we saw for the 4 core configuration, which is to say that even a relatively weak CPU can benefit from DirectX 12, at least when paired with a strong GPU.

However the GTX 750 Ti result in particular also highlights the fact that until a powerful GPU comes into play, the benefits today from DirectX 12 aren’t nearly as great. Though the GTX 750 Ti does improve in performance by 26%, this is a far cry from the 150% of the GTX 980, or even the gains for the GTX 680. While AMD is terminally CPU limited here, NVIDIA can get just enough out of DirectX 11 that a 2 core configuration can almost feed the GTX 750 Ti. Consequently, in the NVIDIA case a weak CPU paired with a weak GPU does not currently see the same benefits that we get elsewhere. However, DirectX 12 is meant to be forward looking – to be out before it’s too late – and as GPU performance gains continue to outstrip CPU performance gains, the benefits even for low-end configurations should continue to grow.
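For what it’s worth, a deliberately simplified bottleneck model captures the shape of these results: treat frame time as whichever is longer, the CPU’s command submission time or the GPU’s render time, and note that cutting CPU submission cost only raises the frame rate once the CPU side is the longer of the two. The timings in the Python sketch below are hypothetical and chosen purely for illustration, not measured values.

    # Toy bottleneck model: frame rate is limited by the slower of
    # CPU command submission and GPU rendering for a given frame.
    # All timings are hypothetical, chosen only to illustrate the trend.
    def fps(cpu_ms, gpu_ms):
        return 1000.0 / max(cpu_ms, gpu_ms)

    dx11_cpu_ms = 40.0   # costly single-threaded submission (hypothetical)
    dx12_cpu_ms = 10.0   # cheaper multi-threaded submission (hypothetical)

    for label, gpu_ms in [("fast GPU", 15.0), ("slow GPU", 32.0)]:
        gain = fps(dx12_cpu_ms, gpu_ms) / fps(dx11_cpu_ms, gpu_ms) - 1
        print(f"{label}: {fps(dx11_cpu_ms, gpu_ms):.0f} fps -> "
              f"{fps(dx12_cpu_ms, gpu_ms):.0f} fps (+{gain:.0%})")

In this toy model the fast GPU goes from CPU-limited to GPU-limited and more than doubles its frame rate, while the slow GPU is already close to its own limit and only picks up a modest gain, which mirrors the gap between the GTX 980 and the GTX 750 Ti here.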

Comments

  • OrphanageExplosion - Sunday, February 8, 2015 - link

    On a tiny minority of titles.
  • bloodypulp - Sunday, February 8, 2015 - link

    Battlefield 4
    Battlefield Hardline
    Thief
    Star Citizen
    Plants vs. Zombies: Garden Warfare
    Civilization: Beyond Earth
    Dragon Age: Inquisition
    Mirror's Edge 2
    Sniper Elite 3
    ... and growing every day.
  • bloodypulp - Sunday, February 8, 2015 - link

    Who needs to wait for DX12? Mantle is running great for me right now. :)
  • sireangelus - Sunday, February 8, 2015 - link

    Would you do one quick test using an 8-core FX?
  • johnny_boy - Sunday, February 8, 2015 - link

    Would have loved to see this, and some lower end CPUs even.
  • editorsorgtfo - Sunday, February 8, 2015 - link

    What about threaded CPUs? For example, old 1-core/2-thread Pentium CPUs and 2-core/4-thread i3 CPUs? Can you still count them as 2 cores and 4 cores?

    I wanna ask this on the anandtech comment section but I don't have an account there XD
  • boe - Sunday, February 8, 2015 - link

    What I care about are great graphics. It is a shame there is no Crytek 4 engine to show off what DX12 could do. MS should have hired the original crytek developers to create some showpiece game.
  • Gigaplex - Monday, February 9, 2015 - link

    The API won't really change what you can do compared to DX11 other than reduce some system requirements. The feature levels are what provides new eye candy, and this preview doesn't cover that aspect. Wait until it hits retail, you'll probably see some fancy tech demos.
  • Thermalzeal - Sunday, February 8, 2015 - link

    I have one big question to ask.

    Since Direct X12 is resulting in significant performance gains, what is the potential for these improvements to translate over to the Xbox One? While I'm sure the Xbox One already has some of these bare metal improvements, due to the focus of the device...is it possible that DX12 will make the Xbox One more powerful than the PS4?
  • Ryan Smith - Sunday, February 8, 2015 - link

    "Since Direct X12 is resulting in significant performance gains, what is the potential for these improvements to translate over to the Xbox One?"

    Only Microsoft really knows the answer to that one. But I would be shocked beyond belief if the XB1's D3D 11.X API didn't already implement many of these optimizations. It is after all a fixed console, where low-level APIs have been a mainstay since day one.

    "is it possible that DX12 will make the Xbox One more powerful than the PS4?"

    In a word, no. The best case scenario for Microsoft is that Sony implements their own low-level API (if they haven't already) and we're back at square one. APIs can't make up for hardware differences when both parties have the means and influence to create what would be similar APIs.
