GPU Scaling

Switching gears, let’s look at performance from a GPU standpoint, including how well Star Swarm scales with more powerful GPUs now that we have eliminated the CPU bottleneck. Until now Star Swarm has never been GPU bottlenecked on high-end NVIDIA cards, so this is our first chance to see just how much faster Star Swarm can get before it runs into the limits of the GPU itself.

Star Swarm GPU Scaling - Extreme Quality (4 Cores)

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm currently favors NVIDIA GPUs. Even after accounting for typical performance differences, NVIDIA comes out well ahead here, with the GTX 980 beating the R9 290X by over 50% and the GTX 680 some 25% ahead of the R9 285, both margins well beyond their average leads in real-world games. With virtually every aspect of this test – OS, drivers, and Star Swarm itself – still under development, we would advise against reading too much into this right now, but it will be interesting to see whether this trend holds with the final release of DirectX 12.

Meanwhile it’s interesting to note that, largely due to its poor DirectX 11 performance in this benchmark, AMD sees the greatest gains from DirectX 12 on a relative basis and comes close to seeing the greatest gains on an absolute basis as well. The GTX 980’s performance improves by 150% and 40.1fps when switching APIs, while the R9 290X improves by 416% and 34.6fps. As for AMD’s Mantle, we’ll get back to that in a bit.
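
To put those relative and absolute figures side by side, here is the back-of-the-envelope arithmetic they imply, assuming the percentage gains are measured against each card’s DirectX 11 result (these frame rates are derived from the numbers above, not additional measurements):

\[
\text{GTX 980: } \frac{40.1\ \text{fps}}{1.50} \approx 26.7\ \text{fps (DX11)} \rightarrow 26.7 + 40.1 \approx 66.8\ \text{fps (DX12)}
\]
\[
\text{R9 290X: } \frac{34.6\ \text{fps}}{4.16} \approx 8.3\ \text{fps (DX11)} \rightarrow 8.3 + 34.6 \approx 42.9\ \text{fps (DX12)}
\]

In other words, the R9 290X starts from a far lower DirectX 11 baseline, which is why its relative gain dwarfs the GTX 980’s even though its absolute gain ends up slightly smaller.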

Star Swarm GPU Scaling - Extreme Quality (2 Cores)

Having already established that even 2 CPU cores are enough to keep Star Swarm fed on anything less than a GTX 980, we find the results much the same here for our 2 core configuration. Other than the GTX 980 now being CPU limited, the gains from enabling DirectX 12 are consistent with what we saw for the 4 core configuration. Which is to say that even a relatively weak CPU can benefit from DirectX 12, at least when paired with a strong GPU.

However the GTX 750 Ti result in particular also highlights the fact that until a powerful GPU comes into play, the benefits of DirectX 12 today aren’t nearly as great. Though the GTX 750 Ti does improve in performance by 26%, this is a far cry from the 150% of the GTX 980, or even the gains for the GTX 680. While AMD is terminally CPU limited here, NVIDIA can get just enough out of DirectX 11 that a 2 core configuration can almost feed the GTX 750 Ti. Consequently in the NVIDIA case, a weak CPU paired with a weak GPU does not currently see the same benefits that we get elsewhere. However, DirectX 12 is meant to be forward looking – to be out before it’s too late – so as GPU performance gains continue to outstrip CPU performance gains, the benefits even for low-end configurations will continue to grow.

Comments

  • inighthawki - Monday, February 9, 2015 - link

    >> btw funny how "M$ would need to do huge kernel rework to bring DX12 to Win7/8" while mantle, which does similar thing, is easily capable to be "OS version independent" (sure it is amd specific but still)

    How do you know that DX12 will not support a number of features that Mantle will not? For example, DX12 is expected to provide the application with manual memory management, a feature not available in Mantle while running on WDDM 1.3 or below.
  • lordken - Tuesday, February 10, 2015 - link

    What I meant is in performance terms: Mantle is able to deliver roughly the same performance boost as DX12 while still running on the old Windows kernel.
    I'm not saying DX12 won't support something that Mantle won't be able to do on the old Windows kernel. I merely tried to highlight that the same performance boost can be achieved on the current OS, without M$ taunting gamers with a (forced) Win10 upgrade for DX12.
  • killeak - Tuesday, February 10, 2015 - link

    "btw funny how "M$ would need to do huge kernel rework to bring DX12 to Win7/8" while mantle, which does similar thing, is easily capable to be "OS version independent" (sure it is amd specific but still)"

    Direct3D has a very different design. While APIs like OpenGL or Mantle are implemented in the drivers, Direct3D's runtime is implemented in the OS. That means that no matter what hardware you have, the code that is executed under the API is, for the most part, always the same. Sure, the driver still needs to expose and abstract the hardware (following another API, in this case WDDM 2.0), but that driver-side implementation is much slimmer, which makes it much more solid and reliable.

    OpenGL, on the other hand, is implemented in the driver; the OS only exposes the basic C functions to create the context and the like. A good driver can make OpenGL work as fast as D3D, or even faster, but in reality 90% of the time OpenGL works worse. Not just in terms of performance: because each driver for each OS and each GPU has a different implementation, things usually don't work as you expect.

    After years of working with OpenGL and D3D, the thing I miss most about D3D when I am coding for OpenGL platforms is the single runtime. Program once, run everywhere (well, on every Windows) works on D3D but not on OpenGL; hell, it's even harder on mobile with OpenGL ES and the broken drivers from Mali, Qualcomm, etc. Sure, if your app is simple OpenGL works, but for AAA it just doesn't cut it...

    The truth is, IHVs are here to sell hardware, not software, so they invest the minimum time and money in it (most of the time they optimize drivers for big AAA titles and benchmarks). For mobile, where SoCs are replaced every year, it's even worse, since drivers never get mature enough. Heck, Mali for example doesn't have devices with the 700 series on the market yet and they've already announced the 800 series, while their OpenGL ES drivers for the 600 series are really bad.

    Going back to Mantle and Win7/8: in the driver you can do whatever you want, so yes, you can make your own API and make it work wherever you want. That is why Mantle can do low-level things without WDDM 2.0; it doesn't need to be common or compatible with other drivers/vendors.
  • Bill McGann - Tuesday, February 10, 2015 - link

    Yeah, this is a huge reason why GL is largely ignored by Windows devs. D3D is extremely stable thanks to it largely being implemented by MS, and them having the power to test/certify the vendors' back-ends.
    GL on the other hand is the wild west, with every vendor doing whatever they like... You even have them stealing MS's terrible 90's browser-war strategies of deliberately supporting broken behavior, hoping devs will use it, so that games will break on other vendors' (strictly compliant) drivers. Any situation where vendors are abusing devs like this is pretty f'ed up.
  • tobi1449 - Thursday, February 12, 2015 - link

    The console & PC aspect isn't going anywhere and was never meant to. AMD worded their early press releases in a way that had some people jumping on the hype train before it was even built, but AMD was shut down by Microsoft and Sony pretty quickly about that.
  • Bill McGann - Tuesday, February 10, 2015 - link

    FYI Mantle is very carefully specified as a vendor-agnostic API, like GL, with extensions for vendor-specific behavior.

    If AMD even bother launching Mantle after D3D12/GLNext appear, and if it remains AMD-only, it's because nVidia/Intel have chosen not to adopt the spec, not because AMD have deliberately made it AMD-only.
  • tobi1449 - Thursday, February 12, 2015 - link

    a) I can see why there's resistance to adopting a competitor's API.
    b) AFAIK AMD hasn't released anything needed to implement Mantle for other hardware yet. Sure, they've often talked about it, and this pops up most of the time Mantle is mentioned, but in reality (if this is still correct) it is as locked down as, say, G-Sync or PhysX.
  • Arbie - Tuesday, February 10, 2015 - link

    I closely followed graphics board technology and performance for many years. But after a certain point I realized that there are actually very few - count 'em on one hand - games that I even enjoy playing. Three of those start with "Crysis" (and the other two with "Peggle"). The Battlefield series might have the same replay interest; don't know.

    So unless and until there are really startling ~3x gains for the same $$, my interest in desktop graphics card performance is much more constrained by game quality than by technology. I don't want to run "Borderlands" 50% faster because... I don't want to run it at all. Or any other of the lousy console ports out there.
  • computertech82 - Wednesday, February 11, 2015 - link

    SLIGHT PROBLEM. I think it's safe to say the dx11 vs dx12 comparison was run on the SAME OS, Windows 10. That probably just means dx11 runs crappy on win10, not that dx12 is so much better. I bet it would be different with win7/8 dx11 vs win10 dx12 (meaning very little difference).
  • Notmyusualid - Thursday, February 12, 2015 - link

    Good point - hadn't considered it until you mentioned it.

    Then the comparison should really have been dx11 - Win 7/8, dx12 - Win 10, Mantle - both (if poss).
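
As a footnote to the explicit memory management that inighthawki mentions in the comments above, the sketch below illustrates roughly what application-managed video memory looks like under Direct3D 12's documented CreateHeap/CreatePlacedResource pattern. This is a minimal illustration only: the helper function name, heap size, and flags are chosen for the example and are not drawn from Star Swarm or from this article's testing.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Hypothetical helper: carve a buffer out of an application-managed heap.
    // Under D3D11 the runtime/driver allocates memory per resource behind the
    // application's back; under D3D12 the application creates the heap itself
    // and decides where each resource lives inside it.
    HRESULT CreateBufferInAppHeap(ID3D12Device* device, UINT64 bufferSize,
                                  ComPtr<ID3D12Heap>& heap,
                                  ComPtr<ID3D12Resource>& buffer)
    {
        // A 64 MB GPU-local heap, sized arbitrarily for the example;
        // the buffer below must fit inside it.
        D3D12_HEAP_DESC heapDesc = {};
        heapDesc.SizeInBytes = 64ull * 1024 * 1024;
        heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
        heapDesc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
        heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;
        HRESULT hr = device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));
        if (FAILED(hr)) return hr;

        // Place the buffer at offset 0 of that heap. The application chooses
        // the offset and may later reuse or alias the same memory for other
        // resources; this is the "manual" part that D3D11 never exposed.
        D3D12_RESOURCE_DESC bufDesc = {};
        bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
        bufDesc.Width = bufferSize;
        bufDesc.Height = 1;
        bufDesc.DepthOrArraySize = 1;
        bufDesc.MipLevels = 1;
        bufDesc.Format = DXGI_FORMAT_UNKNOWN;
        bufDesc.SampleDesc.Count = 1;
        bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
        return device->CreatePlacedResource(heap.Get(), 0, &bufDesc,
                                            D3D12_RESOURCE_STATE_COMMON,
                                            nullptr, IID_PPV_ARGS(&buffer));
    }

Whether the final DirectX 12 exposes exactly this interface remains to be seen at the time of writing, but it captures the kind of control over placement and reuse that is difficult to offer on WDDM 1.3 and earlier.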
