Frame Time Consistency & Recordings

Last, but not least, we also wanted to look at frame time consistency across Star Swarm, our two vendors, and the various APIs available to them. Next to CPU efficiency gains, one of the other touted benefits of low-level APIs like DirectX 12 is the ability for developers to better control frame pacing, since the API and driver are doing fewer things under the hood and behind an application’s back. Inefficient memory management, resource allocation, and shader compilation in particular can cause unexpected and undesirable momentary drops in performance. However, while low-level APIs can improve on this aspect, that doesn’t necessarily mean high-level APIs are bad at it; the distinction here is between good and better, not bad and good.
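To make the shader compilation point concrete, consider how an engine on an explicit API can schedule that work itself rather than leaving it to the driver. The sketch below is ours, not Star Swarm’s code, and compilePipeline() is a hypothetical stand-in for a real pipeline-creation call with an invented 30 ms cost:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical stand-in for an expensive API/driver operation (e.g. pipeline
// state creation with shader compilation); the 30 ms cost is made up.
void compilePipeline(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(30));
    std::printf("pipeline %d compiled\n", id);
}

int main() {
    // Explicit-API style: pay the compilation cost up front during loading,
    // so no frame later on has to absorb it.
    auto t0 = Clock::now();
    for (int id = 0; id < 4; ++id)
        compilePipeline(id);
    double loadMs =
        std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
    std::printf("paid %.0f ms at load time\n", loadMs);

    // The alternative -- compiling lazily inside the render loop the first
    // time a pipeline is used -- drops a ~30 ms stall into a single frame,
    // blowing through a 16.7 ms (60 fps) frame budget and causing a visible
    // hitch even though the average frame rate barely moves.
    return 0;
}
```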

On a technical note, these frame times are measured within (and logged by) Star Swarm itself. So these are not “FCAT” results measuring the end of the display pipeline, nor is that currently possible due to the lack of an overlay option for DirectX 12.
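In-application frame time logging of this sort generally amounts to timestamping each iteration of the render loop. Star Swarm’s actual instrumentation isn’t public, so this is only a minimal sketch of the technique; renderFrame() is a placeholder:

```cpp
#include <chrono>
#include <cstdio>

int main() {
    using Clock = std::chrono::steady_clock;
    std::FILE* log = std::fopen("frametimes.csv", "w");
    if (!log) return 1;

    auto prev = Clock::now();
    for (int frame = 0; frame < 1000; ++frame) {
        // renderFrame();  // build command lists, submit, present

        // Time between successive loop iterations = this frame's cost
        auto now = Clock::now();
        double ms = std::chrono::duration<double, std::milli>(now - prev).count();
        prev = now;
        std::fprintf(log, "%d,%.3f\n", frame, ms);
    }
    std::fclose(log);
    return 0;
}
```

Note that this timestamps the CPU side of the loop rather than scan-out, which is exactly why such numbers differ from FCAT-style end-of-pipeline measurements.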

Starting with the GTX 980, we can immediately see why we can’t always write off high-level APIs. Benchmark non-determinism aside, both DirectX 11 and DirectX 12 produce consistent frame times; one is just much, much faster than the other. Both on paper and subjectively in practice, Star Swarm has little trouble maintaining consistent frame times on the GTX 980. Even if DirectX 11 is slow, it is at least consistent.

The story is much the same for the R9 290X. DirectX 11 and DirectX 12 both produce consistent results, with neither API experiencing frame time swings. Meanwhile Mantle falls into the same category as DirectX 12, producing similarly consistent performance and frame times.

Ultimately it’s clear from these results that if DirectX 12 is going to lead to any major differences in frame time consistency, Star Swarm is not the best showcase for it. With DirectX 11 already producing consistent results, DirectX 12 has little to improve on.
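For readers who want to go beyond eyeballing a graph, “consistent” can be put on a quantitative footing by comparing the average frame time against a high percentile. A hedged sketch with made-up numbers:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Invented frame times (ms); in practice these would be loaded from a
    // log like the frametimes.csv sketched earlier.
    std::vector<double> ms = {16.6, 16.8, 16.5, 17.1, 16.7,
                              33.4, 16.6, 16.9, 16.5, 16.7};
    std::sort(ms.begin(), ms.end());

    double avg = std::accumulate(ms.begin(), ms.end(), 0.0) / ms.size();
    size_t idx = std::min(ms.size() - 1,
                          static_cast<size_t>(ms.size() * 0.99));
    std::printf("average %.1f ms, 99th percentile %.1f ms\n", avg, ms[idx]);
    // A wide gap between the two numbers means hitching, even when the
    // average frame rate looks healthy.
    return 0;
}
```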

Finally, along with our frame time consistency graphs, we have also recorded videos of shorter run-throughs on both the GeForce GTX 980 and Radeon R9 290X. With YouTube now supporting 60fps, these videos are frame-accurate representations of what we see when we run the Star Swarm benchmark, showing first-hand the overall frame time consistency among all configurations, and of course the massive difference in performance.

Comments

  • Mr Perfect - Sunday, February 8, 2015 - link

    That's not what he's saying though. He said TDP is some measure of how much of the power is "wasted" as heat, as if there's some way to figure out what part of the 165 watts is doing computational work and what part is just turning into heat without doing any computational work. That's not what TDP measures.

    Also, CPUs and GPUs can routinely go past TDP, so I'm not sure where people keep getting the idea that TDP is maximum power draw. It's seen regularly in the benchmarks here at AnandTech. That's usually one of the goals of the power section of reviews: seeing if the manufacturer's TDP estimate of typical power draw holds up in the real world.
  • Mr Perfect - Sunday, February 8, 2015 - link

    Although, now that I think about it, I do remember a time when TDP actually was pretty close to maximum power draw. But then Intel came out with the NetBurst architecture and started defining TDP as the typical power used by the part in real-world use, since the maximum power draw was so ugly. After a lot of outrage from the other companies, they picked up the same practice so they wouldn't seem to be at a disadvantage in regard to power draw. That was ages ago though; TDP hasn't meant maximum power draw for years.
  • Strunf - Sunday, February 8, 2015 - link

    TDP essentially means your GPU can work at that power input for a long time. In the past, CPUs/GPUs ran close to it because they didn't have throttling, idle states, and similar technologies. Today they do, and they can go past the TDP for short periods of time; with the help of thermal sensors they can adjust power as needed without risking burning down the CPU/GPU.
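
The behavior described here maps onto a simple control loop. Below is a toy simulation of boost/throttle regulation, with invented constants and a crude thermal model; it is not any vendor's actual algorithm:

```cpp
#include <cstdio>

int main() {
    // Invented limits: 165 W sustained (TDP), 190 W short-term boost,
    // throttle when the sensor reads above 95 C.
    const double tdpW = 165.0, boostW = 190.0, tempLimitC = 95.0;
    double tempC = 70.0, powerW = boostW;

    for (int step = 0; step < 12; ++step) {
        // Crude thermal model: heat up when drawing more than the cooler
        // can shed (assumed ~150 W here), cool down otherwise.
        tempC += (powerW - 150.0) * 0.3;

        if (tempC > tempLimitC)
            powerW = tdpW * 0.9;   // throttle below TDP to cool off
        else if (tempC > tempLimitC - 10.0)
            powerW = tdpW;         // fall back to the sustainable TDP level
        else
            powerW = boostW;       // headroom available: boost past TDP

        std::printf("step %2d: %5.1f W, %5.1f C\n", step, powerW, tempC);
    }
    return 0;
}
```

Run long enough, the simulated chip boosts past TDP while cool, then oscillates around the sustainable level, which is the sense in which TDP is a long-term rather than instantaneous limit.
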
  • YazX_ - Friday, February 6, 2015 - link

    Dude, it's total system power consumption, not video card only.
  • Morawka - Friday, February 6, 2015 - link

    Are you sure you're not looking at factory-overclocked cards? The 980 has an 8-pin and a 6-pin connector. You've got to subtract the CPU and motherboard power.

    Check any reference review on power consumption

    http://www.guru3d.com/articles_pages/nvidia_geforc...
  • Yojimbo - Friday, February 6, 2015 - link

    Did you notice the 56% greater performance? The rest of the system is going to be drawing more power to keep up with the greater GPU performance. NVIDIA is getting a much greater benefit from having 4 cores instead of 2, for instance. And who knows, maybe the GPU itself was able to run closer to full load. Also, the benchmark is not deterministic, as mentioned several times in the article. It is the wrong sort of benchmark to be using to compare two different GPUs on power consumption unless the test is run a significant number of times. Finally, you said the R9 290X-powered system consumed 14W more in the DX12 test than the GTX 980-powered system, but the list shows it consumed 24W more. Let's not even compare DX11 power consumption using this benchmark, since NVIDIA's performance is 222% higher.
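
The repeatability point above can be made concrete: with a non-deterministic benchmark, the run-to-run spread has to be measured before a single-run difference means anything. A minimal sketch with invented readings:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical wall-power readings (W) from repeated runs of a
    // non-deterministic benchmark; the numbers are invented.
    std::vector<double> watts = {331, 338, 325, 342, 329};

    double mean = 0.0;
    for (double w : watts) mean += w;
    mean /= watts.size();

    double var = 0.0;
    for (double w : watts) var += (w - mean) * (w - mean);
    var /= (watts.size() - 1);  // sample variance

    std::printf("mean %.1f W, stddev %.1f W over %zu runs\n",
                mean, std::sqrt(var), watts.size());
    // If the run-to-run spread is comparable to the gap between two cards
    // (24 W in the figures argued over above), one run can't separate them.
    return 0;
}
```
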
  • MrPete123 - Friday, February 6, 2015 - link

    Win7 will be dominant in businesses for some time, but not on gaming PCs, where this will be of more benefit.
  • Yojimbo - Friday, February 6, 2015 - link

    Most likely the main reasons for consumers not upgrading to Windows 10 will be laziness, comfort, and ignorance.
  • Murloc - Saturday, February 7, 2015 - link

    People who are CPU-bottlenecked are not that kind of people, given the amount of money they spend on GPUs.
  • Frenetic Pony - Friday, February 6, 2015 - link

    FREE. Ok. FREE. F and then R and then E and then another E.
