Frame Time Consistency & Recordings

Last, but not least, we also wanted to look at frame time consistency across Star Swarm, our two vendors, and the various APIs available to them. Next to CPU efficiency gains, one of the other touted benefits of low-level APIs like DirectX 12 is that they let developers better control frame pacing, since the API and driver do less work under the hood and behind an application’s back. Inefficient memory management operations, resource allocation, and shader compiling in particular can result in unexpected and undesirable momentary drops in performance. However, while low-level APIs can improve on this front, that doesn’t necessarily mean high-level APIs are bad at it; the distinction here is between good and better, not bad and good.

On a technical note, these frame times are measured within (and logged by) Star Swarm itself. So these are not “FCAT” results that are measuring the end of the pipeline, nor is that possible right now due to the lack of an overlay option for DirectX 12.
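
To illustrate what in-application frame time logging of this sort generally looks like, below is a hypothetical C++ sketch (the names and structure are our own invention, not Star Swarm’s actual code): the engine timestamps each frame on the CPU as it is prepared and presented, records the delta from the previous frame, and dumps the resulting per-frame log at the end of the run.

    // Hypothetical sketch of in-application frame time logging (assumed names;
    // not Star Swarm's actual code). Each frame is timestamped on the CPU side,
    // and the delta between successive timestamps is the logged frame time.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main()
    {
        using Clock = std::chrono::steady_clock;
        std::vector<double> frameTimesMs;
        Clock::time_point previous = Clock::now();

        for (int frame = 0; frame < 1000; ++frame)
        {
            // RenderAndPresentFrame();  // placeholder for draw submission + Present()

            Clock::time_point now = Clock::now();
            frameTimesMs.push_back(
                std::chrono::duration<double, std::milli>(now - previous).count());
            previous = now;
        }

        // Dump the per-frame log; this is the raw data a frame time graph is built from.
        for (double ms : frameTimesMs)
            std::printf("%.3f\n", ms);
        return 0;
    }

Note that a timer like this records when the CPU finishes preparing and submitting a frame, not when that frame actually reaches the display, which is why in-application logs are not equivalent to an end-of-pipeline measurement such as FCAT.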

Starting with the GTX 980, we can immediately see why we can’t always write off high-level APIs. Benchmark non-determinism aside, both DirectX 11 and DirectX 12 produce consistent frame times; one is just much, much faster than the other. Both on paper and subjectively in practice, Star Swarm has little trouble maintaining consistent frame times on the GTX 980. Even if DirectX 11 is slow, it is at least consistent.

The story is much the same for the R9 290X. DirectX 11 and DirectX 12 both produce consistent results, with neither API experiencing frame time swings. Meanwhile Mantle falls into the same category as DirectX 12, producing similarly consistent performance and frame times.

Ultimately it’s clear from these results that if DirectX 12 is going to lead to any major differences in frame time consistency, Star Swarm is not the best showcase for it. With DirectX 11 already producing consistent results, DirectX 12 has little to improve on.

Finally, along with our frame time consistency graphs, we have also recorded videos of shorter run-throughs on both the GeForce GTX 980 and Radeon R9 290X. With YouTube now supporting 60fps, these videos are frame-accurate representations of what we see when we run the Star Swarm benchmark, showing first-hand the overall frame time consistency among all configurations, and of course the massive difference in performance.


245 Comments


  • junky77 - Friday, February 6, 2015 - link

    Looking at the CPU scaling graphs and CPU/GPU usage, it doesn't look like the situation in other games where the CPU can be maxed out. It does seem like this engine and test might be tailored specifically to this DX12 and Mantle use case.

    The interesting thing is to understand whether the DX11 performance shown here is optimal. The CPU usage is way below max, even for the one core supposedly taking all the load. Something is bottlenecking the performance and it's not the number of cores, threads or clocks.
  • eRacer1 - Friday, February 6, 2015 - link

    So the GTX 980 is using less power than the 290X while performing ~50% better, and somehow NVIDIA is the one with the problem here? The data is clear. The GTX 980 has a massive DX12 (and DX11) performance lead and performance/watt lead over 290X.
  • The_Countess666 - Thursday, February 19, 2015 - link

    it also costs twice as much.

    and this is the first time in roughly 4 generations that nvidia's managed to release a new generation first. it would be shocking if there wasn't a huge performance difference between AMD and nvidia at the moment.
  • bebimbap - Friday, February 6, 2015 - link

    TDP and power consumption are not the same thing, but they are related.
    If I had to write a simple equation, it would be something to the effect of:

    TDP(wasted heat) = (Power Consumption) X (process node coeff) X (temperature of silicon coeff) X (Architecture coeff)

    so basically TDP or "wasted heat" is related to power consumption but not the same thing
    Since they are on the same process node by the same foundry, the difference in TDP vs power consumed would be because Nvidia currently has the more efficient architecture, which also leads to their chips running cooler, both of which lead to less "wasted heat".

    A perfect conductor would have 0 TDP and infinite power consumption.
  • Mr Perfect - Saturday, February 7, 2015 - link

    Erm, I don't think you've got the right term there with TDP. TDP is not defined as "wasted heat", but as the typical power draw of the board. So if TDP for the GTX 980 is 165 watts, that just means that in normal gaming use it's drawing 165 watts.

    Besides, if a card is drawing 165 watts, it's all going to become heat somewhere along the line. I'm not sure you can really decide how many of those watts are "wasted" and how many are actually doing "work".
  • Wwhat - Saturday, February 7, 2015 - link

    No, he's right: TDP means thermal design power and defines the cooling a system needs to run at full power.
  • Strunf - Saturday, February 7, 2015 - link

    It's the same... if a GC draws 165W it needs a 165W cooler... do you see anything moving on your card except the fans? No, so all power will be transformed into heat.
  • wetwareinterface - Saturday, February 7, 2015 - link

    no it's not the same. 165w tdp means the cooler has to dump 165w worth of heat.
    165w power draw means the card needs to have 165w of power available to it.

    if the card draws 300w of power and has 200w of heat output that means the card is dumping 200w of that 300w into the cooler.
  • Strunf - Sunday, February 8, 2015 - link

    It's impossible for the card to draw 300W and only output 200W of heat... unless of course GCs now defy the laws of physics.
  • grogi - Sunday, April 5, 2015 - link

    What is it doing with the remaining 100W?
