Frame Time Consistency & Recordings

Last, but not least, we also wanted to look at frame time consistency across Star Swarm, our two vendors, and the various APIs available to them. Alongside CPU efficiency gains, one of the other touted benefits of low-level APIs like DirectX 12 is the ability for developers to better control frame pacing, because the API and driver are doing fewer things under the hood and behind an application’s back. Inefficient memory management, resource allocation, and shader compilation in particular can result in unexpected and undesirable momentary drops in performance. However, while low-level APIs can improve on this aspect, that doesn’t necessarily mean high-level APIs are bad at it; the distinction here is between good and better, not between bad and good.

On a technical note, these frame times are measured within (and logged by) Star Swarm itself. So these are not “FCAT” results that are measuring the end of the pipeline, nor is that possible right now due to the lack of an overlay option for DirectX 12.
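To illustrate what in-application frame time logging looks like in principle, here is a minimal Python sketch. This is not Star Swarm’s actual code; the function names and the simulated workload are ours, purely for illustration of how per-frame times are captured inside the application loop and summarized.

```python
import random
import statistics
import time

def run_frames(render_frame, num_frames=300):
    """Log per-frame times (in ms) from inside the application's own loop,
    the same idea as an engine's built-in logging (illustrative only)."""
    frame_times_ms = []
    prev = time.perf_counter()
    for _ in range(num_frames):
        render_frame()                         # stand-in for simulate + submit + present
        now = time.perf_counter()
        frame_times_ms.append((now - prev) * 1000.0)
        prev = now
    return frame_times_ms

def consistency_report(frame_times_ms):
    """Summarize consistency: average plus 95th/99th percentile frame times."""
    ordered = sorted(frame_times_ms)
    return {
        "avg_ms": statistics.mean(frame_times_ms),
        "p95_ms": ordered[int(len(ordered) * 0.95) - 1],
        "p99_ms": ordered[int(len(ordered) * 0.99) - 1],
    }

if __name__ == "__main__":
    # Simulated workload: a steady ~16 ms frame with an occasional ~40 ms hitch,
    # the kind of spike that behind-the-scenes driver work can introduce.
    fake_frame = lambda: time.sleep(0.040 if random.random() < 0.02 else 0.016)
    print(consistency_report(run_frames(fake_frame)))
```

In a consistent configuration the 95th/99th percentile values sit close to the average; in a hitchy one they pull far away from it, even when the average frame rate looks healthy.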

Starting with the GTX 980, we can immediately see why we can’t always write off high-level APIs. Benchmark non-determinism aside, both DirectX 11 and DirectX 12 produce consistent frame times; one is just much, much faster than the other. Both on paper and subjectively in practice, Star Swarm has little trouble maintaining consistent frame times on the GTX 980. Even if DirectX 11 is slow, it is at least consistent.

The story is much the same for the R9 290X. DirectX 11 and DirectX 12 both produce consistent results, with neither API experiencing frame time swings. Meanwhile Mantle falls into the same category as DirectX 12, producing similarly consistent performance and frame times.

Ultimately it’s clear from these results that if DirectX 12 is going to lead to any major differences in frame time consistency, Star Swarm is not the best showcase for it. With DirectX 11 already producing consistent results, DirectX 12 has little to improve on.

Finally, along with our frame time consistency graphs, we have also recorded videos of shorter run-throughs on both the GeForce GTX 980 and Radeon R9 290X. With YouTube now supporting 60fps, these videos are frame-accurate representations of what we see when we run the Star Swarm benchmark, showing first-hand the overall frame time consistency among all configurations, and of course the massive difference in performance.

245 Comments

  • loguerto - Saturday, February 7, 2015 - link

    Microsoft is on the right track, but still, Mantle is the boss!
  • FXi - Saturday, February 7, 2015 - link

    I'm sadly more curious as to whether the 6-core chips prove their worth. A lot of rumors seem to suggest that DX12 might finally show that a 6-core matters, but nothing here shows that. That's a very key issue when it comes to whether to go for a higher end chip or stick with the 4-core CPUs.
  • GMAR - Saturday, February 7, 2015 - link

    Excellent article. Thank you!
  • Shahnewaz - Saturday, February 7, 2015 - link

    Wait a minute, isn't the GTX 980 a 165W TDP card? Then how is it pulling over 200 watts?
  • eRacer1 - Sunday, February 8, 2015 - link

    The GTX 980 isn't pulling over 200W. The numbers shown are system power consumption not video card power consumption. The GTX 980 system power consumption isn't unusually high.

    Also, the system power consumption numbers are understating the power difference between the GTX 980 and Radeon 290X cards themselves under DX12. The GTX 980 has such a large performance advantage over the 290X in DX12 that the CPU is also using more power in the GTX 980 system to keep up with the video card.

    If anything the 290X power consumption is "too low", especially under DX12. To me it looks like the GPU is being underutilized, which seems to be the case based on the low FPS results and power consumption numbers. That could be due to many reasons: poor driver optimization, 290X architectural limitations, benchmark bug or design choice, Windows 10 issue, 290X throttling problem, etc. Hopefully, for AMD's sake, those issues can be worked out before the Windows 10 launch.
  • Shahnewaz - Sunday, February 8, 2015 - link

    That doesn't explain the <20W difference in both systems.
    And it's not like the CPU usage is also radically different.
    Remember, the TDP difference between the GPUs is a massive 125W (290W vs 165W).
  • eRacer1 - Sunday, February 8, 2015 - link

    "That doesn't explain the <20W difference in both systems. And it's not like the CPU usage is also radically different."

    Looking at the CPU usage graphs in the review, the GTX 980 DX12 CPU average across all four cores is about 80%, while the 290X average is only about 50%. So the CPU in the GTX 980 system is doing roughly 60% more work. That alone could easily account for 20+ watts of extra CPU power consumption in the GTX 980 system. The ~60% higher CPU usage in the GTX 980 system makes sense, as the frame rate is 56% higher as well. So what looks like a 14W difference is probably more like a 35W difference between the GTX 980 and 290X video cards themselves.

    But the 35W difference doesn't tell the whole story because the GTX 980 is also 56% faster while using less power. So the GTX 980 has a MASSIVE efficiency advantage under these benchmark conditions. And it is doing it within a reasonable TDP because by the time you back out all of the non-GPU power consumption (CPU, memory, motherboard, hard drive, fans, etc.) and PSU inefficiency losses from the 271W system power consumption you'd likely find that the GTX 980 is under 200W.

    So the question we are left with is why a 290X system, with a 290W TDP card, draws only 285W at the wall under DX12. By the time you subtract the CPU power consumption (which is somewhat less than in the GTX 980 test, due to only being at 50% load instead of 80%), motherboard, memory and other components, the 290X itself is probably using only 200-220W. To me it looks like the 290X is being bottlenecked and as a result isn't using as much power as one would expect. What the source of the bottleneck is, and whether it is correctable, remains a mystery. (A worked-out version of this back-of-the-envelope arithmetic appears after the comments below.)
  • Shahnewaz - Saturday, February 7, 2015 - link

    It looks like AMD GPUs will get some 400%+ performance improvements! Sick!
  • ET - Sunday, February 8, 2015 - link

    My main takeaway from the article is that NVIDIA has done a much better job of optimising its DX11 drivers. AMD needs low level badly.
  • bloodypulp - Sunday, February 8, 2015 - link

    They already have it: Mantle.
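As a footnote to the power discussion above, here is a minimal sketch of eRacer1's back-of-the-envelope arithmetic, using the 271W and 285W system power figures quoted in this thread. The CPU, platform, and PSU-efficiency values are our own illustrative assumptions chosen only to show how the estimate works; they are not measured numbers.

```python
# Rough GPU-only power estimate from wall (system) power, using the figures
# discussed in the comments above. All non-GPU numbers below are assumptions
# for illustration, not measurements.

def estimate_gpu_power(wall_watts, cpu_watts, platform_watts=30.0, psu_efficiency=0.92):
    """Convert wall power to estimated DC output, then subtract assumed CPU and
    platform (motherboard, memory, drives, fans) draw to approximate GPU power."""
    dc_watts = wall_watts * psu_efficiency
    return dc_watts - cpu_watts - platform_watts

# GTX 980 system: 271 W at the wall, CPU near 80% load across four cores (assumed ~50 W).
gtx980 = estimate_gpu_power(wall_watts=271, cpu_watts=50)
# R9 290X system: 285 W at the wall, CPU near 50% load (assumed ~30 W).
r9_290x = estimate_gpu_power(wall_watts=285, cpu_watts=30)

print(f"GTX 980 ~{gtx980:.0f} W, R9 290X ~{r9_290x:.0f} W, delta ~{r9_290x - gtx980:.0f} W")
# With these assumptions: roughly 169 W vs. 202 W, a ~33 W gap -- in the same
# ballpark as the "under 200 W" and "200-220 W" estimates argued in the thread.
```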
