Just What Is Stuttering?

Now that we’ve seen a high-level overview of the rendering pipeline, we can dive into the subject of stuttering itself.

What is stuttering? In practice it’s any rendering anomaly that causes the time between frames to vary noticeably. This is admittedly a very generic definition, but it’s a definition broad enough to encompass all the different causes of stuttering.
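To make that definition concrete: given only a list of frame presentation timestamps, a stutter detector just computes the interval between consecutive frames and flags any interval that deviates sharply from its neighbors. The sketch below is purely illustrative; the 1.5× median threshold is an arbitrary assumption, not an industry standard:

```python
def find_stutters(timestamps_ms, threshold=1.5):
    """Flag frames whose interval exceeds `threshold` times the median
    interval. Timestamps are frame presentation times in milliseconds."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    median = sorted(intervals)[len(intervals) // 2]
    return [i for i, dt in enumerate(intervals) if dt > threshold * median]

# Frames at a steady ~16.7ms cadence, with one 40ms hitch in the middle:
times = [0.0, 16.7, 33.4, 50.1, 90.1, 106.8, 123.5]
print(find_stutters(times))  # only the 40ms gap is flagged
```

Note that this only needs the frame intervals, not any knowledge of what happened inside the pipeline, which is exactly why it cannot tell you *which* component caused the hitch.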

We’ll get into specific scenarios of single-GPU and multi-GPU stuttering in the following pages, but briefly, stuttering can occur at several different points in the rendering pipeline. If the GPU takes longer to render a frame than expected – keeping in mind it’s impossible to accurately predict rendering times ahead of time – that will result in stuttering. If a driver takes too long to prepare a frame for the GPU, backing up the rendering pipeline, that will result in stuttering. If a game simulation step takes too long and dispatches a frame later than it otherwise would, or simply finds itself waiting too long before Windows lets it submit the next frame, that will result in stuttering. And if the CPU/OS is too busy to service an application or driver as soon as it would like, that too will result in stuttering. The point of all of this is that stuttering and other pacing anomalies can occur at different points of the rendering pipeline, and are accordingly the responsibility of different hardware and software components.

Complicating all of this is the fact that Windows is not a real-time operating system, meaning that Windows cannot guarantee that it will execute any given command within a certain period of time. Essentially, Windows will get around to it when it can. To achieve the kind of millisecond-level response time that applications and drivers need to ensure smoothness, Windows has to be overprovisioned so that it always has excess resources. This is also part of the reason the context queue exists in the first place: it serves as a buffer for when Windows can’t pass the next frame down quickly enough.

Ultimately, while Windows will make a best effort to get things done on time, the combination of a non-real-time OS and the widely varied hardware that makes up a PC makes it virtually impossible to eliminate stuttering. Through careful profiling and optimization it’s possible to get very close, but as the PC is not a fixed platform, developers cannot count on any frame or any specific draw call being completed within a certain amount of time. For that kind of rendering pipeline consistency we’d have to look towards fixed platforms such as game consoles.

Moving on, stuttering is usually – though not always – a problem particular to gaming with v-sync disabled. When v-sync is enabled it places a hard floor on the interval between frames presented to the user. For a typical 60Hz monitor this means an interval of no shorter than 16.6ms, and beyond that only in multiples of 16.6ms.

The significance of this is that if a game can consistently simulate and render at more than 60fps, v-sync effectively limits it to 60fps: the application is blocked from submitting any further frames once the context queue fills up, until the next scheduled frame is displayed. This fixed 16.6ms cycle makes it very easy to schedule frames and will typically minimize any stuttering. Of course v-sync also adds latency to the process, since we’re now waiting on the GPU buffer swap.
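The quantizing effect of v-sync is easy to model: the displayed interval is the rendering time rounded up to the next whole refresh period, so a frame that misses 16.6ms by even a fraction waits for the following refresh. This is a back-of-the-envelope sketch that ignores the context queue’s buffering:

```python
import math

REFRESH_MS = 1000 / 60  # ~16.67ms per refresh on a 60Hz monitor

def vsync_interval(render_ms):
    """Displayed frame interval under v-sync: the rendering time
    rounded up to the next whole refresh period."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

print(round(vsync_interval(10.0), 1))  # 16.7 -> effectively 60fps
print(round(vsync_interval(17.0), 1))  # 33.3 -> drops straight to 30fps
```

This is also why a game hovering just above 16.6ms per frame oscillates so visibly between 60fps and 30fps intervals with v-sync on.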

Throwing a few more definitions out before we move on, it’s important that we differentiate between latency and the frame interval. Though “latency” often gets thrown around as the time between frames, within the world of computer science and graphics it has a different definition. Latency in this case is how long the entire rendering pipeline takes from start to end – from the moment the user clicks to the moment the first frame showing a response is displayed. Most readers are probably more familiar with this concept as input lag, as latency in the rendering pipeline is a significant component of input lag.

Latency is closely related to, but not identical to, the frame interval. Unlike latency, the frame interval is merely the time between frames, typically defined as the time (interval) between frames being displayed at the end of the rendering pipeline when the GPU performs a buffer swap. The two usually track each other closely, but thanks to the context queue it’s possible (and sometimes even likely) for a frame to go through the rendering pipeline with a high latency while still being displayed at a consistent frame interval. For that matter, the opposite can also happen.
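A toy model makes this decoupling easy to see: if frames are submitted in a quick burst but a FIFO context queue drains them at one frame per refresh, each frame’s latency (submit to display) climbs even while the interval between displayed frames stays perfectly constant. This sketch assumes a simple queue and a fixed ~16.7ms present cadence, which is a simplification of the real pipeline:

```python
from collections import deque

REFRESH_MS = 1000 / 60

def present_times(submit_times_ms):
    """Drain a FIFO context queue, presenting one frame per refresh.
    Returns (display_times, latencies) for each submitted frame."""
    queue = deque(submit_times_ms)
    display, latencies = [], []
    t = 0.0
    while queue:
        submitted = queue.popleft()
        t = max(t, submitted) + REFRESH_MS  # wait for the next refresh slot
        display.append(t)
        latencies.append(t - submitted)
    return display, latencies

# Three frames submitted almost simultaneously:
disp, lat = present_times([0.0, 1.0, 2.0])
# The display interval is a rock-steady refresh period...
print([round(b - a, 1) for a, b in zip(disp, disp[1:])])
# ...while latency grows frame over frame as the queue backs up:
print([round(x, 1) for x in lat])
```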

When we’re looking at stuttering, what we’re really looking at is the frame interval rather than the latency. It’s possible to measure the latency separately, but whether it’s a software tool like FRAPS or something brute-force such as a high-speed camera timing the gaps between frames, what we’re seeing is the frame interval or a derivative thereof. The context queue means that the frame interval is not equivalent to the latency.

Finally, in our definition of stuttering we also need to somehow define when stuttering becomes apparent. Like input lag and other visual phenomena, there exists a point where stuttering is or isn’t visible to any given user. As we’ve already established that it’s virtually impossible to eliminate stuttering entirely on a variable platform like the PC, stuttering will always be with us to some degree, particularly if v-sync is disabled.

The problem is that this threshold is going to vary from person to person, and as such the idea of what an acceptable amount of stuttering would be is also going to vary depending on who you ask. If a frame takes 5ms longer than the previous, is that going to be noticeable? 10ms? 30ms? And what if this is at 30fps versus 60fps?
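One way to at least put numbers on the question is to count, for a given trace, how many frame-to-frame transitions exceed each candidate threshold. The thresholds below (5/10/30ms) simply echo the examples in the text; which of them is actually “noticeable” remains a judgment call that varies per viewer:

```python
def jitter_report(intervals_ms, thresholds=(5, 10, 30)):
    """Count frame-to-frame interval changes exceeding each threshold.
    The thresholds are candidates to evaluate, not accepted standards."""
    deltas = [abs(b - a) for a, b in zip(intervals_ms, intervals_ms[1:])]
    return {t: sum(d > t for d in deltas) for t in thresholds}

# A ~30fps trace (33.3ms frames) with two rough transitions:
trace = [33.3, 33.3, 45.0, 33.3, 70.0, 33.3]
print(jitter_report(trace))
```

Note that the same absolute variance is a much larger fraction of the frame time at 60fps than at 30fps, which is part of why a single millisecond cutoff is hard to defend.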


The $64K question: where is the cutoff for "good enough" stutter?

In our discussion with AMD, AMD brought up a very simple but very important point: while we can objectively measure instances of stuttering with the right tools, we cannot objectively measure the impact of stuttering on the user. We can make suggestions for what’s acceptable and set common-sense guidelines for how much of a variance is too much – similar to how 60fps is the commonly accepted threshold for smooth gameplay – but nothing short of a double-blind trial will tell us whether any given instance of stuttering is noticeable to any given individual.

AMD didn’t have all of the answers to this one, and frankly neither do we. Variance will always exist and so some degree of stuttering will always be present. The only point we can really make is the same point AMD made to us, which is that stuttering is only going to matter when it impacts the user. If the user cannot see stuttering then stuttering should no longer be an issue, even if we can measure some small degree of stuttering still occurring. Like input lag, framerates, and other aspects of rendering, there is going to be a point where stuttering can become “good enough” for most users.

103 Comments

  • Shark321 - Wednesday, March 27, 2013 - link

    Overall a good article, but it has one huge problem. Ryan, you are repeating about 10 times that there is no good tool to replace the Fraps measuring, which is inaccurate.

    But there is. PcPerformance has introduced a new microstutter measuring method weeks ago: http://www.pcper.com/reviews/Graphics-Cards/Frame-...
  • rickcain2320 - Wednesday, March 27, 2013 - link

    I just bought an AMD/ATI card and not only do I have stuttering I have that horrid POWERPLAY kicking in all the time with screen tearing. I'm pulling my hair out and wondered why I didn't buy Geforce. My old 8800GTS was doing great but it finally gave up the ghost one day, I should have stuck with at least something consistent in performance.
  • Deo Domuique - Wednesday, March 27, 2013 - link

    This is the main problem on Anand's end, they need to sit down with a manufacturer firstly, in order to give us at least some valid graphs. It's understandable to a point, you don't bite the hand that feeds you, but... to a point. On the other hand, I trust TechReport's graphs... Actually TR is one of the very few websites I trust.
  • lally - Wednesday, March 27, 2013 - link

    There's actually been a lot of research on frame jitter's effects on people. You measure how well people do a specific task with different amounts of it, and compare their performance on the task to the jitter.

    http://lmgtfy.com/?q=virtual+reality+frame+rate+ji...
  • NerdT - Wednesday, March 27, 2013 - link

    First of all, it's a very good read. Thanks.

    Re problem of GPUView "Furthermore it still doesn’t show us when a GPU buffer swap actually takes place and the user sees a new frame, and that remains the basis of any kind of fine-grained look into stuttering." :

    It can actually show you a "flip queue" in yellow color where you can see when the frame was started to get flipped with the front buffer, the end of the flip process, and the wait time until it reaches VSync signal and that's the time user sees the frame. Not sure why you mentioned this. Better to revise it. I have been using GPUView for about two years and it's really unique, no other tool can yet compete with it.
  • mikato - Wednesday, March 27, 2013 - link

    Nvidia: ok we knew our ride here would end sometime. No more competitive advantage "secret bonus" in performance.

    AMD fanboy: argh, as usual my AMD parts will perform better with time, and not get the respect deserved since all the benchmarks were done already.
  • JeBarr - Thursday, March 28, 2013 - link

    What a long drawn out way of helping AMD in the PR department.

    Unlike most commenters, what I took away from this article is the fact that Ryan Smith is no longer qualified to conduct GPU benchmarks.

    GPUView too complicated? Seriously?

    lol.
  • Death666Angel - Thursday, March 28, 2013 - link

    First of all: Great read! Very technical, but very interesting and still easy to understand. :)

    Concerning V-Sync: I always enable it when I start playing a game for the first time. But 3 times out of 5, the gameplay gets too sluggish (that would probably be the added latency). So I have to turn it off and live with screen tearing and too many frames being rendered. It's a shame.

    And reading all this and the issues involved, it makes me wonder how Oculus and the involved parties are getting around this problem. They are working on minimizing latency left and right. I would like to see their input on this and if they are only optimizing for a few hardware setups. :)
  • LoccOtHaN - Wednesday, April 3, 2013 - link

    Mirillis Action! That program is an alternative to Fraps (no stuttering! and it's very light). RECOMMENDED by Ne01
  • KilledByAPixel - Thursday, April 4, 2013 - link

    It is great to finally see someone deconstructing the issue of stutter in games, it drives me nuts! I also wrote an article that actually offers a solution to this problem. I developed a simple system that allows games to smooth out their delta by predicting the time when a frame will be rendered rather than using the measured delta from the update.

    http://frankforce.com/?p=2636
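The delta-smoothing idea described in the last comment can be sketched roughly as follows. This is an interpretation of the concept, not the linked article's actual code, and the 0.1 blend factor is an arbitrary choice for illustration:

```python
class SmoothedClock:
    """Blend the raw measured frame delta toward a running estimate so
    a single slow frame doesn't jolt the simulation step."""
    def __init__(self, initial_dt=1 / 60, blend=0.1):
        self.dt = initial_dt
        self.blend = blend

    def tick(self, measured_dt):
        # Move the predicted delta a small step toward the measurement.
        self.dt += (measured_dt - self.dt) * self.blend
        return self.dt

clock = SmoothedClock()
# A 50ms hitch in an otherwise steady ~16.7ms stream barely moves dt:
for measured in [1 / 60, 1 / 60, 0.050, 1 / 60]:
    dt = clock.tick(measured)
print(round(dt * 1000, 2))  # still close to 16.7ms, not 50ms
```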
