Just What Is Stuttering?

Now that we’ve seen a high-level overview of the rendering pipeline, we can dive into the subject of stuttering itself.

What is stuttering? In practice it’s any rendering anomaly that causes the time between frames to vary noticeably. This is admittedly a very generic definition, but it’s also the definition necessary to encompass all of the different causes of stuttering.

We’ll get into specific scenarios of single-GPU and multi-GPU stuttering in the following pages, but briefly, stuttering can occur at several different points in the rendering pipeline. If the GPU takes longer to render a frame than expected – keeping in mind it’s impossible to accurately predict rendering times ahead of time – then that would result in stuttering. If a driver takes too long to prepare a frame for the GPU, backing up the rendering pipeline, that would result in stuttering. If a game simulation step takes too long and dispatches a frame later than it otherwise would have, or simply finds itself waiting too long before Windows lets it submit the next frame, that would result in stuttering. And if the CPU/OS is too busy to service an application or driver as soon as it would like, that would result in stuttering. The point of all of this is that stuttering and other pacing anomalies can occur at different points in the rendering pipeline, and fall under the responsibility of different hardware and software components.
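
The timings in the sketch below are invented for illustration, as is the 16.6ms budget; the only point is that a slowdown in any single stage is enough to make the gap between finished frames noticeably longer. A minimal sketch:

```python
# Toy model: a frame's wall-clock cost is the sum of its pipeline stages.
# A slowdown in any one stage (simulation, driver, GPU, OS scheduling) shows
# up as a longer gap between finished frames. All numbers are invented.
frames = [
    # (simulation_ms, driver_ms, gpu_ms, os_jitter_ms)
    (4.0, 2.0, 9.0, 0.5),   # normal frame
    (4.0, 2.0, 9.0, 0.5),   # normal frame
    (4.0, 7.5, 9.0, 0.5),   # driver takes too long preparing the frame
    (4.0, 2.0, 9.0, 0.5),   # normal frame
    (4.0, 2.0, 16.0, 3.0),  # GPU runs long and the OS is busy
]

for i, stages in enumerate(frames):
    total = sum(stages)
    print(f"frame {i}: {total:.1f} ms  "
          f"({'stutter' if total > 16.6 else 'ok'} vs. a ~16.6 ms budget)")
```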

Complicating all of this is the fact that Windows is not a real-time operating system, meaning that Windows cannot guarantee that it will execute any given command within a certain period of time. Essentially, Windows will get around to it when it can. In order to achieve the kind of millisecond-level response times that applications and drivers need to ensure smoothness, Windows has to be overprovisioned so that it always has excess resources available. This is also part of the reason the context queue exists in the first place: it serves as a buffer for the times when Windows can’t get the next frame passed down quickly enough.
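
As a rough rule of thumb, each frame already waiting in the context queue buys about one refresh interval’s worth of slack before a scheduling delay becomes visible. The sketch below does nothing more than spell that relationship out, with invented delay values:

```python
VSYNC = 16.6  # ms per refresh on a 60 Hz display (rounded, as in the article)

# If Windows is late getting back to the game or the driver, the delay stays
# invisible as long as the context queue still holds finished frames for the
# display to consume in the meantime. Each buffered frame buys roughly one
# refresh interval of slack. All of these numbers are invented for illustration.
for buffered_frames in (0, 1, 2, 3):
    slack_ms = buffered_frames * VSYNC
    for os_delay_ms in (5.0, 12.0, 25.0, 40.0):
        outcome = "hidden by the queue" if os_delay_ms <= slack_ms else "visible hitch"
        print(f"{buffered_frames} buffered frame(s), {os_delay_ms:4.1f} ms delay: {outcome}")
```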

Ultimately, while Windows will make a best effort to get things done on time, the fact of the matter is that between the OS and the widely varied hardware that PCs are built from, the software/hardware stack makes it virtually impossible to eliminate stuttering. Through careful profiling and optimization it’s possible to get very close, but because the PC is not a fixed platform, developers cannot count on any frame or any specific draw call completing within a certain amount of time. For that kind of rendering pipeline consistency we’d have to look towards fixed platforms such as game consoles.

Moving on, stuttering is usually – though not always – a problem particular to gaming with v-sync disabled. When v-sync is enabled it places a hard floor on the time between frames presented to the user. For a typical 60Hz monitor this means an interval no shorter than 16.6ms, and otherwise only multiples of 16.6ms beyond that.

The significance of this is that if a game can consistently simulate and render at more than 60fps, v-sync effectively limits it to 60fps: once the context queue fills up, the application is blocked from submitting any further frames until the next scheduled frame is displayed. This fixed 16.6ms cycle makes it very easy to schedule frames and will typically minimize any stuttering. Of course, v-sync also adds latency to the process, since we’re now waiting on the GPU buffer swap.
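
To see how that quantization works, the sketch below takes some hypothetical GPU completion times and holds each finished frame until the next refresh boundary; every on-screen interval comes out as a multiple of 16.6ms, and the one slow frame shows up as a 33.2ms gap:

```python
import math

VSYNC = 16.6  # ms between refreshes on a 60 Hz display (rounded, as above)

# Hypothetical times (ms) at which the GPU finishes rendering each frame;
# the fourth frame takes noticeably longer than the rest.
finish_times = [10.0, 25.0, 41.0, 70.0, 84.0]

# With v-sync, a finished frame is held until the next refresh boundary.
display_times = [math.ceil(t / VSYNC) * VSYNC for t in finish_times]
intervals = [round(b - a, 1) for a, b in zip(display_times, display_times[1:])]

print("display times (ms):", [round(t, 1) for t in display_times])
print("intervals (ms):    ", intervals)  # multiples of 16.6; the slow frame shows as 33.2
```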

Throwing out a few more definitions before we move on, it’s important to differentiate between latency and the frame interval. Though “latency” often gets thrown around as the time between frames, within the world of computer science and graphics that isn’t accurate; latency has a different definition. Latency in this case is how long the entire rendering pipeline takes from start to end – from the moment the user clicks to the moment the first frame showing a response is displayed. Most readers are probably more familiar with this concept as input lag, since latency in the rendering pipeline is a significant component of input lag.

Latency is closely related to, but not identical to, the frame interval. The frame interval is merely the time between frames, typically defined as the time between frames being displayed at the end of the rendering pipeline, when the GPU performs a buffer swap. The two normally track each other closely, but thanks to the context queue it’s possible (and sometimes even likely) for a frame to go through the rendering pipeline with a high latency while still being displayed at a consistent frame interval. For that matter, the opposite can also happen.
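
A trivial way to illustrate the distinction: assume a context queue that stays three frames deep (an arbitrary depth picked purely for the example), so that each frame reaches the screen three refreshes after it was submitted. The latency is high on every frame, yet the frame interval never wavers:

```python
VSYNC = 16.6        # ms per refresh on a 60 Hz display (rounded, as above)
QUEUE_DEPTH = 3     # an arbitrary queue depth chosen for the example

# Hypothetical times (ms) at which the game samples input and submits each frame.
submit_times = [i * VSYNC for i in range(8)]

# Simplification: with the context queue staying full, each frame reaches the
# screen QUEUE_DEPTH refreshes after it was submitted.
display_times = [t + QUEUE_DEPTH * VSYNC for t in submit_times]

latencies = [round(d - s, 1) for s, d in zip(submit_times, display_times)]
intervals = [round(b - a, 1) for a, b in zip(display_times, display_times[1:])]

print("latency per frame (ms):", latencies)  # ~49.8 ms from submission to screen
print("frame intervals (ms):  ", intervals)  # a steady 16.6 ms between frames
```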

When we’re looking at stuttering, what we’re really looking at is the frame interval rather than the latency. It’s possible to measure the latency separately, but whether it’s a software tool like FRAPS or something brute-force such as using a high-speed camera to measure the time between frames, what we’re seeing is the frame interval or a derivation thereof. The context queue means that the frame interval is not equivalent to the latency.
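
For what it’s worth, turning a per-frame timestamp log, the kind of data such tools produce, into frame intervals and instantaneous framerates is trivial. The timestamps below are invented for illustration; note that nothing in such a log reveals the latency behind the intervals:

```python
# Hypothetical per-frame timestamps (ms), the kind of log a frame-interval
# measurement tool produces. Real logs would of course be much longer.
timestamps_ms = [0.0, 16.8, 33.1, 50.2, 92.4, 108.9]

# Frame intervals are simply consecutive differences...
intervals = [round(b - a, 1) for a, b in zip(timestamps_ms, timestamps_ms[1:])]
# ...and instantaneous framerate is their reciprocal.
fps = [round(1000.0 / i, 1) for i in intervals]

print(intervals)  # [16.8, 16.3, 17.1, 42.2, 16.5]; the 42.2 ms gap is the stutter
print(fps)        # note: nothing here tells us the pipeline's latency
```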

Finally, in our definition of stuttering we also need to somehow define when stuttering becomes apparent. Like input lag and other visual phenomena, there exists a point where stuttering is or isn’t visible to any given user. As we’ve already established that it’s virtually impossible to eliminate stuttering entirely on a variable platform like the PC, stuttering will always be with us to some degree, particularly if v-sync is disabled.

The problem is that this threshold is going to vary from person to person, and as such the idea of what an acceptable amount of stuttering would be is also going to vary depending on who you ask. If a frame takes 5ms longer than the previous, is that going to be noticeable? 10ms? 30ms? And what if this is at 30fps versus 60fps?
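
A bit of simple arithmetic shows why the baseline framerate matters so much here: the same absolute spike is a far larger share of the frame time at 60fps than at 30fps.

```python
# The same absolute frame-time spike is a much larger fraction of a frame at
# 60fps than at 30fps, which is part of why a single threshold is hard to pick.
for base_fps in (30, 60):
    base_ms = 1000.0 / base_fps
    for spike_ms in (5, 10, 30):
        pct = 100.0 * spike_ms / base_ms
        print(f"{base_fps} fps (~{base_ms:.1f} ms/frame): a +{spike_ms} ms frame "
              f"is {pct:.0f}% longer than normal")
```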


The $64K question: where is the cutoff for "good enough" stutter?

In our discussions with AMD, they brought up a very simple but very important point: while we can objectively measure instances of stuttering with the right tools, we cannot objectively measure the impact of stuttering on the user. We can make suggestions for what’s acceptable and set common-sense guidelines for how much of a variance is too much – similar to how 60fps is the commonly accepted threshold for smooth gameplay – but nothing short of a double-blind trial will tell us whether any given instance of stuttering is noticeable to any given individual.
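
To make the measurement half of that concrete, a tool could, for example, flag any frame whose interval deviates from the recent average by more than some threshold. The threshold and window in the sketch below are arbitrary choices on our part, exactly the kind of common-sense guideline described above rather than an agreed-upon standard:

```python
def flag_stutters(intervals_ms, threshold_ms=8.0, window=5):
    """Flag frames whose interval deviates from the recent average by more than
    threshold_ms. The threshold and window are arbitrary, illustrative choices;
    they are a guideline of the common-sense kind, not an agreed-upon standard."""
    flagged = []
    for i, interval in enumerate(intervals_ms):
        recent = intervals_ms[max(0, i - window):i] or [interval]
        baseline = sum(recent) / len(recent)
        if abs(interval - baseline) > threshold_ms:
            flagged.append((i, round(interval, 1), round(baseline, 1)))
    return flagged

# An invented frame-interval trace (ms) with two obvious hitches.
sample = [16.7, 16.9, 16.5, 35.0, 16.6, 17.0, 16.4, 16.8, 52.3, 16.7]
for idx, interval, baseline in flag_stutters(sample):
    print(f"frame {idx}: {interval} ms vs. recent average of {baseline} ms")
```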

AMD didn’t have all of the answers to this one, and frankly neither do we. Variance will always exist and so some degree of stuttering will always be present. The only point we can really make is the same point AMD made to us, which is that stuttering is only going to matter when it impacts the user. If the user cannot see stuttering then stuttering should no longer be an issue, even if we can measure some small degree of stuttering still occurring. Like input lag, framerates, and other aspects of rendering, there is going to be a point where stuttering can become “good enough” for most users.

Comments

  • Juddog - Tuesday, March 26, 2013 - link

    What the hell you talking about? Network latency is an entirely different subject.
  • Juddog - Tuesday, March 26, 2013 - link

    I had meant the above as a reply to the guy talking about network fragmentation; I'm not sure why the reply in the new format doesn't auto-nest the response.
  • danielkza - Tuesday, March 26, 2013 - link

    Because then the measurement wouldn't be representative of the performance users will actually see?
  • polaco - Tuesday, March 26, 2013 - link

    Thanks a lot for this interesting article. It's astonishing to see how minimal software issues can severely degrade performance and undermine efforts made in other areas, making a company less competitive, with all the money losses that brings with it.
    It's also a reminder of how important it is to implement deep quality and performance evaluations in software development. It's a shame that in today's software industry delivery dates are so often more important than quality, and programmers end up delivering half-baked applications from equally half-baked requirements.
    Thanks again.
  • sudz - Tuesday, March 26, 2013 - link

    Good to know I'm not going crazy. Almost every game I play has a decent frame rate, but still doesn't seem smooth. (Gigabyte Windforce 6850 OC) Tried underclocking, overclocking, different PCs... I thought I had a dud card.
  • DemBones79 - Tuesday, March 26, 2013 - link

    Reading through the whole article, I became increasingly convinced that it's not that FRAPS is necessarily a bad tool for measuring this, but that people need guidance in how to interpret the graphs correctly.

    The first time I saw a frame latency (or whatever you're calling them now) graph, my first impression wasn't, "Wow, look at all these little latency spikes." It was, "Holy sh*t! Look at those huge freaking spikes!" It was a simple matter of severity. I think anyone can take a look at the "heartbeat", see that it is a recurring pattern with a relatively consistent frequency, and- while they may not be able to say for certain if it is indicative of a problem- they can say that it is "normal" for that particular card. It's the huge spikes, the ones that aren't occurring at consistent intervals, that are so much more severe than the "heartbeat", that are the issue.

    How hard would it be for a reviewer to draw a pair of horizontal lines across the graph to indicate the limits of "normal" stuttering, where anything beyond the lines in either direction would be considered "abnormal"? A method of separating the signal from the noise.

    Furthermore, I thought it was reviewers noticing a difference- that framerate alone couldn't explain- in the way games played between ATI and NVIDIA that prompted the whole investigation into latency. Several sections in the article mention how FRAPS results may not be indicative of user experience. But it was user experience that prompted using FRAPS to try and explain what was being observed.
  • JPForums - Tuesday, March 26, 2013 - link

    There are two things you need to keep in mind:
    1) Nvidia also agrees with the limitations of FRAPS. In fact, IIRC they were the first to voice the issue that FRAPS records in the wrong place and can only infer what actually needs to be recorded. The author is correct: when Ati and Nvidia agree, we should at least pay attention.

    2) Though your points are AFAIK correct and well articulated, they still point to the issue of FRAPS inferring, rather than recording, the targeted information. The difference is that rather than consistency of output frames, you are looking for consistency of simulation steps. I agree that this is a metric that really needs to be covered. In fact, I would even go as far as matching simulation steps to their corresponding frame times to expose issues when short steps are accompanied by long frames or vice versa.

    Unfortunately, FRAPS can't measure any of this directly and even for your points proves to be limited to inference. That said, until a reviewer gets tools that can reveal this information, inference via FRAPS is better than no information at all. Pcperspective's comments on AMD's stuttering issues are related (as they state) to crossfire setups. I could see the differences between CF and SLI in blind tests (though SLI also has some microstutter) and this only confirms it. The runt frames only add fuel to the fire. I'm open to using AMD in single-GPU builds, but only use Nvidia for multi-GPU builds. Perhaps this will change in July, but I'm guessing there will still be plenty of work to do.
  • hero1 - Tuesday, March 26, 2013 - link

    Long-time reader, first-time commenter. I really liked this article, and have liked most of the articles here. What I want to say is, I hope that AMD fixes their drivers and addresses both single- and dual-GPU issues. I personally didn't have any stuttering when I had 2x 7970s, but they sometimes lost the link to each other and my system would only see one. I switched to the Titan since I got it for a reasonable price. Now this article makes me wonder whether I should go back and grab the 2x 7970s and save some cash, in the hope that AMD has the multi-GPU issue solved by early summer. It's good to see them working to address the issue, and I hope we never have to encounter this again once it's done with. The next step should be how their multi-GPU solutions scale. Thanks Ryan and keep up the good work.
  • Hrel - Tuesday, March 26, 2013 - link

    That was a good breakdown of Direct3D. I'd like to see another one for OpenGL if we could. A side by side comparison would be nice.
