Scanout and the Display

Alright. So depending on the game, we are somewhere between 13ms and 58ms after our mouse was moved. The GPU has just finished rendering and swapped the completed frame to the front buffer. What happens next is called scanout: the frame is sent out the DVI-I port, over the cable, and to the monitor.

If our monitor's refresh rate is 60Hz (as is typical these days), it will take something like 16ms to send the full frame to the monitor, plus about half a millisecond of "blanking" between frames, giving us 16.67ms of transmission delay. Here we are limited by the bandwidth capabilities of DVI, HDMI and DisplayPort and by the timing standards put forth by VESA. So sending a full frame of anything to the display adds 16.67ms of input lag. Some monitors will display this data as it is received, but others will latch the input, meaning the full frame must arrive before it can be displayed (but let's not get too far ahead of ourselves). Either way, we will consider the latency of this step to be at least one frame, as the monitor still needs about 16.67ms to draw the image.
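As a rough illustration (back-of-the-envelope arithmetic only, not a full VESA timing calculation; the function name is ours), the transmission delay follows directly from the refresh rate:

```python
# Rough sketch: scanout (transmission) delay from refresh rate.
# Assumes one full frame is sent per refresh interval, blanking included.

def scanout_delay_ms(refresh_hz: float) -> float:
    """Time to transmit one complete frame to the display, in milliseconds."""
    return 1000.0 / refresh_hz

print(scanout_delay_ms(60))    # ~16.67 ms at 60Hz
print(scanout_delay_ms(120))   # ~8.33 ms at 120Hz -- half the transmission delay
```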

So now we need to talk about vsync. Let's pretend we aren't using it. Let's pretend our game runs at a rock solid, exact 60 FPS and our refresh rate is 60Hz, but the buffer swap happens halfway between each vertical sync. This means every frame being scanned out would be split down the middle. The top half of the frame will be an additional 16.67ms behind (for a total of 33.3ms of lag). Of course, the bottom half, while 16.67ms newer than the top, won't have its own top half sent until the next frame 16.67ms later.

In this particular case, the math works out such that if we average the latency of all the pixels in a split frame, we get the same average latency as if we had enabled vsync. Unfortunately, when framerate is either higher or lower than refresh rate, vsync has the potential to cause tons of problems, and this equivalence no longer holds.
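Here is a minimal sketch of that averaging argument, under the idealized assumptions above (a steady 60 FPS at 60Hz, the swap landing exactly halfway between vsyncs, and latency measured from frame completion to the moment each scanline is drawn):

```python
# Idealized model of average pixel latency, measured from frame completion
# to the moment each scanline is actually drawn during scanout.
REFRESH_MS = 1000.0 / 60   # ~16.67 ms per refresh at 60Hz

# Tearing case: a frame completes mid-refresh. Its bottom half scans out
# immediately (pixels are 0..8.33 ms old); its top half waits for the next
# refresh (pixels are 8.33..16.67 ms old).
bottom_avg = (0 + REFRESH_MS / 2) / 2            # ~4.17 ms
top_avg = (REFRESH_MS / 2 + REFRESH_MS) / 2      # ~12.5 ms
print((bottom_avg + top_avg) / 2)                # ~8.33 ms average

# Vsync case: the frame completes just before the vsync, then scans out top
# to bottom over the whole refresh (pixels are 0..16.67 ms old).
print((0 + REFRESH_MS) / 2)                      # ~8.33 ms average
```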

If our frametime is just longer than 16.67ms with vsync enabled, we will add a full additional frame of latency (with no work being done on the GPU) before we are able to swap the finished buffer to the front for scanout. The wasted work can cause our next frame not to come in before the next vsync, giving us up to two frames of latency (one because we wait to swap and one because of the delay in starting the next frame). If our framerate is higher than 60 FPS, our GPU will have to stop working after rendering until the next vsync. This wastes resources and decreases overall performance, but not by nearly as much as using vsync at framerates below the monitor's refresh rate. The upper limit of additional delay is 16.67ms minus frametime (less than one frame) rather than two full frames.
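A hedged sketch of that swap penalty, assuming an idealized setup where swaps happen only on vsync boundaries (real drivers and flip queues complicate this):

```python
import math

REFRESH_MS = 1000.0 / 60   # ~16.67 ms at 60Hz

def vsync_swap_wait_ms(frametime_ms: float) -> float:
    """Extra wait, beyond rendering time, before a finished frame can be
    swapped, assuming swaps only happen on vsync boundaries."""
    next_vsync = math.ceil(frametime_ms / REFRESH_MS) * REFRESH_MS
    return next_vsync - frametime_ms

print(vsync_swap_wait_ms(15.0))   # ~1.67 ms idle when just under one refresh
print(vsync_swap_wait_ms(17.0))   # ~16.3 ms idle when just over -- nearly a full extra frame
```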

When framerate is lower than refresh rate, using either a 1 frame flip queue with vsync or triple buffering will allow the graphics hardware to continue doing rendering work while adding between 0 and 16.67ms of additional latency (the average will be between the two extremes). So you get the potential benefits of vsync (no tearing and synchronization) without the additional decrease in performance that occurs when no work gets done on the GPU. At framerates higher than refresh rate, when using a render queue, we do end up adding an additional frame of latency per number of frames we render ahead, so this solution isn't a very good one for mitigating input latency (especially in twitch shooters) in high framerate games.
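As a rough check on that render-ahead penalty (a simplification; actual behavior depends on how the driver manages its queue):

```python
REFRESH_MS = 1000.0 / 60   # ~16.67 ms at 60Hz

def render_ahead_latency_ms(queued_frames: int) -> float:
    """Rough extra latency from a render-ahead (flip) queue when the game
    runs faster than the refresh rate: each queued frame must wait for a
    vsync of its own before it can be scanned out."""
    return queued_frames * REFRESH_MS

print(render_ahead_latency_ms(1))   # ~16.67 ms for a 1 frame flip queue
print(render_ahead_latency_ms(3))   # ~50 ms for a 3 frame queue -- it adds up fast
```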

Once the data is sent to the monitor, we've got more delay in store.

We've already mentioned that some LCDs latch the entire frame before display. Beyond this delay, some displays will perform image processing on the input (including scaling if this is not done on the graphics hardware). In some cases, monitors will save two frames to overdrive LCD cells to get them to respond faster. While this can improve the speed at which the picture on the monitor changes, it can add another 16.67ms to 33.3ms of latency to the input (depending on whether one frame is processed or two). Monitors with a game mode or true 120Hz monitors should definitely add less input lag than monitors that require this sort of processing.

Add, on top of all this, the fact that it will take between 2ms and 16ms for the pixels on the LCD to actually switch (response time varies between panels and with the particular transition being made), and we are done: the image is now on the screen.

So what do we have total after the image is flipped to the front buffer?

One frame of lag for transmission (to display a full frame); up to one frame of lag if we enable triple buffering (or a 1 frame render ahead queue when we run at less than refresh rate); up to two frames of lag if we just turn on vsync; at framerates higher than the refresh rate, an additional frame of lag for every frame we render ahead with vsync on; and zero to two frames of lag for the monitor to display the image (if it does extensive image processing).
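Tallying a few of those combinations as a rough sketch (the exact mix depends on settings and on the particular monitor):

```python
REFRESH_MS = 1000.0 / 60   # one 60Hz frame, ~16.67 ms

# Display-side lag, counted in 60Hz frames, for a few of the cases above.
best_case       = 1 * REFRESH_MS            # transmission only: ~16.67 ms
triple_buffered = (1 + 1 + 2) * REFRESH_MS  # + triple buffering + heavy monitor processing: ~66.67 ms
plain_vsync     = (1 + 2 + 2) * REFRESH_MS  # + worst-case vsync + heavy monitor processing: ~83.3 ms
print(best_case, triple_buffered, plain_vsync)
```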

So after the crazy speed from the mouse to the front buffer, here we are waiting ridiculous amounts of time to get the image to appear on the screen. We add, at the very least, 16.67ms of lag in this stage. At worst we're taking on between 66.67ms and 83.3ms, which is totally unacceptable. And that's after the computer is completely done working on the image.

This brings our totals up to about 33ms to 80ms of input lag for typical cases. Our worst case for what we've outlined, however, is about 135ms of latency between mouse movement and final display, which could be discernible and might start to feel mushy. Sometimes game developers stray a bit and incur more input lag than is reasonable; Oblivion and Fallout 3 come to mind.

But don't worry, we'll take a look at some specific cases next.

Comments

  • DerekWilson - Sunday, July 19, 2009 - link

    It was bound to happen wasn't it?

    This has been around for a few years now, but (for obvious reasons) never made it into the mainstream gaming community. And, really, now that high performance mice are much more available it isn't as much of an issue.
  • Kaihekoa - Saturday, July 18, 2009 - link

    From the conclusion this point wasn't clear to me.
  • DerekWilson - Sunday, July 19, 2009 - link

    at present triple buffering in DirectX == a 1 frame flip queue in all cases ...

    so ... it is best to disable triple buffering in DirectX if you are over refresh rate in performance (60FPS generally) ...

    and it is better to enable triple buffering in DirectX if you are under 60 FPS.
  • Squall Leonhart - Wednesday, March 30, 2011 - link

    This is not always the case, actually; there are some DirectX engines, the Age of Empires 3 engine for example, that have hitching when moving around the map unless triple buffering is forced on for the game.
  • billythefisherman - Saturday, July 18, 2009 - link

    First of all I'd like to say well done on the article: you're probably the first person outside of game industry developers to have looked at this rather complex topic, and certainly the first to take into account the whole hardware pipeline as well.

    Sadly though, there are some gaping holes in your analysis, mainly focused around the CPU stage. Your CPU isn't going to run any faster than your GPU (and actually the same is true in reverse) as one is dependent on the other (the GPU is dependent on the CPU). As such, the CPU may finish all of its tasks faster than the GPU, but it will have to wait for the GPU to finish rendering the last frame before it can start on the next frame of logic.

    No game team in the world developing for a console is going to triple buffer their GPU command list.

    I intentionally added 'developing for a console' as this is also an important factor: I'd say around 75% (being very conservative) of mainstream PC games now are based on cross-platform engines. As such, developers will more than likely gear their engines to the consoles as these make up the largest market segment by far.

    The consoles all have very limited memory capacities in comparison to their computational power, and so developers will more than likely try to save memory over computation; thus a double buffered command list is the norm. Some advanced console-specific engines actually drop down to a single command buffer and use CPU-GPU synchronisation techniques because the CPU is faster than the GPU. This kind of thing isn't going to happen on the PC because the GPU is invariably faster than the CPU.

    When porting a game to PC a developer is very unlikely to spend the money re-engineering the core pipeline because of the massive problems that can cause. This can be seen in most 'DirectX 10' games, as they simply add a few more post processing effects to soak up the extra power. You may call it lazy coding; I don't, it's just commercial reality: these are businesses at the end of the day.

    So both your diagrams on the last page are wrong with regards to the CPU stage, as it will take roughly the same amount of time as the GPU in the vast majority of frames because of frame locality, i.e. one frame differs little from the next as the player tends not to jump around in space, so neighbouring frames take similar amounts of time to render.

    Onto my next complaint:
    "If our frametime is just longer than 16.67ms with vsync enabled, we will add a full additional frame of latency (with no work being done on the GPU) before we are able to swap the finished buffer to the front for scanout. The wasted work can cause our next frame not to come in before the next vsync, giving us up to two frames of latency (one because we wait to swap and one because of the delay in starting the next frame)."

    What are you talking about man!?! You don't drop down to 20fps (i.e. two more frames of latency) because you take 17ms to render your frame - you drop down to 30fps! With vsync enabled your graphics processor will be stalled until the next frame, but that's all, and you could possibly kick off your CPU to calculate the next frame to take advantage of that time. Not that that's going to make the slightest jot of difference if you're GPU bound, because you have to wait for the GPU to finish with the command buffer it's rendering (as you don't know where in the command buffer the GPU is).

    As I've said, on the consoles there are tricks you can do to synchronise the GPU with the CPU, but you don't have that low level control of the GPU on the PC as Nvidia/ATI don't want the internals of their drivers exposed to one another.

    And as I've said, not that you'd want to do such a thing on PC, as the CPU is usually going to be slower than the GPU and cause the GPU to stall constantly, hence the reason to double buffer the command buffer in the first place.

    I've also tried to explain in my posts to your triple buffering article why there's a lot of cobblers in the next few paragraphs.
  • DerekWilson - Sunday, July 19, 2009 - link

    Fruit pies? ... anyway...

    Thanks for your feedback. On the first issue, the console development is one of growing importance as much as I would like for it not to be. At some point, though, I expect there will be an inflection point where it will just not be possible to build certain types of games for consoles that can be built on PCs ... and we'll have this before the next generation of consoles. Maybe it's a pipedream, but I'm hoping the development focus will shift back to the PC rather than continue to pull away (I don't think piracy is a real factor in profitability though I do believe publishers use the issue to take advantage of developers and consumers).

    And I get that with the GPU as bottleneck you have that much time to use the CPU as well ... but you /could/ decouple CPU and GPU and gain performance or reduce lag. Currently, it may make sense that if we are GPU limited the CPU stage will effectively equal the GPU stage in latency -- and likewise that if we are CPU limited, the GPU stage effectively equals the CPU stage (because of stalling) in input latency.

    Certainly it is a more complex topic than I illustrated, and if I didn't make that clear then I do apologize. I just wanted to get across the general idea rather than a "this is how it always is" kind of thing ... clearly Fallout 3 has even more input lag than any of my worst case scenarios account for, even with 2 frames of image processing on the monitor ... I have no idea what they are doing ...

    ...

    As for the second issue -- you can get up to two frames of INPUT LAG with vsync enabled and 17ms GPU time.

    you will get up to these two frames (60Hz frames) of input lag at 30FPS ...

    I'm not talking about the frame rate dropping to 2 frames then 1 frame (20 FPS) ... I'm talking about the fact that, at best, your input is gathered 17ms before your frame completes on the GPU (1 frame of input lag) and (because it missed vsync) it will take another frame for that to hit the screen (for a total of two).
  • billythefisherman - Monday, July 20, 2009 - link

    I have to re-iterate: well done on tackling this rather complex issue, I applaud you! (I just wish you hadn't whipped up your punters so much in the benefits of triple buffering!)
  • Gastra - Saturday, July 18, 2009 - link

    For information (quite a lot, if you follow the links) on what an optical mouse sees:
    http://hackedgadgets.com/2008/10/15/optical-mouse-...
  • DerekWilson - Sunday, July 19, 2009 - link

    That's pretty cool stuff ... And it lines up pretty well with our guess at mouse sensor resolution for the G9x.

    It'd still be a lot nicer if we could get the specs straight from the manufacturer though ...
  • PrinceGaz - Friday, July 17, 2009 - link

    "For input lag reduction in the general case, we recommend disabling vsync. For NVIDIA card owners running OpenGL games, forcing triple buffering in the driver will provide a better visual experience with no tearing and will always start rendering the same frame that would start rendering with vsync disabled."

    I'm going to ask this again I'm afraid :) Are you sure, Derek? Does nVidia's triple-buffer OpenGL driver implementation do that, or is it just the same as what most people take triple-buffer rendering to be, that is, having one additional back buffer to render to so as to provide a steady supply of frames when the framerate dips below the refresh rate? Have you got confirmation, either from screenshots or something else (like nVidia saying that is how it works), that OpenGL triple-buffering is any different from Direct3D rendering, or from how AMD handle it?

    Because if you don't, then all you are saying is that triple-buffering is a second back-buffer which is filled to prevent lags when the framerate falls below the refresh rate. Do you know for sure that nVidia OpenGL drivers render constantly when in triple-buffer mode or are you only assuming they do so?
