What are Double Buffering, vsync and Triple Buffering?

When a computer needs to display something on a monitor, it draws a picture of what the screen is supposed to look like and sends this picture (which we will call a buffer) out to the monitor. In the old days there was only one buffer and it was continually being both drawn to and sent to the monitor. There are some advantages to this approach, but there are also very large drawbacks. Most notably, when objects on the display were updated, they would often flicker.


The computer draws into the buffer even as its contents are sent out.
All illustrations courtesy Laura Wilson.


In order to combat the issues with reading from a buffer while drawing to it, double buffering, at a minimum, is employed. The idea behind double buffering is that the computer only draws to one buffer (called the "back" buffer) and sends the other buffer (called the "front" buffer) to the screen. After the computer finishes drawing the back buffer, the program doing the drawing performs something called a buffer "swap." This swap doesn't move any data: it only changes the names of the two buffers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer.


Computer draws to the back, monitor is sent the front.


After a buffer swap, the software can start drawing to the new back buffer and the computer sends the new front buffer to the monitor until the next buffer swap happens. And all is well. Well, almost all anyway.
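
To make the renaming concrete, here is a minimal sketch in C++. The flat pixel-array framebuffer is an illustrative assumption, not how any particular graphics API exposes its buffers; the point is that the swap exchanges two pointers and copies nothing.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Two equally sized pixel buffers; "front" and "back" are just names.
struct DoubleBuffer {
    std::vector<uint32_t> a, b;
    std::vector<uint32_t>* front = &a;   // scanned out to the monitor
    std::vector<uint32_t>* back  = &b;   // drawn to by the software

    explicit DoubleBuffer(std::size_t pixels) : a(pixels), b(pixels) {}

    void swap() { std::swap(front, back); }   // rename only: O(1), no copy
};
```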

In this form of double buffering, a swap can happen at any time. That means the swap can occur while the computer is sending data to the monitor; when it does, the rest of the screen is drawn according to what the new front buffer contains. If the new front buffer is different enough from the old front buffer, a visual artifact known as "tearing" can be seen. This problem shows up often in high framerate FPS games when whipping around a corner as fast as possible: because of the quick motion, every frame is very different, so when a swap happens mid-scan the discrepancy is large and can be distracting.

The most common approach to combat tearing is to wait to swap buffers until the monitor is ready for another image. The monitor is ready after it has fully drawn what was sent to it and the next vertical refresh cycle is about to start. Synchronizing buffer swaps with the vertical refresh is called vsync.
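
In code, vsync simply moves the swap behind a wait for the monitor's refresh. Here is a sketch reusing the DoubleBuffer above; render_frame() and wait_for_vblank() are hypothetical stand-ins for whatever the platform actually provides (typically a single blocking "present" call):

```cpp
void render_frame(std::vector<uint32_t>& target);  // hypothetical renderer
void wait_for_vblank();                            // hypothetical: blocks until the refresh

void vsynced_loop(DoubleBuffer& fb) {
    for (;;) {
        render_frame(*fb.back);   // draw the next frame off-screen
        wait_for_vblank();        // wait out the current scan
        fb.swap();                // safe: the monitor is between frames
    }
}
```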

While enabling vsync does fix tearing, it also caps the internal framerate of the game at, at most, the refresh rate of the monitor (typically 60Hz for most LCD panels). This can hurt performance even when the game doesn't run at 60 frames per second, as artificial delays are still added to effect synchronization. Performance can be cut nearly in half in cases where every frame takes just a little longer than 16.67ms (1/60th of a second): frame rate drops to 30 FPS despite the fact that the game could run at just under 60 FPS. The elimination of tearing and the consistency of framerate, however, do contribute to an added smoothness that double buffering without vsync just can't deliver.
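
The arithmetic behind that worst case is easy to check. In this sketch the 17ms render time is a made-up example of a frame that just misses a 60Hz refresh; vsync rounds every frame up to a whole number of refresh intervals:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;  // 16.67ms per refresh at 60Hz
    const double render_ms  = 17.0;           // just misses one refresh
    // With vsync, the swap waits for the next refresh boundary.
    const double shown_ms = std::ceil(render_ms / refresh_ms) * refresh_ms;
    std::printf("%.1fms of work is shown every %.2fms -> %.0f FPS\n",
                render_ms, shown_ms, 1000.0 / shown_ms);  // 33.33ms -> 30 FPS
}
```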

Input lag also becomes more of an issue with vsync enabled. This is because the artificial delay introduced increases the difference between when something actually happened (when the frame was drawn) and when it gets displayed on screen. Input lag always exists (it is impossible to instantaneously draw what is currently happening to the screen), but the trick is to minimize it.

Our options with double buffering come down to a choice between possible visual problems like tearing without vsync, and an artificial delay with vsync enabled that can hurt performance and increase input lag. But not to worry, there is an option that combines the best of both worlds with no sacrifice in quality or actual performance. That option is triple buffering.


Computer has two back buffers to bounce between while the monitor is sent the front buffer.


The name gives a lot away: triple buffering uses three buffers instead of two. This additional buffer gives the computer enough space to keep one buffer locked while it is being sent to the monitor (to avoid tearing) without preventing the software from drawing as fast as it possibly can (even with one locked buffer, there are still two that the software can bounce back and forth between). The software draws back and forth between the two back buffers and, at best, once every refresh the front buffer is swapped for the back buffer containing the most recently completed fully rendered frame. This does take up some extra space in memory on the graphics card (about 15 to 25MB), but with modern graphics cards shipping with at least 512MB on board, this extra space is no longer a real issue.
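
Here is a sketch of the bookkeeping, following the conventions of the earlier double buffering sketch. The `fresh` flag is an illustrative detail: the swap at vblank should only happen when a frame newer than the one on screen has actually completed.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

struct TripleBuffer {
    std::vector<uint32_t> bufs[3];
    int front   = 0;      // being scanned out to the monitor
    int drawing = 1;      // currently being rendered to
    int ready   = 2;      // newest fully completed frame
    bool fresh  = false;  // is `ready` newer than `front`?

    explicit TripleBuffer(std::size_t pixels)
        : bufs{std::vector<uint32_t>(pixels),
               std::vector<uint32_t>(pixels),
               std::vector<uint32_t>(pixels)} {}

    // Renderer finished a frame: it becomes "ready" (possibly replacing
    // a frame no one will ever see) and drawing continues immediately.
    void frame_done() { std::swap(drawing, ready); fresh = true; }

    // Once per vertical refresh: present the newest completed frame,
    // or leave the old one up if nothing new has finished.
    void on_vblank() {
        if (fresh) { std::swap(front, ready); fresh = false; }
    }
};
```

Note that, as in the double buffered case, only the names move; the renderer never has to wait, and the monitor is never handed a half-drawn buffer.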

In other words, with triple buffering we get the same high actual performance and similarly decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled.

Now, it is important to note that when you look at the "frame rate" of a triple buffered game, you will not see the actual "performance." This is because frame counters like FRAPS only count the number of times the front buffer (the one currently being sent to the monitor) is swapped out. In double buffering, this happens with every frame, even one completed while the monitor is still receiving and drawing the current frame (meaning that it might never be displayed at all if another frame is completed before the next refresh). With triple buffering, front buffer swaps only happen at most once per vsync.

With triple buffering, the software is still drawing the entire time behind the scenes on the two back buffers. This means that when the front buffer swap happens, unlike with double buffering and vsync, there is no artificial delay. And unlike with double buffering without vsync, once we start sending a fully rendered frame to the monitor, we don't switch to another frame in the middle.

This last point does raise the one real caveat with triple buffering. When double buffering without vsync, a frame that completes just a tiny bit after the refresh will tear near the top, and the rest of that frame will carry a bit less lag for most of that refresh than with triple buffering, which has to finish sending the frame it had already started. Even in this case, though, at least part of the frame will be exactly the same between the double buffered and triple buffered output, the delay won't be significant, and it won't have any carryover impact on future frames the way enabling vsync on double buffering does. And even if you count this as an advantage of double buffering without vsync, the advantage only appears below a potential tear.

Let's help bring the idea home with an example comparison of rendering using each of these three methods.

Comments

  • oralpain - Saturday, June 27, 2009 - link

    Even though I've been well aware of how triple buffering works, and how to enable it, I rarely use it.

    Even on my 60Hz LCDs, I usually have a better subjective experience with vsync off. Not exactly sure why this is, but higher FPS, even if I'm not seeing the visual effects of it, is worth it over the elimination of the occasional tearing I notice.

    In the handful of games where I do prefer vsync, I've always tried to use triple buffering.
  • billythefisherman - Saturday, June 27, 2009 - link

    Ok, triple buffering can undeniably offer benefits in certain situations, but saying that turning on triple buffering always gives you a better experience is nonsense.

    Take, for example, the case where you are running under 60Hz: averaged over a number of frames, you'll experience exactly the same amount of lag as double buffering with vsync, but now you have lost some video memory that could have held the top-level mipmap you're currently staring at, so you see a lower quality picture at that point.

    Another problem is that with triple buffering your lag is unequally distributed, because each frame takes a variable amount of time to create/render. This could give a weird feel to your play compared to what you may otherwise be accustomed to - and it'll get worse at lower frame rates.

    Another problem is that developers may take advantage of this lag on the CPU side of things if they've coded for vsync double buffered (which they invariably will do in most modern games): knowing that faster machines may have CPU resources left over, they may speed up the AI update or process more physics calculations in that time.

    Ok, they may not, but a game engine is not a straightforward, simple system that runs everything on the GPU: it consists of many parts all working together to try to produce the lowest lag possible from input to output, and a vsynced double buffered scenario provides the easiest environment to tune that system.

    It's nowhere near as clear-cut as this article makes out.
  • DerekWilson - Saturday, June 27, 2009 - link

    quote:

    "Take for example the case where you are running under 60hz in this case over an average amount of frames you'll experience exactly the same amount of lag as double buffering with vsync"


    This is definitely NOT true at all. You will, in fact, experience the same amount of lag as double buffering WITHOUT vsync. If your real performance is consistently 45 FPS (each frame takes 22.2ms), triple buffering and double buffering without vsync will both deliver 45 FPS with the same latency for the start of the displayed image; average latency will be 1.5 frames. BUT double buffering WITH vsync will only give 30 FPS in this case, and average latency is 2 frames.

    For triple buffering, lag is distributed the same as with double buffering without vsync for the top of the displayed frame (above any tearing).

    The CPU side of rendering for rendering's sake is no longer huge, especially with multicore CPUs. The way a developer handles work between frames won't be hampered on the CPU side by a high framerate unless they have done something wrong.

    I intentionally kept this article simple in order to get the concept across and start talking about the subject. I could have included examples of things like 50 FPS, 45 FPS, and 20 FPS with all three page flipping techniques, but I felt it would just get in the way of itself by making the article unnecessarily longer and more complicated -- and all the examples deliver the same information: that triple buffering is equivalent in lag to double buffering without vsync for the top of the frame and the only time you see significant newer info in a double buffered no vsync situation is after a visible tear.

    Developing /for/ the page flipping method is not the most desirable approach... Unless it's triple buffering :-)
  • billythefisherman - Thursday, July 9, 2009 - link

    Example: your monitor is running at 60fps and your graphics card is running at 45fps. As they are not in sync, with triple buffering the monitor will be displaying the same frame for 2 out of every 3 refreshes, so at best the user sees 40 new frames per second.

    Ok, that's more frames, but if you're looking at what is arguably more important - the amount of lag between your input being sampled and the results being displayed - then you see that you're no better off.

    For example, let's assume your input on the game side is locked to the GPU, which is typically the case in a triple buffering or no-vsync setup.

    If the GPU is running at a constant 45fps, on the first frame you will see 0 lag between rendering and display. The last sample of your analogue input will be, let's say for the sake of simplicity, from ~16.667ms ago.

    On the second frame the monitor will display the same frame because the GPU has finished rendering, and so will be displaying input from ~33.334ms ago, i.e. the frame will now be ~16.667ms old.

    On the third frame the monitor will display the first new frame rendered since the start, which will now be 8.3335ms old (at a constant 45fps), i.e. the input sampled is now ~25.00ms old.

    With double buffered vsync on, your input on frame one will be 16.667ms old, and on the second frame it will be 33.334ms old; then on the third frame it repeats, i.e. it will be 16.667ms old again, etc.

    Multiply this out over a second at 60fps, i.e. 20*16.667 + 20*33.334 + 20*25.002 = ~1500 and 30*16.667 + 30*33.334 = ~1500, and as you can see the lag between your input being sampled and it being displayed is on average the same.

    All the game systems such as physics etc. running on the CPU will have similar lag time characteristics - you won't see that much difference from frame to frame, and now with triple buffering you're sampling at uneven periods of time, which could give undesirable effects.
  • billythefisherman - Thursday, July 9, 2009 - link

    Sorry correction:

    ...

    On the second frame the monitor will display the same frame because the GPU *hasn't* finished rendering

    ...
  • Nighteye2 - Saturday, June 27, 2009 - link

    It looks like Triple Buffering, while delivering good results, also involves a lot of excess rendering of frames that never get displayed.

    Unlike double buffering with vsync, where every rendered frame gets displayed.

    It should be possible to get triple buffering performance with double buffering and vsync: by predicting how long it takes to render a frame (based on the render time of the previous frame plus a small margin), the computer could delay drawing the next frame instead of starting to draw it immediately. If the rendering of the frame finishes just in time instead of shortly after the last refresh, it would eliminate the display lag.
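
    A sketch of what this prediction heuristic might look like, reusing the DoubleBuffer and loop helpers sketched in the article above. The timing primitives here are hypothetical, and, as the reply below notes, the hard part in practice is predicting render time reliably:

    ```cpp
    double now_ms();                // hypothetical monotonic clock
    double next_vblank_ms();        // hypothetical: time of the next refresh
    void sleep_until_ms(double t);  // hypothetical precise sleep

    void predictive_loop(DoubleBuffer& fb) {
        double predicted_ms = 10.0;      // initial guess
        const double margin_ms = 2.0;    // safety margin against misses
        for (;;) {
            // Delay the start of rendering so the frame completes just
            // before the refresh rather than just after the previous one.
            sleep_until_ms(next_vblank_ms() - predicted_ms - margin_ms);
            const double t0 = now_ms();
            render_frame(*fb.back);
            predicted_ms = now_ms() - t0;  // last frame predicts the next
            wait_for_vblank();
            fb.swap();
        }
    }
    ```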
  • DerekWilson - Saturday, June 27, 2009 - link

    When framerate is less than 60 FPS, triple buffering doesn't spin off into oblivion doing work no one sees -- it maintains the same performance as double buffering without vsync but avoids tearing. Predicting rendering time isn't a viable option at this point for games ...
  • Nighteye2 - Saturday, June 27, 2009 - link

    If it renders 300 frames and only 60 frames get shown, doesn't that mean 240 excess frames have been rendered?

    It would be better to conserve energy and have the GPU run less hot by rendering fewer frames, while still getting the exact same output on the screen...
  • Scalarscience - Saturday, June 27, 2009 - link

    I'm late into this so I don't know if Derek (or anyone else) will get around to responding, but there are two things I thought I might bring up. I'll post the second as a separate post in case actual discussion ensues...

    First, the comments have established the differences between 'render ahead' & Double/Triple buffering in DirectX fairly well. But for the people who are actually trying this, the situation is imo potentially confusing. For instance, does forcing triple buffering+vsync via Rivatuner's utility (for games with no native implementation) still keep the default render-ahead setting (ie, 3 frames?) If so then this indeed is the source of a huge latency penalty.

    Even with games that implement Triple Buffering themselves in DirectX, there seems to be some variance and it would be nice for devs to publish their implementation (Valve?) and how it interacts with the 'render ahead' control panel setting. I always find that for FPS (online or otherwise) setting the render ahead for DirectX to 2 instead of 3 helps the game's 'feel', though I do put it back to the default of 3 for single player games where the eye candy is making my machine struggle (and I'm willing to trade some performance for keeping the graphics cranked up.)

    Now some games will have their OWN 'render ahead' implementation, like UT3 and other Unreal3 engine games. I've had to not only set 'render ahead' to 2 but also dig into UT3's ini and disable its native 'one frame render queue' setting (or whatever it is). The last major update did bring that into the GUI settings finally.

    So the question there is how does the DirectX render queue & vsync + double/triple buffering interact? I'm guessing there's at least a few variations in that answer and I would love a discussion or article that begins with the early 3d games (Quake engine, Unreal engine then Source etc) and moves forward in time covering the mainstays in modern FPS games.
  • DerekWilson - Saturday, June 27, 2009 - link

    Let me preface this with: I'm unsure what game developers actually do at this point. If there is enough interest for an article, I'll try and sit down with some game developers and ask them about this.

    But this is what they /should/ do when combining render ahead with triple buffering.

    Start by rendering into the queue. Every vertical refresh, you send the oldest fully completed frame to the front buffer. If you fill up the queue before the next vertical refresh, drop the oldest frame and start rendering another newer one. Continue this until the next vertical refresh comes along.

    The game always renders to whatever buffer is marked current, and front buffer is always swapped with the buffer marked oldest.

    You still end up with a high potential latency of (16.67ms * queue_length), but depending on how the game handles it, this could potentially only happen when frametime >= (16.67ms * queue_length) anyway. The minimum latency in this case is longer than without the render ahead queue as well ...

    but there could be some flexibility in maintaining a minimum number of frames in the queue or even keeping it full until frametime severely dips ... there might be some ways to use this to help SLI/CF play nicer with triple buffering as well. Not that multiGPU needs anything to add more potential lag or anything ...
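
    A sketch of the queueing scheme described above, with completed frames identified by buffer index as in the earlier triple buffering sketch; the bookkeeping is an illustrative assumption, not how any particular driver implements its render ahead queue:

    ```cpp
    #include <cstddef>
    #include <deque>

    struct RenderAheadQueue {
        std::deque<int> completed;     // completed buffer indices, oldest first
        std::size_t queue_length = 3;  // the driver's render ahead limit

        // Renderer finished a frame in buffer `idx`. Returns a buffer
        // freed for re-use (-1 if none): when the queue is full, the
        // oldest completed frame is dropped in favor of newer ones.
        int frame_done(int idx) {
            int freed = -1;
            if (completed.size() == queue_length) {
                freed = completed.front();
                completed.pop_front();
            }
            completed.push_back(idx);
            return freed;
        }

        // Once per vertical refresh: return the oldest completed frame
        // to present, or `front` unchanged if nothing has completed.
        int on_vblank(int front) {
            if (completed.empty()) return front;  // repeat the old frame
            int next = completed.front();
            completed.pop_front();
            return next;  // the caller recycles the old front buffer
        }
    };
    ```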
