What are Double Buffering, vsync and Triple Buffering?

When a computer needs to display something on a monitor, it draws a picture of what the screen is supposed to look like and sends this picture (which we will call a buffer) out to the monitor. In the old days there was only one buffer and it was continually being both drawn to and sent to the monitor. There are some advantages to this approach, but there are also very large drawbacks. Most notably, when objects on the display were updated, they would often flicker.


The computer draws in as the contents are sent out.
All illustrations courtesy Laura Wilson.


In order to combat the issues with reading from while drawing to the same buffer, double buffering, at a minimum, is employed. The idea behind double buffering is that the computer only draws to one buffer (called the "back" buffer) and sends the other buffer (called the "front" buffer) to the screen. After the computer finishes drawing the back buffer, the program doing the drawing performs something called a buffer "swap." This swap doesn't move any data: it only exchanges the names of the two buffers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer.


Computer draws to the back, monitor is sent the front.


After a buffer swap, the software can start drawing to the new back buffer and the computer sends the new front buffer to the monitor until the next buffer swap happens. And all is well. Well, almost all anyway.
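The mechanics described above can be sketched in a few lines of Python. This is a hedged illustration with hypothetical names (`DoubleBuffer`, `swap`); in a real system the flip is done by the graphics driver via page flipping, not by application code:

```python
# Minimal sketch of double buffering: two framebuffers whose roles
# are exchanged by swapping references, not by copying pixels.
class DoubleBuffer:
    def __init__(self, width, height):
        # Each buffer is a flat list of pixel values.
        self.front = [0] * (width * height)  # scanned out to the monitor
        self.back = [0] * (width * height)   # drawn to by the renderer

    def swap(self):
        # No pixel data moves -- only the two names are exchanged.
        self.front, self.back = self.back, self.front

buf = DoubleBuffer(4, 4)
buf.back[0] = 255   # render into the back buffer
buf.swap()          # "flip": back becomes front
print(buf.front[0]) # -> 255, now visible to the monitor
```

Because only references change hands, the swap itself is essentially free regardless of resolution.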

In this form of double buffering, a swap can happen at any time. That means the swap can occur while the computer is sending data to the monitor. When this happens, the rest of the screen is drawn according to what the new front buffer contains. If the new front buffer is different enough from the old front buffer, a visual artifact known as "tearing" can be seen. This problem often shows up in high framerate FPS games when whipping around a corner as fast as possible: because of the quick motion, consecutive frames are very different, so when a swap happens mid-draw the discrepancy is large and can be distracting.
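A tear can be simulated directly. In this sketch (hypothetical `scanout` function, with each frame reduced to a list of rows), the rows sent before the swap come from the old frame and the rows after it from the new one:

```python
# Simulate one scanout during which a buffer swap lands mid-frame.
def scanout(old_frame, new_frame, swap_row):
    """Rows above swap_row were already sent from the old front
    buffer; rows from swap_row on come from the new one."""
    rows = []
    for y in range(len(old_frame)):
        src = old_frame if y < swap_row else new_frame
        rows.append(src[y])
    return rows

old = ["old"] * 6   # frame before the quick camera motion
new = ["new"] * 6   # very different frame after it
print(scanout(old, new, swap_row=2))
# -> ['old', 'old', 'new', 'new', 'new', 'new']
```

The boundary between the `old` and `new` rows is exactly the visible tear line.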

The most common approach to combat tearing is to wait to swap buffers until the monitor is ready for another image. The monitor is ready after it has fully drawn what was sent to it and the next vertical refresh cycle is about to start. Synchronizing buffer swaps with the vertical refresh is called vsync.

While enabling vsync does fix tearing, it also caps the internal framerate of the game at the refresh rate of the monitor (typically 60Hz for most LCD panels). This can hurt performance even when the game doesn't run at 60 frames per second, as artificial delays are still added to effect synchronization. Performance can be cut nearly in half in cases where every frame takes just a little longer than 16.67 ms (1/60th of a second): frame rate drops to 30 FPS despite the fact that the game could run at just under 60 FPS. The elimination of tearing and consistency of framerate, however, do contribute to an added smoothness that double buffering without vsync just can't deliver.
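The halving effect is easy to verify with arithmetic. This back-of-the-envelope sketch (hypothetical `vsynced_fps` helper) rounds every frame up to the next 60Hz refresh boundary, as vsync effectively does:

```python
# With vsync, a frame that misses a refresh must wait for the next
# one, so render time is rounded up to a whole number of intervals.
import math

REFRESH = 1 / 60  # seconds per refresh at 60Hz (~16.67 ms)

def vsynced_fps(render_time):
    # The frame is displayed at the first refresh boundary at or
    # after the moment it finishes rendering.
    intervals = math.ceil(render_time / REFRESH)
    return 1 / (intervals * REFRESH)

print(round(vsynced_fps(0.0160)))  # 16.0 ms/frame -> 60 FPS
print(round(vsynced_fps(0.0170)))  # 17.0 ms/frame -> 30 FPS
```

A frame time only 0.4 ms over the refresh interval costs an entire extra interval, which is where the drop straight from 60 to 30 FPS comes from.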

Input lag also becomes more of an issue with vsync enabled. This is because the artificial delay introduced increases the difference between when something actually happened (when the frame was drawn) and when it gets displayed on screen. Input lag always exists (it is impossible to instantaneously draw what is currently happening to the screen), but the trick is to minimize it.

Our options with double buffering come down to a choice between possible visual problems like tearing without vsync, and an artificial delay that can negatively affect both performance and input lag with vsync enabled. But not to worry, there is an option that combines the best of both worlds with no sacrifice in quality or actual performance. That option is triple buffering.


Computer has two back buffers to bounce between while the monitor is sent the front buffer.


The name gives a lot away: triple buffering uses three buffers instead of two. This additional buffer gives the computer enough space to keep a buffer locked while it is being sent to the monitor (to avoid tearing) while also not preventing the software from drawing as fast as it possibly can (even with one locked buffer there are still two that the software can bounce back and forth between). The software draws back and forth between the two back buffers and (at best) once every refresh the front buffer is swapped for the back buffer containing the most recently completed fully rendered frame. This does take up some extra space in memory on the graphics card (about 15 to 25MB), but with modern graphics cards shipping with at least 512MB on board this extra space is no longer a real issue.
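The flip logic just described can be sketched as follows. This is a simplified model with hypothetical names (`TripleBuffer`, `finish_frame`, `on_vsync`), not any real driver's implementation: the renderer ping-pongs between two back buffers, and at each refresh the monitor takes the most recently completed one:

```python
# Sketch of page-flipped triple buffering: the renderer never waits,
# and the display flips in the newest complete frame once per vsync.
class TripleBuffer:
    def __init__(self):
        self.front = {"id": 0}               # being scanned out
        self.backs = [{"id": 1}, {"id": 2}]  # two back buffers
        self.drawing = 0                     # back buffer in use
        self.completed = None                # newest finished frame

    def finish_frame(self):
        # Renderer finished a frame; immediately start on the other
        # back buffer, overwriting any stale completed frame there.
        self.completed = self.drawing
        self.drawing = 1 - self.drawing

    def on_vsync(self):
        # At most once per refresh, flip in the newest complete frame.
        if self.completed is not None:
            self.front, self.backs[self.completed] = \
                self.backs[self.completed], self.front
            self.completed = None

tb = TripleBuffer()
tb.finish_frame()   # frame in buffer 1 done
tb.finish_frame()   # frame in buffer 2 done; buffer 1 is now stale
tb.on_vsync()       # monitor gets buffer 2, the newest frame
print(tb.front["id"])  # -> 2
```

Note that when two frames complete between refreshes, the older one is simply overwritten and never shown, which is exactly why the displayed frame is as fresh as possible.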

In other words, with triple buffering we get the same high actual performance and similar decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled.

Now, it is important to note that when you look at the "frame rate" of a triple buffered game, you will not see the actual "performance." This is because frame counters like FRAPS only count the number of times the front buffer (the one currently being sent to the monitor) is swapped out. In double buffering, this happens with every frame, even if that frame is finished after the monitor has received and drawn the current one (meaning that it might not be displayed at all if another frame is completed before the next refresh). With triple buffering, front buffer swaps only happen at most once per vsync.

The software is still drawing the entire time behind the scenes on the two back buffers when triple buffering. This means that when the front buffer swap happens, unlike with double buffering and vsync, we don't have artificial delay. And unlike with double buffering without vsync, once we start sending a fully rendered frame to the monitor, we don't switch to another frame in the middle.

This last point does bring up the one real issue with triple buffering. When a frame completes just a tiny bit after the refresh, double buffering without vsync will tear near the top of the screen, and the portion below the tear will carry slightly less lag for most of that refresh than triple buffering, which must finish sending the frame it had already started. Even in this case, though, at least part of the frame will be exactly the same between the double buffered and triple buffered output, the delay won't be significant, and it won't have any carryover impact on future frames the way enabling vsync on double buffering does. And even if you count this as an advantage of double buffering without vsync, the advantage only appears below a potential tear.

Let's help bring the idea home with an example comparison of rendering using each of these three methods.


184 Comments

  • profoundWHALE - Monday, January 19, 2015 - link

    You'll need backlight strobing to get CRT-like performance on LCDs. Take a look at http://www.blurbusters.com/
  • texkill - Friday, June 26, 2009 - link

    First, let me sum up the actual advantage of triple buffering: smoothing out variable draw times when game framerate < monitor refresh. That's it.

    This article severely overstates the case for triple buffering when it says "there is an option that combines the best of both worlds with no sacrifice in quality or actual performance." Okay so you want "the best of both worlds" which would be no tearing and minimum input lag? And the example used to prove this is 300 fps on 60hz. Well guess what, I can give you the best of both worlds with something called "waiting a while." See those horse figures at the beginning of each frame in the double-buffer figure? Move them from the beginning of the frame to near the end and voilà, input lag is looking good again.

    But actually it gets even better when you add multithreading to a double-buffered solution. Now you not only don't have to draw frames that will *never be seen by any living creature on Earth* (not the default behavior in DirectX btw), you can actually make use of the CPU time that would otherwise be spent in the graphics api to do something useful like physics or AI. You also then don't need to have frames that are drawing when the v-sync happens and causing the input lag and smoothness to vary every single frame (again, not the default DX behavior).

    Triple buffering has its place when drawing times vary and smooth animation is desired. But it should definitely not be blindly demanded of all game developers when most of them already know the tradeoffs and have already made very good judgments on this decision.
  • DerekWilson - Friday, June 26, 2009 - link

    this is more of an additional advantage. without vsync, double buffering still starts drawing the same frame that triple buffering would start drawing but changes frames in between. throw in vsync and you still get a doubling of worst case added input lag (and an increase in average case input lag too).

    and it's not about drawing the frames that will never be seen -- it's about not seeing frames that are outdated when newer frames can be finished before the next refresh (reducing input lag).

    multithreading still helps triple buffering ... i don't see why that even enters into the situation.

    the game can't know for sure how long a frame will take to render when it starts rendering (otherwise it would know how long it could wait to start the process so that the frame is as new as possible before the next refresh). there is no way to avoid having frames that are being worked on during a vertical refresh.
  • JarredWalton - Friday, June 26, 2009 - link

    VSYNC is really the absolutely worst solution to this problem in my opinion. Let's say you have a game that runs at ~75FPS on average on your system, with VSYNC off. Great. Enable triple buffering and you still get 75FPS average, though some frames will never be seen. Use double buffering with VSYNC and you'll render 60FPS... ideally, at least.

    The problem with VSYNC is that you get lower minimum frame rates, and those become very noticeable. If you're running at 60FPS most of the time, then drop to 30FPS or 20FPS or 15FPS (notice how all of those are an even divisor of 60), those lows become even more distracting. Far more common, unfortunately, is that maintaining 60FPS with many games is very difficult, even with high-end hardware. Rather than getting a smooth 60FPS, what you usually end up with is 30FPS.

    Finally, in cases where the frame rate is much higher than the refresh rate, triple buffering does give you reduced image latency relative to double buffering with VSYNC - though as Derek points out it still has a worst case of 16.7ms (lower than double with VSYNC).
  • zulezule - Friday, June 26, 2009 - link

    Your comment made me realize that I'd prefer my GPU to render the 60 vsync-ed frames and stay cool, instead of rendering 300 fps (out of which 4/5 are useless), overheat, become noisy and maybe even crash. The only case when I'd want more frames rendered would be when they are used to insert something in the one visible frame, as for example if the 4 invisible frames are averaged with the visible one to create motion blur. However, I'm pretty sure beautiful motion blur can be obtained much more easily.
  • DerekWilson - Friday, June 26, 2009 - link

    The advantages still exist at a sub 60 FPS level. I just chose 300 FPS to illustrate the idea more easily.

    At less than 60 FPS, the triple buffered case still shows the same performance as double buffering -- they both start rendering the same frame after a refresh. double buffering with vsync still adds more input lag on average than the other cases.
  • Mills - Friday, June 26, 2009 - link

    You made a good case of something currently impossible (if I understand you correctly) being better than triple buffering but I don't see where you made the case that triple buffering isn't better than double buffering in the case of FPS being much greater than refresh rate.

    The point is, when we are given a choice between double and triple, is there a reason not to choose triple?
  • texkill - Friday, June 26, 2009 - link

    What's impossible about it?

    Yes, there are drawbacks to triple buffering. Implement it the way DirectX does by default and you get input lag. Implement it the way the article suggests and you get wasted cpu and jerky animation. And either way you are sacrificing video memory that could have been used for something else.
  • DerekWilson - Friday, June 26, 2009 - link

    1) DirectX does not implement triple buffering (render-ahead is not the same and should not be referred to as "triple buffering" when set to 3 frames). The way to think of the DX mess is that they set up a queue for the back buffer, but there is only one real back buffer and one front buffer (even with 3 frame render ahead, it is essentially double buffered if we're talking about page flipping).

    2) The triple buffering approach described in this article is the only thing that should actually be called "triple buffering" if we are contrasting it with "double buffering" and referring to page flipping. Additionally, it does not create jerky animation -- the animation will be much smoother than either double buffering with or without vsync (either because frames have less lag or because they don't tear).
  • toyota - Friday, June 26, 2009 - link

    yeah it makes me wonder why both card companies dont even allow it straight from the cp for DX games if there are no drawbacks. also it seems like all game developers would incorporate it in their games if again there were no drawbacks.
