What are Double Buffering, vsync and Triple Buffering?

When a computer needs to display something on a monitor, it draws a picture of what the screen is supposed to look like and sends this picture (which we will call a buffer) out to the monitor. In the old days there was only one buffer and it was continually being both drawn to and sent to the monitor. There are some advantages to this approach, but there are also very large drawbacks. Most notably, when objects on the display were updated, they would often flicker.


The computer draws into the buffer as its contents are sent out.
All illustrations courtesy Laura Wilson.


In order to combat the issues with reading from while drawing to the same buffer, double buffering, at a minimum, is employed. The idea behind double buffering is that the computer only draws to one buffer (called the "back" buffer) and sends the other buffer (called the "front" buffer) to the screen. After the computer finishes drawing the back buffer, the program doing the drawing performs a buffer "swap." This swap doesn't move any data: it only changes the names of the two buffers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer.


Computer draws to the back, monitor is sent the front.


After a buffer swap, the software can start drawing to the new back buffer and the computer sends the new front buffer to the monitor until the next buffer swap happens. And all is well. Well, almost all anyway.
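In code, the swap typically amounts to nothing more than exchanging two pointers. Here is a minimal sketch of the idea; the names are hypothetical and not taken from any particular graphics API:

```c
#include <stdint.h>

/* Two buffers in video memory; only their roles change on a swap.
 * (Hypothetical names -- a sketch, not any particular API.) */
uint32_t *front_buffer; /* scanned out to the monitor        */
uint32_t *back_buffer;  /* the only buffer the game draws to */

void swap_buffers(void)
{
    /* No pixel data moves: the two names are exchanged. */
    uint32_t *tmp = front_buffer;
    front_buffer  = back_buffer;
    back_buffer   = tmp;
}
```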

In this form of double buffering, a swap can happen at any time. That means that while the computer is sending data to the monitor, the swap can occur. When this happens, the rest of the screen is drawn according to what the new front buffer contains. If the new front buffer is different enough from the old front buffer, a visual artifact known as "tearing" can be seen. This type of problem is often visible in high framerate FPS games when whipping around a corner as fast as possible. Because of the quick motion, every frame is very different; when a swap happens mid-draw, the discrepancy is large and can be distracting.

The most common approach to combat tearing is to wait to swap buffers until the monitor is ready for another image. The monitor is ready after it has fully drawn what was sent to it and the next vertical refresh cycle is about to start. Synchronizing buffer swaps with the vertical refresh is called vsync.
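In a render loop, vsync amounts to stalling before each flip. A sketch, reusing the names from the code above; wait_for_vblank() is a hypothetical stand-in for whatever blocking call the driver or OS actually provides:

```c
#include <stdint.h>

extern uint32_t *back_buffer;         /* from the sketch above           */
extern void swap_buffers(void);
extern void render_frame(uint32_t *); /* the game's own drawing code     */
extern void wait_for_vblank(void);    /* hypothetical stand-in: blocks
                                         until the next vertical refresh */

void render_loop(void)
{
    for (;;) {
        render_frame(back_buffer); /* draw the next frame               */
        wait_for_vblank();         /* stall until the refresh boundary  */
        swap_buffers();            /* flip only between refreshes, so
                                      the monitor never sees a tear     */
    }
}
```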

While enabling vsync does fix tearing, it also caps the internal framerate of the game at the refresh rate of the monitor (typically 60Hz for most LCD panels). This can hurt performance even when the game doesn't run at 60 frames per second, as there will still be artificial delays added to enforce synchronization. Performance can be cut nearly in half in cases where every frame takes just a little longer than 16.67 ms (1/60th of a second). In such a case, framerate drops to 30 FPS despite the fact that the game could otherwise run at just under 60 FPS. The elimination of tearing and consistency of framerate, however, do contribute to an added smoothness that double buffering without vsync just can't deliver.
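To see where that 30 FPS figure comes from: under vsync, a frame's displayed time is its render time rounded up to the next multiple of the refresh period. A quick worked example (the 17 ms render time is chosen to illustrate the worst case):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double refresh_ms = 1000.0 / 60.0; /* ~16.67 ms per refresh */
    const double frame_ms   = 17.0;          /* just misses a refresh */

    /* A swap can only happen on a refresh boundary, so the displayed
     * frame time rounds up to the next multiple of the refresh period. */
    double shown_ms = ceil(frame_ms / refresh_ms) * refresh_ms;

    printf("rendered in %.1f ms -> %.0f FPS possible\n",
           frame_ms, 1000.0 / frame_ms);
    printf("displayed every %.2f ms -> %.0f FPS actual\n",
           shown_ms, 1000.0 / shown_ms);
    /* prints: 59 FPS possible, 30 FPS actual */
    return 0;
}
```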

Input lag also becomes more of an issue with vsync enabled. This is because the artificial delay introduced increases the difference between when something actually happened (when the frame was drawn) and when it gets displayed on screen. Input lag always exists (it is impossible to instantaneously draw what is currently happening to the screen), but the trick is to minimize it.

Our options with double buffering are thus a choice between possible visual problems like tearing without vsync, and an artificial delay with vsync enabled that can both hurt performance and increase input lag. But not to worry, there is an option that combines the best of both worlds with no sacrifice in quality or actual performance. That option is triple buffering.


Computer has two back buffers to bounce between while the monitor is sent the front buffer.


The name gives a lot away: triple buffering uses three buffers instead of two. This additional buffer gives the computer enough space to keep one buffer locked while it is being sent to the monitor (to avoid tearing) without preventing the software from drawing as fast as it possibly can (even with one locked buffer, there are still two that the software can bounce back and forth between). The software draws back and forth between the two back buffers and, at best once every refresh, the front buffer is swapped for the back buffer containing the most recently completed fully rendered frame. This does take up some extra space in memory on the graphics card (about 15 to 25MB), but with modern graphics cards shipping with at least 512MB on board, this extra space is no longer a real issue.
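For the memory arithmetic: at 2560x1600 with 32-bit color, one buffer is 2560 x 1600 x 4 bytes, roughly 16MB, which is where figures in that range come from. As for the mechanics, here is a sketch of the bookkeeping involved; the names are hypothetical, and a real implementation lives in the driver with proper synchronization between the render thread and the vblank interrupt (omitted here for clarity):

```c
#include <stdint.h>

extern uint32_t *buf[3]; /* three buffers of equal size in video memory */

static int front   = 0;  /* being scanned out to the monitor            */
static int drawing = 1;  /* the game is rendering into this one         */
static int ready   = 2;  /* holds the most recently completed frame     */
static int has_new_frame = 0;

/* Called by the game each time it finishes a frame: the just-drawn
 * buffer becomes "ready" and drawing continues in the old "ready"
 * buffer. The game never has to wait. */
void frame_complete(void)
{
    int tmp = ready;
    ready   = drawing;
    drawing = tmp;
    has_new_frame = 1;
}

/* Called once per vertical refresh: flip to the newest completed
 * frame, recycling the old front buffer as the new spare. */
void on_vblank(void)
{
    if (!has_new_frame)
        return; /* nothing newer finished; keep showing the same frame */
    int tmp = front;
    front   = ready;
    ready   = tmp;
    has_new_frame = 0;
}
```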

In other words, with triple buffering we get the same high actual performance and similar decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled.

Now, it is important to note that when you look at the "frame rate" of a triple buffered game, you will not see the actual "performance." This is because frame counters like FRAPS only count the number of times the front buffer (the one currently being sent to the monitor) is swapped out. In double buffering, this happens with every frame, even one finished after the monitor is done receiving and drawing the current frame (meaning that a frame might not be displayed at all if another frame is completed before the next refresh). With triple buffering, front buffer swaps only happen at most once per vsync.
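A toy illustration of that counting difference, with made-up numbers (120 frames rendered in one second on a 60Hz monitor):

```c
#include <stdio.h>

int main(void)
{
    const int frames_rendered = 120; /* actual rendering performance */
    const int refresh_rate    = 60;  /* monitor refreshes per second */

    /* Double buffering without vsync: every completed frame swaps the
     * front buffer, so a counter like FRAPS reports the full 120.    */
    int counted_double = frames_rendered;

    /* Triple buffering: the front buffer flips at most once per
     * refresh, so the counter tops out at the refresh rate even
     * though all 120 frames were actually rendered.                  */
    int counted_triple = frames_rendered < refresh_rate
                       ? frames_rendered : refresh_rate;

    printf("double buffering counter reads: %d FPS\n", counted_double);
    printf("triple buffering counter reads: %d FPS\n", counted_triple);
    return 0;
}
```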

The software is still drawing the entire time behind the scenes on the two back buffers when triple buffering. This means that when the front buffer swap happens, unlike with double buffering and vsync, we don't have artificial delay. And unlike with double buffering without vsync, once we start sending a fully rendered frame to the monitor, we don't switch to another frame in the middle.

This last point does bring up the one issue with triple buffering. When double buffering without vsync, a frame that completes just a tiny bit after the refresh begins will tear near the top of the screen, and the rest of that frame will carry a bit less lag for most of that refresh than with triple buffering, which has to finish sending the frame it had already started. Even in this case, though, at least part of the frame will be exactly the same between the double buffered and triple buffered output, the delay won't be significant, and it won't have any carryover impact on future frames the way enabling vsync on double buffering does. And even if you count this as an advantage of double buffering without vsync, the advantage only appears below a potential tear.

Let's help bring the idea home with an example comparison of rendering using each of these three methods.

Comments

  • DerekWilson - Friday, June 26, 2009 - link

    I do make the claim that it's always better, but just wanted to use one example for simplicity's sake (the 300 fps example).

    at lower refresh rates, the general case for performance is still the same as double buffering without vsync (which starts rendering the same frame that triple buffering would start rendering) ... and it still has the smoothness and lack of tearing of double buffering with vsync.
  • james jwb - Friday, June 26, 2009 - link

    what about when 120Hz LCDs come out, and a game can provide 120 fps as a minimum? surely double buffering is the best case here, or will triple buffering perform exactly the same in this case?
  • JimmiG - Friday, June 26, 2009 - link

    The reason double buffering still prevails is probably because when current 3D standards were set during the late '90s, video memory was at a premium.

    For example, at 1024x768 (the standard resolution at the time), each buffer would take up 1.5MB at 16bpp and 3MB at 32bpp. Not a lot today, but back then 8-16MB cards were the norm. If a game was designed so that VRAM usage would peak at 16MB, adding a couple MBs of usage for another buffer would kill performance. So the general idea was "Yeah, sure, triple buffering is nice, but it uses too much memory," and that idea has kind of stuck.
  • fiveday - Friday, June 26, 2009 - link

    There are major advantages to using Triple Buffering, but there are a few points that explain why it's not automatically adopted.

    One big one is lag. Now, if things are pretty well lag-free under double buffering, no sweat. However, there's no getting around the fact that by adding an extra frame, you're adding 1/3 extra processing time between the frame being drawn and appearing on your display. If the game's pretty lag-free already, you'll never know the difference. If the game is already prone to some sort of input lag, it's about to get worse. How much worse depends on the game itself... and in some cases it can drastically soften up your controls. It can be tricky to predict how much impact it will have, if any... a point I'll return to in the conclusion.

    Another issue is memory usage. In a perfect world, every system will always have adequate texture memory to accommodate triple buffering. Is it a perfect world? Nope. And if your graphics card is getting thin on RAM, get ready for a performance hit. How much? Maybe none, maybe a lot. Which brings me to my last point.

    Whether or not you'll see these adverse effects from using Triple Buffering depends partly on the game itself, how it was written, and partly on your own system configuration. Now, the developers are responsible for their own software, but there's no telling what kind of system an end user is going to try to run the game on. These days, a 4670 graphics card and a Phenom X2, while seemingly meager, are enough to get most games out there plenty playable... but there's still folks out there trying to run a game like Bioshock on a Radeon 9700 Pro (what's SM2.0, they cry!?!). Lord forbid someone try to use their laptop to play a game.

    By the way... SLI and X-Fire setups tend to HATE triple buffering.

    So you see... the developers have a tough enough time as it is getting their games playable on an extremely unpredictable variety of systems. Triple Buffering, while it has its advantages, simply introduces further risk of poor performance on a lot of systems out there. Should it be automatically enabled? Nope.

    But should it be available as an option? These days, I see no reason why not. The original Unreal and UT engines offered it as an option, and that was ten years ago. Bring it back for those of us who want to take a crack at it.
  • DerekWilson - Friday, June 26, 2009 - link

    you are correct that SLI and CrossFire don't play well with triple buffering... but then there have been plenty of oddities no matter what page flipping method we want to use.

    but enabling triple buffering does NOT add an additional latency penalty over double buffering unless double buffering visibly tears and you are talking about the rest of the frame ... double buffering and triple buffering start rendering the same frame every time.

    there is no inserted frame into the pipeline, as it's not a pipeline -- what you are describing is more like DirectX's default 3 frame render ahead which has much higher potential to add latency than triple buffering (when we are talking about the page flipping method and not just "having three buffers").
  • sbuckler - Friday, June 26, 2009 - link

    If tearing is not a problem then you are better off double buffering with vsync off. Turn on triple buffering and you introduce another 16.6ms of display lag, which matters in a fast FPS.
  • DerekWilson - Friday, June 26, 2009 - link

    you do NOT automatically incur a one frame lag -- you have at most an additional one frame lag.

    as i explained, especially in fast shooters, triple buffering and double buffering with no vsync begin rendering the exact same frame even if double buffering without vsync switches to a newer frame at some point.

    and when tearing doesn't "happen" (read: isn't noticeable) then that means the updated frames were not different enough to really matter anyway (otherwise you would see the difference).

    the possible advantage of double buffering could be argued when a tear happens near the top of the screen, but whether this is a real advantage is debatable.
  • BJ Eagle - Friday, June 26, 2009 - link

    what about double buffered vsync with framerate just below 30 FPS?

    I would go for double buffered vsync if there is not an equal penalty going below 30 FPS as there is when going below 60 FPS. Simply because I don't want to waste power rendering frames I won't miss...
    The movie industry teaches us that 25-30 FPS is actually enough to fool our brains into perceiving motion. But if the lag skyrockets with vsync at below 30 FPS I guess I would go with triple buffering...
  • nafhan - Friday, June 26, 2009 - link

    With a 60Hz refresh rate, I think dropping below 30 will cause the same issues as dropping below 60. Vsync is going to update frames at intervals that divide evenly into the refresh rate. So, if 30 is not an option, then it will update at 20 FPS.

    I think most people can tell the difference between 30FPS in a game and 60FPS. However, more than that really doesn't provide much benefit.
  • toyota - Friday, June 26, 2009 - link

    a game is not a movie...
