What are Double Buffering, vsync and Triple Buffering?

When a computer needs to display something on a monitor, it draws a picture of what the screen is supposed to look like and sends this picture (which we will call a buffer) out to the monitor. In the old days there was only one buffer and it was continually being both drawn to and sent to the monitor. There are some advantages to this approach, but there are also very large drawbacks. Most notably, when objects on the display were updated, they would often flicker.


The computer draws into the buffer as its contents are sent out.
All illustrations courtesy Laura Wilson.


In order to combat the issues with reading from a buffer while drawing to it, double buffering, at a minimum, is employed. The idea behind double buffering is that the computer only draws to one buffer (called the "back" buffer) and sends the other buffer (called the "front" buffer) to the screen. After the computer finishes drawing the back buffer, the program doing the drawing does something called a buffer "swap." This swap doesn't move anything: the swap only changes the names of the two buffers: the front buffer becomes the back buffer and the back buffer becomes the front buffer.
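To make the "rename, don't copy" point concrete, here is a minimal illustrative sketch in C++ (not tied to any particular graphics API; all the names are made up for the example). The swap exchanges two pointers and leaves the pixel data where it is:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: a "buffer" is just a block of pixels in memory.
struct Buffer {
    std::vector<uint32_t> pixels;
    explicit Buffer(std::size_t count) : pixels(count, 0) {}
};

struct DoubleBuffer {
    Buffer a, b;
    Buffer* front;  // what is being sent to the monitor
    Buffer* back;   // what the program draws into

    explicit DoubleBuffer(std::size_t count)
        : a(count), b(count), front(&a), back(&b) {}

    // The "swap" moves no pixel data at all; it only renames the buffers.
    void swap() { std::swap(front, back); }
};

int main() {
    DoubleBuffer db(1920 * 1080);
    db.back->pixels[0] = 0xFFFFFFFFu;  // draw into the back buffer
    db.swap();                         // the finished frame is now the front buffer
    return 0;
}
```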


Computer draws to the back, monitor is sent the front.


After a buffer swap, the software can start drawing to the new back buffer and the computer sends the new front buffer to the monitor until the next buffer swap happens. And all is well. Well, almost all anyway.

In this form of double buffering, a swap can happen at any time. That means the swap can occur while the computer is sending data to the monitor. When this happens, the rest of the screen is drawn according to what the new front buffer contains. If the new front buffer is different enough from the old front buffer, a visual artifact known as "tearing" can be seen. This type of problem shows up often in high framerate FPS games when whipping around a corner as fast as possible. Because of the quick motion, every frame is very different; when a swap happens mid-scanout, the discrepancy is large and can be distracting.

The most common approach to combat tearing is to wait to swap buffers until the monitor is ready for another image. The monitor is ready after it has fully drawn what was sent to it and the next vertical refresh cycle is about to start. Synchronizing buffer swaps with the vertical refresh is called vsync.
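As a rough sketch of what "waiting for the vertical refresh" means (illustrative C++ again, simulating the refresh with a timer rather than a real driver or hardware signal, and assuming a 60Hz display), the program finishes drawing and then sleeps until the next refresh boundary before swapping:

```cpp
#include <chrono>
#include <cmath>
#include <thread>

// Hypothetical 60Hz display: one vertical refresh every ~16.67 ms.
constexpr double kRefreshHz = 60.0;

// Stand-in for the hardware vblank signal: sleep until the next multiple of
// the refresh period, measured from a fixed starting point.
void wait_for_vblank(std::chrono::steady_clock::time_point start) {
    using namespace std::chrono;
    const duration<double> period(1.0 / kRefreshHz);
    const duration<double> elapsed = steady_clock::now() - start;
    const double next = std::ceil(elapsed.count() / period.count());
    std::this_thread::sleep_until(
        start + duration_cast<steady_clock::duration>(period * next));
}

int main() {
    const auto start = std::chrono::steady_clock::now();
    for (int frame = 0; frame < 10; ++frame) {
        // ... render the back buffer here ...
        wait_for_vblank(start);  // with vsync, the swap waits for the refresh
        // ... swap front and back buffers here ...
    }
    return 0;
}
```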

While enabling vsync does fix tearing, it also caps the internal framerate of the game at the refresh rate of the monitor (typically 60Hz for most LCD panels). This can hurt performance even if the game doesn't run at 60 frames per second, as there will still be artificial delays added to enforce synchronization. Performance can be cut nearly in half in cases where every frame takes just a little longer than 16.67 ms (1/60th of a second). In such a case, the frame rate drops to 30 FPS despite the fact that the game could run at just under 60 FPS. The elimination of tearing and consistency of framerate, however, do contribute to an added smoothness that double buffering without vsync just can't deliver.
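To see where that "nearly in half" comes from, consider a hypothetical frame that takes 18 ms to render on a 60Hz display: the finished frame has to wait for the next refresh, so it occupies two whole refresh intervals and the displayed rate drops to 30 FPS. A small sketch of the arithmetic (illustrative C++, assuming the worst case where the swap always waits for the next refresh boundary):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_hz = 60.0;                        // typical LCD panel
    const double refresh_period_ms = 1000.0 / refresh_hz;  // ~16.67 ms

    // With double buffering + vsync, a finished frame must wait for the next
    // refresh, so its cost rounds up to a whole number of refresh intervals.
    for (double render_ms : {10.0, 16.0, 18.0, 25.0, 35.0}) {
        const double refreshes = std::ceil(render_ms / refresh_period_ms);
        const double displayed_fps = refresh_hz / refreshes;
        std::printf("render %5.1f ms -> displayed at %5.1f FPS\n",
                    render_ms, displayed_fps);
    }
    return 0;  // 18 ms and 25 ms frames both end up displayed at 30 FPS
}
```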

Input lag also becomes more of an issue with vsync enabled. This is because the artificial delay introduced increases the difference between when something actually happened (when the frame was drawn) and when it gets displayed on screen. Input lag always exists (it is impossible to instantaneously draw what is currently happening to the screen), but the trick is to minimize it.

Our options with double buffering are a choice between possible visual problems like tearing without vsync, and an artificial delay that can negatively affect performance and increase input lag with vsync enabled. But not to worry, there is an option that combines the best of both worlds with no sacrifice in quality or actual performance. That option is triple buffering.


Computer has two back buffers to bounce between while the monitor is sent the front buffer.


The name gives a lot away: triple buffering uses three buffers instead of two. This additional buffer gives the computer enough space to keep one buffer locked while it is being sent to the monitor (to avoid tearing) while not preventing the software from drawing as fast as it possibly can (even with one locked buffer there are still two that the software can bounce back and forth between). The software draws back and forth between the two back buffers and (at best) once every refresh the front buffer is swapped for the back buffer containing the most recently completed fully rendered frame. This does take up some extra space in memory on the graphics card (about 15 to 25MB), but with modern graphics cards carrying at least 512MB on board, this extra space is no longer a real issue.
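One way to picture the bookkeeping is the simplified, single-threaded sketch below (illustrative C++; a real driver schedules this across the GPU and display hardware). The renderer always draws into whichever back buffer holds the older contents, and at each refresh the front buffer is exchanged with the back buffer holding the newest completed frame:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Buffer {
    std::vector<uint32_t> pixels;
    long frame_id = -1;  // which rendered frame this buffer currently holds
    explicit Buffer(std::size_t count) : pixels(count, 0) {}
};

struct TripleBuffer {
    Buffer a, b, c;
    Buffer* front;    // locked: being sent to the monitor
    Buffer* back[2];  // the renderer bounces between these two

    explicit TripleBuffer(std::size_t count)
        : a(count), b(count), c(count), front(&a), back{&b, &c} {}

    // Renderer: draw into the back buffer with the older contents so the
    // newest completed frame is never overwritten.
    Buffer* next_render_target() {
        return (back[0]->frame_id < back[1]->frame_id) ? back[0] : back[1];
    }

    // At the vertical refresh: the newest completed back buffer becomes the
    // new front buffer. No pixel data moves; only the pointers change.
    void present() {
        Buffer*& newest =
            (back[0]->frame_id > back[1]->frame_id) ? back[0] : back[1];
        std::swap(front, newest);
    }
};

int main() {
    TripleBuffer tb(1920 * 1080);
    long frame = 0;
    // Render three frames between two refreshes; only the newest is shown.
    for (int i = 0; i < 3; ++i) {
        Buffer* target = tb.next_render_target();
        target->frame_id = frame++;  // "draw" the frame into that back buffer
    }
    tb.present();  // at the refresh, the newest frame becomes the front buffer
    return 0;
}
```

For scale, a single 1920x1200 buffer at 32 bits per pixel is roughly 9MB, so two or three such buffers land in the range quoted above.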

In other words, with triple buffering we get the same high actual performance and similar decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled.

Now, it is important to note that when you look at the "frame rate" of a triple buffered game, you will not see the actual "performance." This is because frame counters like FRAPS only count the number of times the front buffer (the one currently being sent to the monitor) is swapped out. In double buffering, this happens for every frame rendered, even frames that finish before the monitor is done receiving and drawing the current frame (meaning that a frame might not be displayed at all if another frame is completed before the next refresh). With triple buffering, front buffer swaps only happen at most once per vsync.
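As a toy illustration of the counting difference (illustrative C++, with made-up numbers): if the game renders 100 frames in one second on a 60Hz display, a swap counter reads 100 with double buffering and no vsync, but at most 60 with triple buffering, even though both rendered the same 100 frames:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const int refresh_hz = 60;        // refreshes per second
    const int frames_rendered = 100;  // frames the game produced in one second

    // Double buffering without vsync: every rendered frame triggers a swap.
    const int swaps_double = frames_rendered;

    // Triple buffering: at most one front buffer swap per refresh.
    const int swaps_triple = std::min(frames_rendered, refresh_hz);

    std::printf("counter reads %d FPS (double, no vsync) vs %d FPS (triple)\n",
                swaps_double, swaps_triple);
    return 0;
}
```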

The software is still drawing the entire time behind the scenes on the two back buffers when triple buffering. This means that when the front buffer swap happens, unlike with double buffering and vsync, we don't have artificial delay. And unlike with double buffering without vsync, once we start sending a fully rendered frame to the monitor, we don't switch to another frame in the middle.

This last point does bring up the one issue with triple buffering. When double buffering without vsync, a frame that completes just a tiny bit after the refresh will tear near the top, and the rest of that refresh carries a bit less lag than triple buffering, which must finish scanning out the frame it has already started. Even in this case, though, at least part of the frame will be exactly the same between the double buffered and triple buffered output, the delay won't be significant, and it won't have any carryover impact on future frames the way enabling vsync with double buffering does. And even if you count this as an advantage of double buffering without vsync, the advantage only appears below a potential tear.

Let's help bring the idea home with an example comparison of rendering using each of these three methods.

Comments

  • DerekWilson - Friday, June 26, 2009 - link

    unfortunately, you really can't build a practical implementation that starts rendering a frame at the point where it will finish just before the next viable refresh. typically, with anything changing at all on screen, you aren't going to have previous frames be good predictors down to the accuracy level you would need.

    I didn't include sub 60 fps or sub 30 fps examples to keep it simple ... but in each case, the frame that starts being drawn at each refresh is equivalent between double buffering with no vsync and triple buffering.

    the "odd frame" here or there really add up when you look at an entire second by the way.
  • velanapontinha - Friday, June 26, 2009 - link

    I always try to play with double buffering + V-Sync. I've known about Triple Buffering for quite some time, but I still prefer DB+Vsync. It's just that I never felt the theoretical input lag, while I can feel the benefits of having my CPU and GPU rest instead of always striving to get those useless 100fps.
    60fps (heck, even 30fps), if constant, provide a flawless gaming experience, and if you can have a wonderful gaming experience without your hardware being pointlessly pushed to its limits, why make it render frames you will never miss?
    Less workload, less heat, less noise, less energy, and still an impeccable gaming experience.
  • DerekWilson - Friday, June 26, 2009 - link

    there is still benefit at 30 FPS as well and not only when the framerate skyrockets.

    as frametime gets longer, input lag starts to become more and more of an issue. minimizing additional lag (as triple buffering can do) can help more at lower framerates when compared to double buffering and vsync.
  • KikassAssassin - Friday, June 26, 2009 - link

    I just ran a test in WoW (I picked it since it has a Triple Buffering option built-in), where I ran down a path and back again, running the same path three times: once with double buffering and vsync disabled, once with double buffering and vsync enabled, and once with triple buffering. I had RivaTuner open in the background monitoring my CPU and GPU usage.

    In all three tests, the CPU and GPU usage graphs look exactly the same. There's almost no difference between them whatsoever.
  • velanapontinha - Friday, June 26, 2009 - link

    Well, if you can't see any difference, I guess (I'm just guessing) that you're running WoW close to your setup's limits, then.

    I'm a beta tester for a software company, and I can assure you that vsync can and will keep your CPU and GPU usage much lower.

    Try running a 3D application that goes easy on your hardware (and thus runs at over 100fps with double buffering and no v-sync).
    Then run the same software with v-sync enabled, and you'll see that your hardware has a lot less to struggle for.

    Try this one:
    http://www.theprodukkt.com/downloads/fr-041_debris...

    A very small app (177kb) that looks impressive. Run it at a low resolution (say 1024x768, for example), and then check it out. There is a v-sync option in the app.
  • velanapontinha - Friday, June 26, 2009 - link

    At least I'm sure you'll notice that CPU usage will be lower. As for the GPU, it depends, as GPU load indicators usually aren't reliable (they tend to sit at either 0% or 99%).
  • randomname - Friday, June 26, 2009 - link

    I usually start by switching most of the options on in games. After I realize it isn't running fast enough, I start switching some of those options off. Therefore even triple buffering is a "nice to have" property, that I would select (or not) based on an experiment. Unfortunately, that little tryout probably isn't representative of the rest of the game. So often just when it gets really interesting (a lot of stuff and cool effects start happening), the performance plummets. Then you switch off everything that doesn't have an immediate visual impact (maybe triple buffering as well) and try again.

    Absolutely the best part of console gaming is that someone else has made the (artistic) choice of enabling something, and they are in effect saying that your experience is best with these options. The game has been reviewed with those options and the same hardware, and if it sucks, it's the developer's fault. The argument doesn't go towards "you really need a fast machine to appreciate the graphics", which leaves questions about how fast is fast enough (to play through the heaviest scenes), whether there is any sense in making a several hundred dollar investment to play a fifty buck game, and exactly what options and hardware the reviewer used. All that tends to take a lot away from the enjoyment and immersion.

    One example is the motion blur in Crysis. It looks really nice and smooths out that FPS-style jerkiness of being able to move your head (optical axis) so fast. But it was also quite a heavy option, and although I really, really didn't want to switch it off, I had to.
  • SleepyGreg - Friday, June 26, 2009 - link

    Having a poll of which buffering method you use under the heading "Triple buffering: Why we love it" is rather flawed. People often answer what they think is the right answer, not what they actually do.
  • DerekWilson - Friday, June 26, 2009 - link

    You know, I agree with you ... I apologize for poisoning the sample. I don't think I'm that great at article titles anyway, but the poll was just something I thought would be a cool idea. I didn't think about how they would impact each other.

    I'll try to be more careful with stuff like this if I do it in the future.
  • Mills - Friday, June 26, 2009 - link

    Seems like nobody here really agrees on when it is better.

    Some people say it's better only when your FPS is greater than refresh, some say it's better only when FPS is less than refresh.

    Article seems to make the claim it's always better.

    I remain confused.
