What are Double Buffering, vsync and Triple Buffering?

When a computer needs to display something on a monitor, it draws a picture of what the screen is supposed to look like and sends this picture (which we will call a buffer) out to the monitor. In the old days there was only one buffer and it was continually being both drawn to and sent to the monitor. There are some advantages to this approach, but there are also very large drawbacks. Most notably, when objects on the display were updated, they would often flicker.


The computer draws into the buffer even as its contents are sent out to the monitor.
All illustrations courtesy Laura Wilson.


In order to combat the issues with reading from and drawing to the same buffer at once, double buffering, at a minimum, is employed. The idea behind double buffering is that the computer only draws to one buffer (called the "back" buffer) and sends the other buffer (called the "front" buffer) to the screen. After the computer finishes drawing the back buffer, the program doing the drawing does something called a buffer "swap." This swap doesn't move any data: it only changes the names of the two buffers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer.


Computer draws to the back, monitor is sent the front.


After a buffer swap, the software can start drawing to the new back buffer and the computer sends the new front buffer to the monitor until the next buffer swap happens. And all is well. Well, almost all anyway.
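To make the mechanics concrete, here is a minimal sketch of double buffering in C. The framebuffer type, the sizes, and the render_frame() stub are made up purely for illustration; real code would go through a graphics API. The point it shows is that the "swap" only exchanges pointers, it copies no pixel data.

```c
#include <stddef.h>
#include <stdint.h>

#define WIDTH  1920
#define HEIGHT 1080

typedef struct { uint32_t pixels[WIDTH * HEIGHT]; } framebuffer;

static framebuffer buffer_a, buffer_b;
static framebuffer *front = &buffer_a;  /* scanned out to the monitor */
static framebuffer *back  = &buffer_b;  /* the renderer draws into this one */

/* Stand-in for the game's rendering: fill the target with one color. */
static void render_frame(framebuffer *target, uint32_t color)
{
    for (size_t i = 0; i < (size_t)WIDTH * HEIGHT; i++)
        target->pixels[i] = color;
}

/* The buffer "swap": nothing is copied, the two buffers just trade names. */
static void swap_buffers(void)
{
    framebuffer *tmp = front;
    front = back;
    back  = tmp;
}

int main(void)
{
    for (uint32_t frame = 0; frame < 100; frame++) {
        render_frame(back, frame);  /* draw the next frame off-screen */
        swap_buffers();             /* present it by renaming the buffers */
    }
    return 0;
}
```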

In this form of double buffering, a swap can happen at any time. That means the swap can occur while the computer is still sending data to the monitor. When this happens, the rest of the screen is drawn according to what the new front buffer contains. If the new front buffer is different enough from the old front buffer, a visual artifact known as "tearing" can be seen. This type of problem shows up often in high framerate FPS games when whipping around a corner as fast as possible. Because of the quick motion, every frame is very different; when a swap happens mid-scanout, the discrepancy is large and can be distracting.

The most common approach to combating tearing is to wait to swap buffers until the monitor is ready for another image. The monitor is ready after it has fully drawn what was sent to it and the next vertical refresh cycle is about to start. Synchronizing buffer swaps with the vertical refresh is called vsync.
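In terms of the sketch above, vsync just moves the swap behind a wait. The snippet below reuses the buffers and helpers from the double buffering sketch; wait_for_vblank() is a hypothetical stand-in for whatever the driver or OS actually provides (here it simply sleeps for one 60Hz interval to simulate the wait).

```c
#include <time.h>

/* Simulated vblank wait: a real driver blocks until the monitor's next
 * refresh begins; this stand-in just sleeps for ~1/60th of a second. */
static void wait_for_vblank(void)
{
    struct timespec interval = { 0, 16666667L };  /* 16.67 ms in nanoseconds */
    nanosleep(&interval, NULL);
}

/* Same loop as the double buffering sketch, but the swap is held back
 * until the refresh boundary, so it can never land mid-scanout. */
void render_loop_with_vsync(void)
{
    for (;;) {
        render_frame(back, 0x00FF00FFu);  /* draw the next frame off-screen */
        wait_for_vblank();                /* wait for the next vertical refresh */
        swap_buffers();                   /* swap between refreshes: no tearing */
    }
}
```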

While enabling vsync does fix tearing, it also caps the internal framerate of the game at, at most, the refresh rate of the monitor (typically 60Hz for most LCD panels). This can hurt performance even if the game doesn't run at 60 frames per second, as artificial delays are still added to enforce synchronization. Performance can be cut nearly in half in cases where every frame takes just a little longer than 16.67 ms (1/60th of a second). In such a case, frame rate drops to 30 FPS despite the fact that the game could run at just under 60 FPS. The elimination of tearing and the consistency of framerate, however, do contribute to an added smoothness that double buffering without vsync just can't deliver.
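A quick back-of-the-envelope illustration of that worst case: with vsync, a frame's effective cost rounds up to a whole number of refresh intervals, so 17 ms of rendering ends up costing two intervals (about 33 ms). The numbers here are just the example from the paragraph above.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double refresh_ms = 1000.0 / 60.0;  /* 16.67 ms per 60Hz refresh */
    const double render_ms  = 17.0;           /* just misses one interval  */

    /* The swap must wait for the next refresh, so the displayed interval
     * is the render time rounded up to a multiple of the refresh interval. */
    double displayed_ms = ceil(render_ms / refresh_ms) * refresh_ms;

    printf("Without vsync: %.1f fps\n", 1000.0 / render_ms);     /* ~58.8 fps */
    printf("With vsync:    %.1f fps\n", 1000.0 / displayed_ms);  /*  30.0 fps */
    return 0;
}
```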

Input lag also becomes more of an issue with vsync enabled. This is because the artificial delay introduced increases the difference between when something actually happened (when the frame was drawn) and when it gets displayed on screen. Input lag always exists (it is impossible to instantaneously draw what is currently happening to the screen), but the trick is to minimize it.

Our options with double buffering come down to a choice between possible visual problems like tearing without vsync, and an artificial delay with vsync enabled that can hurt performance and increase input lag. But not to worry, there is an option that combines the best of both worlds with no sacrifice in quality or actual performance. That option is triple buffering.


Computer has two back buffers to bounce between while the monitor is sent the front buffer.


The name gives a lot away: triple buffering uses three buffers instead of two. This additional buffer gives the computer enough space to keep one buffer locked while it is being sent to the monitor (to avoid tearing) without preventing the software from drawing as fast as it possibly can (even with one locked buffer, there are still two that the software can bounce back and forth between). The software draws back and forth between the two back buffers and, at best once every refresh, the front buffer is swapped for the back buffer containing the most recently completed fully rendered frame. This does take up some extra space in memory on the graphics card (about 15 to 25MB), but with modern graphics cards carrying at least 512MB on board, this extra space is no longer a real issue.
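Below is a sketch of that scheme, reusing the framebuffer type and render_frame() stub from the double buffering example. The renderer ping-pongs between two back buffers as fast as it can, and once per vertical refresh the newest completed frame is promoted to the front. The buffer names and the frame_ready flag are illustrative only; a real implementation would need proper synchronization between the rendering and display sides.

```c
static framebuffer buffers[3];
static framebuffer *tb_front = &buffers[0];  /* being scanned out           */
static framebuffer *tb_draw  = &buffers[1];  /* renderer is drawing here    */
static framebuffer *tb_ready = &buffers[2];  /* newest completed frame      */
static int frame_ready = 0;                  /* new frame since last vblank? */

/* Renderer side: never blocked. Finish a frame, mark it ready, keep going. */
void triple_buffered_render_loop(void)
{
    for (;;) {
        render_frame(tb_draw, 0x00FFFFFFu);
        framebuffer *tmp = tb_ready;   /* the finished frame becomes "ready" */
        tb_ready = tb_draw;
        tb_draw  = tmp;                /* the stale frame gets drawn over next */
        frame_ready = 1;
    }
}

/* Display side: called once per vertical refresh. Show the newest complete
 * frame; if nothing new finished since the last refresh, keep the old one. */
void on_vblank(void)
{
    if (frame_ready) {
        framebuffer *tmp = tb_front;
        tb_front = tb_ready;           /* newest frame goes to the monitor   */
        tb_ready = tmp;                /* old front is free to be drawn over */
        frame_ready = 0;
    }
}
```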

In other words, with triple buffering we get the same high actual performance and similar decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled.

Now, it is important to note that when you look at the "frame rate" of a triple buffered game, you will not see the actual "performance." This is because frame counters like FRAPS only count the number of times the front buffer (the one currently being sent to the monitor) is swapped out. In double buffering, this happens with every frame, even if the next frame is completed after the monitor has finished receiving and drawing the current one (meaning that frame might not be displayed at all if another frame is completed before the next refresh). With triple buffering, front buffer swaps only happen at most once per vertical refresh.

With triple buffering, the software is still drawing the entire time behind the scenes on the two back buffers. This means that when the front buffer swap happens, unlike with double buffering and vsync, we don't have an artificial delay. And unlike with double buffering without vsync, once we start sending a fully rendered frame to the monitor, we don't switch to another frame in the middle.

This last point does bring up the one issue with triple buffering. When double buffering without vsync, a frame that completes just a tiny bit after the refresh begins will tear near the top, and the rest of that frame will carry a bit less lag for most of that refresh than with triple buffering, which would have to finish drawing the frame it had already started. Even in this case, though, at least part of the frame will be exactly the same between the double buffered and triple buffered output, the delay won't be significant, and it won't have any carryover impact on future frames like enabling vsync on double buffering does. And even if you count this as an advantage of double buffering without vsync, the advantage only appears below a potential tear.

Let's help bring the idea home with an example comparison of rendering using each of these three methods.

Comments

  • davidri - Sunday, July 26, 2009 - link

    I'll just stick to vsync on with whatever the default frame buffer is in the Nvidia control panel. I get very good performance (most games I play run at 60fps) and no vertical tearing. I don't care to deal with third party apps.
  • griffhamlin - Thursday, July 16, 2009 - link

    You can force triple buffering in DX. Some apps exist for that.
    D3DOverrider, for one...
  • Muhammed - Wednesday, July 8, 2009 - link

    I must say, excellent article up to page number 2; after that things started to get REAL messy.

    I don't consider myself too stupid, nor too genius, but I am confident I am smart, and everything was fine till the second page, where you explained the principles of the idea. I quickly understood it from one concentrated read, but the horses example is simply HORRIBLE. I understand you didn't want to waste 9 pages on a simple thing like V-Sync, hence you wrapped up the concept quickly, but this has left us readers really confused.

    Firstly, you started slow (page 2), elaborating on every little detail, then you provided an example that should make the picture even clearer, but on the contrary... you put a lot of possibilities and new concepts into this example, and you successfully made it MORE COMPLEX, instead of SIMPLER.

    Secondly, horrible elaboration in the example made it even more convoluted; adding the complexity into the equation = HORRIBLE example.

    I am waiting for a follow up article... one with even 18 pages. I will read them all, every last letter, for this is the price of knowledge. Just remember: SIMPLIFY and ELABORATE.

    Thank you for your understanding.
  • quarup - Tuesday, July 7, 2009 - link

    The following seems confusing; could you please clarify or reword it:

    "In double buffering, this happens with every frame even if the next frames done after the monitor is finished receiving and drawing the current frame (meaning that it might not be displayed at all if another frame is completed before the next refresh)."

    It sounds like it says two contradicting things about double-buffering + vsync:
    1. a swap buffer happens once per frame
    2. a frame might be skipped if we're rendering frames too fast (this sounds more like triple buffering?)

    Also:

    "With triple buffering, front buffer swaps only happen at most once per vsync."

    Isn't this true with double buffer + vsync, too?
  • pakotlar - Sunday, July 5, 2009 - link

    Triple buffering is great, but tighter integration between the abstraction layer and developer tools, along with general programming protocols on GPUs, should (maybe with the use of a dynamic LOD system like SPVO) kill the need for triple buffering or vsync. There has to be a better solution in place today for homogeneous hardware.
  • happymanz - Saturday, July 4, 2009 - link

    Hi,

    I mostly play games at 1024x768@120hz if they are non-competitive, and 800x600\640x480@160hz if they are competitive. (I am not able to notice any tearing at 160hz)

    What settings are recommended for gamers using CRT or 120hz LCD monitors? (most people will not notice any tearing even at 120hz)

    A lot of older (and still popular) games run various versions of the Quake engine where physics are affected by the FPS (I'm no expert on the matter)

    (I have yet to see any LCD monitor getting close in terms of image quality, and so far it seems you can't have your cake and eat it as well when it comes to different types of panels)
  • urebelscum - Thursday, July 2, 2009 - link

    Nice article; I loved seeing the example. However, from reading all the comments, I think a follow up article is needed. First, another example is needed: when rendering is slower than monitor refresh. I thought I pictured what would happen, but now I'm not sure. Maybe another example, covering what happens when a game drops below the 60 fps threshold, but if the other example is clear, maybe not. The rest of the followup should include more info: basically add render ahead to the first example, and a list of which games use true triple buffering, and which use misnamed render ahead.

    The last is why I still don't use "triple buffering" all the time. It seems most games I play are calling render ahead the wrong name, so I leave the wrongly called "triple buffering" "disabled".

    Two things that probably are beyond the scope are: how to tell if a game is using true triple buffering or if it's using render ahead, and what devs need to do to use true triple buffering. (I'm following a couple open source games that say they support triple buffering, but might be using render ahead.)
  • DerekWilson - Thursday, July 2, 2009 - link

    Thanks for the feedback. I'm looking into the possibility of a follow up and appreciate your suggestions of things to look into.
  • castanza - Tuesday, June 30, 2009 - link

    Enabling triple buffering whenever possible is not the right idea.

    Again, the choices are:
    1) double buffer w/o vsync
    2) double buffer w/ vsync
    3) triple buffer w/ vsync

    Suppose your screen refreshes @ 60 Hz (pretty common now).

    The key question is: can your machine pump out 60 fps consistently for the game in question?

    If it can, then you probably want double buffer w/ vsync. I enabled this setting in L4D and I can enjoy NO TEARING with minimum lag because L4D gives very nice frame rates on my machine, not often dipping below 60 fps.

    If it can't, then your choice depends on how well you can tolerate lag in this particular game. In some games you may not notice it, in others you will. I find it less noticeable in driving games than fps games for example. I tried this setting in L4D, and I found the added lag unacceptable. Anyway if you don't mind a bit of extra input lag in this particular game, then you want triple buffer w/ vsync. Otherwise, if getting rid of lag is more important than eliminating tearing, you'll choose double buffer w/o vsync.

    That's my $0.02 on this subject :)
  • Hrel - Monday, June 29, 2009 - link

    Sounds to me like the best option would be a triple buffer with a rendering queue, coupled with a rendered-frames management feature.

    From this, I THINK that to get the best image, a COMPLETED frame should be shown every single refresh of the monitor.

    And in order to reduce lag as much as possible, the most recent fully rendered frame should be put out to the monitor; and all the older frames should be thrown out. Which means skipping could occur but with proper management it should be so minor that we really won't notice.

    Especially as monitor refresh rates go up to 120HZ and beyond.

    Comments anyone???
