Wrapping It Up

So there you have it. Triple buffering gives you all the benefits of double buffering with no vsync in addition to all the benefits of enabling vsync. We get smooth, complete frames with no tearing. These frames are swapped to the front buffer only on a refresh, yet at the start of output to the monitor they carry just as little input lag as double buffering with no vsync. Even though reported "performance" doesn't always reflect it, with triple buffering the graphics hardware is working just as hard as it does with double buffering and no vsync, and the end user gets all the benefit without the potential downside. Triple buffering does take up a handful of extra memory on the graphics hardware, but on modern hardware this is not a significant issue.
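To make the page flipping concrete, here is a minimal C sketch of the bookkeeping involved, assuming rendering runs faster than the monitor's refresh. The tick loop, the buffer indices, and the "every third tick is a vertical refresh" rule are invented simulation stand-ins, not any real graphics API.

    #include <stdio.h>

    /* Hypothetical simulation of triple-buffered page flipping: one front
     * buffer being scanned out and two back buffers. The GPU always renders
     * into the back buffer that does NOT hold the newest completed frame,
     * so it never waits on vsync, and stale frames are simply dropped. */
    int main(void) {
        int front   = 0;   /* buffer currently being sent to the monitor     */
        int newest  = -1;  /* back buffer holding the newest completed frame */
        int drawing = 1;   /* back buffer being rendered into right now      */

        for (int tick = 0, frame = 0; tick < 9; tick++) {
            /* The GPU finishes a frame; any older unshown frame is dropped. */
            printf("frame %d completed in buffer %d\n", frame++, drawing);
            newest  = drawing;
            drawing = 3 - front - newest;  /* the remaining buffer (0+1+2 == 3) */

            if (tick % 3 == 2) {           /* pretend a vertical refresh occurs */
                front  = newest;           /* flip the newest completed frame   */
                newest = -1;
                printf("  vsync: buffer %d is now the front buffer\n", front);
            }
        }
        return 0;
    }

Each simulated vsync flips whichever back buffer holds the newest completed frame, and completed frames that were never flipped are quietly discarded, which is the behavior argued for above.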

Just to recap, here is how the three approaches from our previous example stack up side by side.

 


[Diagram: Triple Buffering]

[Diagram: Double Buffering]

[Diagram: Double Buffering with vsync]

We've presented the qualitative argument and the quantitative argument in support of triple buffering. So, now the question is: does this data change things? Are people going to start looking for the triple buffering option more often now that they have this information? Let's find out.


Regardless of the results, we do hope that this article has been helpful in explaining an often overlooked option. While it might not be something we test with because of the issues with measuring performance, triple buffering is the setting we prefer to play with, and we hope we've shown our readers why they should give it a shot as well.

We also hope more developers will start making triple buffering the default option in their games, as it will deliver the best experience to gamers interested in both quality and performance. Only a handful of games include triple buffering as a built-in option, and NVIDIA and AMD drivers currently only allow forcing triple buffering in OpenGL games. This really needs to change, as there is no reason we shouldn't see pervasive triple buffering today.


UPDATE: There has been a lot of discussion in the comments about the differences between the page flipping method we are discussing in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped, which means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX; it implements render ahead (from 0 to 8 frames, with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames when they are outdated; render ahead forces older frames to be displayed. Queues can help with smoothness and stuttering, as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is lag: the more frames in the queue, the longer it takes to empty the queue and the older the frames that are displayed.
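For contrast, here is an equally hypothetical C sketch of a render ahead (flip) queue. It is not a description of DirectX's actual implementation; it only illustrates the lag argument above: frames are never dropped and the display always takes the oldest queued frame.

    #include <stdio.h>

    #define QUEUE_DEPTH 3   /* hypothetical render ahead depth */

    int main(void) {
        int queue[QUEUE_DEPTH];            /* ring buffer of completed frames */
        int head = 0, count = 0;
        int next_frame = 0;

        for (int refresh = 0; refresh < 6; refresh++) {
            /* Between refreshes the GPU renders until the queue is full,
             * then it must stall, because frames cannot be discarded. */
            while (count < QUEUE_DEPTH) {
                queue[(head + count) % QUEUE_DEPTH] = next_frame++;
                count++;
            }
            /* At the vertical refresh, display the OLDEST frame in the queue. */
            int shown = queue[head];
            head = (head + 1) % QUEUE_DEPTH;
            count--;
            printf("refresh %d: showing frame %d (newest rendered: %d)\n",
                   refresh, shown, next_frame - 1);
        }
        return 0;
    }

Running this shows the displayed frame trailing the newest rendered frame by two frames, and that gap scales with the queue depth, which is exactly the lag cost described above.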

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed but to drop them if they are not (if they get too old). This requires a little more intelligent management of already rendered frames and goes a bit beyond the scope of this article.
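Purely as an illustration of that idea, and not anything specified in this article, here is one hypothetical way such management could look in C. The queue size, the age threshold, and the per-refresh completion counts are all invented.

    #include <stdio.h>

    /* Sketch: keep up to two completed frames queued for smoothness, but
     * drop a queued frame once it has waited a full refresh and a newer
     * completed frame is available behind it. */
    int main(void) {
        int id[2], age[2], count = 0, next = 0;          /* index 0 = oldest  */
        int done_per_refresh[6] = { 2, 2, 0, 1, 2, 0 };  /* fast, then a hiccup */

        for (int r = 0; r < 6; r++) {
            for (int i = 0; i < count; i++) age[i]++;          /* age the queue */
            for (int c = 0; c < done_per_refresh[r]; c++) {    /* newly done    */
                if (count == 2) { id[0] = id[1]; age[0] = age[1]; count = 1; }
                id[count] = next++; age[count] = 0; count++;
            }
            /* Drop the oldest queued frame if it is stale and a newer exists. */
            if (count > 1 && age[0] >= 1) { id[0] = id[1]; age[0] = age[1]; count = 1; }
            if (count > 0) {
                printf("refresh %d: show frame %d (waited %d refreshes)\n",
                       r, id[0], age[0]);
                /* Consume the shown frame; keep anything newer for next time. */
                if (count > 1) { id[0] = id[1]; age[0] = age[1]; count = 1; }
                else count = 0;
            } else {
                printf("refresh %d: nothing new, previous frame stays on screen\n", r);
            }
        }
        return 0;
    }

With the simulated hiccup at the third refresh, the frame held over from the previous interval gets displayed instead of a repeat, while frames that were overtaken by newer ones are dropped rather than adding lag.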

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject, and under certain circumstances this setup will perform the same as the triple buffering we have described (though definitely not when the framerate is higher than the refresh rate).

Both techniques allow the graphics card to continue doing work while waiting for a vertical refresh once one frame is already completed. With double buffering (and no render queue), when vsync is enabled, nothing else can be rendered after a frame is completed, which can cause stalling and degrade actual performance.

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate, though it requires a few old frames to be kept around. This can keep the instantaneous framerate from dipping in some cases, but it will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multi-GPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering"; that name should be reserved for the technique we've described here, in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We realize this causes confusion, and we very much hope that this article and discussion help to alleviate the problem.

Comments

  • davidri - Sunday, July 26, 2009 - link

    I'll just stick to vsync on with whatever the default frame buffer is in the Nvidia control panel. I get very good performance (most games I play run at 60fps) and no vertical tearing. I don't care to deal with third party apps.
  • griffhamlin - Thursday, July 16, 2009 - link

    you can force triple buffering in DX. some appz exist for that.
    D3DOverrider, for one...
  • Muhammed - Wednesday, July 8, 2009 - link

    I must say, excellent article up to page number 2; after that things started to get REAL messy.

    I don't consider myself too stupid, nor too genius, but I am confident I am smart, and everything was fine till the second page, where you explained the principles of the idea. I quickly understood it from one concentrated read, but the horses example is simply HORRIBLE. I understand you didn't want to waste 9 pages on a simple thing like V-Sync, so you wrapped up the concept quickly, but this has left us readers really confused.

    Firstly, you started slow (page 2), elaborating on every little detail, then you provided an example that should make the picture even clearer, but on the contrary.. you put a lot of possibilities and new concepts into this example, and you successfully made it MORE COMPLEX, instead of SIMPLER.

    Secondly, horrible elaboration in the example made it even more convoluted; adding the complexity into the equation = HORRIBLE example.

    I am waiting for a follow up article.. one with even 18 pages. I will read them all.. every last letter, for this is the price of knowledge. Just remember: SIMPLIFY and ELABORATE.

    Thank you for your understanding.
  • quarup - Tuesday, July 7, 2009 - link

    The following seems confusing; could you please clarify or reword it:

    "In double buffering, this happens with every frame even if the next frames done after the monitor is finished receiving and drawing the current frame (meaning that it might not be displayed at all if another frame is completed before the next refresh)."

    It sounds like it says two contradicting things about double-buffering + vsync:
    1. a swap buffer happens once per frame
    2. a frame might be skipped if we're rendering frames too fast (this sounds more like triple buffering?)

    Also:

    "With triple buffering, front buffer swaps only happen at most once per vsync."

    Isn't this true with double buffer + vsync, too?
  • pakotlar - Sunday, July 5, 2009 - link

    t.buffering is great, but tighter integration between the abstraction layer and developer tools, along with general programming protocols on GPUs, should (maybe with the use of a dynamic LOD system like SPVO) kill the need for t.buffering or vsync. there has to be a better solution in place today for homogeneous hardware.
  • happymanz - Saturday, July 4, 2009 - link

    Hi,

    I mostly play games at 1024x768@120Hz if they are non-competitive, and 800x600/640x480@160Hz if they are competitive. (I am not able to notice any tearing at 160Hz)

    What settings are recommended for gamers using CRT or 120Hz LCD monitors? (most people will not notice any tearing even at 120Hz)

    A lot of older (and still popular) games run various versions of the Quake engine, where physics are affected by the FPS (I'm no expert on the matter).

    (I have yet to see any LCD monitor get close in terms of image quality, and so far it seems you can't have your cake and eat it as well when it comes to different types of panels)
  • urebelscum - Thursday, July 2, 2009 - link

    Nice article; I loved seeing the example. However, from reading all the comments, I think a follow up article is needed. First, another example is needed: when rendering is slower than monitor refresh. I thought I pictured what would happen, but now I'm not sure. Maybe another example, covering what happens when a game drops below the 60 fps threshold, but if the other example is clear, maybe not. The rest of the followup should include more info: basically add render ahead to the first example, and a list of which games use true triple buffering, and which use mis-named render ahead.

    The last is why I still don't use "triple buffering" all the time. It seems most games I play are calling render ahead by the wrong name, so I leave the wrongly called "triple buffering" "disabled".

    Two things that probably are beyond the scope are: how to tell if a game is using true triple buffering or if it's using render ahead, and what devs need to do to use true triple buffering. (I'm following a couple open source games that say they support triple buffering, but might be using render ahead.)
  • DerekWilson - Thursday, July 2, 2009 - link

    Thanks for the feedback. I'm looking into the possibility of a follow up and appreciate your suggestions of things to look into.
  • castanza - Tuesday, June 30, 2009 - link

    Enabling triple buffering whenever possible is not the right idea.

    Again, the choices are:
    1) double buffer w/o vsync
    2) double buffer w/ vsync
    3) triple buffer w/ vsync

    Suppose your screen refreshes @ 60 Hz (pretty common now).

    The key question is: can your machine pump out 60 fps consistently for the game in question?

    If it can, then you probably want double buffer w/ vsync. I enabled this setting in L4D and I can enjoy NO TEARING with minimum lag because L4D gives very nice frame rates on my machine, not often dipping below 60 fps.

    If it can't, then your choice depends on how well you can tolerate lag in this particular game. In some games you may not notice it, in others you will. I find it less noticeable in driving games than fps games for example. I tried this setting in L4D, and I found the added lag unacceptable. Anyway if you don't mind a bit of extra input lag in this particular game, then you want triple buffer w/ vsync. Otherwise, if getting rid of lag is more important than eliminating tearing, you'll choose double buffer w/o vsync.

    That's my $0.02 on this subject :)
  • Hrel - Monday, June 29, 2009 - link

    Sounds to me like the best option would be a triple buffer with a rendering queue, coupled with a rendered-frames management feature.

    From this, I THINK that to get the best image a COMPLETED frame should be shown every single refresh of the monitor.

    And in order to reduce lag as much as possible, the most recent fully rendered frame should be put out to the monitor, and all the older frames should be thrown out. That means skipping could occur, but with proper management it should be so minor that we really won't notice.

    Especially as monitor refresh rates go up to 120Hz and beyond.

    Comments anyone???
