Wrapping It Up

So there you have it. Triple buffering gives you all the benefits of double buffering with vsync disabled in addition to all the benefits of enabling vsync: smooth, complete frames with no tearing. Frames are swapped to the front buffer only on refresh, yet at the start of scan-out they carry just as little input lag as double buffering with no vsync. Even though "performance" doesn't always get reported correctly with triple buffering, the graphics hardware works just as hard as it does with double buffering and no vsync, and the end user gets all of the benefit without the potential downside. Triple buffering does take up a handful of extra memory on the graphics hardware, but on modern hardware this is not a significant cost.
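As a rough illustration of the page-flipping behavior just described (a simplified model of our own, not actual driver code), the key property is that rendering never blocks and each vertical refresh promotes the newest completed frame:

```python
# Minimal sketch of page-flip triple buffering: the GPU always has a
# spare back buffer to draw into, and at each vertical refresh the
# display takes whichever buffer holds the newest COMPLETED frame.

class TripleBuffer:
    def __init__(self):
        self.front = None   # frame currently being scanned out
        self.ready = None   # newest completed frame, awaiting vsync

    def finish_frame(self, frame_id):
        # Rendering never blocks: a newly completed frame simply
        # replaces (drops) any older frame still waiting for vsync.
        self.ready = frame_id

    def vsync(self):
        # At refresh, promote the newest completed frame (if any).
        if self.ready is not None:
            self.front = self.ready
            self.ready = None
        return self.front

tb = TripleBuffer()
tb.finish_frame(1)
tb.finish_frame(2)        # frame 1 is dropped -- already outdated
assert tb.vsync() == 2    # the monitor shows the newest frame
```

This is why input lag stays low: the frame that reaches the screen is always the most recent one the GPU managed to finish, never a stale one.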

To recap, here is how the three rendering timelines from our previous example stack up side by side.

[Diagram: Triple Buffering]

[Diagram: Double Buffering]

[Diagram: Double Buffering with vsync]
We've presented both the qualitative and the quantitative argument in support of triple buffering. So, now the question is: does this data change things? Are people going to start looking for the triple buffering option more often now? Let's find out.


Regardless of the results, we hope this article has been helpful in explaining an often overlooked option. While triple buffering might not be something we test with, because of the issues with measuring its performance, it is the setting we prefer to play with, and we hope we've shown our readers why they should give it a shot as well.

We also hope more developers will start making triple buffering the default option in their games, as it delivers the best experience to gamers interested in both quality and performance. Only a handful of games include triple buffering as a built-in option, and NVIDIA and AMD drivers currently only allow forcing triple buffering in OpenGL games. This really needs to change, as there is no reason we shouldn't see pervasive triple buffering today.


UPDATE: There has been a lot of discussion in the comments about the differences between the page-flipping method we are discussing in this article and implementations of a render-ahead queue. In render ahead, frames cannot be dropped, which means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX; it implements render ahead (from 0 to 8 frames, with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames once they are outdated; render ahead forces older frames to be displayed. Queues can improve smoothness and reduce stuttering, as a few very quick frames followed by a slow frame end up being evened out and spread over more refreshes. But the price you pay is lag: the more frames in the queue, the longer it takes to empty, and the older the frames are that get displayed.
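To make the contrast concrete, here is a minimal sketch (an illustration of our own, not any driver's actual code) of a strict render-ahead queue; compare its FIFO behavior, where the oldest frame is always shown and a full queue stalls the GPU, with the drop-the-oldest page flipping described above.

```python
# Sketch of a strict render-ahead (flip) queue: completed frames queue
# up FIFO, none are dropped, and a full queue stalls rendering.

from collections import deque

class RenderAheadQueue:
    def __init__(self, depth=3):
        self.queue = deque()
        self.depth = depth   # DirectX allows 0-8, with 3 the default

    def finish_frame(self, frame_id):
        if len(self.queue) < self.depth:
            self.queue.append(frame_id)
            return True      # GPU may start on the next frame
        return False         # queue full: rendering stalls

    def vsync(self):
        # At refresh the OLDEST queued frame is displayed, not the newest.
        return self.queue.popleft() if self.queue else None

q = RenderAheadQueue(depth=3)
for f in (1, 2, 3):
    q.finish_frame(f)
q.finish_frame(4)         # returns False: queue full, the GPU must wait
assert q.vsync() == 1     # oldest frame shown -> maximum lag
```

With the queue full, the frame on screen is three refreshes behind the newest completed frame, which is exactly the extra lag described above.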

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed, but to drop them if they get too old. This requires somewhat more intelligent management of already rendered frames and goes a bit beyond the scope of this article.
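One hypothetical way to sketch that idea (the class name and age threshold are our own invention, not a real API): keep a short queue for smoothness, but evict frames that have waited through too many refreshes.

```python
# Hybrid sketch: a short queue smooths framerate, but frames that have
# waited too many refresh intervals are dropped to limit lag.

from collections import deque

MAX_AGE = 2  # assumed threshold, in refresh intervals

class AgeLimitedQueue:
    def __init__(self):
        self.queue = deque()   # entries are (frame_id, age_in_refreshes)

    def finish_frame(self, frame_id):
        self.queue.append((frame_id, 0))

    def vsync(self):
        # Evict stale frames, but always keep at least the newest one
        # so there is something to show at this refresh.
        while len(self.queue) > 1 and self.queue[0][1] >= MAX_AGE:
            self.queue.popleft()
        shown = self.queue.popleft()[0] if self.queue else None
        # Everything still waiting ages by one refresh interval.
        self.queue = deque((f, a + 1) for f, a in self.queue)
        return shown

q = AgeLimitedQueue()
for f in (1, 2, 3, 4):
    q.finish_frame(f)
# Frames display in order, but frame 3 waits too long and is skipped
# in favour of the fresher frame 4.
shown = [q.vsync(), q.vsync(), q.vsync()]   # [1, 2, 4]
```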

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject and under certain circumstances this setup will perform the same as triple buffering as we have described it (but definitely not when framerate is higher than refresh rate).

Both techniques allow the graphics card to keep working while waiting for a vertical refresh once one frame is already completed. With double buffering (and no render queue) under vsync, nothing else can be rendered after a frame is completed, which stalls the pipeline and degrades actual performance.

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate, though it requires keeping a few older frames around. This can keep instantaneous framerate from dipping in some cases, but it will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multi-GPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering," as that should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize that this can cause confusion, and we very much hope that this article and discussion help to alleviate this problem.

Comments (184)

  • greylica - Friday, June 26, 2009 - link

    I always use triple buffering in OpenGL apps, and the performance is superb, until Vista/7 came and crippled my hardware with Vsync enabled by default. This sh*t of hell Microsoft invention crippled my flawless GTX 285 to a mere 1/3 of the performance in OpenGL in the two betas I have tested.

    Thanks to GNU/Linux I have at least one chance to be free of the issue and use my 3D apps with full speed.
  • The0ne - Friday, June 26, 2009 - link

    Love your comment lol
  • JonP382 - Friday, June 26, 2009 - link

    I always avoided triple buffering because it introduced input lag for me. I guess the implementation that ATI and Nvidia have for OpenGL is not the same as this one. Too bad. :(

    I'm going to try triple buffering in L4D and TF2 later today, but I'm just curious if their implementation is the same as the one promoted in this article?
  • DerekWilson - Friday, June 26, 2009 - link

    I haven't spoken with Valve, but I suspect their implementation is good and should perform as expected.
  • JonP382 - Friday, June 26, 2009 - link

    Same old story - I get even more input lag on triple buffering than on double buffering. :(
  • JonP382 - Friday, June 26, 2009 - link

    I should say that triple buffering introduced additional lag. Vsync itself introduces an enormous amount of input lag and drives me insane. But I do hate tearing...
  • prophet001 - Friday, June 26, 2009 - link

    one of the best articles i've read on here in a long time. i knew what vsync did as far as degrading performance (only in that it waited for the frame to be complete before displaying) but i never knew how double and triple buffering actually worked. triple buff from here on out

    4.9 out of 5.0 :-D
    (but only b/c nobody gets a 5.0 lol)
    thank you

    Preston
  • danielk - Friday, June 26, 2009 - link

    This was an excellent article!

    While im a gamer, i dont know much about the settings i "should" be running for optimal FPS vs. quality. I've run with vsync on as thats been the only remedy ive found for tearing, but had it set to "always on" in the gfx driver, as i didnt know better.

    Naturally, triple buffering will be on from here on.

    I would love to see more info about the different settings(anti aliasing etc) and their impact on FPS and image quality in future articles.

    Actually, if anyone has a good guide to link, i would appreciate it!


    Regards,
    Daniel
  • DerekWilson - Friday, June 26, 2009 - link

    keep in mind that you can't force triple buffering on in DirectX games from the control panel (yet - hopefully). It works for OpenGL though.

    For DX games, there are utilities out there that can force the option on for most games, but I haven't done in-depth testing with these utilities, so I'm not sure of the specifics of how they work/what they do and whether it is a good implementation.

    The very best option (as with all other situations) is to find an in-game setting for triple buffering, which many developers unfortunately do not include (but hopefully that trend is changing).
  • psychobriggsy - Friday, June 26, 2009 - link

    I can see the arguments for triple buffering when the rendered frame rate is above the display frame rate. Of course a lot of work is wasted with this method, especially with your 300fps example.

    However I've been drawing out sub-display-rate examples on paper here to match your examples, and it's really not better than VsyncDB apart from the odd frame here and there.

    What appears to be the best solution is for a game to time each frame's rendering (on an ongoing basis) and adjust when it starts rendering the frame so that it finishes rendering just before the Vsync. I will call this "Adaptive Vsync Double Buffering", which uses the previous frame rendering time to work out when to render the next frame so that what is displayed is up to date, but work is reduced.

    In the meantime, let's work on getting 120fps monitors, in terms of the input signal. That would be the best way to reduce input lag in my opinion.
