Wrapping It Up

So there you have it. Triple buffering gives you all the benefits of double buffering with no vsync in addition to all the benefits of enabling vsync. We get smooth, whole frames with no tearing. These frames are swapped to the front buffer only on refresh, yet, at the start of output to the monitor, they carry just as little input lag as double buffering with no vsync. Even though "performance" doesn't always get reported correctly with triple buffering, the graphics hardware is working just as hard as it does with double buffering and no vsync, and the end user gets all the benefits without the potential downsides. Triple buffering does take up a handful of extra memory on the graphics hardware, but on modern hardware this is not a significant issue.
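To make the page-flipping logic concrete, here is a minimal sketch in C++-flavored pseudocode. The buffer type, the renderer, and the vblank hook are illustrative assumptions rather than a real graphics API, and thread synchronization is elided:

```cpp
#include <atomic>
#include <utility>

// Hypothetical buffer type and renderer; not a real graphics API.
struct Framebuffer { /* pixel storage */ };
void renderSceneInto(Framebuffer&);

Framebuffer buffers[3];
int front   = 0;  // buffer currently being scanned out to the monitor
int ready   = 1;  // most recently completed frame, waiting to be shown
int drawing = 2;  // buffer the renderer is currently drawing into
std::atomic<bool> newFrameReady{false};

// Render loop: never waits on the monitor. When a frame finishes it
// becomes the "ready" frame; an older unshown frame is simply dropped.
void renderLoop() {
    for (;;) {
        renderSceneInto(buffers[drawing]);
        std::swap(drawing, ready);  // synchronization elided for clarity
        newFrameReady = true;
    }
}

// At each vertical refresh, flip to the newest completed frame: whole
// frames only (no tearing), and always the most recent one (low lag).
void onVerticalBlank() {
    if (newFrameReady.exchange(false))
        std::swap(front, ready);
}
```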

Just to recap, here is how the three approaches from our previous example stack up side by side.

[Figure: Triple Buffering]

[Figure: Double Buffering]

[Figure: Double Buffering with vsync]

We've presented the qualitative argument and the quantitative argument in support of triple buffering. So now the question is: does this data change things? Are people going to start looking for that triple buffering option more often now that they have this information? Let's find out.

[Poll 135]

Regardless of the results, we do hope that this article has been helpful in explaining an often overlooked option. While it might not be something we test with, because of the issues with measuring performance, triple buffering is the setting we prefer to play with. We hope we've helped show our readers why they should give triple buffering a shot as well.

We also hope more developers will start making triple buffering the default option in their games, as it will deliver the best experience to gamers interested in both quality and performance. Only a handful of games include triple buffering as a built-in option, and NVIDIA and AMD drivers currently only allow forcing triple buffering in OpenGL games. This really needs to change, as there is no reason we shouldn't see pervasive triple buffering today.


UPDATE: There has been a lot of discussion in the comments about the differences between the page flipping method we are discussing in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX; it implements render ahead (from 0 to 8 frames, with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help with smoothness and stuttering, as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames that are displayed).
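To illustrate the distinction, here is a hedged sketch in the same hypothetical style (invented types, no real API, synchronization omitted): render ahead keeps every frame in FIFO order and stalls the renderer when full, while the page-flipping method holds at most one pending frame and overwrites it when a newer one completes.

```cpp
#include <cstddef>
#include <deque>

struct Frame { /* completed image (hypothetical) */ };

// Render ahead (flip queue): strict FIFO, nothing is ever dropped. When
// the queue is full the renderer stalls, and the frame shown at each
// refresh is the oldest queued one -- deeper queues mean more lag.
struct RenderAhead {
    std::deque<Frame> queue;
    std::size_t maxDepth = 3;  // DirectX's default render-ahead depth

    bool canRender() const { return queue.size() < maxDepth; }
    void frameCompleted(const Frame& f) { queue.push_back(f); }
    Frame frameForRefresh() {  // assumes a frame is available
        Frame oldest = queue.front();
        queue.pop_front();
        return oldest;
    }
};

// Page-flipping triple buffering as described in this article: at most
// one pending frame, and a newer completed frame replaces (drops) an
// unshown older one, so the displayed frame is always the newest.
struct TripleBuffer {
    Frame pending;
    bool hasPending = false;

    bool canRender() const { return true; }  // the renderer never stalls
    void frameCompleted(const Frame& f) { pending = f; hasPending = true; }
    Frame frameForRefresh() { hasPending = false; return pending; }
};
```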

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed, but to drop them if they get too old. This requires somewhat more intelligent management of already rendered frames and goes a bit beyond the scope of this article.
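Though the details are out of scope here, one possible shape of such a hybrid, purely as an illustrative sketch with invented names and thresholds, might look like this:

```cpp
#include <chrono>
#include <cstddef>
#include <deque>

using Clock = std::chrono::steady_clock;

struct TimedFrame {
    Clock::time_point completed;  // when rendering finished
    /* ...frame data (hypothetical)... */
};

// Keep up to maxDepth completed frames for smoothing, but at refresh
// time discard any that have grown older than maxAge rather than
// displaying them. All names and thresholds here are assumptions.
struct BoundedDroppingQueue {
    std::deque<TimedFrame> frames;
    std::size_t maxDepth = 3;
    Clock::duration maxAge = std::chrono::milliseconds(33);

    void frameCompleted(const TimedFrame& f) {
        if (frames.size() == maxDepth)
            frames.pop_front();  // full: drop the oldest, don't stall
        frames.push_back(f);
    }

    TimedFrame frameForRefresh() {  // assumes at least one frame queued
        while (frames.size() > 1 &&
               Clock::now() - frames.front().completed > maxAge)
            frames.pop_front();  // stale: drop instead of display
        TimedFrame shown = frames.front();
        frames.pop_front();
        return shown;
    }
};
```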

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three buffers in total). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject, and under certain circumstances this setup will perform the same as the triple buffering we have described (but definitely not when framerate is higher than refresh rate).

Both techniques allow the graphics card to continue working while waiting for a vertical refresh when one frame is already completed. With double buffering (and no render queue) while vertical sync is enabled, nothing else can be rendered after one frame is completed, which can cause stalling and degrade actual performance.
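In the same hypothetical pseudocode, that stall looks like this (waitForVerticalBlank and friends are invented names, not a real API):

```cpp
struct Framebuffer { /* pixel storage (hypothetical) */ };
Framebuffer frontBuffer, backBuffer;

void renderSceneInto(Framebuffer&);  // hypothetical renderer
void waitForVerticalBlank();         // hypothetical vsync wait
void swapFrontAndBack();             // hypothetical page flip

// With plain double buffering and vsync enabled, once the back buffer
// holds a finished frame there is nowhere else to draw: the hardware
// idles until the flip at the next refresh.
void doubleBufferedVsyncLoop() {
    for (;;) {
        renderSceneInto(backBuffer);
        waitForVerticalBlank();  // stall: back buffer full, flip pending
        swapFrontAndBack();
    }
}
```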

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate, even though it requires keeping a few old frames around. This can keep instantaneous framerate from dipping in some cases, but it will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multi-GPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering"; that name should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize this is confusing, and we very much hope that this article and discussion help to alleviate the problem.

Comments

  • DerekWilson - Friday, June 26, 2009 - link

    I do make the claim that it's always better, but just wanted to use one example for simplicity's sake (the 300 fps example).

    at lower refresh rates, the general case for performance is still the same as double buffering without vsync (which starts rendering the same frame that triple buffering would start rendering) ... and it still has the smoothness and lack of tearing of double buffering with vsync.
  • james jwb - Friday, June 26, 2009 - link

    what about when 120Hz LCDs come out and a game can provide 120 fps as a minimum? surely double buffering is the best case here, or will triple buffering perform exactly the same in this case?
  • JimmiG - Friday, June 26, 2009 - link

    The reason double buffering still prevails is probably because when current 3D standards were set during the late '90s, video memory was at a premium.

    For example, at 1024x768 (the standard resolution at the time), each buffer would take up 1.5 MB at 16bpp and 3 MB at 32bpp. Not a lot today, but back then 8-16 MB cards were the norm. If a game was designed so that VRAM usage would peak at 16 MB, adding a couple of MB for another buffer would kill performance. So the general idea was that "Yeah, sure, triple buffering is nice, but it uses too much memory," and that idea has kind of stuck.
  • fiveday - Friday, June 26, 2009 - link

    There are major advantages to using Triple Buffering, but a few points explain why it's not automatically adopted.

    One big one is lag. Now, if things are pretty well lag-free under double buffering, no sweat. However, there's no getting around the fact that by adding an extra frame, you're adding 1/3 extra processing time between the frame being drawn and appearing on your display. If the game's pretty lag-free already, you'll never know the difference. If the game is already prone to some sort of input lag, it's about to get worse. How much worse depends on the game itself... and in some cases it can drastically soften up your controls. It can be tricky to predict how much impact it will have, if any... a point I'll return to in the conclusion.

    Another issue is memory usage. In a perfect world, every system will always have adequate texture memory to accommodate triple buffering. Is it a perfect world? Nope. And if your graphics card is getting thin on RAM, get ready for a performance hit. How much? Maybe none, maybe a lot. Which brings me to my last point.

    Whether or not you'll see these adverse effects from using Triple Buffering depends partly on the game itself, how it was written, and partly on your own system configuration. Now, the developers are responsible for their own software, but there's no telling what kind of system an end user is going to try to run the game on. These days, a 4670 graphics card and a Phenom X2, while seemingly meager, are enough to get most games out there plenty playable... but there are still folks out there trying to run a game like Bioshock on a Radeon 9700 Pro (what's SM2.0, they cry!?!). Lord forbid someone try to use their laptop to play a game.

    By the way... SLI and X-Fire setups tend to HATE triple buffering.

    So you see... the developers have a tough enough time as it is getting their games playable on an extremely unpredictable variety of systems. Triple Buffering, while it has its advantages, simply introduces further risk of poor performance on a lot of systems out there. Should it be automatically enabled? Nope.

    But should it be available as an option? These days, I see no reason why not. The original Unreal and UT engines offered it as an option, and that was ten years ago. Bring it back for those of us who want to take a crack at it.
  • DerekWilson - Friday, June 26, 2009 - link

    you are correct that SLI and CrossFire don't play well with triple buffering... but then there have been plenty of oddities no matter what page flipping method we want to use.

    but enabling triple buffering does NOT add an additional latency penalty over double buffering unless double buffering visibly tears and you are talking about the rest of the frame ... double buffering and triple buffering start rendering the same frame every time.

    there is no frame inserted into the pipeline, as it's not a pipeline -- what you are describing is more like DirectX's default 3 frame render ahead, which has much higher potential to add latency than triple buffering (when we are talking about the page flipping method and not just "having three buffers").
  • sbuckler - Friday, June 26, 2009 - link

    If tearing is not a problem then you are better off double buffering with vsync off. Turn on triple buffering and you introduce another 16.6ms of display lag which matters in a fast fps.
  • DerekWilson - Friday, June 26, 2009 - link

    you do NOT automatically incur a one frame lag -- you have at most an additional one frame lag.

    as i explained, especially in fast shooters, triple buffering and double buffering with no vsync begin rendering the exact same frame even if double buffering without vsync switches to a newer frame at some point.

    and when tearing doesn't "happen" (read: isn't noticeable) then that means the updated frames were not different enough to really matter anyway (otherwise you would see the difference).

    the possible advantage of double buffering could be argued when a tear happens near the top of the screen, but whether this is a real advantage is debatable.
  • BJ Eagle - Friday, June 26, 2009 - link

    what about double buffer vsync and framerate just below 30 FPS?

    I would go for double buffer vsync if there is not an equal penalty for going below 30 FPS as there is for going below 60 FPS. Simply because I don't want to waste power rendering frames I won't miss...
    The movie industry teaches us that 25-30 FPS is actually enough to fool our brains into perceiving motion. But if the lag skyrockets with vsync at below 30 FPS I guess I would go with triple buffering...
  • nafhan - Friday, June 26, 2009 - link

    With a 60Hz refresh rate, I think dropping below 30 will cause the same issues as dropping below 60. Vsync is going to update frames at intervals that divide evenly into the refresh rate. So, if 30 is not an option, then it will drop to 20 fps (a small sketch after the comments works through this).

    I think most people can tell the difference between 30 FPS and 60 FPS in a game. However, more than that really doesn't provide much benefit.
  • toyota - Friday, June 26, 2009 - link

    a game is not a movie...
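Following up on nafhan's point above about refresh rate divisors: a small self-contained model (a simplification that assumes a fixed render time per frame) shows why double buffering with vsync quantizes the displayed frame rate to 60, 30, 20, 15... on a 60Hz panel.

```cpp
#include <cmath>
#include <cstdio>

// With double buffering and vsync, every frame is held on screen for a
// whole number of refresh intervals, so the displayed rate is the
// refresh rate divided by that integer. Assumes fixed per-frame render
// time, which real games rarely have.
double effectiveFps(double refreshHz, double renderFps) {
    double refreshesPerFrame = std::ceil(refreshHz / renderFps);
    return refreshHz / refreshesPerFrame;
}

int main() {
    std::printf("59 fps rendered -> %.0f fps displayed\n",
                effectiveFps(60.0, 59.0));  // 30
    std::printf("29 fps rendered -> %.0f fps displayed\n",
                effectiveFps(60.0, 29.0));  // 20
}
```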
