Wrapping It Up

So there you have it. Triple buffering gives you all the benefits of double buffering with no vsync in addition to all the benefits of enabling vsync. We get smooth, complete frames with no tearing. These frames are swapped to the front buffer only on refresh, yet at the start of output to the monitor they carry just as little input lag as double buffering with no vsync. Even though reported "performance" numbers don't always reflect it, with triple buffering the graphics hardware is working just as hard as it does with double buffering and no vsync, and the end user gets all of the benefit without the potential downside. Triple buffering does take up a handful of extra memory on the graphics hardware, but on modern hardware this is not a significant issue.
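
To make that mechanism concrete, here is a minimal sketch of the page flipping approach, assuming made-up buffer and function names (this is not code from any real driver or graphics API, and synchronization between the two sides is simplified):

```cpp
// A minimal sketch of triple-buffered page flipping. Names are hypothetical
// and the synchronization is simplified for clarity.
#include <atomic>

struct FrameBuffer { /* pixel storage omitted */ };

FrameBuffer buffers[3];
std::atomic<int> front{0};           // buffer currently scanned out to the monitor
std::atomic<int> latestComplete{0};  // most recently finished frame

// Pick a back buffer that is neither on screen nor the newest completed frame.
// With three buffers such a buffer always exists, so the renderer never waits.
int freeBackBuffer() {
    int f = front.load(), l = latestComplete.load();
    for (int i = 0; i < 3; ++i)
        if (i != f && i != l) return i;
    return -1;  // unreachable: at most two of the three indices are excluded
}

// Renderer: runs as fast as the GPU allows, overwriting frames that will
// never be displayed instead of stalling on vsync.
void renderLoop() {
    for (;;) {
        int target = freeBackBuffer();
        // renderSceneInto(buffers[target]);  // draw the next game frame here
        latestComplete.store(target);         // publish it as the newest frame
    }
}

// Display: on every vertical refresh, flip to the newest completed frame.
// Older completed frames are simply dropped, which keeps input lag close to
// double buffering without vsync while still avoiding tearing.
void onVerticalRefresh() {
    front.store(latestComplete.load());
}
```

With double buffering and vsync, by contrast, the render loop would have to block after finishing each frame until the next flip, which is the stall discussed in the update below.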

Just to recap, here is how the three frames we looked at rendering in our previous example stack up side by side.

 


[Figure: Triple Buffering]

[Figure: Double Buffering]

[Figure: Double Buffering with vsync]

We've presented both the qualitative argument and the quantitative argument in support of triple buffering. So now the question is: does this data change things? Will more people start looking for the triple buffering option now that they have this information? Let's find out.

[Poll]

Regardless of the results, we do hope that this article has been helpful in explaining an often overlooked option. While triple buffering might not be something we test with because of the issues with measuring performance, it is the setting we prefer to play with, and we hope we've shown our readers why they should give it a shot as well.

We also hope more developers will start making triple buffering the default option in their games, as it will deliver the best experience to gamers interested in both quality and performance. Only a handful of games include triple buffering as a built-in option, and NVIDIA and AMD drivers currently only allow forcing triple buffering in OpenGL games. This really needs to change, as there is no reason we shouldn't see pervasive triple buffering today.


UPDATE: There has been a lot of discussion in the comments about the differences between the page flipping method we describe in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX; they implement render ahead (from 0 to 8 frames, with 3 being the default).

The major difference with the technique we've described here is the ability to drop frames when they are outdated; render ahead forces older frames to be displayed. Queues can help with smoothness and stuttering, as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is lag: the more frames in the queue, the longer it takes to empty the queue and the older the frames that are displayed.
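
As a rough illustration of that difference, here is a hypothetical sketch of what each approach hands to the display at a vertical refresh; the queue structure and names are invented for illustration and are not how DirectX or any driver actually implements it:

```cpp
// Hypothetical presentation-side comparison; not real DirectX or driver code.
#include <deque>

struct Frame { int id; /* image data omitted */ };
std::deque<Frame> completed;  // frames finished since the last refresh (assume non-empty)

// Render ahead (flip queue): strictly first in, first out. Frames are never
// dropped, so when the queue is full the frame on screen can be several
// frames older than the newest input the game has processed.
Frame presentRenderAhead() {
    Frame oldest = completed.front();
    completed.pop_front();
    return oldest;
}

// Triple buffering as described in this article: show the newest completed
// frame and discard the stale ones, trading those discarded frames for lag
// that stays as low as double buffering without vsync.
Frame presentTripleBuffered() {
    Frame newest = completed.back();
    completed.clear();
    return newest;
}
```

A queue that keeps a few frames but drops any that get too old, as described next, sits between these two extremes.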

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed but to drop them if they are not (if they get too old). This requires a little more intelligent management of already rendered frames and goes a bit beyond the scope of this article.

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject, and under certain circumstances this setup will perform the same as the triple buffering we have described (though definitely not when the framerate is higher than the refresh rate).

Both techniques allow the graphics card to continue doing work while waiting for a vertical refresh when one frame is already completed. With double buffering (and no render queue) and vertical sync enabled, once one frame is completed nothing else can be rendered, which can cause stalling and degrade actual performance.

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate, though it requires a few old frames to be kept around. This can keep the instantaneous framerate from dipping in some cases, but it will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multi-GPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering," as that should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize that this can cause confusion, and we very much hope that this article and discussion help to alleviate this problem.


184 Comments


  • DerekWilson - Wednesday, July 1, 2009 - link

    "skipping" doesn't occur in games like it does with a video -- there is not a set number of frames that must be rendered in a set amount of time. The action happens independently of the frames rendered in a game, while for a video you there is an exact framerate that needs to be maintained in order to see smooth motion as it was captured.

    in the old days, console games would tie the game timer to framerate, which was always set to vsync. If frame rate dropped from 60 FPS to 30, the game would actually slow down (when too much action was going on on the screen). Modern PC games do not rely on framerate to time their game; instead, each frame is a snapshot of the game at a certain time.

    if you drop all but the most recently completed frame, then you are just doing triple buffering the way this article describes.
  • ufon68 - Monday, March 28, 2016 - link

    This is not completely accurate.

    While usually, and for a good reason, the physical simulation indeed "ticks" independently of FPS, the actual gameplay logic is usually tied to the frames being displayed.
    You don't see the game slowing down or speeding up because the simulation takes the FPS into account, i.e. it multiplies everything by the delta time, which is the time it took to tick the last frame.
    For instance, to translate an object along a vector at a certain speed, you move it every frame by: Direction * Speed * DeltaTime (DeltaTime being the time it took the game to tick the last frame, in seconds; given a constant FPS of 10 [just for simplification purposes, the FPS can move up and down and this will still work], the DeltaTime is 0.1 for that frame/tick).

    So it's not correct that what you see are snapshots of what's going on in the game world; it just gives you that impression by doing this neat trick.
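
    A minimal sketch of that per-frame update (the names here are made up purely for illustration):

    ```cpp
    // Move an object by Direction * Speed * DeltaTime each frame, so its
    // world-space speed stays the same no matter how long the last frame took.
    struct Vec3 { float x, y, z; };

    void moveObject(Vec3& position, const Vec3& direction, float speed, float deltaTime) {
        // deltaTime = duration of the previous frame/tick in seconds
        // (0.1 at a steady 10 FPS, 1/60 at a steady 60 FPS).
        position.x += direction.x * speed * deltaTime;
        position.y += direction.y * speed * deltaTime;
        position.z += direction.z * speed * deltaTime;
    }
    ```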
  • VinnyV - Monday, June 29, 2009 - link

    I just wanted to say that I really appreciate this article. I think I just went from understanding about 10% of what is usually discussed on this site to about 11 or 12%. Thanks! Please post more articles like this!
  • iwodo - Monday, June 29, 2009 - link

    I see this as a DirectX problem only? Maybe we should call on Microsoft to improve it... (Too late for DirectX 11??)
  • Dospac - Sunday, June 28, 2009 - link

    Derek, it would be interesting to get to the bottom of the multi-GPU input delay issue as well as devise a quantitative way to test the delay with various setups. It's confounding that this hasn't been investigated sooner and been sorted out. The potential PQ improvement is well worth your efforts. Thank you!
  • DerekWilson - Sunday, June 28, 2009 - link

    getting to the bottom of why delay happens conceptually isn't that complex -- there are a lot of problems in inter-GPU communication and synchronization that can cause it.

    quantitative testing is possible but pretty expensive ... i'll see if i can convince Anand to invest in the equipment :-)
  • DerekWilson - Sunday, June 28, 2009 - link

    Added a note at the end of the article to try and help clear the air about the confusion between triple buffering as a page flipping method and flip queues (render ahead) with three buffers.

    I also wanted to note that this topic is not just confusing for gamers -- game developers do not always get their labeling right and sometimes incorrectly refer to flip queues as "triple buffering."

    I do apologize for not addressing this issue at publication, but I hope this helps to clear the air.
  • Touche - Sunday, June 28, 2009 - link

    Have you seen this?

    http://msdn.microsoft.com/en-us/library/ms796537.a...
    http://msdn.microsoft.com/en-us/library/ms893104.a...
  • DerekWilson - Wednesday, July 1, 2009 - link

    What they are showing is 1 frame render ahead with vsync. In MS DX terms, this is a flip chain with 2 back buffers and a present interval of one.

    They are calling it triple buffering because it uses three total buffers. This is still a flip queue and should be referred to as such to avoid confusion.
  • mikeev - Sunday, June 28, 2009 - link

    Add me to the list of people who tried triple buffering but had to turn it OFF due to the input lag.

    I ran the test in L4D anyway. It was unbearable. I couldn't hit a thing. The input lag was actually noticeably less with double buffering + vsync ON.

    Maybe I'm doing something wrong, but my results do not jibe with this article at all.
