We rarely wade into the discussion of which options gamers should always enable when they play; instead, we tend to focus on what we test with. Honestly, our recommended settings for playing the games we test would be very similar to the settings we benchmark with, with one very important exception: we would enable triple buffering (which implies vsync) whenever possible. It's not an available option in all games, but it really needs to be, and we are here to make the case for why gamers should use triple buffering and why developers need to support it.

When it comes to anything regarding vsync, gamers most often swear by forcing it off in the driver or disabling it in the game. In fact, this is what we do when benchmarking, because it allows us to see more clearly what is going on under the hood. Those who do enable vsync typically do so to avoid the visual "tearing" that can occur in some cases, in spite of the negative side effects.

We would like to try something a little different with this article. We'll include two polls, one here and one at the end of the article. This first poll is designed to report what our readers already do with respect to vsync and double versus triple buffering.

{poll 134:300}

After reading the rest of this article, our readers are invited to answer a related poll which is designed to determine if arming gamers with the information this article provides will have any impact on what settings are used from here on out.

First up will be a conceptual review of what double buffering and vsync are; then we'll talk about what triple buffering brings to the table. For those who really want the nitty gritty (or who need more convincing), we will follow that up with a deeper dive into each approach, complete with some nifty diagrams.

What are Double Buffering, vsync and Triple Buffering?

186 Comments


  • DerekWilson - Friday, June 26, 2009 - link

    Really, the argument against including the option is more complex ...

    In the past, that extra memory requirement might not have been worth it -- using that memory on a 128MB card could actually degrade performance because of the additional resource usage. We've only recently gotten beyond this as a reasonable limitation.

    Also, double buffering is often seen as "good enough," and triple buffering doesn't add any additional eye candy: triple buffering is at best only as performant as double buffering, and enabling vsync already eliminates tearing. Neither of those options requires any extra work, while triple buffering (at least under DirectX) does.

    Developers typically try to spend time on the things that they determine will be most desired by their customers or that will add the largest impact. Some developers have taken the time to start implementing triple buffering.

    But the "drawback" is development time... and part of the goal here is to convince developers that it's worth the development time investment.
    Reply
  • Frumious1 - Friday, June 26, 2009 - link

    ...this sounds fine for games running at over 60FPS - and in fact I don't think there's really that much difference between double and triple buffering in that case; 60FPS is smooth and would be great.

    The thing is, what happens if the frame rate is just under 60 FPS? It seems to me that you'll still get some benefit - i.e. you'd see the 58 FPS - but there's a delay of one frame between when the scene is rendered and when it appears on the display. You neglected to spell out that the maximum "input latency" is entirely dependent on frame rate... though I don't think it will ever be more than 17ms.

    I'm not one to state that input lag is a huge issue, provided it stays below around 20ms. I've used some LCDs that have definite lag (Samsung 245T - around 40ms I've read), and it is absolutely distracting even in normal windows use. Add another 17ms for triple buffering... well, I suppose the difference between 40ms and 57ms isn't all that huge, but neither is desirable.
    Reply
  • GourdFreeMan - Friday, June 26, 2009 - link

    Derek has already mentioned that the additional delay is at most one screen refresh, not exactly one refresh, but let me add two more points. First, the additional delay will depend on the refresh rate of your monitor. If you have a 60 Hz LCD then, yes, it will be ~16.6ms; if you have a 120 Hz LCD, the additional delay would be at most ~8.3ms. Second, if you are running without vsync, the screen you see will be split into two regions -- one region is the newly rendered frame, and the other is the previous frame, which will be the same age as the entire frame you would be getting with triple buffering. Running without vsync only reduces your latency if what you are focusing on is in the former.

    Also, we should probably call this display lag, not input lag, as the rate at which input is polled isn't necessarily related to screen refresh (it is for some games like Oblivion and Hitman: Blood Money, however).
    Reply
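GourdFreeMan's refresh-rate arithmetic can be sketched as a quick calculation (a hedged illustration; the 60 Hz and 120 Hz figures come from the comment above, and the function name is ours):

```python
# Worst-case extra display lag from triple buffering is at most one
# refresh interval, so it shrinks as the monitor's refresh rate rises.
def refresh_interval_ms(refresh_hz: float) -> float:
    """Time between screen refreshes, in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 120):
    print(f"{hz} Hz -> at most {refresh_interval_ms(hz):.1f} ms of added delay")
```

At 60 Hz this gives the ~16.6ms figure quoted in the comment; at 120 Hz, ~8.3ms.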
  • DerekWilson - Friday, June 26, 2009 - link

    You are right that maximum input latency is very dependent on framerate, but I believe I mentioned that maximum input latency with triple buffering is never more than 16.67ms, while with double buffering and vsync it could potentially climb by an additional 16.67ms because the game has to wait to start rendering the next frame. If a frame completes just after a refresh, the game must artificially wait until after the next refresh to start drawing again, giving an upper limit on input lag of something like (frametime + 33.3ms).

    With triple buffering, input lag is no longer than double buffering without vsync /for at least part of the frame/ ... This is never going to be longer than (frametime + 16.7ms) in either case.

    Triple buffering done correctly does not add more input lag than double buffering in the general case (even when frametime > 17ms) unless/until you have a tear in the double-buffered case. And there again, if the frames are similar enough that you don't see a tear, then there was little need for an update halfway through a frame anyway.

    I tried to keep the article as simple as I could, and getting into every situation of where frames finish rendering, how long frames take, and all that can get very messy... but in the general case, triple buffering still has the advantages.
    Reply
  • DerekWilson - Friday, June 26, 2009 - link

    Sorry, I meant input lag /due/ to triple buffering is never more than 16.67ms... but the average case is shorter than this.

    Total input lag can be longer than this because frame data is based on input from when the frame began rendering, so when framerate is less than 60 FPS, frametime is already more than 16.67ms... at 30 FPS, frametime is 33.3ms.
    Reply
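The bounds Derek works through can be put into a small worked example (our own sketch of the formulas implied by the comments above, assuming a 60 Hz display; the function name and structure are ours):

```python
REFRESH_MS = 1000.0 / 60  # 60 Hz display: ~16.7 ms per refresh

def worst_case_lag_ms(frametime_ms: float, triple_buffered: bool) -> float:
    """Rough upper bound on input lag per the comments: triple
    buffering waits at most one refresh before display, while double
    buffering with vsync can also stall the start of the next frame,
    adding up to two refresh intervals on top of the frametime."""
    refreshes = 1 if triple_buffered else 2
    return frametime_ms + refreshes * REFRESH_MS

frametime_30fps = 1000.0 / 30  # 33.3 ms per frame at 30 FPS
print(worst_case_lag_ms(frametime_30fps, triple_buffered=True))   # ~50 ms
print(worst_case_lag_ms(frametime_30fps, triple_buffered=False))  # ~66.7 ms
```

This matches the comment's arithmetic: at 30 FPS the frametime alone is already 33.3ms, and the buffering strategy only adds to that.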
  • Edirol - Friday, June 26, 2009 - link

    The wiki article on the subject mentions that it depends on the implementation of triple buffering. Can frames be dropped or not? Also, there may be limitations to using triple buffering in SLI setups.
    Reply
  • DerekWilson - Friday, June 26, 2009 - link

    I'm not a fan of wikipedia's article on the subject ... they refer to the DX 3 frame render ahead as a form of "triple buffering" ... I disagree with the application of the term in this case.

    Sure, it's got three things that are buffers, but the implication of the term triple buffering (just like the term double buffering), when applied to displaying graphics on a monitor, is more specific than that.

    Just because something has two buffers to do something doesn't mean it uses "double buffering" in the sense that is meant when talking about drawing to back buffers and swapping to front buffers for display.

    In fact, any game has a lot more than two or three buffers that it uses in its rendering process.

    The DX 3 frame render ahead can actually be combined with double and triple buffering techniques when things are actually being displayed.

    I get that the wikipedia article is trying to be more "generally" correct, in that something that uses three buffers to do anything is "triple buffered" in a sense ... but I submit that the term has a more specific meaning in graphics, one that has specifically to do with page flipping and how it is handled.
    Reply
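The distinction Derek draws here — page flipping to the most recently completed frame versus a queue that displays every frame in order — can be modeled in a few lines of Python (a simplified sketch of the concept, not actual driver or DirectX behavior; all names are ours):

```python
class TripleBuffer:
    """Minimal model of page-flipped triple buffering: the renderer
    alternates between two back buffers, and at each vsync the display
    flips to whichever back buffer holds the newest completed frame.
    Older unseen frames are dropped -- unlike a render-ahead queue,
    which would display every buffered frame in order, adding lag."""

    def __init__(self):
        self.front = None             # buffer currently being scanned out
        self.latest_complete = None   # newest fully rendered frame
        self.drawing = 0              # index of the back buffer in use

    def frame_done(self, frame):
        # Renderer finished a frame; it becomes the newest candidate.
        # Any older completed-but-undisplayed frame is silently dropped.
        self.latest_complete = frame
        self.drawing ^= 1             # start drawing into the other back buffer

    def vsync(self):
        # At refresh time, flip to the newest completed frame, if any.
        if self.latest_complete is not None:
            self.front = self.latest_complete
            self.latest_complete = None
        return self.front

tb = TripleBuffer()
tb.frame_done("frame 1")
tb.frame_done("frame 2")   # finished before the next refresh; frame 1 is dropped
print(tb.vsync())          # displays "frame 2"
```

The key line is the overwrite in `frame_done`: because the stale frame is discarded rather than queued, the frame shown at each refresh is always the freshest one available, which is why latency doesn't accumulate the way it does with a render-ahead queue.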
  • StarRide - Friday, June 26, 2009 - link

    Very informative. WoW is one of those games with built-in triple buffering, and the in-game tooltip for the triple buffering option says "may cause slight input lag," which is the reason I haven't used triple buffering so far. But by this article, that is clearly false, so I will be turning triple buffering on from now on. Thanks.
    Reply
  • Bull Dog - Friday, June 26, 2009 - link

    Now, how do we enable it? And when we enable it, how do we make sure we are getting triple buffering and not double buffering?

    ATI has an option in the CCC to enable triple buffering for OpenGL. What about D3D?
    Reply
  • gwolfman - Friday, June 26, 2009 - link

    What about nVidia? Do we have to go to the game profile to change this?
    Reply
