Wrapping It Up

So there you have it. Triple buffering gives you all the benefits of double buffering with vsync disabled combined with all the benefits of enabling vsync: smooth, whole frames with no tearing. Frames are swapped to the front buffer only on a refresh, yet when output to the monitor begins they carry just as little input lag as double buffering with no vsync. Even though "performance" doesn't always get reported correctly with triple buffering, the graphics hardware is working just as hard as it does with double buffering and no vsync, and the end user gets all of the benefit without the potential downside. Triple buffering does take up a handful of extra memory on the graphics hardware, but on modern cards this is not a significant issue.
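To make that behavior concrete, here is a minimal simulation of the page flipping described above. It is a toy sketch with invented timings (a ~100 fps renderer against a ~60 Hz refresh), not any real graphics API: three buffers rotate roles, the renderer never waits, and a completed frame that is superseded before the next refresh is simply dropped.

```c
#include <stdio.h>

int main(void) {
    /* Buffer roles: front is on screen, drawing is being rendered into,
     * ready holds a completed frame awaiting the refresh (-1 if none). */
    int front = 0, ready = -1, drawing = 1, spare = 2;
    double now = 0.0, next_refresh = 16.7;   /* ~60 Hz refresh, times in ms */
    const double frame_time = 10.0;          /* renderer runs at ~100 fps   */
    int frame = 0;

    while (now < 100.0) {
        double done = now + frame_time;

        /* Refreshes that occur while this frame is still being drawn flip
         * the previously completed frame (if any) to the front -- whole
         * frames only, so there is no tearing. */
        while (next_refresh < done) {
            if (ready != -1) {
                spare = front;
                front = ready;
                ready = -1;
                printf("%6.1f ms: refresh, buffer %d now on screen\n",
                       next_refresh, front);
            }
            next_refresh += 16.7;
        }

        /* The frame completes. If an older completed frame was never
         * displayed, it is dropped and its buffer reused. */
        now = done;
        if (ready != -1) {
            printf("%6.1f ms: buffer %d dropped (superseded)\n", now, ready);
            spare = ready;
        }
        ready = drawing;
        drawing = spare;
        printf("%6.1f ms: frame %d completed in buffer %d\n", now, frame++, ready);
    }
    return 0;
}
```

Running this shows the renderer completing a frame every 10 ms while the display flips the newest completed frame to the front every 16.7 ms; the frames in between are dropped rather than making the renderer stall or the screen tear.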

Just to recap, from our previous example, here is how the three frames we looked at rendering stack up side by side.

 


[Figure: Triple Buffering]

[Figure: Double Buffering]

[Figure: Double Buffering with vsync]


 

We've presented both the qualitative and the quantitative arguments in support of triple buffering. So now the question is: does this data change things? Will people start looking for that triple buffering option more often than they did before? Let's find out.

[Poll]

Regardless of the results, we hope this article has been helpful in explaining an often overlooked option. While it might not be something we test with, because of the issues with measuring performance, triple buffering is the setting we prefer to play with, and we hope we've shown our readers why they should give it a shot as well.

We also hope more developers will start making triple buffering the default option in their games, as it delivers the best experience to gamers interested in both quality and performance. Only a handful of games include triple buffering as a built-in option, and NVIDIA and AMD drivers currently only allow forcing it in OpenGL games. This really needs to change, as there is no reason we shouldn't see pervasive triple buffering today.


UPDATE: There has been a lot of discussion in the comments about the differences between the page flipping method we describe in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped, which means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX; it implements render ahead (from 0 to 8 frames, with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames when they become outdated; render ahead forces older frames to be displayed. Queues can help with smoothness and stuttering, as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is lag: the more frames in the queue, the longer it takes to empty the queue, and the older the frames that get displayed.
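To make the difference concrete, here is a toy simulation comparing how old the displayed frame is at each refresh under the two policies. The numbers are invented but representative (a ~100 fps renderer, a ~60 Hz display, and the 3-deep queue DirectX defaults to); this is our own sketch, not driver code.

```c
#include <stdio.h>

#define DEPTH   3     /* DirectX's default render-ahead depth */
#define REFRESH 16.7  /* ms per refresh (~60 Hz)              */
#define RENDER  10.0  /* ms per frame (~100 fps unthrottled)  */

int main(void) {
    /* Render ahead: a FIFO of completed frames. The oldest is always
     * displayed, nothing is dropped, and the renderer stalls while the
     * queue is full. */
    double q[DEPTH];
    int head = 0, n = 0;
    double next_start = 0.0;   /* earliest time the next frame can begin */

    printf("render ahead, depth %d:\n", DEPTH);
    for (int r = 1; r <= 8; r++) {
        double vsync = r * REFRESH;
        while (n < DEPTH && next_start + RENDER <= vsync) {
            q[(head + n++) % DEPTH] = next_start + RENDER;
            next_start += RENDER;
        }
        int was_full = (n == DEPTH);
        if (n) {
            printf("  refresh %d: displayed frame is %5.1f ms old\n",
                   r, vsync - q[head]);
            head = (head + 1) % DEPTH;
            n--;
            if (was_full && next_start < vsync)
                next_start = vsync;   /* stalled renderer resumes at the flip */
        }
    }

    /* Triple buffering: the renderer never stalls, and each refresh shows
     * the newest completed frame (older undisplayed frames are dropped). */
    double newest = 0.0, t = 0.0;
    printf("triple buffering:\n");
    for (int r = 1; r <= 8; r++) {
        double vsync = r * REFRESH;
        while (t + RENDER <= vsync) {
            t += RENDER;
            newest = t;
        }
        printf("  refresh %d: displayed frame is %5.1f ms old\n",
               r, vsync - newest);
    }
    return 0;
}
```

With these numbers the render-ahead queue settles at a displayed frame roughly 40 ms old at every refresh, while triple buffering never shows a frame older than one render time, about 10 ms.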

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed, but to drop them if they get too old. This requires somewhat more intelligent management of already rendered frames and goes a bit beyond the scope of this article.
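For illustration only, one way such management might look is sketched below; the queue depth and age threshold are arbitrary values we chose for the sketch, not something the article or any driver specifies.

```c
#include <stdio.h>

#define MAX_KEEP 3      /* completed frames retained for smoothing    */
#define MAX_AGE  25.0   /* ms; frames older than this are thrown away */

/* Pick the frame to display at time vsync from completion times queued
 * oldest-first, dropping entries that have aged out (always keeping at
 * least one). Returns the number dropped; *shown gets the chosen frame's
 * completion time. The displayed frame is consumed from the queue. */
int pick_frame(double q[], int *n, double vsync, double *shown) {
    int dropped = 0;
    while (*n > 1 && vsync - q[0] > MAX_AGE) {
        for (int i = 1; i < *n; i++) q[i - 1] = q[i];
        (*n)--;
        dropped++;
    }
    *shown = q[0];
    for (int i = 1; i < *n; i++) q[i - 1] = q[i];
    (*n)--;
    return dropped;
}

int main(void) {
    double q[MAX_KEEP] = { 10.0, 20.0, 30.0 };      /* completion times, ms */
    int n = MAX_KEEP;
    double shown;
    int dropped = pick_frame(q, &n, 50.1, &shown);  /* refresh at 50.1 ms */
    printf("dropped %d stale frame(s), showing the one from %.1f ms\n",
           dropped, shown);
    return 0;
}
```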

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject, and under certain circumstances this setup will perform the same as the triple buffering we have described (though definitely not when the framerate is higher than the refresh rate).

Both techniques allow the graphics card to keep working on a new frame while a completed one waits for the vertical refresh. With double buffering (and no render queue) and vertical sync enabled, nothing more can be rendered once a frame is completed, which can stall the pipeline and degrade actual performance.
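Here is a small sketch of that stall with invented numbers (not real API calls): a GPU that can render a frame in 20 ms, about 50 fps, gets quantized down to about 30 fps because each finished frame must sit in the back buffer until the next refresh frees it.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double refresh = 16.7;  /* ~60 Hz display           */
    const double render  = 20.0;  /* the GPU could do ~50 fps */
    double t = 0.0;
    int frames = 0;

    while (t < 200.0) {
        double done = t + render;                     /* back buffer filled */
        double flip = ceil(done / refresh) * refresh; /* wait for the vsync */
        printf("frame %2d: rendered by %6.1f ms, on screen at %6.1f ms\n",
               frames++, done, flip);
        t = flip;  /* the renderer is stalled until the flip frees a buffer */
    }
    printf("-> %d frames in 200 ms (about %.0f fps, not 50)\n",
           frames, frames / 0.2);
    return 0;
}
```

Because every 20 ms frame just misses one refresh and has to wait for the next, the effective framerate locks to every other refresh, 30 fps; with triple buffering the renderer would have started the next frame in a third buffer instead of stalling.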

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth the framerate, at the cost of keeping a few old frames around. This can keep the instantaneous framerate from dipping in some cases, but it will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multi-GPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering"; that name should be reserved for the technique we've described here, in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We realize this causes confusion, and we very much hope that this article and discussion help to alleviate the problem.

Comments

  • SirLamer - Friday, June 26, 2009 - link

    It's just because nVidia hasn't designed their control panel to be super invasive. The only way to make it work is to have a program sitting there that intercepts calls from DirectX and changes them.

    Rather than blaming AMD and nVidia, it's probably better to ask why DirectX doesn't include a mechanism to control this performance parameter like it does for most other driver-configurable settings.

    Download RivaTuner, and from the zip file install D3DOverrider. It will sit on your taskbar and do the job. I have used this program in the past, but I forgot to put it back since my last reformat and will now do it tonight. Thanks for the reminder article!
  • GourdFreeMan - Friday, June 26, 2009 - link

    Hmm... it seems you are correct.

    How bizarre! I can understand the usefulness in keeping previous frames for post-processing effects, but you would only be reading from the frames and writing to the new frame, never writing to old ones. Why doesn't this "just work" under the control panel for DirectX like it does for OpenGL?
  • smn198 - Saturday, June 27, 2009 - link

    I suppose we have monitor refresh rates as a legacy of CRT technology. Is there any reason (other than compatibility) why we can't have an LCD that refreshes ad hoc, whenever both it and the next frame are ready? No more just missing a refresh.

    Alternatively, could the LCD lie about its refresh rate and use some sort of internal buffer to achieve the same thing, reducing lag?
  • GourdFreeMan - Sunday, June 28, 2009 - link

    All LCDs (both passive and active matrix) still refresh the screen periodically to prevent individual pixel elements from fading, so there is still a notion of refresh rate for LCDs. You do raise a good question of whether it would be possible to refresh the screen whenever frames are completed (which would have to be in addition to this base refresh rate, or you would get a flickering in brightness).

    Having an input buffer to reduce perceived display lag would result in torn frames if you tried to swap in the new frame mid-refresh. You still have to wait until the refresh is completed.
  • erple2 - Tuesday, June 30, 2009 - link

    That doesn't make sense from the perspective of how an LCD works. The charge that twists the polarizing LCD element doesn't fade over time (well, not over the few milliseconds between updates - though the charge probably fades over the years as they wear out).

    The pixel elements don't generate any light themselves. How do they fade then?

    I think that you're confusing two things here. The refresh rate of the LCD is tied to the output signal - they're both set to run at 60 Hz, so the video card outputs a "new frame" (even if the frame hasn't changed, it's a new frame) every 1/60th of a second, and the LCD reads that signal every 1/60th of a second and displays it. Part of the reason 60 Hz was chosen is the bandwidth limitation of the standard. To update more frequently than that, you'd clearly need the capability of transmitting more data down the interface. Right now, the single-link DVI interface can transmit up to 3.96 Gigabits of info per second. At 24 bits per pixel and a 1920x1200 resolution, that's 55,296,000 bits per image. Given the hard cap of 3.96 Gbps, that's 3,960,000,000 / 55,296,000, which is about 71 Hz - the fastest a single-link DVI interface can refresh at that resolution (a quick snippet checking this arithmetic appears after the comments). I believe it was therefore decided to cap the refresh rate at 60 Hz for WUXGA resolutions. But that's out of convenience, not for any reason related to fading pixels (unlike a CRT).

    LCDs don't flicker per se, as there's no light turning on and off. The backlight is more or less constantly on.
  • overzealot - Wednesday, July 1, 2009 - link

    As the pixels untwist (no power applied, or power applied in reverse) they transition back to blocking light. You could, theoretically, call the process fading, as that's what it would appear to do.

    The backlights in most LCDs run off AC, so you can't say they're always on. The best you can say is that because the frequency is much higher than 50/60 Hz, you can't see the flickering. It's still there.

    There are faster-than-60 Hz panels; it's just that the electronics are more costly, and the majority of people don't care.
    I do care, but not enough to pay the extra cost of a 120 Hz panel.
    I'd rather have a larger panel.
  • DerekWilson - Wednesday, July 1, 2009 - link

    It is my understanding that pixel state on an LCD panel is driven by a steady voltage applied across the liquid crystal cell (aside from possible overdrive on the upswing to speed the transition). Because they are digital, until the controller changes the state of a pixel, it can remain at a constant percentage of twist because there is a constant voltage applied. No refresh is "required," and the bandwidth issue is what drives "refresh rate" on LCD panels.

    Many LCDs do use CCFL backlights, which can have a slight flicker for a very short period every time the current alternates polarity, but they aren't really ever "off," as they are driven both ways (there are no dedicated anodes and cathodes - they switch with the current).

    But as we move toward LED backlighting (or away from CCFL toward other technologies that are DC driven), we won't have any flicker at all there either.
  • GourdFreeMan - Monday, August 31, 2009 - link

    This "steady" voltage (only true of active-matrix LCDs) isn't maintained directly by the LCD's power supply. For TFTs there are one or more capacitors gated by a transistor for controlling their voltage per pixel element (R,G,B) that maintains the state. These capacitors slowly discharge and must be refreshed periodically. In this sense all LCDs have a "base refresh rate".

    If you do a Google search for LCD controllers integrated into consumer products you will find there is an issue with perceived flickering in brightness as the pixel elements fade if you do not refresh them often enough.

    I was asked if it was possible to refresh the screen only when frames are completed, and this was the first issue I discovered when researching the question. Other than increased power usage and added controller complexity, I do not know if there would be other issues with a second "just in time" refresh, so I left my reply to the original question at that.
  • RSmith - Thursday, April 8, 2010 - link

    Hey GourdFreeMan,

    I got here thinking exactly the same thing as you: why do we need a fixed refresh rate on LCDs?

    Did you get any answers to that?

    I hope that future display technologies will allow this to happen, it would certainly be of huge benefit to gaming if frames were drawn as they were rendered.
  • homerdog - Sunday, June 28, 2009 - link

    I can set my LCD to 75 Hz, which AFAIK is a lie.
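As a quick check of the DVI bandwidth arithmetic discussed in the comments above, here is a minimal snippet using the numbers from that discussion (the 3.96 Gbps figure is the single-link DVI payload: a 165 MHz pixel clock times 24 bits):

```c
#include <stdio.h>

int main(void) {
    const double link_bps = 3.96e9;                    /* single-link DVI payload  */
    const double bits_per_frame = 1920.0 * 1200 * 24;  /* 55,296,000 bits per image */
    printf("bits per frame: %.0f\n", bits_per_frame);
    printf("max refresh at 1920x1200: %.1f Hz\n", link_bps / bits_per_frame);
    return 0;
}
```

This prints roughly 71.6 Hz, matching the "about 71 Hz" figure in the comment.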
