Earlier today NVIDIA announced G-Sync, its variable refresh rate technology for displays. The basic premise is simple. Displays refresh themselves at a fixed interval, but GPUs render frames at a completely independent frame rate. The disconnect between the two is one source of stuttering. You can disable v-sync to try to work around it, but the end result is at best tearing, and at worst stuttering and tearing.

NVIDIA's G-Sync is a combination of software and hardware technologies that allows a modern GeForce GPU to control a variable display refresh rate on a monitor equipped with a G-Sync module. In traditional setups a display will refresh the screen at a fixed interval, but in a G-Sync enabled setup the display won't refresh the screen until it's given a new frame from the GPU.

NVIDIA demonstrated the technology on 144Hz ASUS panels, which caps the maximum present rate at 144 fps, although that's a limit of the panel rather than of G-Sync. There's a lower bound of 30Hz as well, since below that the panel begins to run into issues with flickering. If the frame rate drops below 30 fps, the display will present duplicates of each frame.
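To make the behavior concrete, here's a minimal sketch in Python of how a variable-refresh display could schedule its scans under the constraints described above. This is my own approximation, not NVIDIA's implementation; the function and constant names are mine, and the real module no doubt handles the panel-level details very differently.

```python
# Rough model of the refresh behavior described above (not NVIDIA's code):
# the display refreshes when a new frame arrives, never faster than the
# panel's 144Hz maximum, and re-presents the previous frame rather than let
# the gap between refreshes exceed the 30Hz flicker limit.

MIN_REFRESH_INTERVAL = 1 / 144   # panel's maximum refresh rate (seconds)
MAX_REFRESH_INTERVAL = 1 / 30    # flicker limit (seconds)

def presentation_schedule(frame_done_times):
    """Return (scan_time, frame_index) pairs for a variable-refresh display."""
    scans = []
    last_scan = 0.0
    for i, done in enumerate(frame_done_times):
        # Frame is too late: re-present the previous frame so the interval
        # between refreshes never exceeds the 30Hz limit.
        while scans and done - last_scan > MAX_REFRESH_INTERVAL:
            last_scan += MAX_REFRESH_INTERVAL
            scans.append((last_scan, scans[-1][1]))   # duplicate previous frame
        # Never refresh faster than the panel allows.
        scan_time = max(done, last_scan + MIN_REFRESH_INTERVAL)
        scans.append((scan_time, i))
        last_scan = scan_time
    return scans

# Example: 25 fps content (40 ms per frame) ends up with every frame scanned
# out twice, which keeps the panel above its 30Hz floor.
print(presentation_schedule([0.04 * n for n in range(1, 6)]))
```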

There's a bunch of other work done on the G-Sync module side to deal with some funny effects of LCDs when driven asynchronously. NVIDIA wouldn't go into great detail other than to say that there are considerations that need to be taken into account.

The first native G-Sync enabled monitors won't show up until Q1 of next year; however, NVIDIA will be releasing the G-Sync board for modding before the end of this year. The board initially supports ASUS's VG248QE monitor, so end users will be able to mod that monitor to install it themselves, or buy pre-modified monitors from professional modders. In Q1 of next year ASUS will also sell the VG248QE with the G-Sync board built in for $399, while BenQ, Philips, and ViewSonic have committed to rolling out their own G-Sync equipped monitors next year as well. I'm hearing that NVIDIA eventually wants to get the module down to below $100. The G-Sync module itself looks like this:

There's a controller and at least three 256MB memory devices on the board, although I'm guessing there are more on the back. NVIDIA isn't giving us a lot of detail here, so we'll have to make do with just a shot of the board for now.

Meanwhile we do have limited information on the interface itself; G-Sync is designed to work over DisplayPort (since it's packet based), with NVIDIA manipulating the timing of the v-blank signal to indicate a refresh. Importantly, this suggests that NVIDIA may not be significantly modifying the DisplayPort protocol, which at least cracks open the door to other implementations on the source/video card side.

Although we have only limited information on the technology at this time, the good news is we got a bunch of cool demos of G-Sync at the event today. I'm going to have to describe most of what I saw since it's difficult to present this otherwise. NVIDIA had two identical systems configured with GeForce GTX 760s; both featured the same ASUS 144Hz displays, but only one of them had NVIDIA's G-Sync module installed. NVIDIA ran through a couple of demos to show the benefits of G-Sync, and they were awesome.

The first demo was a swinging pendulum. NVIDIA's demo harness allows you to set min/max frame times, and for the initial test case we saw both systems running at a fixed 60 fps. The performance on both systems was identical, as was the visual experience. I noticed no stuttering, and since v-sync was on there was no visible tearing either. Then things got interesting.

NVIDIA then dropped the frame rate on both systems down to 50 fps, once again static. The traditional system started to exhibit stuttering as we saw the effects of a mismatched GPU frame rate and monitor refresh rate. Since the case itself was pathological in nature (you don't always have a constant mismatch between the two), the stuttering was extremely pronounced. The same demo on the G-Sync system? Flawless, smooth.
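Some quick back-of-the-envelope numbers (mine, not NVIDIA's demo harness) show why even a perfectly steady 50 fps judders on a v-synced fixed-refresh panel: every finished frame has to wait for the next refresh boundary, so on-screen hold times come out uneven. The 144Hz figure below matches the ASUS panels used at the event.

```python
import math

REFRESH_HZ = 144        # the ASUS panels used in the demo
FRAME_TIME = 1 / 50     # a locked 50 fps render rate

def hold_times_vsync(frame_time, refresh_hz, n_frames):
    """On-screen duration of each frame with v-sync on a fixed-refresh panel:
    each frame becomes visible only at the next refresh boundary."""
    period = 1 / refresh_hz
    shown = [math.ceil((i + 1) * frame_time / period) * period
             for i in range(n_frames)]
    return [round((b - a) * 1000, 2) for a, b in zip(shown, shown[1:])]

def hold_times_variable(frame_time, n_frames):
    """With a variable refresh, each frame is simply held for one frame time."""
    return [round(frame_time * 1000, 2)] * (n_frames - 1)

print(hold_times_vsync(FRAME_TIME, REFRESH_HZ, 10))
# mostly 20.83 ms holds with an occasional 13.89 ms hold -> visible stutter
print(hold_times_variable(FRAME_TIME, 10))
# a uniform 20.0 ms every time -> smooth motion
```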

NVIDIA then dropped the frame rate even more, down to an average of around 45 fps but also introduced variability in frame times, making the demo even more realistic. Once again, the traditional setup with v-sync enabled was a stuttering mess while the G-Sync system didn't skip a beat.

Next up was disabling v-sync in hopes of reducing stuttering, resulting in both stuttering (there's still a refresh rate/fps mismatch) and now tearing. The G-Sync system, once again, handled the test case perfectly. It delivered the same smoothness and visual experience as if we were looking at a game rendering perfectly at a constant 60 fps. It's sort of ridiculous and completely changes the overall user experience. Drops in frame rate no longer have to be drops in smoothness. Game devs relying on the presence of G-Sync can throw higher quality effects at a scene since they don't need to be as afraid of frame rate excursions below 60 fps.
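As a reference point, here's a toy calculation (purely illustrative, and my own rather than anything NVIDIA showed) of why an unsynchronized buffer flip shows up as a tear: the flip lands partway through the panel's scanout, so on that refresh everything above the flip point comes from the old frame and everything below it from the new one.

```python
REFRESH_HZ = 60
VISIBLE_LINES = 1080   # vertical resolution, ignoring the blanking interval

def tear_row(flip_time_s, refresh_hz=REFRESH_HZ, lines=VISIBLE_LINES):
    """Approximate scanline where a v-sync-off buffer flip becomes visible."""
    period = 1 / refresh_hz
    return int((flip_time_s % period) / period * lines)

# A flip 10 ms into a 16.7 ms refresh tears roughly 60% of the way down the screen.
print(tear_row(0.010))   # ~648
```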

Switching gears, NVIDIA also ran a real-world demonstration by spinning the camera around Lara Croft in Tomb Raider. The stutter/tearing effects weren't as pronounced as in NVIDIA's test case, but they were both definitely present on the traditional system and completely absent on the G-Sync machine. I can't stress enough just how smooth the G-Sync experience was; it's a game changer.

Technologies like GeForce Experience, an abundance of GPU performance, and G-Sync can work together to deliver a new level of smoothness, image quality, and overall experience in games. We've seen a resurgence of PC gaming over the past few years, but G-Sync has the potential to take the PC gaming experience to a completely new level.

Update: NVIDIA has posted a bit more information about G-Sync, including the specs of the modified ASUS VG248QE monitor and the system requirements.

NVIDIA G-Sync System Requirements
Video Card: GeForce GTX 650 Ti Boost or higher
Display: G-Sync equipped display
Driver: R331.58 or higher
Operating System: Windows 7/8/8.1

Comments

  • Soulwager - Monday, October 21, 2013 - link

    It doesn't matter whether you measure position in inches or pixels; what matters is relative velocity. As in, the slope of the line connecting any two dots. That slope is what your brain interprets as velocity, and it's harder to see that on a scatter plot. Latency is the horizontal (X-axis) distance between the two lines.

    Second, you don't know how triple buffering works if you think it adds any more latency than double buffering. All triple buffering does is let the GPU start rendering the next frame while Vsync is blocking a buffer swap.

    Third, I have no idea why you're trying to convince me that there's no visible difference, because this is demonstrably false. Watch your mouse cursor as you move it quickly across the screen: you see it in several distinct locations corresponding to where it was when your monitor refreshed. When your monitor is updating fast enough to outpace what the eye/brain registers, those distinct locations will appear to blur into a line. This doesn't happen at 60Hz, it doesn't happen at 120Hz, and I'll be surprised if it happens at 240Hz. There's a huge difference between "faster than the eye can detect" and "good enough that we need to prioritize other aspects of image quality". If you can't tell the difference between two monitors, that doesn't mean your experience transfers to other people; hell, about 10-15% of people can't even tell the difference between a 60Hz and 120Hz display.
  • wojtek - Monday, October 21, 2013 - link

    OK, you may believe in any voodoo you want, even that your retina is a next-gen one. But that is not science, it's just your religion. As for triple buffering, it depends on how it operates on the buffers (yes, there are a few strategies). If you want real smoothness then additional latency is added; if you prefer less latency then judder may occur: http://en.wikipedia.org/wiki/Multiple_buffering
  • Soulwager - Monday, October 21, 2013 - link

    There have been blind studies on 60Hz vs 120Hz panels, and that's where I got the 10-15% figure. The vast majority of people can feel the missing frame, even though it's only an 8.33 ms difference in how long the frame lasts. Yes, a millisecond is a small measurement of time; a thousandth of an inch is a small measurement of distance, but you can still feel it with a fingertip. Relative values are more important than absolute values as far as human perception goes, because our instincts make logarithmic comparisons. It's far easier to detect a 6ms variation on a 12ms frame time than it is to detect the same 6ms variation on a 24ms frame time.

    You're also confusing triple buffering with frame metering, which may be used in conjunction with triple buffering. Backpressure also impacts latency, both perceived and real.
  • Soulwager - Monday, October 21, 2013 - link

    And to clarify, even with the D3D implementation of triple buffering, you don't get extra latency until your framerate starts getting up near your refresh rate, which wasn't relevant in the graph I posted earlier.
  • wojtek - Tuesday, October 22, 2013 - link

    OK, and how is that related to the G-Sync solution's much higher latency at low framerates (because of frame timing)? :P
  • Soulwager - Tuesday, October 22, 2013 - link

    I'm not sure what you're asking about, so I'll just elaborate in general:

    There's no reason to use triple buffering if you're using G-Sync, no sync, or if you can maintain a consistent framerate equal to your refresh rate. Triple buffering is only useful if you're using v-sync with a framerate below your refresh rate, in which case it will keep your GPU active and allow framerates between thresholds. With the D3D implementation of triple buffering, you get latency problems when you fill both back buffers before you're ready to scan, because it won't drop the older frame; it will just sit there with both back buffers full until the next refresh. Now, keep in mind you can only fill both back buffers if you have a framerate higher than your refresh rate (or bad frame metering with a multi-GPU setup, but that's a separate issue). If double buffering were used instead in that graph, the delays in the v-sync graph would cause more dropped frames instead of the drift (every frame that took over 16.6ms would be delayed, instead of just when all the delays accumulate).

    G-Sync is basically a double-buffered v-sync that pushes a frame to the monitor whenever the frame is done. This means when your framerate would exceed your max refresh rate, it's basically equivalent to current double-buffered v-sync, but when your framerate drops below the max refresh, you don't get any tearing, latency increase (beyond that caused by increased render time), or judder; it just delays the monitor's refresh until the frame is finished. This means you don't get a big penalty for slightly late frames, like you do with v-sync. If your frame takes 17 ms, you see it at 17 ms instead of waiting until the next refresh. A 1ms late frame is 1 ms late with G-Sync, instead of being 8.33 ms late on a 120Hz display or 16.6ms late on a 60Hz display (there's a small timing sketch after this thread that works through these numbers). As a bonus, when you have a non-maxed framerate the buffer flips immediately when the frame is finished, so it doesn't cause your frame rate to drop to the next lower threshold, and it doesn't introduce latency by waiting for the next refresh.

    If you normally game with v-sync disabled, G-Sync means you get rid of tearing without increasing latency. Basically, when you're at the point of the scan where the buffer flips, and you would be halfway through scanning the old frame onto the panel, you instead scan the entire newly completed frame onto the panel.

    Oh, and here's a scatter plot version of that graph, in case you're interested: http://i.imgur.com/1Ev18XX.png

    As for whether you specifically can see a difference between G-Sync and v-sync, I don't know; I guess you'll just have to wait until you can see it in person, or maybe watch a high-speed video comparison. The only people who won't see a benefit are those who can consistently maintain a framerate equal to their refresh rate. But not all games can manage that, even on a high-end system.
  • wojtek - Wednesday, October 23, 2013 - link

    "As for whether you specifically can see a difference between G-sync and v-sync, I don't know, I guess you'll just have to wait until you can see it in person"

    Yes, and that is the point. After the technology is released we should do double-blind tests to see whether anyone can actually see the difference between G-Sync and 120/144 Hz monitors. But ahead of time I am sure G-Sync is incompatible with comfortable backlight strobing (which we were talking about earlier) when it drops below 60 Hz (or even 70 Hz). 120/144 Hz backlight strobes will remain consistent and comfortable.
  • wojtek - Wednesday, October 23, 2013 - link

    BTW, temporal resolution, the speed of your retina's perception, is about 100 ms for rods (the cells that see b&w light) and 10 to 15 ms for cones (the cells that see colors), so there's no chance of seeing an 8 ms timing difference, but a great chance of seeing a lot of blur on objects moving at frame rates lower than 60 fps. http://webvision.med.utah.edu/book/part-viii-gabac...
  • Soulwager - Wednesday, October 23, 2013 - link

    All that means is that your eye adds latency to whatever is coming in. If you have a fully white screen that turns black for 5ms and back to white, you probably won't notice (if anything it would just look like a slight change in brightness).

    The eye doesn't have a scan rate; your rods can fire independently of each other. Say you have a white ball moving left to right across a black background on a 500Hz display. If that ball jumps up an inch for a single frame and then back down like nothing happened, you'll still see that it happened, because those rods were triggered; it just takes them some time to report that they were triggered and reset. When your eye is tracking a moving object, the exact same thing happens in front of and behind the object if it isn't moving as expected.

    http://www.100fps.com/how_many_frames_can_humans_s...
  • wojtek - Wednesday, October 23, 2013 - link

    No, as you describe later, latency is one side effect; another is that sub-10 ms awareness simply does not exist. That is why strobing at 100 Hz and above is irrelevant, and may be used to mask the shifting-picture distortion. In essence what it does is a kind of low-pass filtering, where blur is caused by high-frequency visual noise from overlapping images.
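To put some numbers on the latency point raised earlier in the thread, here's a minimal sketch (my own figures and function names, not taken from the commenters' graphs or from NVIDIA) of when a frame that runs slightly over its 60Hz budget actually becomes visible under fixed-refresh v-sync versus a refresh that waits for the frame:

```python
import math

def visible_at_vsync(frame_done_ms, refresh_hz):
    """With double-buffered v-sync, a finished frame waits for the next refresh."""
    period = 1000 / refresh_hz
    return math.ceil(frame_done_ms / period) * period

def visible_at_variable_refresh(frame_done_ms, max_refresh_hz=144):
    """With a variable refresh, a finished frame is scanned out as soon as the
    panel allows (limited only by its maximum refresh rate)."""
    return max(frame_done_ms, 1000 / max_refresh_hz)

late_frame = 17.0   # a frame that ran 1 ms over a 60Hz (16.7 ms) budget
print(visible_at_vsync(late_frame, 60))         # ~33.3 ms -> a full extra refresh of waiting
print(visible_at_vsync(late_frame, 120))        # ~25.0 ms -> ~8 ms of extra waiting
print(visible_at_variable_refresh(late_frame))  # 17.0 ms  -> shown when it's done
```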
