Final Words

After spending a few days with G-Sync, I’m just as convinced as I was in Montreal. The technology, albeit a relatively simple manipulation of display timing, is a key ingredient in delivering a substantially better gaming experience.

In pathological cases the impact can be shocking, particularly if you’re coming from a 60Hz panel today (with or without v-sync). The smoothness afforded by G-Sync is just awesome. I didn’t even realize how much v-sync related stutter I had simply come to accept. I’d frequently find a scene that stuttered badly with v-sync enabled and approach it fully expecting G-Sync to somehow fail at smoothing things out this time. I always came away impressed. G-Sync also lowered the minimum frame rate I needed to avoid being distracted by stutter. Dropping below 30 fps is still bothersome, but in all of the games I tested, as long as I could keep frame rates north of 35 fps the overall experience was great.

In many situations the impact of G-Sync can be subtle. If you’re not overly bothered by tearing, or are OK with v-sync stuttering, there’s really nothing G-Sync can offer you. There’s also the fact that G-Sync optimizes for a situation that isn’t necessarily visible 100% of the time. Unlike moving to a higher resolution or increasing quality settings, G-Sync’s value is best realized in specific scenarios where there’s a lot of frame rate variability, particularly between 30 and 60 fps. Staying in that sweet spot is tougher to do on a 1080p panel, especially if you’ve already invested in a pretty fast video card.

If you’re already running games at a fairly constant 60 fps, G-Sync will let you crank up quality levels even further without significantly reducing the smoothness of your experience. I feel like G-Sync will be even more important with higher resolution displays, where it’s a lot harder to maintain 60 fps. Ideally I’d love to see a 2560 x 1440 G-Sync display with an IPS panel that maybe even ships properly calibrated from the factory. I suspect we’ll at least get the former.

There's also the question of what happens once game developers can assume the world is running on displays with variable refresh rates. All of a sudden, targeting frame rates between 30 and 60 fps becomes far less of a tradeoff.
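
To put rough numbers on that tradeoff, here's a minimal sketch (plain C, with hypothetical frame times, not a description of NVIDIA's implementation) of how a fixed 60Hz refresh with v-sync quantizes whatever the GPU produces down to 60/30/20 fps steps, while a variable refresh display simply shows each frame as soon as it's done:

```c
/*
 * Toy illustration: with v-sync on a fixed 60Hz panel, a finished frame
 * has to wait for the next refresh boundary, so anything slower than
 * ~16.7 ms per frame effectively snaps down to 30 fps (or worse). With a
 * variable refresh display the panel updates the moment the frame is ready.
 */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double refresh_ms = 1000.0 / 60.0;          /* fixed 60Hz interval */
    const double render_ms[] = { 14.0, 22.0, 28.0 };  /* hypothetical frame times */

    for (int i = 0; i < 3; i++) {
        /* v-sync: the frame is held until the next whole refresh interval */
        double vsync_ms = ceil(render_ms[i] / refresh_ms) * refresh_ms;
        /* variable refresh: the frame is displayed as soon as it is done */
        double vrr_ms = render_ms[i];

        printf("render %.1f ms -> v-sync %.1f fps, variable refresh %.1f fps\n",
               render_ms[i], 1000.0 / vsync_ms, 1000.0 / vrr_ms);
    }
    return 0;
}
```

In this toy example the 22 ms and 28 ms frames both collapse to 30 fps under v-sync, but display at roughly 45 and 36 fps with variable refresh; that's exactly the 30-60 fps band where targeting a frame rate currently forces an ugly choice.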

NVIDIA hasn’t disclosed much about G-Sync pricing, although ASUS has already given us a little guidance. The VG248QE currently sells for $280 on Newegg, while the upcoming G-Sync enabled flavor will apparently sell for $400. That $120 premium can be a tough pill to swallow. A more than 40% increase in display cost is steep, which is another reason why I feel like NVIDIA might have a little more success pushing G-Sync as part of a higher end display. On the flip side, NVIDIA could easily get those costs down by migrating from an FPGA to an ASIC, although to justify that move we’d have to see pretty broad adoption of G-Sync.

Some system integrators will be selling the aftermarket-upgraded VG248QE between now and CES, but you can expect other displays to be announced over the coming months. NVIDIA still hasn't figured out if/how it wants to handle end-user upgrades for those who already own VG248QE displays.

I feel like NVIDIA is slowly but surely assembling the components of a truly next-generation gaming experience. With all of the new consoles launched, the bar is set for the next several years. PCs already exceed what consoles are capable of in terms of performance, but the focus going forward really needs to be on improving ease of use as well as the rest of the experience. Things like GeForce Experience are a step in the right direction, but they need far more polish and, honestly, integration into something like Steam. G-Sync just adds to the list. For PC gaming to continue to thrive, it needs to evolve into something even more polished than it is today. It’s not enough to just offer higher resolution and a better looking image than you can get on other platforms; it’s just as important to provide a smoother and more consistent experience. G-Sync attempts to do just that, and succeeds.

With G-Sync enabled, I began to expect/demand more visually from my games. Aliasing and other rendering imperfections were far more pronounced once a big portion of the stuttering was removed. G-Sync isn't the final solution, but rather the first on a long list of things that need improving. There are also use cases for G-Sync outside of gaming. Streaming video, where bandwidth constraints force a variable frame rate, is one I’ve heard tossed around.

Although G-Sync is currently limited to NVIDIA hardware (GeForce GTX 650 Ti Boost or greater), the implementation seems simple enough that other manufacturers should be able to do something similar. That’s obviously the biggest issue with what we have here today: it only works with NVIDIA hardware. For die-hard NVIDIA fans, I can absolutely see a G-Sync monitor as a worthy investment. You might just want to wait for a few more displays to hit the market first.

Comments

  • extide - Thursday, December 12, 2013 - link

    I think you are on to something here, something like a modification of the packet-based DP protocol. Now, the speed of the packets in DP depends on the resolution and refresh rate. Why not make the packets come whenever they are ready (in 3D) and at a regular rate (in 2D)? Then have a monitor with an LCD panel expecting a signal like that.

    I mean, I think in the end the way NVIDIA did it here is just a way to make it work right now within the constraints of existing LCD panels and DP protocols. In the future, I could easily see this sort of tech being built into future protocols, video cards, and monitors, and probably all done without needing an expensive FPGA and additional RAM.
  • Egg - Friday, December 13, 2013 - link

    I don't see how drawing pixel by pixel solves anything. The issue arises whenever you do not draw a full frame.
  • ZKriatopherZ - Saturday, December 14, 2013 - link

    Yes, and if you aren't drawing frame by frame on the monitor there is no issue :D I'm saying remove the frame-by-frame drawing on the display side completely. This would create an effective "refresh" rate limited only by how many pixels the card can push and how fast the pixels can change on the display side. You could also take it a step further and move away from frame-by-frame output on the software side. A fantastic example is the desktop, where 50% of the time the only change on screen is the movement of the mouse cursor. Even an FPS game, where most of the screen changes constantly, would still benefit from the lack of a frame holdup, since what we call refresh rate would no longer exist.

    Truthfully, this is all speculative; I don't have enough experience to tell if there are shortcomings here I'm overlooking, but nothing blatant stands out at the moment.
  • blitzninja - Saturday, December 21, 2013 - link

    Here's one problem with the video card pushing out data on a per-pixel basis: it would take a LOT more data.

    When a scene is rendered, the instructions essentially target a set of pixels and apply an effect to them (colour, saturation, brightness, etc.), and when all calculations are done the final product is sent to the screen to be displayed (tearing is when not all of the new image has been copied into the monitor's frame buffer before it's drawn).

    The problem is that the GPU is blind to any upcoming draw calls, in that it does not know which pixels will be affected until the calculations are done. This means that there is no way for the GPU to know when a particular pixel is fully computed or "rendered" and ready to be sent to the monitor's frame buffer.

    A better solution, for 2D applications, would be to check for pixel changes from one frame to the next and simply send the change (a rough sketch of this idea follows, after the comments).

    For 3D I see this as impossible, since any small change (camera movement or otherwise) will require a complete re-render due to the nature of 3D and how the calculations are done (the GPU has no idea how to render a scene; it simply follows instructions laid out by the developer, so it can't figure out which pixels it can skip).
  • blitzninja - Saturday, December 21, 2013 - link

    Quick clarification: the increase in data would come from the GPU needing to continuously overwrite pixels in the monitor's frame buffer in 3D mode until all draws are complete.
  • ZKriatopherZ - Saturday, December 28, 2013 - link

    Does this have to do with the developer or the established rendering API (OpenGL or DirectX)? If the API is designed to output that way, wouldn't that make both development and implementation easier? I get what you are saying about camera movement, but there is a speed limitation caused by rendering frame by frame as well. If you are dealing with something like a TN panel that has a quick color-to-color change, the effective fps on a per-pixel basis becomes closer to 300 fps (or we could call it pixels per second here) even if you are drawing full screens.

    I'm also still wrapping my head around what you are saying about 3D applications. Ultimately they are still outputting to a 2D display. I understand there are shaders and other effects that may require a full-screen write, and it sounds like the graphics card, OS, APIs, and display are all set up that way. It may take some serious effort to step back and take a more efficient approach based on current display technology. Ultimately, though, if the changes to allow this could be made, I do feel like it would end up being a much more efficient and faster approach. It may just not be as easy as it first seemed to me.
  • otherwise - Thursday, December 12, 2013 - link

    In the future, we're all going to need a new monitor. Depending on the price premium, and assuming these make their way into IPS displays, it might be hard to justify buying a non-G-Sync monitor over a G-Sync monitor. I doubt many are going to run out to buy a new $400 display right now, but this will have a powerful effect on consumer behavior down the line.
  • oranos - Thursday, December 12, 2013 - link

    What's your point? Maybe this site should stop posting all tech articles that don't fit wide mainstream demographics?
  • Black Obsidian - Thursday, December 12, 2013 - link

    My point was obviously that this seemed to be a technology currently useful to virtually nobody.

    Ian pointed out future applications that I hadn't considered, which was exactly the sort of feedback I was hoping for.
  • BeVar - Thursday, December 12, 2013 - link

    @Black Obsidian> No! This is just a marketing gimmick. A costly one at that. I, like you, would just buy a better video board. But, as Art Linkletter said, "people are funny".
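
As a footnote to the per-pixel delta idea floated in the comments above, here's a minimal sketch (plain C, with a hypothetical RGBA framebuffer layout and a stand-in send_pixel() function, not any real display protocol) of diffing two frames and emitting only the pixels that changed:

```c
/*
 * Minimal sketch of the "only send what changed" idea for 2D content:
 * diff two framebuffers and emit (x, y, colour) for the changed pixels.
 * Hypothetical 32-bit RGBA buffers; not any actual link protocol.
 */
#include <stdio.h>
#include <stdint.h>

#define W 8
#define H 4

/* Emit one changed pixel; a real link would pack these into packets. */
static void send_pixel(int x, int y, uint32_t rgba) {
    printf("update pixel (%d,%d) -> 0x%08X\n", x, y, rgba);
}

/* Compare the previous and current frames, send only the differences. */
static int send_delta(const uint32_t *prev, const uint32_t *cur) {
    int changed = 0;
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            uint32_t c = cur[y * W + x];
            if (c != prev[y * W + x]) {
                send_pixel(x, y, c);
                changed++;
            }
        }
    }
    return changed;
}

int main(void) {
    uint32_t prev[W * H] = {0};      /* last frame: all black */
    uint32_t cur[W * H]  = {0};
    cur[2 * W + 3] = 0xFFFFFFFF;     /* e.g. the mouse cursor moved */
    cur[2 * W + 4] = 0xFFFFFFFF;

    int n = send_delta(prev, cur);
    printf("%d of %d pixels updated\n", n, W * H);
    return 0;
}
```

For a mostly static desktop this cuts the data sent per update dramatically; as blitzninja notes above, 3D content that changes nearly every pixel every frame would see little benefit from this approach.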
