Crysis, DX10 and Forcing VSYNC Off in the Driver

Why do we keep coming back to Crysis as a focal point for our reviews? Honestly, because it's the only game out there that requires the kind of ultra high end hardware enabled by recently released parts.

That, and we’ve discovered something very interesting this time around.

We noted that some of our earlier DX10 performance numbers on Skulltrail looked better than anything we could reproduce more recently. The higher number is usually the one more likely to be "right", and it has been a frustrating journey trying to hunt down the issues that led to our current situation.

Many reinstalls and configuration tweaks later, we have an answer.

Every time I set up a system, because I want to ensure maximum performance, the first thing I do is force VSYNC off in the driver. I also generally run without having the graphics card scale output for my panel; centered timings allow me to see what resolution is currently running without having to check. But I was in a hurry on Sunday and I must have forgotten to check the driver after I set up an 8800 Ultra SLI for testing Crysis.

Lo and behold, when I looked at the numbers, I saw a huge performance increase. No, it couldn’t simply be that VSYNC wasn’t forced off in the driver, could it? After all, Crysis has its own VSYNC setting, and it was explicitly disabled; the driver setting shouldn’t matter.

But it does.
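
To make the distinction concrete, here is a minimal sketch of how a DX10 title typically requests VSYNC off on its own; this is a hypothetical example, not Crysis's actual code. The application passes a sync interval of 0 to the DXGI swap chain when it presents a frame, while the driver's "force VSYNC off" override lives outside that call entirely.

    // Hypothetical DX10 present helper (illustrative only, not taken from Crysis).
    #include <dxgi.h>

    // The first argument to Present() is the sync interval:
    // 1 waits for the next vertical blank (VSYNC on), 0 presents immediately (VSYNC off).
    void PresentFrame(IDXGISwapChain* swapChain, bool vsyncEnabled)
    {
        const UINT syncInterval = vsyncEnabled ? 1 : 0;
        swapChain->Present(syncInterval, 0);
    }

With the in-game option disabled, the game should already be presenting with a sync interval of 0, so in theory the driver override is redundant; in practice, as the numbers below show, leaving it forced off in the control panel costs performance.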

Forcing VSYNC off in the driver can decrease performance by as much as 25% in the DX10 applications we tested, and we see a heavier impact in CPU-limited situations. Interestingly enough, as we discussed last week, with our high end hardware Crysis and World in Conflict were heavily CPU and system limited. Take a look for yourself at the kind of performance gains we saw from leaving the driver's VSYNC setting untouched rather than forcing it off. These tests were run on Crysis using the GeForce 9800 GX2 in Quad SLI.


We would have tried overclocking the 790i system as well if we could have done so while maintaining stability.

Looking at these numbers, we can see some of the major issues we had between NVIDIA platforms and Skulltrail diminish. There is still a difference, but 790i does have PCIe 2.0 bandwidth between its cards, and it uses DDR3 rather than FB-DIMMs. We won’t be able to change those things, but right now my options are to run half the slots with 800 MHz FB-DIMMs or all four slots with 667 MHz parts. We should be getting a handful of higher speed, lower latency FB-DIMMs in for testing soon, which we believe will help. Now that we’ve gotten a better feel for the system, we also plan on trying some bus overclocking to help alleviate the PCIe 2.0 bandwidth advantage 790i has. It also seems possible to push our CPUs over 4GHz on air cooling, but we really need a larger PSU to keep the system stable (without any graphics load, a full CPU load can pull about 700W at the wall when running 4GHz at 1.5V), especially once you start running a GPU on top of that.

Slower GPUs will benefit less from not forcing VSYNC off in the driver, but even when the framerate is merely near a CPU limit (say, within 20% of it), performance will still improve. NVIDIA seems more affected by this than AMD, but we aren’t sure at this point whether that is simply because NVIDIA’s cards, being faster, expose more of a CPU limit.

NVIDIA is aware of the VSYNC problem, as they were able to confirm our findings yesterday.

Because we review hardware, we force VSYNC off as a matter of course, but it is conceivable that this issue might not affect most gamers. Many people like VSYNC on, and since most games offer the option themselves, it isn’t usually necessary or beneficial to force VSYNC off in the driver. So we decided to ask in our video forum just how many people force VSYNC off in the driver, and whether they do so always or just some of the time.

More than half of our 89 respondents (at the time of this writing) never force VSYNC off, but the remaining respondents, roughly 40%, admitted to forcing VSYNC off at some point, with about half of those always forcing it off (just as we do in our testing).

This is a big deal, especially for readers who want to play Crysis on lower end CPUs. We didn’t have time to rerun all of our numbers without VSYNC forced off in the driver, so keep in mind that the results here could improve quite a bit with that change.


49 Comments

  • crimsonson - Tuesday, April 1, 2008 - link

    Although the graphs work well, they get very difficult to read when there are a lot of test subjects. Can you guys find another way? Trying to trace a dozen lines and make distinctions between them is rather hard and defeats the whole purpose of a visual aid. I end up reading the spreadsheet instead.

    .02
  • Spacecomber - Wednesday, April 2, 2008 - link

    These graphs are beginning to resemble the ones from AnandTech's heatsink reviews, which is not a good thing. Jamming as much information as you can into a single graph serves no purpose. They're unreadable. The reader is forced to use the table instead, which means the graph has failed as an illustration.
  • geogaddi - Tuesday, April 1, 2008 - link


    seconded. and reading spreadsheets for me is like shooting pool with a piece of rope...

    .04
  • jtleon - Tuesday, April 1, 2008 - link

    I must second that sentiment. Isn't the objective of web content to clearly communicate a message and minimize confusion? There are many data communication tools available; one example is DaDISP (www.dadisp.com), which is extremely powerful and rather cost effective. I don't work for DaDISP, I only use it on a regular basis. The free evaluation version should satisfy your needs easily.

    Regards,
    jtleon
  • 7Enigma - Tuesday, April 1, 2008 - link

    I'm really disappointed to see the 8800GT not present in this review. As someone just getting ready to build a system, I am on the fence between purchasing this new 9800GTX and saving >$100 by going with the 8800GT until we actually get a next-gen part.

    Since the platform issues are still around, I cannot really compare these results to previous reviews. If you could please comment, or throw the 8800GT in for a couple of quick gaming benchmarks, I (we) would be greatly appreciative!
  • Spuke - Tuesday, April 1, 2008 - link

    I was hoping I could finally see a comparison between the 8800GT 512MB and the 9600GT. All AnandTech has is a review of the 8800GT 256MB versus the 9600GT.
  • 7Enigma - Tuesday, April 1, 2008 - link

    8800GT and 9600GT are directly compared here:

    http://www.xbitlabs.com/articles/video/display/gai...

    "As we have seen in the gaming tests, 64 execution units are enough for most of modern games. We’ve only seen a serious performance hit in comparison with the Nvidia GeForce 8800 GT 512MB in such games as Bioshock, Crysis, Unreal Tournament 3 and Company of Heroes: Opposing Fronts. In a few tests the GeForce 9600 GT was even faster than the more expensive and advanced GeForce 8800 GT 512MB due to the higher frequency of the core. "

    So basically it's a good stop-gap solution for right now, but I would probably go with the 8800GT while waiting for the next-gen cards (even at typical 19" LCD resolutions). If ATI/AMD were competitive currently, I think the 9600GT would be the perfect card, but we have no idea how long Nvidia will milk their crown, and in turn how long that next-gen card will actually take to come out.
  • bill3 - Tuesday, April 1, 2008 - link

    So you're saying Nvidia doesn't like more profits? And they like to help AMD?

    Because that's what not bringing out a next gen card does.

    Is Intel going to delay Penryn's successor now to milk their lead also?

    Nvidia doesn't have any other card ready, period. Because they are too slow, period.
  • 7Enigma - Wednesday, April 2, 2008 - link

    Lol Bill, you need to learn something about business. If you have the lead virtually across the board, any system builder, but far more importantly OEMs, will purchase your card if the price is right (i.e. AMD/ATI not undercutting for the sake of survival as they are with their Phenom CPUs). It doesn't matter if the top of the line is 2X as fast as the competition or 5X as fast; it will be purchased because it, for the time being, is the best. The same follows for the lower-grade cards. There is no reason to bring out the next-gen card killer until the competition brings something to the table that is actually competitive (or, gasp... better). And funny you mention Intel, because I believe they are doing EXACTLY that with their new quads that aren't the very top of the line. These long delays and extremely limited availability seem to smack of milking it for all it's worth.

    That's good business; I can't fault them for doing it. Sucks for us, but smart for them.
  • Jovec - Tuesday, April 1, 2008 - link

    Yes, AT should pick 2-3 of the most common cards (how many of us are still running 8800GTS cards?) and include numbers as a baseline. Or have a low-mid and a mid-high system that all cards get tested on, for easy comparison in ongoing graphs. The question most of us have is "I have card X; if I buy card Y, how much of an improvement will it be, and is it worth the cost?"
