GeForce 8800 GTX Core Clock Scaling

As we previously mentioned, to test core clock scaling we fixed the memory clock at 1020 MHz and the shader clock at 1350 MHz. We then ran our benchmarks at a range of core clock speeds and report on performance versus clock speed.

Games that don't use a timedemo rendering exactly the same thing every time pose a consistency problem. Both Oblivion and F.E.A.R. can vary in their results, as the action in each benchmark run is never exactly the same twice. We normally minimize these differences by averaging multiple runs. Unfortunately, at the level of detail we wanted for examining performance, normal variance was playing havoc with our graphs, so we devised a solution.

Our formula for determining average framerate at each clock speed is as follows:

FinalResult = MAX(AvgFPS.run1, AvgFPS.run2, ... , AvgFPS.run5, PreviousClockSpeedResult)

This means our graphs avoid the normal fluctuation that could cause a higher clock speed to appear to yield a lower average FPS, while still maintaining a good deal of accuracy. Normally, our tests have a variability of plus or minus 3 to 5 percent. Thanks to the number of samples we've taken and the fact that each previous clock speed's result is carried forward, deviation is cut down quite a bit. In actual gameplay, there will be much more fluctuation.
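The smoothing formula above can be sketched in a few lines of Python. This is our own illustration, not code from the article; the function name and the framerate numbers are hypothetical, but the logic follows the formula: take the best of the five runs at each clock speed, then carry forward the previous clock's result so the curve can never dip as the clock rises.

```python
# Sketch of the MAX smoothing formula described above (hypothetical data).
# At each core clock we take the best of five benchmark runs, then fold in
# the previous clock speed's result so the curve is monotonically non-decreasing.

def scaling_curve(runs_per_clock):
    """runs_per_clock: list of (clock_mhz, [avg_fps per run]),
    ordered by ascending clock speed. Returns (clock_mhz, final_fps) pairs."""
    results = []
    previous = float("-inf")  # no previous result for the first clock speed
    for clock, runs in runs_per_clock:
        final = max(max(runs), previous)  # MAX(run1..run5, PreviousClockSpeedResult)
        results.append((clock, final))
        previous = final
    return results

# Hypothetical example: a noisy dip at 600 MHz gets smoothed away.
data = [
    (575, [41.2, 40.8, 41.5, 40.9, 41.1]),
    (600, [40.9, 41.0, 41.3, 40.7, 41.2]),  # best run here is below 575's best
    (625, [43.0, 42.6, 42.9, 43.2, 42.8]),
]
print(scaling_curve(data))
```

Without the carried-forward previous result, the 600 MHz point would plot below the 575 MHz point; with it, the 600 MHz result is clamped to 41.5 and the graph stays monotonic.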

As far as settings go, Oblivion is running with maximum quality settings, the original texture pack, no anisotropic filtering, and no antialiasing. We chose 1920x1440 to test a highly compute-limited resolution without taxing memory as much as something like 2560x1600 would. For F.E.A.R., we also used 1920x1440, with all quality settings at maximum except soft shadows, which we leave disabled.

While the maximum stable core clock of our OCZ card is 640 MHz, we were able to eke out a couple of benchmarks at higher frequencies by powering down for a while between runs. It seems aftermarket cooling could help the card maintain higher clock speeds, but we'll have to save that test for another article.



It's clear that there are some granularity issues in setting the core clock frequency of the 8800 GTX. This doesn't seem to be nearly the problem we saw with the 7800 GTX, but it is still something to be aware of. In general, we see performance improvements roughly proportional to the percent increase in clock speed, indicating that core clock speed has a fairly direct effect on performance.
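One quick way to quantify "performance tracks clock speed" is to compare the percent FPS gain against the percent clock gain between two data points. The helper below is our own sketch with made-up numbers, not data from our graphs; a ratio near 1.0 means performance scales directly with core clock, while values well below 1.0 suggest another bottleneck.

```python
# Hypothetical helper: ratio of percent FPS gain to percent clock gain
# between two measurements. ~1.0 means FPS scales directly with core clock.

def scaling_efficiency(clock_a, fps_a, clock_b, fps_b):
    clock_gain = (clock_b - clock_a) / clock_a  # fractional clock increase
    fps_gain = (fps_b - fps_a) / fps_a          # fractional FPS increase
    return fps_gain / clock_gain

# Made-up example: a 10% core overclock (575 -> 632.5 MHz) yielding a 9%
# FPS gain (40.0 -> 43.6) gives an efficiency of 0.9.
print(round(scaling_efficiency(575, 40.0, 632.5, 43.6), 2))
```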



The F.E.A.R. graph looks more conspicuously discrete, but it's important to note that the built-in benchmark reports only whole numbers rather than true average framerates. Again, for the most part, performance increases keep pace with the percent increase in clock speed. This becomes less true at the higher end of the spectrum, which could indicate that extreme overclocking of 8800 GTX cards will see diminishing returns above 660 MHz. Unfortunately, we don't currently have a card that can handle running at higher speeds.
