Overclocking our GeForce 6600GTs

Once again, we're going to commend NVIDIA for including the coolbits registry tweak in their drivers, which allows core and memory clock speed adjustment (among other things). No matter how new and pretty ATI makes Overdrive look, it just doesn't give the end user the kind of control that these two slider bars do. Unless something big changes by the end of the year, ATI users will still have to rely on third-party tools for any manual overclocking.

But, when all is said and done, overclocking is about much more than just moving some sliders to the right. After spending quite a few hours (each) doing nothing but playing with the clock speeds of eleven different GeForce 6600GT cards, we started to wonder if there was any purpose in life but to increase the speed of small square bits of silicon to a point just before failure. Hopefully, we can pass on what we have learned.

There are multiple things to keep in mind; it's not just core and memory speed. Power and GPU quality are also concerns. If the device doesn't have the power to drive all its components at the requested clock speed, something has to give. This means that a card may be able to push the memory really high, or the core really high, but not both at the same time. Also, GPU quality physically limits the maximum speed at which a core can run and still yield correct results. One of the nice things about coolbits is that it won't let you try to run your card at a clock speed that is impossible. If the card can't provide enough power to the components, or the GPU simply fails to run correctly at that speed, the driver won't let you enable that clock speed. With utilities such as PowerStrip, the end user only has visual glitches and system lockups/reboots to indicate problems of this nature.

For anyone who doesn't know: to enable the clock controls on NVIDIA hardware, simply add a DWORD value named coolbits with hex value 3 under the registry key HKLM\SOFTWARE\NVIDIA Corporation\Global\NVTweak
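For convenience, the same tweak can be applied by importing a .reg file. This is a sketch based on the path and value described above; double-check the path against your particular driver version before importing:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak]
"coolbits"=dword:00000003
```

After importing, restart the NVIDIA control panel (or reboot) for the clock sliders to appear.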

The beginner's and advanced methods for a stable overclock begin at the same place: basing your core and memory clock speeds on what NVIDIA's driver picks. Go into the NVIDIA control panel after enabling coolbits, choose the clock control panel, and select manual control. Make sure that you are always setting clocks for 3D performance and not 2D. Then let NVIDIA pick some clock speeds for you. At this point, we aren't sure exactly what process NVIDIA uses to determine these clock speeds, but at the very least, it makes sure that both the GPU and RAM will have enough power at those frequencies. We will try to look into the other conditions of this feature for future articles.

The stable way out is to look at the clock speeds NVIDIA picked, drop them by 10MHz (core and memory), and set them there. Then grab Half-Life 2, 3DMark05, or Doom 3 and run a timedemo numerous times, watching closely for glitches and signs of overheating or other issues. Those are the three hottest-running titles that we have in our labs at the moment, but Half-Life 2 is, hands down, the leader in turning video cards into cookware.
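If you want to automate the repeated timedemo runs, something like the following minimal sketch works; the command line shown is a hypothetical placeholder (substitute your actual game and demo), and a crash or non-zero exit is only one failure mode -- you still have to watch the screen for glitches:

```python
import subprocess

def burn_in(cmd, runs=10):
    """Run the given timedemo command repeatedly; return the number of clean runs."""
    clean = 0
    for _ in range(runs):
        # A crash or error exit means the overclock (or something else) failed.
        if subprocess.run(cmd).returncode != 0:
            break
        clean += 1
    return clean

# Hypothetical example invocation -- substitute your own benchmark:
# burn_in(["doom3.exe", "+timedemo", "demo1"])
```

If any run fails, back both clocks off and start the loop again.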

If you want more performance, it's possible to go faster than what NVIDIA says you can. The first thing to do is to find the fastest core speed that the driver will let you set. That gives you a rough range of what is possible. Of course, that maximum won't be stable; try halfway between the NVIDIA recommendation and the max clock speed, but leave the memory at its factory setting. Pay close attention, and make sure that you're using a benchmark that you can bail out of quickly if you notice any problems. If there are glitches, cut the distance between where you are and the NVIDIA setting in half and try again. It's almost like a binary search for the sweet spot, except that you can stop as soon as you know that you're safe. When you find a core clock speed that you like, if it's much higher than the NVIDIA driver-determined setting, you may wish to bring the memory clock up slowly to keep from throwing off the balance.
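The halving procedure described above really is a binary search, and can be sketched as one. Here, set_core_clock and passes_stress_test are hypothetical stand-ins for whatever overclocking utility and benchmark loop you actually use:

```python
def find_stable_core(driver_pick, driver_max, set_core_clock, passes_stress_test,
                     tolerance=5):
    """Binary-search for the highest stable core clock between the driver's
    recommended speed and the driver's allowed maximum (both in MHz)."""
    low, high = driver_pick, driver_max
    best = driver_pick  # the driver's pick is assumed safe
    while high - low > tolerance:
        mid = (low + high) // 2
        set_core_clock(mid)
        if passes_stress_test():
            best = mid
            low = mid    # stable: push higher
        else:
            high = mid   # glitches: back off toward the driver's pick
    return best
```

Stopping once the window shrinks below a few MHz matches the advice above: you can quit as soon as you know you're safe, rather than chasing the exact failure point.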

So how do you know if something is wrong when you've overclocked? In newer games like Half-Life 2, all the shaders start to render slightly incorrectly. In HL2 especially, the anomalies tend to have high locality of reference (similar problems happen near each other) and form an almost grid-like pattern of disruption on surfaces. It used to be that disappearing geometry and hard locks were the number one telltale sign, but now vertex and pixel shader errors are a little more sensitive and subtle. On the memory side, if clocks are too high, we might see speckling or off-color pixels. Edges can become disjoint, and texturing issues can occur.


84 Comments


  • Pete - Friday, December 10, 2004 - link

    Obviously Derek OCed himself to get this article out, and he's beginning to show error. Better bump your (alarm) clocks down 10MHz (an hour) or so, Derek.
  • pio!pio! - Friday, December 10, 2004 - link

    Noticed a typo. At one point you wrote 'clock stock speed' instead of 'stock clock speed' - easy mistake.
  • Pete - Friday, December 10, 2004 - link

    Another reason to narrow the distance b/w the mic and the noise source is that some of these cards may go into SFFs, or cases that sit on the desk. 12" may well be more indicative of the noise level those users would experience.
  • Pete - Friday, December 10, 2004 - link

    Great article, Derek!

    As usual, I keep my praise concise and my constructive criticism elaborate (although I could argue that the fact that I keep coming back is rather elaborate praise :)). I think you made the same mistake I made when discussing dB and perceived noise, confusing power with loudness. From the following two sources, I see that a 3dB increase equates to 2x more power, but is only 1.23x as loud. A 10dB increase corresponds to 10x more power and a doubling of loudness. So apparently the loudest HSFs in this roundup are "merely" twice as loud as the quietest.

    http://www.gcaudio.com/resources/howtos/voltagelou...
    http://www.silentpcreview.com/article121-page1.htm...
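    To sanity-check that arithmetic, here is a quick sketch using the usual rules of thumb (power ratio = 10^(dB/10), perceived loudness ratio roughly 2^(dB/10)):

```python
def power_ratio(db):
    # Acoustic power ratio for a given dB difference.
    return 10 ** (db / 10)

def loudness_ratio(db):
    # Rule-of-thumb perceived loudness ratio: +10 dB sounds about twice as loud.
    return 2 ** (db / 10)

# 3 dB:  ~2x the power, but only ~1.23x as loud
# 10 dB: 10x the power, ~2x as loud
```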

    Speaking of measurements, do you think 1M is a bit too far away, perhaps affording less precision than, say, 12"?

    You might also consider changing the test system to a fanless PSU (Antec and others make them), with a Zalman Reserator cooling the CPU and placed at as great a distance from the mic as possible. I'd also suggest simply laying the test system out on a piece of (sound-dampening) foam, rather than fitting it in a case (with potential heat trapping and resonance). The HD should also be as quiet as possible (2.5"?).

    I still think you should buy these cards yourselves, a la Consumer Reports, if you want true samples (and independence). Surely AT can afford it, and you could always resell them in FS/FT for not much of a loss.

    Anyway, again, cheers for an interesting article.
  • redavnI - Thursday, December 9, 2004 - link

    Very nice article, but any chance we could get a part 2 with any replacement cards the manufacturers send? I'd also like to see the Pine card reviewed. It's being advertised as the AnandTech Deal at the top of this article and has dual DVI like the XFX card. Kind of odd that one of the only cards not reviewed gets a big fat buy-me link.

    To me it seems that with the 6600GT/6800 series NVIDIA has their best offering since the GeForce4 Tis... I'm sure I'm not the only one still hanging on to my Ti4600.

  • Filibuster - Thursday, December 9, 2004 - link

    Something I've just realized: the Gigabyte NX66T256D is not a GT, yet it supports SLI. Are they using GTs that can't run at the faster speeds and selling them as standard 6600s? It has 256MB.
    We ordered two from a vendor who said it definitely does SLI.

    http://www.giga-byte.com/VGA/Products/Products_GV-...

    Can you guys find out for sure?
  • TrogdorJW - Thursday, December 9, 2004 - link

    Derek, the "enlarged images" all seem to be missing, or else the links are somehow broken. I tested with Firefox and IE6 and neither one would resolve the image links.

    Other than that, *wow* - who knew HSFs could be such an issue? I'm quite surprised that they are only secured at two corners. Would it really have been that difficult to use four mount points? The long-term prospects for these cards are not looking too good.
  • CrystalBay - Thursday, December 9, 2004 - link

    Great job on the quality control inspections of these cards, D.W. Hopefully IHVs take notice and resolve these potentially damaging problems.
  • LoneWolf15 - Thursday, December 9, 2004 - link

    I didn't see a single card in this review that didn't have a really cheesy-looking fan... the type that might last a couple of years if you're really lucky, but might last six months on some cards if you're not. The GeForce 6600GT is a decent card; for $175-250 (depending on PCIe or AGP), you'd think vendors would include a fan deserving of the price. My PNY 6800NU came with a squirrel-cage fan and a super-heavy heatsink that I know will last. Hopefully, Arctic Cooling will come out with an NV Silencer soon for the 6600 family; I wouldn't trust any of the fans I saw here to last.
  • Filibuster - Thursday, December 9, 2004 - link

    What quality settings were used in the games?

    I am assuming that Doom 3 is at medium quality, since these are 128MB cards.
    I've read that there are some 6600GT 256MB cards coming out (Gigabyte GV-NX66T256D and MSI 6600GT-256E, maybe more). Please show us some tests with the 256MB models once they hit the streets (or if you know they definitely are not coming, please tell us that too).

    Even though the cards only have a 128-bit bus, wouldn't the extra RAM help out in places like Doom 3, where texture quality is a matter of RAM quantity? The local video RAM still has to be faster than fetching from system RAM.
