Overclocking

As always, we looked at overclocking these already fast 8800s to see how much more we could get out of the GPU. Since none of these cards were clocked higher than reference speeds, and we covered performance of the reference 8800 GTX and 8800 GTS in the launch article, we will be focusing on overclocking, power, heat, and noise levels for this review.

We overclocked these cards using NVIDIA's coolbits driver tweak, which allows us to raise the core and memory clock speeds in set increments until they reach the highest point possible while still running stable. By running stable, we mean that the cards run our benchmarks for extended periods of time without any graphical artifacts or crashes. Below is a table of the cards and the clock speeds we achieved with our own user overclocking, followed by a rough sketch of the trial-and-error process involved.

Card                               Core Overclock   Memory Overclock
ASUS GeForce EN8800 GTX            629MHz           1021MHz
BFG GeForce 8800 GTX               649MHz           973MHz
EVGA e-GeForce 8800 GTX w/ ACS3    659MHz           1013MHz
Leadtek Winfast GeForce 8800 GTX   627MHz           1033MHz
MSI GeForce NX8800 GTX             652MHz           1040MHz
Sparkle Calibre 8800 GTX           631MHz           914MHz
Sparkle GeForce 8800 GTX           629MHz           1011MHz
XFX GeForce 8800 GTS               654MHz           866MHz

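For the curious, here is a minimal sketch in Python of the trial-and-error loop just described. The set_clock and passes_stability_test callables are hypothetical placeholders standing in for the coolbits clock adjustment and an extended benchmark run; they are not part of any real NVIDIA API.

    # Minimal sketch of the increment-and-test overclocking loop described above.
    # set_clock() and passes_stability_test() are hypothetical placeholders,
    # not a real NVIDIA API.
    def find_max_stable_clock(reference_mhz, step_mhz, set_clock, passes_stability_test):
        clock = reference_mhz
        while True:
            candidate = clock + step_mhz
            set_clock(candidate)
            # "Stable" means extended benchmark runs with no artifacts or crashes.
            if not passes_stability_test():
                set_clock(clock)  # fall back to the last known-good speed
                return clock
            clock = candidate

The same loop applies once to the core clock and once to the memory clock, since each is raised in its own increments.
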
Keep in mind that the factory speeds of these cards were all reference speeds: 575/900MHz for the 8800 GTX and 500/800MHz for the 8800 GTS. Measured against those baselines, several of these cards reached fairly high overclocks, especially on the memory clock. Something we noticed when overclocking (and will see illustrated in our overclocking performance graphs) is that, like previous generations of NVIDIA cards, the core clock generally makes a more significant impact on the overall performance of the card.
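
To put these numbers in perspective, the short snippet below works out the percentage gains over the reference clocks for a few of the cards; the figures come directly from the table above and the reference speeds just mentioned.

    # Percentage gain over reference clocks (575/900MHz for the GTX, 500/800MHz
    # for the GTS), using the user overclocking results from the table above.
    results = {
        "EVGA e-GeForce 8800 GTX w/ ACS3": (659, 1013, 575, 900),
        "MSI GeForce NX8800 GTX":          (652, 1040, 575, 900),
        "XFX GeForce 8800 GTS":            (654,  866, 500, 800),
    }
    for card, (core, mem, ref_core, ref_mem) in results.items():
        core_gain = 100.0 * (core - ref_core) / ref_core
        mem_gain = 100.0 * (mem - ref_mem) / ref_mem
        print("%s: +%.1f%% core, +%.1f%% memory" % (card, core_gain, mem_gain))

Worked out this way, the GTS posts the largest relative core gain (roughly 31%) simply because it starts from a lower 500MHz reference clock, while the two GTX cards shown land in the 12-16% range on both clocks.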

Note that the current version of the driver only allows adjustment of the core and memory clocks, not the shader clock of G80. We aren't sure if the shader and core clocks are derived from the same source clock so that overclocking one actually changes the other, but we plan on looking at shader core overclocking as soon as there is software support for it. Currently there is a way to increase the shader clock through a BIOS flash, but for this article we chose to focus on overclocking that is possible without resorting to flashing your card's BIOS.

The MSI GeForce 8800 GTX managed one of the highest overclocks, with the EVGA e-GeForce 8800 GTX beating it slightly on the core but not on the memory clock. The XFX 8800 GTS reached a higher core clock speed than some of the 8800 GTXs, which is interesting, but because of its scaled-down shader core it will not be able to outperform a GTX. Interestingly, the card we had the highest hopes for didn't overclock as well as we would have thought. The Sparkle Calibre 8800 GTX, with its elaborate HSF design and peltier cooler, didn't reach a very high memory clock at all, though its 56MHz boost on the core clock will help performance a little.

As a final comment on overclocking, remember that your own results may vary. Given the brand-spanking-new nature of the GPU core, it could be that NVIDIA chose 575MHz as the final core clock in order to get the most cores running at that speed. Since they aren't offering a range of GPU speeds (as is done with CPU releases), cores either work or don't work at 575MHz. Whether the overclocks we reached are due to variability in the cores themselves or to actual BIOS/card differences (though most of the cards aren't really different) is impossible to say without a much larger sampling of units from each manufacturer.

Comments

  • JarredWalton - Monday, November 13, 2006

    Derek already addressed the major problem with measuring GPU power draw on its own. However, given similar performance, we can say that the cards are the primary difference in the power testing, so you get GTX cards using 1-6W more power at idle, and the Calibre uses up to 15W more. At load, the power differences cover a 10W spread, with the Calibre using up to 24W more.

    If we were to compare idle power with IGP and an 8800 card, we could reasonably compare how much power the card requires at idle. However, doing so at full load is impossible without some customized hardware, and such a measurement isn't really all that meaningful if the card is going to make the rest of the system draw more power anyway. To that end, we feel the system power draw numbers are about the most useful representation of power requirements. If all other components are kept constant on a testbed, the power differences we show should stay consistent as well. How much less power would an E6400 with one of these cards require? Probably somewhere in the range of 10-15W at most.
  • IKeelU - Monday, November 13, 2006

    Nice roundup. One comment about the first page, last paragraph:

    "HDMI outputs are still not very common on PC graphics cards and thus HDCP is supported on each card."

    Maybe I'm misinterpreting, but it sounds like you are saying that HDCP is present *instead* of HDMI. The two are independent of each other. HDMI is the electrical/physical interface, whereas HDCP is the type of DRM with which the information will be encrypted.
  • Josh Venning - Monday, November 13, 2006

    The sentence has been reworked. We meant to say HDCP is supported through DVI on each card. Thanks.
  • TigerFlash - Monday, November 13, 2006

    Does anyone know if the Evga WITH ACS3 is what is on retail right now? Evga's website seems to be the only place that distinguishes the difference. Everyone else is just selling an "8800 GTX."

    Thanks.
  • Josh Venning - Monday, November 13, 2006

    The ACS3 version of the EVGA 8800 GTX we had for this review is apparently not available yet anywhere, and we couldn't find any info on their website about it. Right now we are only seeing the reference design 8800 GTX for sale from EVGA, but the ACS3 should be out soon. The price for this part may be a bit higher, but our sample has the same clock speeds as the reference part.
  • SithSolo1 - Monday, November 13, 2006

    They have two different heat sinks, so I assume one could tell by looking at the product picture. I know a lot of sites use the product picture from the manufacturer's site, but I think they would use the one of the ACS3 if that's the one they had. I also assume they would charge a little more for it.
  • imaheadcase - Monday, November 13, 2006

    Would love to see how these cards perform in Vista; even RC2 would be great.

    I know the NVIDIA graphics drivers are terrible, I mean terrible, in Vista atm, but when they at least get a final out for the 8800, an RC2 or Vista final roundup vs. WinXP SP2 would be great :D
  • DerekWilson - Monday, November 13, 2006

    Vista won't be that interesting until we see a DX10 driver from NVIDIA -- which we haven't yet and don't expect for a while. We'll certainly test it and see what happens though.
  • imaheadcase - Monday, November 13, 2006

    Oh, the current beta drivers for the 8800 don't support DX10? Is that what the new detX drivers I've read about NVIDIA working on are for?
  • peternelson - Saturday, November 25, 2006

    I'd be interested to know if the 8800 drivers even support SLI yet? The initial ones I heard of did not.
