Overclocking

As always, we looked at overclocking these already fast 8800s to see how much more we could get out of the GPU. Since none of these cards were clocked higher than reference speeds, and we covered performance of the reference 8800 GTX and 8800 GTS in the launch article, we will be focusing on overclocking, power, heat, and noise levels for this review.

We overclocked these cards using NVIDIA's Coolbits driver tweak, which allows us to raise the core and memory clock speeds in set increments until they reach the highest point possible while still running stable. By running stable, we mean that the cards run our benchmarks for extended periods of time without any graphical artifacts or crashes. Below is a table of the cards and the clock speeds we achieved when doing our own user overclocking on them.
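
Before getting to the numbers, here is a rough Python sketch of the stepping process just described. It is only an illustration: the actual adjustments were made by hand through the Coolbits-enabled clock sliders in NVIDIA's control panel, so set_clocks() and run_stability_test() are hypothetical stand-ins, and the card's clock limits are simulated so the sketch can actually run.

# A purely illustrative sketch of the stepping process described above. The
# real adjustments were made by hand through the Coolbits-enabled clock
# sliders in NVIDIA's driver control panel, and stability was judged by long
# benchmark runs; set_clocks() and run_stability_test() are stand-ins, and the
# "card limit" below is simulated so the example can actually execute.

REFERENCE_GTX = {"core": 575, "memory": 900}      # 8800 GTX reference clocks (MHz)
STEP_MHZ = 5                                      # size of each clock increment
SIMULATED_LIMITS = {"core": 652, "memory": 1040}  # pretend limits for one card

def set_clocks(core_mhz, memory_mhz):
    # Stand-in for applying clocks through the driver; here it only reports them.
    print(f"trying core={core_mhz}MHz memory={memory_mhz}MHz")

def run_stability_test(clocks):
    # Stand-in for hours of benchmarking: "stable" only while both clocks sit
    # at or below this card's simulated limits.
    return all(clocks[k] <= SIMULATED_LIMITS[k] for k in clocks)

def find_max_stable(which):
    # Raise one clock in fixed increments until the card stops being stable,
    # then back off to the last known-good speed.
    clocks = dict(REFERENCE_GTX)
    while True:
        clocks[which] += STEP_MHZ
        set_clocks(clocks["core"], clocks["memory"])
        if not run_stability_test(clocks):
            return clocks[which] - STEP_MHZ

if __name__ == "__main__":
    print("highest stable core clock:", find_max_stable("core"), "MHz")
    print("highest stable memory clock:", find_max_stable("memory"), "MHz")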

Card                                Core Overclock   Memory Overclock
ASUS GeForce EN8800 GTX             629MHz           1021MHz
BFG GeForce 8800 GTX                649MHz           973MHz
EVGA e-GeForce 8800 GTX w/ ACS3     659MHz           1013MHz
Leadtek Winfast GeForce 8800 GTX    627MHz           1033MHz
MSI GeForce NX8800 GTX              652MHz           1040MHz
Sparkle Calibre 8800 GTX            631MHz           914MHz
Sparkle GeForce 8800 GTX            629MHz           1011MHz
XFX GeForce 8800 GTS                654MHz           866MHz

Keep in mind that the factory speeds of these cards were all reference speeds: 575/900 for the 8800 GTX and 500/800 for the 8800 GTS. With that in mind, we can see that several of these cards got fairly high overclocks, especially on the memory clock. Something we noticed when overclocking (and which we will see illustrated in our overclocking performance graphs) is that, like previous generations of NVIDIA cards, the core clock generally has a more significant impact on the overall performance of the card.
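
To put those overclocks in perspective, the short Python snippet below works out the percentage gain of each card over the reference clocks; it is nothing more than arithmetic on the figures from the table above, included for convenience.

# Percentage gains of the user overclocks listed above, relative to the
# reference clocks (575/900MHz for the 8800 GTX, 500/800MHz for the 8800 GTS).

overclocks = {
    "ASUS GeForce EN8800 GTX":          ("GTX", 629, 1021),
    "BFG GeForce 8800 GTX":             ("GTX", 649, 973),
    "EVGA e-GeForce 8800 GTX w/ ACS3":  ("GTX", 659, 1013),
    "Leadtek Winfast GeForce 8800 GTX": ("GTX", 627, 1033),
    "MSI GeForce NX8800 GTX":           ("GTX", 652, 1040),
    "Sparkle Calibre 8800 GTX":         ("GTX", 631, 914),
    "Sparkle GeForce 8800 GTX":         ("GTX", 629, 1011),
    "XFX GeForce 8800 GTS":             ("GTS", 654, 866),
}
reference = {"GTX": (575, 900), "GTS": (500, 800)}

for card, (model, core, mem) in overclocks.items():
    ref_core, ref_mem = reference[model]
    core_gain = 100.0 * (core - ref_core) / ref_core
    mem_gain = 100.0 * (mem - ref_mem) / ref_mem
    print(f"{card:34s} core +{core_gain:4.1f}%   memory +{mem_gain:4.1f}%")

# For example, the MSI card works out to roughly +13% core / +16% memory,
# while the XFX GTS manages about +31% on the core from its lower 500MHz base.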

Note that the current version of the driver only allows adjustment of the core and memory clocks, not the shader clock of G80. We aren't sure if the shader and core clocks are derived from the same source clock, so that overclocking one actually changes the other, but we plan on looking at shader core overclocking as soon as there is software support for it. Currently there is a way to increase the shader clock through a BIOS flash, but for this article we chose to focus on overclocking that is possible without resorting to flashing your card's BIOS.

The MSI GeForce 8800 GTX managed one of the highest overclocks, with the EVGA e-GeForce 8800 GTX beating it slightly on the core but not on the memory clock. The XFX 8800 GTS was able to achieve a core clock higher than some of the 8800 GTXs, which is interesting, but because of its scaled-down shader core it will not be able to outperform a GTX. Interestingly, the card we had higher hopes for didn't overclock as well as we would have thought: the Sparkle Calibre 8800 GTX, with its elaborate HSF design and Peltier cooler, didn't get a very high memory clock at all, but did get a 56MHz boost on the core clock, which will help performance a little.

As a final comment on overclocking, remember that your own results may vary. Given the brand-spanking-new nature of the GPU core, it could be that NVIDIA chose 575 MHz as the final core clock in order to get the most cores running at that speed. Since they aren't offering a range of GPU speeds (as is done with CPU releases), cores either work or don't work at 575 MHz. Whether the overclocks we reached are due to variability in the cores or come from actual BIOS/card differences (not that most of the cards are really different) is impossible to say without a much larger sampling of units from each manufacturer.

Comments

  • JarredWalton - Monday, November 13, 2006 - link

    It appears Oblivion isn't fully able to use all the SPs at present. The stock 8800 GTX should still have about 17% more potential core performance, although maybe not? If the SPs run at 1.35 GHz, what runs at 575 MHz? Or in the case of the OC'ed GTS, at 654 MHz? It could be they have a similar number of ROPs or some other logic that somehow makes the core clock more important in some cases. Or it could just be that the drivers need more optimizations to make the GTX outperform the GTS in all games. Obviously Oblivion isn't GPU bandwidth limited; beyond that, more testing will need to be done.
  • dcalfine - Monday, November 13, 2006 - link

    What about the Liquid-cooled BFG 8800GTX?
    Any news on that? I'd be interested in seeing how it compares in speed, overclockability, temperature, and power consumption.

    Keep up the good work though!
  • shamgar03 - Monday, November 13, 2006 - link

    I ordered one; hopefully it will do well in the overclocking section. I am a bit concerned with the differences in overclocking the cards from different manufacturers. Does anyone know the cause of that? I mean, if two cards are exactly the same as the reference except for the sticker, you have to wonder if there is a bit of a variance in quality of semiconductor production. Maybe favorite distributors get the better cores? Any thoughts on what causes these differences?
  • yyrkoon - Monday, November 13, 2006 - link

    quote:

    If you can't find the EVGA card, then pretty much any of the reference designs will work, and although Sparkle gets extra points for trying something different with its peltier cooler, the implementation just didn't work out.


    I assume this text about the Sparkle card is in reference to its inability to overclock? In my opinion, I would rather use this card, or another card that ran equally (or better) and remained as cool (or cooler). I don't know about you guys, or anyone else, but the thought of a graphics card approaching 90C (@ load, barring the Sparkle) scares the crap out of me, and if this is a sign of things to come, then I'm not sure what my future options are. Let's not forget about 300+ watts under load . . .

    Just as the heat / power consumption is an issue (once again, in my opinion), equally disturbing is the brass it takes to charge $650 USD for a first generation card that obviously needs a lot of work. Yes, it would be nice to own such a card, for pumping out graphics better than anything previous; however, I personally would rather pay $650 for something that ran a lot cooler and offered just as much performance, or better.

    Now, to the guy talking about Vista RC2 drivers from nVidia . . . Do you really expect someone to keep up on drivers for a "product" that is basically doomed to die a quiet death? "RC2" . . . release candidate . . . as far as I'm aware, the last I checked, a lot of the graphics features (of Vista) in these betas were not even implemented. This means that quite possibly the drivers between RC2 and release could be a good bit different. Personally, I'd rather have nVidia work on the finished product drivers vs. the release candidate drivers any day of the week. Aside from yourself, I hardly think anyone cares if you want to run RC2 until May 2007 (legally).
  • Griswold - Thursday, November 23, 2006 - link

    I fail to see your issue with temperatures. These cards were designed to run safely at these temperatures. Just because the figures are higher than you have become used to over the years doesn't mean it's bad.
  • RMSistight - Monday, November 13, 2006 - link

    How come the Quad SLI setup was not included on the tests? Quad SLI owners want to know.
  • DigitalFreak - Monday, November 13, 2006 - link

    quote:

    How come the Quad SLI setup was not included on the tests? Quad SLI owners want to know.


    LOL. You really want to see how bad a $1200 setup will get spanked by a single card that costs half as much? You must be a masochist.
  • penga - Monday, November 13, 2006 - link

    Hey, I am always interested in the most exact wattage number a card uses, and I find it hard to do the math from the given total system power consumption and work out how much only the card eats. So my idea was: why not use a mainboard with an integrated graphics card and compare the numbers? Hope you get the idea. What do you think, wouldn't that work?
  • DerekWilson - Monday, November 13, 2006 - link

    The only way to do this would be to place extremely low resistance (but high current) shunt resistors in the power lines AND build a PCIe riser card to measure the power supplied by the motherboard while the system is running at load.

    There isn't a really good way to report the power of just the card any other way -- using an onboard graphics card wouldn't do it because the rest of the system would be using a different amount of power as well (different cards require the system to do different types of work -- a higher powered graphics card will cause the CPU, memory, and chipset to all work harder and draw more power than a lower performance card).
  • yyrkoon - Monday, November 13, 2006 - link

    Derek, I think he was asking: "why not use an integrated graphics motherboard as a reference system for power consumption tests?"

    However, it should be obvious that this wouldn't be a good idea from a game benchmark perspective, in that it's been my experience that integrated graphics mainboards don't normally perform as well, and often use dated technology / components. Although I haven't really paid that much attention to detail, I would assume you guys use the "best" motherboard for gaming benchmarks, and probably use the same mainboard for the rest of your tests.
