Overclocking

Unlike the EVGA e-GeForce 7800 GTX that we reviewed, the MSI NX7800 GTX did not come factory overclocked. The core clock was 430MHz and the memory clock was set at 1.20GHz, the same as our reference card. We were, however, able to overclock our MSI card a bit further than the EVGA, reaching 485MHz core and 1.25GHz memory. Our EVGA only reached 475MHz, and we'll see how the numbers compare in the next section.

We used the same method described in the EVGA article to find our overclocked speeds. To recap, we raised the core and memory clocks in increments, running Battlefield 2 benchmarks at each step to test stability, until the game no longer ran cleanly. We then backed the clocks down until the card was stable again and used those numbers.
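The procedure amounts to a simple linear search for the highest stable clocks. Here is a minimal sketch of it in Python; the set_clocks and run_benchmark helpers are hypothetical placeholders for whatever tweaking utility (Coolbits, in our case) and stability test (Battlefield 2, in our case) you happen to use, not a real driver API.

```python
# Minimal sketch of our overclocking procedure: raise the core clock
# in small steps, benchmark at each step, and back off after the
# first failure. set_clocks() and run_benchmark() are hypothetical
# placeholders, not a real API.

def set_clocks(core_mhz, mem_mhz):
    """Placeholder: apply clocks to the card (e.g., via Coolbits)."""
    raise NotImplementedError

def run_benchmark():
    """Placeholder: return True if the benchmark ran cleanly
    (no artifacts, hangs, or crashes)."""
    raise NotImplementedError

def find_stable_core(start=430, mem=1200, step=5):
    core = start
    # Step the core clock up until the benchmark no longer runs cleanly.
    while True:
        set_clocks(core + step, mem)
        if not run_benchmark():
            break
        core += step
    # Back down to the last clean setting and confirm it is stable.
    set_clocks(core, mem)
    while not run_benchmark():
        core -= step
        set_clocks(core, mem)
    return core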

For more general information about how we approach overclocking, check the overclocking section of our last article, on the EVGA e-GeForce 7800 GTX. The story doesn't end there, however. We have been digging into NVIDIA's handling of clock speed adjustment and have turned up some rather problematic information.

After talking to NVIDIA at length, we have learned that it is more difficult to set the 7800 GTX to a particular clock speed than it is to hold a Ferrari at exactly 18π miles per hour. Apparently, NVIDIA looks at clock speed adjustment as a "speed knob" rather than a real clock speed control. The granularity of NVIDIA's clock speed adjustment is not the 1MHz increment that the Coolbits slider would have us believe. There are multiple clock partitions on the chip, with "most of the die area" clocked at 430MHz on the reference card, which makes it difficult to say what parts of the chip are running at a particular frequency at any given time.

Of course, presuming that we should know exactly how fast every transistor is running at all times would be absurd; we don't have any such information for CPUs, and yet there's no problem there. When overclocking CPUs, we look at multipliers and bus speeds, which give us good reference points. If core frequency and the clock speed slider are really more of a "speed knob", and all we need is a reference point, why not label it 0 to 10? Remember when enthusiasts would go so far as to replace crystals on motherboards to overclock? Are we going to have to return to those days in order to know truly what speed our GPU is running? I, for one, certainly hope not.
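To illustrate why CPU reference points are so much cleaner, here is the arithmetic in a few lines of Python; the 200MHz bus and 11x multiplier are purely illustrative numbers, not taken from any particular system.

```python
# A CPU's effective clock is simply bus speed times multiplier,
# so any overclock can be stated and verified exactly.
bus_mhz = 200        # illustrative front-side bus speed
multiplier = 11      # illustrative CPU multiplier

effective_mhz = bus_mhz * multiplier
print(f"Effective CPU clock: {effective_mhz} MHz")   # 2200 MHz

# With a GPU "speed knob", no such formula is exposed: the slider
# value need not equal the frequency of any given clock partition.
```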

We can understand the delicate situation that NVIDIA is in when it comes to revealing too much about how its chips are clocked and designed. However, there is a minimum level of information that we need in order to understand what it means to overclock a 7800 GTX, and we just don't have it yet. NVIDIA tells us that fill rate can be used to verify clock speed, but this doesn't give us enough detail to determine what's actually going on.
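Fill rate does at least offer a rough sanity check: peak pixel fill rate is just the number of pixels written per clock times the core clock, so a measured fill rate implies a clock. The sketch below assumes the G70's 16 ROPs; the measured figure is a hypothetical value from a synthetic fill rate test, not one of our results.

```python
# Rough clock check from fill rate: peak pixel fill rate is
# (pixels written per clock) x (core clock). The 7800 GTX (G70)
# can write 16 pixels per clock through its ROPs.
ROPS = 16

def implied_core_mhz(measured_fill_mpixels_per_s):
    """Back out the core clock implied by a measured pixel fill rate."""
    return measured_fill_mpixels_per_s / ROPS

# Hypothetical measurement from a synthetic fill rate test:
measured = 6880.0    # Mpixels/s
print(f"Implied core clock: {implied_core_mhz(measured):.0f} MHz")  # 430 MHz
```

The catch, per NVIDIA's own "speed knob" description, is that a fill rate test can only confirm the clock feeding the ROPs, not what the rest of the die is doing.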

Asking NVIDIA for more information regarding how their chips are clocked has been akin to beating one's head against a wall. And so, we decided to take it upon ourselves to test a wide variety of clock speeds between 400MHz and 490MHz to try to get a better idea of what's going on. Here's a look at Splinter Cell: Chaos Theory performance over multiple core clock speeds.
[Chart: Splinter Cell: Chaos Theory frame rate at core clock speeds from 400MHz to 490MHz]
As we can see, there is really no major effect on performance unless clock speed is adjusted by about 15 to 25MHz at a time; smaller increases don't yield any measurable differences. The most interesting aspect to note is that the higher the starting frequency, the larger the increase needed to have a significant effect. At lower frequencies, the plateaus span about 10MHz, while between 450MHz and 470MHz there is no useful increase in speed at all.

This data seems to indicate that each step in the driver may raise the speed of something insignificant, while moving up to one of the plateaus bumps a multiplier for something more important (like the pixel shader hardware) to the next discrete level. It is difficult to say with any certainty what is happening inside the hardware without more information.
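As a purely speculative illustration of this plateau hypothesis, here is a toy model in Python in which the slider value only moves the important clock domain in discrete steps. All of the numbers are invented to match the behavior we measured; NVIDIA has not disclosed the real scheme.

```python
# Speculative toy model of the plateau behavior we measured.
# The slider value may feed some minor clock directly, but the
# domain that matters (e.g., the pixel shaders) only changes when
# the slider crosses a discrete boundary. All numbers are invented.

PLATEAU_WIDTH = 15   # approx. MHz between observed performance steps
BASE = 400           # MHz, bottom of our test range

def effective_shader_clock(slider_mhz):
    """Snap the requested clock down to the nearest discrete plateau."""
    steps = (slider_mhz - BASE) // PLATEAU_WIDTH
    return BASE + steps * PLATEAU_WIDTH

for requested in range(430, 491, 5):
    print(requested, "->", effective_shader_clock(requested))
# Several consecutive slider values map to the same effective clock,
# which would produce exactly the flat spots seen in the benchmark.
```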

We will be following this issue over time as we continue to cover individual 7800 GTX cards. NVIDIA has also indicated that they may "try to improve the granularity of clock speed adjustments", but when or if that happens and what the change will bring are questions that they would not discuss. Until we know more or have a better tool for overclocking, we will continue testing cards as we have in the past. For now, let's get back to the MSI card.


42 Comments

  • Fluppeteer - Friday, July 29, 2005 - link

    I'm not sure how board-specific this would be (although the BIOS could easily get
    in on the act), but I notice nVidia are claiming a big readback speed increase on
    the Quadro FX4500 over the FX4400 (2.4GB/s vs 1GB/s). This doesn't seem to apply
    to the 7800GTX in the GPUbench report I managed to find, but it's the kind of thing
    which could be massively driver and BIOS-dependent.

    I know this is a more artificial figure than the games which have been run, but
    significant jumps like this (along with the increased vector dot rate) make these
    cards much more attractive than the 6800 series for non-graphical work. Would it
    be possible to try to confirm whether this speed-up is specific to the Quadro
    board, or whether it applies to the consumer cards too? (Either by a little bit
    of test code, or by running some artificial benchmarks.)

    Just curious. Not that I'll be able to afford a 4500 anyway...
  • tmehanna - Thursday, July 28, 2005 - link

    ALL 7800GTX cards at this point are manufactured by NVIDIA and sold as-is by the "vendors". The ONLY physical difference is the logo on the cooler. If some vendors screen and OC their cards before selling, clock speeds would be the only difference. ANY performance or heat dissipation differences at similar clock speeds are MERELY manufacturing variances.
  • DerekWilson - Thursday, July 28, 2005 - link

    Not true. Vendors have some BIOS control over aspects of the cards that are not exposed to users. We have not been able to confirm any details from any vendor or NVIDIA (as they like to keep this stuff under wraps), but temperature, heat, and noise (and even overclockability) could be affected by video BIOS settings.

    We don't know the details; we need more clarification. In the meantime, these are the numbers we are seeing so we will report them. If we are able to get the information we need to really say why we see these differences then we will definitely publish our findings.
  • lambchops3344 - Wednesday, July 27, 2005 - link

    No matter how much better a card does, I'm always going to buy EVGA... I've saved more time and money with the Step-Up program. Their customer support is so good, too.
  • NullSubroutine - Tuesday, July 26, 2005 - link

    I read an article about how CPU performance is tapering off (Murphy's law or Moore's law, I forget which one) while GPU performance has continued to increase, and shows signs that it will keep increasing. Also, I remember an article where NVIDIA or ATI (I can't remember which) was asked about any "dual core" GPUs that might be developed. They answered that if you really look at the hardware, GPUs are already like multiprocessors, or something to that nature. Perhaps this could be the reason for the clock speed questions? It would seem logical to me that their technology doesn't run like a typical CPU, because each "processor" runs at a different speed? I think you might understand what I'm trying to say, at least I hope so, cuz I'm failing miserably at... what was I saying?
  • Gamingphreek - Monday, July 25, 2005 - link

    Not sure if this has already been discussed in earlier articles, but the 7800GTX, as everyone (including myself) has seen, seems bottlenecked at every resolution except 16x12. And then, with AA and AF enabled, the X850XT seems to catch up. While the averages might be the same, has AnandTech ever thought of including the minimum and maximum framerates on their graphs?

    Thanks,
    -Kevin Boyd
  • Fluppeteer - Monday, July 25, 2005 - link

    Just wanted to thank Derek and Josh for clarifying the dual link situation. MSI don't mention anything about dual link, but after the debacle with their 6800"GT" I'm not sure I'd have trusted their publications anyway... If *all* the 7800GTXs are dual link, I'm more confident (although if there's actually a chance to try one with a 30" ACD or - preferably - a T221 DG5 in a future review I'd be even happier!)

    Good review, even if we can expect most cards to be pretty much clones of the reference design for now.
  • DerekWilson - Monday, July 25, 2005 - link

    We'll have some tests with a Cinema Display at some point ...

    But for now, we can actually see the Silicon Image TMDS used for Dual-Link DVI under the HSF. :-)
  • Fluppeteer - Monday, July 25, 2005 - link

    Cool; it'd reassure me before I splash out! (Although I'm still hoping for the extra RAM pads to get filled out - got to hate 36MB frame buffers - but with the Quadro 4500 allegedly due at SIGGRAPH it shouldn't be long now.)

    Sounds like the same solution as the Quadro 3400/6800GTo, with the internal transmitter used for one link and the SiI part for the other. I don't suppose you've pulled the fan off to find out the part number?

    I'd also be interested in knowing whether the signal quality has improved on the internal transmitter; nVidia have a bad record with this, and the T221 pushes the single link close to the 165MHz limit (and the dual link, for that matter). People have struggled with the 6800 series, even in Quadro form, where the internal transmitters have been in use. It'd be nice to find out if they're learning, although asking you to stick an oscilloscope on the output is a bit optimistic. :-) These days this probably affects people with (two) 1920x1200 panels as well as oddballs like me with DG5s, though.

    On the subject of DVI, I don't suppose nVidia have HDCP support yet, do they? (Silicon Image do a part which can help out, or I believe it can be done in the driver.) It's really a Longhorn thing, but you never know...

    Now, if only nVidia would produce an SLi SFR mode with horizontal spanning which didn't try to merge data down the SLi link, I'd be able to get two cards and actually play games on two inputs to the T221 (or two monitors); the way the 7800 benchmarks are going, 3840x2400 is going to be necessary to make anything fill rate limited in SLi. (Or have they done this already? There was talk about Quadros having dual-card OpenGL support, but I'm behind on nVidia drivers while my machine's in bits.)

    Thanks for the info!

    (Starts saving up...)
  • meolsen - Wednesday, July 27, 2005 - link

    Neither EVGA NOR MSI advertises that their card is capable of driving the resolutions that would suggest that dual-link DVI is enabled.

    E.g., MSI:

    Advanced Display Functionality
    • Dual integrated 400MHz RAMDACs for display resolutions up to and including 2048x1536 at 85Hz
    • Dual DVO ports for interfacing to external TMDS transmitters and external TV encoders
    • Full NVIDIA nView multi-display technology capability

    Why would they conceal this feature?
