Heat, Power and Noise

We measured heat, power, and noise the same way that we did with the EVGA e-GeForce 7800 GTX, by looping a Battlefield 2 time demo for about 45 minutes to stress the system.

Heat

[Graph: Load Temperature]


One interesting thing we noticed about this card is that it runs noticeably cooler than the EVGA. It idled at 40° C versus the EVGA's 46° C, and our load tests confirmed the trend: at stock clocks, the MSI NX7800 GTX peaked at 75° C and rose only one degree when overclocked, while the EVGA board hit 81° C at both its 450MHz and 475MHz clock speeds. That is a significant difference, and this might be the card to go with if you live in an extremely hot area. Of course, heat can vary from card to card as much as overclocking headroom does, so it could simply be a case of our MSI board having a "sweeter" chip.

Power

As described in the last article, we measured power draw at the wall outlet in each state to get an idea of how much of a power load the card places on the system.

[Graph: Load Power]


At idle, the system drew 147 watts, about 6 watts more than it did with the EVGA card. Oddly, despite running cooler, the MSI card put a slightly larger power load on the system. As the graph shows, the system drew 271 W with the card at its stock 430MHz and 277 W overclocked to 485MHz, while with the EVGA card it drew 268 W at 450MHz and 272 W at 475MHz.
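
Tabulating the deltas makes the comparison clearer; here is a quick sketch of the arithmetic in Python (the readings are the whole-system wall figures above, so the gaps include power supply efficiency losses, and the variable names are just labels of our choosing):

    # Wall-outlet readings from above, in watts.
    msi_idle, msi_430, msi_485 = 147, 271, 277
    evga_450, evga_475 = 268, 272

    print(msi_430 - msi_idle)  # 124 W added going from idle to load at stock clocks
    print(msi_485 - msi_430)   # 6 W more when overclocked to 485MHz
    print(msi_430 - evga_450)  # 3 W gap between the two boards at stock clocks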

Noise

As with the EVGA card, we didn't notice anything unusual about the fan noise; subjectively, it was neither especially quiet nor especially loud compared to other 7800 GTX cards. Using the formula mentioned before, we arrived at 39.2 dB, slightly higher than the EVGA card's noise level.

For those interested, the formula looks like this (all sound measurements are made at a stationary position one meter from the system):
gpufactor = (10^(SPLsys / 20))^2 - (10^(SPLamb+cpu / 20))^2
SPLamb+gpu = 20 * log10(sqrt(gpufactor + (10^(SPLamb / 20))^2))

SPLsys is the measured SPL of the entire system.
SPLamb is the SPL of the room with the computer shut down.
SPLamb+cpu is the measured SPL of the system without the graphics card installed.
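
To make the arithmetic concrete, here is a minimal sketch of that calculation in Python (the function name and the sample readings are hypothetical placeholders, not our measured values):

    import math

    def gpu_noise_spl(spl_sys, spl_amb_cpu, spl_amb):
        """Isolate the graphics card's noise contribution (in dB) from the
        three SPL measurements defined above."""
        # Convert each dB reading back to squared linear sound pressure,
        # subtract the system-without-GPU term, then fold the ambient back in.
        gpufactor = (10 ** (spl_sys / 20)) ** 2 - (10 ** (spl_amb_cpu / 20)) ** 2
        return 20 * math.log10(math.sqrt(gpufactor + (10 ** (spl_amb / 20)) ** 2))

    # Hypothetical readings: whole system 40 dB, system minus GPU 36 dB,
    # room 30 dB -> the card plus ambient works out to about 38.5 dB.
    print(round(gpu_noise_spl(40.0, 36.0, 30.0), 1))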

Since the heat sinks have been identical so far, it's no surprise that the noise levels barely differ. We likely won't see much difference between future cards either, unless vendors start varying fan speeds or, of course, move to different heat sink designs.

Comments

  • Fluppeteer - Friday, July 29, 2005 - link

    I'm not sure how board-specific this would be (although the BIOS could easily get
    in on the act), but I notice nVidia are claiming a big readback speed increase on
    the Quadro FX4500 over the FX4400 (2.4GB/s vs 1GB/s). This doesn't seem to apply
    to the 7800GTX in the GPUbench report I managed to find, but it's the kind of thing
    which could be massively driver and BIOS-dependent.

    I know this is a more artificial figure than the games which have been run, but
    significant jumps like this (along with the increased vector dot rate) make these
    cards much more attractive than the 6800 series for non-graphical work. Would it
    be possible to try to confirm whether this speed-up is specific to the Quadro
    board, or whether it applies to the consumer cards too? (Either by a little bit
    of test code, or by running some artificial benchmarks.)

    Just curious. Not that I'll be able to afford a 4500 anyway...
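
    A minimal sketch of the kind of readback timing test meant here (purely illustrative: PyOpenGL and glfw are assumptions, not anything from the article, and the figure it prints is heavily driver- and bus-dependent):

        import time
        import glfw
        from OpenGL.GL import glFinish, glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

        WIDTH, HEIGHT, ITERS = 1024, 1024, 100

        if not glfw.init():
            raise RuntimeError("glfw failed to initialize")
        window = glfw.create_window(WIDTH, HEIGHT, "readback test", None, None)
        glfw.make_context_current(window)

        glFinish()  # flush any setup work before timing starts
        start = time.perf_counter()
        for _ in range(ITERS):
            # Copy the full framebuffer back to system memory each pass.
            glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
        glFinish()
        elapsed = time.perf_counter() - start

        gigabytes = WIDTH * HEIGHT * 4 * ITERS / 1e9
        print(f"readback: {gigabytes / elapsed:.2f} GB/s")
        glfw.terminate()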
  • tmehanna - Thursday, July 28, 2005 - link

    ALL 7800GTX cards at this point are manufactured by nvidia and sold as-is by the "vendors". The ONLY physical difference is the logo on the cooler. If some vendors screen and OC their cards before selling, clock speeds would be the only difference. ANY performance or heat dissipation differences at similar clock speeds are MERELY manufacturing variances.
  • DerekWilson - Thursday, July 28, 2005 - link

    Not true. Vendors have some BIOS control over aspects of the cards that are not exposed to users. We have not been able to confirm any details from any vendor or NVIDIA (as they like to keep this stuff under wraps), but heat, power, and noise (and even overclockability) could be affected by video BIOS settings.

    We don't know the details; we need more clarification. In the meantime, these are the numbers we are seeing so we will report them. If we are able to get the information we need to really say why we see these differences then we will definitely publish our findings.
  • lambchops3344 - Wednesday, July 27, 2005 - link

    No matter how much better another card does, I'm always going to buy EVGA... I've saved more time and money with the Step-Up program, and their customer support is so good too.
  • NullSubroutine - Tuesday, July 26, 2005 - link

    I read an article about how CPU performance is tapering off (Murphy's law or Moore's law, I forget which one), while GPU performance has continued to increase and shows signs that it will keep increasing. I also remember an article where Nvidia or ATI (I can't remember which) was asked about any "dual core" GPUs that might be developed. They answered that if you really look at the hardware, GPUs are already like multiprocessors, or something to that nature. Perhaps this could be the reason for the clock speed questions? It would seem logical to me that their technology doesn't run like a typical CPU, because each "processor" runs at a different speed? I think you might understand what I'm trying to say, at least I hope so, cuz I'm failing miserably at... what was I sayin?
  • Gamingphreek - Monday, July 25, 2005 - link

    Not sure if this has already been discussed in earlier articles, but the 7800GTX, as everyone (including myself) has noticed, seems bottlenecked at every resolution except 16x12, and then with AA and AF enabled the X850XT seems to catch up. While the averages might be the same, has AnandTech ever thought of including the minimum and maximum framerates on its graphs?

    Thanks,
    -Kevin Boyd
  • Fluppeteer - Monday, July 25, 2005 - link

    Just wanted to thank Derek and Josh for clarifying the dual link situation. MSI don't mention anything about dual link, but after the debacle with their 6800"GT" I'm not sure I'd have trusted their publications anyway... If *all* the 7800GTXs are dual link, I'm more confident (although if there's actually a chance to try one with a 30" ACD or - preferably - a T221 DG5 in a future review I'd be even happier!)

    Good review, even if we can expect most cards to be pretty much clones of the reference design for now.
  • DerekWilson - Monday, July 25, 2005 - link

    We'll have some tests with a Cinema Display at some point ...

    But for now, we can actually see the Silicon Image TMDS used for Dual-Link DVI under the HSF. :-)
  • Fluppeteer - Monday, July 25, 2005 - link

    Cool; it'd reassure me before I splash out! (Although I'm still hoping for the extra RAM pads to get filled out - got to hate 36MB frame buffers - but with the Quadro 4500 allegedly due at SIGGRAPH it shouldn't be long now.)

    Sounds like the same solution as the Quadro 3400/6800GTo, with the internal transmitter used for one link and the SiI part for the other. I don't suppose you've pulled the fan off to find out the part number?

    I'd also be interested in knowing whether the signal quality has improved on the internal transmitter; nVidia have a bad record with this, and the T221 pushes the single link close to the 165MHz limit (and the dual link, for that matter). People have struggled with the 6800 series, even in Quadro form, where the internal transmitters have been in use. It'd be nice to find out if they're learning, although asking you to stick an oscilloscope on the output is a bit optimistic. :-) These days this probably affects people with (two) 1920x1200 panels as well as oddballs like me with DG5s, though.

    On the subject of DVI, I don't suppose nVidia have HDCP support yet, do they? (Silicon Image do a part which can help out, or I believe it can be done in the driver.) It's really a Longhorn thing, but you never know...

    Now, if only nVidia would produce an SLi SFR mode with horizontal spanning which didn't try to merge data down the SLi link, I'd be able to get two cards and actually play games on two inputs to the T221 (or two monitors); the way the 7800 benchmarks are going, 3840x2400 is going to be necessary to make anything fill rate limited in SLi. (Or have they done this already? There was talk about Quadros having dual-card OpenGL support, but I'm behind on nVidia drivers while my machine's in bits.)

    Thanks for the info!

    (Starts saving up...)
  • meolsen - Wednesday, July 27, 2005 - link

    Neither EVGA nor MSI advertises that its card is capable of driving the resolutions that would suggest dual-link DVI is enabled.

    E.g., MSI:

    Advanced Display Functionality
    • Dual integrated 400MHz RAMDACs for display resolutions up to and including 2048x1536 at 85Hz
    • Dual DVO ports for interfacing to external TMDS transmitters and external TV encoders
    • Full NVIDIA nView multi-display technology capability

    Why would they conceal this feature?
