MSI N470GTX: Power, Temperature, Noise, & Overclocking

As we’ve discussed in previous articles, Fermi family GPUs are no longer binned for operation at a single voltage; rather, each chip is assigned whatever voltage it needs to operate at the desired clockspeeds. As a result, any two otherwise identical cards can have different core voltages, which muddies the situation somewhat. This is particularly the case for our GTX 470 cards, as our N470GTX has a significantly different voltage than our reference GTX 470 sample.

GeForce GTX 400 Series Voltage
Ref GTX 480         0.987v
Ref GTX 470         0.9625v
MSI N470GTX         1.025v
Ref GTX 460 768MB   0.987v
Ref GTX 460 1GB     1.025v

While our reference GTX 470 has a VID of 0.9625v, our N470GTX sample has a VID of 1.025v, a 0.0625v difference. Bear in mind that this comes down to the luck of the draw, and the situation could easily have been reversed. In any case this is the largest difference we’ve seen among any of the GTX 400 series cards we’ve tested, so we’ve gone ahead and recorded separate load numbers for our N470GTX sample to highlight the power/temperature/noise range that exists within a single product line.

Quickly looking at load temperatures, in practice these don’t vary from card to card, as the cooler is programmed to keep the card below a predetermined temperature and will simply ramp the fan up to a higher speed on a hotter card.
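As an illustration of that behavior, below is a minimal sketch (in Python) of a temperature-targeting fan controller. The target temperature, fan-speed limits, and gain are hypothetical values picked purely for illustration; the GTX 470’s actual fan curve is implemented in the card’s BIOS and driver, and we’re not reproducing it here.

    # Minimal sketch of a temperature-targeting fan controller.
    # All constants are hypothetical; the point is only that a hotter card
    # (e.g. one with a higher-VID GPU) reaches the same target temperature
    # at a higher fan speed, and therefore produces more noise.

    TARGET_TEMP_C = 94.0   # assumed target/throttle point, illustrative only
    MIN_FAN_PCT   = 40.0
    MAX_FAN_PCT   = 100.0
    GAIN          = 5.0    # extra fan % per degree C of overshoot

    def fan_speed(gpu_temp_c: float) -> float:
        """Ramp the fan as the GPU approaches and exceeds the target temperature."""
        overshoot = gpu_temp_c - (TARGET_TEMP_C - 5.0)  # start ramping ~5C early
        speed = MIN_FAN_PCT + GAIN * max(0.0, overshoot)
        return min(MAX_FAN_PCT, max(MIN_FAN_PCT, speed))

    if __name__ == "__main__":
        for temp in (70, 88, 92, 94, 96):
            print(f"{temp}C -> {fan_speed(temp):.0f}% fan")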

This brings us to power consumption, where the difference in VID makes itself much more apparent. With all other things held equal, under Crysis our N470GTX sample ends up consuming 7W more than our reference GTX 470 sample. Under Furmark, however, this becomes a 37W difference, showcasing just how wide a variance the use of multiple VIDs can produce within a single product. Ultimately, for most games such a large VID difference isn’t going to result in more than a few watts’ difference in power consumption, but under extreme loads having a card with a lower-VID GPU can have its advantages.
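To get a rough sense of why a higher VID means more power, recall that a chip’s dynamic power scales roughly with the square of its voltage at a fixed clock. The quick estimate below applies that relation to the two VIDs listed earlier; the assumed GPU-only load is an illustrative number rather than a measured one, and the calculation ignores leakage, so it will not reproduce the 7W/37W gaps exactly.

    # Back-of-the-envelope estimate: dynamic power scales roughly with V^2 at a fixed clock.
    # Leakage (which also grows with voltage) and board losses are ignored,
    # so this is only a rough illustration of the VID-driven power spread.

    REF_VID = 0.9625   # reference GTX 470 sample
    MSI_VID = 1.025    # our MSI N470GTX sample

    scaling = (MSI_VID / REF_VID) ** 2
    print(f"Estimated dynamic-power increase: ~{(scaling - 1) * 100:.1f}%")

    ASSUMED_GPU_LOAD_W = 200.0  # hypothetical GPU-only load, not a measured figure
    print(f"~{ASSUMED_GPU_LOAD_W * (scaling - 1):.0f}W extra at an assumed {ASSUMED_GPU_LOAD_W:.0f}W load")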

As we stated before, the cooler on the GTX 470 targets a specific temperature, varying the fan speed to hold it. With the higher VID and greater power consumption of our N470GTX sample, the card ends up being a good 3.7dB louder under Furmark than our reference sample, thanks to its higher power draw (and hence heat dissipation). Note that this is a worst-case scenario though, as under most games there’s a much smaller power draw difference between cards of different VIDs, and as a result the difference in load noise is also minimal.

Overclocking

With MSI’s Afterburner software it’s possible to increase the core voltage on the N470GTX up to 1.0875v, 0.0625v above the stock voltage of our sample card. Although by no means a small difference, neither is it more than the reference GTX 470 cooler can handle, so in a well-ventilated case we’ve found that it’s safe to go all the way to 1.0875v.

             Stock Clock   Max Overclock   Stock Voltage   Overclocked Voltage
MSI N470GTX  607MHz        790MHz          1.025v          1.0875v

With our N470GTX cranked up to 1.0875v, we were able to increase the core clock to 790MHz (which also gives us a 1580MHz shader clock), a 183MHz (30%) increase in the core clock speed. Anything beyond 790MHz would result in artifacting.
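For reference, the arithmetic behind those figures is straightforward; the sketch below simply recomputes the gain from the reference 607MHz base clock, with the shader domain on GF100 running at twice the core clock.

    # Worked arithmetic for the overclock quoted above.
    STOCK_CORE_MHZ  = 607   # reference GTX 470 core clock
    MAX_OC_CORE_MHZ = 790   # highest artifact-free core clock on our sample

    gain_mhz = MAX_OC_CORE_MHZ - STOCK_CORE_MHZ
    gain_pct = 100.0 * gain_mhz / STOCK_CORE_MHZ
    shader_mhz = 2 * MAX_OC_CORE_MHZ  # GF100's shader clock runs at 2x the core clock

    print(f"Core overclock: +{gain_mhz}MHz ({gain_pct:.0f}%)")  # +183MHz (30%)
    print(f"Shader clock at 790MHz core: {shader_mhz}MHz")      # 1580MHz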

Overclocking alone isn’t enough to push the N470GTX to GTX 480 levels, but it’s enough to come close much of the time. In cases where the GTX 480 already has a solid lead over its competition, the overclocked N470GTX is often right behind it; in practice this means the overclocked N470GTX manages to consistently beat the Radeon HD 5870, something a stock-clocked GTX 470 cannot do.

But due to the overvolting involved, there’s a price to pay for that extra performance. While our temperatures hold consistent, the additional voltage directly leads to additional power draw and higher fan speeds. The overclocked N470GTX can approach GTX 480 performance, but it exceeds the GTX 480 in both of these metrics. In terms of load noise the overclocked N470GTX is pushing just shy of 100% fan speed, making it the loudest card in our test suite. Similarly, load power consumption under both Crysis and Furmark exceeds that of any of our stock-clocked single-GPU cards.

Ultimately overclocking the N470GTX provides a very generous performance boost, but to make use of it you need to put up with an incredible amount of heat and noise, so it’s not by any means an easy tradeoff.

Comments

  • Tunnah - Sunday, August 1, 2010 - link

    seems my comments are being taken out of context, sorry guys not used to posting on a board i'm an IRC kind of guy

    my overshadowing comment was in reference to SLI only

    was asking why study the 470 SLI at the moment when the 460 seems to be grabbing more headlines, especially with its scaling capabilities

    the 460 SLI numbers were what i was asking about, as from what i've read in other reviews the scaling is amazing and it brings it up against, and sometimes passes, the 5850 in XF

    even though i know a big SLI round up is coming it just seemed weird to focus on the 470, but as they say they've been waiting for a second one to do SLI testing for a while..
  • mapesdhs - Saturday, July 31, 2010 - link


    I see the 8800GT in the test setup summary, but why no results for it (especially SLI) in
    the performance tables?

    Ian.
  • Perisphetic - Sunday, August 1, 2010 - link

    A picture of twin jet engine exhaust on the sticker & a software that's called afterburner. Can this be used for this new type of hot air drilling or just plain marshmallow roasting???
  • Perisphetic - Sunday, August 1, 2010 - link

    But jokes aside, where in the software is the setting for heat shrinking tube?
  • nmctech - Monday, August 2, 2010 - link

    I noticed a few days back they released the Quadro Fermi cards 4000, 5000 and 6000. I found a couple of gamer reviews but a more thorough review of the cards for 3D use would be nice.

    Have you guys had a chance to check those out yet?
  • mapesdhs - Wednesday, August 4, 2010 - link


    I expect they'll review them eventually, but more likely reviews for the new cards
    will appear on other sites first, eg. those aimed at users of Maya, ProE, CATIA, etc.

    Presumably they'll run Viewperf, Cinebench, etc. among other things. I have two
    Quadro FX 5500s to test (after which I'll put them up for sale), so I can gather
    some results, post the data on my site for comparison to whoever reviews the
    newer cards. If anyone here is interested, let me know (mapesdhs@yahoo.com)
    and I'll send out a URL when the tests are done.

    Btw, I was surprised to see NVIDIA's summary shows the 5500 is 3X faster than
    the 5800:

    http://www.nvidia.com/content/PDF/product-comparis...

    so it should be interesting to see how two 5500s SLI compare to the new 6000,
    sans any differences in CPU/RAM/mbd that might have affected the results (my system
    is a 4GHz i7 860, so the two cards will be running 8X/8X for SLI).

    Ian.
  • hsew - Tuesday, August 3, 2010 - link

    I wish SOMEBODY would do an article on multiple GPU scaling , CFX and TriSLI, on AMD vs Intel.

    Something like:

    Core i7 980X, Core i7 9xx, Core i7 8xx, Core i5 7xx, Core i5 6xx, Core i3 5xx.

    Phenom II X6, X4, X3, X2, Athlon II X4, X3, X2.

    all systems 4GB ram each.

    Now, I know that such an article would likely take an astronomical amount of time to write, BUT, it would answer a seriously nagging question:

    Do you really need four or more cores in a Multi-GPU system? Do you even need an Intel CPU to effectively run a Multi-GPU system?
  • Exelius - Wednesday, August 4, 2010 - link

    I think the reason this hardware is so boring is that the difference between low-end cards and high-end cards is so high. Low end cards are far more popular though; and game companies aim for the lowest common denominator. Thus there is no market for exciting cards because there are no games that can use them.

    NVidia knows this; and are desperately trying to find a new market for their hardware. ATI knew this, which is why the merger with AMD happened. I'm guessing NVidia won't last long as an independent company; Fermi for HPC isn't catching on quickly and I don't think NVidia is in a stable enough position to convince HPC users to begin the costly and time consuming project of moving to Fermi. I think they need an Intel, IBM or HP behind them for that to happen.

    But yes, PC graphics have become boring. Blame $400 PCs and smartphones for that.
  • Heatlesssun - Saturday, August 7, 2010 - link

    Haven't played with a high-end system lately have you? Graphics boring on high-end PCs, you gotta be kidding me! 3D Surround, just amazing stuff that that $400 PC and smart phone need not apply.
  • Patrick Wolf - Monday, August 16, 2010 - link

    It'd be great if you explained under what conditions you record temps. Things like using a case or an open bench? Are there any additional fans blowing on the card(s)? Room temp? How long do you run Furmark and what settings are used?
