Overclocking

With GTX 980 we saw first-hand how GM204 had very significant overclocking headroom. Even without the ability to meaningfully overvolt NVIDIA cards, we were able to push our base GPU clock speed up from 1126MHz to 1377MHz, or in terms of the maximum boost bin, from 1265MHz to 1515MHz. Consequently, with GTX 970 shipping at lower clock speeds, we have very lofty expectations here.

But running counter to that will be TDP. As we have already seen, GTX 970 is TDP limited right out of the gate, so even if our card has more clock speed headroom, its 110% TDP limit doesn’t leave much in the way of power headroom. Furthermore, as this is already a factory overclocked card, there’s no guarantee that EVGA has left us much overclocking headroom to play with in the first place.

EVGA GeForce GTX 970 FTW Overclocking
                   FTW       Overclocked
Core Clock         1216MHz   1241MHz
Boost Clock        1367MHz   1392MHz
Max Boost Clock    1418MHz   1455MHz
Memory Clock       7GHz      7.8GHz
Max Voltage        1.218v    1.243v

And in fact our results show that they haven’t. We aren’t able to get even another 50MHz out of our GPU before errors start setting in; 25MHz is all we will get, which pushes our base GPU clock speed from 1216MHz to 1241MHz and our maximum boost clock from 1418MHz to 1455MHz. Overall this is a weaker overclock than GTX 980’s, though not immensely so.

Meanwhile memory overclocking was just as fruitful as it was on GTX 980, with our card able to handle up to 7.8GHz on its GDDR5 memory. As we saw with GTX 980, we’re nearly as memory bandwidth bottlenecked as we are GPU bottlenecked, so we will take what performance we can get.
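To put that 7.8GHz figure in perspective, here is a quick sketch of the peak bandwidth math. The 256-bit bus width is GTX 970's published memory interface; the 7.0GHz and 7.8GHz data rates come from the table above:

```python
# Peak GDDR5 bandwidth: data rate (Gb/s per pin) * bus width / 8 bits-per-byte.
def gddr5_bandwidth_gbps(data_rate_ghz: float, bus_width_bits: int = 256) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return data_rate_ghz * bus_width_bits / 8

stock = gddr5_bandwidth_gbps(7.0)  # 224.0 GB/s at the stock 7GHz data rate
oc = gddr5_bandwidth_gbps(7.8)     # 249.6 GB/s at the 7.8GHz overclock
print(f"{stock:.1f} -> {oc:.1f} GB/s (+{(oc / stock - 1) * 100:.1f}%)")
```

That works out to roughly an 11% bump in peak bandwidth, far larger than the ~2% core overclock, which is why the memory overclock matters here at all.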

OC: Metro: Last Light - 2560x1440 - High Quality

OC: Bioshock Infinite - 2560x1440 - Ultra Quality + DDoF

OC: Battlefield 4 - 2560x1440 - Ultra Quality

OC: Crysis 3 - 2560x1440 - High Quality + FXAA

OC: Total War: Rome 2 - 2560x1440 - Extreme Quality + Med. Shadows

OC: Thief - 2560x1440 - Very High Quality

OC: Thief - Min. Frame Rate - 2560x1440 - Very High Quality

As you’d expect from such a mild overclock, the performance increase is very limited. Our overclocked GTX 970 FTW closes the gap with GTX 980 even further, but even with this full overclock it won’t overcome the 3 SMM deficit.

Overall, in all likelihood the GTX 970 FTW benefits more from the 10% increase in TDP than it does from the clock speed increase. GTX 970 – and GM204 in general – clearly wants to be fed more voltage and more power than any NVIDIA-approved card is going to see.

OC: Load Power Consumption - Crysis 3

OC: Load Power Consumption - FurMark

OC: Load GPU Temperature - Crysis 3

OC: Load GPU Temperature - FurMark

OC: Load Noise Levels - Crysis 3

OC: Load Noise Levels - FurMark

Power consumption and noise tick up, but only slightly; the limited 10% TDP increase caps how much additional power the card can draw and dissipate as heat. You aren’t getting much more performance, but you also aren’t getting much more noise.

Comments
  • Casecutter - Friday, September 26, 2014 - link

    I’m confident that if we had two of what were the normal "AIB OC customs" of both a 970 and a 290, things between them might not appear so skewed. First, as much as folks want this level of card to get them into 4K, they’re not... So it really just boils down to seeing what similarly generic OC customs offer and watching them "spar back and forth" @2560x1440 depending on the titles.

    As to power, I wish these reviews would halt the inadequate testing, like it’s still 2004! The power (complete PC) should be benchmarked for each game, recording in real time the oscillation of power in milliseconds, then outputting the 'mean' over the test duration. As each title fluctuates boost frequency differently, the 'mean' for each game is different. Each 'mean' can then be added up and averaged across the number of titles, which would offer the most straightforward evaluation of power while gaming. Also, as most folks today "Sleep" their computers (and not many idle for more than 10-20 min), I believe the best calculation for power is what a graphics card draws while doing nothing, which is something like 80% of each month. I’d like to see how AMD ZeroCore impacts a machine’s power usage over a month’s time, versus the savings only during gaming. Consider gaming 3hr a day, which constitutes 12.5% of a month: does the 25% difference in power while gaming beat the 5W saved with ZeroCore over the other 80% of that month? Saving energy while using and enjoying something is fine, but wasting watts while doing nothing is incomprehensible.
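The methodology proposed in the comment above can be sketched in code. All wattage and hour figures below are illustrative assumptions, not measured values:

```python
# Sketch of the proposed methodology: average sampled whole-system draw per game,
# then weigh a gaming-power delta against an idle-power delta over a month.
# All numbers here are illustrative assumptions, not measurements.

def mean_power(samples_watts):
    """Mean of high-frequency whole-system power samples for one game."""
    return sum(samples_watts) / len(samples_watts)

# Assumed per-game mean draws for two hypothetical cards; card A pulls ~25% more.
card_a_game_means = [280.0, 300.0, 290.0]
card_b_game_means = [224.0, 240.0, 232.0]
gaming_delta_w = mean_power(card_a_game_means) - mean_power(card_b_game_means)

# Monthly split: 3 h/day gaming (3/24 = 12.5% of each day), the rest idle/asleep.
hours_month = 30 * 24
gaming_hours = 30 * 3
idle_hours = hours_month - gaming_hours

idle_delta_w = 5.0  # assumed ZeroCore-style idle saving in card A's favor

extra_gaming_wh = gaming_delta_w * gaming_hours
saved_idle_wh = idle_delta_w * idle_hours
print(f"gaming costs {extra_gaming_wh:.0f} Wh/month, idle saves {saved_idle_wh:.0f} Wh/month")
```

With these particular assumed numbers the gaming delta (58W over 90 hours) outweighs the idle saving (5W over 630 hours), but the point of the exercise is that the answer depends entirely on the measured means and the usage split.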
  • Impulses - Sunday, September 28, 2014 - link

    Ehh, I recently bought 2x custom 290s, but I've no doubt that even with a decent OC the 970 can at the very least still match them in most games... I don't regret the 290s, but I also only paid $350/360 for my WF Gigabyte cards; had I paid closer to $400 I'd be kicking myself right about now.
  • Iketh - Monday, September 29, 2014 - link

    most PCs default to sleeping during long idles, and most people shut theirs off
  • dragonsqrrl - Friday, September 26, 2014 - link

    Maxwell truly is an impressive architecture; I just wish Nvidia would stop further gimping double precision performance relative to single precision with each successive generation of their consumer cards. GF100/110 were capped at 1/8, GK110 was capped at 1/24, and now GM204 (and likely GM210) is capped at 1/32... What's yet to be seen is how they're capping performance on GM204: whether it's a hardware limitation like GK104, or a clock speed limitation in firmware like GK110.

    Nvidia: You peasants want any sort of reasonable upgrade in FP64 performance? Pay up.
  • D. Lister - Friday, September 26, 2014 - link

    "Company X: You peasants want any sort of reasonable upgrade in product Y? Pay up."

    Well, that's capitalism for ya... :p. Seriously though, if less DP ability means a cheaper GPU then as a gamer I'm all for it. If a dozen niche DP hobbyists get screwed over, and a thousand gamers get a better deal on a gaming card then why not? Remember what all that bit mining nonsense did to the North American prices of the Radeons?
  • D. Lister - Friday, September 26, 2014 - link

    Woah, it seems they do tags differently here at AT :(. Sorry if the above message appears improperly formatted.
  • Mr Perfect - Friday, September 26, 2014 - link

    It's not you, the italic tag throws in a couple extra line breaks. Bold might too, I seem to remember that mangling a post of mine in the past.
  • D. Lister - Sunday, September 28, 2014 - link

    Oh, okay, thanks for the explanation :).
  • wetwareinterface - Saturday, September 27, 2014 - link

    this^

    you seem to be under the illusion that nvidia intended to keep shooting themselves in the foot forever by releasing their high-end gpgpu chip under a gaming designation and relying on the driver (which is easy to hack) to keep people from buying a gamer card for workstation loads. face it, they wised up and charge extra for fp64 and the higher ram count now. no more cheap workstation cards. the benefit, as already described, is cheaper gaming cards that are designed to be more efficient at gaming and leave the workstation loads to the workstation cards.
  • dragonsqrrl - Saturday, September 27, 2014 - link

    This is only partially true, and I think D. Lister basically suggested the same thing, so I'll just make a single response for both. The argument for price and efficiency would really only hold in a GK104-type scenario, where on-die FP64 performance is physically limited to 1/24 FP32 due to there being 1/24 as many FP64 CUDA cores. But what about GK110? There is no reason to limit it to 1/24 FP32 other than segmentation. There's pretty much no efficiency or price argument there, and we see proof of that in the Titan: no less efficient at gaming and really no more expensive to manufacture outside of the additional memory and maybe some additional validation. In other words there's really no justification (or at least certainly not the justification you guys are suggesting) for why the GTX 780 Ti couldn't have had 1/12 FP32 with 3GB GDDR5 at the same $700 MSRP, for instance. Other than further (and in my opinion unreasonable) segmentation, of course.

    This is why I was wondering how they're capping performance in GM204.
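As a rough illustration of what the ratio caps discussed in this thread mean in absolute terms, the peak throughput math can be sketched as follows. The 2048 CUDA cores and 1126MHz base clock are GTX 980's published specifications; the formula assumes 2 ops per core per clock (FMA), as is conventional for peak FLOPS figures:

```python
# Theoretical peak throughput: 2 ops (fused multiply-add) * cores * clock,
# scaled down by the FP64:FP32 ratio cap for double precision.
def peak_gflops(cores: int, clock_ghz: float, fp64_ratio: float = 1.0) -> float:
    return 2 * cores * clock_ghz * fp64_ratio

fp32 = peak_gflops(2048, 1.126)          # ~4612 GFLOPS single precision
fp64 = peak_gflops(2048, 1.126, 1 / 32)  # ~144 GFLOPS at the 1/32 cap
print(f"FP32 ~{fp32:.0f} GFLOPS, FP64 ~{fp64:.0f} GFLOPS")
```

The same sketch at GK110's 1/24 cap, or GF110's 1/8, shows how each generation's cap shrinks the double precision fraction even as single precision throughput grows.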
