Overclocking GTX 980

One of GTX 750 Ti’s more remarkable features was its overclocking headroom. GM107 could overclock so well that upon initial release, NVIDIA had not programmed enough overclocking headroom into their drivers to allow many GTX 750 Ti cards to be overclocked to their true limits. It's a legacy we would be glad to see repeated with GTX 980, and one we are going to put to the test.

As with NVIDIA’s Kepler cards, NVIDIA’s Maxwell cards are subject to NVIDIA’s stringent power and voltage limitations. Overvolting is limited to NVIDIA’s built-in overvoltage function, which isn’t so much a voltage control as it is the ability to unlock 1-2 more boost bins and their associated voltages. Meanwhile, TDP controls are limited to whatever value NVIDIA believes is safe for that model of card, which can vary depending on its GPU and its power delivery design.

For GTX 980 we have a 125% TDP limit; meanwhile we are able to overvolt by one boost bin to 1265MHz, which utilizes a voltage of 1.25v.

GeForce GTX 980 Overclocking

                    Stock      Overclocked
Core Clock          1126MHz    1377MHz
Boost Clock         1216MHz    1466MHz
Max Boost Clock     1265MHz    1515MHz
Memory Clock        7GHz       7.8GHz
Max Voltage         1.25v      1.25v

GTX 980 does not let us down; like its lower-end Maxwell 1 based counterpart, it turns in an overclocking performance just short of absurd. Even without real voltage controls we were able to push another 250MHz (22%) out of our GM204 GPU, resulting in an overclocked base clock of 1377MHz and, more amazingly, an overclocked maximum boost clock of 1515MHz. That makes this the first NVIDIA card we have tested to surpass both 1.4GHz and 1.5GHz, all in one fell swoop.

This also leaves us wondering just how much farther GM204 could overclock if we were able to truly overvolt it. At 1.25v I’m not sure too much more voltage is good for the GPU in the long term – that’s already quite a bit of voltage for a TSMC 28nm process – but I suspect there is some untapped headroom left in the GPU at higher voltages.

Memory overclocking, on the other hand, doesn’t end up being quite as extreme, but we’ve known from the start that at a stock memory clock of 7GHz we were already pushing the limits of GDDR5 and NVIDIA’s memory controllers. Still, we were able to wring another 800MHz (11%) out of the memory subsystem, for a final memory clock of 7.8GHz.
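As a quick sanity check, those percentages fall straight out of the clocks in the table above. A minimal Python sketch, nothing more:

```python
# Sanity-check the overclock percentages quoted above, using the
# stock and overclocked values from the overclocking table.

def oc_gain(stock_mhz: float, oc_mhz: float) -> tuple[float, float]:
    """Return the absolute (MHz) and relative (%) size of an overclock."""
    delta = oc_mhz - stock_mhz
    return delta, 100.0 * delta / stock_mhz

core_delta, core_pct = oc_gain(1126, 1377)   # base core clock
mem_delta, mem_pct = oc_gain(7000, 7800)     # effective GDDR5 data rate

print(f"Core:   +{core_delta:.0f}MHz ({core_pct:.0f}%)")   # +251MHz (22%)
print(f"Memory: +{mem_delta:.0f}MHz ({mem_pct:.0f}%)")     # +800MHz (11%)
```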

Before we get to our full results, in light of GTX 980’s relatively narrow memory bus and NVIDIA’s color compression improvements, we briefly broke apart our core and memory overclocks in order to test each separately and see which one has the greater effect. One would presume the memory overclock is the more important of the two given the narrow memory bus, but as it turns out that is not necessarily the case.

GeForce GTX 980 Overclocking Performance

                    Core (+22%)    Memory (+11%)    Combined
Metro: LL           +15%           +4%              +20%
CoH2                +19%           +5%              +20%
Bioshock            +9%            +4%              +15%
Battlefield 4       +10%           +6%              +17%
Crysis 3            +12%           +5%              +15%
TW: Rome 2          +16%           +7%              +20%
Thief               +12%           +6%              +16%

While the core overclock is the larger of the two to begin with, what we’re also seeing is that the performance gains relative to the size of the overclock consistently favor the core overclock over the memory overclock. With a handful of exceptions our 11% memory overclock is netting us less than a 6% increase in performance, while our 22% core overclock is netting us a 12% increase or more. This is despite the fact that when it comes to core overclocking the GTX 980 is TDP limited; in many of these games it could clock higher if the TDP budget were large enough to accommodate higher sustained clockspeeds.
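To put a finer point on it, we can divide each game's gain by the size of the overclock that produced it, where 1.0 would be perfect scaling. A minimal Python sketch using the percentages from the table above:

```python
# Scaling efficiency: performance gain per percent of overclock
# (1.0 = perfect scaling), from the performance table above.

CORE_OC, MEM_OC = 22, 11  # overclock sizes, percent

gains = {  # game: (core gain %, memory gain %)
    "Metro: LL":     (15, 4),
    "CoH2":          (19, 5),
    "Bioshock":      (9,  4),
    "Battlefield 4": (10, 6),
    "Crysis 3":      (12, 5),
    "TW: Rome 2":    (16, 7),
    "Thief":         (12, 6),
}

for game, (core_gain, mem_gain) in gains.items():
    print(f"{game:<14}  core {core_gain / CORE_OC:.2f}   "
          f"memory {mem_gain / MEM_OC:.2f}")
```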

Memory overclocking is still effective, and it’s clear that GTX 980 spends some of its time memory bandwidth bottlenecked (otherwise we wouldn’t be seeing even these performance gains), but it’s simply not as effective as core overclocking. And since we have more core headroom than memory headroom in the first place, it’s a double win for core overclocking.

To put it simply, the GTX 980 was already topping the charts. Now with overclocking it’s another 15-20% faster yet. With this overclock factored in the GTX 980 is routinely 2x faster than the GTX 680, if not slightly more.

[Chart: OC: Load Power Consumption - Crysis 3]

[Chart: OC: Load Power Consumption - FurMark]

But you do pay for the overclock when it comes to power consumption. NVIDIA allows you to increase the TDP by 25%, and to hit these performance numbers you are going to need every bit of that; 165W plus 25% works out to just over 206W, so what was once a 165W card is now, for all practical purposes, a 205W card.

[Chart: OC: Load GPU Temperature - Crysis 3]

[Chart: OC: Load GPU Temperature - FurMark]

Even though overclocking involves raising the temperature limit to 91C, NVIDIA's fan curve naturally tops out at 84C. So even in the case of overclocking the GTX 980 isn't going to reach temperatures higher than the mid-80s.

[Chart: OC: Load Noise Levels - Crysis 3]

[Chart: OC: Load Noise Levels - FurMark]

The noise penalty for overclocking is also pretty stiff. Since we're otherwise TDP limited, all of our workloads top out at 53.6dB, some 6.6dB higher than stock. In the big picture this means the overclocked GTX 980 is still in the middle of the pack, but it is noticeably louder than before and louder than a few of NVIDIA's other cards. Interestingly enough, however, it's no worse than the original stock GTX 680 under Crysis 3, and still better than said GTX 680 under FurMark. It's also still quieter than the stock Radeon R9 290X, not to mention that card's even louder uber mode.

Comments

  • Kutark - Sunday, September 21, 2014 - link

    I'd hold on to it. That's still a damn fine card. Honestly, you could probably find a used one on eBay for a decent price and SLI it up.

    IMO though I'd splurge for a 970 and call it a day. I've got dual 760s right now, the first time I've done SLI in probably 10 years, and honestly the headaches just aren't worth it. Yeah, most games work, but some games will have weird graphical issues (BF4 near release was a big one, and DOTA 2 doesn't seem to like it), others don't utilize it well, etc. I kind of wish I'd just stuck with the single 760. Either way, my 2p.
  • SkyBill40 - Wednesday, September 24, 2014 - link

    @ Kutark:

    Yeah, I tried to buy a nice card at that time despite wanting something higher than a 660Ti. But as my wallet was the one doing the dictating, it's what I ended up with and I've been very happy. My only concern with a used one is just that: it's USED. Electronics are one of those "no go" zones for me when it comes to buying second hand, since you have no idea about the circumstances surrounding the device. Seeing as it's a video card and not a Blu-ray player or something, I'd like to know how long it's run, if it's been OC'd, and the like. I'd be fine with buying another one new, but not for the prices I'm seeing that are right in line with a 970. That would be dumb.

    In the end, I'll probably wait it out a bit more and decide. I'm good for now and will probably buy a new 144Hz monitor instead.
  • Kutark - Sunday, September 21, 2014 - link

    Psshhhhh.... I still have my 3dfx Voodoo SLI card. Granted, it's just sitting on my desk, but still!!!

    In all seriousness though, my roommate, who is NOT a gamer, is still using an old 7800 GT card I had lying around, because the video card in his ancient computer decided to go out and he didn't feel like building a new one. Can't say I blame him; Core 2 Quads are juuust fine for browsing the web and such.
  • Kutark - Sunday, September 21, 2014 - link

    Voodoo 2, I meant; realized I didn't type the 2.
  • justniz - Tuesday, December 9, 2014 - link

    >> the power bills are so ridiculous for the 8800 GTX!

    Sorry but this is ridiculous. Do the math.

    Best info I can find is that your card consumes 230W.
    Assuming you're paying 15¢/kWh, even gaming for 12 hours a day every day for a whole month will cost you $12.59. Doing the same with a GTX 980 (165W) would cost you $9.03/month.

    So you'd be paying maybe $580 to save $3.56 a month.
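    (Worked through in a quick Python sketch below; the monthly figures assume a ~30.4-day average month and the cards drawing their full rated power the entire time.)

    ```python
    # Rough check of the electricity math above. Assumptions: $0.15/kWh,
    # 12 hours of gaming per day, ~30.44 days per month, and the cards
    # drawing their full rated power the whole time.

    RATE = 0.15                    # $/kWh
    HOURS_PER_DAY = 12
    DAYS_PER_MONTH = 365.25 / 12   # ~30.44

    def monthly_cost(watts: float) -> float:
        kwh = (watts / 1000.0) * HOURS_PER_DAY * DAYS_PER_MONTH
        return kwh * RATE

    old_card, gtx980 = monthly_cost(230), monthly_cost(165)
    print(f"230W card:      ${old_card:.2f}/month")        # ~$12.60
    print(f"GTX 980 (165W): ${gtx980:.2f}/month")          # ~$9.04
    print(f"Savings:        ${old_card - gtx980:.2f}")     # ~$3.56
    ```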
  • LaughingTarget - Friday, September 19, 2014 - link

    There is a major difference between market capitalization and available capital for investment. Market Cap is just a rote multiplication of the number of shares outstanding by the current share price. None of this is available for company use and is only an indirect measurement of how well a company is performing. Nvidia has $1.5 billion in cash and $2.5 billion in available treasury stock. Attempting to match Intel's process would put a significant dent into that with little indication it would justify the investment. Nvidia already took on a considerable chunk of debt going into this year as well, which would mean that future offerings would likely go for a higher cost of debt, making such an investment even harder to justify.

    While Nvidia is blowing out AMD 3:1 on R&D and capacity, Intel is blowing both of them away, combined, by a wide margin. Intel is dropping $10 billion a year on R&D, which is a full $3 billion beyond the entire asset base of Nvidia. It's just not possible to close the gap right now.
  • Silma - Saturday, September 20, 2014 - link

    I don't think you realize how many billions of dollars you need to spend to open a 14nm factory, not even counting R&D and yearly costs.
    It's humongous; there is a reason why there are so few foundries in the world.
  • sp33d3r - Saturday, September 20, 2014 - link

    Well, if the NVIDIA/AMD CEOs are blind enough not to see it coming, then Intel is going to manufacture their next integrated graphics on a 10 or 8nm chip, and though immature it will be tough competition for them in terms of power, efficiency, and even weight.

    Remember, PCs currently include integrated graphics as a must from Intel, and people add third-party graphics only because Intel's isn't good enough, literally adding the weight of two graphics solutions (Intel's and the third party's) to the product. It would be a world apart in convenience if integrated graphics could outperform or at least challenge third-party GPUs; we would just throw NVIDIA away, and guess what, they wouldn't remain a monopoly anymore but rather be completely wiped out.

    Besides, Intel's integrated graphics are getting more mature (and not just in die size) with every launch; just compare the 4000 series with the 5000 series. It won't be long before they catch up.
  • wiyosaya - Friday, September 26, 2014 - link

    I have to agree that it is partly not about the verification cost breaking the bank. What I think is the more likely reason is that since the current node works, they will try to wring every penny out of that node. Look at the prices for the Titan Z. If that is not an attempt to fleece the "gotta have it" buyer, I don't know what is.
  • Ushio01 - Thursday, September 18, 2014 - link

    Wouldn't paying to use the 22nm fabs be a better idea, as they're about to become underused and all the teething troubles have been fixed?
