Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

As the GM200 flagship card, GTX Titan X gets the pick of the litter as far as GM200 GPUs go. GTX Titan X needed fully-functional GM200 GPUs, and even then needed GPUs that were good enough to meet NVIDIA’s power requirements. GTX 980 Ti on the other hand, as a cut-down/salvage card, gets second pick. So we expect these chips to be just a bit worse: to either have functional units that came out of the fab damaged, or functional units that have been turned off for power reasons.

GeForce GTX Titan X/980 Ti/980 Voltages

| GTX Titan X Boost Voltage | GTX 980 Ti Boost Voltage | GTX 980 Boost Voltage |
|---------------------------|--------------------------|-----------------------|
| 1.162v                    | 1.187v                   | 1.225v                |

Looking at voltages, we can see just that in our samples. GTX 980 Ti has a slightly higher boost voltage – 1.187v – than our GTX Titan X. NVIDIA sometimes bins their second-tier cards for lower voltage, but this isn’t something we’re seeing here. Nor is there necessarily a need to bin in such a manner since the 250W TDP is unchanged from GTX Titan X.

GeForce GTX 980 Ti Average Clockspeeds

| Game               | GTX 980 Ti | GTX Titan X |
|--------------------|------------|-------------|
| Max Boost Clock    | 1202MHz    | 1215MHz     |
| Battlefield 4      | 1139MHz    | 1088MHz     |
| Crysis 3           | 1177MHz    | 1113MHz     |
| Mordor             | 1151MHz    | 1126MHz     |
| Civilization: BE   | 1101MHz    | 1088MHz     |
| Dragon Age         | 1189MHz    | 1189MHz     |
| Talos Principle    | 1177MHz    | 1126MHz     |
| Far Cry 4          | 1139MHz    | 1101MHz     |
| Total War: Attila  | 1139MHz    | 1088MHz     |
| GRID Autosport     | 1164MHz    | 1151MHz     |
| Grand Theft Auto V | 1189MHz    | 1189MHz     |

The far more interesting story here is GTX 980 Ti’s clockspeeds. As we have pointed out time and time again, GTX 980 Ti’s gaming performance trails GTX Titan X’s by just a few percent, despite the fact that GTX 980 Ti is down 2 SMMs and clocked identically. On paper there should be a roughly 9% performance difference that we’re simply not seeing in the real world. So what’s going on?

The answer is that what GTX 980 Ti lacks in SMMs it makes up for in clockspeeds. The card’s average clockspeeds are frequently two or more bins ahead of GTX Titan X’s, topping out at a 64MHz advantage under Crysis 3. All of this comes despite the fact that GTX 980 Ti has a lower maximum boost clock than GTX Titan X, topping out one bin lower at 1202MHz to GTX Titan X’s 1215MHz.

Ultimately the higher clockspeeds are a result of the increased power and thermal headroom GTX 980 Ti picks up from halving the number of VRAM chips along with disabling two SMMs. With those components no longer consuming power or generating heat, yet the TDP staying at 250W, GTX 980 Ti can spend its power savings to boost just a bit higher. This compresses the performance gap between the two cards, despite what the specs say. Coupled with the fact that performance doesn't scale linearly with SMM count or clockspeed (you rarely lose the full theoretical amount when shedding frequency or functional units), it leaves GTX 980 Ti trailing GTX Titan X by an average of just 3%.
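
To put rough numbers on this, here's a quick back-of-the-envelope sketch of our own (not NVIDIA's math) that multiplies each card's SMM count by its measured average clockspeed from the table above. The SMM counts (24 for the full GM200, 22 for GTX 980 Ti) come from the spec sheets; note that GPU Boost adjusts clocks in 13MHz steps, which is where the "bin" terminology comes from.

```python
# Back-of-the-envelope shader throughput: SMM count x average clockspeed.
# This ignores memory bandwidth, ROPs, and occupancy, so it's an upper
# bound on the gap rather than a performance prediction.

BOOST_BIN_MHZ = 13   # GPU Boost moves in 13MHz steps ("bins")
SMMS_TITAN_X = 24    # full GM200
SMMS_980TI = 22      # GM200 with 2 SMMs disabled

# (GTX 980 Ti, GTX Titan X) average clockspeeds in MHz, from the table above
avg_clocks = {
    "Battlefield 4":      (1139, 1088),
    "Crysis 3":           (1177, 1113),
    "Mordor":             (1151, 1126),
    "Civilization: BE":   (1101, 1088),
    "Dragon Age":         (1189, 1189),
    "Talos Principle":    (1177, 1126),
    "Far Cry 4":          (1139, 1101),
    "Total War: Attila":  (1139, 1088),
    "GRID Autosport":     (1164, 1151),
    "Grand Theft Auto V": (1189, 1189),
}

# At identical clocks the deficit would be a flat 2/24 ~= 8.3% -- the
# "9% on paper". The clockspeed advantage claws much of that back:
for game, (ti_mhz, titan_mhz) in avg_clocks.items():
    delta = ti_mhz - titan_mhz
    deficit = 1 - (SMMS_980TI * ti_mhz) / (SMMS_TITAN_X * titan_mhz)
    print(f"{game:18}  +{delta}MHz, theoretical deficit {deficit:5.1%}")
```

Running this, the per-game theoretical deficit works out to between roughly 3% (Crysis 3) and the full 8.3% (Dragon Age and Grand Theft Auto V, where both cards hold the same clocks); since real workloads don't scale linearly with shader throughput, the measured average lands at the 3% noted above.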

Idle Power Consumption

Starting off with idle power consumption, there's nothing new to report here. GTX 980 Ti performs just like the GTX Titan X, which at 74W is second only to the GTX 980 by a single watt.

Load Power Consumption - Crysis 3

Load Power Consumption - FurMark

Meanwhile load power consumption is also practically identical to the GTX Titan X’s. With the same GPU on the same board operating at the same TDP, GTX 980 Ti ends up right where we expect it, next to GTX Titan X. GTX Titan X did very well as far as energy efficiency is concerned – setting a new bar for 250W cards – and GTX 980 Ti in turn does just as well.

Idle GPU Temperature

Load GPU Temperature - Crysis 3

Load GPU Temperature - FurMark

As was the case with power consumption, video card temperatures are similarly unchanged. NVIDIA’s metal cooler does a great job here, keeping temperatures low at idle while NVIDIA’s GPU Boost mechanism keeps temperatures from exceeding 83C under full load.

Idle Noise Levels

Load Noise Levels - Crysis 3

Load Noise Levels - FurMark

Finally, for noise the situation is much the same. Unexpected, but not all that surprising, the GTX 980 Ti ends up doing a hair worse than the GTX Titan X here. NVIDIA has not changed the fan curves or TDP, so this ultimately comes down to manufacturing variability in NVIDIA’s metal cooler, with our GTX 980 Ti faring ever so slightly worse than the Titan. That said, it's still right at the sweet spot for noise versus power consumption, dissipating 250W at no more than 53dB, and once again proving the mettle of NVIDIA's metal cooler.

290 Comments

  • FlushedBubblyJock - Wednesday, June 10, 2015 - link

    I bought a bunch of G80, G92, G92b and G94 nvidia cards because you could purchase memory size, bandwidth, bit width, power connector config, essentially any speed at any price point for a gamer's rig, install the same driver, change the cards easily, upgrade for your customers without hassles...

    IT WAS A GOLD MINE OF FLEXIBILITY

    What happened was, the amd fanboys got very angry over the IMMENSE SUCCESS of the initial G80 and its reworked cores and totally fluid memory, card size, bit width, and pricing configurations... so they HAD TO TRY TO BRING IT DOWN...

    Thus AMD launched their PR war, and the clueless amd fan launched their endless lies.

    I'll tell you this much, no one would trade me a 9800GTX for a 9800GT

    I couldn't get the 92 bit width cards for the same price as the 128 bit

    DDR2 and DDR3 also differentiated the stack massively.

    What we had wasn't rebranding, but an amazingly flexible GPU core that stood roaring above at the top and could be CUT down to the middle and the low gaming end, and configured successfully with loads of different bit widths and memory configs....

    64 bit width, 92, 128, 256, 384, 192, ETC...

    That was and is an awesome core, period.
  • BillyONeal - Sunday, May 31, 2015 - link

    And people have been bent out of shape about it. For "YEARS" :)
  • dragonsqrrl - Sunday, May 31, 2015 - link

    Their highest-end rebadge, the 390X, will likely compete with the 980, not the 980 Ti. The 980 Ti will be closer to Fiji's performance profile.
  • austinsguitar - Sunday, May 31, 2015 - link

    I don't think you realize how much more efficient this card is even compared to past cards for its nm and performance. This is a feat. Just calm down and enjoy. I am very happy that the card's price is perfect. :) thanks nvidia
  • MapRef41N93W - Sunday, May 31, 2015 - link

    Maybe you aren't aware of how silicon works, but this is a 601mm^2 die which costs a boatload to produce, especially with the rising costs of crystalline silicon dies. Being on 28nm this long just means the yields are higher (which is why a 601mm^2 die is even possible).

    You aren't going to see a 14nm card that outperforms this by much till 2017 at the earliest, which, following recent NVIDIA trends, should see the Titan XYZ (whatever they want to call it), which should be a pretty huge jump at a pretty high price.
  • Thomas_K - Monday, June 1, 2015 - link

    Actually AMD is doing 14nm starting next year

    http://www.guru3d.com/news-story/it-is-official-am...
    "Although this was a rumor for a long time now we now know that AMD skips 20nm and jumps onto a 14nm fabrication node for their 2016 GPUs."
  • dragonsqrrl - Sunday, May 31, 2015 - link

    Not sure I understand your comment, 28nm is precisely why we're paying this much for this level of performance in 2015... But it's also pretty impressive for the same reason.
  • Azix - Sunday, May 31, 2015 - link

    14/16nm might cost more. 28nm should have better yields and lower cost. These chips do not cost much to make at all (retail price could be 2-3 times the chip cost)
  • dragonsqrrl - Sunday, May 31, 2015 - link

    I think you misinterpreted my comment. I was responding to someone who seemed shocked by the fact that price/performance ratios aren't improving dramatically despite the fact that we're on a very mature process. In response I said the fact that we're on the same process is precisely why we aren't seeing dramatic improvements in price/performance ratios.

    "28nm should have better yields and lower cost. These chips do not cost much to make at all (retail price could be 2-3 times the chip cost)"
    Yields are just one part of the equation. Die size also plays a significant role in manufacturing costs. The fact that you're trying to say with a straight face that GM200 does not cost much to make says more than your written comment itself.
  • zepi - Monday, June 1, 2015 - link

    Assuming perfect scaling, a 600mm2 28nm chip would shrink to 150mm2 at 14nm.

    GM107 is a 148mm2 chip, so basically this "monster" with just a die shrink would find a nice place for itself at the bottom end of Nvidia's lineup after the transition to 14nm.

    This does not take into account the fact that at 14nm and 150mm2 they couldn't give it enough memory bandwidth so easily, but it tells you something about how significant the reduction in size and manufacturing cost is after the initial ramp-up of yields.
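
As a quick sanity check on the ideal-scaling arithmetic in the comment above, here's a minimal sketch assuming perfect quadratic area scaling, which real process nodes don't actually deliver (node names are marketing labels rather than exact feature sizes):

```python
# Ideal die-shrink arithmetic: area scales with the square of the linear
# feature size. Real nodes shrink less than their names imply, so treat
# this as a best case.

def shrunk_area(area_mm2: float, old_nm: float, new_nm: float) -> float:
    """Ideal die area after a shrink from old_nm to new_nm."""
    return area_mm2 * (new_nm / old_nm) ** 2

print(shrunk_area(601, 28, 14))  # GM200's 601mm^2 -> ~150mm^2, GM107-sized
```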
