Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

GeForce Video Card Voltages
GTX 1080 Ti Boost: 1.062v
GTX 1080 Boost: 1.062v
GTX 980 Ti Boost: 1.187v
GTX 1080 Ti Idle: 0.65v

Starting with voltages, there aren’t any big surprises with the GTX 1080 Ti. The underlying GP102 GPU has the same load voltages as the GP104 GPU in the GTX 1080, resulting in a load voltage of 1.062v.

Moving on, let’s take a look at average clockspeeds. The GTX 1080 Ti’s official base and boost clockspeeds are lower than the GTX 1080’s, but as we’ve seen before with other NVIDIA video cards, actual clockspeeds are a bit more variable and almost always higher than NVIDIA’s official values. Consequently, the GTX 1080 Ti’s average clockspeeds may trail the GTX 1080’s by less than the specifications suggest.

GeForce Video Card Average Clockspeeds
Game                   GTX 1080 Ti   GTX 1080
Max Boost Clock        1898MHz       1898MHz
Tomb Raider            1620MHz       1721MHz
DiRT Rally             1721MHz       1771MHz
Ashes                  1680MHz       1759MHz
Battlefield 4          1657MHz       1771MHz
Crysis 3               1632MHz       1759MHz
The Witcher 3          1645MHz       1759MHz
The Division           1645MHz       1721MHz
Grand Theft Auto V     1746MHz       1797MHz
Hitman                 1657MHz       1771MHz

On the whole, the GTX 1080 Ti does average lower clockspeeds than the GTX 1080. Whereas the latter would frequently average clockspeeds in the 1700MHz range, the GTX 1080 Ti averages clockspeeds in the 1600MHz range. This, in part, is why NVIDIA is promoting the GTX 1080 Ti as being 35% faster than the GTX 1080, despite the card having a 40% advantage in total hardware units.
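
As a back-of-the-envelope check on that math, we can combine the 40% unit advantage with the clockspeed deficit from the table above. The following Python sketch uses the per-game averages from our table and naively assumes performance scales with units × clocks, which is an illustration rather than a real performance model:

```python
# In-game average clockspeeds from the table above (MHz)
gtx_1080_ti_clocks = [1620, 1721, 1680, 1657, 1632, 1645, 1645, 1746, 1657]
gtx_1080_clocks    = [1721, 1771, 1759, 1771, 1759, 1759, 1721, 1797, 1771]

avg_ti   = sum(gtx_1080_ti_clocks) / len(gtx_1080_ti_clocks)  # ~1667MHz
avg_1080 = sum(gtx_1080_clocks) / len(gtx_1080_clocks)        # ~1759MHz

unit_advantage = 1.40               # ~40% more hardware units (GP102 vs. GP104)
clock_ratio    = avg_ti / avg_1080  # ~0.95

# Naive throughput estimate: performance ~ units x clocks
print(f"Estimated uplift: {(unit_advantage * clock_ratio - 1):.1%}")
# -> roughly 33%, in the neighborhood of NVIDIA's 35% figure
```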

It is interesting to note though that our GTX 1080 Ti sample has the same maximum boost clock as the GTX 1080: 1898MHz. If the GTX 1080 Ti didn’t hit its thermal limit as often, it likely would come even closer to the GTX 1080 in average clockspeeds.
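
For readers who want to reproduce this kind of average clockspeed measurement on their own card, clocks and temperatures can be polled in the background during a benchmark run with nvidia-smi. Below is a minimal sketch; the query field names come from nvidia-smi’s --help-query-gpu listing, and the sampling interval and duration are arbitrary choices:

```python
import subprocess
import time

def sample_gpu(interval_s=1.0, duration_s=60.0):
    """Poll GPU core clock (MHz) and temperature (C) via nvidia-smi."""
    clocks, temps = [], []
    end = time.time() + duration_s
    while time.time() < end:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=clocks.gr,temperature.gpu",
             "--format=csv,noheader,nounits"],
            text=True)
        clk, temp = (int(v) for v in out.strip().split(", "))
        clocks.append(clk)
        temps.append(temp)
        time.sleep(interval_s)
    return sum(clocks) / len(clocks), max(temps)

avg_clock, peak_temp = sample_gpu()
print(f"Average clock: {avg_clock:.0f}MHz, peak temperature: {peak_temp}C")
```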

Idle Power Consumption

Moving on to power consumption, we’ll start as always with idle power. All told, there are no surprises here. GTX 1080 Ti’s idle power consumption is right next to GTX 980 Ti, which is where we’d expect it given the 250W design.

Load Power Consumption - Crysis 3

System power consumption under Crysis 3 is also right where we’d expect it to be for the GTX 1080 Ti. In absolute terms it’s second only to the R9 Fury X – which is to say that it’s high – but as Crysis 3 is a real-world test, power numbers here are influenced by the rest of the system. The faster the card, the more work required of the CPU, RAM, etc, and that’s exactly what’s happening here.

Load Power Consumption - FurMark

Switching over to FurMark, which is a much more GPU-focused test, we find that our GTX 1080 Ti-equipped testbed draws 394W at the wall. This is a smidge higher than the GTX 980 Ti, but not meaningfully so. All three 250W NVIDIA cards are closely clustered together, showing that NVIDIA’s power throttling is working as expected, and at the levels expected. The GTX 1080 Ti is rated 70W higher than the GTX 1080, and our results back this rating up. With NVIDIA’s well-established power/performance tiers, the GTX 1080 Ti makes the expected leap in power consumption in order to reach its loftier performance target.
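
Note that our wall measurements capture the whole system; the figure NVIDIA’s throttling actually regulates is board power, which NVML exposes directly. Here is a minimal sketch using the pynvml bindings (the one-minute sampling loop is an arbitrary choice, and NVML reports board power, not wall power):

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# NVML reports board power in milliwatts; sample once a second under load
samples = []
for _ in range(60):
    samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
    time.sleep(1.0)

limit_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000.0
print(f"Average board power: {sum(samples) / len(samples):.1f}W "
      f"(enforced limit: {limit_w:.0f}W)")

pynvml.nvmlShutdown()
```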

Idle GPU Temperature

Moving on to temperatures, at idle the GTX 1080 Ti settles at 30C, the same as its siblings.

Load GPU Temperature - Crysis 3

As for Crysis 3, the GTX 1080 Ti ends up being the hottest card here despite the cooling improvements, though it should be noted that this is intentional. While NVIDIA doesn’t publish this information directly, the GTX 1080 Ti’s preset thermal throttle point is 84C, which is a degree or two higher than on NVIDIA’s previous cards. As a result, the card reaches equilibrium at a slightly higher temperature than NVIDIA’s other cards.

It’s interesting to note that the throttle point has been slowly creeping up over the years; going back to the original Titan, it was only 80C. As far as reference specification designs go, the higher temperatures improve the efficiency of the cooler. The downside to higher temperatures is that power leakage increases with the temperature. So there’s a careful balancing act here in getting better cooling performance without drowning it out in more leakage-induced heat. In the case of the GTX 1080 Ti, I suspect NVIDIA paid their dues here with the additional MOSFETs, giving them a bit more headroom for leakage.

Load GPU Temperature - FurMark

The story is much the same under FurMark. The GTX 1080 Ti settles at 84C here as well – though it did peak at 86C before reaching equilibrium – showcasing that regardless of the workload, the card always levels out at its thermal throttling point.

Idle Noise Levels

Finally we have our look at noise, starting with idle noise. Relative to the GTX 1080, NVIDIA has tweaked the fan curve a bit here, but at idle the GTX 1080 Ti is already below our noise floor.

Load Noise Levels - Crysis 3

Moving over to Crysis 3, we find that the GTX 1080 Ti holds a small edge over our other 250W Ti-series cards. NVIDIA’s latest card hits 50.3dB, about 1.5dB below the GTX 980 Ti and GTX 780 Ti. This won’t make much of a difference, but it does close the gap between the 250W cards and the GTX 1080 by a bit.

Load Noise Levels - FurMark

Finally, the situation with FurMark is much the same. The GTX 1080 Ti is still ever so slightly quieter than the other 250W NVIDIA cards, but a few dB louder than the GTX 1080.

Overall, judging from the power and noise characteristics of the GTX 1080 Ti, along with its throttling practices, it looks like NVIDIA invested most of their gains with the improved cooling system in removing more heat from the card itself. With few exceptions, the GTX 1080 Ti thermal throttles before it TDP throttles, and anecdotally, it tends to be closer to its TDP limit than most of the other high-end NVIDIA cards we’ve seen in the past couple of years.
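
For anyone who wants to verify which limit their own card is hitting, NVML exposes the active throttle reasons as a bitmask. A minimal sketch with pynvml follows; note that these constant names can vary slightly between pynvml/driver versions:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Bitmask of the throttle reasons currently in effect
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

if reasons & pynvml.nvmlClocksThrottleReasonSwThermalSlowdown:
    print("Thermal throttling: temperature target reached")
if reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap:
    print("Power throttling: TDP limit reached")
if reasons == pynvml.nvmlClocksThrottleReasonNone:
    print("No throttling active")

pynvml.nvmlShutdown()
```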

Going back to what NVIDIA said earlier, they are claiming that the GTX 1080 Ti offers the largest performance uplift of any Ti card over its non-Ti counterpart. Given that the TDP gap between the GTX 1080 Ti and GTX 1080 is actually smaller than the gap between the GTX 980 Ti and GTX 980 – 70W versus 85W – if anything we’d expect the uplift to be smaller. But by investing their gains from the improved cooler in better heat removal, NVIDIA is actually getting the GTX 1080 Ti closer to its performance/TDP limit than the previous generations of Ti cards. The only downside here is that you can only remove the DVI port once, so this isn’t an act that NVIDIA will be able to repeat in the next generation.

Comments

  • close - Monday, March 13, 2017 - link

    I was talking about optimizing Nvidia's libraries. When you're using an SDK to develop a game you're relying a lot on that SDK. And if that SDK is exclusively optimized for one GPU/driver combination, you're not going to develop an alternate engine that's also optimized for a completely different GPU/driver. And there's a limit to how much you can optimize for AMD when you're building a game using Nvidia's SDK.

    Yes, the developer could go ahead and ignore any SDK out there (AMD or Nvidia) just so they're not lazy but that would only bring worse results equally spread across all types of GPUs, and longer development times (with the associated higher costs).

    You have the documentation here:
    https://docs.nvidia.com/gameworks/content/gamework...

    AMD technically offers the same services, but why would developers go for it? They'd be optimizing their game for just 25% of the market. Only now is AMD starting to push back with the Bethesda partnership.

    So to summarize:
    -You cannot touch Nvidia's *libraries and code* to optimize them for AMD
    -You are allowed to optimize your game for AMD without losing any kind of support from Nvidia but when you're basing it on Nvidia's SDK there's only so much you can do
    -AMD doesn't really support developers much with this since optimizing a game based on Nvidia's SDK seems to be too much effort even for them, and AMD would rather have developers using the AMD libraries but...
    -Developers don't really want to put in triple the effort to optimize for AMD also when they have only 20% market share compared to Nvidia's 80% (discrete GPUs)
    -None of this is illegal, it's "just business" and the incentive for developers is already there: Nvidia has the better cards so people go for them, it's logical that developers will follow
  • eddman - Monday, March 13, 2017 - link

    Again, most of those gameworks effects are CPU only. It does NOT matter at all what GPU you have.

    As for GPU-bound gameworks, they are limited to just a few in-game effects that can be DISABLED in the options menu.

    The main code of the game is not gameworks related and the developer can optimize it for AMD. Is it clear now?

    Sure, it sucks that GPU-bound gameworks effects cannot be optimized for AMD and I don't like it either, but they are limited to only a few cosmetic effects that do not have any effect on the main game.
  • eddman - Monday, March 13, 2017 - link

    Not to mention that a lot of gameworks games don't use any GPU-bound effects at all. Only CPU.
  • eddman - Monday, March 13, 2017 - link

    Just one example: http://www.geforce.com/whats-new/articles/war-thun...

    Look for the word "CPU" in the article.
  • Meteor2 - Tuesday, March 14, 2017 - link

    Get a room you two!
  • MrSpadge - Thursday, March 9, 2017 - link

    AMD demonstrated their "cache thing" (which seems to be tile-based rendering, as in Maxwell and Pascal) to result in a 50% performance increase. So 20% IPC might be far too conservative. I wouldn't bet on a 50% clock speed increase, though. Nvidia designed Pascal for high clocks; it's not just the process. AMD seems to intend the same, but can they pull it off similarly well? If so, I'm inclined to ask "why did it take you so long?"
  • FalcomPSX - Thursday, March 9, 2017 - link

    I look forward to Vega and seeing how much performance it brings, and I really hope it does end up giving performance around the 1080 level for typically lower, more reasonable AMD pricing. Honestly though, I expect it to come close to but not quite match a 1070 in DX11, surpass it in DX12, and at a much lower price.
  • Midwayman - Thursday, March 9, 2017 - link

    Even if it's just two Polaris chips' worth of performance, you're past the 1070 level. I think conservative is 1080 @ $400-450. Not that there won't be a cut-down part at the 1070 level, but I'd be really surprised if that were the full-die version.
  • Meteor2 - Tuesday, March 14, 2017 - link

    I think that Volta is sometimes overlooked. Whatever Vega brings, I feel Volta is going to top it.

    AMD is catching up with Intel and Nvidia, but outside of mainstream GPUs and HEDT CPUs, they've not done it yet.
  • Meteor2 - Tuesday, March 14, 2017 - link

    Mind you, Volta is only coming to Tesla this year, and not to consumers until next year. So AMD should have a competitive full stack for a year. Good times!
