Gaming Performance, Power, Temperature, & Noise

So with the basics of the architecture and core configuration behind us, let’s dive into some numbers.

4K benchmark charts (3840x2160):

• Rise of the Tomb Raider - Very High (DX11)
• Dirt Rally - Ultra
• Ashes of the Singularity - Extreme
• Battlefield 4 - Ultra Quality (0x MSAA)
• Crysis 3 - Very High Quality + FXAA
• The Witcher 3 - Ultra Quality (No Hairworks)
• The Division - Ultra Quality
• Grand Theft Auto V - Very High Quality
• Hitman - Ultra Quality

As the first high-end card of this generation to launch, the GTX 1080 lets NVIDIA set the pace for the market. At the risk of being redundant, the GTX 1080 is now the fastest single-GPU card on the market, and even at 4K it wins every single gaming benchmark, typically by a good margin. In practice we’re looking at a 31% performance lead over the GTX 980 Ti – the card the GTX 1080 essentially replaces – and a similar 32% lead over AMD’s Radeon R9 Fury X. Meanwhile against the slightly older GTX 980, that gap grows to 70%.

On a generational basis this ends up being very close to the 74% jump in 4K performance going from the GTX 680 to the GTX 980. And although the pricing comparison is not especially flattering for NVIDIA here, it should be evident that NVIDIA isn’t just positioning the GTX 1080 as an upgrade for high-end Kepler cards, but as an upgrade for the GTX 980 as well, just 20 months after that card launched.

The Witcher 3 - 1920x1080 - Ultra Quality (No Hairworks)

I also wanted to quickly throw in a 1080p chart, both in the interest of comparing the GTX 1080 to the first-generation 28nm cards, and for gamers who are playing on high refresh rate 1080p monitors. Though this will of course vary from game to game, roughly speaking the GTX 1080 should be 3x faster than the GTX 680 or Radeon HD 7970. This is a good reminder of how architectural efficiency has played a greater role in recent generations, as this is a much larger gain than we saw jumping from 55nm to 40nm or from 40nm to 28nm, both of which were much closer to the historical norm of 2x.

Load Power Consumption - Crysis 3

Meanwhile when it comes to power, temperature, and noise, NVIDIA continues to execute very well. Power consumption under Crysis 3 is some 20W higher than the GTX 980 and 52W lower than the GTX 980 Ti, generally in line with NVIDIA’s own TDP ratings after accounting for the slightly higher CPU power consumption incurred by the card’s higher performance. The end result is that the GTX 1080 is a bit more power hungry than the GTX 980, but still in the sweet spot NVIDIA has carved out in the gaming market. Broadly speaking, this amounts to roughly a 54% increase in energy efficiency in the case of Crysis 3.
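For readers wondering how a figure like that is derived, energy efficiency here is essentially performance per watt, so the gain works out to the performance ratio divided by the power ratio. The sketch below walks through that arithmetic; the frame rates and wall-power numbers in it are hypothetical placeholders, not measurements from this preview.

```python
# Minimal sketch of a performance-per-watt comparison between two cards.
# The FPS and system power figures below are hypothetical placeholders,
# not measured values from this preview.

def efficiency_gain(fps_new, watts_new, fps_old, watts_old):
    """Relative improvement in frames rendered per watt of system power."""
    return (fps_new / watts_new) / (fps_old / watts_old) - 1.0

# Example: a card that is ~70% faster while drawing ~7% more power at the wall
gain = efficiency_gain(fps_new=59.5, watts_new=320.0,
                       fps_old=35.0, watts_old=300.0)
print(f"Energy efficiency improvement: {gain:.0%}")  # ~59% for these made-up numbers
```

Note that because these are system-level power measurements, the slightly higher CPU power consumption mentioned above lands in the denominator as well, which means a calculation like this slightly understates the GPU-only efficiency gain.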

Load GPU Temperature - Crysis 3

Load Noise Levels - Crysis 3

Otherwise from a design perspective the GTX 1080 Founders Edition carries on from NVIDIA’s high-end GTX 700/900 reference design, allowing NVIDIA to once again offer a superior blower-based solution. NVIDIA’s temperature management technology has not changed relative to Maxwell, so like their other cards, the GTX 1080 tops out in the low 80s Celsius under load. More significantly, at 47.5 dB(A) load noise, the card is on par with the GTX 780 and within half a dB of the GTX 980.

Ultimately NVIDIA has designed the GTX 1080 to be a drop-in replacement for the GTX 980, and this data confirms just that, indicating that GTX 1080’s much higher performance comes with only a slight increase in power consumption and no meaningful change in temperatures or acoustics.

Comments

  • Yojimbo - Thursday, May 19, 2016

    Why is everything 100% with you? Neither of us knows anything about this issue with 100% certainty. And the fact that half precision at double throughput is not possible on the GTX 1080 does not mean that it's not possible on the GP104.

    Further explanation of what you said "huh?" to: NVIDIA revealed the Drive PX 2 at both CES 2016 and GTC 2016. It has two Pascal-based Tegra chips and two larger Pascal GPUs. The main purpose of the Drive PX 2 will be to run inference algorithms for self driving cars. There are large portions of these algorithms which only require FP16 precision. NVIDIA would be leaving performance on the table if they didn't include the FP16 throughput enhancements in whatever chips they are using for the Drive PX 2. And those GPUs are definitely not GP100s. Unless they specially designed another GPU that is based on the GP100, but much smaller, they are probably using something along the lines of a GP106 or GP107 for that purpose.

    I'm guessing it's easier to design 6 GPUs and put FP16 enhancements in all of them than it is to design 8 GPUs and put FP16 enhancements in 4 of them. I don't think you have any reason to believe it's so difficult for them to put the FP16 enhancements into GP104. (They had already done so for the Maxwell-based Tegra X1, by the way.) You just seem to want to believe things which fit into your preferred narrative of "GTX 1080 is almost identical to Maxwell".
  • dragonsqrrl - Wednesday, May 18, 2016

    @vladx
    They're all based on the same underlying architecture (Pascal). I'm actually not sure why you think GP104 is closer to Maxwell architecturally than GP100. Are you referring to the SMM layout?
  • Ryan Smith - Wednesday, May 18, 2016

    "Does it support the special instructions that the Tesla P100 has for half precision float (FP16), which double throughput?"

    The answer is basically no. More info to come in the full review.
  • modeless - Thursday, May 19, 2016

    :( Thanks. Hope NVIDIA gets some competition in deep learning soon...
  • Yojimbo - Thursday, May 19, 2016

    They have competition already with Xeon Phi and CPUs. The trouble with AMD's GPUs for deep learning is that they don't have nearly the same level of library support as NVIDIA's GPUs do. Intel is also hoping to adapt FPGAs for deep learning purposes, I think, but I doubt that's going to help you out much.
  • damianrobertjones - Tuesday, May 17, 2016

    Each new gen sees around an extra 10-14 fps added to the top card over the previous gen. No. No thank you. These companies keep DRIP FEEDING us small advances and, obviously, this is business.

    Spend your cash, fine, but they're laughing at us each time. (I have an eBay 980)
  • FMinus - Tuesday, May 17, 2016

    Though the move was from Maxwell to Pascal, it looks more like Paxwell - Maxwell on steroids: 70% from clocks, 30% from compression, not much innovation. And that PCB is a disgrace, skimping on the sixth phase and using only one MOSFET per VRM phase. Weren't they talking about premium components to justify the added premium? It certainly doesn't look premium.
  • leoneo.x64 - Tuesday, May 17, 2016

    Ryan, please excuse me for asking, and I'm not being rude, but where is part 2 of the Galaxy S7 edge review?
  • Lolimaster - Tuesday, May 17, 2016

    Fail gen for NVIDIA.

    They need 1.7GHz to actually show improvement vs the 1-1.2GHz of the previous AMD/NVIDIA GPUs. Imagine the GP104 at 1.2GHz.

    Where's the efficiency?

    Polaris 10 is aiming at the same 1GHz sweet spot, improving the hell out of its GPU cores.
