Gaming Performance, Power, Temperature, & Noise

So with the basics of the architecture and core configuration behind us, let’s dive into some numbers.

[Benchmark charts, 3840x2160: Rise of the Tomb Raider (Very High, DX11); Dirt Rally (Ultra); Ashes of the Singularity (Extreme); Battlefield 4 (Ultra Quality, 0x MSAA); Crysis 3 (Very High Quality + FXAA); The Witcher 3 (Ultra Quality, No Hairworks); The Division (Ultra Quality); Grand Theft Auto V (Very High Quality); Hitman (Ultra Quality)]

As the first high-end card of this generation to launch, NVIDIA gets to set the pace for the market. At the risk of being redundant, the GTX 1080 is now the fastest single-GPU card on the market, and even at 4K it wins every single gaming benchmark, typically by a good margin. In practice we’re looking at a 31% performance lead over the GTX 980 Ti – the card the GTX 1080 essentially replaces – along with a similar 32% lead over AMD’s Radeon R9 Fury X. Meanwhile against the slightly older GTX 980, that gap grows to 70%.
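
For readers curious how a single "percent lead" figure like those above is distilled from a stack of per-game charts, here is a minimal sketch of one common approach: compute the per-game frame rate ratios and average them, in this case with a geometric mean. The frame rates in it are hypothetical placeholders rather than this review's data, and the averaging method is an illustrative assumption, not necessarily the one behind the figures quoted here.

    # Minimal sketch: turning per-game frame rates into an average performance lead.
    # The numbers below are hypothetical placeholders, NOT measurements from this review,
    # and the geometric mean of per-game ratios is just one common choice of average.
    from math import prod

    fps_gtx1080  = {"Crysis 3": 50.0, "The Witcher 3": 45.0, "Hitman": 55.0}
    fps_gtx980ti = {"Crysis 3": 38.0, "The Witcher 3": 35.0, "Hitman": 42.0}

    ratios = [fps_gtx1080[game] / fps_gtx980ti[game] for game in fps_gtx1080]
    geo_mean = prod(ratios) ** (1.0 / len(ratios))

    print(f"Average lead: {(geo_mean - 1) * 100:.0f}%")  # ~30% with these placeholder values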

On a generational basis, this ends up being very close to the 74% jump in 4K performance going from the GTX 680 to the GTX 980. And although the pricing comparison is not especially flattering for NVIDIA here, it should be evident that NVIDIA isn’t just looking to sell the GTX 1080 as an upgrade for high-end Kepler cards, but as an upgrade for the GTX 980 as well, just 20 months after that card launched.

[Chart: The Witcher 3 - 1920x1080 - Ultra Quality (No Hairworks)]

I also wanted to quickly throw in a 1080p chart, both in the interest of comparing the GTX 1080 to the first-generation 28nm cards, and for gamers who are playing on high refresh rate 1080p monitors. Though this will of course vary from game to game, roughly speaking the GTX 1080 should be 3x faster than the GTX 680 or Radeon HD 7970. This is a good reminder of how architectural efficiency has played a greater role in recent years, as this is a much larger gain than we saw jumping from 55nm to 40nm or from 40nm to 28nm, both of which were much closer to the historical norm of 2x.

[Chart: Load Power Consumption - Crysis 3]

Meanwhile when it comes to power, temperature, and noise, NVIDIA continues to execute very well. Power consumption under Crysis 3 is some 20W higher than the GTX 980, or 52W lower than the GTX 980 Ti, which is generally in line with NVIDIA’s own TDP ratings once we account for the slightly higher CPU power consumption incurred by the card’s higher performance. The end result is that the GTX 1080 is a bit more power hungry than the GTX 980, but still in the sweet spot NVIDIA has carved out in the gaming market. Broadly speaking, this amounts to a 54% increase in energy efficiency in the case of Crysis 3.
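
As a quick illustration of how an efficiency figure like that 54% is computed, the sketch below compares frames-per-watt between two cards. The frame rate and wall power numbers in it are hypothetical placeholders, not measurements from this review; as noted above, measuring at the wall folds the CPU's power draw into the result.

    # Minimal sketch of the performance-per-watt comparison behind an "X% more efficient" claim.
    # All values here are hypothetical placeholders, not measurements from this review.
    def efficiency_gain(fps_new: float, watts_new: float,
                        fps_old: float, watts_old: float) -> float:
        """Relative improvement in frames delivered per watt (i.e. per joule of energy)."""
        return (fps_new / watts_new) / (fps_old / watts_old) - 1.0

    # Hypothetical Crysis 3 numbers, for illustration only.
    gain = efficiency_gain(fps_new=60.0, watts_new=320.0,
                           fps_old=38.0, watts_old=300.0)
    print(f"Energy efficiency improvement: {gain * 100:.0f}%")  # ~48% with these placeholders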

[Charts: Load GPU Temperature - Crysis 3; Load Noise Levels - Crysis 3]

Otherwise, from a design perspective the GTX 1080 Founders Edition carries on from NVIDIA’s high-end GTX 700/900 reference design, allowing NVIDIA to once again offer a superior blower-based solution. NVIDIA’s temperature management technology has not changed relative to Maxwell, so like their other cards, the GTX 1080 tops out in the low 80s Celsius for load temperature. More significantly, at 47.5 dB(A) load noise, the card is on par with the GTX 780 and half a dB off of the GTX 980.

Ultimately NVIDIA has designed the GTX 1080 to be a drop-in replacement for the GTX 980, and this data confirms just that: the GTX 1080’s much higher performance comes with only a slight increase in power consumption and no meaningful change in temperatures or acoustics.

Comments

  • lashek37 - Wednesday, May 18, 2016 - link

    I have an EVGA 980 Ti from Amazon.
  • wumpus - Tuesday, May 17, 2016 - link

    Looks like I get to eat my words about posting "doom and gloom" about a Friday 6pm press event. They didn't have any real "bad news" (although the reason for refusing to demonstrate 'ray traced sound' was clearly a lie; you can simply play the sounds of being in various places to an audience as easily in a movie as in VR). I wouldn't call it terribly great news either, just the slow and steady progression of a company without competition.

    Looks like it competes well enough against the existing base of NVIDIA cards. It also appears that they don't feel a need to bother worrying about "competition" from AMD :( (Note that Intel appears to spend at least as many mm² and/or transistors on GPU space as this beast. What they don't spend is power (watts) and bandwidth. The difference is obvious, and I can't see them trying to increase either on their CPUs.)

    One thing that keeps popping up in these reviews is the 250W power limit. This just screams for someone to take a (non-Founders Edition) reference card and slap a closed-loop watercooling system on it. The results might not be as extreme as the 390, but it should be up there. I suspect the same is true (and possibly more so, unless deliberately crippled) on the 1070.
  • rhysiam - Tuesday, May 17, 2016 - link

    "Note that Intel appears to spend at least as many mm and/or transistors on GPU space as this beast"

    I don't think that's accurate at all. To my knowledge Intel haven't released specific die size or Transistor counts since Haswell. But the entire CPU package of a 4770K is ~1.4B transistors (~one fifth of a GP204 GPU). Anandtech estimated ~33% of the die area (roughly 500M transistors) was dedicated to the 20EU GT2 GPU. Obviously the GT2 is hardly Intel's biggest graphics package, but even a larger one like the 48EU GT3e package from the Broadwell i7-5775C must surely still have significantly fewer transistors than a GP204.
  • rhysiam - Tuesday, May 17, 2016 - link

    I mean GP104 of course.
  • bill44 - Tuesday, May 17, 2016 - link

    When you do a full review, could you spare a thought for some of us who are not into gaming?
    I would like to know about the audio side (sample rates supported, etc.) as an example, and a proper full test of using it with madVR (yes, we know it supports the usual frame rates etc.).
    Some insights into 10/12bit support on Windows 10 (not just for games & madVR DX11 FSE) inc. generic programs like Photoshop/Premiere etc.

    On a side note: if you're not into gaming, but prefer a 4K@60p dual-screen setup with 10-bit colour, which GPU is best?
  • bill44 - Tuesday, May 17, 2016 - link

    Forgot to add:
    Tom's Hardware does not mention any of this.
    http://www.tomshardware.co.uk/nvidia-geforce-gtx-1...
  • vladx - Tuesday, May 17, 2016 - link

    Why would you want a beast like the GTX 1080 for work in Photoshop and the rest of Adobe's suite? It'd just be a big waste of money.
  • bill44 - Tuesday, May 17, 2016 - link

    Architectural changes.
    By the end of the year, there will be some 4K HDR monitors. Maybe even 120p. If I want to edit in Premiere with dual 4K HDR 120p screens, or I prefer a 5K screen over a single-cable connection, what are my GPU choices? DP 1.3?

    I also mentioned 10bit support (not Quadro) and madVR. It's not this card specifically that I'm interested in, but the architecture. There will be cheaper cards in the future for sure; however, they will use the same tech as this one. Hence my curiosity.
  • dragonsqrrl - Tuesday, May 17, 2016 - link

    The performance can be very useful in Premiere and After Effects for both viewport rendering and export.
  • Ryan Smith - Wednesday, May 18, 2016 - link

    "Some insights into 10/12bit support on Windows 10 (not just for games & madVR DX11 FSE) inc. generic programs like Photoshop/Premiere etc."

    You're still going to want a Quadro for pro work. NVIDIA is going to allow 10bpc support in full screen OpenGL applications, but not windowed applications.
