Earlier this month NVIDIA announced their latest-generation flagship GeForce card, the GeForce GTX 1080. Based on their new Pascal architecture and built on TSMC's 16nm FinFET process, the GTX 1080 is being launched as the first 16nm/14nm-based video card, and in time-honored fashion NVIDIA is starting at the high end. The end result is that the GTX 1080 will be setting the new high-water mark for single-GPU performance.

Unlike past launches, NVIDIA is stretching out the launch of the GTX 1080 a bit more. After announcing the card back on May 6th, the company is lifting their performance and architecture embargo today. Gamers, however, won't be able to get their hands on the card until the 27th (next Friday), with pre-order sales starting this Friday. It is virtually guaranteed that the first batch of cards will sell out, but potential buyers will have a few days to mull over the data and decide if they want to throw down $699 for one of the first Founders Edition cards.

As for the AnandTech review, as I've only had a few days to work on the article, I'm going to hold it back rather than rush out something less thorough. In the meantime, however, as I know everyone is eager to see our take on performance, I wanted to take a quick look at the card and the numbers as a preview of what's to come. Furthermore, the entire performance dataset has been made available in the new GPU 2016 section of AnandTech Bench, for anyone who wants to see results at additional resolutions and settings.

Architecture
 

NVIDIA GPU Specification Comparison

                          GTX 1080         GTX 980 Ti       GTX 980          GTX 780
CUDA Cores                2560             2816             2048             2304
Texture Units             160              176              128              192
ROPs                      64               96               64               48
Core Clock                1607MHz          1000MHz          1126MHz          863MHz
Boost Clock               1733MHz          1075MHz          1216MHz          900MHz
TFLOPs (FMA)              9 TFLOPs         6 TFLOPs         5 TFLOPs         4.1 TFLOPs
Memory Clock              10Gbps GDDR5X    7Gbps GDDR5      7Gbps GDDR5      6Gbps GDDR5
Memory Bus Width          256-bit          384-bit          256-bit          384-bit
VRAM                      8GB              6GB              4GB              3GB
FP64                      1/32 FP32        1/32 FP32        1/32 FP32        1/24 FP32
TDP                       180W             250W             165W             250W
GPU                       GP104            GM200            GM204            GK110
Transistor Count          7.2B             8B               5.2B             7.1B
Manufacturing Process     TSMC 16nm        TSMC 28nm        TSMC 28nm        TSMC 28nm
Launch Date               05/27/2016       06/01/2015       09/18/2014       05/23/2013
Launch Price              $599 ($699 FE)   $649             $549             $649

(FE = Founders Edition)

While I'll get into the architecture in much greater detail in the full article, at a high level the Pascal architecture (as implemented in GP104) is a mix of old and new; it's not a revolution, but it's an important refinement. Maxwell as an architecture was very successful for NVIDIA at both the consumer and professional levels, and for the consumer iterations of Pascal, NVIDIA has not made any radical changes. The basic throughput of the architecture has not changed – the ALUs, texture units, ROPs, and caches all perform similarly to how they did in GM2xx.

Consequently, the performance aspects of consumer Pascal – we'll ignore GP100 for the moment – are pretty easy to understand. NVIDIA's focus this generation has been on pouring on the clockspeed to push total compute throughput to 9 TFLOPs, and on updating their memory subsystem to feed the beast that is GP104.

On the clockspeed front, a great deal of the gains come from the move to 16nm FinFET. The smaller process allows NVIDIA to build a 7.2B transistor chip in just 314mm², while the use of FinFET transistors (ultimately a necessity at a process this small in order to avoid debilitating leakage) significantly benefits both power consumption and the clockspeeds NVIDIA can attain within a practical power envelope. To that end NVIDIA has run with the idea of boosting clockspeeds, and relative to Maxwell they have done additional work at the chip design level to allow for higher clockspeeds along the necessary critical paths. All of this is coupled with energy efficiency optimizations at both the process and architectural level, in order to allow NVIDIA to hit these clockspeeds without blowing GTX 1080's power budget.
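
To put the FinFET advantage in rough terms (a textbook approximation on our part, not an NVIDIA figure), dynamic switching power scales roughly as

    P_dynamic ≈ α · C · V² · f

where α is switching activity, C is capacitance, V is operating voltage, and f is frequency. A process that lowers parasitic capacitance and operating voltage thus leaves that much more room within a fixed 180W TDP to push up the clockspeed.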

Meanwhile to feed GTX 1080, NVIDIA has made a pair of important changes to improve their effective memory bandwidth. The first of these is the inclusion of faster GDDR5X memory, which as implemented on GTX 1080 is capable of reaching 10Gb/sec/pin, a significant 43% jump in theoretical bandwidth over the 7Gb/sec/pin speeds offered by traditional GDDR5 on last-generation Maxwell products. Coupled with this is the latest iteration of NVIDIA’s delta color compression technology – now on its fourth generation – which sees NVIDIA once again expanding their pattern library to better compress frame buffers and render targets. NVIDIA’s figures put the effective memory bandwidth gain at 20%, or a roughly 17% reduction in memory bandwidth used thanks to the newer compression methods.
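
For those who want to sanity check those figures, here's a quick back-of-the-envelope sketch (our own illustration using the numbers from the spec table; the 1.2x effective multiplier is NVIDIA's claimed compression gain rather than anything we've measured):

    # Peak theoretical bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
    def bandwidth_gb_per_sec(data_rate_gbps, bus_width_bits):
        return data_rate_gbps * bus_width_bits / 8

    gtx_1080  = bandwidth_gb_per_sec(10, 256)  # 320 GB/sec
    gtx_980ti = bandwidth_gb_per_sec(7, 384)   # 336 GB/sec
    gtx_980   = bandwidth_gb_per_sec(7, 256)   # 224 GB/sec

    # Apply NVIDIA's claimed ~20% effective gain from 4th generation delta color compression
    print(gtx_1080 * 1.2 / gtx_980)  # ~1.71x GTX 980's effective bandwidth

Notably, GTX 1080's raw 320GB/sec is actually a hair below GTX 980 Ti's 336GB/sec; it's the combination of GDDR5X and the improved compression that keeps the narrower 256-bit bus competitive.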

As for the features included, we'll touch upon those in a lot more detail in the full review. But while Pascal is not a massive overhaul of NVIDIA's architecture, it's not without its own feature additions. Pascal gains the ability to pre-empt graphics operations at the pixel (thread) level and compute operations at the instruction level, allowing for much faster context switching. And on the graphics side of matters, the architecture introduces a new geometry projection capability – Simultaneous Multi-Projection – and, as a more minor update, gets bumped up to Conservative Rasterization Tier 2.

Looking at the raw specifications then, GTX 1080 does not disappoint. Though we're looking at fewer CUDA cores than the GM200-based GTX 980 Ti or GTX Titan X, NVIDIA's significant focus on clockspeed means that GP104's 2560 CUDA cores are far more performant than a simple core count would suggest. The base clockspeed of 1607MHz is some 42% higher than GTX 980's (and 60% higher than GTX 980 Ti's), and the 1733MHz boost clockspeed is a similar gain. On paper, GTX 1080 is set to offer 78% better performance than GTX 980, and 47% better performance than GTX 980 Ti. The real-world gains are, of course, not quite this great, but at times they're relatively close to these numbers.
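
As a quick illustration of where those on-paper numbers come from (a minimal sketch derived from the spec table; real game performance depends on far more than peak FLOPs):

    # Peak FMA throughput: each CUDA core executes one fused multiply-add (2 FLOPs) per clock
    def fma_tflops(cuda_cores, boost_clock_mhz):
        return cuda_cores * 2 * boost_clock_mhz / 1e6

    gtx_1080  = fma_tflops(2560, 1733)  # ~8.9 TFLOPs
    gtx_980   = fma_tflops(2048, 1216)  # ~5.0 TFLOPs
    gtx_980ti = fma_tflops(2816, 1075)  # ~6.1 TFLOPs

    print(f"vs GTX 980:    {gtx_1080 / gtx_980 - 1:+.0%}")    # ~ +78%
    print(f"vs GTX 980 Ti: {gtx_1080 / gtx_980ti - 1:+.0%}")  # ~ +47%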

Comments (262)

  • lashek37 - Wednesday, May 18, 2016 - link

    I have an EVGA 980 Ti from Amazon
  • wumpus - Tuesday, May 17, 2016 - link

    Looks like I get to eat my words about posting "doom and gloom" about a Friday 6pm press event. They didn't have any real "bad news" (although the reason for the refusal to demonstrate 'ray traced sound' was clearly a lie. You can simply play the sounds of being in various places to an audience as easily in a movie as in VR). I wouldn't call it terribly great news either, just the slow and steady progression of a company without competition.

    Looks like it competes well enough against the existing base of nvidia cards. It also appears that they don't feel a need to bother worrying about "competition" from AMD :( (Note that Intel appears to spend at least as many mm and/or transistors on GPU space as this beast. What they don't spend is power (Watts) and bandwidth. The difference is obvious, and I can't see them trying to increase either on their CPUs.)

    One thing that keeps popping up in these reviews is the 250W power limit. This just screams for someone to take a (non-founders' edition) reference card and slap a closed watercooling system on it. The results might not be as extreme as the 390, but it should be up there. I suspect the same is true (and possibly moreso unless deliberately crippled) on the 1070.
  • rhysiam - Tuesday, May 17, 2016 - link

    "Note that Intel appears to spend at least as many mm and/or transistors on GPU space as this beast"

    I don't think that's accurate at all. To my knowledge Intel haven't released specific die sizes or transistor counts since Haswell. But the entire CPU package of a 4770K is ~1.4B transistors (~one fifth of a GP204 GPU). AnandTech estimated ~33% of the die area (roughly 500M transistors) was dedicated to the 20EU GT2 GPU. Obviously the GT2 is hardly Intel's biggest graphics package, but even a larger one like the 48EU GT3e from the Broadwell i7-5775C must surely still have significantly fewer transistors than a GP204.
  • rhysiam - Tuesday, May 17, 2016 - link

    I mean GP104 of course.
  • bill44 - Tuesday, May 17, 2016 - link

    When you do a full review, could you spare a thought for some of us who are not into gaming?
    I would like to know about the audio side (sample rates supported, etc.) as an example, and a proper full test for using it with madVR (yes, we know it supports the usual frame rates etc.).
    Some insight into 10/12bit support on Windows 10 (not just for games & madVR DX11 FSE), including generic programs like Photoshop/Premiere etc., would also be welcome.

    On a side note: if you're not into gaming, but prefer a 4K@60p dual-screen setup with 10bit colour, which GPU is best?
  • bill44 - Tuesday, May 17, 2016 - link

    Forgot to add: Tom's Hardware does not mention any of this.
    http://www.tomshardware.co.uk/nvidia-geforce-gtx-1...
  • vladx - Tuesday, May 17, 2016 - link

    Why would you want a beast like the GTX 1080 for work in Photoshop and the rest of Adobe's suite? It'd just be a big waste of money.
  • bill44 - Tuesday, May 17, 2016 - link

    Architectural changes.
    By the end of the year, there will be some 4K HDR monitors, maybe even 120p. If I want to edit in Premiere with dual 4K HDR 120p screens, or if I prefer a 5K screen over a single-cable connection, what are my GPU choices? DP 1.3?

    I also mentioned 10bit support (not Quadro) and madVR. It's not this card (specifically) I'm interested in, but the architecture. There will be cheaper cards in the future for sure; however, they will use the same tech as this one. Hence my curiosity.
  • dragonsqrrl - Tuesday, May 17, 2016 - link

    The performance can be very useful in Premiere and After Effects for both viewport rendering and export.
  • Ryan Smith - Wednesday, May 18, 2016 - link

    "Some insights into 10/12bit support on Windows 10 (not just for games & madVR DX11 FSE) inc. generic programs like Photoshop/Premiere etc."

    You're still going to want a Quadro for pro work. NVIDIA is going to allow 10bpc support in full screen OpenGL applications, but not windowed applications.
