Earlier this month NVIDIA announced their latest generation flagship GeForce card, the GeForce GTX 1080. Based on their new Pascal architecture and built on TSMC’s 16nm FinFET process, the GTX 1080 is being launched as the first 16nm/14nm-based video card, and in time-honored fashion NVIDIA is starting at the high-end. The end result is that the GTX 1080 will be setting the new high mark for single-GPU performance.

Unlike past launches, NVIDIA is stretching out the launch of the GTX 1080 a bit more. After announcing it back on May 6th, the company is lifting their performance and architecture embargo today. Gamers, however, won’t be able to get their hands on the card until the 27th – next Friday – with pre-order sales starting this Friday. It is virtually guaranteed that the first batch of cards will sell out, but potential buyers will have a few days to mull over the data and decide if they want to throw down $699 for one of the first Founders Edition cards.

As for the AnandTech review, as I’ve only had a few days to work on it, I’m going to hold it back rather than rush out a less thorough article. In the meantime however, as I know everyone is eager to see our take on performance, I wanted to take a quick look at the card and the numbers as a preview of what’s to come. Furthermore, the entire performance dataset has been made available in the new GPU 2016 section of AnandTech Bench, for anyone who wants to see results at additional resolutions and settings.

Architecture

NVIDIA GPU Specification Comparison

|                       | GTX 1080                  | GTX 980 Ti | GTX 980    | GTX 780    |
|-----------------------|---------------------------|------------|------------|------------|
| CUDA Cores            | 2560                      | 2816       | 2048       | 2304       |
| Texture Units         | 160                       | 176        | 128        | 192        |
| ROPs                  | 64                        | 96         | 64         | 48         |
| Core Clock            | 1607MHz                   | 1000MHz    | 1126MHz    | 863MHz     |
| Boost Clock           | 1733MHz                   | 1075MHz    | 1216MHz    | 900MHz     |
| TFLOPs (FMA)          | 9 TFLOPs                  | 6 TFLOPs   | 5 TFLOPs   | 4.1 TFLOPs |
| Memory Clock          | 10Gbps GDDR5X             | 7Gbps GDDR5 | 7Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width      | 256-bit                   | 384-bit    | 256-bit    | 384-bit    |
| VRAM                  | 8GB                       | 6GB        | 4GB        | 3GB        |
| FP64                  | 1/32 FP32                 | 1/32 FP32  | 1/32 FP32  | 1/24 FP32  |
| TDP                   | 180W                      | 250W       | 165W       | 250W       |
| GPU                   | GP104                     | GM200      | GM204      | GK110      |
| Transistor Count      | 7.2B                      | 8B         | 5.2B       | 7.1B       |
| Manufacturing Process | TSMC 16nm                 | TSMC 28nm  | TSMC 28nm  | TSMC 28nm  |
| Launch Date           | 05/27/2016                | 06/01/2015 | 09/18/2014 | 05/23/2013 |
| Launch Price          | MSRP: $599, Founders $699 | $649       | $549       | $649       |

While I’ll get into architecture in much greater detail in the full article, at a high level the Pascal architecture (as implemented in GP104) is a mix of old and new; it’s not a revolution, but it’s an important refinement. Maxwell as an architecture was very successful for NVIDIA at both the consumer and professional levels, and for the consumer iterations of Pascal, NVIDIA has not made any radical changes. The basic throughput of the architecture has not changed – the ALUs, texture units, ROPs, and caches all perform similarly to how they did in GM2xx.

Consequently the performance aspects of consumer Pascal – we’ll ignore GP100 for the moment – are pretty easy to understand. NVIDIA’s focus this generation has been on pouring on the clockspeed to push total compute throughput to 9 TFLOPs, and on updating their memory subsystem to feed the beast that is GP104.
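
As a quick back-of-the-envelope check on that 9 TFLOPs figure, peak FP32 throughput is just the CUDA core count multiplied by the boost clock, with each FMA counting as 2 FLOPs. The short sketch below plugs in the GTX 1080 numbers from the comparison table and lands within rounding of NVIDIA’s figure.

```python
# Back-of-the-envelope peak FP32 throughput: cores x boost clock x 2 FLOPs per FMA
cuda_cores = 2560        # GP104 as configured for GTX 1080
boost_clock_ghz = 1.733  # advertised boost clock from the table above

tflops = cuda_cores * boost_clock_ghz * 2 / 1000
print(f"Peak FP32 throughput: {tflops:.2f} TFLOPs")  # ~8.87 TFLOPs, i.e. NVIDIA's "9 TFLOPs"
```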

On the clockspeed front, a great deal of the gains come from the move to 16nm FinFET. The smaller process allows NVIDIA to design a 7.2B transistor chip at just 314mm2, while the use of FinFET transistors – ultimately a necessity at a node this small to avoid debilitating leakage – significantly benefits both power consumption and the clockspeeds NVIDIA can get away with at practical power levels. To that end NVIDIA has run with the idea of boosting clockspeeds, and relative to Maxwell they have done additional work at the chip design level to allow for higher clockspeeds along the critical paths. All of this is coupled with energy efficiency optimizations at both the process and architectural level, allowing NVIDIA to hit these clockspeeds without blowing GTX 1080’s power budget.

Meanwhile to feed GTX 1080, NVIDIA has made a pair of important changes to improve their effective memory bandwidth. The first of these is the inclusion of faster GDDR5X memory, which as implemented on GTX 1080 is capable of reaching 10Gb/sec/pin, a significant 43% jump in theoretical bandwidth over the 7Gb/sec/pin speeds offered by traditional GDDR5 on last-generation Maxwell products. Coupled with this is the latest iteration of NVIDIA’s delta color compression technology – now on its fourth generation – which sees NVIDIA once again expanding their pattern library to better compress frame buffers and render targets. NVIDIA’s figures put the effective memory bandwidth gain at 20%, or a roughly 17% reduction in memory bandwidth used thanks to the newer compression methods.
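
To put those bandwidth figures in perspective, raw memory bandwidth is simply the per-pin data rate multiplied by the bus width. The sketch below works through the GTX 1080 and GTX 980 numbers and then applies NVIDIA’s quoted ~20% compression gain; keep in mind that the compression figure is an effective number from NVIDIA’s own testing, not additional physical bandwidth.

```python
def raw_bandwidth_gb_s(pin_rate_gbps, bus_width_bits):
    """Raw memory bandwidth in GB/s: per-pin data rate times bus width, converted to bytes."""
    return pin_rate_gbps * bus_width_bits / 8

gtx1080 = raw_bandwidth_gb_s(10, 256)  # GDDR5X at 10Gb/sec/pin -> 320 GB/s
gtx980 = raw_bandwidth_gb_s(7, 256)    # GDDR5 at 7Gb/sec/pin   -> 224 GB/s

print(f"GTX 1080: {gtx1080:.0f} GB/s raw, GTX 980: {gtx980:.0f} GB/s raw "
      f"(+{gtx1080 / gtx980 - 1:.0%})")                  # +43%
# NVIDIA quotes a further ~20% effective gain from 4th-gen delta color compression
print(f"GTX 1080 effective: ~{gtx1080 * 1.2:.0f} GB/s")  # ~384 GB/s
```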

As for features included, we’ll touch upon that in a lot more detail in the full review. But while Pascal is not a massive overhaul of NVIDIA’s architecture, it’s not without its own feature additions. Pascal gains the ability to pre-empt graphics operations at the pixel (thread) level and compute operations at the instruction level, allowing for much faster context switching. And on the graphics side of matters, the architecture introduces a new geometry projection ability – Simultaneous Multi-Projection – and as a more minor update, gets bumped up to Conservative Rasterization Tier 2.

Looking at the raw specifications then, GTX 1080 does not disappoint. Though we’re looking at fewer CUDA cores than the GM200-based GTX 980 Ti or Titan, NVIDIA’s significant focus on clockspeed means that GP104’s 2560 CUDA cores are far more performant than a simple core count would suggest. The base clockspeed of 1607MHz is some 42% higher than GTX 980’s (and 60% higher than GTX 980 Ti’s), and the 1733MHz boost clockspeed is a similar gain. On paper, GTX 1080 is set to offer 78% better performance than GTX 980, and 47% better performance than GTX 980 Ti. The real-world gains are, of course, not quite this large, but at times they come relatively close.
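
For reference, those on-paper percentages fall straight out of the same peak-throughput math, applied to the core counts and boost clocks in the comparison table; a minimal sketch:

```python
def peak_fp32_tflops(cores, boost_ghz):
    """Peak FP32 throughput in TFLOPs, counting an FMA as 2 FLOPs."""
    return cores * boost_ghz * 2 / 1000

gtx_1080 = peak_fp32_tflops(2560, 1.733)    # ~8.9 TFLOPs
gtx_980 = peak_fp32_tflops(2048, 1.216)     # ~5.0 TFLOPs
gtx_980_ti = peak_fp32_tflops(2816, 1.075)  # ~6.1 TFLOPs

print(f"vs GTX 980:    +{gtx_1080 / gtx_980 - 1:.0%}")     # ~+78%
print(f"vs GTX 980 Ti: +{gtx_1080 / gtx_980_ti - 1:.0%}")  # ~+47%
```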

Comments

  • QinX - Tuesday, May 17, 2016 - link

    Thanks for the explanation, I was worried that support for older games was already going down.
  • Badelhas - Tuesday, May 17, 2016 - link

    What about including the HTC Vive in your benchmarks? If you talk about the VR benefits, you have to show them in graphs, it's your speciality AnandTech! ;)
  • JeffFlanagan - Tuesday, May 17, 2016 - link

    Seconded. At this point VR gaming is much more interesting to me than even 4K gaming, and will drive my video card upgrades from now on. It's really nice to be able to play a game like it's the real world, rather than using a controller and looking at a screen.
  • MFK - Tuesday, May 17, 2016 - link

    Completely agreed.
    I'm a casual gamer, and my i5-2500k + GTX760 serve me perfectly fine.
    I have a 1440p monitor but I reduce the resolution to 1080 or 720 depending on how demanding the game is.

    My upgrade will be determined and driven by VR. Whoever manages to deliver acceptable VR performance in a reasonable price will get my $.

    And they will be competing in price and content against the PS4k + Move + Morpheus combo.
  • Ryan Smith - Tuesday, May 17, 2016 - link

    It's in the works, though there's an issue with how many games can be properly tested in VR mode without a headset attached.
  • haplo602 - Tuesday, May 17, 2016 - link

    It will be interesting to see how much GDDR5X affects the scores vs GDDR5. 1080 vs 1070 will be very telling, or alternatively a downclocked 1080 vs a 980 Ti...
  • fanofanand - Tuesday, May 17, 2016 - link

    excellent preview, little typo here.

    Translating this into numbers, at 4K we’re looking at 30% performance gain versus the GTX 980 and a 70% performance gain over the GTX 980, amounting to a very significant jump in efficiency and performance over the Maxwell generation. That durn GTX 980 is just all over the board!
  • tipoo - Tuesday, May 17, 2016 - link

    How does Pascal do on async compute? I know that was the big bugbear with Maxwell, with Nvidia promising it but it looking like they were doing the scheduling on the CPU, not the GPU like GCN.

    http://www.extremetech.com/extreme/213519-asynchro...

    https://forum.beyond3d.com/threads/dx12-performanc...
  • Stuka87 - Tuesday, May 17, 2016 - link

    I do find it a bit annoying that you guys are still using a junk reference 290X instead of a properly cooled 390X.
  • TheinsanegamerN - Tuesday, May 17, 2016 - link

    That's what AMD provided. A custom cooled nvidia 980ti will perform better than the stock model, yet people don't complain about that.

    When Anand DID use a third party card (460s IIRC) there was a massive backlash from the community saying they were 'unfair' in their reviews. So now they just use stock cards. Blame AMD for dropping the ball on that one.
