The NVIDIA GeForce GTX 1080 Preview: A Look at What's to Come
by Ryan Smith on May 17, 2016 9:00 AM EST
Earlier this month NVIDIA announced their latest generation flagship GeForce card, the GeForce GTX 1080. Based on their new Pascal architecture and built on TSMC’s 16nm FinFET process, the GTX 1080 is being launched as the first 16nm/14nm-based video card, and in time-honored fashion NVIDIA is starting at the high-end. The end result is that the GTX 1080 will be setting the new high mark for single-GPU performance.
Unlike past launches, NVIDIA is stretching out the launch of the GTX 1080 a bit more. After previously announcing it back on May 6th, the company is lifting their performance and architecture embargo today. Gamers however won’t be able to get their hands on the card until the 27th – next Friday – with pre-order sales starting this Friday. It is virtually guaranteed that the first batch of cards will sell out, but potential buyers will have a few days to mull over the data and decide if they want to throw down $699 for one of the first Founders Edition cards.
As for the AnandTech review, as I’ve only had a few days to work on the article, I’m going to hold it back rather than rush it out as a less thorough article. In the meantime however, as I know everyone is eager to see our take on performance, I wanted to take a quick look at the card and the numbers as a preview of what’s to come. Furthermore the entire performance dataset has been made available in the new GPU 2016 section of AnandTech Bench, for anyone who wants to see results at additional resolutions and settings.
NVIDIA GPU Specification Comparison

| | GTX 1080 | GTX 980 Ti | GTX 980 | GTX 780 |
|---|---|---|---|---|
| TFLOPs (FMA) | 9 TFLOPs | 6 TFLOPs | 5 TFLOPs | 4.1 TFLOPs |
| Memory Clock | 10Gbps GDDR5X | 7Gbps GDDR5 | 7Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width | 256-bit | 384-bit | 256-bit | 384-bit |
| FP64 | 1/32 FP32 | 1/32 FP32 | 1/32 FP32 | 1/24 FP32 |
| Manufacturing Process | TSMC 16nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Launch Price | $599 MSRP ($699 Founders Edition) | $649 | $549 | $649 |
While I’ll get into the architecture in much greater detail in the full article, at a high level the Pascal architecture (as implemented in GP104) is a mix of old and new; it’s not a revolution, but it is an important refinement. Maxwell was a very successful architecture for NVIDIA at both the consumer and professional levels, and for the consumer iterations of Pascal, NVIDIA has not made any radical changes. The basic throughput of the architecture has not changed – the ALUs, texture units, ROPs, and caches all perform similarly to how they did in GM2xx.
Consequently the performance aspects of consumer Pascal – we’ll ignore GP100 for the moment – are pretty easy to understand. NVIDIA’s focus on this generation has been on pouring on the clockspeed to push total compute throughput to 9 TFLOPs, and updating their memory subsystem to feed the beast that is GP104.
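Where that 9 TFLOPs figure comes from is straightforward arithmetic: CUDA cores times two FLOPs per fused multiply-add per clock, times the boost clock. A quick sketch, using the GP104 specs discussed below and the standard 2-FLOPs-per-FMA convention:

```python
# Peak FP32 throughput behind NVIDIA's "9 TFLOPs" figure.
cuda_cores = 2560
boost_clock_ghz = 1.733
flops_per_core_per_clock = 2  # one fused multiply-add counts as 2 FLOPs

peak_tflops = cuda_cores * flops_per_core_per_clock * boost_clock_ghz / 1000
print(f"{peak_tflops:.2f} TFLOPs")  # ~8.87, which NVIDIA rounds up to 9
```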
On the clockspeed front, a great deal of the gains come from the move to 16nm FinFET. The smaller process allows NVIDIA to build a 7.2B transistor chip in just 314mm2, while the use of FinFET transistors – ultimately a necessity at a node this small to avoid debilitating leakage – significantly benefits power consumption and the clockspeeds NVIDIA can sustain at practical power levels. NVIDIA has run with this idea, doing additional work at the chip design level relative to Maxwell to allow for higher clockspeeds along the critical paths. All of this is coupled with energy efficiency optimizations at both the process and architectural levels, allowing NVIDIA to hit these clockspeeds without blowing GTX 1080’s power budget.
Meanwhile to feed GTX 1080, NVIDIA has made a pair of important changes to improve their effective memory bandwidth. The first of these is the inclusion of faster GDDR5X memory, which as implemented on GTX 1080 is capable of reaching 10Gb/sec/pin, a significant 43% jump in theoretical bandwidth over the 7Gb/sec/pin speeds offered by traditional GDDR5 on last-generation Maxwell products. Coupled with this is the latest iteration of NVIDIA’s delta color compression technology – now on its fourth generation – which sees NVIDIA once again expanding their pattern library to better compress frame buffers and render targets. NVIDIA’s figures put the effective memory bandwidth gain at 20%, or a roughly 17% reduction in memory bandwidth used thanks to the newer compression methods.
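The bandwidth figures above check out with some quick arithmetic. A sketch of the math, taking the per-pin rates and the 256-bit bus from the spec table (note that a 20% effective-bandwidth gain from compression is the same thing as using about 17% less real bandwidth):

```python
# Back-of-the-envelope check of GTX 1080's memory bandwidth gains.
bus_width_bits = 256
gddr5x_gbps_per_pin = 10  # GTX 1080
gddr5_gbps_per_pin = 7    # Maxwell-era GDDR5

raw_jump = gddr5x_gbps_per_pin / gddr5_gbps_per_pin - 1
print(f"raw per-pin gain: {raw_jump:.0%}")        # ~43%

# 4th-gen delta color compression: +20% effective bandwidth
# is equivalent to a 1 - 1/1.2 reduction in bandwidth consumed.
compression_gain = 0.20
bandwidth_saved = 1 - 1 / (1 + compression_gain)
print(f"bandwidth saved: {bandwidth_saved:.0%}")  # ~17%

# Total raw bandwidth on the 256-bit bus
total_gb_s = bus_width_bits / 8 * gddr5x_gbps_per_pin
print(f"total: {total_gb_s:.0f} GB/s")            # 320 GB/s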
As for features included, we’ll touch upon that in a lot more detail in the full review. But while Pascal is not a massive overhaul of NVIDIA’s architecture, it’s not without its own feature additions. Pascal gains the ability to pre-empt graphics operations at the pixel (thread) level and compute operations at the instruction level, allowing for much faster context switching. And on the graphics side of matters, the architecture introduces a new geometry projection ability – Simultaneous Multi-Projection – and as a more minor update, gets bumped up to Conservative Rasterization Tier 2.
Looking at the raw specifications then, GTX 1080 does not disappoint. Though we’re looking at fewer CUDA cores than the GM200-based GTX 980 Ti or Titan, NVIDIA’s significant focus on clockspeed means that GP104’s 2560 CUDA cores are far more performant than a simple core count would suggest. The base clockspeed of 1607MHz is some 42% higher than GTX 980 (and 60% higher than GTX 980 Ti), and the 1733MHz boost clockspeed is a similar gain. On paper, GTX 1080 is set to offer 78% better performance than GTX 980, and 47% better performance than GTX 980 Ti. The real-world gains are, of course, not quite this great, but at times they come relatively close to these numbers.
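The on-paper comparisons above can be reproduced from peak-FP32 arithmetic. A sketch, where the GTX 1080 figures come from this preview and the GTX 980 (2048 cores, 1216MHz boost) and GTX 980 Ti (2816 cores, 1075MHz boost) figures are those cards' published reference specs:

```python
# Rough check of the "78% over GTX 980, 47% over GTX 980 Ti" paper specs.
def peak_tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz / 1e6  # FMA counts as 2 FLOPs

gtx1080 = peak_tflops(2560, 1733)
gtx980 = peak_tflops(2048, 1216)
gtx980ti = peak_tflops(2816, 1075)

print(f"vs GTX 980:    +{gtx1080 / gtx980 - 1:.0%}")    # ~78%
print(f"vs GTX 980 Ti: +{gtx1080 / gtx980ti - 1:.0%}")  # ~47%
```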
Comments
bananaforscale - Wednesday, May 18, 2016
Early adopter tax.
nagi603 - Tuesday, May 17, 2016
So much for a Fury X price cut...
just4U - Wednesday, May 18, 2016
likely about $1200 in Canada.. uh.. no.
qasdfdsaq - Tuesday, May 17, 2016
As long as the follow up doesn't go the way of part 2 of the Galaxy S7 review... 2.5 months and no news or updates. Mhmm.
milkywayer - Tuesday, May 17, 2016
I'm jaw-dropped by the performance jump.
Guess what's going to replace my 970 in a month? (assuming I'll be able to find the damn thing even with another +$100 premium here in Pakistan)
cknobman - Tuesday, May 17, 2016
Performance improvement is nice but not jaw dropping.
Honestly I think Nvidia has left the door open for AMD to take control of the high end later this year with the new Fury line.
I'll be waiting on making a purchase.
ChefJeff789 - Tuesday, May 17, 2016
Later this year? AMD has said that Vega is not coming until next year. I'd be shocked to see it sooner.
zoxo - Tuesday, May 17, 2016
There are rumours that AMD might come out with Vega this October. Then again the GP100 chip can be released to the consumer space early too if NVidia feels the need to respond. All in all, I'm hoping for pretty darn amazing high-end MXM cards from both sides. This generation is rather exciting!
Yojimbo - Tuesday, May 17, 2016
There are always rumors of AMD releasing something. There were rumors that the Fury cards would be released up to 9 months before they were eventually released. For the past few years AMD's release schedules go backwards, not forwards. I'll believe a 2016 Vega release when I see it. Did AMD even say when in 2017 Vega is supposed to arrive? They have it sitting there at the very beginning of 2017 in their slide graphic, but that's hardly something they'd have any trouble ignoring (in terms of PR) if it actually comes out in Q2 2017.
zoxo - Tuesday, May 17, 2016
Well, they have a lot of wiggle room in this. If they release it early, it means lower yields, which means lower initial margins, and possibly lower out of the box clock speeds, but they do get the performance crown (at least before the 1080Ti is released). They can however release it early, then release a 'GHz' edition after the Ti is out to compete with it as needed.