Final Words

Bringing this belated review to a close, I want to pick up where I started: FinFET. In ages long gone we used to get near-yearly updates to manufacturing nodes, and while those half-node shrinks weren’t as potent as a full node shrink spread over a longer period of time, they kept the GPU industry moving at a quick pace. Not to get too distracted by history, but I won’t lie: as a long-time editor and gamer, I do still miss those days. At the same time it underscores why I’m so excited about the first full node shrink in 4 years. It has taken a long time to get here, but now that we’ve finally arrived, we get to reap the benefits.

GP104 and the Pascal architecture are certainly defined by the transition to 16nm FinFET. Smaller and much better transistors have allowed NVIDIA to make a generational leap in performance in less than two years. You can now buy a video card built around a 314mm² die packing 7.2B transistors, with all of those transistors adding up to fantastic performance. It’s fundamental progress in the truest sense of the word, and after 4 years it’s refreshing.

But even though FinFET is a big part of what makes Pascal so powerful, it’s still just a part. NVIDIA’s engineering team pulled off a small miracle with Maxwell, and while Pascal doesn’t rock the boat too hard, there are still some very important changes that set it apart from Maxwell 2. These will reverberate across NVIDIA’s GPU lineup for years to come.

While not unexpected, the use of GDDR5X is an interesting choice for NVIDIA, and one that should keep NVIDIA’s consumer GPUs relatively well fed for a couple of years or so. The new memory technology is not a radical change – it’s an extension of GDDR5, after all – but it allows NVIDIA to continue improving memory bandwidth without having to resort to more complex and expensive technologies like HBM2. Combined with the latest generation of delta color compression, NVIDIA’s effective memory bandwidth for graphics has increased by a good deal. And though it’s only being used on GTX 1080 at this time, there’s an obvious path towards using it in future cards (maybe a Pascal refresh?) if NVIDIA wants to go that route.
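To put a rough number on the raw bandwidth side of that, here’s a back-of-the-envelope sketch. The 10Gbps (GDDR5X) and 7Gbps (GDDR5) data rates and the shared 256-bit bus width are the cards’ published memory specifications, so treat the figures as illustrative rather than measured:

    # Rough raw memory bandwidth comparison: GTX 1080 (GDDR5X) vs. GTX 980 (GDDR5).
    # Data rates and bus widths are the published spec-sheet figures.
    def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
        """Peak bandwidth in GB/s: per-pin data rate x bus width, divided by 8 bits/byte."""
        return data_rate_gbps * bus_width_bits / 8

    gtx_1080 = peak_bandwidth_gbs(10.0, 256)  # -> 320 GB/s
    gtx_980  = peak_bandwidth_gbs(7.0, 256)   # -> 224 GB/s
    print(f"GTX 1080: {gtx_1080:.0f} GB/s, GTX 980: {gtx_980:.0f} GB/s "
          f"(+{gtx_1080 / gtx_980 - 1:.0%} raw, before delta color compression)")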

On the implementation side of matters, I give a lot of credit to FinFET, but NVIDIA clearly also put a great deal of work into running up the clocks for GP104. GPUs have historically favored growing wider instead of growing faster, so this is an unexpected change. It’s not one without its drawbacks – overclocking isn’t looking very good right now – but on the other hand it allows NVIDIA to make a generational jump without making their GPU too much wider, which neatly bypasses potential scaling issues for this generation.
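For a sense of how much of the generational jump comes from clockspeed rather than width, a quick sketch using the official boost clocks and CUDA core counts (spec-sheet figures, not our measured clocks, so consider them assumptions for illustration):

    # Where GTX 1080's paper throughput gain over GTX 980 comes from.
    gtx_980_cores,  gtx_980_boost_mhz  = 2048, 1216
    gtx_1080_cores, gtx_1080_boost_mhz = 2560, 1733

    clock_gain = gtx_1080_boost_mhz / gtx_980_boost_mhz - 1   # ~43% from frequency
    width_gain = gtx_1080_cores / gtx_980_cores - 1           # ~25% from extra CUDA cores
    combined   = (gtx_1080_cores * gtx_1080_boost_mhz) / (gtx_980_cores * gtx_980_boost_mhz) - 1

    print(f"Clocks: +{clock_gain:.0%}, width: +{width_gain:.0%}, combined paper throughput: +{combined:.0%}")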

As for the Pascal architecture, I don’t think we’re yet in a position to fully comprehend and appreciate the work scheduling changes that NVIDIA has made, as it will take developers some time to put these features to good use. From a computer science standpoint, the addition of instruction-level preemption is a huge advancement for a GPU, but right now the consumer applications are admittedly limited. Though as GPUs and CPUs get closer and closer, that won’t always be the case. Otherwise the most consumer-applicable change is dynamic load balancing, which gives Pascal the flexibility it needs to properly benefit from workload concurrency via asynchronous compute. Don’t expect AMD-like gains here, but hopefully developers will be able to squeeze a bit more still out of Pascal.

I’m also interested in seeing what developers eventually do with Simultaneous Multi-Projection. NVIDIA going after the VR market with it first is the sensible move, and anything that improves VR performance is a welcome change given the high system requirements for the technology. But there’s a lot of flexibility here that developers have only begun to experiment with.

Finally, in the grab bag, it’s still a bit too early for HDR displays that can take advantage of Pascal’s DisplayPort 1.4 controller, but the groundwork has been laid. The entire point of HDR technology is to make a night and day difference, and I’m excited to see what kind of an impact this can make on PC gaming. In the meantime, we can still enjoy features such as Fast Sync, and, finally for NVIDIA’s high-end cards, a modern video codec block that can support just about every codec under the sun.

Performance & Recommendations: By The Numbers

With all of that said, let’s get down to the business of numbers. By the numbers, GeForce GTX 1080 is the fastest card on the market, and we wouldn’t expect anything less from NVIDIA. I’m still on the fence about whether GTX 1080 is truly fast enough for 4K, as our benchmarks still show cases where even NVIDIA’s latest and greatest can’t get much above 30fps with all the quality features turned up, but GTX 1080 certainly has the best chance. Otherwise at 1440p the card should make Asus PG279Q G-Sync monitor owners very happy.

Relative to GTX 980 then, we’re looking at an average performance gain of 66% at 1440p, and 71% at 4K. This is a very significant step up for GTX 980 owners, but it’s also not quite the same step up we saw from GTX 680 to GTX 980 (75%). GTX 980 owners who are looking for a little more bang for their buck could easily be excused for waiting another generation for a true doubling, especially with GTX 1080’s higher prices. GTX 980 Ti/Titan X owners can also hold back, as this card isn’t GM200’s replacement. Otherwise for GTX 700 or 600 series owners, GTX 1080 is a rather massive step up.

GTX 1070 follows in this same mold. NVIDIA is targeting the card at the 1440p market, and there it does a very good job, delivering 60fps performance in most games. By the numbers it’s a good step up from GTX 970, but with a 57% gain at 1440p, it’s not a night and day difference. Current GTX 770/670 owners on the other hand should be very satisfied.

It’s interesting to note though that the performance gap between NVIDIA’s 80 and 70 cards has increased this generation. At 1440p GTX 970 delivers 87% of GTX 980’s performance, but GTX 1070 only delivers 81% of GTX 1080’s performance at the same settings. The net result is that GTX 1070 isn’t quite as much of a spoiler as GTX 970 was, or to flip that around, GTX 1080 is relatively more valuable than GTX 980 was.
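As a quick sanity check, the percentages quoted in the last few paragraphs hang together; the sketch below uses only the 1440p figures from this conclusion, not the full benchmark tables:

    # Cross-checking the quoted 1440p ratios (illustrative arithmetic only).
    gain_1080_over_980 = 1.66   # GTX 1080 averages 166% of GTX 980
    gain_1070_over_970 = 1.57   # GTX 1070 averages 157% of GTX 970
    ratio_970_to_980   = 0.87   # GTX 970 delivered 87% of GTX 980

    implied_1070_to_1080 = (gain_1070_over_970 * ratio_970_to_980) / gain_1080_over_980
    print(f"Implied GTX 1070 / GTX 1080: {implied_1070_to_1080:.0%}")  # ~82%, in line with the ~81% above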

Meanwhile from a technical perspective, NVIDIA has once again nailed the trifecta of performance, noise, and power efficiency. GP104 in this respect is clearly descended from GM204, and it makes GTX 1080 and 1070 very potent cards. Top-tier performance with lower power consumption is always great news for desktop gamers – especially in the middle of summer – but I’m especially interested in seeing what this means for the eventual laptop SKUs. The slight uptick in rated TDPs does bear keeping an eye on though; after the GTX 700 series, NVIDIA came back to their senses on power consumption, so hopefully this isn’t the resumption of TDP creep as a means to keep performance growing.
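To put a rough number on the efficiency angle, here is a back-of-the-envelope estimate using the 66% average 1440p gain from above and the cards’ rated TDPs (180W for GTX 1080 and 165W for GTX 980, spec-sheet figures rather than measured power draw):

    # Perf-per-watt estimate from rated TDPs (not measured power), 1440p averages.
    gtx_980_tdp_w, gtx_1080_tdp_w = 165, 180
    perf_gain_1440p = 1.66

    perf_per_watt_gain = perf_gain_1440p / (gtx_1080_tdp_w / gtx_980_tdp_w) - 1
    print(f"Estimated perf-per-watt improvement: +{perf_per_watt_gain:.0%}")  # ~52%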

The real drawbacks right now are pricing and availability. Even 2 months after the launch of the GTX 1080, supplies are still very tight. GTX 1070 availability is much better, thankfully, but those cards still go rather quickly. The end result is that NVIDIA’s MSRPs have proven unrealistic; if you want a GTX 1080 today, be prepared to spend $699, while a GTX 1070 will set you back $429 or more. Clearly these cards are worth the price to some, as NVIDIA and their partners keep selling them, but it puts a damper on things. For now all that NVIDIA can do is keep shipping chips, and hopefully once supply reaches equilibrium with demand, we’ll get the $599/$379 prices NVIDIA originally touted.
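In percentage terms, the street-price premium over MSRP (using only the prices quoted above) works out as follows:

    # Street price vs. MSRP premium, using the prices quoted above.
    msrp   = {"GTX 1080": 599, "GTX 1070": 379}
    street = {"GTX 1080": 699, "GTX 1070": 429}

    for card in msrp:
        premium = street[card] / msrp[card] - 1
        print(f"{card}: ${street[card]} street vs. ${msrp[card]} MSRP (+{premium:.0%})")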

Otherwise I’m of two minds on the Founders Edition cards. NVIDIA has once again built a fantastic set of install-it-and-forget-it cards, and while not radically different from the GTX 900 series reference designs, these are still their best designs to date. That this comes with an explicit price premium makes it all a bit harder to cheer for though, as it pushes the benefits of the reference design out of the hands of some buyers. If and when overall card pricing finally comes down, it will be interesting to see what sales are like for the Founders Editions, and whether it makes sense for NVIDIA to continue doing this. I suspect it will – and that this is going to be the new normal – but it’s going to depend on consumer response and just what kind of cool things NVIDIA’s board partners do with their own designs.

Overall then, I think it’s safe to say that NVIDIA has started off the FinFET generation with a bang. GTX 1080 and GTX 1070 are another fantastic set of cards from NVIDIA, and they will keep the GPU performance crown solidly in NVIDIA’s hands. At the same time, competitor AMD won’t have a response for the high-end market for at least the next few months, so this will be an uncontested reign for NVIDIA. It goes without saying then, given current card prices amid the shortage, that I hope they prove to be benevolent rulers.

Last, but not least, we’re not done yet. NVIDIA is moving at a quick pace, and this is just the start of the Pascal generation. GeForce GTX 1060 launched this week and we’ll be taking a look at it on Friday. Pascal has set NVIDIA up very well, and it will be interesting to see how that extends to the mainstream/enthusiast market.

Comments

  • Ranger1065 - Thursday, July 21, 2016 - link

    Your unwavering support for Anandtech is impressive.

    I too have a job that keeps me busy, yet oddly enough I find the time to browse (I prefer that word to "trawl") a number of sites.

    I find it helps to form objective opinions.

    I don't believe in early adoption, but I do believe in getting the job done on time; however, if you are comfortable with a 2 month delay, so be it :)

    Interesting to note that architectural deep dives concern your art and media departments so closely in their purchasing decisions. Who would have guessed?

    It's true (God knows it's been stated here often enough) that Anandtech goes into detail like no other, I don't dispute that. But is it worth the wait? A significant number seem to think not.

    Allow me to leave one last issue for you to ponder (assuming you have the time in your extremely busy schedule).

    Is it good for Anandtech?
  • catavalon21 - Thursday, July 21, 2016 - link

    Impatient as I was at first for benchmarks (yes, I'm a numbers junkie), since it's evident precious few of us will have had a chance to buy one of these cards yet (or the 480), I doubt the delay has caused anyone to buy the wrong card. Can't speak for the smartphone review folks are complaining about being absent, but as it turns out, what I'm initially looking for is usually done early on in Bench. The rest of this, yeah, it can wait.
  • mkaibear - Saturday, July 23, 2016 - link

    Job, house, kids, church... more than enough to keep me sufficiently busy that I don't have the time to browse more than a few sites. I pick them quite carefully.

    Given the lifespan of a typical system is >5 years I think that a 2 month delay is perfectly reasonable. It can often take that long to get purchasing signoff once I've decided what they need to purchase anyway (one of the many reasons that architectural deep dives are useful - so I can explain why the purchase is worthwhile). Do you actually spend someone else's money at any point or are you just having to justify it to yourself?

    Whether or not it's worth the wait to you is one thing - but it's clearly worth the wait to both Anandtech and to Purch.
  • razvan.uruc@gmail.com - Thursday, July 21, 2016 - link

    Excellent article, well worth the wait!
  • giggs - Thursday, July 21, 2016 - link

    While this is a very thorough and well written review, it makes me wonder about sponsored content and product placement.
    The PG279Q is the only monitor mentioned, making sure the brand appears, and nothing about competing products. It felt unnecessary.
    I hope it's just a coincidence, but considering there has been quite a lot of coverage about Asus in the last few months, I'm starting to doubt some of the stuff I read here.
  • Ryan Smith - Thursday, July 21, 2016 - link

    "The PG279Q is the only monitor mentionned, making sure the brand appears, and nothing about competing products."

    There's no product placement or the like (and if there was, it would be disclosed). I just wanted to name a popular 1440p G-Sync monitor to give some real-world connection to the results. We've had cards for a bit that can drive 1440p monitors at around 60fps, but GTX 1080 is really the first card that is going to make good use of higher refresh rate monitors.
  • giggs - Thursday, July 21, 2016 - link

    Fair enough, thank you for responding promptly. Keep up the good work!
  • arh2o - Thursday, July 21, 2016 - link

    This is really the gold standard of reviews. More in-depth than any site on the internet. Great job Ryan, keep up the good work.
  • Ranger1065 - Thursday, July 21, 2016 - link

    This is a quality article.
  • timchen - Thursday, July 21, 2016 - link

    Great article. It is pleasant to read more about technology instead of testing results. Some questions though:

    1. higher frequency: I am kind of skeptical that the overall higher frequency is mostly enabled by FinFET. Maybe it is the case, but for example when Intel moved to FinFET we did not see such improvement. RX480 is not showing that either. It seems pretty evident the situation is different from 8800GTX where we first get frequency doubling/tripling only in the shader domain though. (Wow DX10 is 10 years ago... and computation throughput is improved by 20x)

    2. The fastsync comparison graph looks pretty suspicious. How can Vsync have such high latency? The most latency I can see in a double buffer scenario with vsync is when the screen refresh happens just a tiny bit earlier than the completion of a buffer. That would give a delay of two frame times, which is about 33 ms (remember we are talking about a case where GPU fps > 60). This is unless, of course, they are testing vsync at 20Hz or something.
