Synthetics

Though we’ve covered bits and pieces of synthetic performance when discussing aspects of the Pascal architecture, before we move on to power testing I want to take a deeper look at it. Based on what we know about the Pascal architecture we should have a good idea of what to expect, but these tests nonetheless serve as a canary for any architectural changes we may have missed.

Synthetic: TessMark, Image Set 4, 64x Tessellation

Starting off with tessellation performance, we find that the GTX 1080 further builds on NVIDIA’s already impressive tessellation performance. Unrivaled at this point, GTX 1080 delivers a 63% increase over GTX 980 here, and maintains a 24% lead over GTX 1070. Suffice it to say, the Pascal cards will have no trouble keeping up with the geometry needs of games for a long time to come.

Breaking down performance by tessellation level to look at the GTX 980 and GTX 1080 more closely on a logarithmic scale, what we find is that there’s a rather consistent advantage for the GTX 1080 at all tessellation levels. Even 8x tessellation is still 56% faster. This indicates that NVIDIA hasn’t made any fundamental changes to their geometry hardware (PolyMorph Engines) between Maxwell 2 and Pascal. Everything has simply been scaled up in clockspeed and scaled out in the total number of engines. Though I will note that the performance gains are less than the theoretical maximum, so we're not seeing perfect scaling by any means.
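
To put rough numbers on that scaling, the sketch below compares theoretical geometry throughput using the published PolyMorph Engine counts (one per SM: 16 on GTX 980, 20 on GTX 1080) and the official boost clocks. It's a back-of-the-envelope estimate only; sustained clocks and real triangle rates will differ.

```python
# Back-of-the-envelope geometry throughput scaling, GTX 980 -> GTX 1080.
# Assumes one PolyMorph Engine per SM and the official boost clocks;
# sustained clocks and real triangle rates will differ.
cards = {
    "GTX 980":  {"polymorph_engines": 16, "boost_mhz": 1216},
    "GTX 1080": {"polymorph_engines": 20, "boost_mhz": 1733},
}

def geometry_rate(card):
    # Relative units: engines * clock
    return card["polymorph_engines"] * card["boost_mhz"]

ratio = geometry_rate(cards["GTX 1080"]) / geometry_rate(cards["GTX 980"])
print(f"Theoretical scaling: {ratio:.2f}x")  # ~1.78x, vs the ~1.56x-1.63x measured above
```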

Up next, we have SteamVR’s Performance Test. While this test is based on the latest version of Valve’s Source engine, the test itself is purely synthetic, designed to test the suitability of systems for VR, making it our sole VR-focused test at this time. It should be noted that the results in this test are not linear, and furthermore the score is capped at 11. Of particular note, cards that fail to reach GTX 970/R9 290 levels fall off of a cliff rather quickly. So test results should be interpreted a little differently.

SteamVR Performance Test

Whereas the minimum recommended GTX 970 and Radeon R9 290 cards score in the mid-to-high 6 range, NVIDIA’s new Pascal cards max out the score at 11, which for the purposes of this test means that both cards exceed Valve’s recommended specifications, making them capable of running Valve’s VR software at maximum quality with no performance issues.

Finally, to look at texel and pixel fillrates, for 2016 we have switched from the rather old 3DMark Vantage to the Beyond3D Test Suite. This suite offers a slew of additional tests – many of which we use behind the scenes or in our earlier architectural analysis – but for now we’ll stick to simple pixel and texel fillrates.

Beyond3D Suite - Pixel Fillrate

Starting with pixel fillrate, the GTX 1080 is well in the lead. While at 64 ROPs GP104 has fewer ROPs than the GM200-based GTX 980 Ti, it more than makes up for the difference with significantly higher clockspeeds. Similarly, when it comes to feeding those ROPs, GP104’s narrower memory bus is more than offset by the use of 10Gbps GDDR5X. But even then the two should be closer than this on paper, so the GTX 1080 is exceeding expectations.
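
As a rough sanity check on those paper specs, the sketch below computes theoretical pixel fillrate and memory bandwidth from the published ROP counts, official boost clocks, and memory configurations. Real cards often sustain higher clocks than the official boost figure, so treat these as ballpark numbers.

```python
# Paper specs: pixel fillrate = ROPs * boost clock; bandwidth = bus width * data rate.
# Official boost clocks are used here; real cards often sustain higher clocks.
cards = {
    "GTX 980":    {"rops": 64, "boost_mhz": 1216, "bus_bits": 256, "gbps": 7.0},
    "GTX 980 Ti": {"rops": 96, "boost_mhz": 1075, "bus_bits": 384, "gbps": 7.0},
    "GTX 1080":   {"rops": 64, "boost_mhz": 1733, "bus_bits": 256, "gbps": 10.0},
}

for name, c in cards.items():
    gpix = c["rops"] * c["boost_mhz"] / 1000   # GPixels/s
    bw = c["bus_bits"] * c["gbps"] / 8         # GB/s
    print(f"{name}: {gpix:.1f} GPixels/s, {bw:.0f} GB/s")
# GTX 980:    77.8 GPixels/s, 224 GB/s
# GTX 980 Ti: 103.2 GPixels/s, 336 GB/s
# GTX 1080:   110.9 GPixels/s, 320 GB/s -> on paper the two flagships are close
```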

As we discovered in 2014 with Maxwell 2, NVIDIA’s Delta Color Compression technology has a huge impact on pixel fillrate testing. So most likely what we’re seeing here is Pascal’s 4th generation DCC in action, helping GTX 1080 further compress its buffers and squeeze more performance out of the ROPs.
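
For illustration only, the toy example below shows the general idea behind delta color compression: store a tile as one anchor value plus small per-pixel deltas, which take far fewer bits when the tile is smooth. The tile size, anchor choice, and delta width here are arbitrary assumptions, and this is not NVIDIA's actual hardware scheme.

```python
# Illustrative-only sketch of the idea behind delta color compression.
# This is NOT NVIDIA's hardware algorithm; parameters are arbitrary.
def compress_tile(tile):
    """tile: flat list of 8-bit values for an 8x8 block (one color channel)."""
    anchor = tile[0]
    deltas = [p - anchor for p in tile[1:]]
    # If every delta fits in 4 signed bits, the tile compresses ~2:1 for this channel.
    if all(-8 <= d <= 7 for d in deltas):
        return ("compressed", anchor, deltas)   # ~8 + 63*4 bits instead of 64*8
    return ("raw", tile)                        # fall back to uncompressed storage

smooth_tile = [120 + (i % 4) for i in range(64)]   # gentle gradient: compresses
noisy_tile = [(i * 37) % 256 for i in range(64)]   # high-frequency noise: stays raw
print(compress_tile(smooth_tile)[0])  # "compressed"
print(compress_tile(noisy_tile)[0])   # "raw"
```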

Though with that in mind, it’s interesting to note that even with an additional generation of DCC, this really only helps NVIDIA keep pace. The actual performance gains here versus GTX 980 are 56%, not too far removed from the gains we see in games and well below the theoretical difference in FLOPs. So despite the increase in pixel throughput due to architectural efficiency, it’s really only enough to help keep up with the other areas of the more powerful Pascal GPU.

As for GTX 1070, things are a bit different. The card has all of the ROPs of GTX 1080 and 80% of the memory bandwidth; what it doesn’t have is GP104’s 4th GPC. As that GPC is home to one of the Raster Engines responsible for rasterization, GTX 1070 can only rasterize 48 pixels/clock to begin with, despite the fact that its ROPs can accept 64 pixels. As a result it takes a significant hit here, delivering 77% of GTX 1080’s pixel throughput. With all of that said, the fact that in-game performance is closer than this is a reminder that while pixel throughput is an important part of game performance, it’s often not the bottleneck.
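
A minimal model of that front-end limit, assuming 16 pixels rasterized per Raster Engine per clock (the figure implied by the 48 pixels/clock number above), looks like this:

```python
# The GTX 1070 keeps all 64 ROPs but loses a GPC, and with it one Raster Engine.
# Assuming 16 pixels rasterized per Raster Engine per clock, the front end becomes
# the per-clock limit.
def pixels_per_clock(gpcs, rops, raster_per_gpc=16):
    return min(gpcs * raster_per_gpc, rops)

gtx1080 = pixels_per_clock(gpcs=4, rops=64)   # min(64, 64) = 64
gtx1070 = pixels_per_clock(gpcs=3, rops=64)   # min(48, 64) = 48
print(f"GTX 1070 per-clock pixel bound: {gtx1070 / gtx1080:.0%} of GTX 1080")  # 75%, vs ~77% measured
```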

Beyond3D Suite - INT8 Texel Fillrate

As for INT8 texel fillrates, the results are much more straightforward. GTX 1080’s improvement over GTX 980 in texel throughput almost perfectly matches the theoretical improvement we’d expect based on the specifications (if not slightly exceeding it), delivering an 85% boost. As a result it’s now the top card in our charts for texel throughput, dethroning the still-potent Fury X. Meanwhile GTX 1070 backs off a bit from these gains, as we’d expect, as a consequence of having only three-quarters the number of texture units.
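
For reference, here is a quick calculation of theoretical INT8 texel rates from the published texture unit counts and official boost clocks. Real cards typically boost above the official figure, which is consistent with measured results landing at or slightly above these numbers.

```python
# Theoretical INT8 texel rate = texture units * boost clock (official boost clocks).
cards = {
    "GTX 980":  {"tmus": 128, "boost_mhz": 1216},
    "GTX 1070": {"tmus": 120, "boost_mhz": 1683},
    "GTX 1080": {"tmus": 160, "boost_mhz": 1733},
}

for name, c in cards.items():
    print(f"{name}: {c['tmus'] * c['boost_mhz'] / 1000:.0f} GTexels/s")
# GTX 980: 156, GTX 1070: 202, GTX 1080: 277

scaling = (160 * 1733) / (128 * 1216)
print(f"GTX 1080 vs GTX 980: {scaling:.2f}x theoretical")  # ~1.78x, vs the 1.85x measured
```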

Comments

  • grrrgrrr - Wednesday, July 20, 2016 - link

    Solid review! Some nice architecture introductions.
  • euskalzabe - Wednesday, July 20, 2016 - link

    The HDR discussion of this review was super interesting, but as always, there's one key piece of information missing: WHEN are we going to see HDR monitors that take advantage of these new GPU abilities?

    I myself am stuck at 1080p IPS because more resolution doesn't entice me, and there's nothing better than IPS. I'm waiting for HDR to buy my next monitor, but at 5 years old my Dell ST2220T is getting long in the tooth...
  • ajlueke - Wednesday, July 20, 2016 - link

    Thanks for the review Ryan,

    I think the results are quite interesting, and the games chosen really help show the advantages and limitations of the different architectures. When you compare the GTX 1080 to its price predecessor, the 980 Ti, you are getting an almost universal ~25%-30% increase in performance.
    Against rival AMD's R9 Fury X, there is more of a mixed bag. As the resolutions increase, the bandwidth provided by the HBM memory on the Fury X really narrows the gap, sometimes trimming the margin to less than 10%, specifically in games optimized more for DX12 ("Hitman", "AotS"). But in other games, specifically "Rise of the Tomb Raider" which boasts extremely high res textures, the 4GB memory size on the Fury X starts to limit its performance in a big way. On average, there is again a ~25%-30% performance increase, with much higher game to game variability.
    This data lets a little bit of air out of the argument I hear a lot that AMD makes more "future proof" cards. While many Nvidia 900 series users may have to upgrade as more and more games switch to DX12 based programming, AMD Fury users will be in the same boat as those same games come with higher and higher res textures, due to the smaller amount of memory on board.
    While Pascal still doesn't show the jump in DX12 versus DX11 that AMD's GPUs enjoy, it does at least show an increase or at least remain at parity.
    So what you have is a card that wins in every single game tested, at every resolution over the price predecessors from both companies, all while consuming less power. That is a win pretty much any way you slice it. But there are elements of Nvidia’s strategy and the card I personally find disappointing.
    I understand Nvidia wants to keep features specific to the higher margin professional cards, but avoiding HBM2 altogether in the consumer space seems to be a missed opportunity. I am a huge fan of the mini ITX gaming machines. And the Fury Nano, at the $450 price point is a great card. With an NVMe motherboard and NAS storage the need for drive bays in the case is eliminated, the Fury Nano at only 6” leads to some great forward thinking, and tiny designs. I was hoping to see an explosion of cases that cut out the need for supporting 10-11” cards and tons of drive bays if both Nvidia and AMD put out GPUs in the Nano space, but it seems not to be. HBM2 seems destined to remain on professional cards, as Nvidia won’t take the risk of adding it to a consumer Titan or GTX 1080 Ti card and potentially again cannibalize the higher margin professional card market. Now case makers don’t really have the same incentive to build smaller cases if the Fury Nano will still be the only card at that size. It’s just unfortunate that it had to happen because NVidia decided HBM2 was something they could slap on a pro card and sell for thousands extra.
    But also what is also disappointing about Pascal stems from the GTX 1080 vs GTX 1070 data Ryan has shown. The GTX 1070 drops off far more than one would expect based off CUDA core numbers as the resolution increases. The GDDR5 memory versus the GDDR5X is probably at fault here, leading me to believe that Pascal can gain even further if the memory bandwidth is increased more, again with HBM2. So not only does the card limit you to the current mini-ITX monstrosities (I’m looking at you bulldog) by avoiding HBM2, it also very likely is costing us performance.
    Now for the rank speculation. The data does present some interesting scenarios for the future. With the Fury X able to approach the GTX 1080 at high resolutions, most specifically in DX12 optimized games, it seems extremely likely that the Vega GPU will be able to surpass the GTX 1080, especially if the greatest limitation (4GB HBM) is removed with the supposed 8GB of HBM2 and games move more and more to DX12. I imagine when it launches it will be the 4K card to get, as the Fury X already acquits itself very well there. For me personally, I will have to wait for the Vega Nano to realize my Mini-ITX dreams, unless of course, AMD doesn't make another Nano edition card and the dream is dead. A possibility I dare not think about.
  • eddman - Wednesday, July 20, 2016 - link

    The gap getting narrower at higher resolutions probably has more to do with chips' designs rather than bandwidth. After all, Fury is the big GCN chip optimized for high resolutions. Even though GP104 does well, it's still the middle Pascal chip.

    P.S. Please separate the paragraphs. It's a pain, reading your comment.
  • Eidigean - Wednesday, July 20, 2016 - link

    The GTX 1070 is really just a way for Nvidia to sell GP104's that didn't pass all of their tests. Don't expect them to put expensive memory on a card where they're only looking to make their money back. Keeping the card cost down, hoping it sells, is more important to them.

    If there's a defect anywhere within one of the GPC's, the entire GPC is disabled and the chip is sold at a discount instead of being thrown out. I would not buy a 1070 which is really just a crippled 1080.

    I'll be buying a 1080 for my 2560x1600 desktop, and an EVGA 1060 for my Mini-ITX build; which has a limited power supply.
  • mikael.skytter - Wednesday, July 20, 2016 - link

    Thanks Ryan! Much appreciated.
  • chrisp_6@yahoo.com - Wednesday, July 20, 2016 - link

    Very good review. One minor comment to the article writers - do a final check on grammer - granted we are technical folks, but it was noticeable especially on the final words page.
  • madwolfa - Wednesday, July 20, 2016 - link

    It's "grammar", though. :)
  • Eden-K121D - Thursday, July 21, 2016 - link

    Oh the irony
  • chrisp_6@yahoo.com - Thursday, July 21, 2016 - link

    oh snap, that is some funny stuff right there
