Synthetics

As always, we’ll also take a quick look at synthetic performance. As a relatively straightforward, wider implementation of Pascal, the GTX 1080 Ti shouldn't offer too many surprises here.

Synthetic: TessMark, Image Set 4, 64x Tessellation

With TessMark, we find that the GTX 1080 Ti offers 26% better tessellation performance than the GTX 1080, and 63% better performance than the GTX 980 Ti. NVIDIA has always built their architectures geometry-heavy, and the GTX 1080 Ti further adds to that lead.

Finally, to look at texel and pixel fillrates, we have the Beyond3D Test Suite. This suite offers a slew of additional tests – many of which we use behind the scenes or in our earlier architectural analysis – but for now we’ll stick to simple pixel and texel fillrates.

Synthetic: Beyond 3D Suite - Pixel Fillrate

Synthetic: Beyond 3D Suite - Texel Fillrate

As it turns out, the pixel fillrate results for the GTX 1080 Ti are a bit surprising. The GTX 1080 Ti doesn’t dominate by as much as I would have expected given its massive memory bandwidth advantage and additional ROP throughput. Not that a 25% increase over the GTX 1080 is anything to sneeze at, but I wonder if we’re looking at one of the consequences of the unusual way NVIDIA has cut down GP102 for the GTX 1080 Ti. We haven’t seen NVIDIA disable a single ROP/memory channel at the high-end in this manner before.

As for texel fillrate, the GTX 1080 Ti excels. In fact it does a bit better than I’d otherwise expect based on the specifications. This could be a sign that GTX 1080 is a bit bandwidth limited at times when it comes to texel throughput, as that’s the facet of performance the GTX 1080 Ti has improved upon the most.
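The gaps described above can be sanity-checked against theoretical throughput. A quick sketch, with the caveat that the ROP/TMU counts and boost clocks below are NVIDIA's published spec-sheet figures (an assumption on my part), not measurements from these benchmarks:

```python
# Back-of-the-envelope theoretical throughput for the two cards.
# ROP/TMU counts, boost clocks, and memory bandwidth are taken from
# NVIDIA's published specifications (assumed, not measured here).
specs = {
    # name: (ROPs, TMUs, boost clock in MHz, memory bandwidth in GB/s)
    "GTX 1080 Ti": (88, 224, 1582, 484),
    "GTX 1080":    (64, 160, 1733, 320),
}

def pixel_fillrate(rops, clock_mhz):
    """Theoretical pixel fillrate in GPixels/s: ROPs x clock."""
    return rops * clock_mhz / 1000

def texel_fillrate(tmus, clock_mhz):
    """Theoretical texel fillrate in GTexels/s: TMUs x clock."""
    return tmus * clock_mhz / 1000

ti, gtx = specs["GTX 1080 Ti"], specs["GTX 1080"]

px_gain = pixel_fillrate(ti[0], ti[2]) / pixel_fillrate(gtx[0], gtx[2]) - 1
tx_gain = texel_fillrate(ti[1], ti[2]) / texel_fillrate(gtx[1], gtx[2]) - 1
bw_gain = ti[3] / gtx[3] - 1

print(f"theoretical pixel fillrate advantage:  {px_gain:.0%}")  # ~26%
print(f"theoretical texel fillrate advantage:  {tx_gain:.0%}")  # ~28%
print(f"memory bandwidth advantage:            {bw_gain:.0%}")  # ~51%
```

On paper, then, the pixel fillrate gap (~25%) lands right where the ROP-by-clock math says it should, despite the 51% bandwidth advantage, while the measured texel result outrunning its theoretical ~28% edge is what suggests the GTX 1080 is occasionally bandwidth-limited on texel throughput.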

Comments

  • close - Monday, March 13, 2017 - link

I was talking about optimizing Nvidia's libraries. When you're using an SDK to develop a game you're relying a lot on that SDK. And if that's exclusively optimized for one GPU/driver combination, you're not going to develop an alternate engine that's also optimized for a completely different GPU/driver. And there's a limit to how much you can optimize for AMD when you're building a game using Nvidia's SDK.

    Yes, the developer could go ahead and ignore any SDK out there (AMD or Nvidia) just so they're not lazy but that would only bring worse results equally spread across all types of GPUs, and longer development times (with the associated higher costs).

    You have the documentation here:
    https://docs.nvidia.com/gameworks/content/gamework...

    AMD offers the same services technically but why would developers go for it? They're optimizing their game for just 25% of the market. Only now is AMD starting to push with the Bethesda partnership.

    So to summarize:
    -You cannot touch Nvidia's *libraries and code* to optimize them for AMD
    -You are allowed to optimize your game for AMD without losing any kind of support from Nvidia but when you're basing it on Nvidia's SDK there's only so much you can do
    -AMD doesn't really support developers much with this since optimizing a game based on Nvidia's SDK seems to be too much effort even for them, and AMD would rather have developers using the AMD libraries but...
    -Developers don't really want to put in triple the effort to optimize for AMD also when they have only 20% market share compared to Nvidia's 80% (discrete GPUs)
    -None of this is illegal, it's "just business" and the incentive for developers is already there: Nvidia has the better cards so people go for them, it's logical that developers will follow
  • eddman - Monday, March 13, 2017 - link

    Again, most of those gameworks effects are CPU only. It does NOT matter at all what GPU you have.

    As for GPU-bound gameworks, they are limited to just a few in-game effects that can be DISABLED in the options menu.

    The main code of the game is not gameworks related and the developer can optimize it for AMD. Is it clear now?

    Sure, it sucks that GPU-bound gameworks effects cannot be optimized for AMD and I don't like it either, but they are limited to only a few cosmetic effects that do not have any effect on the main game.
  • eddman - Monday, March 13, 2017 - link

Not to mention that a lot of gameworks games do not use any GPU-bound effects at all. Only CPU.
  • eddman - Monday, March 13, 2017 - link

    Just one example: http://www.geforce.com/whats-new/articles/war-thun...

    Look for the word "CPU" in the article.
  • Meteor2 - Tuesday, March 14, 2017 - link

    Get a room you two!
  • MrSpadge - Thursday, March 9, 2017 - link

AMD demonstrated the "cache thing" (which seems to be tile based rendering, as in Maxwell and Pascal) to result in a 50% performance increase. So 20% IPC might be far too conservative. I wouldn't bet on a 50% clock speed increase, though. nVidia designed Pascal for high clocks, it's not just the process. AMD seems to intend the same, but can they pull it off equally well? If so, I'm inclined to ask "why did it take you so long?"
  • FalcomPSX - Thursday, March 9, 2017 - link

I look forward to vega and seeing how much performance it brings, and I really hope it does end up giving performance around 1080 level for typically lower and more reasonable AMD pricing. But honestly, I expect it to probably come close to, but not quite match, a 1070 in dx11, surpass it in dx12, and at a much lower price.
  • Midwayman - Thursday, March 9, 2017 - link

Even if it's just 2 polaris chips of performance, you're past 1070 level. I think conservative is 1080 @ $400-450. Not that there won't be a cut down part at 1070 level, but I'd be really surprised if that is the full die version.
  • Meteor2 - Tuesday, March 14, 2017 - link

I think that sometimes Volta is overlooked. Whatever Vega brings, I feel Volta is going to top it.

    AMD is catching up with Intel and Nvidia, but outside of mainstream GPUs and HEDT CPUs, they've not done it yet.
  • Meteor2 - Tuesday, March 14, 2017 - link

Mind you, Volta is only coming to Tesla this year, and not consumer until next year. So AMD should have a competitive full stack for a year. Good times!
