Theoreticals

As with any new architecture, we want to take a few moments to look at theoretical performance. These numbers shouldn’t be taken too seriously for cross-vendor comparisons, but they often reveal interesting architectural improvements from one generation to the next.

3DMark Vantage Pixel Fill

Our first theoretical test is perhaps the most perplexing: 3DMark Vantage’s pixel fill test. This test is typically memory bandwidth bound, as it has the ROPs pushing as many pixels as possible with as little overhead as possible, which shifts the bottleneck to a mix of ROP performance and the memory bandwidth needed to feed those ROPs.

Compared to the GTX 580, the GTX 680 has almost exactly the same amount of memory bandwidth (192GB/sec) and only 86% of the theoretical ROP performance (32Gpix vs. 37Gpix). In short, it shouldn’t outperform the GTX 580 here, and yet it does so by 33%.
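The gap is easy to sanity check. A quick back-of-the-envelope sketch of the numbers above, using the published specs (48 ROPs at 772MHz and a 384-bit bus at 4GHz effective for the GTX 580; 32 ROPs at 1006MHz and a 256-bit bus at 6GHz effective for the GTX 680):

```python
# Theoretical throughput from published specs.
# Peak pixel fillrate = ROP count x core clock; peak memory bandwidth =
# (bus width in bits / 8) x effective memory clock. Clocks are in MHz.

def rop_rate_gpix(rops, core_mhz):
    """Peak pixel fillrate in Gpixels/sec."""
    return rops * core_mhz / 1000

def bandwidth_gbps(bus_bits, effective_mem_mhz):
    """Peak memory bandwidth in GB/sec."""
    return (bus_bits / 8) * effective_mem_mhz / 1000

gtx580_fill = rop_rate_gpix(48, 772)    # ~37.1 Gpix/s
gtx680_fill = rop_rate_gpix(32, 1006)   # ~32.2 Gpix/s

print(f"GTX 580: {gtx580_fill:.1f} Gpix/s, {bandwidth_gbps(384, 4008):.0f} GB/s")
print(f"GTX 680: {gtx680_fill:.1f} Gpix/s, {bandwidth_gbps(256, 6008):.0f} GB/s")
print(f"GTX 680 / GTX 580 ROP rate: {gtx680_fill / gtx580_fill:.2f}")
```

On paper the GTX 680 should come in behind the GTX 580, which is what makes the 33% real-world lead so surprising.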

Why does it do this? That’s the hard thing to answer. As we mentioned in our look at GK104’s architecture, NVIDIA did make some minor incremental improvements to their ROPs coming from GF114, such as slightly improved compression and improved polygon merging. One of those may very well be the contributing factor, particularly the compression improvements, since this test is typically memory bandwidth bottlenecked. Alternatively, it’s interesting to note that the difference between the two video cards is almost identical to the difference in core clocks. The GTX 560 Ti’s results tend to blow a hole in this theory, but it bears consideration.

In any case, it’s an interesting turn of events and hopefully one that isn’t simply an edge case. As we’ve seen in our benchmarks, the GTX 680 delivers strong performance – even if its lead over the 7970 diminishes with resolution – but compared to the GTX 580 in particular, it needs strong ROP performance across all games in order to deliver good performance at high resolutions and anti-aliasing levels.

3DMark Vantage Texture Fill

Our second theoretical test is 3DMark Vantage’s texture fill test, which unsurprisingly has the GTX 680 handily clobbering all prior NVIDIA cards. NVIDIA’s inclusion of 128 texture units on GK104, versus 64 on their previous generation GPUs, gives the GTX 680 far better texturing performance. The 30%+ core clock difference only serves to further widen the gap.
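The same back-of-the-envelope math applies here, assuming the published specs (64 texture units at 772MHz for the GTX 580 vs. 128 at 1006MHz for the GTX 680):

```python
# Peak bilinear texel rate = texture unit count x core clock (MHz).
def texel_rate_gtex(tmus, core_mhz):
    """Peak texture fillrate in GTexels/sec."""
    return tmus * core_mhz / 1000

gtx580_tex = texel_rate_gtex(64, 772)    # ~49.4 GTexels/s
gtx680_tex = texel_rate_gtex(128, 1006)  # ~128.8 GTexels/s

# Doubling the texture units and the ~30% clock bump compound
print(f"GTX 680 texturing lead: {gtx680_tex / gtx580_tex:.1f}x")
```

Doubled texture units times a 30% clock advantage works out to roughly a 2.6x theoretical lead, which is why the real-world gap in this test is so lopsided.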

DirectX11 Detail Tessellation Sample - Normal

DirectX11 Detail Tessellation Sample - Max

Our third theoretical test is the set of settings we use with Microsoft’s Detail Tessellation sample program out of the DX11 SDK. Overall, while NVIDIA didn’t make any significant changes to their tessellation hardware (the peak triangle rate is still 4/cycle), they have been working on further improving performance at absurdly high tessellation factors. You can see some of this in action at the max factor setting, but even then we’re running into a general performance wall, since the Detail Tessellation program can’t reach the absolute highest tessellation factors NVIDIA’s hardware supports.
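Since the peak setup rate is unchanged at 4 triangles per cycle, any theoretical geometry advantage comes purely from clock speed. A rough sketch, again using the published core clocks:

```python
# Peak triangle throughput = triangles per clock x core clock (MHz).
def tri_rate_gtris(tris_per_clock, core_mhz):
    """Peak geometry rate in Gtris/sec."""
    return tris_per_clock * core_mhz / 1000

gtx580_tris = tri_rate_gtris(4, 772)   # ~3.1 Gtris/s
gtx680_tris = tri_rate_gtris(4, 1006)  # ~4.0 Gtris/s

# With the per-clock rate fixed, the lead simply tracks the clock delta
print(f"GTX 680 theoretical geometry lead: {gtx680_tris / gtx580_tris - 1:.0%}")
```

Anything beyond that ~30% in the benchmarks reflects the efficiency work at high tessellation factors rather than raw setup throughput.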

Unigine Heaven

Our final theoretical test is Unigine Heaven 2.5, a benchmark that straddles the line between synthetic and real-world, as the engine is licensed but no notable DX11 games have been produced with it yet. In any case, the Heaven benchmark is notable for its heavy use of tessellation, which makes it largely a proxy for tessellation performance. Here we can see the GTX 680 shoot well ahead of the GTX 580 – by more than we saw in the DX11 Detail Tessellation sample – but at the same time there’s a lot more going on in Heaven than just tessellation.

Honestly, at this point I’m not sure just how much more tessellation performance is going to matter. Until DX11 is the baseline API for games, tessellation is still an add-on feature, which means it’s being used to add fine detail to specific models rather than being applied to everything in a game world. This demands good tessellation performance at high factors, but it’s also subject to diminishing returns in image quality as triangles reach single-pixel sizes and smaller. To that end, I’m still waiting for the day when tessellation scales similarly to textures – that is, through full MIP chaining of displacement maps – at which point we can evaluate tessellation performance the way we do texture performance, both in measuring the performance hit and in judging the difference in image quality.

Comments (404)

  • george1976 - Saturday, March 24, 2012 - link

    Excuse me sir but I think you've been reading the wrong article.
  • Ryan Smith - Thursday, March 22, 2012 - link

    Just a heads up guys, we're a bit behind schedule and are still adding images and tables, so hold on.
  • casteve - Thursday, March 22, 2012 - link

    whew - thought my coffee hadn't kicked in :)
  • Granseth - Thursday, March 22, 2012 - link

Hi, liked the review but it's missing a few things, though I expect them to be covered at a later time in a new article. Like the improved multi-screen support, SLI, overclocking and things like that.

But I would like to know more about this turbo as well. What I am curious about is if it will boost minimum framerate as well as average framerate, or if the GPU is so taxed when it hits minimum framerate that it won't have anything extra to offer up to its turbo.
  • Ryan Smith - Thursday, March 22, 2012 - link

    Minimum framerates. -16% power target on the left, stock on the right.

    Crysis Min: 21.4...21.9

    Dirt3 Min: 73.4....77.1

    So to answer your question, it depends on the game.
  • Jamahl - Thursday, March 22, 2012 - link

Just a comment on the power draw - I wonder if you could test the 680 and 7970 in a different game, say for example Batman or BF3. The reason for this is that the 7970 wins in Metro while losing in most of the others, and I wonder if there is something going on regarding power draw.
  • CeriseCogburn - Friday, March 23, 2012 - link

    See the GTX 680 win in Metro 2033 all the way on up 1920 and 2560 resolutions >
    http://hothardware.com/Reviews/NVIDIA-GeForce-GTX-...

    What's different is AAA is used, as well as the Sandy E runs stock at 3,300 and is not overclocked.
    What appears to be a big problem for AMD cards is they have been offloading work to the cpu much more than the Nvidia cards, and even more so in CF v SLI, so when you don't have a monster CPU with a monster overclock to boot the AMD cards lose even worse.
  • SlyNine - Friday, March 23, 2012 - link

    Anandtech uses AAA for Metro.

You need to look again, the difference is no DOF and hothardware is running at lower settings.

    you, fail.
  • CeriseCogburn - Tuesday, March 27, 2012 - link

    Oh I didn't fail, I showed the 680 winning in the game that is claimed it loses in.
    That's a WIN for me, period.
  • SlyNine - Friday, April 27, 2012 - link

Ok so your $500 video card can win at lower settings than the $459 video card.
