Theoreticals

As with any new architecture, we want to take a few moments to look at theoretical performance. These numbers shouldn't be taken too seriously for cross-vendor comparison, but they often tell us quite a bit about the architectural improvements that occur from one generation to the next.

3DMark Vantage Pixel Fill

Our first theoretical test is perhaps the most perplexing: 3DMark Vantage's pixel fill test. This test is typically memory bandwidth bound: by its nature it has the ROPs pushing as many pixels as possible with as little overhead as possible, which shifts the bottleneck to a mix of ROP performance and the memory bandwidth needed to feed those ROPs.

Compared to the GTX 580, the GTX 680 has almost exactly the same amount of memory bandwidth (192GB/sec) and only 86% of the theoretical ROP performance (37Gpix vs. 32Gpix). In short, it shouldn’t outperform the GTX 580 here, and yet it outperforms the 580 by 33%.
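The spec-sheet numbers above fall straight out of unit counts and clocks. As a quick sanity check, here is a back-of-the-envelope sketch using the cards' published stock specifications (48 ROPs at 772MHz core and a 384-bit bus at 4008MHz effective for the GTX 580; 32 ROPs at 1006MHz core and a 256-bit bus at 6008MHz effective for the GTX 680):

```python
# Back-of-the-envelope theoretical throughput from published stock specs.

def pixel_fill_gpix(rops: int, core_mhz: float) -> float:
    """Peak pixel fill rate in Gpixels/sec: one pixel per ROP per clock."""
    return rops * core_mhz / 1000.0

def bandwidth_gbps(mem_mhz_effective: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/sec."""
    return mem_mhz_effective * 1e6 * bus_bits / 8 / 1e9

gtx580_fill = pixel_fill_gpix(48, 772)    # ~37.1 Gpix/s
gtx680_fill = pixel_fill_gpix(32, 1006)   # ~32.2 Gpix/s

gtx580_bw = bandwidth_gbps(4008, 384)     # ~192 GB/s
gtx680_bw = bandwidth_gbps(6008, 256)     # ~192 GB/s

print(f"GTX 580: {gtx580_fill:.1f} Gpix/s, {gtx580_bw:.0f} GB/s")
print(f"GTX 680: {gtx680_fill:.1f} Gpix/s, {gtx680_bw:.0f} GB/s")
print(f"680/580 ROP throughput ratio: {gtx680_fill / gtx580_fill:.1%}")
```

Both cards land at roughly 192GB/sec of bandwidth, while the GTX 680's theoretical ROP throughput comes out well below the GTX 580's, which is exactly why its 33% lead in this test is so surprising.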

Why does it do this? That’s the hard thing to answer. As we mentioned in our look at GK104’s architecture, NVIDIA did make some minor incremental improvements to their ROPs coming from GF114, such as slightly improved compression and improved polygon merging. One of those may very well be the contributing factor, particularly the compression improvements since this is a typically memory bandwidth bottlenecked test. Alternatively, it’s interesting to note that the difference between the two video cards is almost identical to the difference in the core clock. GTX 560 Ti’s results tend to blow a hole in this theory, but it bears consideration.

In any case, it's an interesting turn of events and hopefully one that isn't simply an edge case. As we've seen in our benchmarks, the GTX 680 delivers strong performance – even if its lead over the 7970 diminishes with resolution – but compared to the GTX 580 in particular it needs strong ROP performance across all games in order to deliver good performance at high resolutions with anti-aliasing.

3DMark Vantage Texture Fill

Our second theoretical test is 3DMark Vantage's texture fill test, which unsurprisingly has the GTX 680 handily clobbering all prior NVIDIA cards. NVIDIA's inclusion of 128 texture units on GK104, versus 64 on their previous-generation GPUs, gives the GTX 680 far better texturing performance. The 30%+ core clock difference only serves to further widen the gap.
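As a rough illustration of where that gap comes from, peak texel rate scales with texture unit count times core clock. This is a sketch from published specs, counting one bilinear texel per unit per clock:

```python
# Peak bilinear texel rate from unit counts and stock core clocks.

def texel_fill_gtex(tmus: int, core_mhz: float) -> float:
    """Peak texel rate in Gtexels/sec: one texel per texture unit per clock."""
    return tmus * core_mhz / 1000.0

gtx580_tex = texel_fill_gtex(64, 772)     # ~49.4 Gtex/s
gtx680_tex = texel_fill_gtex(128, 1006)   # ~128.8 Gtex/s

# Doubled texture units times the ~30% clock advantage compound:
print(f"GTX 680 texturing advantage: {gtx680_tex / gtx580_tex:.2f}x")  # ~2.61x
```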

DirectX11 Detail Tessellation Sample - Normal

DirectX11 Detail Tessellation Sample - Max

Our third theoretical test is the set of settings we use with Microsoft's Detail Tessellation sample program from the DX11 SDK. Overall, while NVIDIA didn't make any significant changes to their tessellation hardware (peak triangle rate is still 4/cycle), they have been working on further improving performance at absurdly high tessellation factors. You can see some of this in action at the max factor setting, but even then we're running into a general performance wall, since the Detail Tessellation program can't reach the absolute highest tessellation factors NVIDIA's hardware supports.
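For reference, a 4 triangle/cycle setup rate translates into peak geometry throughput as follows. This is a sketch from stock core clocks; real workloads won't sustain these peaks:

```python
# Peak geometry throughput: triangles per clock times core clock.

def tri_rate_gtri(tris_per_clock: int, core_mhz: float) -> float:
    """Peak triangle rate in Gtriangles/sec."""
    return tris_per_clock * core_mhz / 1000.0

# Both architectures peak at 4 triangles/cycle, so the GTX 680's
# theoretical edge here comes entirely from its higher core clock.
print(f"GTX 580: {tri_rate_gtri(4, 772):.2f} Gtris/s")   # ~3.09
print(f"GTX 680: {tri_rate_gtri(4, 1006):.2f} Gtris/s")  # ~4.02
```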

Unigine Heaven

Our final theoretical test is Unigine Heaven 2.5, a benchmark that straddles the line between synthetic and real-world: the engine is licensed, but no notable DX11 games built on it have shipped yet. In any case, the Heaven benchmark is notable for its heavy use of tessellation, which makes it largely a proxy for tessellation performance. Here we can see the GTX 680 shoot well ahead of the GTX 580 – by more than we saw in the DX11 Detail Tessellation sample – but at the same time there's a lot more going on in Heaven than just tessellation.

Honestly, at this point in time I'm not sure just how much more tessellation performance is going to matter. Until DX11 is the baseline API for games, tessellation remains an add-on feature, used to add fine detail to specific models rather than applied to everything in a game world. That demands good performance at high tessellation factors, but it's also subject to diminishing returns: the improvement in image quality shrinks as triangles approach single-pixel sizes and smaller. To that end, I'm still waiting for the day we see tessellation scale the way textures do – that is, by using full MIP chaining of displacement maps – at which point we can evaluate tessellation performance much like texture performance, both in measuring the performance hit and in judging the difference in image quality.
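To illustrate the diminishing-returns point, here's a toy calculation of on-screen triangle size as the tessellation factor rises. The 256×256-pixel patch size and the ~2f² triangle-count estimate for a quad domain are illustrative assumptions, not measurements:

```python
# Toy model: average on-screen triangle size vs. tessellation factor,
# assuming a single quad patch covering 256x256 pixels.

def px_per_triangle(patch_pixels: int, factor: int) -> float:
    """Rough pixels per triangle; a factor-f quad domain yields ~2*f^2 triangles."""
    return patch_pixels / (2 * factor * factor)

for factor in (4, 16, 64):
    print(f"factor {factor:>2}: ~{px_per_triangle(256 * 256, factor):.0f} px per triangle")
```

By factor 64 the average triangle is already down to a handful of pixels, so further subdivision buys very little visible detail.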

404 Comments
  • unphased - Saturday, July 28, 2012 - link

    I'm using Precision X to OC my Gigabyte GTX 670 and I've got the Mem clock offset at +550MHz. In the chart log it continues to run at 3557MHz even while I am not playing any games.

    Is this normal? I even switched off Aero to check and it hasn't changed.
  • Gastec - Thursday, November 15, 2012 - link

    I know I'm nobody, not like you Americans who are always "somebody" or "something", but I can't just sit here and read and not react. It's enough that I have to put up with them on YouTube; why do you condone them here as well? I'm referring to the likes of Wreckage and CeriseCogburn, users who are obviously individuals paid to do publicity, here in favor of nVidia. Is that something acceptable here? Am I too old or just not "in tune" with the workings of the Internet, or what?
  • Gastec - Thursday, November 15, 2012 - link

    Having white American genetic traits that allow you to be a convincing seller of how-to-become-rich-and-successful books and TV religion might be a prized quality in your American lands, but in my lands we know one thing, and one thing only: that 60 fps is what we want in our games, be they old or new. I don't care if the card can do 120 fps, 10 more than the 110 fps that the other brand can do. That's irrelevant. My monitor works at 60 Hz. If one card can do 55 fps MIN/100 fps MAX, I'll take it over the other one that can do 40 fps MIN/120 fps MAX any day. So why don't you think about that and convince me to buy your card. With pictures, of course.
  • BrotherofCats - Monday, June 10, 2013 - link

    I have had nothing but trouble with this card from the start. It crashes a dozen times a day, mostly when I am playing a video, but sometimes when I am using my word processor or Excel. Mostly it freezes the screen for about a minute, then comes back with a pop-up box stating that my driver crashed and has recovered. About twice a day it crashes completely, the two peripheral screens going white and the central screen gray, and I have to hard-boot my computer to get it working again. I have asked for help on the Nvidia forums and Facebook page, and none of their solutions, a clean install of the driver or using an earlier driver, has worked. I saw on the Facebook page that other people are having the same problem. I will probably have to scrap this expensive turkey and get something cheaper that works. Not recommended at all.
