Theoreticals

As with any new architecture, we want to take a few moments to look at theoretical performance. These numbers shouldn’t be taken too seriously for cross-vendor comparisons, but they often tell us quite a bit about the interesting architectural changes that occur from one generation to the next.

3DMark Vantage Pixel Fill

Our first theoretical test is perhaps the most perplexing: 3DMark Vantage’s pixel fill test. Typically this test is memory bandwidth bound: the test has the ROPs pushing as many pixels as possible with as little overhead as possible, which in turn shifts the bottleneck to a mix of ROP throughput and the memory bandwidth needed to feed those ROPs.

Compared to the GTX 580, the GTX 680 has almost exactly the same amount of memory bandwidth (192GB/sec) but only 86% of the theoretical ROP throughput (32Gpix vs. 37Gpix). In short, it shouldn’t outperform the GTX 580 here, and yet it beats the 580 by 33%.
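For reference, this is how the theoretical figures above fall out of the published reference specifications; a quick back-of-the-envelope sketch (the ROP counts, core clocks, bus widths, and effective memory clocks below are the reference specs for each card, nothing here is measured):

    # Theoretical pixel fill and memory bandwidth from reference specs
    cards = {
        "GTX 580": {"rops": 48, "core_mhz": 772,  "bus_bits": 384, "mem_mhz_eff": 4008},
        "GTX 680": {"rops": 32, "core_mhz": 1006, "bus_bits": 256, "mem_mhz_eff": 6008},
    }

    for name, c in cards.items():
        gpix = c["rops"] * c["core_mhz"] / 1000.0            # Gpixels/sec
        gbps = c["bus_bits"] / 8 * c["mem_mhz_eff"] / 1000.0  # GB/sec
        print(f"{name}: {gpix:.1f} Gpix/s, {gbps:.1f} GB/s")

    # GTX 580: 37.1 Gpix/s, 192.4 GB/s
    # GTX 680: 32.2 Gpix/s, 192.3 GB/s  (roughly 86% of the GTX 580's pixel rate)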

Why does it do this? That’s the hard question to answer. As we mentioned in our look at GK104’s architecture, NVIDIA did make some minor incremental improvements to their ROPs coming from GF114, such as slightly improved compression and improved polygon merging. One of those may very well be the contributing factor, particularly the compression improvements, since this is typically a memory bandwidth-bottlenecked test. Alternatively, it’s interesting to note that the difference between the two video cards is almost identical to the difference in core clock. The GTX 560 Ti’s results tend to blow a hole in this theory, but it bears consideration.
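As a quick sanity check on the clock-scaling theory, the ratio of the reference core clocks is only slightly below the observed pixel fill gain (again, just arithmetic on the published specs):

    # Core clock ratio vs. the ~33% pixel fill advantage observed
    gtx580_mhz, gtx680_mhz = 772, 1006      # reference core clocks
    print(gtx680_mhz / gtx580_mhz)          # ~1.30, vs. the ~1.33x gap in the pixel fill test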

In any case, it’s an interesting turn of events and hopefully one that isn’t simply an edge case. As we’ve seen in our benchmarks, the GTX 680 has strong performance – even if its lead over the 7970 diminishes with resolution – but compared to the GTX 580 in particular it needs strong ROP performance across all games in order to deliver good performance at high resolutions with anti-aliasing.

3DMark Vantage Texture Fill

Our second theoretical test is 3DMark Vantage’s texture fill test, which unsurprisingly has the GTX 680 handily clobbering all prior NVIDIA cards. NVIDIA’s inclusion of 128 texture units on GK104, versus 64 on their previous-generation GPUs, gives the GTX 680 far better texturing performance, and the 30%+ core clock advantage only serves to further widen the gap.
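A similar back-of-the-envelope calculation shows why the gap is so wide here; these are peak bilinear texel rates from the reference unit counts and clocks, a sketch rather than a measurement:

    # Peak texel rate = texture units * core clock (reference specs)
    print("GTX 580:", 64 * 772 / 1000, "GTexels/s")     # ~49.4
    print("GTX 680:", 128 * 1006 / 1000, "GTexels/s")   # ~128.8, roughly 2.6x the GTX 580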

DirectX11 Detail Tessellation Sample - Normal

DirectX11 Detail Tessellation Sample - Max

Our third theoretical test is the set of settings we use with Microsoft’s Detail Tessellation sample program from the DX11 SDK. Overall, while NVIDIA didn’t make any significant changes to their tessellation hardware (the peak triangle rate is still 4/cycle), they have been working on further improving performance at absurdly high tessellation factors. You can see some of this in action at the max factor setting, but even then we’re running into a general performance wall, since the Detail Tessellation program can’t reach the absolute highest tessellation factors NVIDIA’s hardware supports.
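Assuming that 4 triangles/cycle figure simply scales with the core clock on both generations, the theoretical peak rates work out as follows (a sketch from reference clocks, not a measurement):

    # Peak tessellated triangle rate = 4 tris/cycle * core clock (assumed for both cards)
    print("GTX 580:", 4 * 772 / 1000, "Gtris/s")    # ~3.1
    print("GTX 680:", 4 * 1006 / 1000, "Gtris/s")   # ~4.0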

Unigine Heaven

Our final theoretical test is Unigine Heaven 2.5, a benchmark that straddles the line between a synthetic benchmark and a real-world benchmark: the engine is licensed, but no notable DX11 games have shipped on it yet. In any case, the Heaven benchmark is notable for its heavy use of tessellation, which makes it largely a proxy test for tessellation performance. Here we can see the GTX 680 shoot well ahead of the GTX 580 – by more than we saw in the DX11 Detail Tessellation sample – but at the same time there’s a lot more going on in Heaven than just tessellation.

Honestly, at this point in time I’m not sure just how much more tessellation performance is going to matter. Until DX11 is the baseline API for games, tessellation remains an add-on feature, which means it’s being used to add fine detail to specific models rather than being applied to everything in a game world. This demands good tessellation performance at high factors, but at the same time it’s subject to diminishing returns in image quality as triangles reach single-pixel sizes and smaller. To that end I’m still waiting for the day when tessellation scales similarly to textures – that is, by using full MIP chaining of displacement maps – at which point we can evaluate tessellation the way we evaluate texturing, both in terms of the performance hit and the difference in image quality.
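To make the idea concrete, here is a purely hypothetical sketch (not how any current driver or game engine actually works) of what “MIP chaining” tessellation could look like: pick a tessellation factor from the patch’s screen-space size, then pick a displacement map mip level to match, the same way texture LOD is derived from texels-per-pixel. Every name and constant here is illustrative only:

    import math

    def tess_factor_and_mip(edge_len_pixels, max_factor=64, displacement_res=256):
        """Hypothetical LOD selection: target a fixed number of pixels per
        generated triangle edge, then pick the displacement mip whose texel
        density roughly matches that triangle density."""
        target_pixels_per_edge = 8.0  # assumed quality target
        factor = min(max_factor, max(1.0, edge_len_pixels / target_pixels_per_edge))
        # Analogous to texture mipmapping: as fewer displacement texels fall
        # under each generated triangle, step down to a coarser mip level.
        mip = max(0, int(math.log2(displacement_res / factor)))
        return factor, mip

    for edge in (16, 64, 256, 1024):  # patch edge length on screen, in pixels
        print(edge, tess_factor_and_mip(edge))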

Comments

  • sngbrdb - Friday, March 30, 2012 - link

    *from : P
  • Mombasa69 - Wednesday, April 4, 2012 - link

    This is just a rebadged mid-range card. The 680 has less memory bandwidth than GPUs brought out 4 years ago lol, what a rip-off. I can see the big fat directors at Nvidia laughing at all the mugs that have gone out and bought one, thinking this is the real big boy to replace the 580... muppets. lol.
  • N4v1N - Wednesday, April 4, 2012 - link

    Nvidia is the bestest! No AMD is the betterest!
    lol...
  • CeriseCogburn - Friday, April 6, 2012 - link

    Yes, Nvidia clocked the RAM over 6GHz because their RAM controller is so rockin'.
    In any case, the 7970 is now being overclocked, and both cards are reaching 7GHz (effective) RAM.
    Unfortunately the 7970 still winds up behind most of the time, even in 2560x1200 triple-screen gaming.
  • raghu78 - Saturday, April 7, 2012 - link

    In the reference Radeon HD 7970 and XFX Radeon HD 7970 reviews, the DirectX 11 compute shader fluid simulation performance is far higher than in this review.

    http://www.anandtech.com/show/5261/amd-radeon-hd-7...

    http://www.anandtech.com/show/5314/xfxs-radeon-hd-...

    http://images.anandtech.com/graphs/graph5314/43383...

    Reference HD 7970 - 133 and XFX HD 7970 - 145. In this review the reference HD 7970 scores 115.5.

    What has changed between these reviews? Has performance actually decreased with the latest drivers?
  • oddnutz - Thursday, April 12, 2012 - link

    Well, I have been an ATI fanboi forever, so I am due a GFX upgrade, which would have already happened if ATI had priced their latest cards similarly to previous generations. I will watch ATI prices over the next few weeks, but it looks like I might be turning green soon.
  • blanarahul - Friday, April 13, 2012 - link

    Actually the GTX 680 REFERENCE BOARD was designed for 375 Watts of power.
    It has a total of 2 6-pin and one 8-pin connector on the board! I realized this after seeing the back of the board.
  • Commander Bubble - Thursday, April 19, 2012 - link

    I agree with some of the sensible posts littered in here that The Witcher 2 should be included as a comparison point, most notably with the ubersampling setting enabled.
    I run 2x GTX 580 SLI @ 1920 and I can't manage a minimum 60fps with that turned on. That would be a good test for current cards as it absolutely hammers them.

    Also, I don't know whether CeriseCogburn is right or wrong, and I don't care, but I'm just sick of seeing his name in the comment list. Go outside and meet people, do something else. You are clearly spending way too much time on here...
  • beiker44 - Tuesday, April 24, 2012 - link

    I can't wait to get one...or wait for the bad ace Dual 690!!! decisions decisions
  • Oxford Guy - Thursday, July 5, 2012 - link

    "At the end of the day NVIDIA already had a strong architecture in Fermi, so with Kepler they’ve gone and done the most logical thing to improve their performance: they’ve simply doubled Fermi."

    Fermi Lite, you mean.

    "Now how does the GTX 680 fare in load noise? The answer depends on what you want to compare it to. Compared to the GTX 580, the GTX 680 is practically tied – no better and no worse – which reflects NVIDIA’s continued use of a conservative cooling strategy that favors noise over temperatures."

    No, the 680's cooling performance is inferior because it doesn't use a vapor chamber. Nvidia skimped on the cooling to save money, it seems.
