Compute: What You Leave Behind?

As always our final set of benchmarks is a look at compute performance. As we mentioned in our discussion on the Kepler architecture, GK104’s improvements seem to be compute neutral at best, and harmful to compute performance at worst. NVIDIA has made it clear that they are focusing first and foremost on gaming performance with GTX 680, and in the process are deemphasizing compute performance. Why? Let’s take a look.

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.
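We can't reproduce Civ V's own codec or DirectCompute kernel here, but a minimal decoder for a classic BC1/DXT1 block (purely illustrative, and written in Python rather than HLSL) gives a feel for why this kind of work maps so well to a GPU: every 4x4 texel block decodes independently, so thousands of blocks can be processed in parallel.

```python
import struct

def decode_bc1_block(block: bytes):
    """Decode one 8-byte BC1/DXT1 block into a 4x4 grid of (r, g, b) texels.
    Illustrative only; Civ V uses its own DirectCompute-driven codec."""
    c0, c1, indices = struct.unpack("<HHI", block)  # two RGB565 endpoints + 32 index bits

    def rgb565(c):  # expand a 5:6:5 endpoint to 8-bit channels
        r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
        return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

    p0, p1 = rgb565(c0), rgb565(c1)
    if c0 > c1:  # 4-color mode: two interpolated colors between the endpoints
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:        # 3-color mode: midpoint plus a "transparent" black entry
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]

    # 2 bits per texel, row-major: texel i selects palette[(indices >> 2*i) & 3]
    return [[palette[(indices >> (2 * (4 * y + x))) & 0x3] for x in range(4)]
            for y in range(4)]
```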

Compute: Civilization V

Remember when NVIDIA used to sweep AMD in Civ V Compute? Times have certainly changed. AMD’s shift to GCN has rocketed them to the top of our Civ V Compute benchmark, and meanwhile, in what’s probably the most realistic DirectCompute benchmark we have, the GTX 680 ends up losing to the GTX 580, never mind the 7970. It’s not by much, mind you, but in this case the GTX 680, for all of its functional units and its core clock advantage, doesn’t have the compute performance to stand toe-to-toe with the GTX 580.

At first glance our initial assumptions would appear to be right: Kepler’s scheduler changes have weakened its compute performance relative to Fermi.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

SmallLuxGPU 2.0d4

Civ V was bad; SmallLuxGPU is worse. At this point the GTX 680 can’t even compete with the GTX 570, let alone anything Radeon. In fact the GTX 680 has more in common with the GTX 560 Ti than it does with anything else.

On that note, since we weren’t going to significantly change our benchmark suite for the GTX 680 launch, NVIDIA had a solid hunch that we were going to use SmallLuxGPU in our tests, and spoke of it specifically. Apparently NVIDIA has put absolutely no time into optimizing their now all-important Kepler compiler for SmallLuxGPU, choosing to focus on games instead. While that doesn’t make it clear how much of the GTX 680’s poor showing here is due to the compiler versus a general loss in compute performance, it does offer at least a slim hope that NVIDIA can improve their compute performance over time.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL AES encryption routine that encrypts/decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.
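To illustrate the methodology (average time over repeated runs) rather than the benchmark itself, here is a rough CPU-side sketch using Python's cryptography package as a stand-in for the OpenCL kernel; the buffer size, iteration count, key size, and ECB mode are all placeholder choices, not details of the actual sample.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Placeholder payload: the real benchmark works on an 8K x 8K image (hundreds of MB);
# a 16 MB buffer keeps this sketch quick while preserving the timing structure.
data = os.urandom(16 * 1024 * 1024)
key = os.urandom(16)          # AES-128 key, chosen arbitrarily for the sketch
iterations = 10

cipher = Cipher(algorithms.AES(key), modes.ECB())  # simplest mode; purely illustrative

times = []
for _ in range(iterations):
    encryptor = cipher.encryptor()
    start = time.perf_counter()
    encryptor.update(data)
    encryptor.finalize()
    times.append(time.perf_counter() - start)

print(f"average encrypt time: {1000 * sum(times) / len(times):.1f} ms")
```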

AESEncryptDecrypt

Starting with our AES encryption benchmark NVIDIA begins a recovery. GTX 680 is still technically slower than GTX 580, but only marginally so. If nothing else it maintains NVIDIA’s general lead in this benchmark, and is the first sign that GTX 680’s compute performance isn’t all bad.

For our fourth compute benchmark we wanted to reach out and grab something for CUDA, given the popularity of NVIDIA’s proprietary API. Unfortunately we were largely met with failure, for similar reasons as we were when the Radeon HD 7970 launched. Just as many OpenCL programs were hand optimized and didn’t know what to do with the Southern Islands architecture, many CUDA applications didn’t know what to do with GK104 and its Compute Capability 3.0 feature set.

To be clear, NVIDIA’s “core” CUDA functionality remains intact; PhysX, video transcoding, etc. all work. But third-party applications are a much bigger issue. Among the CUDA programs that failed were NVIDIA’s own Design Garage (a GTX 480 showcase package), AccelerEyes’ GBENCH MATLAB benchmark, and the latest Folding@Home client. Since our goal here is to stick to consumer/prosumer applications, in reflection of the fact that the GTX 680 is a consumer card, we did somewhat limit ourselves by ruling out a number of professional CUDA applications, but there’s no reason to think compatibility there would fare any better.

We ultimately started looking at distributed computing applications and settled on PrimeGrid, whose CUDA-accelerated GENEFER client worked with the GTX 680. Interestingly enough it primarily uses double precision math – whether this is a good thing or not is up to the reader, given the GTX 680’s anemic double precision performance.

PrimeGrid GENEFER 1.06: 1325824^32768+1

Because it’s based around double precision math, the GTX 680 does rather poorly here, but the surprising bit is that it does so to a larger degree than we’d expect. The GTX 680’s FP64 performance is 1/24th its FP32 performance, compared to 1/8th on the GTX 580 and 1/12th on the GTX 560 Ti. Still, our expectation would be that performance would at least hold constant relative to the GTX 560 Ti, given that the GTX 680 has more than double the overall compute performance to offset the larger FP64 gap.

Instead we found that the GTX 680 takes 35% longer, when on paper it should be 20% faster than the GTX 560 Ti (largely due to the difference in the core clock). This makes for yet another test where the GTX 680 can’t keep up with the GTX 500 series, be it due to the change in the scheduler, or perhaps the greater pressure on the still-64KB L1 cache. Regardless of the reason, it is becoming increasingly evident that NVIDIA has sacrificed compute performance to reach their efficiency targets for GK104, which is an interesting shift from a company that was so gung-ho about compute performance, and a slightly concerning sign that NVIDIA may have lost faith in the GPU Computing market for consumer applications.
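For reference, the on-paper math works out roughly as follows. This is a back-of-the-envelope sketch using published core counts and clocks (2 FLOPs per CUDA core per clock, base clock for the GTX 680, hot clock for the GTX 560 Ti); real GENEFER throughput also depends on memory behavior, so treat it as a sanity check rather than a prediction.

```python
def peak_fp64_gflops(cuda_cores, shader_clock_ghz, fp64_ratio):
    """Peak FP64 throughput, assuming FMA (2 FLOPs/clock/core) and the given FP32:FP64 ratio."""
    peak_fp32 = cuda_cores * 2 * shader_clock_ghz  # GFLOPS
    return peak_fp32 / fp64_ratio

gtx_680 = peak_fp64_gflops(1536, 1.006, 24)      # ~129 GFLOPS at 1/24 rate
gtx_560_ti = peak_fp64_gflops(384, 1.645, 12)    # ~105 GFLOPS at 1/12 rate

print(f"GTX 680 vs GTX 560 Ti, on paper: {gtx_680 / gtx_560_ti:.2f}x")  # roughly 1.2x
```

That ballpark 20% paper advantage is what makes the measured 35% deficit so striking.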

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16k particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.
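The core of that workload is a brute-force pairwise neighbor search, sketched below in NumPy purely for illustration (the particle count matches the sample, while the 2D positions and interaction radius are placeholders). The SDK version runs the equivalent loop as a DX11 compute shader, with each thread group staging particle data through shared (groupshared) memory so it only has to be fetched from DRAM once per group rather than once per pair.

```python
import numpy as np

N = 16384        # particle count used by the SDK sample
H = 0.012        # interaction (smoothing) radius; illustrative value only

rng = np.random.default_rng(0)
pos = rng.random((N, 2)).astype(np.float32)   # random 2D particle positions in [0, 1)

def neighbor_counts(positions, h):
    """Brute-force O(n^2) pass: for each particle, count neighbors within radius h."""
    counts = np.empty(len(positions), dtype=np.int32)
    for i in range(len(positions)):
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)  # squared distances to all particles
        counts[i] = np.count_nonzero(d2 < h * h) - 1          # exclude the particle itself
    return counts

print(neighbor_counts(pos, H)[:8])
```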

DirectX11 Compute Shader Fluid Simulation - Nearest Neighbor

Redemption at last? In our final compute benchmark the GTX 680 finally shows that it can still succeed in some compute scenarios, taking a rather impressive lead over both the 7970 and the GTX 580. At this point it’s not particularly clear why the GTX 680 does so well here and only here, but the fact that this is a compute shader program as opposed to an OpenCL program may have something to do with it. NVIDIA needs solid compute shader performance for the games that use it; OpenCL and even CUDA performance however can take a backseat.

Comments

  • will54 - Thursday, March 22, 2012 - link

    I noticed in the review they said this was based on the GF114, not the GF110, but then they mention that this is the flagship card for Nvidia. Does this mean that this will be the top Nvidia card until the GTX 780, or are they going to bring out a more powerful card in the next couple of months based off the GF110, such as a GTX 685?
  • von Krupp - Friday, March 23, 2012 - link

    That depends entirely on how AMD responds. If AMD were to respond with a single GPU solution that convincingly trumps the GTX 680 (this is extremely improbable), then yes, you could expect GK110.

    However, I expect Nvidia to hold on to GK110 and instead answer the dual-GPU HD 7990 with a dual-GK104 GTX 690.
  • Sq7 - Thursday, March 22, 2012 - link

    ...my 6950 still plays everything smooth as ice at ultra settings :o Eye candy, check. Tessellation, check. No worries, check. To be honest I am not that interested in the current generation of gfx cards. When UE4 comes out I think it will be an optimal time to upgrade.

    But mostly in the end $500 is just too much for a graphics card. And I don't care if the Vatican made it. When I need to upgrade there will always be a sweet little card with my name on it at $300 - $400 be it blue or green. And this launch has just not left me drooling enough to even consider going out of my price range. If Diablo 3 really blows on my current card... Maybe. But somehow I doubt it.
  • ShieTar - Friday, March 23, 2012 - link

    That just means you need a bigger monitor. Or newer games ;-)

    Seriously though, good for you.

    I have two crossfired, overclocked 6950s feeding my 30'', and I still find myself playing MMOs like SWTOR or Rift with shadows and AA switched off, so that I have a chance to stay above 40 FPS even in scenes with large groups of characters and effects on the screen at once. The same is true for most offline RPGs, like DA2 and The Witcher 2.

    I don't think I have played any games that hit 60 FPS @ 2560x1600 @ "Ultra Settings" except for games that are 5-10 years old.

    Of course, I won't be paying the $500 any more than you will (or 500€ in my case), because stepping up just one generation of GPUs never makes much sense. Even if it is a solid step up, as with this generation, you still pay the full price for only a 20% to 25% performance increase. That's why I usually skip at least one generation, like going from 2x260 to 2x6950 last summer. That's when you really get your money's worth.
  • von Krupp - Friday, March 23, 2012 - link

    Precisely.

    I jumped up from a single GeForce 7800 GT (paired with an Athlon 64 3200+) to dual HD 7970s (paired with an i7-3820). At present, there's nothing I can't crank all the way up at 2560x1440, though I don't foresee being able to continue that within two years. I got 7 years of use out of the previous rig (2005-2012) using a 17" 1280x1024 monitor, and I expect to get at least four out of this one at 1920x1080 on my U2711.

    Long story short, consoles make it easy to not have to worry about frequent graphics upgrades so that when you finally do upgrade, you can get your money's worth.
  • cmdrdredd - Thursday, March 22, 2012 - link

    Why is Anandtech using Crysis Warhead still and not Crysis 2 with the High Resolution textures and DX11 modification?
  • Malih - Thursday, March 22, 2012 - link

    Pricing is better, but the 7970 is not much worse than the 680, like some have claimed (well, leaks).

    With similar pricing, AMD is not that far off, although it remains to be seen whether AMD will lower the price.

    For me, I'm a mainstream guy, so I'll see how the mainstream parts perform, and whether AMD will lower the price on their current mainstream parts (78x0). I was thinking about getting a 7870, but AMD's pricing is too high for me; it gets them money in some markets, but not from my pocket.
  • CeriseCogburn - Tuesday, March 27, 2012 - link

    AMD is $120 too high. That's not chump change. That's breathe-down-your-throat, 1000% game-changing territory at any other time on AnandTech!
  • nyran125 - Friday, March 23, 2012 - link

    Some games it wins, others it doesn't. But it's a pretty damn awesome card regardless.
  • asrey1975 - Friday, March 23, 2012 - link

    You're better off with an AMD card.

    Personally, I'm still thinking about buying 2x 6870s to replace my 5870, which runs BF3 no problem on my 27" 1920x1200 Dell monitor.

    It will cost me $165 each, so for $330 all up it's still cheaper than any $500 card (insert brand/model) and will totally kick ass over the 680 or 7970!
