Compute: What You Leave Behind?

As always our final set of benchmarks is a look at compute performance. As we mentioned in our discussion on the Kepler architecture, GK104’s improvements seem to be compute neutral at best, and harmful to compute performance at worst. NVIDIA has made it clear that they are focusing first and foremost on gaming performance with GTX 680, and in the process are deemphasizing compute performance. Why? Let’s take a look.

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.

Compute: Civilization V

Remember when NVIDIA used to sweep AMD in Civ V Compute? Times have certainly changed. AMD’s shift to GCN has rocketed them to the top of our Civ V Compute benchmark, and in what’s probably the most realistic DirectCompute benchmark we have, the GTX 680 loses to the GTX 580, never mind the 7970. It’s not by much, mind you, but in this case the GTX 680, for all of its functional units and its core clock advantage, doesn’t have the compute performance to stand toe-to-toe with the GTX 580.

At first glance our initial assumptions would appear to be right: Kepler’s scheduler changes have weakened its compute performance relative to Fermi.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

SmallLuxGPU 2.0d4

CivV was bad; SmallLuxGPU is worse. At this point the GTX 680 can’t even compete with the GTX 570, let alone anything Radeon. In fact the GTX 680 has more in common with the GTX 560 Ti than it does anything else.

On that note, since we weren’t going to significantly change our benchmark suite for the GTX 680 launch, NVIDIA had a solid hunch that we were going to use SmallLuxGPU in our tests, and spoke of it specifically. Apparently NVIDIA has put absolutely no time into optimizing their now all-important Kepler compiler for SmallLuxGPU, choosing to focus on games instead. While that doesn’t tell us how much of the GTX 680’s poor showing here is down to the compiler versus a general loss in compute performance, it does offer at least a slim hope that NVIDIA can improve their compute performance with future compiler work.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL AES encryption routine that encrypts and decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.

AESEncryptDecrypt

Starting with our AES encryption benchmark NVIDIA begins a recovery. GTX 680 is still technically slower than GTX 580, but only marginally so. If nothing else it maintains NVIDIA’s general lead in this benchmark, and is the first sign that GTX 680’s compute performance isn’t all bad.

For our fourth compute benchmark we wanted to reach out and grab something for CUDA, given the popularity of NVIDIA’s proprietary API. Unfortunately we were largely met with failure, for much the same reasons we were when the Radeon HD 7970 launched. Just as many OpenCL programs were hand optimized and didn’t know what to do with the Southern Islands architecture, many CUDA applications didn’t know what to do with GK104 and its Compute Capability 3.0 feature set.

To be clear, NVIDIA’s “core” CUDA functionality remains intact; PhysX, video transcoding, etc. all work. But third-party applications are a much bigger issue. Among the CUDA programs that failed were NVIDIA’s own Design Garage (a GTX 480 showcase package), AccelerEyes’ GBENCH MATLAB benchmark, and the latest Folding@Home client. Since our goal here is to stick to consumer/prosumer applications in reflection of the fact that the GTX 680 is a consumer card, we did somewhat limit ourselves by ruling out a number of professional CUDA applications, but there’s no reason to expect that compatibility would have fared any better there.
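
To illustrate what “didn’t know what to do with” tends to mean in practice, here’s a minimal, hypothetical CUDA sketch (not code from any of the applications above) of the two most common forward-compatibility traps: a hard-coded whitelist of known compute capabilities, and device binaries shipped without PTX for the driver to JIT onto a new architecture.

```cuda
// Hypothetical sketch of a forward-compatibility failure on a Compute
// Capability 3.0 part like GK104; not taken from any application named above.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device found.\n");
        return 1;
    }
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);

    // Trap #1: a whitelist of architectures that existed when the app shipped.
    // GK104 reports 3.0, falls through, and the app refuses to run (or picks a
    // path that was never tested on the new hardware).
    if (prop.major == 1 || prop.major == 2) {
        printf("Recognized GPU, enabling CUDA acceleration.\n");
    } else {
        printf("Unrecognized GPU, disabling CUDA acceleration.\n");
    }

    // Trap #2 needs no code at all: if the application shipped only sm_20/sm_21
    // machine code with no embedded PTX, the driver has nothing it can JIT for
    // sm_30, and every kernel launch fails with "no kernel image is available
    // for execution on the device".
    return 0;
}
```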

We ultimately started looking at distributed computing applications and settled on PrimeGrid, whose CUDA accelerated GENEFER client worked with the GTX 680. Interestingly enough, it primarily uses double precision math; whether that’s a good thing or not is up to the reader, given the GTX 680’s anemic double precision performance.

PrimeGrid GENEFER 1.06: 1325824^32768+1

Because it’s based around double precision math, the GTX 680 does rather poorly here, but the surprising bit is that it does so to a larger degree than we’d expect. The GTX 680’s FP64 performance is 1/24th its FP32 performance, compared to 1/8th on the GTX 580 and 1/12th on the GTX 560 Ti. Still, our expectation would be that performance would at least hold constant relative to the GTX 560 Ti, given that the GTX 680 has more than double the compute performance to offset the larger FP64 gap.
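
As a rough back-of-the-envelope sketch (assuming the reference clocks: a 1006MHz base clock for the GTX 680, and an 822MHz core / 1644MHz shader clock for the GTX 560 Ti), the on-paper FP64 throughput works out to roughly:

```latex
% Theoretical FP64 throughput at reference clocks (an FMA counted as 2 FLOPs)
\begin{align*}
\text{GTX 680:}\quad    & \frac{1536 \times 2 \times 1.006\,\text{GHz}}{24} \approx 129\ \text{GFLOPS} \\
\text{GTX 560 Ti:}\quad & \frac{384 \times 2 \times 1.644\,\text{GHz}}{12} \approx 105\ \text{GFLOPS} \\
\text{Ratio:}\quad      & 129 / 105 \approx 1.22
\end{align*}
```

At those clocks both chips come out to the same 64 FP64 FMAs per core clock, so the on-paper gap essentially reduces to the 1006MHz versus 822MHz core clock difference.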

Instead we find that the GTX 680 takes 35% longer, when on paper it should be 20% faster than the GTX 560 Ti (largely due to the difference in the core clock). This makes for yet another test where the GTX 680 can’t keep up with the GTX 500 series, be it due to the change in the scheduler, or perhaps the greater pressure on the still-64KB L1 cache. Regardless of the reason, it is becoming increasingly evident that NVIDIA has sacrificed compute performance to reach their efficiency targets for GK104, which is an interesting shift from a company that was so gung-ho about compute performance, and a slightly concerning sign that NVIDIA may have lost faith in the GPU computing market for consumer applications.

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16k particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.
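
To give a sense of the technique, here’s a minimal CUDA sketch of the same idea (not the DirectX SDK’s HLSL code; the particle count, tile size, and density function are placeholders): each thread block stages a tile of particle positions in on-chip shared memory, the CUDA analogue of HLSL’s groupshared storage, so the brute-force O(n^2) pass fetches each position from DRAM once per block rather than once per pair.

```cuda
// Minimal sketch of a shared-memory-tiled O(n^2) neighbor pass, in the spirit
// of the DirectX SDK Fluid sample (which uses HLSL groupshared memory).
// The particle count, tile size, and density kernel here are placeholders.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cuda_runtime.h>

#define TILE 256  // threads per block == positions staged per shared-memory tile

__global__ void density_bruteforce(const float2* pos, float* density, int n, float h2)
{
    __shared__ float2 tile[TILE];  // CUDA analogue of HLSL groupshared

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float2 pi = (i < n) ? pos[i] : make_float2(0.f, 0.f);
    float acc = 0.f;

    // Walk the whole particle list one tile at a time.
    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? pos[j] : make_float2(0.f, 0.f);
        __syncthreads();  // tile fully populated before anyone reads it

        // Every thread in the block reads the tile from fast shared memory,
        // so each position is fetched from DRAM once per block, not once per pair.
        for (int k = 0; k < TILE && base + k < n; ++k) {
            float dx = pi.x - tile[k].x;
            float dy = pi.y - tile[k].y;
            float r2 = dx * dx + dy * dy;
            if (r2 < h2)  // crude nearest-neighbor cutoff (placeholder kernel)
                acc += (h2 - r2) * (h2 - r2) * (h2 - r2);
        }
        __syncthreads();  // done with this tile before it gets overwritten
    }

    if (i < n)
        density[i] = acc;
}

int main()
{
    const int n = 16 * 1024;   // 16k particles, as in the SDK sample
    const float h2 = 0.01f;    // smoothing radius squared (arbitrary)

    std::vector<float2> h_pos(n);
    for (int i = 0; i < n; ++i)
        h_pos[i] = make_float2(rand() / (float)RAND_MAX, rand() / (float)RAND_MAX);

    float2* d_pos;
    float* d_den;
    cudaMalloc(&d_pos, n * sizeof(float2));
    cudaMalloc(&d_den, n * sizeof(float));
    cudaMemcpy(d_pos, h_pos.data(), n * sizeof(float2), cudaMemcpyHostToDevice);

    density_bruteforce<<<(n + TILE - 1) / TILE, TILE>>>(d_pos, d_den, n, h2);
    cudaDeviceSynchronize();

    std::vector<float> h_den(n);
    cudaMemcpy(h_den.data(), d_den, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("density[0] = %f\n", h_den[0]);

    cudaFree(d_pos);
    cudaFree(d_den);
    return 0;
}
```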

DirectX11 Compute Shader Fluid Simulation - Nearest Neighbor

Redemption at last? In our final compute benchmark the GTX 680 finally shows that it can still succeed in some compute scenarios, taking a rather impressive lead over both the 7970 and the GTX 580. At this point it’s not particularly clear why the GTX 680 does so well here and only here, but the fact that this is a compute shader program as opposed to an OpenCL program may have something to do with it. NVIDIA needs solid compute shader performance for the games that use it; OpenCL and even CUDA performance, however, can take a backseat.

404 Comments

  • blppt - Thursday, March 22, 2012

    Wondering if you guys could also add a benchmark for one of the current crop of 1GHz core 7970s that are available now (if you've tested any). Otherwise, great review.
  • tipoo - Thursday, March 22, 2012

    With everything being said by Nvidia, I thought this would be a GeForce 8-series-class jump, while it's really nothing close to that and trades blows with AMD's three-month-old card. GCN definitely had headroom, so I can see lower priced, higher clocked AMD cards coming out soon to combat this. Still, I'm glad this will bring things down to sane prices.
  • MarkusN - Thursday, March 22, 2012

    Well to be honest, this wasn't supposed to be Nvidia's successor to the GTX 580 anyway. This graphics card replaces the GTX 560 Ti, not the GTX 580; GK110 will replace the GTX 580. Even if you can argue that the GTX 680 is now their high-end card, it's just a replacement for the GTX 560 Ti, so I can just dream about the performance of the GTX 780 or whatever they're going to call it. ;)
  • tipoo - Thursday, March 22, 2012

    I didn't know that, thanks. Ugh, even more confusing naming schemes.
  • Articuno - Thursday, March 22, 2012

    If this is supposed to replace the 560 Ti then why does it cost $500, and why was it released before the low-end parts instead of before the high-end parts?
  • MarkusN - Thursday, March 22, 2012

    It costs that much because Nvidia realized that it outperforms/trades blows with the HD 7970 and saw an opportunity to make some extra cash, which basically sucks for us consumers. There are those who say that the GTX 680 is cheaper and better than the HD 7970 and think it costs just the right amount, but as usual it's us, the customers, that are getting the shaft again. This card should've been around $300-350 in my opinion, no matter if it beats the HD 7970.
  • coldpower27 - Thursday, March 22, 2012

    Nah, they aren't obligated to give more than what the market will bear; no sense in starting a price war when they can have much fatter margins. It beats the 7970 already; that's just enough.

    Now the ball is in AMD's court, let's see if they can drop prices to compete. $450 would be a nice start, but $400 is necessary to actually cause significant competition.
  • CeriseCogburn - Friday, March 23, 2012

    This whole thing is so nutso but everyone is saying it.
    Let's take a thoughtful, sane view...
    The GTX 580 flagship was just $500, and a week or two ago it was $469 or so.
    In what world, in what release, in the past let's say ten years even, has either card company released their new product with $170 or $200 off their standard flagship price when it was standing near $500 right before the release?
    The answer is it has never, ever happened, not even close, not once.
    With the GTX 580 at $450, there's no way a card 40% faster is going to be dropped in at $300, no matter what rumor Charlie Demerjian at SemiAccurate has made up from thin air as an attack on Nvidia, a very smart one for not-too-bright people it seems.
    Please, feel free to tell me what flagship has ever dropped in cutting nearly $200 off the current flagship price?
    Any of you?!?
  • Lepton87 - Thursday, March 22, 2012

    Because nVidia decided to screw its customers and nickel and dime them. That's why. All because the 7970 underperformed and NV could get away with it.
  • JarredWalton - Thursday, March 22, 2012

    Or: Because NVIDIA and AMD and Intel are all businesses, and when you launch a hot new product and lots of people are excited to get one, you sell at a price premium for as long as you can. Then supply equals demand and then exceeds demand and that's when you start dropping prices. 7970 didn't underperform; people just expected/wanted more. Realistically, we're getting to the point where doubling performance with a process shrink isn't going to happen, and even 50% improvements are rare. 7970 and 680 are a reflection of that fact.
