Compute

With GTX 980 NVIDIA surprised us with a stunning turnaround in GPU compute performance, which saw them capable of reaching the top in many compute benchmarks they couldn't crack before. GTX 970 meanwhile should benefit from the same architectural and driver improvements, though since compute performance is closely tied to shader performance, this is also a case where the gap between the GTX 970 and GTX 980 stands to be at its widest.

As always we'll start with LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
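The reason ray tracing maps so well to GPUs is that every ray is independent, so a renderer can simply launch one GPU thread per ray (or per pixel) and let the hardware's enormous thread count do the rest. SmallLuxGPU does this in OpenCL; purely as an illustrative sketch, here is the same one-thread-per-pixel idea reduced to a single hard-coded sphere, written in C++ AMP for consistency with the other examples on this page (the names and values are ours, not LuxRender's).

#include <amp.h>
#include <vector>

using namespace concurrency;

// Toy illustration of why ray tracing suits GPUs: each pixel gets its own
// GPU thread, which traces one primary ray against a single sphere.
// Hypothetical example; SmallLuxGPU's real kernels are OpenCL and far larger.
int main() {
    const int width = 640, height = 480;
    std::vector<float> image(width * height);

    array_view<float, 2> out(height, width, image);
    out.discard_data(); // output only, no need to copy host data to the GPU

    parallel_for_each(out.extent, [=](index<2> px) restrict(amp) {
        // Orthographic ray through this pixel, looking down -Z
        float x = (px[1] - width  * 0.5f) / width;
        float y = (px[0] - height * 0.5f) / height;
        // Hit test against a sphere of radius 0.3 centered on the view axis
        out[px] = (x * x + y * y < 0.3f * 0.3f) ? 1.0f : 0.0f;
    });
    out.synchronize(); // copy the rendered image back into 'image'
    return 0;
}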

Compute: LuxMark 2.0

With GTX 980 having taken the top spot here, GTX 970 has enough headroom left that it still maintains a small lead over the R9 290XU. So even with the GTX 970's weaker performance, it still manages to outperform AMD's flagship in this case.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. We're not due for a benchmark suite refresh until the end of the year; however, as CLBenchmark does not know what to make of GTX 980 and is rather old overall, we've upgraded to CompuBench 1.5 for this review.

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

In the cases where the GTX 980 does well, so does the GTX 970. In the cases where the GTX 980 wasn't fast enough to top the charts, the GTX 970 is similarly close behind. Overall, compared to AMD's lineup we see the whole gamut, from a tie between the GTX 970 and R9 290XU to victories for either card.

Our 3rd compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

As expected, GTX 970 sheds a bit of performance here. AMD still holds a lead here overall, and against GTX 970 that lead is a little bit larger.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.
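To make the single versus double precision split more concrete, the sketch below (our own toy, not FAHCore 17 code) pushes the same dependent multiply-add chain through the default GPU first in float and then in double using C++ AMP. On a card like GTX 970, where FP64 throughput is 1/32 of FP32, the double pass comes back dramatically slower, which is the same cliff the double precision results below expose.

#include <amp.h>
#include <chrono>
#include <iostream>
#include <vector>

using namespace concurrency;

// Runs a dependent multiply-add chain over a large buffer on the GPU in the
// requested precision and returns the wall-clock time in seconds.
// Our own toy benchmark, not FAHCore 17's code; a real benchmark would also
// do a warm-up pass to exclude JIT and copy overhead from the timing.
template <typename T>
double time_madd(int n, int iters) {
    std::vector<T> data(n, T(1));
    array_view<T, 1> v(n, data);
    const T mul = T(1.000001f), add = T(0.000001f); // host-side constants

    auto start = std::chrono::high_resolution_clock::now();
    parallel_for_each(v.extent, [=](index<1> i) restrict(amp) {
        T x = v[i];
        for (int k = 0; k < iters; ++k)
            x = x * mul + add; // dependent chain keeps the ALUs busy
        v[i] = x;
    });
    v.synchronize(); // wait for the GPU and copy results back
    return std::chrono::duration<double>(
        std::chrono::high_resolution_clock::now() - start).count();
}

int main() {
    accelerator gpu; // default accelerator
    std::cout << "float:  " << time_madd<float>(1 << 20, 2048) << " s\n";
    if (gpu.supports_double_precision) // FP64 support is optional in C++ AMP
        std::cout << "double: " << time_madd<double>(1 << 20, 2048) << " s\n";
    return 0;
}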

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Compute: Folding @ Home: Explicit, Double Precision

With the GTX 980 holding such a commanding lead here, even the GTX 970's lower performance is still more than enough to easily beat any other card in single precision Folding @ Home workloads. Only in double precision, with NVIDIA's anemic 1:32 ratio, does GTX 970 falter.

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
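For those unfamiliar with it, the C++ AMP programming model boils down to wrapping host data in array_views and passing a restrict(amp) lambda to parallel_for_each; the runtime JIT-compiles that lambda to a DirectCompute shader and handles the host/GPU copies. A minimal vector-add sketch (our own illustration, not part of SystemCompute) looks like this; SystemCompute's individual tests are far heavier, but they are built on these same primitives.

#include <amp.h>
#include <iostream>
#include <vector>

using namespace concurrency;

int main() {
    const int n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // array_views wrap host memory; the runtime copies to/from the GPU as needed
    array_view<const float, 1> av(n, a), bv(n, b);
    array_view<float, 1> cv(n, c);
    cv.discard_data(); // output only, so skip the copy to the GPU

    // restrict(amp) marks the lambda as GPU code; one thread per element,
    // compiled to a DirectCompute shader by the runtime
    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];
    });

    cv.synchronize(); // copy the result back into c
    std::cout << c[0] << std::endl; // prints 3
    return 0;
}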

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Recently this has been a stronger benchmark for AMD cards than NVIDIA cards, and consequently GTX 970 doesn’t enjoy quite the lead it sees elsewhere. Though not too far behind R9 280X and even R9 290, like GTX 980 it can’t crunch numbers quite fast enough to keep up with R9 290XU.

Comments

  • Casecutter - Friday, September 26, 2014 - link

    I'm confident that if we had two of what were the normal "AIB OC customs" of both a 970 and a 290, things between them might not appear so skewed. First, as much as folks want this level of card to get them into 4K, they're not... So it really just boils down to seeing what similar generic OC customs offer and, say, "spar back and forth" @2560x1440 depending on the titles.

    As to power, I wish these reviews would halt the inadequate testing, like it's still 2004! Power (for the complete PC) should be benchmarked for each game, recording the oscillation of power in real time at millisecond resolution, then outputting the mean over the test duration. As we know, boost frequency fluctuates across every title, so the mean for each game is different. Each of those means can then be added up and averaged over the number of titles, which would offer the most straightforward evaluation of power while gaming. Also, as most folks today "Sleep" their computers (and not many idle for more than 10-20 min), I believe the best calculation for power is what a graphics card "suckles" while doing nothing, which is something like 80% of each month. I'd like to see how AMD ZeroCore impacts a machine's power usage over a month's time, versus the savings only during gaming. Considering gaming 3 hours a day constitutes 12.5% of a month, does the 25% difference in power while gaming beat the 5W saved with ZeroCore over the other 80% of that month? Saving energy while using and enjoying something is fine, but wasting watts while doing nothing is incomprehensible.
  • Impulses - Sunday, September 28, 2014 - link

    Ehh, I recently bought 2x custom 290s, but I've no doubt that even with a decent OC the 970 can at the very least still match it in most games... I don't regret the 290s, but I also only paid $350/360 for my WF Gigabyte cards; had I paid closer to $400 I'd be kicking myself right about now.
  • Iketh - Monday, September 29, 2014 - link

    most PCs default to sleeping during long idles and most people shut it off
  • dragonsqrrl - Friday, September 26, 2014 - link

    Maxwell truly is an impressive architecture, I just wish Nvidia would stop further gimping double precision performance relative to single precision with each successive generation of their consumer cards. GF100/110 were capped at 1/8, GK110 was capped at 1/24, and now GM204 (and likely GM210) is capped at 1/32... What's still yet to be seen is how they're capping the performance on GM204, whether it's a hardware limitation like GK104, or a clock speed limitation in firmware like GK110.

    Nvidia: You peasants want any sort of reasonable upgrade in FP64 performance? Pay up.
  • D. Lister - Friday, September 26, 2014 - link

    "Company X: You peasants want any sort of reasonable upgrade in product Y? Pay up."

    Well, that's capitalism for ya... :p. Seriously though, if less DP ability means a cheaper GPU then as a gamer I'm all for it. If a dozen niche DP hobbyists get screwed over, and a thousand gamers get a better deal on a gaming card then why not? Remember what all that bit mining nonsense did to the North American prices of the Radeons?
  • D. Lister - Friday, September 26, 2014 - link

    Woah, it seems they do tags differently here at AT :(. Sorry if the above message appears improperly formatted.
  • Mr Perfect - Friday, September 26, 2014 - link

    It's not you, the italic tag throws in a couple extra line breaks. Bold might too, I seem to remember that mangling a post of mine in the past.
  • D. Lister - Sunday, September 28, 2014 - link

    Oh, okay, thanks for the explanation :).
  • wetwareinterface - Saturday, September 27, 2014 - link

    this^

    You seem to be under the illusion that Nvidia intended to keep shooting themselves in the foot forever by releasing their high-end GPGPU chip under a gaming designation and relying on the driver (which is easy to hack) to keep people from buying a gamer card for workstation loads. Face it, they wised up and charge extra for FP64 and the higher RAM count now. No more cheap workstation cards. The benefit, as already described, is cheaper gaming cards that are designed to be more efficient at gaming and leave the workstation loads to the workstation cards.
  • dragonsqrrl - Saturday, September 27, 2014 - link

    This is only partially true, and I think D. Lister basically suggested the same thing, so I'll just make a single response for both. The argument for price and efficiency would really only be the case for a GK104 type scenario, where on-die FP64 performance is physically limited to 1/24 FP32 due to there being 1/24 the CUDA cores. But what about GK110? There is no reason to limit it to 1/24 SP other than segmentation. There's pretty much no efficiency or price argument there, and we see proof of that in the Titan: no less efficient at gaming and really no more expensive to manufacture outside the additional memory and maybe some additional validation. In other words there's really no justification (or at least certainly not the justification you guys are suggesting) for why the GTX 780 Ti couldn't have had 1/12 SP with 3GB GDDR5 at the same $700 MSRP, for instance. Of course other than further (and in my opinion unreasonable) segmentation.

    This is why I was wondering how they're capping performance in GM204.
