Compute

On the other hand, compared to our Kepler cards the GTX 980 introduces a number of benefits. Higher CUDA core occupancy is going to be extremely useful in compute benchmarks, as will the larger L2 cache and the 96KB of shared memory per SMM. Even more importantly, compared to GK104 (GTX 680/770), GTX 980 inherits the compute enhancements that were introduced in GK110 (GTX 780/780 Ti), including changes that relieved pressure on register file bandwidth and capacity. So although GTX 980 is not strictly a compute card – it is first and foremost a graphics card – it has a lot of resources available to spend on compute.

As always we’ll start with LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
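Why ray tracing maps so well to GPUs comes down to parallelism: every pixel's primary ray can be tested independently. A minimal illustrative sketch (not SmallLuxGPU's actual kernel; the scene and numbers are invented for illustration):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Geometric ray-sphere intersection test (discriminant >= 0).

    Each pixel's primary ray is fully independent of every other
    pixel's, which is why ray tracing maps so well onto the
    thousands of parallel threads a GPU provides.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0

# Trace a tiny 4x4 "image". In an OpenCL ray tracer this double loop
# would instead be one work-item per pixel, all running concurrently.
hits = 0
for y in range(4):
    for x in range(4):
        direction = (x - 1.5, y - 1.5, 4.0)
        if ray_sphere_hit((0.0, 0.0, 0.0), direction, (0.0, 0.0, 8.0), 2.0):
            hits += 1
```

Because no ray depends on any other, the GPU's occupancy improvements translate almost directly into samples per second here.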

Compute: LuxMark 2.0

Out of the gate GTX 980 takes off like a rocket. AMD’s cards could easily best even GTX 780 Ti here, but GTX 980 wipes out AMD’s lead and then some. At 1.6M samples/sec, GTX 980 is 15% faster than R9 290X and 54% faster than GTX 780 Ti. This, as it’s important to remind everyone, is for a part that technically only has 71% of the CUDA cores of GTX 780 Ti. So per CUDA core, GTX 980 delivers over 2x the LuxMark performance of GTX 780 Ti. Meanwhile against GTX 680 and GTX 780 the lead is downright silly. GTX 980 comes close to tripling its GK104 based predecessors.
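The per-CUDA-core claim is easy to sanity-check from the shipping core counts (2048 for GTX 980, 2880 for GTX 780 Ti) and the 54% overall lead measured above:

```python
# Per-core throughput = overall performance ratio / core-count ratio.
cores_980, cores_780ti = 2048, 2880

core_ratio = cores_980 / cores_780ti      # ~0.71, as stated in the text
per_core_gain = 1.54 / core_ratio         # 54% overall lead / core ratio

print(f"GTX 980 has {core_ratio:.0%} of GTX 780 Ti's CUDA cores")
print(f"Per-core LuxMark throughput: {per_core_gain:.2f}x GTX 780 Ti")
```

That works out to roughly 2.17x the LuxMark work per CUDA core, consistent with the "over 2x" figure.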

I’ve spent some time pondering this, and considering that GTX 750 Ti looked very good in this test as well, it’s clear that Maxwell’s architecture has a lot to do with it. I can’t rule out NVIDIA also throwing in some driver optimizations here, but a big part is being played by the architecture itself. GTX 750 Ti and GTX 980 share the general Maxwell architecture and 2MB of L2 cache, while it seems we can rule out GTX 980’s larger 96KB shared memory, since GTX 750 Ti did not have that. This may simply come down to those CUDA core occupancy improvements, especially when comparing GTX 980 to GTX 780 Ti.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. We’re not due for a benchmark suite refresh until the end of the year; however, as CLBenchmark does not know what to make of GTX 980 and is rather old overall, we’ve upgraded to CompuBench 1.5 for this review.

Compute: CompuBench 1.5 - Face Detection

The first sub-benchmark is Face Detection, which like LuxMark puts GTX 980 in a very good light. It’s quite a bit faster than GTX 780 Ti or R9 290X, and comes close to trebling GTX 680.

Compute: CompuBench 1.5 - Optical Flow

The second sub-benchmark of Optical Flow on the other hand sees AMD put GTX 980 in its place. GTX 980 fares only as well as GTX 780 Ti here, which means performance per CUDA core is up, but not enough to offset the difference in cores. And it doesn’t get GTX 980 anywhere close to beating R9 290X. As a computer vision test this can be pretty memory bandwidth intensive, so this may be a case of GTX 980 succumbing to its lack of memory bandwidth rather than a shader bottleneck.
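The bandwidth-bound hypothesis can be framed with a simple roofline model: attainable throughput is the lesser of a card's compute peak and its arithmetic intensity times memory bandwidth. The bandwidth figures below are from the cards' public spec sheets; the FP32 peaks are approximate and the intensity value is purely illustrative:

```python
peak_bw = {           # GB/s, from spec sheets
    "GTX 980":    224,
    "GTX 780 Ti": 336,
    "R9 290X":    320,
}
peak_flops = {        # GFLOPS, FP32, approximate
    "GTX 980":    4612,
    "GTX 780 Ti": 5046,
    "R9 290X":    5632,
}

def attainable_gflops(card, intensity):
    """Classic roofline: min(compute peak, intensity * bandwidth)."""
    return min(peak_flops[card], intensity * peak_bw[card])

# At a low, bandwidth-bound intensity of 4 FLOPs/byte, the GTX 980's
# 224GB/s bus caps it well below the 336GB/s GTX 780 Ti:
for card in peak_bw:
    print(card, attainable_gflops(card, 4.0), "GFLOPS attainable")
```

In that bandwidth-bound regime the GTX 980's shader advantages simply can't come into play, which would explain why it only ties GTX 780 Ti here.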

Compute: CompuBench 1.5 - Particle Simulation 64K

The final sub-benchmark of the particle simulation puts GTX 980 back on top, and by quite a lot. NVIDIA does well in this benchmark to start with – GTX 780 Ti is the number 2 result – and GTX 980 only improves on that. It’s 35% faster than GTX 780 Ti, 73% faster than R9 290X, and GTX 680 is nearly trebled once again. CUDA core occupancy is clearly a big part of these results, though I wonder if the L2 cache and shared memory increase may also be playing a part compared to GTX 780 Ti.
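A particle simulation is almost the ideal occupancy showcase: every particle integrates independently each timestep. A minimal explicit-Euler sketch of the kind of work involved (not CompuBench's actual kernel; the particle setup is invented for illustration):

```python
def step(positions, velocities, dt=0.01, gravity=-9.8):
    """One explicit-Euler step. On a GPU, this loop body would be one
    thread per particle, so how many run concurrently (occupancy)
    largely determines throughput."""
    for i in range(len(positions)):
        velocities[i][1] += gravity * dt                        # accelerate
        for axis in range(3):
            positions[i][axis] += velocities[i][axis] * dt      # advect
    return positions, velocities

# 1000 particles launched horizontally from a height of 10 units.
pos = [[0.0, 10.0, 0.0] for _ in range(1000)]
vel = [[1.0, 0.0, 0.0] for _ in range(1000)]
for _ in range(100):
    step(pos, vel)
```

With 64K such independent updates per step, the per-particle state also fits neatly into shared memory and L2, which may be where GTX 980's larger caches come in.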

Our 3rd compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

Traditionally a benchmark that favored AMD, Vegas still sees the GTX 980 fall short of the R9 290X, but it closes the gap significantly compared to GTX 780 Ti. This test is a mix of simple shaders and blends, so it’s likely we’re seeing a bit of both improvements here: more ROPs for the blending, and improved shader occupancy for when the task is shader-bound.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

This is another success story for the GTX 980. In both single precision tests the GTX 980 comes out on top, holding a significant lead over the R9 290X. Furthermore we’re seeing some big performance gains over GTX 780 Ti, and outright massive gains over GTX 680, to the point that GTX 980 comes just short of quadrupling GTX 680’s performance in single precision explicit. This test is basically all about shading/compute, so we expect we’re seeing a mix of improvements to CUDA core occupancy, shared memory/cache improvements, and against GTX 680 those register file improvements.

Compute: Folding @ Home: Explicit, Double Precision

Double precision on the other hand is going to be the GTX 980’s weak point for obvious reasons. GM204 is a graphics GPU first and foremost, so it only has very limited 1:32 rate FP64 performance, leaving it badly outmatched by anything with a better rate. This includes GTX 780/780 Ti (1:24), AMD’s cards (1:8 FP64), and even ancient GTX 580 (1:8). If you want to do real double precision work, NVIDIA clearly wants you buying their bigger, compute-focused products such as GTX Titan, Quadro, and Tesla.
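The practical impact of those ratios is easy to quantify: peak FP64 throughput is just the FP32 peak divided by the rate denominator. The FP32 figures below are rough, spec-sheet-derived approximations:

```python
cards = {
    # name:        (approx FP32 GFLOPS, FP64 rate denominator)
    "GTX 980":     (4612, 32),   # 1:32
    "GTX 780 Ti":  (5046, 24),   # 1:24
    "R9 290X":     (5632, 8),    # 1:8
    "GTX 580":     (1581, 8),    # 1:8
}

for name, (fp32, ratio) in cards.items():
    fp64 = fp32 / ratio
    print(f"{name}: ~{fp64:.0f} GFLOPS FP64 (1:{ratio})")
```

Even with a third of the GTX 980's FP32 peak, the ancient GTX 580's 1:8 rate leaves it well ahead in raw FP64 throughput, which is why GM204 is so badly outmatched here.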

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Once again NVIDIA’s compute performance shows a strong improvement, even under DirectCompute. Leads of 17% over GTX 780 Ti and 88% over GTX 680 show that NVIDIA is getting more work done per CUDA core than ever before, though this isn’t enough to surpass the even faster R9 290X.

Overall, while NVIDIA can’t win every compute benchmark here, the fact that they are winning so many and by so much – and otherwise not terribly losing the rest – shows that NVIDIA and GM204 have corrected the earlier compute deficiencies in GK104. As an x04 part GM204 may still be first and foremost consumer graphics, but if it’s faced with a compute workload most of the time it’s going to be able to power on through it just as well as it does with games and other graphical workloads.

It would be nice to see GPU compute put to better use than it is today, and having strong(er) compute performance in consumer parts is going to be one of the steps that needs to happen for that outcome to occur.


  • atlantico - Friday, September 19, 2014 - link

    I'm sorry, but I couldn't care less about power efficiency on an enthusiast GPU unit. The 780Ti was a 250W card and that is a great card because it performs well. It delivers results.

    I have a desktop computer, a full ATX tower. Not a laptop. PSUs are cheap enough that it's not even a question.

    So please, stuff the power requirements of this GTX980. The fact is if it sucked 250W and was more powerful, then it would have been a better card.
  • A5 - Friday, September 19, 2014 - link

    They'll be more than happy to sell you a $1000 GM210 Titan Black Ultra GTX, I'm sure.

    Fact is that enthusiast cards aren't really where they make their money anymore, and they're orienting their R&D accordingly.
  • Fallen Kell - Friday, September 19, 2014 - link

    Exactly. Not only that, the "real" money is in getting the cards in OEM systems which sell hundreds of thousands of units. And those are very power and cooling specific.
  • Antronman - Sunday, September 21, 2014 - link

    Yep, yep, and yep again.

    For OEMs, the difference between spending 10 more or less dollars is huge.

    More efficient cards means less power from the PSU. It's one of the reasons why GeForce cards are so much more popular in OEM systems.

    I have to disagree with the statement about enthusiast cards not being of value to Nvidia.

    Many people are of the opinion that Nvidia has always had better performance than AMD/ATI.
  • Tikcus9666 - Friday, September 19, 2014 - link

    For desktop cards power consumption is meaningless to the 99%
    Price/Performance is much more important. If Card A uses 50W more under full load than Card B, but performs around the same and is £50 cheaper to buy, then at 15p per kWh it would take 6666 hours of running to get your £50 back. Add to this that if Card A produces more heat into the room, in winter months your heating system will use less energy, meaning it takes even longer to get your cash back.... tldr Wattage is only important in laptops and tablets and things that need batteries to run
  • jwcalla - Friday, September 19, 2014 - link

    At least in this case it appears the power efficiency allows for a decent overclock. So you can get more performance and heat up your room at the same time.

    Of course I'm sure they're leaving some performance on the table for a refresh next year. Pascal is still a long way off so they have to extend Maxwell's lifespan. Same deal as with Fermi and Kepler.
  • Icehawk - Friday, September 19, 2014 - link

    When I built my mATX current box one criteria was that it be silent, or nearly so, while still being a full power rig (i7 OC'd & 670), and the limitation really is GPU draw - thankfully NV's power draw had dropped enough by the 6xx series that I was able to use a fanless PSU and get my machine dead silent. I am glad I don't need a tower box that sounds like a jet anymore :)

    I would love to see them offer a high TDP, better cooled, option though for the uber users who won't care about costs, heat, sound and are just looking for the max performance to drive those 4k/surround setups.
  • Yojimbo - Friday, September 19, 2014 - link

    I agree that power consumption in itself isn't so important to most consumer desktop users, as long as they don't require extra purchases to accommodate the cards. But since power consumption and noise seem to be directly related for GPUs, power efficiency is actually an important consideration for a fair number of consumer desktop users.
  • RaistlinZ - Sunday, September 21, 2014 - link

    Yeah, but they're still limited by the 250W spec. So the only way to give us more and more powerful GPU's while staying within 250W is to increase efficiency.
  • kallogan - Friday, September 19, 2014 - link

    dat beast
