On the other hand, compared to our Kepler cards, GTX 980 introduces a number of benefits. Higher CUDA core occupancy is going to be extremely useful in compute benchmarks, as will the larger L2 cache and the 96KB of shared memory per SMM. More important still, compared to GK104 (GTX 680/770), GTX 980 inherits the compute enhancements that were introduced in GK110 (GTX 780/780 Ti), including changes that relieved pressure on register file bandwidth and capacity. So although GTX 980 is not strictly a compute card – it is first and foremost a graphics card – it has a lot of resources available to spend on compute.

As always we’ll start with LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
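To illustrate why ray tracing maps so well to GPUs, here is a minimal CPU sketch of the per-pixel work. The camera setup, sphere position, and 16x16 resolution are arbitrary illustrative choices, not anything taken from SmallLuxGPU:

```python
import math

def trace_pixel(x, y, width, height):
    """Return True if the camera ray through pixel (x, y) hits the sphere."""
    # Camera at the origin looking down +z; image plane is 1 unit away.
    dx = (x + 0.5) / width - 0.5
    dy = (y + 0.5) / height - 0.5
    dz = 1.0
    inv = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx * inv, dy * inv, dz * inv
    # Sphere of radius 0.5 centered at (0, 0, 3).
    cx, cy, cz, r = 0.0, 0.0, 3.0, 0.5
    b = dx * cx + dy * cy + dz * cz          # projection of center onto ray
    disc = b * b - (cx * cx + cy * cy + cz * cz - r * r)
    return disc >= 0.0 and b - math.sqrt(disc) > 0.0

# Each pixel is computed with no dependence on its neighbors, which is
# exactly the shape of work a GPU's thousands of threads are built for.
image = [[trace_pixel(x, y, 16, 16) for x in range(16)] for y in range(16)]
print(sum(row.count(True) for row in image), "of 256 pixels hit the sphere")
```

A real renderer adds bounces, materials, and sampling on top of this, but the embarrassingly parallel structure – one independent ray (or ray batch) per pixel – is what lets a GPU throw all of its cores at the problem at once.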

Compute: LuxMark 2.0

Out of the gate GTX 980 takes off like a rocket. AMD’s cards could easily best even GTX 780 Ti here, but GTX 980 wipes out AMD’s lead and then some. At 1.6M samples/sec, GTX 980 is 15% faster than R9 290X and 54% faster than GTX 780 Ti. This, it’s important to remind everyone, is for a part that technically has only 71% of the CUDA cores of GTX 780 Ti. So per CUDA core, GTX 980 delivers over 2x the LuxMark performance of GTX 780 Ti. Meanwhile against GTX 680 and GTX 780 the lead is downright silly: GTX 980 comes close to tripling the performance of its GK104 based predecessors.
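As a quick sanity check on that per-core claim, the arithmetic works out as follows (the core counts are the cards' published specs, and the 1.54x speedup is from our results above):

```python
# Per-CUDA-core LuxMark throughput, GTX 980 vs. GTX 780 Ti.
cores_980, cores_780ti = 2048, 2880   # published CUDA core counts
speedup = 1.54                        # whole-card GTX 980 vs. GTX 780 Ti

# Normalize the whole-card speedup by the core-count deficit.
per_core_ratio = speedup * cores_780ti / cores_980
print(f"GTX 980 does {per_core_ratio:.2f}x the work per CUDA core")
```

With 71% of the cores delivering 154% of the performance, each Maxwell CUDA core is getting roughly 2.2x as much LuxMark work done as its GK110 counterpart.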

I’ve spent some time pondering this, and considering that GTX 750 Ti looked very good in this test as well, it’s clear that Maxwell’s architecture has a lot to do with it. I can’t rule out NVIDIA also throwing in some driver optimizations here, but a big part is being played by the architecture itself. GTX 750 Ti and GTX 980 share the same general architecture and 2MB of L2 cache, while it seems we can rule out GTX 980’s larger 96KB shared memory, since GTX 750 Ti did not have that. This may simply come down to those CUDA core occupancy improvements, especially when comparing GTX 980 to GTX 780 Ti.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. We’re not due for a benchmark suite refresh until the end of the year; however, as CLBenchmark does not know what to make of GTX 980 and is rather old overall, we’ve upgraded to CompuBench 1.5 for this review.

Compute: CompuBench 1.5 - Face Detection

The first sub-benchmark is Face Detection, which, like LuxMark, puts GTX 980 in a very good light. It’s quite a bit faster than GTX 780 Ti or R9 290X, and comes close to trebling GTX 680.

Compute: CompuBench 1.5 - Optical Flow

The second sub-benchmark, Optical Flow, on the other hand sees AMD put GTX 980 in its place. GTX 980 fares only as well as GTX 780 Ti here, which means performance per CUDA core is up, but not by enough to offset the difference in core counts. And it doesn’t get GTX 980 anywhere close to beating R9 290X. As a computer vision test this workload can be quite memory bandwidth intensive, so this may be a case of GTX 980 succumbing to its lack of memory bandwidth rather than a shader bottleneck.
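The bandwidth gap is easy to quantify from the published memory specs of each card (bus width in bytes times effective data rate):

```python
# Peak memory bandwidth from published memory specs.
specs = {
    # name: (bus width in bits, effective data rate in GT/s)
    "GTX 980":    (256, 7.0),
    "GTX 780 Ti": (384, 7.0),
    "R9 290X":    (512, 5.0),
}
for name, (bus, rate) in specs.items():
    gbps = bus / 8 * rate   # bytes per transfer * transfers per second
    print(f"{name}: {gbps:.0f} GB/s")
```

At 224GB/s, GTX 980 has only two-thirds the peak memory bandwidth of GTX 780 Ti (336GB/s) and R9 290X (320GB/s), so any workload that leans heavily on memory traffic is going to expose that deficit regardless of shader throughput.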

Compute: CompuBench 1.5 - Particle Simulation 64K

The final sub-benchmark, the 64K particle simulation, puts GTX 980 back on top, and by quite a lot. NVIDIA does well in this benchmark to start with – GTX 780 Ti is the number 2 result – and GTX 980 only improves on that. It’s 35% faster than GTX 780 Ti, 73% faster than R9 290X, and GTX 680 is nearly trebled once again. CUDA core occupancy is clearly a big part of these results, though I wonder if the L2 cache and shared memory increases may also be playing a part compared to GTX 780 Ti.

Our 3rd compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

In what is traditionally a benchmark that favors AMD, GTX 980 doesn’t manage to beat R9 290X, but it closes the gap significantly compared to GTX 780 Ti. This test is a mix of simple shaders and blends, so it’s likely we’re seeing a bit of both improvements here: more ROPs for the blending, and improved shader occupancy for when the task is shader-bound.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

This is another success story for the GTX 980. In both single precision tests the GTX 980 comes out on top, holding a significant lead over the R9 290X. Furthermore we’re seeing some big performance gains over GTX 780 Ti, and outright massive gains over GTX 680, to the point that GTX 980 comes just short of quadrupling GTX 680’s performance in single precision explicit. This test is basically all about shading/compute, so we expect we’re seeing a mix of improvements to CUDA core occupancy, shared memory/cache improvements, and, against GTX 680, those register file improvements.

Compute: Folding @ Home: Explicit, Double Precision

Double precision on the other hand is going to be the GTX 980’s weak point for obvious reasons. GM204 is a graphics GPU first and foremost, so it only has very limited 1:32 rate FP64 performance, leaving it badly outmatched by anything with a better rate. This includes GTX 780/780 Ti (1:24), AMD’s cards (1:8), and even the ancient GTX 580 (1:8). If you want to do real double precision work, NVIDIA clearly wants you buying their bigger, compute-focused products such as GTX Titan, Quadro, and Tesla.
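Those ratios translate directly into theoretical throughput. A rough sketch, using each card's published core count and boost clock (approximate spec-sheet figures, not measured clocks), with FP32 peak taken as 2 FMA ops per core per clock:

```python
# Theoretical FP64 throughput implied by each card's FP32:FP64 ratio.
cards = {
    # name: (shader cores, boost clock in GHz, FP64 rate divisor)
    "GTX 980":    (2048, 1.216, 32),
    "GTX 780 Ti": (2880, 0.928, 24),
    "R9 290X":    (2816, 1.000, 8),
}
for name, (cores, ghz, divisor) in cards.items():
    fp32 = 2 * cores * ghz   # GFLOPS, single precision (FMA = 2 ops/clock)
    fp64 = fp32 / divisor    # GFLOPS, double precision
    print(f"{name}: {fp32:.0f} GFLOPS FP32 -> {fp64:.0f} GFLOPS FP64")
```

Despite leading in FP32, GTX 980's 1:32 rate leaves it with roughly 156 GFLOPS of FP64 against around 704 GFLOPS for R9 290X, which is why the double precision chart looks the way it does.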

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Once again NVIDIA’s compute performance is showing a strong improvement, even under DirectCompute. A 17% lead over GTX 780 Ti and an 88% lead over GTX 680 show that NVIDIA is getting more work done per CUDA core than ever before, though this isn’t enough to surpass the even faster R9 290X.
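The same per-core normalization we applied to LuxMark applies here as well (core counts are the published specs; the whole-card gains are from our results):

```python
# Translate whole-card SystemCompute gains into per-CUDA-core gains.
cores = {"GTX 980": 2048, "GTX 780 Ti": 2880, "GTX 680": 1536}
gains = {"GTX 780 Ti": 1.17, "GTX 680": 1.88}  # GTX 980 vs. each rival

for rival, gain in gains.items():
    per_core = gain * cores[rival] / cores["GTX 980"]
    print(f"Per CUDA core vs. {rival}: {per_core:.2f}x")
```

Normalized this way, GTX 980 is doing roughly 1.6x the work per core of GTX 780 Ti and about 1.4x that of GTX 680 in this test – smaller than the LuxMark per-core gap, but still a clear efficiency gain.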

Overall, while NVIDIA can’t win every compute benchmark here, the fact that they are winning so many by so much – and not losing the rest terribly – shows that NVIDIA and GM204 have corrected the earlier compute deficiencies of GK104. As an x04 part GM204 may still be first and foremost a consumer graphics GPU, but most of the time, when faced with a compute workload, it’s going to be able to power through it just as well as it does games and other graphical workloads.

It would be nice to see GPU compute put to better use than it is today, and having strong(er) compute performance in consumer parts is going to be one of the steps that needs to happen for that outcome to occur.


  • Sttm - Thursday, September 18, 2014 - link

    "How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market?"

    My suggestion is they send their CEOs over to Intel to beg on their knees for access to their 14nm process. This is getting silly, GPUs shouldn't be 4 years behind CPUs on process node. Someone cut Intel a big fat check and get this done already.
  • joepaxxx - Thursday, September 18, 2014 - link

    It's not just about having access to the process technology and fab. The cost of actually designing and verifying an SoC at nodes past 28nm is approaching the breaking point for most markets, that's why companies aren't jumping on to them. I saw one estimate of 500 million for development of a 16/14nm device. You better have a pretty good lock on the market to spend that kind of money.
  • extide - Friday, September 19, 2014 - link

    Yeah, but the GPU market is not one of those markets where the verification cost will break the bank, dude.
  • Samus - Friday, September 19, 2014 - link

    Seriously, nVidia's market cap is $10 billion dollars, they can spend a tiny fortune moving to 20nm and beyond...if they want too.

    I just don't think they want to saturate their previous products with such leaps and bounds in performance while also absolutely destroying their competition.

    Moving to a smaller process isn't out of nVidia's reach, I just don't think they have a competitive incentive to spend the money on it. They've already been accused of becoming a monopoly after purchasing 3Dfx, and it'd be painful if AMD/ATI exited the PC graphics market because nVidia's Maxwell's, being twice as efficient as GCN, were priced identically.
  • bernstein - Friday, September 19, 2014 - link

    atm. it is out of reach to them. at least from a financial perspective.
    while it would be awesome to have maxwell designed for & produced on intel's 14nm process, intel doesn't even have the capacity to produce all of their own cpus... until fall 2015 (broadwell xeon-ep release)...
  • kron123456789 - Friday, September 19, 2014 - link

    "it also marks the end of support for NVIDIA’s D3D10 GPUs: the 8, 9, 100, 200, and 300 series. Beginning with R343 these products are no longer supported in new driver branches and have been moved to legacy status." - This is it. The time has come to buy a new card to replace my GeForce 9800GT :)
  • bobwya - Friday, September 19, 2014 - link

    Such a modern card - why bother :-) The 980 will finally replace my 8800 GTX. Now that's a genuinely old card!!
    Actually I mainly need to do the upgrade because the power bills are so ridiculous for the 8800 GTX! For pities sake the card only has one power profile (high power usage).
  • djscrew - Friday, September 19, 2014 - link

    Like +1
  • kron123456789 - Saturday, September 20, 2014 - link

    Oh yeah, modern :) It's only 6 years old) But it can handle even Tomb Raider at 1080p with 30-40fps at medium settings :)
  • SkyBill40 - Saturday, September 20, 2014 - link

    I've got an 8800 GTS 640MB still running in my mom's rig that's far more than what she'd ever need. Despite getting great performance from my MSI 660Ti OC 2GB Power Edition, it might be time to consider moving up the ladder since finding another identical card at a decent price for SLI likely wouldn't be worth the effort.

    So, either I sell off this 660Ti, give it to her, or hold onto it for a HTPC build at some point down the line. Decision, decisions. :)
