Compute

Shifting gears, we have our look at compute performance. Since GTX Titan X holds no compute feature advantage over GTX 980 Ti - there's no fast double precision support here like what's found in the Kepler generation Titans - the performance difference between the two cards should be very straightforward.

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
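To make that last point a bit more concrete: in a GPU ray tracer every pixel's primary ray can be traced independently of its neighbors, so each pixel maps neatly onto one OpenCL work-item. The sketch below is not LuxMark code - it's a hypothetical single-sphere kernel with bare-minimum host setup and no error checking - but it shows the one-work-item-per-pixel structure that lets a GPU keep thousands of rays in flight at once:

```cpp
// Minimal, hypothetical sketch of GPU ray tracing in OpenCL: one work-item per
// pixel, each firing a primary ray at a single hard-coded sphere. Error
// checking and image output are omitted for brevity; this is not LuxMark code.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSource = R"CLC(
__kernel void trace(__global float* out, const int width, const int height) {
    int x = get_global_id(0), y = get_global_id(1);
    if (x >= width || y >= height) return;

    // Primary ray through pixel (x, y), camera at the origin looking down +Z.
    float3 dir = normalize((float3)((x - width * 0.5f) / width,
                                    (y - height * 0.5f) / height, 1.0f));
    float3 center = (float3)(0.0f, 0.0f, 4.0f);  // sphere 4 units ahead
    float radius = 1.0f;

    // Ray/sphere test: solve |t*dir - center|^2 = radius^2, check discriminant.
    float b = dot(dir, center);
    float disc = b * b - dot(center, center) + radius * radius;
    out[y * width + x] = (disc >= 0.0f) ? 1.0f : 0.0f;  // hit -> white
}
)CLC";

int main() {
    const int W = 256, H = 256;
    std::vector<float> image(W * H);

    // Bare-minimum host setup: first platform, first GPU device.
    cl_platform_id platform; clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;     clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "trace", nullptr);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(float) * W * H, nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(buf), &buf);
    clSetKernelArg(kernel, 1, sizeof(int), &W);
    clSetKernelArg(kernel, 2, sizeof(int), &H);

    // Launch W x H work-items: every pixel's ray is traced in parallel.
    size_t global[2] = { (size_t)W, (size_t)H };
    clEnqueueNDRangeKernel(queue, kernel, 2, nullptr, global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(float) * W * H, image.data(), 0, nullptr, nullptr);

    std::printf("center pixel: %.1f\n", image[(H / 2) * W + W / 2]);  // 1.0 = hit
    return 0;
}
```

LuxRender's real kernels are far more involved - BVH traversal, material evaluation, light sampling - but they parallelize along the same per-ray lines, which is why the workload scales so well across thousands of GPU threads.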

Compute: LuxMark 3.0 - Hotel

With the pace set for GM200 by GTX Titan X, there’s little to say here that hasn’t already been said. Maxwell does not fare well in LuxMark, and while GTX 980 Ti continues to stick very close to GTX Titan X, it nonetheless ends up right behind the Radeon HD 7970 in this benchmark.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

Although GTX 980 Ti struggled at LuxMark, the same cannot be said for CompuBench. Though it takes the second spot in all 3 sub-tests - right behind GTX Titan X - there's a slightly wider gap than normal between the two GM200 cards, causing GTX 980 Ti to trail a bit more significantly than in other tests. Given the short nature of these tests, GTX 980 Ti doesn't get to enjoy its usual clockspeed advantage, making this one of the only benchmarks where the theoretical 9% performance difference between the cards becomes a reality.
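For reference, that theoretical gap falls straight out of the shader counts: both GM200 cards carry the same official 1000MHz base and 1075MHz boost clocks, so on paper the only FP32 difference is GTX Titan X's 24 SMMs versus GTX 980 Ti's 22. A quick sanity check of the math (a trivial sketch just to show where the number comes from, not anything from our benchmark suite):

```cpp
#include <iostream>

int main() {
    // GM200 SMM counts x 128 CUDA cores per SMM; the official clockspeeds are
    // identical on both cards, so they cancel out of the ratio.
    const int titanXCores   = 24 * 128;  // 3072
    const int gtx980TiCores = 22 * 128;  // 2816
    std::cout << "Theoretical FP32 advantage: "
              << (100.0 * titanXCores / gtx980TiCores - 100.0) << "%\n";  // ~9.1%
    return 0;
}
```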

Our 3rd compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and to accelerate the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 13 Video Render

Vegas is traditionally a benchmark that favors AMD, and while GTX 980 Ti fares as well as GTX Titan X here, closing the gap some, it's still not enough to surpass the Radeon HD 7970, let alone the Radeon R9 290X.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Folding @ Home’s single precision tests reiterate GM200's FP32 compute credentials. Second only to GTX Titan X, GTX 980 Ti fares very well here.

Compute: Folding @ Home: Explicit, Double Precision

Meanwhile Folding @ Home’s double precision test reiterates GM200's poor FP64 compute performance. At 6.3 ns/day, the GTX 980 Ti, like the GTX Titan X, occupies the lower portion of our benchmark charts, below AMD's cards and NVIDIA's high-performance FP64 cards.

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
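For a sense of what C++ AMP looks like in practice, here is a minimal vector-add sketch - a generic example compiled with Visual C++, not one of SystemCompute's actual kernels. The extensions amount to wrapping host data in array_views and launching a parallel_for_each whose lambda is marked restrict(amp) so it can be lowered to the DirectCompute backend:

```cpp
#include <amp.h>
#include <iostream>
#include <vector>

int main() {
    using namespace concurrency;

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);

    // array_views wrap host memory; the runtime copies them to the GPU as needed.
    array_view<const float, 1> av(1024, a), bv(1024, b);
    array_view<float, 1> cv(1024, c);
    cv.discard_data();  // we only write cv, so skip the host-to-device copy

    // The lambda body runs on the GPU through the DirectCompute backend;
    // restrict(amp) limits it to the subset of C++ that AMP can compile.
    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];
    });

    cv.synchronize();  // copy results back to host memory
    std::cout << c[0] << std::endl;  // prints 3
    return 0;
}
```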

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

We end our benchmarks where we started: with the GTX 980 Ti slightly trailing the GTX Titan X, and with the two GM200 cards taking the top two spots overall. So as with GTX Titan X, GTX 980 Ti is a force to be reckoned with in FP32 compute, which for a pure consumer card is a good match for the consumer compute workloads it's likely to see.

Comments

  • douglord - Monday, June 1, 2015 - link

    I need to know if the 980 Ti can output 10-bit color correctly. Is it ready for UHD Blu-ray?
  • dragonsqrrl - Monday, June 1, 2015 - link

    To my knowledge only Quadros and FirePros output 10-bit color depth.
  • johnpombrio - Monday, June 1, 2015 - link

    Any card that can do true RGB color schemes is NOT MEANT for normal users. It brings a lot of drawbacks for games and normal tasks. These types of cards are for graphics professionals only. Google it to see why.
  • mapesdhs - Wednesday, June 3, 2015 - link

    Indeed, the way colourspaces interact with different types of monitor can result in some nasty issues for accurate colour presentation. For home users, it's really not suitable since so many normal apps & games aren't written to utilise such modes correctly. Besides, I doubt any 4K TVs could properly resolve 10 bits/channel anyway. Funny though that people are still asking about 10bit colour when pro users were already using 12bit more than 20 years ago. :D Also 16bit greyscale for medical/GIS/etc.
  • johnpombrio - Monday, June 1, 2015 - link

    Yikes! That overclock ability! I always buy EVGA's superclocked NVidia cards as they are super stable and have great benchmarks (as well as playing games well, heh). I might buy into this even tho I have a GTX980.

    As for AMD, NVidia has 76% of the discrete GPU graphics card market (and still rising) while AMD has lost 12% market share in the last 12 months alone. Whatever AMD has up for new products, it better hurry and be a LOT better than NVidia cards. AMD has tried the "rebadge existing GPU family cards, reduce prices, and bundle games" approach for too long and IT IS NOT WORKING. C'mon AMD, get back into the fight.
  • mapesdhs - Wednesday, June 3, 2015 - link

    True, I kept finding EVGA's cards work really well. The ACX2 980 (1266MHz) is particularly good.
  • Nfarce - Monday, June 1, 2015 - link

    Well I recently upgraded with a second 970 for SLI for 1440p gaming and have them overclocked to 980 performance. It's roughly 15% faster than this single card solution for $700 vs. $650 (7.5% increase in cost). But one thing is for certain: we are still a long time away from realistic 4K gaming with a G-sync 120Hz monitor when those come out. I would much prefer 1440p gaming with max quality and high AA settings and faster FPS matched to screen Hz than detuned 4K settings (even if AA is less meaningful at 2160p).

    By the way: are you guys ever going to add Project Cars to your benchmarks? It has rapidly become THE racer to own. Grid Autosport is not really a good benchmark these days because it's just a rehash of the Grid 2 engine (EGO 3.0)...easy on GPUs. Many, including me, haven't touched Autosport since PCars was released and may never touch it again.
  • mapesdhs - Wednesday, June 3, 2015 - link

    Project Cars is one game that runs badly in CF atm (driver issues), which would make the 295x2 look horrible. Might be better to wait until AMD has fixed the issue first.
  • agentbb007 - Monday, June 1, 2015 - link

    A GTX Titan X for $649, DOH BART! Oh well, I've enjoyed my SLI Titan X's for a few months, so I guess that was worth the $700 premium. I keep falling for nVidia's Titan brand gimmick; I also bought the original Titan (luckily just 1 of them) and ended up selling it for about half what I paid.
    Lesson learned, AGAIN: don't buy the Titan brand, wait for the regular GTX version instead.
  • mapesdhs - Tuesday, March 12, 2019 - link

    2019 calling! I wonder if he bought the 2080 Ti or RTX Titan... :}
