Compute

Shifting gears, we have our look at compute performance. Since GTX Titan X has no compute feature advantage - no fast double precision support like what's found in the Kepler generation Titans - the performance difference between the GTX Titan X and GTX 980 Ti should be very straightforward.

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL-based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as the algorithm maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.

Compute: LuxMark 3.0 - Hotel

With the pace set for GM200 by GTX Titan X, there’s little to say here that hasn’t already been said. Maxwell does not fare well in LuxMark, and while GTX 980 Ti continues to stick very close to GTX Titan X, it nonetheless ends up right behind the Radeon HD 7970 in this benchmark.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

Although GTX 980 Ti struggled in LuxMark, the same cannot be said for CompuBench. While it takes the second spot in all three sub-tests - right behind GTX Titan X - there's a somewhat wider gap than normal between the two GM200 cards, causing GTX 980 Ti to trail a bit more significantly than in other tests. Given the short nature of these tests, GTX 980 Ti doesn't get to enjoy its usual clockspeed advantage, making this one of the only benchmarks where the theoretical 9% performance difference between the cards becomes a reality.
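That theoretical 9% figure falls directly out of the two cards' published specifications: GTX Titan X ships with all 24 of GM200's SMMs (3072 CUDA cores) enabled, while GTX 980 Ti has 22 SMMs (2816 CUDA cores), with both cards rated for the same 1000MHz base clock. As a quick back-of-the-envelope sketch:

```python
# Theoretical FP32 throughput: each CUDA core can retire one FMA
# (2 floating point operations) per clock.
def fp32_tflops(cuda_cores, clock_mhz):
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e12

titan_x = fp32_tflops(3072, 1000)     # GM200, fully enabled, at base clock
gtx_980_ti = fp32_tflops(2816, 1000)  # GM200 with 2 of 24 SMMs disabled

print(f"GTX Titan X: {titan_x:.2f} TFLOPS")  # 6.14 TFLOPS
print(f"GTX 980 Ti: {gtx_980_ti:.2f} TFLOPS")  # 5.63 TFLOPS
print(f"Difference: {(titan_x / gtx_980_ti - 1) * 100:.0f}%")  # 9%
```

In practice the gap is usually smaller than this, since the 980 Ti tends to sustain higher boost clocks; it's only in short bursty tests like these, where boost doesn't come into play, that the full 9% shows up.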

Our 3rd compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 13 Video Render

Vegas is traditionally a benchmark that favors AMD, and while GTX 980 Ti fares as well as GTX Titan X here, closing the gap some, it's still not enough to surpass the Radeon HD 7970, let alone the Radeon R9 290X.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Folding @ Home’s single precision tests reiterate GM200's FP32 compute credentials. Second only to GTX Titan X, GTX 980 Ti fares very well here.

Compute: Folding @ Home: Explicit, Double Precision

Meanwhile Folding @ Home’s double precision test reiterates GM200's poor FP64 compute performance. At 6.3 ns/day, the GTX 980 Ti, like the GTX Titan X, occupies the lower portion of our benchmark charts, below AMD's cards and NVIDIA's high-performance FP64 cards.
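This deficit is by design: where Kepler GK110 could execute FP64 at 1/3 of its FP32 rate, GM200's native FP64 rate is just 1/32. A rough sketch of what that means in raw throughput terms, using NVIDIA's published core counts and base clocks (boost clocks would shift the absolute numbers but not the picture):

```python
# FP64 throughput as a fraction of FP32 throughput.
# GM200 (GTX 980 Ti) runs FP64 at 1/32 rate; GK110 (GTX Titan) at 1/3.
def fp64_gflops(cuda_cores, clock_mhz, fp64_ratio):
    fp32_flops = 2 * cuda_cores * clock_mhz * 1e6  # 2 FLOPs (FMA) per core per clock
    return fp32_flops * fp64_ratio / 1e9

gm200_980_ti = fp64_gflops(2816, 1000, 1 / 32)  # GTX 980 Ti
gk110_titan = fp64_gflops(2688, 837, 1 / 3)     # original GTX Titan

print(f"GTX 980 Ti FP64: ~{gm200_980_ti:.0f} GFLOPS")  # ~176 GFLOPS
print(f"GTX Titan FP64: ~{gk110_titan:.0f} GFLOPS")    # ~1500 GFLOPS
```

In other words, the original 2013 GTX Titan still offers several times the double precision throughput of either GM200 card, which is why the Maxwell cards sit at the bottom of our FP64 charts.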

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

We end our benchmarks where we started: with the GTX 980 Ti slightly trailing the GTX Titan X, and with the two GM200 cards taking the top two spots overall. So as with GTX Titan X, GTX 980 Ti is a force to be reckoned with in FP32 compute, which for a pure consumer card makes it a good match for consumer compute workloads.

290 Comments

  • chizow - Monday, June 1, 2015 - link

    Yes, it's unprecedented to launch a full stack of rebrands with just 1 new ASIC, as AMD has done not once, not 2x, not even 3x, but 4 times with GCN (7000 to Boost/GE, 8000 OEM, R9 200, and now R9 300). Generally it is only the low-end, or a gap product to fill a niche. The G92/b isn't even close to this, as it was rebranded numerous times over a short 9 month span (Nov 2007 to July 2008), while we are bracing ourselves for AMD rebrands going back to 2011 and Pitcairn.
  • Gigaplex - Monday, June 1, 2015 - link

    If it's the 4th time as you claim, then by definition, it's most definitely not unprecedented.
  • chizow - Monday, June 1, 2015 - link

    The first 3 rebrands were still technically within that same product cycle/generation. This rebrand certainly isn't, so rebranding an entire stack with last-gen parts is certainly unprecedented. At least, relative to Nvidia's full next-gen product stack. Hard to say though given AMD just calls everything GCN 1.x, like inbred siblings they have some similarities, but certainly aren't the same "family" of chips.
  • Refuge - Monday, June 1, 2015 - link

    Thanks Gigaplex, you beat me to it... lol
  • chizow - Monday, June 1, 2015 - link

    Cool maybe you can beat each other and show us the precedent where a GPU maker went to market with a full stack of rebrands against the competition's next generation line-up. :)
  • FlushedBubblyJock - Wednesday, June 10, 2015 - link

    Nothing like total fanboy denial
  • Kevin G - Monday, June 1, 2015 - link

    The G92 got its last rebrand in 2009 and was formally replaced in 2010 by the GTX 460. It had a full three year life span on the market.

    The GTS/GTX 200 series was mostly rebranded. There was the GT200 chip on the high end that was used for the GTX 260 and up. The low end silently got the GT216 for the Geforce 210 a year after the GTX 260/280 launch. At this time, AMD was busy launching the Radeon 4000 series, which brought a range of new chips to market as a new generation.

    Pitcairn came out in 2012, not 2011. This would mimic the life span of the G92 as well as the number of rebrands. (It never had a vanilla edition, it started with the Ghz edition as the 7870.)
  • chizow - Monday, June 1, 2015 - link

    @Kevin G, nice try at revisionist history, but that's not quite how it went down. G92 was rebranded numerous times over the course of a year or so, but it did actually get a refresh from 65nm to 55nm. Indeed, G92 was even more advanced than the newer GT200 in some ways, with more advanced hardware encoding/decoding that was on-die, rather than on a complementary ASIC like G80/GT200.

    Also, at the time prices were much more compacted due to the economic recession, so the high-end was really just a glorified performance mid-range due to the price wars started by the 4870 and the economics of the time.

    Nvidia found it was easier to simply manipulate the cores on their big chip than to come out with a number of different ASICs, which is how we ended up with GTX 260 core 192, core 216 and the GTX 275:

    Low End: GT205, 210, GT 220, GT 230
    Mid-range: GT 240, GTS 250
    High-end: GTX 260, GTX 275
    Enthusiast: GTX 280, GTX 285, GTX 295

    The only rebranded chip in that entire stack is the G92, so again, certainly not the precedent for AMD's entire stack of Rebrandeon chips.
  • Kevin G - Wednesday, June 3, 2015 - link

    @chizow
    Out of that list of GTS/GTX 200 series parts, the new chips in the line up were the GT200 in 2008 and the GT218, introduced over a year later in late 2009. For 9 months on the market, the three chips used in the 200 series were rebrands of the G94, rebrands of the G92, and the new GT200. The ultra low end at this time was filled in by cards still carrying the 9000 series branding.

    The G92 did have a very long life as it was introduced as the 8800GTS with 512 MB in late 2007. In 2008 it was rebranded the 9800GTX roughly six months after it was first introduced. A year later in 2009 the G92 got a die shrink and rebranded as both the GTS 150 for OEMs and GTS 250 for consumers.

    So yeah, AMD's R9 300 series launch really does mimic what nVidia did with the GTS/GTX 200 series.
  • FlushedBubblyJock - Wednesday, June 10, 2015 - link

    G80 was not G92, nor G92b, nor G94, mr kevin g
