Compute

Shifting gears, we have our look at compute performance.

As we outlined earlier, GTX Titan X is not the same kind of compute powerhouse that the original GTX Titan was. Make no mistake: in single precision (FP32) compute tasks it is still a very potent card, and for consumer-level workloads that is generally all that will matter. But for pro-level double precision (FP64) workloads, the new Titan lacks the high FP64 performance of the old one.
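To make the FP32/FP64 distinction concrete, here is a minimal CUDA sketch (purely illustrative; the benchmarks below are OpenCL) that runs the same arithmetic at both precisions. On GM200 the double precision launch executes at a fraction of the rate of the float launch, because the chip carries only a few dedicated FP64 units per SM.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Same arithmetic, two precisions. On GM200 the double version executes at
// just 1/32 the rate of the float version, because the chip carries only a
// few dedicated FP64 units per SM.
template <typename T>
__global__ void axpy(int n, T a, const T* x, T* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float  *xf, *yf;
    double *xd, *yd;
    cudaMalloc(&xf, n * sizeof(float));  cudaMalloc(&yf, n * sizeof(float));
    cudaMalloc(&xd, n * sizeof(double)); cudaMalloc(&yd, n * sizeof(double));
    cudaMemset(xf, 0, n * sizeof(float));  cudaMemset(yf, 0, n * sizeof(float));
    cudaMemset(xd, 0, n * sizeof(double)); cudaMemset(yd, 0, n * sizeof(double));

    axpy<<<(n + 255) / 256, 256>>>(n, 2.0f, xf, yf); // FP32: full-rate path
    axpy<<<(n + 255) / 256, 256>>>(n, 2.0,  xd, yd); // FP64: 1/32-rate path on GM200
    cudaDeviceSynchronize();

    cudaFree(xf); cudaFree(yf); cudaFree(xd); cudaFree(yd);
    printf("ran the same kernel at FP32 and FP64\n");
    return 0;
}
```

Timing the two launches on a GM200 card should show roughly the 32x throughput gap in question; on a GK110 card running at its full 1/3 rate, the gap would be far smaller.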

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
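As a rough illustration of why the workload maps so well (this is a toy CUDA sketch, not LuxRender's actual OpenCL code), a GPU ray tracer can assign one thread per pixel, with each thread tracing its own ray independently of its neighbors:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy primary-ray kernel: one thread per pixel, no dependencies between
// pixels, which is why ray tracing scales so well across a GPU's
// thousands of ALUs. The "scene" is a single hard-coded sphere.
__global__ void tracePrimary(float* image, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Pinhole camera ray through this pixel, aimed down -Z.
    float dx = 2.0f * x / w - 1.0f;
    float dy = 1.0f - 2.0f * y / h;
    float dz = -1.0f;
    float len = sqrtf(dx*dx + dy*dy + dz*dz);
    dx /= len; dy /= len; dz /= len;

    // Intersect a sphere of radius 1 at (0, 0, -3), origin at (0, 0, 0).
    float lx = 0.0f, ly = 0.0f, lz = -3.0f;       // center minus origin
    float b  = lx*dx + ly*dy + lz*dz;             // projection onto the ray
    float d2 = (lx*lx + ly*ly + lz*lz) - b*b;     // squared miss distance
    image[y * w + x] = (b > 0.0f && d2 < 1.0f) ? 1.0f : 0.0f;  // hit = white
}

int main() {
    const int w = 640, h = 480;
    float* img;
    cudaMalloc(&img, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    tracePrimary<<<grid, block>>>(img, w, h);
    cudaDeviceSynchronize();
    cudaFree(img);
    printf("traced %d primary rays\n", w * h);
    return 0;
}
```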

Compute: LuxMark 3.0 - Hotel

While in LuxMark 2.0 AMD and NVIDIA were fairly close post-Maxwell, the recently released LuxMark 3.0 finds NVIDIA trailing AMD once more. While GTX Titan X sees a better-than-average 41% performance increase over the GTX 980 (owing to its ability to stay at its max boost clock in this benchmark), it’s not enough to dethrone the Radeon R9 290X. GTX Titan X packs a lot of performance on paper and can more than deliver it in graphics workloads, but as we can see, compute performance remains highly variable.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.
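To give a flavor of what the last of those workloads involves (CompuBench's own OpenCL kernels aren't reproduced here, so this is just an assumed minimal CUDA sketch), a particle simulation maps one thread to one particle, with each thread integrating its particle's motion every timestep:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch of a particle-simulation kernel: one thread advances one
// particle per timestep, so 64K particles map onto 64K independent threads.
__global__ void stepParticles(float4* pos, float4* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 p = pos[i], v = vel[i];
    v.y -= 9.81f * dt;                                  // gravity
    p.x += v.x * dt; p.y += v.y * dt; p.z += v.z * dt;  // Euler integration
    if (p.y < 0.0f) { p.y = 0.0f; v.y *= -0.5f; }       // bounce off the floor
    pos[i] = p; vel[i] = v;
}

int main() {
    const int n = 64 * 1024;                            // "64K" particles
    float4 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float4));
    cudaMalloc(&vel, n * sizeof(float4));
    cudaMemset(pos, 0, n * sizeof(float4));
    cudaMemset(vel, 0, n * sizeof(float4));
    for (int step = 0; step < 100; ++step)
        stepParticles<<<(n + 255) / 256, 256>>>(pos, vel, n, 0.016f);
    cudaDeviceSynchronize();
    cudaFree(pos); cudaFree(vel);
    printf("simulated %d particles\n", n);
    return 0;
}
```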

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

Although GTX Titan X struggled at LuxMark, the same cannot be said for CompuBench. Though the lead varies with the specific sub-benchmark, in every case the latest Titan comes out on top. Face detection in particular shows some massive gains, with GTX Titan X more than doubling the GK110 based GTX 780 Ti's performance.

Our 3rd compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself and to speed up the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 13 Video Render

Vegas is traditionally a benchmark that favors AMD, and while GTX Titan X closes the gap some, it's still not enough to surpass the R9 290X.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are explicitly simulated, which adds quite a bit of work and overhead, or approximated implicitly. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.
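For a sense of why the explicit mode is so much heavier (this is a toy CUDA sketch, not FAHCore 17's actual OpenCL kernels), consider that the heart of a molecular dynamics step is a pairwise non-bonded force sum; adding explicit water atoms grows the particle count n, and with it the O(n^2) pair work:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy pairwise force kernel. With explicit solvent the water atoms join the
// particle list, so n (and the O(n^2) loop below) grows severalfold; the
// implicit mode replaces them with a cheaper analytic approximation.
__global__ void pairwiseForces(const float4* pos, float4* force, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float fx = 0.0f, fy = 0.0f, fz = 0.0f;
    for (int j = 0; j < n; ++j) {                 // every particle vs. every other
        if (j == i) continue;
        float dx = pos[j].x - pi.x, dy = pos[j].y - pi.y, dz = pos[j].z - pi.z;
        float r2 = dx*dx + dy*dy + dz*dz + 1e-6f; // softening avoids divide-by-zero
        float invR = rsqrtf(r2);
        float s = invR * invR * invR;             // toy inverse-square interaction
        fx += dx * s; fy += dy * s; fz += dz * s;
    }
    force[i] = make_float4(fx, fy, fz, 0.0f);
}

int main() {
    const int n = 4096;
    float4 *pos, *force;
    cudaMalloc(&pos, n * sizeof(float4));
    cudaMalloc(&force, n * sizeof(float4));
    cudaMemset(pos, 0, n * sizeof(float4));
    pairwiseForces<<<(n + 255) / 256, 256>>>(pos, force, n);
    cudaDeviceSynchronize();
    cudaFree(pos); cudaFree(force);
    printf("computed %d x %d pair interactions\n", n, n - 1);
    return 0;
}
```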

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Folding @ Home’s single precision tests reiterate just how powerful GTX Titan X can be at FP32 workloads, even if it’s ostensibly a graphics GPU. With a 50-75% lead over the GTX 780 Ti, the GTX Titan X showcases some of the remarkable efficiency improvements that the Maxwell GPU architecture can offer in compute scenarios, and in the process shoots well past the AMD Radeon cards.

Compute: Folding @ Home: Explicit, Double Precision

On the other hand, with a native FP64 rate of 1/32, the GTX Titan X flounders at double precision. There is no better example of just how much the GTX Titan X and the original GTX Titan differ in their FP64 capabilities than this graph; the GTX Titan X can’t beat the GTX 580, never mind the chart-topping original GTX Titan. Users looking for an entry-level FP64 card would be well advised to stick with the GTX Titan Black for now. The new Titan is not the prosumer compute card that the old Titan was.
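Some rough back-of-the-envelope math (assuming the cards hold their rated boost clocks) shows just how wide the gap is: GTX Titan X's 3072 CUDA cores at roughly 1.07GHz work out to about 6.6 TFLOPS FP32, which at GM200's 1/32 rate leaves only around 0.2 TFLOPS of FP64 throughput. The original GTX Titan's GK110, with 2688 cores at roughly 0.88GHz and an FP64 rate of 1/3, delivers around 1.5 TFLOPS FP64 despite its much lower FP32 throughput.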

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
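For readers unfamiliar with C++ AMP, the pattern it offers is essentially a parallel loop over an array: a concurrency::parallel_for_each over an array_view, with the kernel body written as a restrict(amp) lambda. SystemCompute's own source isn't reproduced here; the sketch below shows the equivalent pattern expressed in CUDA, the language of our other sketches, purely for illustration:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Not SystemCompute's code: just the "parallel loop over an array" pattern
// that C++ AMP expresses, shown here in CUDA. In C++ AMP the body below
// would be a restrict(amp) lambda over an array_view, compiled down to
// DirectCompute on Windows (which is why SystemCompute doubles as a
// DirectCompute test).
__global__ void scaleAdd(float* data, int n, float scale, float bias) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * scale + bias;   // one element per thread
}

int main() {
    const int n = 1 << 20;
    float* data;
    cudaMalloc(&data, n * sizeof(float));
    cudaMemset(data, 0, n * sizeof(float));
    scaleAdd<<<(n + 255) / 256, 256>>>(data, n, 2.0f, 1.0f);
    cudaDeviceSynchronize();
    cudaFree(data);
    printf("processed %d elements\n", n);
    return 0;
}
```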

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

With the GTX 980 already performing well here, the GTX Titan X takes it home, improving on the GTX 980 by 31%. Whereas GTX 980 could only hold even with the Radeon R9 290X, the GTX Titan X takes a clear lead.

Overall then, the new GTX Titan X can still be a force to be reckoned with in compute scenarios, but only when the workloads are FP32. Users accustomed to the original GTX Titan's FP64 performance, on the other hand, will find that this is a very different card, one that doesn't live up to the same standards.

Comments

  • modeless - Tuesday, March 17, 2015 - link

    This *is* a compute card, but for an application that doesn't need FP64: deep learning. In fact, deep learning would do even better with FP16. What deep learning does need is lots of ALUs (check) and lots of RAM (double check). Deep learning people were asking for more RAM and they got it. I'm considering buying one just for training neural nets.
  • Yojimbo - Tuesday, March 17, 2015 - link

    Yes, I got that idea from the keynote address, and I think that's why they have 12GB of RAM. But how much deep-learning-specific compute demand is there? Are there lots of people who use compute just for deep learning and nothing else that demands FP64 performance? Enough that it warrants building an entire GPU (M200) just for them? Surely NVIDIA is counting mostly on gaming sales for Titan and whatever cut-down M200 card arrives later.
  • Yojimbo - Wednesday, March 18, 2015 - link

    Oh, and of course also counting on the Quadro sales in the workstation market.
  • DAOWAce - Tuesday, March 17, 2015 - link

    Nearly double the performance of a single 780 when heavily OC'd, jesus christ, I wish I had disposable income.

    I already got burned by buying a 780 though ($722 before it dropped $200 a month later due to the Ti's release), so I'd much rather at this point extend the lifespan of my system by picking up some cheap second hand 780 and dealing with SLI's issues again (haven't used it since my 2x 460's) while I sit and wait for the 980 Ti to get people angry again or even until the next die shrink.

    At any rate, I won't get burned again buying my first ever enthusiast card, that's for damn sure.
  • Will Robinson - Wednesday, March 18, 2015 - link

    Well Titan X looks like a really mean machine. A bit pricey, but Top Dog has always been like that for NV so you can't ping it too badly on that.
    I'm really glad NVDA has set their "Big Maxwell" benchmark because now it's up to R390X to defeat it.
    This will be flagship V flagship with the winner taking all the honors.
  • poohbear - Wednesday, March 18, 2015 - link

    Couldn't u show us a chart of VRAM usage for Shadows of Mordor instead of minimum frames? Argus Monitor charts VRAM usage, it would've been great to see how much average and maximum VRAM Shadows of Mordor uses (of the available 12gb).
  • Meaker10 - Wednesday, March 18, 2015 - link

    They only show paged ram, not actual usage.
  • ChristopherJack - Wednesday, March 18, 2015 - link

    I'm surprised how often the ageing 7990 tops this. I had no doubt whatsoever that the 295x2 was going to stomp all over this & that's what bothered me about everyone claiming the Titan X was going to be the fastest graphics card, blah, blah, blah. Yes I'm aware those are dual GPU cards in xfire, no I don't care because they're single cards & can be found for significantly lower prices if price/performance is the only concern.
  • Pc_genjin - Wednesday, March 18, 2015 - link

    So... as a person who has the absolute worst timing ever when it comes to purchasing technology, I built a brand new PC - FOR THE FIRST TIME IN NINE YEARS - just three days ago with 2 x GTX 980s. I haven't even received them yet, and I run across several reviews for this - today. Now, the question is: do I attempt to return the two 980s, saving $100 in the process? Or is it just better to keep the 980s? (Thankfully I didn't build the system yet, and consequently open them already, or I'd be livid.) Thanks for any advice, and sorry for any arguments I spark, yikes.
  • D. Lister - Wednesday, March 18, 2015 - link

    The 2x980s would be significantly more powerful than a single Titan X, even with 1/3rd the total VRAM.
