Compute

Shifting gears, we have our look at compute performance.

As we outlined earlier, GTX Titan X is not the same kind of compute powerhouse that the original GTX Titan was. Make no mistake: in single precision (FP32) compute tasks it is still a very potent card, and for consumer-level workloads that is generally all that will matter. But for pro-level double precision (FP64) workloads, the new Titan lacks the high FP64 performance of the old one.

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL-based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
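
Ray tracing’s good fit for GPUs comes down to independence: every ray can be traced without reference to any other ray. Below is a minimal, hypothetical C++ sketch of a ray tracer’s inner loop, testing one ray per pixel against a single sphere; in an OpenCL renderer like LuxRender the loop body would become a kernel with one work-item per pixel. This is an illustration only, not LuxRender’s actual code.

```cpp
#include <cmath>
#include <cstdio>

// Minimal ray-sphere intersection: returns the distance along the ray to
// the first hit, or -1 on a miss.
float hitSphere(float ox, float oy, float oz,    // ray origin
                float dx, float dy, float dz,    // ray direction (normalized)
                float cx, float cy, float cz, float r) {
    float lx = cx - ox, ly = cy - oy, lz = cz - oz;
    float b = lx * dx + ly * dy + lz * dz;           // projection onto ray
    float d2 = lx * lx + ly * ly + lz * lz - b * b;  // squared distance to ray
    if (d2 > r * r) return -1.0f;
    return b - std::sqrt(r * r - d2);
}

int main() {
    const int W = 64, H = 32;
    // Each iteration reads only its own pixel coordinates and writes only its
    // own output -- exactly the independence a GPU kernel needs. On a GPU,
    // this loop body becomes one work-item per pixel.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float dx = (x - W / 2) / float(W), dy = (y - H / 2) / float(H), dz = 1.0f;
            float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            float t = hitSphere(0, 0, 0, dx / len, dy / len, dz / len, 0, 0, 3, 1.0f);
            std::putchar(t > 0 ? '#' : '.');  // crude ASCII framebuffer
        }
        std::putchar('\n');
    }
}
```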

Compute: LuxMark 3.0 - Hotel

While in LuxMark 2.0 AMD and NVIDIA were fairly close post-Maxwell, the recently released LuxMark 3.0 finds NVIDIA trailing AMD once more. While GTX Titan X sees a better-than-average 41% performance increase over the GTX 980 (owing to its ability to stay at its maximum boost clock in this benchmark), it’s not enough to dethrone the Radeon R9 290X. Even though GTX Titan X packs a lot of performance on paper, and can more than deliver it in graphics workloads, as we can see compute workloads remain highly variable.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.
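
To give an idea of why a workload like the particle simulation scales so well on GPUs, here is a toy, hypothetical C++ version of a 64K-particle time step; each particle is updated independently, so on a GPU every iteration becomes its own thread. This is a sketch for illustration, not CompuBench’s actual kernel.

```cpp
#include <vector>
#include <cstdio>

// One time step of a toy particle simulation: each particle integrates
// gravity independently and bounces off the floor. Because no particle
// reads or writes another particle's state, the loop body maps to one
// GPU work-item per particle and scales with shader count.
struct Particle { float x, y, vx, vy; };

int main() {
    const float dt = 0.01f, g = -9.8f;
    std::vector<Particle> p(65536, {0.0f, 10.0f, 1.0f, 0.0f});  // 64K particles

    for (int step = 0; step < 1000; ++step) {
        for (auto& q : p) {           // data-parallel: fully independent
            q.vy += g * dt;           // accelerate
            q.x += q.vx * dt;         // integrate position
            q.y += q.vy * dt;
            if (q.y < 0.0f) {         // elastic bounce off y = 0
                q.y = -q.y;
                q.vy = -q.vy;
            }
        }
    }
    std::printf("particle 0 ends at (%.2f, %.2f)\n", p[0].x, p[0].y);
}
```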

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

Although GTX Titan X struggled at LuxMark, the same cannot be said for CompuBench. Though the lead varies with the specific sub-benchmark, in every case the latest Titan comes out on top. Face detection in particular shows some massive gains, with GTX Titan X more than doubling the GK110 based GTX 780 Ti's performance.

Our third compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself and the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.
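
For a sense of what GPU-accelerated compositing parallelizes, here is a small hypothetical C++ sketch of the standard "over" operator applied across one HD frame. Every output pixel depends only on the corresponding input pixels, which is what lets a GPU spread a frame across thousands of threads; this is an illustration, not Vegas’ actual pipeline.

```cpp
#include <vector>
#include <cstdio>

// Porter-Duff "over" compositing of one RGBA layer onto another, per pixel.
struct Pixel { float r, g, b, a; };

Pixel over(const Pixel& top, const Pixel& bot) {
    float a = top.a + bot.a * (1.0f - top.a);        // composite alpha
    if (a == 0.0f) return {0, 0, 0, 0};
    auto mix = [&](float t, float b2) {              // alpha-weighted blend
        return (t * top.a + b2 * bot.a * (1.0f - top.a)) / a;
    };
    return {mix(top.r, bot.r), mix(top.g, bot.g), mix(top.b, bot.b), a};
}

int main() {
    const int N = 1920 * 1080;                       // one HD frame
    std::vector<Pixel> fg(N, {1, 0, 0, 0.5f});       // 50% red overlay
    std::vector<Pixel> bg(N, {0, 0, 1, 1.0f});       // opaque blue background
    std::vector<Pixel> out(N);
    for (int i = 0; i < N; ++i)                      // independent per pixel
        out[i] = over(fg[i], bg[i]);
    std::printf("pixel 0: r=%.2f b=%.2f\n", out[0].r, out[0].b);
}
```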

Compute: Sony Vegas Pro 13 Video Render

Vegas is traditionally a benchmark that favors AMD, and while GTX Titan X closes the gap some, it's still not enough to surpass the R9 290X.

Moving on, our fourth compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the Internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are explicitly included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.
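
The FP32/FP64 split FAHBench exposes is purely a matter of which floating point type the same math runs in. Here is a toy, hypothetical C++ sketch of a pairwise-interaction loop templated on precision: identical code, different storage and ALU width, which is why a 1/32-rate FP64 GPU falls so far behind. This is for illustration, not FAHCore’s actual code.

```cpp
#include <cstdio>
#include <vector>
#include <cmath>

// The same pairwise-energy kernel, instantiated at two precisions.
template <typename Real>
Real pairwiseEnergy(const std::vector<Real>& x) {
    Real e = 0;
    for (size_t i = 0; i < x.size(); ++i)
        for (size_t j = i + 1; j < x.size(); ++j) {
            Real r = std::abs(x[i] - x[j]) + Real(1e-6);  // avoid divide by zero
            e += Real(1) / r;                             // toy 1/r potential
        }
    return e;
}

int main() {
    std::vector<float>  xs;   // FP32 path
    std::vector<double> xd;   // FP64 path
    for (int i = 0; i < 512; ++i) { xs.push_back(i * 0.01f); xd.push_back(i * 0.01); }
    std::printf("FP32 energy: %.6f\n", pairwiseEnergy(xs));
    std::printf("FP64 energy: %.6f\n", pairwiseEnergy(xd));
}
```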

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Folding @ Home’s single precision tests reiterate just how powerful GTX Titan X can be at FP32 workloads, even if it’s ostensibly a graphics GPU. With a 50-75% lead over the GTX 780 Ti, the GTX Titan X showcases some of the remarkable efficiency improvements that the Maxwell GPU architecture can offer in compute scenarios, and in the process shoots well past the AMD Radeon cards.

Compute: Folding @ Home: Explicit, Double Precision

On the other hand, with a native FP64 rate of 1/32, the GTX Titan X flounders at double precision. There is no better example of just how much the GTX Titan X and the original GTX Titan differ in their FP64 capabilities than this graph; the GTX Titan X can’t beat the GTX 580, never mind the chart-topping original GTX Titan. Users looking for an entry-level FP64 card would be well advised to stick with the GTX Titan Black for now. The new Titan is not the prosumer compute card that the old Titan was.
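
Some quick back-of-the-envelope math using published core counts, base clocks, and FP64 rates shows why (approximate peak rates, not measured results):

```latex
% Peak FP64 throughput = cores x 2 FLOPs/clock x clock x FP64 rate
\begin{aligned}
\text{GTX Titan X (GM200):}\;& 3072 \times 2 \times 1.000\,\text{GHz} \times \tfrac{1}{32} \approx 0.19\ \text{TFLOPS} \\
\text{GTX Titan (GK110):}\;& 2688 \times 2 \times 0.837\,\text{GHz} \times \tfrac{1}{3} \approx 1.5\ \text{TFLOPS} \\
\text{GTX 580 (GF110):}\;& 512 \times 2 \times 1.544\,\text{GHz} \times \tfrac{1}{8} \approx 0.20\ \text{TFLOPS}
\end{aligned}
```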

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
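
For reference, here is what a minimal C++ AMP kernel looks like: a vector add using Microsoft’s parallel_for_each and restrict(amp) extensions. This is a sketch to illustrate the programming model, not SystemCompute’s code, and it requires Visual C++, as C++ AMP is a Microsoft extension.

```cpp
#include <amp.h>
#include <vector>
#include <iostream>
using namespace concurrency;

int main() {
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
    array_view<const float, 1> av(1024, a), bv(1024, b);  // wrap host data
    array_view<float, 1> cv(1024, c);
    cv.discard_data();  // output only; skip the host-to-GPU copy

    // The lambda runs once per element on the accelerator.
    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];
    });

    cv.synchronize();   // copy results back to the host vector
    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
}
```

Under the hood, the kernel lambda is compiled to HLSL and dispatched through DirectCompute, which is why C++ AMP tests double as DirectCompute tests.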

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

With the GTX 980 already performing well here, the GTX Titan X takes it home, improving on the GTX 980 by 31%. Whereas GTX 980 could only hold even with the Radeon R9 290X, the GTX Titan X takes a clear lead.

Overall then, the new GTX Titan X can still be a force to be reckoned with in compute scenarios, but only when the workloads are FP32. Users accustomed to the original GTX Titan’s FP64 performance, on the other hand, will find that this is a very different card, one that doesn’t live up to the same standards.

Comments

  • Refuge - Thursday, March 19, 2015 - link

    Honestly this looks more like a Ti than a Titan.
  • D. Lister - Tuesday, March 17, 2015 - link

    Nice performance/watt, but at $1000, I find the performance/dollar to be unacceptable. Without a double-precision edge, this GPU is essentially a 980Ti, and Nvidia seems to want to get away with slapping on a Titan decal (and the consequent $1,000 price tag) by just adding a useless amount of graphics memory.

    Take out about 4 gigs of VRAM, hold the "Titan" brand, add maybe 5-10% core clock, with an MSRP of at least $300 less, and I'll be interested. But I guess, for Nvidia to feel the need to do something like that, we'll have to wait for the next Radeon launch.
  • chizow - Tuesday, March 17, 2015 - link

    It's a 980Ti with double the VRAM, a year earlier, if you are going off previous timelines. Don't undervalue the fact that this is the first big Maxwell only 6 months after #2 Maxwell.

    I agree the pricing has gotten ridiculous on these graphics cards, but this is the market we live and play in now. I typically spent $800-$1000 every 2 years on graphics cards, but I would get 2 flagship cards. After the whole 7970/680 debacle where mid-range became flagship, I can now get 2 high-end midrange for that much, or 1 super premium flagship. Going with the flagship, and I'm happy! :D
  • D. Lister - Tuesday, March 17, 2015 - link

    @chizow
    "It's a 980Ti with double the VRAM"
    Yes, pretty much - a Ti GPU, with more VRAM than necessary, at the price tag of a Titan.
    "I agree the pricing has gotten ridiculous on these graphics cards, but this is the market we live and play in now."
    The market is the way it is because we, consumers, let it be that way through our choices. For us to obediently accept overpricing as an acceptable market trend, at any time, is basically like agreeing with the fox who wants to guard our henhouse.
  • chizow - Wednesday, March 18, 2015 - link

    Except the 780Ti came much later; it was the 3rd GK110 chip to be released, so there is a premium on that time and money. While this is the 1st GM200-based chip, no need to look any further beyond it. Also, how many 780Ti owners complained about not having enough VRAM? Looks like Nvidia addressed that. There are just no compromises with this card; it's Nvidia's best foot forward for this chip, and only 6 months after GTX 980. No complaints here, and I had plenty when Titan launched.

    Sure the market is this way partially because we allow it, but the reality is, the demand is overwhelmingly there. I was thoroughly against paying $1000 for what I used to get for $500-$650 for Nvidia's big chip flagship card with the original Titan, but the reality is, Nvidia has raised the bar on all fronts (and AMD has done well also) and they are looking to be rewarded for doing so. I used to buy 2x cards before because 1 just wasn't good enough. Now, 1 is good enough, so I don't mind paying the same amount for that relative level of performance and enjoyment.
  • D. Lister - Wednesday, March 18, 2015 - link

    @chizow
    "Except the 780Ti came much later, ...... plenty when Titan launched."
    Both the 780Ti and the Titan X were released exactly when Nvidia needed them in the market. For the 780Ti, the reason was to challenge the 290X for the top spot. The Titan X was made available sooner because a) Nvidia needed the positive press after the 970 VRAM fiasco, and b) Nvidia wanted to take some attention away from AMD's recent 3xx announcements.

    Hence I really can't find any logical reason to agree with your spin that the Nvidia staff was doing overtime as some sort of a public service, and so deserve some reward for their noble sacrifices.

    "Sure the market is this way partially because we allow it, but the reality is, the demand is overwhelmingly there. I was thoroughly against paying $1000 for what I used to get for $500-$650 for Nvidia's big chip flagship card with the original Titan, but the reality is, Nvidia has raised the bar on all fronts (and AMD has done well also) and they are looking to be rewarded for doing so. I used to buy 2x cards before because 1 just wasn't good enough. Now, 1 is good enough, so I don't mind paying the same amount for that relative level of performance and enjoyment."
    http://media2.giphy.com/media/13ayyyRnHJKrug/giphy...
  • chizow - Monday, March 23, 2015 - link

    Uh, you make a lot of assumptions while trying to dismiss the fact that there is a huge difference in time to market and relative positioning in Nvidia's release timeline for Titan X, and that difference carries a premium for anyone who observed, or felt burned by, how the Titan and Kepler launches played out over 2012, 2013, and 2014.

    Fact remains, Titan X is the full chip, arriving very close to the front of Maxwell's line-up, while the 780Ti came near the end of Kepler's life cycle. The correct comparison is if Nvidia had launched Titan Black in 2013 instead of the original Titan, because that's what Titan X is.

    The bolded portion should be pretty easy to digest, not sure why you are having trouble with it. Nvidia's advancement on the 28nm node has been so good (someone showed a 4x increase from the 40nm GTX 480 to the Titan X, which is damn amazing on the same node) and the relatively slow advancement in game requirements mean I no longer need 2 GPUs to push the game resolutions and settings I need. A single, super flagship card is all I need, and Nvidia has provided just that with the Titan X.

    For those who don't think it is worth it, you can always wait for something cheaper and faster to come along, but for me, I'm good until Pascal in 2016 (maybe? Oh wait, don't need to worry about that).
  • chizow - Tuesday, March 17, 2015 - link

    Bit of a sidenote, but wow looks like 980 SLI scaling has REALLY improved in the last few months. I don't recall it being that good at launch, but that's not a huge surprise given Maxwell was a new architecture and has gone through a number of big (on paper) driver improvements. Looks really good though, made it harder to go with the Titan X over a 2nd 980 for SLI, but I think I'll be happier this way for now.
  • mdriftmeyer - Tuesday, March 17, 2015 - link

    Buy these like hotcakes. And when the R9 390/390X arrives in June, I'll pick either up and laugh at all the used hardware being dumped on eBay.
  • TheJian - Tuesday, March 17, 2015 - link

    You're assuming they'll beat this card, and I doubt you'll see them in June as the channel is stuffed with AMD's current stuff. I say Q3, and it won't be as good as you think. HBM will cause pricing issues and won't net any perf (it isn't needed; bandwidth isn't a problem, so it's wasted extra cost here), so the gpu will have to win on its own vs. NV. You'd better hope AMD's is good enough to sell like hotcakes, as they really need the profits finally. This Q is already wasted and will most likely result in a loss, and NV is good for the next 3 months at least until something competitive arrives, at which point NV just drops pricing, eating any chance of AMD profits anyway. AMD has a very tough road ahead, and console sales drop due to mobile closing the gap at 16/14nm for xmas (good enough, that is, to have some say screw a console this gen, and screw $60 game pricing - go Android instead).
