Compute

Shifting gears, we have our look at compute performance. As a gaming card first and foremost, the R9 Fury X offers only the bare minimum FP64 rate for a GCN product (1/16), so we won't see anything great on the double precision front. On the other hand, with a theoretical FP32 throughput of 8.6 TFLOPS, AMD could really clean house in our more regular FP32 workloads.
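
For reference, the back-of-the-envelope math behind that 8.6 TFLOPS figure is simple: each GCN stream processor can issue one FMA (two floating point operations) per clock. A quick sketch, assuming the R9 Fury X's 4096 stream processors at 1050MHz and the R9 290X's 2816 stream processors at up to 1000MHz:

```cpp
// Back-of-the-envelope peak throughput. Assumes Fiji's 4096 stream processors
// at 1050MHz and Hawaii's (R9 290X) 2816 stream processors at 1000MHz.
#include <cstdio>

int main() {
    // Each GCN stream processor can issue one FMA (2 FLOPs) per clock.
    auto fp32_tflops = [](double sps, double mhz) { return sps * 2.0 * mhz / 1e6; };

    double fury_x  = fp32_tflops(4096, 1050);  // ~8.6 TFLOPS FP32
    double r9_290x = fp32_tflops(2816, 1000);  // ~5.6 TFLOPS FP32

    std::printf("Fury X  FP32: %.1f TFLOPS, FP64 (1/16 rate): %.2f TFLOPS\n",
                fury_x, fury_x / 16.0);
    std::printf("R9 290X FP32: %.1f TFLOPS (Fury X advantage: %.0f%%)\n",
                r9_290x, (fury_x / r9_290x - 1.0) * 100.0);
    return 0;
}
```

That works out to roughly 8.6 TFLOPS versus 5.6 TFLOPS, a bit over a 50% paper advantage for the R9 Fury X, with FP64 limited to 1/16 of the FP32 rate. Keep those numbers in mind for the results below.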

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender's GPU-accelerated rendering mode is an OpenCL based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
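
To give a sense of the kind of work LuxMark hands to the GPU, below is a minimal OpenCL C kernel sketch (our own illustrative example, not LuxRender's actual code) that tests one ray per work-item against a single sphere. Every ray is an independent, data-parallel task, which is precisely why ray tracing scales so well across thousands of shaders.

```c
// Illustrative OpenCL C kernel (hypothetical, not from LuxRender): one
// work-item per ray, each testing its ray against a single sphere.
__kernel void intersect_sphere(__global const float4* ray_origin, // xyz = origin
                               __global const float4* ray_dir,    // xyz = unit direction
                               __global float*        hit_t,      // distance to hit, or -1
                               const float4           sphere)     // xyz = center, w = radius
{
    size_t gid = get_global_id(0);               // this work-item's ray

    float3 oc = ray_origin[gid].xyz - sphere.xyz;
    float3 d  = ray_dir[gid].xyz;

    // Solve |oc + t*d|^2 = r^2 for t (quadratic with a = 1 for a unit direction)
    float b    = dot(oc, d);
    float c    = dot(oc, oc) - sphere.w * sphere.w;
    float disc = b * b - c;

    hit_t[gid] = (disc < 0.0f) ? -1.0f : (-b - sqrt(disc));
}
```

A real renderer traces millions of such rays and bounces them through the scene repeatedly, but the structure is the same: a large grid of independent work-items, which is exactly what a GPU's shader array is built for.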

Compute: LuxMark 3.0 - Hotel

The results with LuxMark ended up being quite a surprise, and not for a good reason. Compute workloads are shader workloads, and these are the workloads that should best illustrate the performance improvements of the R9 Fury X over the R9 290X. And yet while the R9 Fury X is the fastest single GPU AMD card, it's only some 16% faster, a far cry from the 50%+ that it should be able to attain.

Right now I have no reason to doubt that the R9 Fury X is capable of utilizing all of its shaders. It just can’t do so very well with LuxMark. Given the fact that the R9 Fury X is first and foremost a gaming card, and OpenCL 1.x traction continues to be low, I am wondering whether we’re seeing a lack of OpenCL driver optimizations for Fiji.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

Quickly taking some of the air out of our driver theory, the R9 Fury X's performance on CompuBench is quite a bit better, and much closer to what we'd expect given the card's hardware. The Fury X only takes the overall lead in Optical Flow, a somewhat memory bandwidth heavy test that unsurprisingly favors AMD's move to HBM, but its gains over the R9 290X run 40-50% across all three of these tests. Overall then, the question of who wins is heavily test dependent, though this is nothing new.

Our third compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and to accelerate the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we're focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 13 Video Render

At this point Vegas is becoming increasingly CPU-bound and will be due for replacement. The Fury X nonetheless shaves off an additional second of rendering time, bringing it down to 21 seconds.

Moving on, our fourth compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water molecules are explicitly simulated, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path of FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Compute: Folding @ Home: Explicit, Double Precision

Both of the FP32 tests for FAHBench show smaller than expected performance gains given the R9 Fury X's significant increase in compute resources and memory bandwidth. Gains of 25% and 34% respectively are still decent, but they're smaller than anything we saw on CompuBench. This does lend a bit more support to our theory about driver optimizations, though FAHBench has not always scaled well with compute resources to begin with.

Meanwhile FP64 performance dives as expected. With a 1/16 rate the R9 Fury X is not nearly as bad off as the GTX 900 series, but even the old Radeon HD 7970, with its 1/4 FP64 rate, beats the R9 Fury X here.

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
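
To show what a C++ AMP workload actually looks like (a hypothetical minimal example of our own, not one of SystemCompute's kernels), the sketch below dispatches a SAXPY-style element-wise operation to the GPU. On Windows the restrict(amp) lambda is compiled down to a DirectCompute shader, which is why C++ AMP counts as a DirectCompute workload; building it requires Visual C++ and amp.h.

```cpp
// Minimal C++ AMP sketch (illustrative only): y = a*x + y on the GPU.
#include <amp.h>
#include <iostream>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    // array_views wrap host memory; data is copied to the GPU on demand.
    concurrency::array_view<const float, 1> xv(n, x);
    concurrency::array_view<float, 1>       yv(n, y);

    // One GPU thread per element, expressed as a restrict(amp) lambda.
    concurrency::parallel_for_each(yv.extent,
        [=](concurrency::index<1> i) restrict(amp) {
            yv[i] = a * xv[i] + yv[i];
        });

    yv.synchronize();                        // copy results back to host memory
    std::cout << "y[0] = " << y[0] << "\n";  // expect 5
    return 0;
}
```

SystemCompute's own tests are considerably more involved, but they follow the same pattern: wrap the data in array_views, launch a parallel_for_each over it, and let the runtime handle the DirectCompute dispatch.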

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Our C++ AMP benchmark is another case of decent, though not amazing, GPU compute performance gains. The R9 Fury X picks up 35% over the R9 290X, and in fact this is enough to vault it over NVIDIA's cards and retake the top spot here, though not by a great amount.

Comments

  • looncraz - Friday, July 3, 2015 - link

    75MHz on a factory low-volted GPU is actually to be expected. If the voltage scaled automatically, like nVidia's, there is no telling where it would go. Hopefully someone cracks the voltage lock and gets to cranking up the hertz.
  • chizow - Friday, July 3, 2015 - link

    North of 400W is probably where we'll go, but I look forward to AMD exposing these voltage controls. It makes you wonder why they didn't release them from the outset, given they claimed the card was an "Overclocker's Dream" when it is anything but.
  • Refuge - Friday, July 3, 2015 - link

    It isn't unlocked yet, so nobody has overclocked it yet.
  • chizow - Monday, July 6, 2015 - link

    But but...AMD claimed it was an Overclocker's Dream??? Just another good example of what AMD says and reality being incongruent.
  • Thatguy97 - Thursday, July 2, 2015 - link

    would you say amd is now the "geforce fx 5800"
  • sabrewings - Thursday, July 2, 2015 - link

    That wasn't so much due to ATI's excellence. It had a lot to do with NVIDIA dropping the ball horribly, off a cliff, into a black hole.

    They learned their lessons and turned it around. I don't think either company "lost" necessarily, but I will say NVIDIA won. They do more with less: more performance with less power, fewer transistors, fewer SPs, and less bandwidth. Both cards perform admirably, but we all know the Fury X would've been more expensive had the 980 Ti not launched where it did. So, to perform arguably on par, AMD is living with smaller margins on probably smaller volume, while Nvidia has plenty of volume with the 980 Ti and their base cost is less as they're essentially using Titan X throw away chips.
  • looncraz - Thursday, July 2, 2015 - link

    They still had to pay for those "Titan X throw away chips" and they cost more per chip to produce than AMD's Fiji GPU. Also, nVidia apparently had to cut the GPU down less than they were planning in response to AMD's expected performance. Consumers win, of course, but it isn't like nVidia did something magical; they simply bit the bullet and undercut their own offerings by barely cutting down the Titan X to make the 980 Ti.

    That said, it is very telling that the AMD GCN architecture is less balanced in relation to modern games than the nVidia architecture; however, the GCN architecture has far more features that are going unused. That is one long-standing habit ATi and, now, AMD engineers have had: plan for the future in their current chips. It's actually a bad habit, as it uses silicon and transistors that just sit around sucking up power and wasting space for, usually, years before the features finally become useful... and then, by that time, the performance level delivered by those dormant bits is intentionally outdone by the competition to make AMD look inferior.

    AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them.

    AMD's only possible upcoming saving grace could be that they might be on Samsung's 14nm LPP FinFet tech at GloFo and nVidia will be on TSMC's 16nm FinFet tech. If AMD plays it right they can keep this advantage for a couple generations and maximize the benefits that could bring.
  • vladx - Thursday, July 2, 2015 - link

    Afaik, even though TSMC's FinFET will be 16nm, it's a superior process overall to GloFo's 14nm FF, so I doubt AMD will gain any advantage.
  • testbug00 - Sunday, July 5, 2015 - link

    TSMC's FinFET 16nm process might be better than GloFo's own canceled 14XM or whatever they called it.

    Better than Samsung's 14nm? Dubious. Very unlikely.
  • chizow - Sunday, July 5, 2015 - link

    Why is it dubious? What's the biggest chip Samsung has fabbed? If they start producing chips bigger than the 100mm^2 chips for Apple, then we can talk, but as much flak as TSMC gets over delays/problems, they still produce what are arguably the world's most advanced semiconductors, right there next to Intel's biggest chips in size and complexity.
