Compute

Shifting gears, let's take a look at compute performance.

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL-based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as the workload maps well to GPU pipelines and allows artists to render scenes much more quickly than with CPUs alone.

Compute: LuxMark 3.0 - Hotel

LuxMark ends up being a great corner case where having a fully enabled Fiji GPU is more important than having the highest clockspeeds. With the R9 Nano able to flirt with its full 1000MHz clockspeed here, the card is able to pass the R9 Fury. The only thing stopping it from taking the second-place spot is the R9 390X, as Hawaii still sees strong performance in this benchmark even with fewer SPs.
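To illustrate the point about ray tracing mapping well to GPU pipelines, below is a small, purely illustrative C++ sketch (generic code, not anything from LuxRender): every pixel traces its own primary ray and tests it against the scene independently of every other pixel, so a GPU ray tracer like LuxRender's OpenCL path can simply assign one work-item per pixel instead of looping.

```cpp
// Illustrative only: a per-pixel ray/sphere intersection pass written as a
// serial CPU loop. On a GPU (e.g. an OpenCL ray tracer) each iteration of
// this loop would become one independent work-item.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns true if a ray from 'origin' along unit direction 'dir' hits a
// sphere at 'center' with radius 'r'.
static bool hitSphere(const Vec3& origin, const Vec3& dir,
                      const Vec3& center, float r) {
    Vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - r * r;
    return b * b - 4.0f * c >= 0.0f;   // discriminant test
}

int main() {
    const int width = 640, height = 360;
    const Vec3 camera = { 0, 0, 0 };
    const Vec3 sphereCenter = { 0, 0, -3 };
    std::vector<unsigned char> image(width * height);

    // Each pixel is completely independent -- this is the loop a GPU ray
    // tracer distributes across its shader array.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float u = (x - width  * 0.5f) / height;
            float v = (y - height * 0.5f) / height;
            float len = std::sqrt(u * u + v * v + 1.0f);
            Vec3 dir = { u / len, v / len, -1.0f / len };
            image[y * width + x] = hitSphere(camera, dir, sphereCenter, 1.0f) ? 255 : 0;
        }
    }
    std::printf("center pixel: %d\n", image[(height / 2) * width + width / 2]);
    return 0;
}
```

In an OpenCL kernel the (x, y) indices would come from get_global_id(), and the body of the inner loop would essentially be the entire kernel.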

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.

Compute: CompuBench 1.5 - Face Detection

Compute: CompuBench 1.5 - Optical Flow

Compute: CompuBench 1.5 - Particle Simulation 64K

CompuBench provides us another case where the R9 Nano ends up outpacing the R9 Fury. As a result AMD’s latest card tends to perform somewhere between an R9 Fury and an R9 Fury X, with all of the strengths and weaknesses that come from that. This puts the R9 Nano in a good place for Optical Flow, while it will still trail NVIDIA’s best cards under Face Detection and the 64K particle simulation.

Meanwhile it’s interesting to note that AMD’s particle sim scores have significantly improved in the recent drivers. GCN 1.2 cards have seen 20%+ performance improvements here, which may point to some new OpenCL compiler optimizations from AMD.
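As for what a particle simulation benchmark is exercising, the heart of such a test is typically a per-particle update step along the lines of the generic sketch below (our own illustration, not CompuBench's actual kernel). Because no particle writes to another particle's state, all 64K particles can be updated concurrently, which is why this kind of sub-test responds so well to shader throughput and, it would seem, to compiler-level optimizations.

```cpp
// Generic, illustrative particle integration step (not CompuBench's code).
// Each particle updates independently, so a GPU maps one work-item to one
// particle; here it is shown as a plain C++ loop.
#include <cstdio>
#include <vector>

struct Particle {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

void step(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.8f;
    for (Particle& p : particles) {      // one GPU thread per particle
        p.vy += gravity * dt;            // apply gravity
        p.px += p.vx * dt;               // integrate position (Euler)
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        if (p.py < 0.0f) {               // bounce off the ground plane
            p.py = 0.0f;
            p.vy = -0.5f * p.vy;
        }
    }
}

int main() {
    std::vector<Particle> particles(64 * 1024, Particle{0, 10, 0, 1, 0, 0});
    for (int i = 0; i < 100; ++i) step(particles, 0.01f);
    std::printf("particle 0: y=%.3f\n", particles[0].py);
    return 0;
}
```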

Our 3rd compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 13 Video Render

With Vegas there are no surprises; the R9 Nano ties the R9 Fury.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Compute: Folding @ Home: Explicit, Double Precision

Much like CompuBench and LuxMark, the R9 Nano punches above its weight here. The lack of a graphics workload – and resulting demands on graphics hardware like the ROPs – means most of the card’s power can be allocated to the shaders, allowing higher clockspeeds. This gives the Nano a boost in this situation to bring it much closer to the Fury X, though as far as Folding goes AMD will still trail NVIDIA’s best cards.
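To give a rough sense of why the explicit solvent runs are so much heavier, the toy sketch below (a generic pairwise-interaction loop, not FAHCore 17's actual math) shows how the work grows with the number of atoms: every added water atom interacts with every other atom, so the pair count balloons. The same sketch also shows where the single vs. double precision split enters; changing the 'real' alias to double is the FP64 variant, which consumer GPUs execute at a fraction of their FP32 rate.

```cpp
// Toy pairwise-interaction sketch (generic illustration, not FAHCore 17's
// actual implementation). Work grows roughly with the square of the atom
// count, which is why simulating explicit water atoms is so much heavier.
#include <cmath>
#include <cstdio>
#include <vector>

using real = float;   // change to double for the double precision variant

struct Atom { real x, y, z, charge; };

// Sum a simple Coulomb-style energy over all unique atom pairs.
// On a GPU, each outer iteration would typically map to one work-item.
real pairwiseEnergy(const std::vector<Atom>& atoms) {
    real energy = 0;
    for (size_t i = 0; i < atoms.size(); ++i) {
        for (size_t j = i + 1; j < atoms.size(); ++j) {
            real dx = atoms[i].x - atoms[j].x;
            real dy = atoms[i].y - atoms[j].y;
            real dz = atoms[i].z - atoms[j].z;
            real r  = std::sqrt(dx * dx + dy * dy + dz * dz);
            energy += atoms[i].charge * atoms[j].charge / r;
        }
    }
    return energy;
}

int main() {
    // A small protein-only system vs. the same system with explicit water.
    std::vector<Atom> withWater(10000);
    for (size_t i = 0; i < withWater.size(); ++i)
        withWater[i] = { real(0.1) * real(i), 0, 0, real(1) };
    std::vector<Atom> proteinOnly(withWater.begin(), withWater.begin() + 1000);

    std::printf("protein only:   %zu atoms, energy %g\n",
                proteinOnly.size(), double(pairwiseEnergy(proteinOnly)));
    std::printf("explicit water: %zu atoms, energy %g\n",
                withWater.size(), double(pairwiseEnergy(withWater)));
    return 0;
}
```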

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Our final test sees the R9 Nano brought back to its place behind the R9 Fury, as the C++ AMP sub-tests are strenuous enough to cause more significant clockspeed throttling. Even behind the R9 Fury the R9 Nano does well for itself here, coming in behind the GTX 980 Ti and ahead of the R9 390X and GTX 980.
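For readers unfamiliar with C++ AMP, here is a minimal, generic vector-add sketch (our own illustration, not one of SystemCompute's sub-tests) showing what the programming model looks like: the GPU kernel is an ordinary C++ lambda marked restrict(amp), which the Visual C++ compiler lowers to a DirectCompute shader.

```cpp
// Minimal C++ AMP sketch (generic example, not SystemCompute's code).
// Requires Visual C++; the restrict(amp) lambda is compiled down to a
// DirectCompute (DirectX 11 compute shader) kernel.
#include <amp.h>
#include <iostream>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // array_view wraps host memory; data is copied to the GPU on demand.
    concurrency::array_view<const float, 1> av(n, a);
    concurrency::array_view<const float, 1> bv(n, b);
    concurrency::array_view<float, 1> cv(n, c);
    cv.discard_data();   // no need to copy c's initial contents to the GPU

    // One GPU thread per element, expressed as ordinary C++.
    concurrency::parallel_for_each(cv.extent,
        [=](concurrency::index<1> idx) restrict(amp) {
            cv[idx] = av[idx] + bv[idx];
        });

    cv.synchronize();    // copy results back to the host vector
    std::cout << "c[0] = " << c[0] << std::endl;   // expect 3
    return 0;
}
```

SystemCompute's sub-tests presumably wrap much heavier algorithms in this same parallel_for_each pattern, which, given that C++ AMP executes through DirectCompute on Windows, is why the benchmark serves as our other DirectCompute test.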


284 Comments

  • SeanJ76 - Thursday, September 10, 2015 - link

    AMD is about to claim bankruptcy......
  • silverblue - Friday, September 11, 2015 - link

    Somebody just bought 20% of their shares. If you want them to file chapter 11, be a little more patient, grasshopper.
  • close - Thursday, September 10, 2015 - link

    Guess Nvidia is dead to you as a brand also for the whole 3.5GB issue (which we all know how well was handled). That leaves you with the Intel iGPU. But some people have the little fetish of being crapped on from a single direction.

    Saying "they're dead to me as a brand" is the same as saying "from now on I will disconsider their offerings even if they may be better value or simply better". And this does you no favors, trust me.
  • Azix - Thursday, September 10, 2015 - link

    Does AMD not give out review guidelines? It seems that's something nvidia does. eg when the Ashes benchmark came out they told review sites not to use AA, a lot didn't. Maybe AMD figures some sites will ignore this guidance. eg. if they said nano was not to be compared to the 980ti or fury X and was a niche product for small cases, some sites like kitguru would still compare it to a 980ti rather than the closest mini GPU
  • gw74 - Thursday, September 10, 2015 - link

    It is none of the companies' business how their products are reviewed. Their only business to make good products. Anyone can compare anything they like to anything else and benchmark it using anything they want.
  • ianmills - Thursday, September 10, 2015 - link

    I wish it was but even anandtech falls in line with this and overuses company's marketing terms to make it hard to compare to previous generations
  • Ryan Smith - Thursday, September 10, 2015 - link

    Interesting. I'm certainly not trying to "fall in line" or otherwise use specific marketing terms, so if I'm doing that then it's unplanned. What terms have I been using, so that I can watch out for it in the future?
  • Alexvrb - Friday, September 11, 2015 - link

    Yeah! Tell em gw! Same with automotive testing. No guidelines, no rules! If they loan you a 1-ton pickup truck and you compare it to sports cars on a twisty track, bash the truck and give it a horrible review for "poor handling vs 500K exotic sports cars" - well that's none of their business!

    /sarcasm
  • gw74 - Sunday, September 13, 2015 - link

    I am talking about no guidelines or rules from the manufacturers, genius. That obviously does not mean the reviewing party does not use its brain to compare and test in a sensible way. You absolute clown.
  • Kutark - Thursday, September 10, 2015 - link

    They're not demands, they're just telling people ahead of time if there is a particular game that is exhibiting issues with a particular setting. Which especially if its an in progress issue they're debugging, doesn't paint a good picture of the product, and only serves to give ammunition for detractors to cherry pick data points to use in their crusades.
