Compute

Jumping into compute, our performance expectations are a mixed bag. On the one hand, as part of the newer GCN 1.2 architecture AMD has been doing some tweaking under the hood; on the other hand, the most important aspects of the architecture – the memory model and thread execution – are not fundamentally different from the GCN 1.0 based R9 280. As a result we’re not necessarily expecting any performance leaps here, though there is the possibility that we’ll find some along the way.

As always we’ll start with LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
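
To illustrate why ray tracing parallelizes so readily, below is a minimal OpenCL C kernel sketch of our own – not code from SmallLuxGPU – in which each work-item independently traces one primary ray against a single hard-coded sphere. The kernel name, arguments, and scene are purely illustrative; a real ray tracer traverses a full acceleration structure, but the key property is the same: no ray depends on any other ray.

    // Hypothetical OpenCL C kernel: one work-item per primary ray.
    __kernel void trace_primary_rays(__global const float4 *ray_origins,
                                     __global const float4 *ray_dirs,
                                     __global float *hit_distances,
                                     const float4 sphere)  // xyz = center, w = radius
    {
        int gid = get_global_id(0);              // this work-item's ray index

        float3 o = ray_origins[gid].xyz;
        float3 d = normalize(ray_dirs[gid].xyz);
        float3 oc = o - sphere.xyz;

        // Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
        float b = dot(oc, d);
        float c = dot(oc, oc) - sphere.w * sphere.w;
        float disc = b * b - c;

        // Each ray writes its own result; there is no inter-thread
        // communication, which is why the workload scales across
        // thousands of GPU threads.
        hit_distances[gid] = (disc < 0.0f) ? -1.0f : (-b - sqrt(disc));
    }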

Compute: LuxMark 2.0

Right off the bat we find an unexpected performance regression with LuxMark. All things considered we would expect the R9 285 to score similarly to the R9 280 given their nearly identical theoretical FP32 throughput, similar to what we’ve seen in our gaming benchmarks. Instead the R9 285 trails its predecessor by 15%, coming very close to tying the otherwise much slower R9 270X. Given that this is a new architecture there are a few possibilities here, including a lack of OpenCL driver optimizations on AMD’s part, though we can’t entirely rule out memory bandwidth either, since ray tracing can burn through bandwidth at times. Tonga is after all first and foremost a graphics product, and AMD’s bandwidth-saving color compression technology is likewise designed for graphics rather than compute, meaning the R9 285 has little to offset its loss of raw memory bandwidth versus the R9 280 in compute tasks.
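
For reference, here is how the on-paper numbers work out; this is our own arithmetic based on the cards' official specifications, so treat it as a rough guide:

    R9 285: 1792 SPs x 918MHz x 2 FLOPs ≈ 3.29 TFLOPS FP32; 256-bit GDDR5 @ 5.5GHz ≈ 176GB/sec
    R9 280: 1792 SPs x 933MHz x 2 FLOPs ≈ 3.34 TFLOPS FP32; 384-bit GDDR5 @ 5GHz   ≈ 240GB/sec

In other words the two cards are within about 2% of each other on theoretical shader throughput, but the R9 285 gives up roughly 27% of the R9 280's memory bandwidth, which is why bandwidth is the prime suspect whenever the newer card falls behind.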

In any case, even with the R9 285 lagging the R9 280, it’s otherwise a strong showing for AMD. AMD cards overall perform very well in this benchmark compared to NVIDIA’s offerings, so the R9 285 has no trouble shooting well past the GTX 760.

Our second compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and to accelerate the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

Unlike LuxMark, we aren’t seeing a performance gain here, but neither are we seeing a regression. The R9 285 is every bit as fast as the R9 280. Meanwhile, as has consistently been the case in this benchmark, all of AMD’s cards are well ahead of our NVIDIA cards.

Our third benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; for our standard benchmark suite we focus on the most practical of them, the computer vision test and the fluid simulation test. The former is a useful proxy for computer imaging tasks where systems are required to parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.

Compute: CLBenchmark 1.1 Fluid Simulation

Compute: CLBenchmark 1.1 Computer Vision

Depending on which subtest we’re looking at, the R9 285 either outperforms or trails the R9 280. The fluid simulation subtest finds the R9 285 performing just shy of the more powerful R9 280X, while the R9 285 comes up short of the R9 280 in computer vision. Computer vision is the more bandwidth sensitive benchmark of the two, so it follows that it’s the benchmark more likely to be influenced by the loss of raw memory bandwidth. Otherwise the R9 285’s strong showing in the fluid simulation is unexpected, and given what we know we’re at a bit of a loss to explain it.

Looking at the broader picture, this is yet another test where AMD’s cards do well against NVIDIA’s consumer cards, which are far less compute-focused. Overall the R9 285 is 2-3x faster than the GTX 760 here.

Moving on, our fourth compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

When it comes to single precision the R9 285 edges out the R9 280, though not significantly so. The R9 285 still seemingly benefits from some of the GCN 1.2 architectural optimizations, just not to the same extent we’ve seen in other benchmarks.

Overall, AMD’s GCN cards are strong performers in this benchmark and the R9 285 is no exception. The GTX 760 trails the R9 285 in implicit single precision, and is blown away in the explicit single precision benchmark.

Compute: Folding @ Home: Explicit, Double Precision

Meanwhile for double precision the R9 285 falls well behind the R9 280. Since Tonga is not designed to pull double duty as both a graphics GPU and a high performance compute GPU the way Tahiti was, it is configured for 1/16 rate double precision performance, just 1/4 the rate of the more powerful Tahiti. As a result it can never keep up with the R9 280 in a double precision workload. Consequently, while AMD and the R9 285 still hold a lead over NVIDIA in F@H with double precision, it’s not to the degree we’ve seen elsewhere; the R9 285 is only about 30% faster than the GTX 760 here.
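
To put some rough numbers on that gap – our own back-of-the-envelope math, using the rate ratios above – the R9 285’s ~3.29 TFLOPS of single precision throughput works out to only ~0.21 TFLOPS at 1/16 rate double precision, whereas the R9 280’s ~3.34 TFLOPS at 1/4 rate comes to ~0.84 TFLOPS. On paper the older card has roughly 4x the double precision throughput of its replacement.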

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
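
For readers who haven’t encountered C++ AMP before, the sketch below shows roughly what it looks like; this is our own minimal illustration rather than code from SystemCompute. The restrict(amp) lambda is compiled for the GPU, each index in the extent becomes one GPU thread, and on Windows the dispatch goes through DirectCompute under the hood.

    // Minimal C++ AMP example (illustrative only): element-wise vector
    // addition offloaded to the GPU, dispatched via DirectCompute.
    #include <amp.h>
    #include <vector>
    #include <iostream>

    int main()
    {
        const int n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        // array_view wraps host memory; data is copied to the GPU on demand.
        concurrency::array_view<const float, 1> av_a(n, a);
        concurrency::array_view<const float, 1> av_b(n, b);
        concurrency::array_view<float, 1> av_c(n, c);
        av_c.discard_data();  // c's initial contents never need to reach the GPU

        // Each index in the extent runs as one GPU thread.
        concurrency::parallel_for_each(av_c.extent,
            [=](concurrency::index<1> idx) restrict(amp)
            {
                av_c[idx] = av_a[idx] + av_b[idx];
            });

        av_c.synchronize();  // copy the results back to the host vector
        std::cout << c[0] << std::endl;  // prints 3
        return 0;
    }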

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

SystemCompute exposes another case where the R9 285 comes up short compared to the R9 280, though only slightly. AMD’s latest card delivers 93% of the performance of an R9 280, and most likely it’s suffering just a bit from the reduction in memory bandwidth. Otherwise it’s still more than 50% ahead of the GTX 760, and comfortably ahead of the more powerful GTX 770.


86 Comments


  • chizow - Thursday, September 11, 2014 - link

    If Tonga is a referendum on Mantle, it basically proves Mantle is a failure and will never succeed. This pretty much shows most of what AMD said about Mantle is BS, that it takes LESS effort (LMAO) on the part of the devs to implement than DX.

    If Mantle requires both an application update (game patch) from devs AFTER the game has already run past its prime shelf-date AND also requires AMD to release optimized drivers every time a new GPU is released, then there is simply no way Mantle will ever succeed in a meaningful manner with that level of effort. Simply put, no one is going to put in that kind of work if it means re-tweaking every time a new ASIC or SKU is released. Look at BF4, it's already in the rear-view mirror from DICE's standpoint, and no one even cares anymore as they are already looking toward the next Battlefield#
  • TiGr1982 - Thursday, September 11, 2014 - link

    Please stop calling GPUs ASICs - this looks ridiculous.
    Please go to Wikipedia and read what "ASIC" is.
  • chizow - Thursday, September 11, 2014 - link

    Is this a joke or are you just new to the chipmaking industry? Maybe you should try re-reading the Wikipedia entry to understand GPUs are ASICs despite their more recent GPGPU functionality. GPU makers like AMD and Nvidia have been calling their chips ASICs for decades and will continue to do so, your pedantic objections notwithstanding.

    But no need to take my word for it, just look at their own internal memos and job listings:

    https://www.google.com/#q=intel+asic
    https://www.google.com/#q=amd+asic
    https://www.google.com/#q=nvidia+asic
  • TiGr1982 - Thursday, September 11, 2014 - link

    OK, I accept your arguments, but I still don't like this kind of terminology. To me, one may call things like a fixed-function video decoder an "ASIC" (for example the UVD blocks inside Radeon GPUs), but not a GPU as a whole, because people have been doing GPGPU on GPUs for a number of years now, and the "General Purpose" in GPGPU contradicts the "Application Specific" in ASIC, doesn't it?
    So, overall it's a terminology/naming issue; everyone uses whatever naming they want to use.
  • chizow - Thursday, September 11, 2014 - link

    I think you are over-analyzing things a bit. When you look at the entire circuit board for a particular device, you will see each main component or chip is considered an ASIC, because each one has a specific application.

    For example, even the CPU is an ASIC: although it handles all general processing, its specific application on a PC mainboard is to serve as the central processing unit. Similarly, a southbridge chip handles I/O and communications with peripheral devices, a northbridge handles traffic to/from the CPU and RAM, and so on and so forth.
  • TiGr1982 - Thursday, September 11, 2014 - link

    OK, then according to this (broad) understanding, every chip in the silicon industry may be called an ASIC :)
    Let it be.
  • chizow - Friday, September 12, 2014 - link

    Yes, that is why everyone in the silicon industry calls their chips that have specific applications ASICs. ;)

    Something like a capacitor or resistor, etc. would not be, as they are common commodity parts.
  • Sabresiberian - Thursday, September 11, 2014 - link

    I reject the notion that we should be satisfied with a slower rate of GPU performance increase. We have more use than ever before for a big jump in power. 2560x1440@144Hz. 4K@60Hz.

    Of course it's all good for me to say that without being a micro-architecture design engineer myself, but I think it's time for a total re-think. Or if the companies are holding anything back - bring it out now, please! :)
  • Stochastic - Thursday, September 11, 2014 - link

    Process node shrinks are getting more and more difficult, equipment costs are rising, and the benefits of moving to a smaller node are also diminishing. So sadly I think we'll have to adjust to a more sedate pace in the industry.
  • TiGr1982 - Thursday, September 11, 2014 - link

    I've been an AMD Radeon user for more than 10 years, but after reading this R9 285 review I can't help but think that, based on the results of the smaller GM107 in the 750 Ti, GM204 in the GTX 970/980 may offer much better performance per watt and per unit of die area (at least for gaming tasks) in comparison to the whole AMD GPU lineup. Soon we'll see whether or not this will be the case.
