Compute: The Real Reason for GCN

Moving on from our game tests we’ve now reached the compute benchmark segment of our review. While the gaming performance of the 7970 will have the most immediate ramifications for AMD and the product, it is the compute performance that I believe is the more important metric in the long run. GCN is both a gaming and a compute architecture, and while its gaming pedigree is well defined, its real-world compute capabilities have yet to be demonstrated.

With that said, we’re going to open up this section with a rather straightforward statement: the current selection of compute applications for AMD GPUs is extremely poor. This is especially true for anything that would be suitable as a benchmark. Perhaps this is because developers ignored Evergreen and Northern Islands due to their low compute performance, or perhaps this is because developers still haven’t warmed up to OpenCL, but here at the tail end of 2011 there just aren’t very many applications that can make meaningful use of the pure compute capabilities of AMD’s GPUs.

Aggravating matters, some of the most popular applications that can use AMD’s compute capabilities have been hand-tuned for AMD’s previous architectures to the point that they simply will not run on Tahiti right now. Folding@Home, FLACCL, and a few other candidates we looked into for use as compute benchmarks all fall under this umbrella, and as a result we only have a limited toolset to work with for evaluating the compute performance of GCN.

So with that out of the way, let’s get started.

Since we just ended with Civilization V as a gaming benchmark, let’s start with Civilization V as a compute benchmark. We’ve seen Civilization V’s performance skyrocket on the 7970, and we’ve theorized that it’s due to improvements in compute shader performance; now we have a chance to prove it.

Compute: Civilization V

And there’s our proof. Compared to the 6970, the 7970’s performance in this benchmark has jumped by 58%, and even the GTX 580, the previous leader, now trails the 7970 by 12%. GCN’s compute ambitions are clearly paying off, and in the case of Civilization V it’s even enough to dethrone NVIDIA entirely. If you’re AMD there’s not much more you can ask for.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

Compute: SmallLuxGPU 2.0d4

Again the 7970 does incredibly well compared to AMD’s past architectures. AMD already performed respectably here even with the limited compute performance of their VLIW4 architecture, and with GCN AMD puts their old architectures to shame, and NVIDIA along with them. Among single-GPU cards the GTX 580 is the closest competitor, and even then the 7970 leads it by 72%. The story is much the same for the 7970 versus the 6970, where the 7970 leads by 74%. If AMD can continue to deliver performance gains like these, GCN is going to be a formidable force in the HPC market when it eventually makes its way there.

For our next benchmark we’re once again looking at compute shader performance, this time through the fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16k-particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using two of them: a highly optimized grid search that Microsoft based on an earlier CUDA implementation, and an O(n²) nearest neighbor method that is optimized by using shared memory to cache data.
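To make the two approaches more concrete, below is a minimal CUDA sketch of the brute-force O(n²) idea, written as a sketch under our own assumptions rather than the DirectX SDK sample itself: every particle still tests every other particle, but positions are staged through shared memory one tile at a time so the inner loop reads on-chip storage instead of DRAM. The kernel name, tile size, and smoothing constant below are illustrative choices of ours.

```cuda
// Illustrative CUDA sketch of a brute-force O(n^2) neighbor pass with
// shared-memory tiling; not the DirectX SDK fluid sample itself.
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256                  // particles cached per shared-memory tile
#define SMOOTH_RADIUS 0.012f      // illustrative smoothing radius

__global__ void computeDensity(const float2* pos, float* density, int n)
{
    __shared__ float2 tilePos[TILE];

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float2 pi = (i < n) ? pos[i] : make_float2(0.f, 0.f);
    float sum = 0.f;

    // Walk the full particle list one shared-memory tile at a time.
    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tilePos[threadIdx.x] = (j < n) ? pos[j] : make_float2(1e9f, 1e9f);
        __syncthreads();

        // Brute-force inner loop over the cached tile (on-chip reads).
        for (int k = 0; k < TILE && base + k < n; ++k) {
            float dx = pi.x - tilePos[k].x;
            float dy = pi.y - tilePos[k].y;
            float r2 = dx * dx + dy * dy;
            float h2 = SMOOTH_RADIUS * SMOOTH_RADIUS;
            if (r2 < h2) {
                float w = h2 - r2;          // simple poly6-style falloff
                sum += w * w * w;
            }
        }
        __syncthreads();
    }

    if (i < n)
        density[i] = sum;
}

int main()
{
    const int n = 16 * 1024;                // 16k particles, as in the sample
    float2* dPos;  float* dDen;
    cudaMalloc(&dPos, n * sizeof(float2));
    cudaMalloc(&dDen, n * sizeof(float));
    cudaMemset(dPos, 0, n * sizeof(float2));

    computeDensity<<<(n + TILE - 1) / TILE, TILE>>>(dPos, dDen, n);
    cudaDeviceSynchronize();
    printf("density pass: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(dPos);
    cudaFree(dDen);
    return 0;
}
```

The grid search variant sidesteps most of this work by binning particles into cells and only testing neighboring cells, which is why it is normally the faster path when the hardware and drivers handle it well.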

Compute: DirectX11 Compute Shader Fluid Simulation

There are many things we can gather from this data, but let’s address the most important conclusions first. Regardless of the algorithm used, AMD’s VLIW4 and VLIW5 architectures had relatively poor performance in this simulation; NVIDIA meanwhile has strong performance with the grid search algorithm, but more limited performance with the shared memory algorithm. 7970 consequently manages to blow away the 6970 in all cases, and while it can’t beat the GTX 580 at the grid search algorithm it is 45% faster than the GTX 580 with the shared memory algorithm.

With GCN AMD put a lot of effort into compute performance, not only with respect to their shader/compute hardware, but also with the caches and shared memory needed to feed that hardware. While I don’t believe we have enough data to say anything definitive about how Tahiti/GCN’s cache compares to Fermi’s, this benchmark does raise the possibility that GCN’s cache design is better suited to less-than-optimal brute force algorithms. If that’s the case, what this means for AMD could be huge, as it could open up HPC market opportunities that NVIDIA can’t easily reach, and it could certainly help AMD take market share from NVIDIA.

Moving on to our final two benchmarks, we’ve gone spelunking through AMD’s OpenCL sample archive to dig up a couple more compute scenarios to use to evaluate GCN. The first of these is AESEncryptDecrypt, an OpenCL sample that AES encrypts/decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.

Compute: AESEncryptDecrypt

We went into the AMD OpenCL sample archive knowing that the projects in it were likely already well suited to AMD’s previous architectures, and there is definitely a degree of that in our results. The 6970 already performs decently in this benchmark, and ultimately the GTX 580 is the top competitor. However the 7970 still manages to improve on the 6970 by a sizable degree, accomplishing this encryption task in only 65% of the time. Meanwhile compared to the GTX 580 it trails by roughly 12%, which shows that if nothing else Fermi and GCN are going to have their own architectural strengths and weaknesses, although there’s obviously some room for improvement.

One interesting fact we gathered from this compute benchmark is that it benefitted from the increase in bandwidth offered by PCI Express 3.0. With PCIe 3.0 the 7970 improves by about 10%, showcasing just how important transport bandwidth is for some compute tasks. Ultimately we’ll reach a point where even games will be able to take full advantage of PCIe 3.0, but for right now it’s the compute uses that will benefit the most.

Our final benchmark also comes from the AMD OpenCL archives, and it’s a variant of the Monte Carlo method implemented in OpenCL. Here we’re timing how long it takes to execute a 400-step simulation.
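As a rough illustration of what this kind of workload looks like, the CUDA example below has each thread walk an independent simulated price path and record a discounted payoff; mirroring the “Asian” in the sample’s name, the payoff uses the average price along the path. This is an assumption-based sketch, not AMD’s actual sample code, and the kernel name, parameter values, and RNG seed are our own illustrative choices.

```cuda
// Illustrative CUDA sketch of Monte Carlo Asian option pricing;
// not AMD's MonteCarloAsian OpenCL sample.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void asianPaths(float* payoff, int nPaths, int nSteps,
                           float s0, float strike, float r, float sigma, float T)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPaths) return;

    curandState state;
    curand_init(1234ULL, i, 0, &state);      // independent RNG stream per thread

    float dt    = T / nSteps;
    float drift = (r - 0.5f * sigma * sigma) * dt;
    float vol   = sigma * sqrtf(dt);

    float s = s0;
    float sum = 0.f;
    for (int k = 0; k < nSteps; ++k) {       // 400-step path, as in the benchmark
        s *= expf(drift + vol * curand_normal(&state));
        sum += s;                            // accumulate for the path average
    }
    float avg = sum / nSteps;
    payoff[i] = expf(-r * T) * fmaxf(avg - strike, 0.f);   // discounted call payoff
}

int main()
{
    const int nPaths = 1 << 20, nSteps = 400;
    float* dPayoff;
    cudaMalloc(&dPayoff, nPaths * sizeof(float));

    asianPaths<<<(nPaths + 255) / 256, 256>>>(dPayoff, nPaths, nSteps,
                                              100.f, 100.f, 0.05f, 0.2f, 1.0f);
    cudaDeviceSynchronize();

    // Averaging payoff[] on the host (omitted) gives the option price estimate.
    printf("simulation: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(dPayoff);
    return 0;
}
```

Because every path is independent, the problem is embarrassingly parallel and almost entirely ALU-bound, which is exactly the kind of workload where raw shader throughput shows.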

Compute: MonteCarloAsian

For our final benchmark the 7970 once again takes the lead. The rest of the Radeon pack is close behind, so GCN isn’t providing an immense benefit here, but AMD still improves on the 6970 by 14%. Meanwhile the lead over the GTX 580 is larger, at 33%.

Ultimately from these benchmarks it’s clear that AMD is capable of delivering on at least some of the theoretical compute potential that GCN brings to the table. Not unlike gaming performance, this is often going to depend on the task at hand, but the performance here proves that in the right scenario Tahiti is a very capable compute GPU. Will it be enough to make a run at NVIDIA’s dominance with Tesla? At this point it’s too early to tell, but the potential is there, which is much more than we could say about VLIW4.

Comments

  • Zingam - Thursday, December 22, 2011 - link

    And by the time it is available in D3D, AMD's implementation won't be compatible... :D That sounds familiar. So we'll have to wait for another generation to get things right.
  • Ryan Smith - Thursday, December 22, 2011 - link

    As for your question about FP64, it's worth noting that of the FP64 rates AMD listed for GCN, "0" was not explicitly an option. It's quite possible that anything using GCN will have at a minimum 1/16th FP64.
  • Sind - Thursday, December 22, 2011 - link

    Excellent review, thanks Ryan. Looking forward to seeing where 7950 performance and pricing end up, and to seeing what NV has up their sleeve. Although I can't shake the feeling AMD is holding back.
  • chizow - Thursday, December 22, 2011 - link

    Another great article, I really enjoyed all the state-of-the-industry commentary more than the actual benchmarks and performance numbers.

    One thing I may have missed was any coverage at all of GCN. Usually you guys have all those block diagrams and arrows explaining the changes in architecture. I know you or Anand did a write-up on GCN awhile ago, but I may have missed the link to it in this article. Or maybe put a quick recap in there with a link to the full write-up.

    But with GCN, I guess we can close the book on AMD's past Vec5/VLIW4 archs as compute failures? For years ATI/AMD and their supporters have insisted it was the better compute architecture, and now we're on the 3rd major arch change since unified shaders, while Nvidia has remained remarkably consistent with their simple SP approach. I think the most striking aspect of this consistency is that you can run any CUDA or GPU accelerated apps on GPUs as old as G80, while you noted that some of the most popular compute apps won't even run on the 7970 because of arch-specific customizations.

    I also really enjoyed the ISV and driver/support commentary. It sounds like AMD is finally serious about "getting in the game" or whatever they're branding it nowadays, but I have seen them ramp up their efforts with their logo program. I think one important thing for them to focus on is to get into more *quality* games rather than just focusing on getting their logo program into more games. Still, as long as both Nvidia and AMD are working to further the compatibility of their cards without pushing too many vendor-specific features, I think that's a win overall for gamers.

    A few other minor things:

    1) I believe Nvidia will soon be countering MLAA with a driver-enabled version of their FXAA. While FXAA is available to both AMD and Nvidia if implemented in-game, providing it driver-side will be a pretty big win for Nvidia given how much better performance and quality it offers over AMD's MLAA.

    2) When referring to the active DP adapter, shouldn't it be DL-DVI? In your blurb it said SL-DVI. It's interesting they went this route with the outputs, but providing the active adapter was definitely a smart move. Also, is there any reason GPU mfgs don't just add additional TMDS transmitters to overcome the 4x limitation? Or is it just a cost issue?

    3) The HDMI discussion is a bit fuzzy. HDMI 1.4b specs were just finalized, but haven't been released. Any idea whether or not SI or Kepler will support 1.4b? Biggest concern here is for 120Hz 1080p 3D support.

    Again, thoroughly enjoyed reading the article, great job as usual!
  • Ryan Smith - Thursday, December 22, 2011 - link

    Thanks for the kind words.

    Quick answers:

    2) No, it's an active SL-DVI adapter. DL-DVI adapters exist, but are much more expensive and more cumbersome to use because they require an additional power source (usually USB).

    As for why you don't see video cards that support more than 2 TMDS-type displays, it's both an engineering and a cost issue. On the engineering side each TMDS source (and thus each supported TMDS display) requires its own clock generator, whereas DisplayPort only requires 1 common clock generator. On the cost side those clock generators cost money to implement, but using TMDS also requires paying royalties to Silicon Image. The royalty is on the order of cents, but AMD and NVIDIA would still rather not pay it.

    3) SI will support 1080P 120Hz frame packed S3D.
  • ericore - Thursday, December 22, 2011 - link

    Core Next: It appears AMD is playing catchup to Nvidia's CUDA, but to an extent that halves the potential performance metrics; I see no other reason why they could not have achieved a 25-50% improvement in FPS. That is going to cost them: not just marginally better performance (5-25%), but they are price matching the GTX 580, which means fewer sales, though I suppose people who buy $500+ GPUs buy them no matter what. Though in this case, they may wait to see what Nvidia has to offer.

    Other New AMD GPUs: The ones releasing in February and April are based on the current architecture, but with two critical differences: a smaller node plus low-power-oriented silicon vs. the usual performance-oriented silicon. We will see very similar performance metrics, but the table completely flips around: we will get cheaper, much more power efficient, and therefore very quiet GPUs. I am excited, though I would hate to buy this and see Nvidia deliver where AMD failed.

    Thanks Anand, always a pleasure reading your articles.
  • Angrybird - Thursday, December 22, 2011 - link

    any hint on the 7950? this card should go head to head with the gtx580 when it releases. good job for AMD, great review from Ryan!
  • ericore - Thursday, December 22, 2011 - link

    I should add that with over 4 billion transistors they've added more than 35% more transistors, but only squeezed out a 5-25% improvement; unacceptable. That is a complete fail in that context relative to advancement in gaming. Too much catchup with Nvidia.
  • Finally - Thursday, December 22, 2011 - link

    ...that saying? It goes like this:
    If you don't show up for a race, you lose by default.
    Your favourite company lost, so their fanboys may become green of envydia :)

    Besides that - I'd never shell out more than 150€ for a petty GPU, so neither company's product would have appealed to me...
  • piroroadkill - Thursday, December 22, 2011 - link

    Wait, catchup? In my eyes, they were already winning. 6950 with dual BIOS, unlock it to 6970.. unbelievable value.. profit??

    Already has a larger framebuffer than the GTX580, so...
