Compute: The Real Reason for GCN

Moving on from our game tests, we’ve now reached the compute benchmark segment of our review. While the gaming performance of the 7970 will have the most immediate ramifications for AMD and the product, it is the compute performance that I believe is the more important metric in the long run. GCN is both a gaming and a compute architecture, and while its gaming pedigree is well defined, its real-world compute capabilities remain unproven.

With that said, we’re going to open up this section with a rather straightforward statement: the current selection of compute applications for AMD GPUs is extremely poor. This is especially true for anything that would be suitable as a benchmark. Perhaps this is because developers ignored Evergreen and Northern Islands due to their low compute performance, or perhaps this is because developers still haven’t warmed up to OpenCL, but here at the tail end of 2011 there just aren’t very many applications that can make meaningful use of the pure compute capabilities of AMD’s GPUs.

Aggravating this is that among the applications that can use AMD’s compute capabilities, some of the most popular have been hand-tuned for AMD’s previous architectures to the point that they simply will not run on Tahiti right now. Folding@Home, FLACC, and a few other candidates we looked into for use as compute benchmarks all fall under this umbrella, and as a result we only have a limited toolset to work with for proving the compute performance of GCN.

So with that out of the way, let’s get started.

Since we just ended with Civilization V as a gaming benchmark, let’s start with Civilization V as a compute benchmark. We’ve seen Civilization V’s performance skyrocket on the 7970, and we’ve theorized that this is due to improvements in compute shader performance; now we have a chance to prove it.

Compute: Civilization V

And there’s our proof. Compared to the 6970, the 7970’s performance in this benchmark has jumped by 58%, and even the previously chart-topping GTX 580 now trails the 7970 by 12%. GCN’s compute ambitions are clearly paying off, and in the case of Civilization V it’s even enough to dethrone NVIDIA entirely. If you’re AMD there’s not much more you can ask for.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

Compute: SmallLuxGPU 2.0d4

Once again the 7970 does incredibly well compared to AMD’s past architectures. AMD already performed respectably here even with the limited compute performance of their VLIW4 architecture, and with GCN they put their old architectures to shame, and NVIDIA along with them. Among single-GPU cards the GTX 580 is the closest competitor, and even then the 7970 leads it by 72%. The story is much the same for the 7970 versus the 6970, where the 7970 leads by 74%. If AMD can continue to deliver performance gains like these, GCN is going to be a formidable force in the HPC market when it eventually makes its way there.

For our next benchmark we’re once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16K-particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using two of them: a highly optimized grid search that Microsoft based on an earlier CUDA implementation, and an O(n²) nearest neighbor method that is optimized by using shared memory to cache data.

Compute: DirectX11 Compute Shader Fluid Simulation

There are many things we can gather from this data, but let’s address the most important conclusions first. Regardless of the algorithm used, AMD’s VLIW4 and VLIW5 architectures had relatively poor performance in this simulation; NVIDIA meanwhile has strong performance with the grid search algorithm, but more limited performance with the shared memory algorithm. The 7970 consequently manages to blow away the 6970 in all cases, and while it can’t beat the GTX 580 at the grid search algorithm, it is 45% faster than the GTX 580 with the shared memory algorithm.

With GCN AMD put a lot of effort into compute performance, not only with respect to their shader/compute hardware, but also with the caches and shared memory needed to feed that hardware. While I don’t believe we have enough data to say anything definitive about how Tahiti/GCN’s cache compares to Fermi’s cache, this benchmark does raise the possibility that GCN’s cache design is better suited for less-than-optimal brute force algorithms. If that’s the case, the implications for AMD could be huge, as it could open up new HPC market opportunities for them that NVIDIA could never access, and it could certainly help AMD steal market share from NVIDIA.
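
For those wondering what the shared memory algorithm actually looks like in practice, below is a rough OpenCL sketch of the general technique: each work-group stages a tile of particle positions in fast on-chip local memory (OpenCL’s counterpart to a compute shader’s shared memory) and then runs its all-pairs distance tests against that tile instead of repeatedly hitting off-chip memory. To be clear, this is our own illustrative sketch rather than the SDK sample’s code; the kernel name, the simplified density math, and the assumption that the particle count is a multiple of the work-group size are all ours.

```c
// Hypothetical sketch of the shared-memory O(n^2) pattern: stage particle
// positions in local memory a tile at a time so the all-pairs distance
// tests read fast on-chip storage rather than global memory.
// Assumes numParticles is a multiple of the work-group size (<= 256).
__kernel void density_brute_force(__global const float4* pos,
                                  __global float* density,
                                  const int numParticles,
                                  const float smoothingRadiusSq)
{
    int gid = get_global_id(0);
    int lid = get_local_id(0);
    int groupSize = get_local_size(0);
    __local float4 tile[256];

    float4 p = pos[gid];
    float rho = 0.0f;

    // Walk the whole particle list one work-group-sized tile at a time.
    for (int base = 0; base < numParticles; base += groupSize) {
        tile[lid] = pos[base + lid];     // cooperative load into local memory
        barrier(CLK_LOCAL_MEM_FENCE);

        for (int j = 0; j < groupSize; j++) {
            float4 d = p - tile[j];
            float r2 = d.x*d.x + d.y*d.y + d.z*d.z;
            if (r2 < smoothingRadiusSq)
                rho += (smoothingRadiusSq - r2);  // simplified kernel weight
        }
        barrier(CLK_LOCAL_MEM_FENCE);    // don't overwrite tile while in use
    }
    density[gid] = rho;
}
```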

Moving on to our final two benchmarks, we’ve gone spelunking through AMD’s OpenCL sample archive to dig up a couple more compute scenarios to use to evaluate GCN. The first of these is AESEncryptDecrypt, an OpenCL sample that AES encrypts and decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.

Compute: AESEncryptDecrypt

We went into the AMD OpenCL sample archive knowing that the projects in it were likely already well suited to AMD’s previous architectures, and there is definitely a degree of that in our results. The 6970 already performs decently in this benchmark, and ultimately the GTX 580 is the top competitor. However the 7970 still manages to improve on the 6970 by a sizable degree, accomplishing this encryption task in only 65% of the time. Meanwhile compared to the GTX 580 it trails by roughly 12%, which shows that if nothing else Fermi and GCN are going to have their own architectural strengths and weaknesses, though there’s obviously some room for improvement here.
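
As an aside, it’s worth explaining how a benchmark like this arrives at an average time: the kernel is simply enqueued over and over and the per-launch timings are averaged. Below is a minimal host-side sketch of such a loop using OpenCL’s event profiling; this is our own illustration under the assumption that the kernel, its arguments, and a profiling-enabled command queue have already been set up, not AMD’s actual sample code.

```c
#include <CL/cl.h>

/* A minimal sketch (not the SDK sample's actual code) of how a benchmark
 * like AESEncryptDecrypt can report an "average time per iteration":
 * enqueue the kernel N times and average the event timestamps.
 * Assumes the caller has already built the kernel, set its arguments,
 * and created the queue with CL_QUEUE_PROFILING_ENABLE. */
double avg_kernel_ms(cl_command_queue queue, cl_kernel kernel,
                     size_t global_size, int iterations)
{
    double total_ms = 0.0;
    for (int i = 0; i < iterations; i++) {
        cl_event evt;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                               &global_size, NULL, 0, NULL, &evt);
        clWaitForEvents(1, &evt);

        cl_ulong start, end;  /* device timestamps in nanoseconds */
        clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                                sizeof(start), &start, NULL);
        clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                                sizeof(end), &end, NULL);
        total_ms += (double)(end - start) / 1e6;
        clReleaseEvent(evt);
    }
    return total_ms / iterations;
}
```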

One interesting fact we gathered from this compute benchmark is that it benefited from the increase in bandwidth offered by PCI Express 3.0. With PCIe 3.0 the 7970 improves by about 10%, showcasing just how important transport bandwidth is for some compute tasks. Ultimately we’ll reach a point where even games will be able to take full advantage of PCIe 3.0, but for right now it’s compute uses that will benefit the most.

Our final benchmark also comes from the AMD OpenCL sample archive: MonteCarloAsian, a Monte Carlo simulation for Asian option pricing implemented in OpenCL. Here we’re timing how long it takes to execute a 400-step simulation.

Compute: MonteCarloAsian

For our final benchmark the 7970 once again takes the lead. The rest of the Radeon pack is close behind, so GCN isn’t providing an immense benefit here, but the 7970 still improves on the 6970 by 14%. Meanwhile its lead over the GTX 580 is larger, at 33%.
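
To give some idea of the kind of work being timed here, below is a rough OpenCL sketch in the same spirit: each work-item walks one 400-step price path and records the running average that an Asian option payoff would be computed from. AMD’s actual sample is considerably more elaborate (it uses a proper Gaussian random number generator, for one), so every name and constant below should be treated as purely illustrative.

```c
// Hypothetical sketch of a Monte Carlo path simulation: each work-item
// simulates one 400-step geometric Brownian motion price path and stores
// the arithmetic average price used in an Asian option payoff.
inline float xorshift_unit(uint *state)
{
    *state ^= *state << 13;
    *state ^= *state >> 17;
    *state ^= *state << 5;
    return (float)(*state) / 4294967296.0f;   // uniform in [0,1)
}

__kernel void asian_paths(__global float* avgPrice,
                          const float spot, const float drift,
                          const float vol, const float dt, const uint seed)
{
    // Per-work-item RNG state; | 1u keeps xorshift out of its zero state.
    uint rng = (seed ^ ((uint)get_global_id(0) * 2654435761u)) | 1u;
    float s = spot;
    float sum = 0.0f;

    for (int step = 0; step < 400; step++) {
        // Crude uniform-to-normal substitute via the central limit theorem;
        // a real sample would use a proper Gaussian generator.
        float z = (xorshift_unit(&rng) + xorshift_unit(&rng) +
                   xorshift_unit(&rng) + xorshift_unit(&rng) - 2.0f) * 1.732f;
        s *= exp((drift - 0.5f*vol*vol)*dt + vol*sqrt(dt)*z);
        sum += s;
    }
    avgPrice[get_global_id(0)] = sum / 400.0f;
}
```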

Ultimately, these benchmarks make it clear that AMD is capable of delivering on at least some of the theoretical compute potential that GCN brings to the table. Not unlike gaming performance, this is often going to depend on the task at hand, but the performance here proves that in the right scenario Tahiti is a very capable compute GPU. Will it be enough to make a run at NVIDIA’s dominance with Tesla? At this point it’s too early to tell, but the potential is there, which is much more than we could say about VLIW4.

Comments

  • Scali - Saturday, December 24, 2011 - link

    I have never heard Jen-Hsun call the mock-up a working board.
    They DID however have working boards on which they demonstrated the tech-demos.
    Stop trying to make something out of nothing.
  • Scali - Saturday, December 24, 2011 - link

    Actually, since Crysis 2 does not 'tessellate the crap' out of things (unless your definition of that is: "Doesn't run on underperforming tessellation hardware"), the 7970 is actually the fastest card in Crysis 2.
    Did you even bother to read some other reviews? Many of them tested Crysis 2, you know. Tomshardware for example.
    If you try to make smart fanboy remarks, at least make sure they're smart first.
  • Scali - Saturday, December 24, 2011 - link

    But I know... being a fanboy must be really hard these days..
    One moment you have to spread nonsense about how Crysis 2's tessellation is totally over-the-top...
    The next moment, AMD comes out with a card that has enough of a boost in performance that it comes out on top in Crysis 2 again... So you have to get all up to date with the latest nonsense again.
    Now you know what the AMD PR department feels like... they went from "Tessellation good" to "Tessellation bad" as well, and have to move back again now...
    That is, they would, if they weren't all fired by the new management.
  • formulav8 - Tuesday, February 21, 2012 - link

    You’re worse than anything he said. Grow up.
  • CeriseCogburn - Sunday, March 11, 2012 - link

    He's exactly correct. I quite understand that for amd fanboys that's forbidden; one must toe the stupid crybaby line and never deviate to the truth.
  • crazzyeddie - Sunday, December 25, 2011 - link

    Page 4:

    " Traditionally the ROPs, L2 cache, and memory controllers have all been tightly integrated as ROP operations are extremely bandwidth intensive, making this a very design for AMD to use. "
  • Scali - Monday, December 26, 2011 - link

    Of course it isn't. More polygons is better. Pixar subdivides everything on screen to sub-pixel level.
    That's where games are headed as well, that's progress.

    Only fanboys like you cry about it.... even after AMD starts winning the benchmarks (which would prove that Crysis is not doing THAT much tessellation, both nVidia and new AMD hardware can deal with it adequately).
  • Wierdo - Monday, January 2, 2012 - link

    http://techreport.com/articles.x/21404

    "Crytek's decision to deploy gratuitous amounts of tessellation in places where it doesn't make sense is frustrating, because they're essentially wasting GPU power—and they're doing so in a high-profile game that we'd hoped would be a killer showcase for the benefits of DirectX 11
    ...
    But the strange inefficiencies create problems. Why are largely flat surfaces, such as that Jersey barrier, subdivided into so many thousands of polygons, with no apparent visual benefit? Why does tessellated water roil constantly beneath the dry streets of the city, invisible to all?
    ...
    One potential answer is developer laziness or lack of time
    ...
    so they can understand why Crysis 2 may not be the most reliable indicator of comparative GPU performance"

    I'll take the word of professional reviewers.
  • CeriseCogburn - Sunday, March 11, 2012 - link

    Give them a month or two to adjust their amd epic fail whining blame shift.
    When it occurs to them that amd is actually delivering some dx11 performance for the 1st time, they'll shift to something else they whine about and blame on nvidia.
    The big green MAN is always keeping them down.
  • Scali - Monday, December 26, 2011 - link

    Wrong, they showed plenty of demos at the introduction. Else the introduction would just be Jen-Hsun holding up the mock card, and nothing else... which was clearly not the case.
    They demo'ed Endless City, among other things. Which could not have run on anything other than real Fermi chips.
    And yea, I'm really going to go to SemiAccurate to get reliable information!
