Compute Performance

Moving on from our look at gaming performance, we have our customary look at compute performance. With GCN AMD significantly overhauled their architecture in order to improve compute performance, as their long-run initiatives rely on GPU compute performance becoming far more important than it is today.

With such a move however AMD has to solve the chicken and egg problem on their own, in this case by improving compute performance before there is really a large variety of applications ready to take advantage of it. As we’ll see AMD has certainly achieved that goal, but it raises the question of what the tradeoff was. We have some evidence that GCN is more efficient than VLIW5 on a per-shader basis even in games, but at the same time we can’t forget that AMD has gone from 800 SPs to 640 SPs in the move from Juniper to Cape Verde, in spite of a full node jump in fabrication technology. In the long run AMD will be better off, but I suspect we’re looking at that tradeoff today with the 7700 series.

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.

Theoretically the 5770 has a 5% compute performance advantage over the 7770. In practice the 5770 doesn’t stand a chance. Even the much, much slower 7750 is ahead of it by 12%; meanwhile the 7770 is in a class of its own, competing with the likes of the 6870. The 7700 series still trails the GTX 560 to some degree, but once again we’re looking at proof of just how much the GCN architecture has improved AMD’s compute performance.
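For reference, that 5% figure comes straight from the paper specs. Assuming reference clocks and the usual 2 FLOPs per stream processor per clock, the math works out roughly as follows:

800 SPs x 2 FLOPs x 850MHz = 1.36 TFLOPS (5770)
640 SPs x 2 FLOPs x 1000MHz = 1.28 TFLOPS (7770)

That puts the 5770 ahead by roughly 5-6% on paper, which makes the real-world reversal here all the more striking.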

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

SmallLuxGPU is another good showing for the GCN-based 7700 series, with the 7770 once again moving well up the charts. This time it’s between the 6850 and 6870, and well, well ahead of the GTX 560 or any other NVIDIA video card. Throwing in an overclock pushes things even further, leading to the XFX BESDD tying the 6870 in this benchmark.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL AES routine that encrypts/decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.
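As a rough illustration of that methodology (averaging kernel time over several iterations), here is a minimal, hypothetical sketch. The real benchmark is AMD's OpenCL sample; the same timing pattern is shown below in CUDA, with a placeholder encryptKernel standing in for the actual AES rounds:

```cuda
// Hypothetical sketch of "average kernel time over N iterations".
// Not AMD's AESEncryptDecrypt sample: encryptKernel is a dummy stand-in.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void encryptKernel(unsigned char* data, size_t bytes)
{
    // Grid-stride loop over the image; real AES rounds would go here.
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
         i < bytes;
         i += (size_t)gridDim.x * blockDim.x)
        data[i] ^= 0xA5;                     // dummy work, not real AES
}

int main()
{
    const size_t bytes = 8192ull * 8192ull * 4ull;   // 8K x 8K RGBA image
    const int iterations = 10;

    unsigned char* d_img = nullptr;
    cudaMalloc(&d_img, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    float totalMs = 0.0f;
    for (int it = 0; it < iterations; ++it) {
        cudaEventRecord(start);
        encryptKernel<<<1024, 256>>>(d_img, bytes);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);          // wait for this iteration to finish

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        totalMs += ms;
    }

    printf("average time per pass: %.3f ms\n", totalMs / iterations);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_img);
    return 0;
}
```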

Under our AESEncryptDecrypt benchmark the 7770 does even better yet, this time taking the #2 spot and only losing to its overclocked self. PCIe 3.0 helps here, but as we’ve seen with the 7900 series there’s no replacement for a good compute architecture.

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16k particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n²) nearest neighbor method that is optimized by using shared memory to cache data.
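The shared memory optimization being referenced is the classic tiling pattern for O(n²) interaction passes. As a hypothetical sketch (in CUDA, rather than the HLSL compute shader the SDK sample actually uses), each thread block stages a tile of particle positions in on-chip shared memory so the brute-force inner loop reads neighbors from there instead of from device memory:

```cuda
// Hypothetical CUDA sketch of the shared-memory tiling idea; the DirectX SDK
// fluid sample does this in an HLSL compute shader with its own SPH math.
#include <cuda_runtime.h>

#define TILE 256                                // threads per block = tile size

__global__ void densityPass(const float2* __restrict__ pos,
                            float* __restrict__ density,
                            int n, float radiusSq)
{
    __shared__ float2 tile[TILE];               // cached positions for one tile

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float2 pi = (i < n) ? pos[i] : make_float2(0.0f, 0.0f);
    float rho = 0.0f;

    // Walk all n particles in TILE-sized chunks.
    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? pos[j]
                                    : make_float2(1e30f, 1e30f);  // sentinel: never a neighbor
        __syncthreads();                        // tile fully loaded before anyone reads it

        // O(n^2) inner loop, but every read now comes from shared memory.
        for (int k = 0; k < TILE; ++k) {
            float dx = tile[k].x - pi.x;
            float dy = tile[k].y - pi.y;
            float d2 = dx * dx + dy * dy;
            if (d2 < radiusSq)
                rho += radiusSq - d2;           // toy weight, not the sample's real SPH kernel
        }
        __syncthreads();                        // don't refill tile while it's still in use
    }

    if (i < n)
        density[i] = rho;
}
```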

It would appear we’ve saved the best for last, as in our fluid simulation benchmark the top three cards are all 7700 series cards. This benchmark strongly favors a well-organized cache, leading to the 7700 series blowing past the 6800 series and never looking back. Even NVIDIA’s Fermi-based video cards can’t keep up.

155 Comments

  • kallogan - Wednesday, February 15, 2012 - link

    HD 6850 is still the way to go.
  • zepi - Wednesday, February 15, 2012 - link

    So basically in a couple of generations we've gone
    4870 > 5770/6770 > 7770

    Chip size
    260mm2 > 165mm2 > ~120mm2 chip.

    Performance is about
    100 > 100 > 120

    Power consumption in gaming load according to Techpowerup (just graphics card):
    150W - 108W - 83W

    And soon we should have 1 inch thick laptops with these things inside. I'm not complaining.
  • silverblue - Wednesday, February 15, 2012 - link

    Good point. One thing I think people forget is that smaller process technologies will yield either better performance at the same power, or reduced consumption at the same performance... or a mix of the two. You could throw two cards in a dual-GPU config for similar power to one you had two years back, and still not have to worry too much if CrossFire or SLI doesn't work properly (well, if you forget the microstuttering, of course).
  • cactusdog - Wednesday, February 15, 2012 - link

    Why is the 6770 left out of the benchmarks?? Isn't that odd considering the 7770 replaces the 6770? I really wish reviewers would be independent when reviewing cards, instead of following manufacturer guidelines.
  • Markstar - Wednesday, February 15, 2012 - link

    No, since the 6770 is EXACTLY the same card as the 5770 (just relabeled). So it makes sense to continue using the 5770 and remind AMD (and us) that we do not fall for their shenanigans (sadly, many do fall for it).
  • gnorgel - Wednesday, February 15, 2012 - link

    For your 6850: it should sell a lot better now. Maybe they really stopped producing it and need to get rid of stocks. But when it's sold out almost anyone should go for a GTX 560, 7% more expensive and 30% faster.
    The only reason to buy a 7770 now is if your power supply can't support it and you would have to get a new one.
  • duploxxx - Wednesday, February 15, 2012 - link

    by the time the 6850 is out of stock the 78xx series will be launched, which will knock out the 560

    don't understand what everyone is complaining about, it's faster than the 57xx-67xx series, less power. sure it's not cheap but neither were the 57xx-67xx @ launch. Combined with the old gen still available and NV products a bit too expensive, but this is just the starting price....
  • akbo - Wednesday, February 15, 2012 - link

    Moore's law apparently doesn't apply to graphics cards. People's expectations do. People expect that every two years GPUs at the same price point will have double the transistors and thus be faster by as much. Obviously perf does not scale like that, since the 28nm shrink only offers a 50% improvement over 40nm. However that would mean a 50% improvement is expected. Imperfect scaling would mean a 40% improvement.

    So people expect a card which is 20% faster than a card from 2 years ago to be 1.2/1.4 of the price at launch, or ~85% of the 5770 launch price in this case. That would mean the 7770 should retail at around $130-140 or so, with the 7750 at sub-$100 pricing, like $90 or so. I expect it to be that price too.
  • chizow - Wednesday, February 15, 2012 - link

    Moore's Law does actually hold true for GPUs in the direct context of the original law as you stated, roughly doubled transistors every 2 years with a new process node. The performance has deviated however for some time now with imperfect scaling relative to transistors, but at least ~50% has been the benchmark for performance improvements over previous generations.

    Tahiti and the rest of Southern Islands isn't that much of a disappointment relative to Moore's Law, because it does offer a 40-50% improvement over AMD's previous flagship GPU. The problem is, it only offers a 15-25% improvement over the overall last-gen performance leader, the GTX 580, but somewhat comically, AMD wants to price it in that light.

    So we end up with this situation, the worst price/performance metrics ever, where a new GPU architecture and process node only offers a 15-25% performance increase at the same price (actually 10% more in the 7970's case). This falls far short of even conservative Moore's Law expectations, which would demand at least +50% over the last-gen overall high-end in order to command that top pricing spot.
  • arjuna1 - Wednesday, February 15, 2012 - link

    DX11.1?? With only one true DX11 game on the market, BF3, there is literally no incentive to upgrade to this generation of cards (7xxx/Kepler).

    Unless nvidia comes out with something big, and I mean big as in out of this world, I'll just skip to the next gen, and if AMD insists in being an ass with pricing, I'll go Ngreen when the time comes.

    Now, the worrying thing is that it's becoming evident both parties are getting too cynical with price fixing. When is that antitrust lawsuit coming?
