Compute Performance

Moving on from our look at gaming performance, we have our customary look at compute performance. With GCN, AMD significantly overhauled their architecture in order to improve compute performance, as their long-run initiatives rely on GPU compute becoming far more important than it is today.

In making such a move, however, AMD has to solve the chicken-and-egg problem on their own, in this case by improving compute performance before there is really a large variety of applications ready to take advantage of it. As we'll see, AMD has certainly achieved that goal, but it raises the question of what the tradeoff was. We have some evidence that GCN is more efficient than VLIW5 on a per-shader basis even in games, but at the same time we can't forget that AMD has gone from 800 SPs to 640 SPs in the move from Juniper to Cape Verde, in spite of a full node jump in fabrication technology. In the long run AMD will be better off, but I suspect we're looking at that tradeoff today with the 7700 series.

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.
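To give a feel for what on-the-fly texture decompression involves, here is a minimal pure-Python decoder for a single 4x4 BC1 (DXT1) block, the classic fixed-rate GPU texture format. This is purely our illustration of block decompression in general; Civ V's actual DirectCompute algorithm is Firaxis's own and is not shown here.

```python
import struct

def decode_rgb565(v):
    """Expand a packed 5:6:5 color to 8-bit-per-channel RGB."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    # Replicate high bits into the low bits to fill the full 8-bit range
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_bc1_block(block):
    """Decode one 8-byte BC1/DXT1 block into a list of 16 RGB texels."""
    c0_raw, c1_raw, indices = struct.unpack("<HHI", block)
    c0, c1 = decode_rgb565(c0_raw), decode_rgb565(c1_raw)
    if c0_raw > c1_raw:
        # 4-color mode: two interpolated colors at 1/3 and 2/3
        palette = [c0, c1,
                   tuple((2 * a + b) // 3 for a, b in zip(c0, c1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))]
    else:
        # 3-color mode: midpoint color plus black
        palette = [c0, c1,
                   tuple((a + b) // 2 for a, b in zip(c0, c1)),
                   (0, 0, 0)]
    # 2 bits per texel, 16 texels, packed little-endian
    return [palette[(indices >> (2 * i)) & 0x3] for i in range(16)]
```

Each texel is just a palette lookup, which is why this kind of work maps so naturally onto a compute shader: thousands of blocks can be decoded independently in parallel.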

Theoretically the 5770 has a 5% compute performance advantage over the 7770. In practice the 5770 doesn't stand a chance. Even the much, much slower 7750 is ahead by 12%, while the 7770 is in a class of its own, competing with the likes of the 6870. The 7700 series still trails the GTX 560 to some degree, but once again we're looking at proof of just how much the GCN architecture has improved AMD's compute performance.
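That theoretical gap falls out of simple arithmetic: peak FP32 throughput is shader count × 2 FLOPs (one multiply-add) × clock speed. A quick sketch, assuming the reference clocks of 850MHz for the 5770 and 1GHz for the 7770:

```python
def peak_gflops(shaders, clock_mhz):
    """Peak FP32 throughput: each shader retires one multiply-add (2 FLOPs) per clock."""
    return shaders * 2 * clock_mhz / 1000.0

hd5770 = peak_gflops(800, 850)    # Juniper: 1360 GFLOPS
hd7770 = peak_gflops(640, 1000)   # Cape Verde: 1280 GFLOPS
advantage = (hd5770 / hd7770 - 1) * 100  # roughly 6% on paper, in the 5770's favor
```

On paper the clock bump only partially offsets the loss of shaders, which is what makes the 7770's real-world compute wins all the more notable.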

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.
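Path tracers like LuxRender spend the bulk of their GPU time on intersection tests between rays and scene geometry. As an illustration of that workload (our sketch, not LuxRender's actual kernel), here is the classic quadratic-formula ray-sphere test:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance to the nearest hit along a normalized ray, or None on a miss."""
    # Solve |o + t*d - c|^2 = r^2, a quadratic in t (a == 1 for a unit direction)
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None
```

Millions of these tests run per frame with highly divergent outcomes, which is exactly the kind of workload where GCN's scalar/vector split pays off over VLIW5.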

SmallLuxGPU is another good showing for the GCN-based 7700 series, with the 7770 once again moving well up the charts. This time it's between the 6850 and 6870, and well, well ahead of the GTX 560 or any other NVIDIA video card. Throwing in an overclock pushes things even further, leading to the XFX BESDD tying the 6870 in this benchmark.

For our next benchmark we're looking at AESEncryptDecrypt, an OpenCL sample that AES encrypts/decrypts an 8K x 8K pixel image. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.
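The reported metric is mean wall-clock time per iteration. A minimal sketch of that timing methodology, using a trivial XOR transform as a stand-in where the real benchmark runs AES rounds (none of this is AMD's sample code):

```python
import time

def mean_runtime(fn, data, iterations=10):
    """Run fn over data several times and return the mean seconds per iteration."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn(data)
    return (time.perf_counter() - start) / iterations

def xor_cipher(data, key=0x5A):
    """Stand-in for the AES kernel: a trivial byte-wise XOR 'encryption'."""
    return bytes(b ^ key for b in data)

# An 8K x 8K RGBA image is 256MB; we use a much smaller buffer for a quick run
avg_seconds = mean_runtime(xor_cipher, bytes(1 << 16))
```

Averaging over iterations smooths out one-time costs like buffer uploads, which matters on a PCIe-attached GPU where the first pass pays the transfer penalty.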

Under our AESEncryptDecrypt benchmark the 7770 does even better, this time taking the #2 spot and losing only to its overclocked self. PCIe 3.0 helps here, but as we've seen with the 7900 series, there's no replacement for a good compute architecture.

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16K-particle fluid using a compute shader, with a choice of several different algorithms. In this case we're using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.
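The brute-force approach tests every particle against every other, which is what makes it O(n^2); the GPU version tiles those comparisons through shared memory so each position is fetched once per workgroup rather than once per pair. A minimal CPU-side sketch of the all-pairs neighbor search (our illustration, not the SDK sample itself):

```python
def count_neighbors(positions, radius):
    """All-pairs O(n^2) search: for each 2D particle, count others within radius."""
    r2 = radius * radius
    counts = [0] * len(positions)
    for i, (xi, yi) in enumerate(positions):
        # On the GPU, the inner loop reads a shared-memory tile of positions
        # cached by the whole workgroup; here it's just a plain pass over the list.
        for j, (xj, yj) in enumerate(positions):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                counts[i] += 1
    return counts
```

With 16K particles that inner loop runs ~268 million times per step, so how well the cache hierarchy feeds it dominates performance, which is exactly what this benchmark stresses.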

It would appear we've saved the best for last, as in our fluid simulation benchmark the top three cards are all 7700 series cards. This benchmark strongly favors a well-organized cache, leading to the 7700 series blowing past the 6800 series and never looking back. Even NVIDIA's Fermi-based video cards can't keep up.

[Chart: Civilization V Theoretical Performance]
155 Comments


  • Dianoda - Wednesday, February 15, 2012 - link

    Yeah, I jumped on that BB/Visiontek HD4850 512MB deal as well. Bought the card about a week before the official launch and at a $50 discount on top of that. Timing was perfect, too, as I had just finished my build, short one video card (borrowed a 3850 from a friend for a few weeks).

    I finally upgraded from that card to a 6950 2GB (BIOS modded to 6970) about a month ago - Skyrim was just too much for the 4850 to handle @ 2560x1440. The 6950 2GB is a great card for the price if you're willing to perform the BIOS mod (and don't mind rebate forms).
  • nerrawg - Thursday, February 16, 2012 - link

    Exactly! I bought 2 4850's in the UK in 2009 for £65 ($95) each - best GPU purchase I have made in 12 years! I now have a single 6870 that I bought in 2011 for £120- but its not really an upgrade at all. Thought I would wait and get a second one cheaper but now I don't think that will happen

2008-2009 was the sweet spot of a decade for Desktop GPUs. The way things are going with the Desktop (AMD bullsnoozer etc. etc.) I fear that it might have even been the sweet spot of GPU performance for the decade to come as well. I would love to see some massive progress in graphics, but it seems that all the "suites" care about nowadays is "smart" this and "smart" that. I can't really blame them either, because until pc programmers get their act together and actually start making apps and games that push what is possible on current hardware I don't see any reason why I need 2X the GPU and CPU compute power every 1-2 years.

    Come on guys - we are all waiting for the next "Crysis" - if it doesn't come then it might spell the end of the enthusiast desktop
  • StrangerGuy - Wednesday, February 15, 2012 - link

AMD having a fail product at $160 that couldn't even beat an almost 1.5 year old $150 6870 isn't surprising considering they are also the ones with the cheek to price their FX-8150 at near 2600K prices.
  • thunderising - Wednesday, February 15, 2012 - link

    The only problem I have with AMD on this card is WHY THE LOW BANDWIDTH.

    The card performs nearly 10% faster when the memory is clocked at 6GHz QDR (TPU reviews) and 15% with Core Clock matching XFX's OCed Speed.

    I think that 6GHz memory modules would have taken the HD7770 a long way ahead. The performance boost would have been enough to hit HD6850 performance, or beat it in all cases, and at that point, this card at 159$ would make sense.

    Right now, until the price hits about 129$, this doesn't make sense.
  • chizow - Wednesday, February 15, 2012 - link

But you get GCN, 28nm and a bottle of verdetard?
  • Zoomer - Wednesday, February 15, 2012 - link

GCN is worse than useless for gamers and non-compute users.
  • jokeyrhyme - Wednesday, February 15, 2012 - link

    I think I've built my last system with an AMD CPU. Intel completely abused their monopoly and decimated AMD's success in the CPU department, and I don't think AMD will have an enthusiast-quality CPU ever again. :(

    That said, I think I will still use AMD GPUs for a while yet.

    nVidia's Kepler may beat AMD later this year, but AMD actually has an open-source driver developer on staff and routinely publishes hardware documentation. AMD GPUs will probably have better support for Wayland than their nVidia counterparts due to these factors. If you use Linux and want to stay on the cutting edge, then I don't think picking nVidia is particularly wise.
  • ganeshts - Wednesday, February 15, 2012 - link

    At least in the HTPC area, NVIDIA is miles ahead of AMD in the open source support department.

    Almost all Linux HTPCs capable of HD playback have NVIDIA GPUs under the hood, thanks to their well supported VDPAU feature.

    AMD started getting serious with xvBA only towards the end of last year, and they still have a lot of catching up to do [ http://phoronix.com/forums/showthread.php?65688-XB... ]
  • Ananke - Wednesday, February 15, 2012 - link

AMD is several years behind NVidia on the compute side...actually they are nowhere as of today. The AMD 7xxx series is so ridiculously priced, it will not get a user base large enough to be attractive for developers. Actually, I am at the point of considering NVidia cards for computing, despite hating their heat and power consumption.

    AMD had their chance and they blew it.

    Besides, we shall see where the AMD ex-VP will go - that company most likely will be the next big player in graphics and high performance computing. Probably Apple.
  • PeskyLittleDoggy - Thursday, February 16, 2012 - link

In my country, company policy dictates you cannot leave your company and work for a competing company if you have valuable R&D knowledge. That's part of the restraint of trade clause in your contract.

    Basically what I'm saying is, AMD's ex-VP will not be able to work in any company with a graphics department for 2 years if the contract is similar to mine. I can't remember now but some CEO was sued for that recently.
