Compute Performance

Moving on from our look at gaming performance, we have our customary look at compute performance. Since compute performance is by definition shader bound, the 7950 is at a bit of a disadvantage here compared to gaming performance. Whereas ROP performance scales with the core clock, shader performance is hit by both the reduction in the core clock and the disabled CU array.
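
As a back-of-the-envelope check, theoretical shader throughput scales with the product of shader count and core clock. Plugging in the shipping specs (2048 stream processors at 925MHz for the 7970 versus 1792 at 800MHz for the 7950) puts the 7950 at roughly three-quarters of the 7970's shader throughput, which lines up with the shader-bound deficits we see below:

```python
# Theoretical shader throughput scales with stream processors x core clock.
sp_7970, mhz_7970 = 2048, 925  # Radeon HD 7970 shipping specs
sp_7950, mhz_7950 = 1792, 800  # Radeon HD 7950 shipping specs

ratio = (sp_7950 * mhz_7950) / (sp_7970 * mhz_7970)
print(f"7950 theoretical shader throughput: {ratio:.0%} of the 7970")  # → 76%
```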

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.

AMD’s greatly improved compute performance continues to shine here, though in the case of Civilization V it’s largely consumed by just closing the previously large gap between the GTX 500 series and the Radeon HD 6000 series. As a result the 7950 falls ever so slightly short of the GTX 580, while the factory overclocks on the Sapphire and XFX cards are enough of a push to bring them within 5% of the 7970.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

Under SmallLuxGPU the 7970 enjoyed a large lead over the GTX 580, and this continues with the 7950. Even though the 7950 is well behind the 7970—to the tune of 24%—it’s still 33% ahead of the GTX 580, and the lead only grows from there. Meanwhile the XFX and Sapphire cards can catch up to the 7970 somewhat, but as this is a truly shader-bound test, a higher clock can’t make up for the 7950’s missing shader units.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL program that AES encrypts/decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.
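
The averaging the benchmark performs can be sketched as a simple timing loop. This is only an illustrative stand-in: the toy XOR "cipher" and the payload size are placeholders, not the benchmark's actual OpenCL AES kernel or an 8K x 8K image:

```python
import time

def average_encrypt_time(encrypt, data, iterations=10):
    """Run `encrypt` over `data` several times and return the mean
    wall-clock time per pass -- the same averaging the benchmark reports."""
    start = time.perf_counter()
    for _ in range(iterations):
        encrypt(data)
    return (time.perf_counter() - start) / iterations

# Toy XOR stand-in for the real OpenCL AES kernel.
payload = bytes(64 * 1024)  # placeholder size, not a real 8K x 8K image
t = average_encrypt_time(lambda d: bytes(b ^ 0x5A for b in d), payload)
print(f"{t * 1000:.3f} ms per pass")
```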

In spite of being a compute benchmark, AESEncryptDecrypt is not particularly sensitive to GPU performance, showcasing the impact that setup times can have. The 7950 trails the 7970 by 10%, and overclocking doesn’t change this much. Unfortunately for AMD, NVIDIA is still the leader here, showing that AMD’s compute performance still has room to grow.

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16K-particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n²) nearest neighbor method that is optimized by using shared memory to cache data.
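
The brute-force pass can be sketched in plain Python standing in for the compute shader. The density kernel below is an invented example rather than the SDK sample's actual math; the tiling of the inner loop mirrors the role shared memory plays on the GPU, where each tile is staged once per thread group instead of being re-read from video memory:

```python
# Brute-force O(n^2) pass: every particle visits every other particle, with
# the inner loop walked in fixed-size tiles (the "shared memory" analogue).
# The density kernel here is an invented example, not the SDK sample's math.
TILE = 4  # stand-in for a thread-group-sized tile; real shaders use e.g. 256

def density(positions, radius=1.0):
    n = len(positions)
    out = [0.0] * n
    for i, (xi, yi) in enumerate(positions):
        for base in range(0, n, TILE):            # stage one tile at a time
            for xj, yj in positions[base:base + TILE]:
                d2 = (xi - xj) ** 2 + (yi - yj) ** 2
                if d2 < radius * radius:          # only nearby neighbors count
                    out[i] += 1.0 - d2 / (radius * radius)
    return out

# Two close particles and one distant one (self-contribution included):
print(density([(0.0, 0.0), (0.5, 0.0), (3.0, 3.0)]))  # → [1.75, 1.75, 1.0]
```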

With the compute shader fluid simulation we once again shift back into a compute task that’s much more shader-bound. The 7950 only reaches 80% of the performance of the 7970, once more proving the real impact of losing a CU array. This is still enough to handily surpass the GTX 580 however, with the 7950 taking a 15% lead.

259 Comments

  • Galidou - Tuesday, January 31, 2012 - link

    Well then everything you said is nothing new, and is quite useless, that's what I meant. What's the point of mentioning that it has always been like that, and saying it as if it's totally shocking and new?

    "In other words, if you wanted this level of performance, you could've gotten it a year ago with the GTX 580 for almost the same price....over a year ago....

    And that's why AMD's pricing of these parts fails."

    Then marketing as it's done nowadays is a big FAIL, but everyone knows it, it has ALWAYS been like that... Car manufacturers make new cars every year, and sometimes they're worse than last year's, and higher priced....

    Ever heard of programmed obsolescence??
  • chizow - Tuesday, January 31, 2012 - link

    But this ISN'T how it usually goes, it's unprecedented, which is why many observers are pointing out the inconsistency. Look at history, and even your own examples show that this is out of the norm.

    I will leave you with one final question and you try to answer it with a straight face.

    If Nvidia launched their new "next-gen" architecture on a new process node like 15nm in 12-13 months and it was only 15-25% faster than the 7970 but cost 10% more, would you be happy with it and consider it some great victory???

    I don't know, I mean every way you possibly look at this, it just isn't right.
  • xeridea - Tuesday, January 31, 2012 - link

    I would poo my pants because Nvidia actually did a process shrink ahead of schedule. BY THE WAY 14nm is 2 shrinks from now, and will be about 5 years down the road.

    Following your conversation, you clearly don't know the reason behind the pricing scheme. There is 0 competition with the 7900 series right now, and it is still better price/performance at its high price. The price will go down in time, but cards are always more expensive on release.... welcome to the real world.

    You could get the 580 last year for $50 more than the 7950, while using 30-80W more power (idle-game), running 10-25C hotter under load, making more noise, and delivering slightly less performance (even with fairly new drivers for the 7950).
  • chizow - Tuesday, January 31, 2012 - link

    Uh, no I understand perfectly why they're pricing it this way.

    They're trying to make money and capitalize on the brief period of time they can actually charge a premium for holding the performance crown.

    But that's not going to stop keen observers like myself from calling them on it, especially when they're pricing last-gen performance at next-gen prices. They might swindle a few of their unwitting fans this time around but this will only hurt them in the long run. And by long, I mean as soon as Kepler launches.
  • mdlam - Tuesday, January 31, 2012 - link

    Too bad your "keen" observations prevented you from noticing that Nvidia is also pricing their 14 month old technology at premium prices? Wait, maybe it's because you are an Nvidia fanboy! I won't ever get swindled by AMD, I will only be swindled by Nvidia, says the retard.
  • SlyNine - Tuesday, January 31, 2012 - link

    Then go back and look at other launches. Get the facts and stop using ad hominem attacks and showing your ignorance.
  • chizow - Tuesday, January 31, 2012 - link

    No, Nvidia priced their last-gen performance based on last-gen premiums, you would expect the next-gen to shift these parts to obsolescence but obviously AMD doesn't feel their users possess the acumen to understand this paradigm.

    As for retards being swindled by Nvidia, lmao, the difference is, they would've been reveling in their ignorant bliss with this level of performance, 14 months ago. For the same price.

    It's truly amazing though, because you're falling into the exact trap AMD expects you to fall into with the pricing of this card. Honestly, how can anyone defend the price and performance metric of this card 14 months later?
  • mdlam - Tuesday, January 31, 2012 - link

    AMD doesn't give a shit whether you fall for anything, and neither does Nvidia. If enough people buy their cards at X price to support their billion dollar companies, then the price stays. If not, the price goes down. If more people than not refuse to buy the product, then the price goes....up.... Whether it's this metric or that metric, or not fair, or a super clever AMD trap, it's all bullshit made up by people who hold little educational knowledge in economics or business.
  • mdlam - Tuesday, January 31, 2012 - link

    People like you btw.
  • chizow - Wednesday, February 1, 2012 - link

    Sounds like someone missed their nap.

    AMD should care actually, because the only people who would even entertain buying one of these cards are their biggest fans, the ones who are going to feel the burn the worst when the floor drops on the pricing of these products.

    Again, there is precedent for this, AMD did it to Nvidia in 2008 and Nvidia was cutting rebate checks. Do you think AMD is willing to do the same?
