Compute Performance

Moving on from gaming performance, we have our customary look at compute performance. Since compute performance is by definition shader bound, the 7950 is at a bit of a disadvantage here relative to its gaming performance. Whereas ROP performance scales with the core clock, shader performance is hit by both the reduction in the core clock and the disabled CU array.
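As a rough point of reference (purely back-of-the-envelope, assuming throughput scales linearly with clock speed and the number of enabled CUs, and using the cards’ reference specifications):

    Shader throughput: (28 CUs x 800MHz) / (32 CUs x 925MHz) ≈ 0.76
    ROP throughput:    800MHz / 925MHz ≈ 0.86

So on paper the 7950 should land at roughly 76% of the 7970 in purely shader-bound work, versus roughly 86% in work that scales with the core clock alone.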

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.

AMD’s greatly improved compute performance continues to shine here, though in the case of Civilization V it’s largely consumed by just closing the previously large gap between the GTX 500 series and the Radeon HD 6000 series. As a result the 7950 falls ever so slightly short of the GTX 580, while the factory overclocked Sapphire and XFX cards give the 7950 enough of a push to come within 5% of the 7970.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

Under SmallLuxGPU the 7970 enjoyed a large lead over the GTX 580, and this continues with the 7950. Even though the 7950 is well behind the 7970 (to the tune of 24%), it’s still 33% ahead of the GTX 580, and the lead only grows from there. Meanwhile the XFX and Sapphire cards can catch up to the 7970 somewhat, but as this is truly a shader-bound test, you can’t make up for the lack of shader units on the 7950.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL routine that AES encrypts/decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.

In spite of being a compute benchmark, AESEncryptDecrypt is not particularly sensitive to GPU performance, showcasing the impact that setup times can have. The 7950 trails the 7970 by 10%, and overclocking doesn’t change this much. Unfortunately for AMD, NVIDIA is still the leader here, showing that AMD’s compute performance still has room to grow.
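One simple way to frame the setup-time argument (a simplified model of our own, not anything from the benchmark’s documentation):

    Total time ≈ setup time + (work / GPU throughput)

The larger that fixed setup term is relative to the GPU portion, the smaller the share of the measured time that a card’s raw throughput can influence, which is the pattern we’re seeing here.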

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16k particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.
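The SDK sample itself is a DirectCompute/HLSL shader, but the shared memory optimization is easiest to illustrate with a minimal CUDA sketch of the same idea; the kernel below, along with its names and constants, is an illustrative assumption of ours rather than the SDK’s actual code. Each thread block stages a tile of particle positions in on-chip shared memory so every thread in the block can reuse them, cutting redundant DRAM reads while leaving the total arithmetic at O(n^2):

// Minimal CUDA sketch of a shared-memory-tiled O(n^2) neighbor pass.
// TILE, SMOOTHING_RADIUS, and densityKernel are illustrative, not from the SDK.
#include <cuda_runtime.h>

#define TILE 256                 // threads per block = positions staged per tile
#define SMOOTHING_RADIUS 0.012f  // illustrative interaction radius

__device__ float densityKernel(float distSq)
{
    // Simple poly6-style falloff; any SPH smoothing kernel would do for the sketch.
    float h2 = SMOOTHING_RADIUS * SMOOTHING_RADIUS;
    float d = h2 - distSq;
    return (d > 0.0f) ? d * d * d : 0.0f;
}

// Assumes it is launched with blockDim.x == TILE.
__global__ void densityPass(const float2* __restrict__ pos, float* __restrict__ density, int n)
{
    __shared__ float2 tilePos[TILE];  // one tile of positions cached on-chip

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float2 pi = (i < n) ? pos[i] : make_float2(0.0f, 0.0f);
    float rho = 0.0f;

    // Walk over all n particles one tile at a time; every thread in the block
    // reuses the staged tile, so each position is read from DRAM once per block
    // rather than once per thread.
    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tilePos[threadIdx.x] = (j < n) ? pos[j] : make_float2(1e9f, 1e9f);  // far-away sentinel
        __syncthreads();

        for (int k = 0; k < TILE; ++k) {
            float2 pj = tilePos[k];
            float dx = pi.x - pj.x;
            float dy = pi.y - pj.y;
            rho += densityKernel(dx * dx + dy * dy);
        }
        __syncthreads();
    }

    if (i < n)
        density[i] = rho;
}

A launch along the lines of densityPass<<<(n + TILE - 1) / TILE, TILE>>>(pos, density, n) with n = 16384 would mirror the sample’s 16k-particle case. Note that the shared memory only reduces memory traffic; the inner loop still performs n interactions per particle, which is exactly why this test remains so thoroughly shader-bound.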

With the compute shader fluid simulation we once again shift back into a compute task that’s much more shader-bound. The 7950 only reaches 80% of the performance of the 7970, once more proving the real impact of losing a CU array. This is still enough to handily surpass the GTX 580 however, with the 7950 taking a 15% lead.

Comments

  • chizow - Wednesday, February 1, 2012 - link

    Sorry, but this is incorrect. Nvidia and AMD are direct competitors when it comes to GPUs so relative performance directly influences price.

    This is why AMD cannot sell CPUs for more than $200. They don't have anything faster than Intel's 10th+ fastest processors (spread over 2-3 generations, it's pretty sad actually), so they can't just price Bulldozer at $1000 by slapping an X on it and expect to sell any.

    There is a ceiling on the prices they can charge, however, due to economic and external factors like price elasticity of demand, disposable income, GDP, and competing products (consoles etc.), so within that construct AMD and Nvidia have to price their products to make them most attractive to prospective buyers.

    They know exactly what % of the market will bite at each price and performance tier using their own gathered market research as well as independent firms like Peddie etc. $400+ is high-end enthusiast territory; in order to price here, you have to be the top dog, or the 2nd tier. The top dog sets the table for every other GPU, it doesn't matter who makes it.

    Historically, this next-gen top dog has shifted the price and performance metric for all next-gen GPUs because the market expects and demands it. That's just progress. Tahiti brings nothing to the table in this regard; its performance is incremental and its pricing just maintains the status quo.

    The problem is that Tahiti's pricing indicates the GTX 580 was the target it was shooting for, when they should've been taking aim at Kepler.
  • JNo - Thursday, February 2, 2012 - link

    chizow - the new pirks?
  • TerdFerguson - Tuesday, January 31, 2012 - link

    Chizow is right, you guys are wrong. Get over it.

    Consumer electronics are supposed to get cheaper AND faster at tremendous rates. In failing to improve their price/performance ratio over a couple of generations, AMD has failed. NVidia is failing pretty badly right now, too, but since this is an AMD release, AMD is getting the flak at the moment. If you apply AMD's pricing model to any other consumer electronics product, it becomes very evident that things are very broken. Would you pay $4k for an Ivy Bridge CPU, because IB > SB > Core2 > Core > P4 > P3 > P2 > Pentium > 486 > etc, and a better chip must always command a price premium? Doh, of course you wouldn't.
  • mdlam - Tuesday, January 31, 2012 - link

    Pricing is determined and adjusted based on the law of efficient markets. The 580 is 500 dollars only because people are still willing to pay for it, not because of Chizow's ridiculous theory that companies conspire these fabulous schemes to trick people out of their money. So based on this existing market of people willing to pay 500 dollars for GTX 580 performance, AMD is going to TAKE those customers away by giving more for less, or more for more on a linear price/performance scale. It's just how markets work; prices don't revolve around these God-like rules of tier1, tier2, tier3. Guess what, AMD is right, because these cards right now are selling higher than the $550 retail price. They should have priced it at $650!
  • mdlam - Tuesday, January 31, 2012 - link

    And there is no flawed pricing model from AMD that would end up with a slippery slope of $4k for an Ivy Bridge. Prices = the aggregate buying desire of the market. All markets usually hit a ceiling price for an item, no matter what it is. Some people have a high ceiling, some people have a low ceiling; it's not anyone's fault, it's just a fact of life. Any company, AMD or NVIDIA, or INTEL, will price to sell to people with higher ceilings, and when demand is met, lower the price to increase adoption from folks with lower ceilings.
  • chizow - Wednesday, February 1, 2012 - link

    Sorry, not in this market.

    If you think this is OK, there would never be any progress in the semiconductor market. It's not like we're talking cars here, where a new model year means a few minor upgrades.

    With GPUs, CPUs and any other semiconductor, you expect FASTER performance at the SAME prices or CHEAPER prices. That's called progress.

    The law of efficient markets would tell you if you bought a GTX 580 14 months ago, you made the right call. Buying today, you're setting yourself up for some heartache, but more probably, you're kicking yourself for waiting.
  • Arnir69 - Friday, February 3, 2012 - link

    I'm really disappointed with the 7950 too; it's a little bit better than the 580 but not enough to justify such a long wait, and its performance is well short of expectations in BF3.
  • hyperdoggy - Tuesday, January 31, 2012 - link

    While I'm not in favor of the prices AMD has set for the new cards, you do realize that Nvidia has never priced their cards low, right? A quick price history check will show you that since the FX days Nvidia has priced their cards to sell your kidney. It was only in the TNT days that Nvidia did any price favors vs their competitors. I bet you my right kidney (I sold my left one for an 8800GTX for $650 on day 1 of launch) that Kepler will be no different, regardless of what its performance will be.

    I never got the fanboy aspect of things; you see gamers that can calculate min-max fps better than most math majors yet somehow only see red or green when the numbers are laid out right in front of them. I'm ashamed to say I'm old enough to have been around since the Voodoo days; I've gone Voodoo, Nvidia, ATI, Nvidia, ATI, Nvidia, and now AMD, more times than I can remember. Go for what's best at the time you need an upgrade. Stop turning yourself into a colored Hulk when your team doesn't have a competitive product.
  • SlyNine - Tuesday, January 31, 2012 - link

    Yeah, and the 8800GTX kick-stomped the crap out of the competition. This is just a bump up in performance, at kick-stomp prices.

    Plus this is AMD, not Nvidia. Where is the 5870, the 9700 Pro? This is closer to a 5800 Ultra or a 2900XT. Of course those cards at least had some real competition in the form of the 8800GTX and 9700 Pro.

    If the 8800GTX and 9700 Pro had only increased performance as much as, say, the 6970 or 580GTX did (compared to their previous cards, the 5870/480), then the analogy would truly work and the 7970 would basically be the 2900XT/5800 Ultra of its day.
  • chizow - Tuesday, January 31, 2012 - link

    Actually if you look at recent pricing history, you'd see Nvidia has kept their flagship pricing in-line and much lower than what we are seeing here with SI, despite the fact Nvidia had the leading part for that generation in both cases with the GTX 480 and GTX 580.

    Both of those parts launched at $500 and were faster than AMD's competing same-generation part. If Nvidia did the same as AMD, the 580 would've been priced at $550-600 for that 10-15% performance bump over the 480, but they kept their pricing constant while increasing performance. As I stated earlier, AMD definitely had a hand in this when they undercut the GTX 280 so badly in 2008, but Nvidia did learn from their mistake and has not raised the pricing metric since.

    Now Nvidia does have a decision to make. If they beat SI with Kepler as expected, they can go with AMD's pricing, which will, again, make no sense. Or they can stick to their historical price/performance model and make AMD look really bad, just as AMD did to them 3 1/2 years ago.
