Compute Performance

Moving on from our look at gaming performance, we have our customary look at compute performance. Since compute performance is by definition shader bound, the 7950 is at a bit of a disadvantage here compared to gaming performance. Whereas ROP performance scales with the core clock, shader performance is hit by both the reduction in the core clock and the disabled CU array.
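The arithmetic here is worth spelling out. Using the cards' published specifications (7970: 32 CUs at 925MHz; 7950: 28 CUs at 800MHz), a back-of-envelope sketch of the two scaling factors:

```python
# Back-of-envelope scaling for the 7950's cut-down configuration.
# Published specs: 7970 = 32 CUs @ 925MHz core, 7950 = 28 CUs @ 800MHz.
cus_7970, clock_7970 = 32, 925
cus_7950, clock_7950 = 28, 800

# ROP throughput scales with the core clock alone (both cards have the
# same ROP count), so the 7950 keeps ~86% of the 7970's ROP throughput.
rop_ratio = clock_7950 / clock_7970

# Shader throughput takes both hits: fewer CUs *and* a lower clock,
# leaving the 7950 with only ~76% of the 7970's shader throughput.
shader_ratio = (cus_7950 / cus_7970) * (clock_7950 / clock_7970)

print(f"ROP throughput:    {rop_ratio:.0%} of the 7970")
print(f"Shader throughput: {shader_ratio:.0%} of the 7970")
```

That ~76% theoretical shader figure lines up well with the shader-bound results later on this page.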

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.

AMD’s greatly improved compute performance continues to shine here, though in the case of Civilization V it’s largely consumed by simply closing the previously large gap between the GTX 500 series and the Radeon HD 6000 series. As a result the 7950 falls just short of the GTX 580, while the factory overclocks on the Sapphire and XFX cards give the 7950 enough of a push to come within 5% of the 7970.

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

Under SmallLuxGPU the 7970 enjoyed a large lead over the GTX 580, and this continues with the 7950. Even though the 7950 is well behind the 7970—to the tune of 24%—it’s still 33% ahead of the GTX 580, and the lead only grows from there. Meanwhile the XFX and Sapphire cards can catch up to the 7970 somewhat, but as this is a truly shader-bound test, overclocking can’t make up for the 7950’s missing shader units.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL routine that AES encrypts/decrypts an 8K x 8K pixel image. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.
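The reported figure is simply a mean over repeated runs. A minimal sketch of that kind of timing harness—with a hypothetical stand-in transform in place of the benchmark's actual OpenCL AES kernel, which is not shown here:

```python
import time

def encrypt_image(data: bytes) -> bytes:
    """Hypothetical stand-in for the benchmark's OpenCL AES kernel.
    This is just a placeholder XOR transform, NOT real AES."""
    return bytes(b ^ 0x5A for b in data)

def average_encrypt_time(data: bytes, iterations: int = 10) -> float:
    """Return the mean wall-clock time per encryption pass,
    which is the statistic the benchmark reports."""
    start = time.perf_counter()
    for _ in range(iterations):
        encrypt_image(data)
    return (time.perf_counter() - start) / iterations

# Small stand-in buffer; the real benchmark operates on an 8K x 8K image.
image = bytes(64 * 1024)
print(f"avg: {average_encrypt_time(image) * 1000:.2f} ms per pass")
```

Averaging over iterations matters on GPUs in particular, since the first pass also absorbs one-time setup costs such as kernel compilation and buffer transfers.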

In spite of being a compute benchmark, AESEncryptDecrypt is not particularly sensitive to GPU performance, showcasing the impact that setup times can have. The 7950 trails the 7970 by 10%, and overclocking doesn’t change this much. Unfortunately for AMD, NVIDIA is still the leader here, showing that AMD’s compute performance still has room to grow.

Finally, our last benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16K-particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n²) nearest neighbor method that is optimized by using shared memory to cache data.
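To illustrate the brute-force all-pairs pattern involved, here is a minimal CPU-side sketch in Python. The compute shader runs the outer loop in parallel across threads and tiles the inner loop through shared memory; the kernel function is a hypothetical simplified example, not the SDK sample's actual math:

```python
# Naive O(n^2) neighbor search: every particle tests every other particle.
def density_at(particles, i, smoothing_radius=1.0):
    """Sum a simple smoothing kernel over all neighbors in range of
    particle i. Cost is O(n) per particle, O(n^2) for the full set."""
    xi, yi = particles[i]
    h2 = smoothing_radius ** 2
    total = 0.0
    for j, (xj, yj) in enumerate(particles):
        if j == i:
            continue
        r2 = (xi - xj) ** 2 + (yi - yj) ** 2
        if r2 < h2:
            total += (h2 - r2) ** 3  # poly6-style kernel, unnormalized
    return total

pts = [(0.0, 0.0), (0.5, 0.0), (2.0, 0.0)]
print(density_at(pts, 0))  # → 0.421875; only (0.5, 0) is within range
```

Because every thread walks the same particle list, staging chunks of that list in shared memory lets a whole thread group reuse each fetch instead of hitting global memory n times—which is exactly the optimization the SDK sample applies.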

With the compute shader fluid simulation we once again shift back into a compute task that’s much more shader-bound. The 7950 only reaches 80% of the performance of the 7970, once more proving the real impact of losing a CU array. This is still enough to handily surpass the GTX 580 however, with the 7950 taking a 15% lead.

259 Comments

  • mak360 - Tuesday, January 31, 2012 - link

    i would easily buy the HD7950 over the old tech - outdated - hot - power hungry - loud GTX580 junk. The HD7950 is same price, new tech, uses 72 watts less, is cooler, is silent, is 28nm, is faster, has compute, has pcie3, has x3 monitors, has audio over each channel, also slaps the 590 if thats what you want lol.

    its a win-win, you would have to be an idiot to buy anything nvidia has currently in the high end.
  • chizow - Wednesday, February 1, 2012 - link

    Anyone interested in high-end already owns Nvidia and is hitting the snooze button on this launch until Kepler.

    There's only a 15-25% reason to buy a 7970, 0-5% reason to buy a 7950.
  • Death666Angel - Wednesday, February 1, 2012 - link

    You keep repeating it, and you keep being wrong. There are a million reasons for someone to upgrade their system now. Maybe they got a better monitor for Christmas and need the graphics card upgrade, but waited a month until AMD revealed their new tech. Maybe it's someone's birthday and he can get a big card. Or someone got a new job and wants a new card today? Not everyone who has the money and need for such a card now had it in the months before.
  • chizow - Wednesday, February 1, 2012 - link

    In that case, they should probably wait for the real next-gen, since that's what most anyone was doing prior to the disappointing Tahiti reveal.

    Or go ahead and pick up a 6970/570 for a much better price/performance return. Although we may actually see those prices go back up, now that it's obvious Tahiti did nothing to force downward pricing pressure.
  • yankeeDDL - Wednesday, February 1, 2012 - link

    Just look at the review from Tomshardware.
    Based on performance, they were expecting the 7950 to be priced around $480. Then they were informed about the MSRP of $450 and took it extremely well.
    Just sayin'
  • Spunjji - Wednesday, February 1, 2012 - link

    Unfortunately you gave yourself away as a bit of an idiot as soon as you failed to address anything about the product other than its raw performance.

    Precisely what makes you think that AMD /has/ to price their products at this level? They have a smaller chip that performs better for less power. As soon as nVidia releases competing products they'll drop trou on the price and everyone can be happy. Right now they're price-gouging the performance-obsessed, just like nVidia have been for as long as they've had the top product.

    Personally, I'm disappointed that they've abandoned the 3/4/5000 series approach of providing fantastic value for money, but apparently that wasn't earning them any money. Big shame, don't care, move on. I'll be waiting for Kepler to show before I make any buying decisions.
  • chizow - Wednesday, February 1, 2012 - link

    If the only thing AMD is able to bring to the table from a full node process shrink is a reduction in power consumption, they've already failed.

    What compounds their failure however, is the fact they're trying to price this card that doesn't even significantly outperform last-gen parts at existing prices.

    If they actually priced this where it should be ~$380-$400, it'd be a completely different story. Because they'd actually be offering you all of those fringe benefits you listed as well as either high-end performance at a much lower price OR significantly higher performance at the same price.

    These are the kinds of metrics people look at when deciding to upgrade, or not. Pricing a product that performs the same as a part that's been available for 14 months already just doesn't make any sense, sorry.
  • ven - Thursday, February 2, 2012 - link

    After all this conversation I've come to only one conclusion: you guys have created so much hype for Kepler that Nvidia will be delighted. After reading all this, I wouldn't be surprised if Nvidia printed a link to this page on their Kepler card boxes as part of their advertising.
  • chizow - Thursday, February 2, 2012 - link

    I don't think Nvidia cares about what's written here tbh, I don't think it took more than looking at the benchmarks for them to get excited.

    What they care about:

    -AMD's top 28nm = only 15-25% faster than their last-gen top 40nm
    -AMD's 2nd 28nm = only 0-5% faster than their last-gen top 40nm

    The result is the rumors and indirect quotes attributed to Nvidia personnel at CES amounting to:

    "We expected more from AMD's HD7900 series."

    But really, this quote could and should be attributed to anyone, especially at the asking price. It seems most people feel this way, makes you wonder why AMD fans don't.

  • Galidou - Saturday, February 4, 2012 - link

    It's funny to see parts compared only by process node..... the thing is, Nvidia's last-gen 40nm parts are BIG gpus, you have to compare transistor counts to understand the % increase in performance....

    AMD goes for smaller gpus with a smaller power envelope that maximize performance/die size, vs Nvidia's maximum die size for the max performance attainable with good yields...
