Final Words

The launch of the Radeon HD 7970 has presented us with a great deal of data and even more subjects to consider, so it's difficult to whittle all of this down to a single conclusion. Nevertheless, based on our testing I believe there are two distinct conclusions to be drawn: the case for gaming, and the case for computing.

Gaming

At the end of the day the 7970 is targeted first and foremost as a gaming workhorse. Regardless of any architectural changes, what's important is how fast the card is, how much it costs, whether it works correctly, and what its physical attributes are like. On all of these counts AMD has made an acceptable card, but this is not the kind of groundbreaking product we've seen from them in the past.

The fact of the matter is that since 2008 we've become spoiled by AMD's aggressive pricing. More than anything else it was the low prices of the Radeon HD 4870 and Radeon HD 5870 that made those products superstars, thanks to their performance for the price and the way they undercut NVIDIA's competing cards. The Radeon HD 5870 was definitely fast, but at $379 it was a steal, which is part of the reason its price never stabilized at that low a level.

At the same time the 7970 is not the 5870. Relative to both NVIDIA's and AMD's previous-generation video cards, the 5870 was faster on a percentage basis; it was more clearly a next-generation card, and DX11 only helped to seal the deal. Meanwhile, if you look at straight averages, the 7970 is only around 15-25% faster than the GTX 580 in our tests, with its advantage being highly game dependent. It always wins at 2560 and 1920, but there are some cases where it's not much of a win. The 7970's domination of the 6970 is more absolute, but then again the 6970 is a good $200 cheaper at this point in time.

Meanwhile the presence of previous-generation dual-GPU cards will continue to muddle the picture a bit further. We remain as hesitant as ever about multi-GPU cards and believe a high-performance single-GPU card is still a better investment in most situations, but there's no denying that the GTX 590 and Radeon HD 6990 are quite capable cards today if you can put up with the noise and the inherent issues of alternate frame rendering.

Ultimately the past few years have seen AMD make great technical progress, but on the business side of things it's NVIDIA that has made all the money. GCN will help AMD here by improving their professional product line, but the other part of that equation is for AMD to stop selling their cards for so little when they don't have to. And this is what we're seeing with the Radeon HD 7970: AMD has chosen to price the 7970 like a current-generation card – it's priced relative to the 3GB GTX 580 – and that's a fair metric. What that pricing isn't, in any sense of the word, is groundbreaking.

So at the end of the day AMD has once again retaken the performance crown for single-GPU cards, returning to a position they last held nearly two years ago with the 5870. For that AMD deserves kudos, and if you're in the market for a $500+ video card the 7970 is clearly the card to get – it's a bit more expensive than the GTX 580, but it's reasonably faster and cooler all at once. However, if you've been waiting for 28nm GPUs to bring about another rapid decrease in video card prices as we saw with the 5870, you're going to be waiting a bit longer.

Compute

The Radeon HD 7970 may be a gaming product, but today was just as much a launch for AMD’s Graphics Core Next architecture as it was for their new single-GPU king. GCN is the biggest architecture overhaul for AMD since R600 in 2007, and it shows. AMD has laid out a clear plan to seriously break into the GPU computing market and GCN is the architecture that will take them there. This is their Fermi moment.

At this point I’m not comfortable speaking about the compute performance of GCN in absolutes, but based on our limited testing with the 7970 it’s clear the potential is there. At times it’s competitive with the Fermi-based GTX 580 and at other times it’s quite a bit faster. In the hands of experienced developers and given enough time to learn the quirks of GCN, I believe GCN will prove itself. It’s much too early to tell if it will be able to withstand the eventual arrival of NVIDIA’s Kepler, but certainly this is the best shot AMD has ever had.

Performance aside, it's clear that AMD's move to a non-VLIW SIMD architecture will make GPU compute development for GCN much easier; of that there is no question. This is important because GCN isn't just about HPC computing; it's about fully embracing Fusion. AMD's CPU plans are built upon GCN just as much as they're built upon Bulldozer, and for GCN to deliver on its half of the heterogeneous computing aspect of Fusion it will need to be easy to program and it will need to perform well. It would appear AMD has the hardware to make the former happen; now time will tell if GCN Fusion can deliver on the latter.
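
To put that programmability argument in more concrete terms, below is a minimal OpenCL sketch of our own devising – it is not AMD sample code, and error checking is omitted for brevity. Each work-item of this simple SAXPY kernel handles one element; on GCN a work-item maps directly onto a SIMD lane, whereas on AMD's earlier VLIW4/VLIW5 designs the compiler had to pack several independent operations together to keep each unit busy, which made the performance of exactly this kind of code harder to predict.

```c
/* Minimal OpenCL SAXPY sketch, illustrative only (not AMD sample code).
 * Error checking omitted for brevity. Build with e.g.:
 *   gcc saxpy_cl.c -lOpenCL -o saxpy_cl
 */
#include <CL/cl.h>
#include <stdio.h>

#define N 1024

/* One work-item per element. On GCN each work-item occupies one SIMD
 * lane directly; on VLIW4/VLIW5 the shader compiler had to find 4-5
 * independent operations per instruction to fill the unit. */
static const char *src =
    "__kernel void saxpy(float a, __global const float *x,\n"
    "                    __global float *y) {\n"
    "    size_t i = get_global_id(0);\n"
    "    y[i] = a * x[i] + y[i];\n"
    "}\n";

int main(void) {
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Standard OpenCL boilerplate: platform, device, context, queue. */
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Compile the kernel at runtime and bind its arguments. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(x), x, NULL);
    cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               sizeof(y), y, NULL);

    float a = 3.0f;
    clSetKernelArg(k, 0, sizeof(a), &a);
    clSetKernelArg(k, 1, sizeof(bx), &bx);
    clSetKernelArg(k, 2, sizeof(by), &by);

    /* Launch N work-items and read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, by, CL_TRUE, 0, sizeof(y), y, 0, NULL, NULL);

    printf("y[0] = %f (expect 5.0)\n", y[0]);  /* 3*1 + 2 */
    return 0;
}
```

The kernel itself is trivial; the point is that on GCN the performance of code like this should, in principle, follow far more intuitively from the source than it did on VLIW, which is precisely what compute developers have been asking for.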

Comments

  • Esbornia - Thursday, December 22, 2011 - link

    Fanboy much?
  • CeriseCogburn - Thursday, March 8, 2012 - link

    Finally, piroroadkill, Esbornia - the gentleman ericore merely stated what all the articles here have done as analysis, while the Radeonite fans repeated it ad infinitum, screaming that NVIDIA's giant core count doesn't give the percentage increase it should considering the transistor increase.
    Now, when it's AMD's turn, we get ericore under 3 attacks in a row...
    So do you three all take it back concerning Fermi?
  • maverickuw - Thursday, December 22, 2011 - link

    I want to know when the 7950 will come out, and hopefully it'll come out at $400.
  • duploxxx - Thursday, December 22, 2011 - link

    The mere fact that ATI is able to bring out a new architecture on a new process and deliver such a performance increase at that power consumption makes it a clear winner.

    Looking at the past, with Fermi's first launch and even Cayman VLIW4, they had many more issues to start with.

    Nice job. While the NV 680 will probably perform better, it will take them at least a while to release that product, and it will also need to be huge in size.
  • ecuador - Thursday, December 22, 2011 - link

    Nice review, although I really think testing at 1680x1050 for a $550 card is a big waste of time, which could have perhaps gone to multi-monitor testing etc.
  • Esbornia - Thursday, December 22, 2011 - link

    It's Anand, you should expect this kind of shiet.
  • Ryan Smith - Thursday, December 22, 2011 - link

    In this case the purpose of 1680 is to allow us to draw comparisons to low-end cards and older cards, which is something we consider to be important. The 8800GT and 3870 in particular do not offer meaningful performance at 1920.
  • poohbear - Thursday, December 22, 2011 - link

    Why do you benchmark @ 1920x1200 resolution? According to the Steam December survey only 8% of gamers have that resolution, whereas 24% have 1920x1080 and 18% use 1680x1050 (the 2 most popular). Also, minimum FPS would be nice to know in your benchmarks, as that is really useful for us! Just a heads up for next time you benchmark a video card. Otherwise nice review! Lots of good info at the beginning! :)
  • Galcobar - Thursday, December 22, 2011 - link

    Page 4, comments section.
  • Esbornia - Thursday, December 22, 2011 - link

    They don't want to show the improvements in min FPS because they hate AMD; you should know that already.
