There are only a handful of metrics by which 2009 didn’t end as a successful year for AMD. With the launch of the Radeon HD 5800 series in September of that year, AMD got a significant and unusually long-standing jump on the competition. By being the first company to transition a high-end GPU to TSMC’s 40nm process, they were able to bring about the next generation of faster and cheaper video cards, quickly delivering better performance at better prices than their 55nm predecessors and competitors alike. At the same time they were the first company to produce a GPU for the new DirectX 11 standard, giving them access to a number of new features, a degree of future-proofing, and goodwill with developers eager to get their hands on DX11 hardware.

Ultimately AMD held the high-end market for over 6 months until NVIDIA was able to counter with the Fermi-based GTX 400 series. Though it’s not unprecedented for a company to rule the high-end market for many months at a time, it’s normally in the face of slower but similar cards from the competition – to stand alone is far rarer. This is not to say that it was easy for AMD, as TSMC’s 40nm production woes kept AMD from fully capitalizing on their advantages until 2010. But even with 40nm GPUs in short supply, it was clearly a good year for AMD.

Now, in the twilight of 2010, the landscape has once again shifted. NVIDIA did deliver the GTX 400 series, and then they delivered the GTX 500 series, once more displacing AMD from the high-end market as NVIDIA’s build-’em-big strategy is apt to do. In October we saw AMD reassert themselves in the mid-range market with the Radeon HD 6800 series, delivering performance close to the 5800 series at lower prices and with greater power efficiency, and provoking a price war that quickly led to NVIDIA dropping GTX 460 prices. With the delivery of the 6800 series, the stage has been set for AMD’s return to the high-end market with the launch of the Radeon HD 6900 series.

Launching today are the Radeon HD 6970 and Radeon HD 6950, utilizing AMD’s new Cayman GPU. Born from the ashes of TSMC’s canceled 32nm node, Cayman is the biggest change to AMD’s GPU microarchitecture since the original Radeon HD 2900. Just because AMD doesn’t have a new node to work with this year doesn’t mean they haven’t been hard at work, and as we’ll see, Cayman and the 6900 series bring that hard work to the table. So without further ado, let’s dive into the Radeon HD 6900 series.

|  | AMD Radeon HD 6970 | AMD Radeon HD 6950 | AMD Radeon HD 6870 | AMD Radeon HD 6850 | AMD Radeon HD 5870 |
|---|---|---|---|---|---|
| Stream Processors | 1536 | 1408 | 1120 | 960 | 1600 |
| Texture Units | 96 | 88 | 56 | 48 | 80 |
| ROPs | 32 | 32 | 32 | 32 | 32 |
| Core Clock | 880MHz | 800MHz | 900MHz | 775MHz | 850MHz |
| Memory Clock | 1.375GHz (5.5GHz effective) GDDR5 | 1.25GHz (5.0GHz effective) GDDR5 | 1.05GHz (4.2GHz effective) GDDR5 | 1GHz (4GHz effective) GDDR5 | 1.2GHz (4.8GHz effective) GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit | 256-bit |
| Frame Buffer | 2GB | 2GB | 1GB | 1GB | 1GB |
| FP64 Rate | 1/4 | 1/4 | N/A | N/A | 1/5 |
| Transistor Count | 2.64B | 2.64B | 1.7B | 1.7B | 2.15B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Price Point | $369 | $299 | $239 | $179 | ~$249 |

Following AMD’s unfortunate renaming of its product stack with the Radeon HD 6800 series, the Radeon HD 6900 series is thus far a three-part, two-chip lineup. Today we are looking at the Cayman-based 6970 and 6950, which together form the top of AMD’s single-GPU product line. Above that is Antilles, the codename for AMD’s dual-Cayman Radeon HD 6990. Antilles was originally scheduled to launch late this year, but the roughly month-long delay of Cayman has pushed that back; we’ll now be seeing the third member of the 6900 series next year. So today the story is all about Cayman and the single-GPU cards it powers.

At the top we have the Radeon HD 6970, AMD’s top single-GPU part. Featuring a fully enabled Cayman GPU, it has 1536 stream processors, 96 texture units, and 32 ROPs. The core clock is 880MHz, while its 2GB of GDDR5 runs at 1375MHz (5.5GHz data rate). TDP (or the closest thing to it) is 250W, while typical idle power draw is down from the 5800 series to 20W, reflecting the maturity of the 40nm process and AMD’s familiarity with it.

Below that we have the Radeon HD 6950, the traditional lower-power card using a slightly cut-down GPU. The 6950 has 1408 stream processors, 88 texture units, and still all 32 ROPs, attached to the same 2GB of GDDR5. The core clock is similarly reduced to 800MHz, while the memory clock is 1250MHz (5GHz data rate). TDP is 200W, while idle power is the same as the 6970’s at 20W.
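For those who want to sanity-check the headline figures, the theoretical peaks fall out of the table above with some quick arithmetic. Here is a minimal Python sketch; the specs come straight from the table, and the formulas are the standard ones (GDDR5 transfers data at 4x its command clock, and each stream processor can retire a 2-FLOP multiply-add per cycle):

```python
# Back-of-the-envelope peak figures for the two Cayman cards, from the spec table.

cards = {
    #           SPs,  core (GHz), mem (GHz), bus (bits)
    "HD 6970": (1536, 0.880,      1.375,     256),
    "HD 6950": (1408, 0.800,      1.250,     256),
}

for name, (sps, core_ghz, mem_ghz, bus_bits) in cards.items():
    bandwidth_gbs = mem_ghz * 4 * bus_bits / 8   # GDDR5 is quad-pumped
    fp32_tflops = sps * 2 * core_ghz / 1000      # 1 multiply-add = 2 FLOPs per SP per clock
    print(f"{name}: {bandwidth_gbs:.0f} GB/s memory bandwidth, {fp32_tflops:.2f} TFLOPS FP32")
```

That works out to 176GB/s and 2.70 TFLOPS for the 6970, and 160GB/s and 2.25 TFLOPS for the 6950.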

From the specifications alone it’s quickly apparent that something new is happening with Cayman, as at 1536 SPs it has fewer SPs than the 1600 SP Cypress/5870 it replaces. We have a great deal to talk about here, but we’ll stick to a high-level overview for our introduction. In the biggest change to AMD’s core GPU architecture since the launch of their first DX10/unified shader Radeon HD 2900 in 2007, AMD is moving away from the Very Long Instruction Word-5 (VLIW5) architecture we have come to know them for, in favor of a slightly less wide VLIW4 architecture. In a nutshell AMD’s SIMDs are narrower but there are more of them, as AMD looks to find a new balance in their core architecture. Although it’s not a new core architecture outright, the change from VLIW5 to VLIW4 brings a number of ramifications that we will be looking at. And this is just one of the many facets of AMD’s new architecture.
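To illustrate the utilization argument in the abstract, consider greedily packing whatever independent operations are available into fixed-width bundles. The sketch below is purely conceptual (it is not AMD's shader compiler, and the instruction stream is invented); it simply shows why four slots are easier to keep full than five:

```python
def average_utilization(ready_ops, width):
    """Greedily pack groups of independent ops into VLIW bundles of `width` slots,
    returning the fraction of slots that end up doing useful work."""
    slots_filled = 0
    bundles_issued = 0
    for n in ready_ops:              # n = independent ops available at this point
        while n > 0:
            issued = min(n, width)
            slots_filled += issued
            bundles_issued += 1
            n -= issued
    return slots_filled / (bundles_issued * width)

# Hypothetical shader: how much instruction-level parallelism is available at each step.
# Real shader code rarely offers five independent ops at once.
ready_ops = [3, 4, 2, 5, 3, 4, 1, 4, 3, 2]

for width in (5, 4):
    print(f"VLIW{width}: {average_utilization(ready_ops, width):.0%} of slots filled")
```

On this made-up stream the narrower bundle wastes fewer slots; AMD's actual reasoning involves far more than this simple model, and we'll be digging into it when we look at the architecture in detail.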

Getting right to the matter of performance, the 6970 performs very close to the GTX 570/480 on average, while the 6950 is in a class of its own, occupying the small hole between the 5870/470 and the 6970/570. With that level of performance the pricing for today’s launch is rather straightforward: the 6970 will be launching slightly above the 570 at $369, while the 6950 will be launching at the $299 sweet spot. Further down the line AMD’s partners will be launching 1GB versions of these cards, which should bring prices down at the cost of potential memory bottlenecks.

Today’s launch is going to be a hard launch, with both the 6970 and the 6950 available. AMD is being slightly more cryptic than usual about just what the launch quantities are; our official guidance is “available in quantity” and “tens of thousands” of cards. On the one hand we aren’t expecting anything nearly as constrained as the 5800 series launch; on the other hand AMD is not filling us with confidence that cards will be as widely available as the 6800 series either. If at the end of this article you decide you want a 6900 series card, your best bet is to grab one sooner rather than later.


AMD's Current Product Stack

With the launch of the 6900 series, the 5800 series is facing its imminent retirement. There are still a number of cards on the market and they’re priced to move, but AMD is looking at cleaning out its Cypress inventory over the next couple of months, so officially the 5800 series is no longer part of AMD’s current product stack. Meanwhile AMD’s dual-GPU 5970 remains an outlier, as its job is not quite done until the 6990 arrives – until then it’s still officially AMD’s highest-end card and their closest competitor to the GTX 580.

NVIDIA’s product stack and pricing, meanwhile, stand as-is.

Winter 2010 Video Card MSRPs

| NVIDIA | Price | AMD |
|---|---|---|
|  | $500 |  |
|  | $470 | Radeon HD 5970 |
|  | $410 |  |
|  | $369 | Radeon HD 6970 |
|  | $350 |  |
|  | $299 | Radeon HD 6950 |
|  | $250 | Radeon HD 5870 |
|  | $240 | Radeon HD 6870 |
|  | $180-$190 | Radeon HD 6850 |
Comments

  • AnnonymousCoward - Wednesday, December 15, 2010 - link

    First of all, 30fps is choppy as hell in a non-RTS game. ~40fps is a bare minimum, and >60fps all the time is hugely preferred since then you can also use vsync to eliminate tearing.

    Now back to my point. Your counter was "you know that non-AA will be higher than AA, so why measure it?" Is that a point? Different cards will scale differently, and seeing 2560+AA doesn't tell us the performance landscape at real-world usage, which is 2560 with no AA.
  • Dug - Wednesday, December 15, 2010 - link

    Is it me, or are the graphs confusing?
    Some leave out cards at certain resolutions, but include them in others.

    It would be nice to have a dynamic graph link so we can make our own comparisons.
    Or a drop down to limit just ati, single card, etc.

    Either that or make a graph that has the cards tested at all the resolutions so there is the same number of cards in each graph.
  • benjwp - Wednesday, December 15, 2010 - link

    Hi,

    You keep using Wolfenstein as an OpenGL benchmark. But it is not. The single player portion uses Direct3D9. You can check this by checking which DLLs it loads or which functions it imports or many other ways (for example most of the idTech4 renderer debug commands no longer work).

    The multiplayer component does use OpenGL though.

    Your best bet for an OpenGL gaming benchmark is probably Enemy Territory Quake Wars.
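    (For anyone who wants to try the DLL check described above, here is one rough way to do it in Python, assuming Windows, the third-party psutil package, and a guessed executable name:)

    ```python
    # Sketch: inspect a running game's loaded modules to guess its renderer.
    import psutil

    TARGET = "wolfsp.exe"   # assumed executable name; substitute the real one

    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() != TARGET:
            continue
        try:
            loaded = {m.path.lower() for m in proc.memory_maps()}
        except psutil.AccessDenied:
            print("Run elevated to inspect this process.")
            break
        print("d3d9.dll loaded:     ", any("d3d9.dll" in p for p in loaded))
        print("opengl32.dll loaded: ", any("opengl32.dll" in p for p in loaded))
    ```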
  • Ryan Smith - Wednesday, December 15, 2010 - link

    We use WolfMP, not WolfSP (you can't record or playback timedemos in SP).
  • 7Enigma - Wednesday, December 15, 2010 - link

    Hi Ryan,

    What benchmark do you use for the noise testing? Is it Crysis or FurMark? Along the same line of questioning, I do not think you can use FurMark in the way you have the graphs set up, because it looks like you have left PowerTune on (which will throttle the power consumption) while using numbers from NVIDIA's cards where you have faked the drivers into not throttling. I understand one is a program cheat and another a TDP limitation, but it seems a bit wrong to not compare them in the unmodified position (or VERBALLY mention this had no bearing on the test and they should not be compared).

    Overall nice review, but the new cards are pretty underwhelming IMO.
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi 7Enigma;

    For noise testing it's FurMark. As is the case with the rest of our power/temp/noise benchmarks, we want to establish the worst case scenario for these products and compare them along those lines. So the noise results you see are derived from the same tests we do for temperatures and power draw.

    And yes, we did leave PowerTune at its default settings. How we test power/temp/noise is one of the things PowerTune made us reevaluate. Our decision is that we'll continue to use whatever method generates the worst case scenario for that card at default settings. For NVIDIA's GTX 500 series, this means disabling OCP because NVIDIA only clamps FurMark/OCCT, and to a level below most games at that. Other games like Program X that we used in the initial GTX 580 article clearly establish that power/temp/noise can and do get much worse than what Crysis or clamped FurMark will show you.

    As for the AMD cards the situation is much more straightforward: PowerTune clamps everything blindly. We still use FurMark because it generates the highest load we can find (even with it being reduced by over 200MHz), however because PowerTune clamps everything, our FurMark results are the worst case scenario for that card. Absolutely nothing will generate a significantly higher load - PowerTune won't allow it. So we consider it accurate for the purposes of establishing the worst case scenario for noise.

    In the long run this means that results will come down as newer cards implement this kind of technology, but then that's the advantage of such technology: there's no way to make the card louder without playing with the card's settings. For the next iteration of the benchmark suite we will likely implement a game-based noise test, even though technologies like PowerTune are reducing the dynamic range.

    In conclusion: we use FurMark, we will disable any TDP limiting technology that discriminates based on the program type or is based on a known program list, and we will allow any TDP limiting technology that blindly establishes a firm TDP cap for all programs and games.

    -Thanks
    Ryan Smith
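    (To make that distinction concrete, here is a purely conceptual sketch; this is neither vendor's actual logic, and every name and number in it is invented:)

    ```python
    # Conceptual contrast between app-detection throttling and a blind power cap.

    KNOWN_STRESS_APPS = {"furmark.exe", "occt.exe"}   # app-detection list (OCP-style)
    TDP_CAP_WATTS = 250                               # blind cap (PowerTune-style)

    def ocp_style_clock(app_name, base_clock_mhz):
        """Throttle only if the running program is on a known list."""
        return base_clock_mhz // 2 if app_name.lower() in KNOWN_STRESS_APPS else base_clock_mhz

    def powertune_style_clock(estimated_watts, base_clock_mhz):
        """Throttle any workload whose estimated draw exceeds the cap."""
        if estimated_watts <= TDP_CAP_WATTS:
            return base_clock_mhz
        return int(base_clock_mhz * TDP_CAP_WATTS / estimated_watts)

    print(ocp_style_clock("furmark.exe", 772))         # clamped: it's on the list
    print(ocp_style_clock("new_stress_app.exe", 772))  # untouched, however much it draws
    print(powertune_style_clock(300, 880))             # clamped: over the cap
    print(powertune_style_clock(180, 880))             # untouched: under the cap
    ```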
  • 7Enigma - Friday, December 17, 2010 - link

    Thanks for the response Ryan! I expected it to be lost in the slew of other posts. I highly recommend (as you mentioned in your second-to-last paragraph) that a game-based benchmark be used alongside FurMark for power/noise. Until both vendors adopt the same TDP limitation it's going to put the NVIDIA cards in a bad light when comparisons are made. This could be seen as a legitimate beef for the fanboys/trolls, and we all know the less ammunition they have the better. :)

    Also to prevent future confusion it would be nice to have what program you are using for the power draw/noise/heat IN the graph title itself. Just something as simple as "GPU Temperature (Furmark-Load)" would make it instantly understandable.

    Thanks again for the very detailed review (in 1 week, no less!)
  • Hrel - Wednesday, December 15, 2010 - link

    I really hope these architecture changes lead to better minimum FPS results. AMD is ALWAYS behind NVIDIA on minimum FPS, and in many ways that's the most important measurement, since min FPS determines whether the game is playable or not. I don't care if it maxes out at 122 FPS if, when the shit hits the fan, I get 15 FPS; I won't be able to accurately hit anything.
  • Soldier1969 - Wednesday, December 15, 2010 - link

    I'm disappointed in the 6970; it's not what I was expecting over my 5870. I will wait to see what the 6990 brings to the table next month. I'm looking for a 30-40% boost over my 5870 at the 2560x1600 res I game at.
  • stangflyer - Wednesday, December 15, 2010 - link

    Now that we see the power requirements for the 6970, and that it needs more power than the 5870, how would they make a 6990 without really cutting back the performance like the 5970?

    I had a 5970 for a year before selling it 3 weeks ago in preparation for getting 570s in SLI or a 6990.
    It would obviously have to be 2x8-pin power! Or they would have to really use that PowerTune feature.

    I liked my 5970 as I didn't have the stuttering issues (or I don't notice them), and I actually have no issues with Eyefinity as I have matching Dell monitors with native DisplayPort inputs.

    If I was only on one screen I would not even be thinking about upgrading, but the VRAM runs out when using AA or keeping settings high, as I play at 5040x1050. That is the only reason I am a little shy of getting 570s in SLI.

    Don't see how they can make a 6990 without really killing the performance of it.

    I ran my 5970 at 5870 speeds and beyond in games all the time, though.
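    (For reference on the power question, the PCI Express power budget math works out as below; these are the spec limits per connector, not a prediction of what AMD will actually ship:)

    ```python
    # PCIe power delivery limits per connector type.
    SLOT = 75     # watts from the x16 slot
    PIN6 = 75     # watts per 6-pin connector
    PIN8 = 150    # watts per 8-pin connector

    print("6-pin + 8-pin card:", SLOT + PIN6 + PIN8, "W")  # 300W, the official board limit
    print("Dual 8-pin card:   ", SLOT + PIN8 + PIN8, "W")  # 375W, what 2x8-pin would allow
    ```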
