There are only a handful of metrics by which 2009 didn’t end as a successful year for AMD. With the launch of the Radeon HD 5800 series in September of that year AMD got a significant and unusually long-standing jump on the competition. By being the first company to transition a high-end GPU to TSMC’s 40nm process they were able to bring about the next generation of faster and cheaper video cards, quickly delivering better performance at better prices than their 55nm predecessors and competitors alike. At the same time they were the first company to produce a GPU for the new DirectX 11 standard, giving them access to a number of new features, a degree of future-proofing, and goodwill with developers eager to get their hands on DX11 hardware.

Ultimately AMD held the high-end market for over 6 months until NVIDIA was able to counter with the Fermi-based GTX 400 series. Though it’s not unprecedented for a company to rule the high-end market for many months at a time, it’s normally in the face of slower but similar cards from the competition – to stand alone is far rarer. This is not to say that it was easy for AMD, as TSMC’s 40nm production woes kept AMD from fully capitalizing on their advantages until 2010. But even with 40nm GPUs in short supply, it was clearly a good year for AMD.

Now in the twilight of the year 2010, the landscape has once again shifted. NVIDIA did deliver the GTX 400 series, and then they delivered the GTX 500 series, once more displacing AMD from the high-end market as NVIDIA’s build-’em-big strategy is apt to do. In October we saw AMD reassert themselves in the mid-range market with the Radeon HD 6800 series, delivering performance close to the 5800 series for lower prices and with greater power efficiency, and provoking a price war that quickly led to NVIDIA dropping GTX 460 prices. With the delivery of the 6800 series, the stage has been set for AMD’s return to the high-end market with the launch of the Radeon HD 6900 series.

Launching today are the Radeon HD 6970 and Radeon HD 6950, utilizing AMD’s new Cayman GPU. Born from the ashes of TSMC’s canceled 32nm node, Cayman is the biggest change to AMD’s GPU microarchitecture since the original Radeon HD 2900. Just because AMD doesn’t have a new node to work with this year doesn’t mean they haven’t been hard at work, and as we’ll see Cayman and the 6900 series will bring that hard work to the table. So without further ado, let’s dive into the Radeon HD 6900 series.

| | AMD Radeon HD 6970 | AMD Radeon HD 6950 | AMD Radeon HD 6870 | AMD Radeon HD 6850 | AMD Radeon HD 5870 |
|---|---|---|---|---|---|
| Stream Processors | 1536 | 1408 | 1120 | 960 | 1600 |
| Texture Units | 96 | 88 | 56 | 48 | 80 |
| ROPs | 32 | 32 | 32 | 32 | 32 |
| Core Clock | 880MHz | 800MHz | 900MHz | 775MHz | 850MHz |
| Memory Clock | 1.375GHz (5.5GHz effective) GDDR5 | 1.25GHz (5.0GHz effective) GDDR5 | 1.05GHz (4.2GHz effective) GDDR5 | 1GHz (4GHz effective) GDDR5 | 1.2GHz (4.8GHz effective) GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit | 256-bit |
| Frame Buffer | 2GB | 2GB | 1GB | 1GB | 1GB |
| FP64 Rate | 1/4 | 1/4 | N/A | N/A | 1/5 |
| Transistor Count | 2.64B | 2.64B | 1.7B | 1.7B | 2.15B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Price Point | $369 | $299 | $239 | $179 | ~$249 |
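The “effective” memory clocks in the table above follow from GDDR5’s quad-pumped interface (four transfers per memory clock), and together with the 256-bit bus they determine each card’s peak memory bandwidth. As a quick sanity check of the table’s numbers:

```python
# GDDR5 moves four transfers per memory clock, so the "effective"
# data rate is 4x the base clock. Peak bandwidth is then the data
# rate times the bus width in bytes.
def effective_rate_ghz(memory_clock_ghz: float) -> float:
    return memory_clock_ghz * 4

def bandwidth_gb_s(memory_clock_ghz: float, bus_width_bits: int) -> float:
    # GT/s * (bus width in bytes) = GB/s
    return effective_rate_ghz(memory_clock_ghz) * bus_width_bits / 8

print(bandwidth_gb_s(1.375, 256))  # Radeon HD 6970: 176.0 GB/s
print(bandwidth_gb_s(1.25, 256))   # Radeon HD 6950: 160.0 GB/s
print(bandwidth_gb_s(1.2, 256))    # Radeon HD 5870: 153.6 GB/s
```

So the 6970’s 1.375GHz memory clock works out to a healthy bandwidth advantage over the 5870 despite the unchanged 256-bit bus.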

Following AMD’s unfortunate renaming of its product stack with the Radeon HD 6800 series, the Radeon HD 6900 series is thus far a 3 part, 2 chip lineup. Today we are looking at the Cayman based 6970 and 6950, comprising the top of AMD’s single-GPU product line. Above that is Antilles, the codename for AMD’s dual-Cayman Radeon HD 6990. Originally scheduled to launch late this year, the roughly month-long delay of Cayman has pushed that back; we’ll now be seeing the 3rd member of the 6900 series next year. So today the story is all about Cayman and the single-GPU cards it powers.

At the top we have the Radeon HD 6970, AMD’s top single-GPU part. Featuring a complete Cayman GPU, it has 1536 stream processors, 96 texture units, and 32 ROPs. It is clocked at 880MHz for the core clock and 1375MHz (5.5GHz data rate) for its 2GB of GDDR5 RAM. TDP (or the closest thing to it) is 250W, while typical idle power draw is down from the 5800 series to 20W, reflecting AMD’s familiarity with the now-mature 40nm process.

Below that we have the Radeon HD 6950, the traditional lower power card using a slightly cut-down GPU. The 6950 has 1408 stream processors, 88 texture units, and still all 32 ROPs attached to the same 2GB of GDDR5. The core clock is similarly reduced to 800MHz, while the memory clock is 1250MHz (5GHz data rate). TDP is 200W, while idle power is the same as with the 6970 at 20W.

From the specifications alone it’s quickly apparent that something new is happening with Cayman, as at 1536 SPs it has fewer SPs than the 1600 SP Cypress/5870 it replaces. We have a great deal to talk about here, but we’ll stick to a high-level overview for our introduction. In the biggest change to AMD’s core GPU architecture since the launch of their first DX10/unified shader Radeon HD 2900 in 2007, AMD is moving away from the Very Long Instruction Word-5 (VLIW5) architecture we have come to know them for, in favor of a slightly less wide VLIW4 architecture. In a nutshell AMD’s SIMDs are narrower but there are more of them, as AMD looks to find a new balance in their core architecture. Although it’s not a new core architecture outright, the change from VLIW5 to VLIW4 brings a number of ramifications that we will be looking at. And this is just one of the many facets of AMD’s new architecture.
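The tradeoff behind that shift can be illustrated with a toy model (a deliberate simplification, not AMD’s actual shader compiler): a VLIW compiler must pack runs of mutually independent scalar operations into fixed-width bundles, padding any unfilled slots with NOPs. When typical shader code averages fewer than 5 independent operations per bundle, a narrower machine wastes fewer slots:

```python
# Toy illustration of VLIW slot utilization. Each "group" is a run
# of mutually independent scalar ops; the compiler issues each group
# as one or more fixed-width bundles, padding unused slots with NOPs.
# This is a hypothetical model, not AMD's real instruction scheduler.
import math

def bundles_needed(group_sizes, width):
    # Each group of g independent ops needs ceil(g / width) bundles.
    return sum(math.ceil(g / width) for g in group_sizes)

def utilization(group_sizes, width):
    ops = sum(group_sizes)
    slots = bundles_needed(group_sizes, width) * width
    return ops / slots

# A made-up workload where most groups have 3-4 independent ops,
# which is the regime where VLIW4 packs more tightly than VLIW5.
workload = [4, 3, 4, 2, 5, 3, 4, 4]
print(f"VLIW5 utilization: {utilization(workload, 5):.1%}")
print(f"VLIW4 utilization: {utilization(workload, 4):.1%}")
```

The flip side, visible in the `5`-op group above, is that any bundle wider than 4 ops now has to be split across cycles, which is part of the balance AMD is betting on with Cayman.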

Getting right to the matter of performance, the 6970 performs very close to the GTX 570/480 on average, while the 6950 is in a class of its own, occupying the small hole between the 5870/470 and the 6970/570. With that level of performance the pricing for today’s launch is rather straightforward: the 6970 will be launching slightly above the GTX 570 at $369, while the 6950 will be launching at the $299 sweet spot. Further down the line AMD’s partners will be launching 1GB versions of these cards, bringing prices down in exchange for potential memory bottlenecks.

Today’s launch is going to be a hard launch, with both the 6970 and the 6950 available. AMD is being slightly more cryptic than usual about just what the launch quantities are; our official guidance is “available in quantity” and “tens of thousands” of cards. On the one hand we aren’t expecting anything nearly as constrained as the 5800 series launch, but at the same time AMD is not filling us with confidence that availability will be as wide as it was for the 6800 series either. If at the end of this article you decide you want a 6900 card, your best bet is to grab one sooner rather than later.


AMD's Current Product Stack

With the launch of the 6900 series, the 5800 series is facing its imminent retirement. There are still a number of cards on the market and they’re priced to move, but AMD is looking at cleaning out its Cypress inventory over the next couple of months, so officially the 5800 series is no longer part of AMD’s current product stack. Meanwhile AMD’s dual-GPU 5970 remains an outlier, as its job is not quite done until the 6990 arrives – until then it’s still officially AMD’s highest-end card and their closest competitor to the GTX 580.

Meanwhile NVIDIA’s product stack and pricing stands as-is.

Winter 2010 Video Card MSRPs
| NVIDIA | Price | AMD |
|---|---|---|
| GeForce GTX 580 | $500 | |
| | $470 | Radeon HD 5970 |
| | $410 | |
| | $369 | Radeon HD 6970 |
| GeForce GTX 570 | $350 | |
| | $299 | Radeon HD 6950 |
| | $250 | Radeon HD 5870 |
| | $240 | Radeon HD 6870 |
| | $180-$190 | Radeon HD 6850 |
168 Comments


  • B3an - Thursday, December 16, 2010 - link

    Very stupid, uninformed and narrow-minded comment. People like you never look to the future, which anyone buying a graphics card should do, and you completely lack any imagination. There's already tons of uses for GPU computing, many of which the average computer user can make use of, even if it's simply encoding a video faster. And it will be used a LOT more in the future.

    Most people, especially ones that game, don't even have 17" monitors these days. The average size monitor for any new computer is at least 21" with 1680 res these days. Your whole comment is as if everyone has the exact same needs as YOU. You might be happy with your ridiculously small monitor, and playing games at low res on lower settings, and it might get the job done, but lots of people don't want this, they have standards and large monitors and needs to make use of these new GPUs. I can't exactly see many people buying these cards with a 17" monitor!
  • CeepieGeepie - Thursday, December 16, 2010 - link

    Hi Ryan,

    First, thanks for the review. I really appreciate the detail and depth on the architecture and compute capabilities.

    I wondered if you had considered using some of the GPU benchmarking suites from the academic community to give even more depth for compute capability comparisons. Both SHOC (http://ft.ornl.gov/doku/shoc/start) and Rodinia (https://www.cs.virginia.edu/~skadron/wiki/rodinia/... look like they might provide a very interesting set of benchmarks.
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi Ceepie;

    I've looked into SHOC before. Unfortunately it's *nix-only, which means we can't integrate it into our Windows-based testing environment. NVIDIA and AMD both work first and foremost on Windows drivers for their gaming card launches, so we rarely (if ever) have Linux drivers available for the launch.

    As for Rodinia, this is the first time I've seen it. But it looks like their OpenCL codepath isn't done, which means it isn't suitable for cross-vendor comparisons right now.
  • IdBuRnS - Thursday, December 16, 2010 - link

    "So with that in mind a $370 launch price is neither aggressive nor overpriced. Launching at $20 over the GTX 570 isn’t going to start a price war, but it’s also not so expensive to rule the card out. "

    At NewEgg right now:

    Cheapest GTX 570 - $509
    Cheapest 6970 - $369

    $30 difference? What are you smoking? Try $140 difference.
  • IdBuRnS - Thursday, December 16, 2010 - link

    Oops, $20 difference. Even worse.
  • IdBuRnS - Thursday, December 16, 2010 - link

    570...not 580...

    /hangsheadinshame
  • epyon96 - Thursday, December 16, 2010 - link

    This was a very interesting discussion to me in the article.

    I'm curious if Anandtech might expand on this further in a future dedicated article comparing what NVIDIA is using to AMD.

    Are they also more similar to VLIW4 or VLIW5?

    Can someone else shed some light on it?
  • Ryan Smith - Thursday, December 16, 2010 - link

    We wrote something almost exactly like you're asking for in our Radeon HD 4870 review.

    http://www.anandtech.com/show/2556

    AMD and NVIDIA's compute architectures are still fundamentally the same, so just about everything in that article still holds true. The biggest break is VLIW4 for the 6900 series, which we covered in our article this week.

    But to quickly answer your question, GF100/GF110 do not immediately compare to VLIW4 or VLIW5. NVIDIA is using a pure scalar architecture, which has a number of fundamental differences from any VLIW architecture.
  • dustcrusher - Thursday, December 16, 2010 - link

    The cheap insults are nothing but a detriment to what is otherwise an interesting argument, even if I don't agree with you.

    As far as the intellect of Anandtech readers goes, this is one of the few sites where almost all of the comments are worth reading; most sites are the opposite- one or two tiny bits of gold in a big pan of mud.

    I'm not going to "vastly overestimate" OR underestimate your intellect though- instead I'm going to assume that you got caught up in the moment. This isn't Tom's or Dailytech, a little snark is plenty.
  • Arnulf - Thursday, December 16, 2010 - link

    When you launch an application (say a game), it is likely to be the only active thread running on the system, or perhaps one of very few active threads. A CPU with a Turbo function will clock up as high as possible to run this main thread. When further threads are launched by the application, the CPU will inevitably increase its power consumption and consequently clock down.

    While CPU manufacturers don't advertise this functionality in this manner, it is really no different from PowerTune.

    Would PowerTune technology make you feel any better if it were marketed the other way around, the way CPUs are? (advertising the lowest frequency, with a clock boost provided the thermal cap hasn't been reached yet)
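The shared control loop the comment describes, picking the highest clock state whose estimated power fits under a fixed cap, can be sketched as follows. The clock states, power model, and cap here are entirely made up for illustration; neither AMD's nor Intel's actual algorithm is public in this form:

```python
# Hypothetical sketch of a power-capped DVFS loop: whether marketed
# as "boost" (CPUs) or "throttle" (PowerTune), the mechanism picks
# the highest clock whose estimated draw stays under a fixed cap.
def highest_safe_clock(clocks_mhz, estimated_power_w, cap_w):
    """clocks_mhz: available clock states, ascending.
    estimated_power_w: function mapping clock -> estimated draw (W)."""
    safe = [c for c in clocks_mhz if estimated_power_w(c) <= cap_w]
    # If even the lowest state exceeds the cap, fall back to it anyway.
    return max(safe) if safe else min(clocks_mhz)

# Toy power model: a fixed baseline plus a term that scales with
# load and clock. The constants are invented for this example.
def power_model(load_fraction):
    return lambda clock: 50 + 200 * load_fraction * (clock / 880)

clocks = [500, 600, 700, 800, 880]
print(highest_safe_clock(clocks, power_model(0.5), cap_w=220))  # light load: full clock
print(highest_safe_clock(clocks, power_model(1.0), cap_w=220))  # heavy load: throttled
```

Under light load the cap never binds and the part runs at its top state; only a pathological (e.g. power-virus) load forces it down, which matches how both vendors describe their schemes.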
