There are only a handful of metrics by which 2009 didn’t end as a successful year for AMD. With the launch of the Radeon HD 5800 series in September of that year, AMD got a significant and unusually long-standing jump on the competition. By being the first company to transition a high-end GPU to TSMC’s 40nm process, they were able to bring out the next generation of faster and cheaper video cards, quickly delivering better performance at better prices than their 55nm predecessors and competitors alike. At the same time they were the first company to produce a GPU for the new DirectX 11 standard, giving them access to a number of new features, a degree of future-proofing, and goodwill with developers eager to get their hands on DX11 hardware.

Ultimately AMD held the high-end market for over six months until NVIDIA was able to counter with the Fermi-based GTX 400 series. Though it’s not unprecedented for a company to rule the high-end market for months at a time, that normally happens in the face of slower but similar cards from the competition; to stand alone is far rarer. This is not to say that it was easy for AMD, as TSMC’s 40nm production woes kept the company from fully capitalizing on its advantages until 2010. But even with 40nm GPUs in short supply, it was clearly a good year for AMD.

Now in the twilight of 2010, the landscape has once again shifted. NVIDIA did deliver the GTX 400 series, and then they delivered the GTX 500 series, once more displacing AMD from the high-end market as NVIDIA’s build-’em-big strategy is apt to do. In October we saw AMD reassert themselves in the mid-range market with the Radeon HD 6800 series, delivering performance close to the 5800 series at lower prices and with greater power efficiency, and provoking a price war that quickly led to NVIDIA dropping GTX 460 prices. With the delivery of the 6800 series, the stage was set for AMD’s return to the high-end market with the launch of the Radeon HD 6900 series.

Launching today are the Radeon HD 6970 and Radeon HD 6950, utilizing AMD’s new Cayman GPU. Born from the ashes of TSMC’s canceled 32nm node, Cayman is the biggest change to AMD’s GPU microarchitecture since the original Radeon HD 2900. Just because AMD doesn’t have a new node to work with this year doesn’t mean they haven’t been hard at work, and as we’ll see, Cayman and the 6900 series bring that hard work to the table. So without further ado, let’s dive into the Radeon HD 6900 series.

                      | Radeon HD 6970               | Radeon HD 6950              | Radeon HD 6870              | Radeon HD 6850          | Radeon HD 5870
Stream Processors     | 1536                         | 1408                        | 1120                        | 960                     | 1600
Texture Units         | 96                           | 88                          | 56                          | 48                      | 80
ROPs                  | 32                           | 32                          | 32                          | 32                      | 32
Core Clock            | 880MHz                       | 800MHz                      | 900MHz                      | 775MHz                  | 850MHz
Memory Clock          | 1.375GHz (5.5GHz eff.) GDDR5 | 1.25GHz (5.0GHz eff.) GDDR5 | 1.05GHz (4.2GHz eff.) GDDR5 | 1GHz (4GHz eff.) GDDR5  | 1.2GHz (4.8GHz eff.) GDDR5
Memory Bus Width      | 256-bit                      | 256-bit                     | 256-bit                     | 256-bit                 | 256-bit
Frame Buffer          | 2GB                          | 2GB                         | 1GB                         | 1GB                     | 1GB
FP64 Rate             | 1/4                          | 1/4                         | N/A                         | N/A                     | 1/5
Transistor Count      | 2.64B                        | 2.64B                       | 1.7B                        | 1.7B                    | 2.15B
Manufacturing Process | TSMC 40nm                    | TSMC 40nm                   | TSMC 40nm                   | TSMC 40nm               | TSMC 40nm
Price Point           | $369                         | $299                        | $239                        | $179                    | ~$249

Following AMD’s unfortunate renaming of its product stack with the Radeon HD 6800 series, the Radeon HD 6900 series is thus far a three-part, two-chip lineup. Today we are looking at the Cayman-based 6970 and 6950, which make up the top of AMD’s single-GPU product line. Above them is Antilles, the codename for AMD’s dual-Cayman Radeon HD 6990. Originally scheduled to launch late this year, it has been pushed back by Cayman’s roughly month-long delay; we’ll now be seeing the third member of the 6900 series next year. So today the story is all about Cayman and the single-GPU cards it powers.

At the top we have the Radeon HD 6970, AMD’s top single-GPU part. Featuring a fully enabled Cayman GPU, it has 1536 stream processors, 96 texture units, and 32 ROPs. The core is clocked at 880MHz, while its 2GB of GDDR5 runs at 1375MHz (a 5.5GHz effective data rate, as GDDR5 transfers four bits per pin per memory clock). TDP (or the closest thing to it) is 250W, while typical idle power draw is down from the 5800 series to 20W, reflecting the maturity of TSMC’s 40nm process and AMD’s familiarity with it.
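Those memory figures translate directly into peak bandwidth. As a quick back-of-the-envelope sketch in Python, using only the numbers in the table above:

    # Peak memory bandwidth for the 6970, derived from the specs above.
    bus_width_bits = 256        # memory bus width
    effective_gbps = 5.5        # GDDR5 effective data rate per pin (4 x 1.375GHz)
    bandwidth_gb_s = bus_width_bits * effective_gbps / 8   # bits -> bytes
    print(f"{bandwidth_gb_s:.0f} GB/s")                    # prints: 176 GB/s

That works out to 176GB/s, versus the 153.6GB/s the 5870 gets from the same 256-bit bus at 4.8Gbps effective; the entire bandwidth gain comes from faster memory rather than a wider bus.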

Below that we have the Radeon HD 6950, the traditional second-tier card using a slightly cut-down GPU. The 6950 has 1408 stream processors and 88 texture units, but keeps all 32 ROPs and the same 2GB of GDDR5. The core clock is similarly reduced to 800MHz, while the memory clock is 1250MHz (5GHz effective). TDP is 200W, while idle power is the same 20W as the 6970.

From the specifications alone it’s quickly apparent that something new is happening with Cayman: at 1536 SPs it has fewer stream processors than the 1600 SP Cypress/5870 it replaces. We have a great deal to talk about here, but we’ll stick to a high-level overview for our introduction. In the biggest change to AMD’s core GPU architecture since the launch of the Radeon HD 2900, their first DX10/unified-shader part, in 2007, AMD is moving away from the Very Long Instruction Word 5 (VLIW5) architecture we have come to know them for, in favor of a slightly narrower VLIW4 architecture. In a nutshell, AMD’s SIMDs are narrower but there are more of them, as AMD looks to find a new balance in their core architecture. Although it’s not an entirely new core architecture, the change from VLIW5 to VLIW4 brings a number of ramifications that we will be looking at, as the toy example below illustrates, and this is just one of the many facets of AMD’s new design.
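To make the width change concrete, here is a toy sketch in Python. This is emphatically not AMD’s shader compiler, just an illustration of slot utilization under a simplified dependency model: if a shader typically exposes only about three independent operations at a time (AMD’s cited average for VLIW5 is in the neighborhood of 3.4 slots), a narrower bundle wastes fewer ALUs.

    def pack(groups, width):
        """Pack each group of mutually independent scalar ops into
        fixed-width VLIW bundles, padding empty slots with NOPs
        (i.e. idle ALUs)."""
        bundles = []
        for group in groups:
            for i in range(0, len(group), width):
                bundle = list(group[i:i + width])
                bundle += ["nop"] * (width - len(bundle))
                bundles.append(bundle)
        return bundles

    def utilization(bundles):
        total = sum(len(b) for b in bundles)
        busy = sum(op != "nop" for b in bundles for op in b)
        return busy / total

    # A hypothetical instruction stream: each group holds 3 independent
    # ops; dependent ops can never share a bundle.
    groups = [["mul", "add", "mad"]] * 8

    for width, name in ((5, "VLIW5"), (4, "VLIW4")):
        b = pack(groups, width)
        print(f"{name}: {len(b)} bundles, {utilization(b):.0%} of slots busy")
    # VLIW5: 8 bundles, 60% of slots busy
    # VLIW4: 8 bundles, 75% of slots busy

The workload takes the same number of bundles either way, but VLIW4 leaves one idle slot per bundle instead of two, and the die area saved by dropping the fifth slot can go toward more SIMDs.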

Getting right to the matter of performance, the 6970 performs very close to the GTX 570/480 on average, while the 6950 is in a class of its own, occupying the small gap between the 5870/GTX 470 and the 6970/GTX 570. With that level of performance, the pricing for today’s launch is rather straightforward: the 6970 will be launching slightly above the GTX 570 at $369, while the 6950 will be launching at the $299 sweet spot. Further down the line AMD’s partners will be launching 1GB versions of these cards, bringing prices down in exchange for potential memory bottlenecks.

Today’s launch is going to be a hard launch, with both the 6970 and the 6950 available. AMD is being slightly more cryptic than usual about just what the launch quantities are; our official guidance is “available in quantity” and “tens of thousands” of cards. We aren’t expecting anything nearly as constrained as the 5800 series launch, but at the same time AMD is not filling us with confidence that these cards will be as widely available as the 6800 series. If at the end of this article you decide you want a 6900 card, your best bet is to grab one sooner rather than later.


AMD's Current Product Stack

With the launch of the 6900 series, the 5800 series is facing imminent retirement. There are still a number of cards on the market and they’re priced to move, but AMD is looking to clear out its Cypress inventory over the next couple of months, so officially the 5800 series is no longer part of AMD’s current product stack. Meanwhile AMD’s dual-GPU 5970 remains an outlier, as its job is not quite done until the 6990 arrives; until then it’s still officially AMD’s highest-end card and their closest competitor to the GTX 580.

Meanwhile NVIDIA’s product stack and pricing stand as-is.

Winter 2010 Video Card MSRPs

NVIDIA          | Price     | AMD
GeForce GTX 580 | $500      |
                | $470      | Radeon HD 5970
GeForce GTX 480 | $410      |
                | $369      | Radeon HD 6970
GeForce GTX 570 | $350      |
                | $299      | Radeon HD 6950
                | $250      | Radeon HD 5870
                | $240      | Radeon HD 6870
                | $180-$190 | Radeon HD 6850
Comments

  • AnnihilatorX - Thursday, December 16, 2010 - link

    I disagree with you rarson.

    This is what sets Anandtech apart: it has quality over quantity.
    Anandtech is the ONLY review site which offers me comprehensive information on the architecture, with helpful notes on expected future gaming performance. It mentions that AMD intended the 69xx to run on 32nm, and made sacrifices. If you go to Guru3D's review, the editor states in the conclusion that he doesn't know why the performance lacks the wow factor. Anandtech answered that question with the process node.

    If you only want to read reviews, go on Google and search for 6850 review, or go to DailyTech's recent hardware review posts; you can find over 15 plain reviews. Even easier, just use the Quick Navigation menu or the Table of Contents on the freaking first page of the article. This laziness does not entice sympathy.
  • Quidam67 - Thursday, December 16, 2010 - link

    Rarson's comments may have been a little condescending in their tone, but I think the criticism was actually constructive in nature.

    You can argue the toss about whether the architecture should be in a separate article or not, but personally speaking, I actually would prefer it was broken out. I mean, for those who are interested, simply provide a hyperlink; that way everyone gets what they want.

    In my view, a review is a review, and an analysis of the architecture can complement that review but should not actually be part of the review itself. A number of other sites follow this formula and provide both, but don't merge them together into one super-article, and there are other benefits to this if you read on.

    The issue of spelling and grammar is trivial, but it could in fact be symptomatic of a more serious problem, such as the sheer volume of work Ryan has to perform in the time-frame provided, and the amount of QA being squeezed in with it. Given the nature of NDAs, perhaps it might take the pressure off if the review did come first and the architecture piece second, so the time pressures weren't quite so restrictive.

    Lastly, employing a professional proof-reader is hardly an insult to the original author. It's no different than being a software engineer (which I am) and being backed up by a team of quality test analysts. It certainly makes you sleep better when stuff goes into production. Why should Ryan shoulder all the responsibility?
  • silverblue - Thursday, December 16, 2010 - link

    I do hope you're joking. :) (can't tell at this early time)
  • Arnulf - Thursday, December 16, 2010 - link

    "... unlike Turbo which is a positive feedback mechanism."

    Turbo is a negative feedback mechanism. If it were a positive feedback mechanism (= a consequence of an action resulting in further action in the same direction), the CPU would probably burn up almost instantly after Turbo triggered, as its clock would increase indefinitely, climbing further after each increase: the higher the temperature, the higher the frequency. This is not how Turbo works.

    A negative feedback mechanism is the result of an action producing a reaction (= an action in the opposite direction). In the case of CPUs and Turbo, it's this reaction to temperature that keeps the CPU frequency under control: the higher the temperature, the lower the frequency. This is how Turbo and PowerTune work.

    The fact that Turbo starts at lower frequency and ramps it up and that PowerTune starts at higher frequency and brings it down has no bearing on whether the mechanism of control is called "positive" or "negative" feedback.

    Considering your fondness for Wikipedia (as displayed by the reference in the article) you might want to check out these:

    http://en.wikipedia.org/wiki/Negative_feedback
    http://en.wikipedia.org/wiki/Positive_feedback

    and more specifically:

    http://en.wikipedia.org/wiki/Negative_feedback#Con...
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi Arnulf;

    Fundamentally you're right, so I won't knock you. I guess you could say I'm going for a very loose interpretation there. The point I'm trying to get across is that Turbo provides a performance floor, while PowerTune is a performance ceiling. People like getting extra performance for "free" more than they like "losing" performance. Hence one experience is positive and one is negative.

    I think in retrospect I should have used positive/negative reinforcement instead of feedback.
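A minimal sketch of the loop both posts are describing, in Python with purely hypothetical numbers (this is the shape of the control loop, not Intel's or AMD's actual implementation). Both mechanisms correct against the measured excess, so both are negative feedback; the only real difference is which end of the clock range is the advertised one.

    POWER_CAP = 250        # W, the budget the controller defends (hypothetical)
    STEP = 5               # MHz adjustment per control interval

    def power_at(clock_mhz):
        # Crude stand-in for a power model: draw rises with clock speed.
        return clock_mhz * 0.3

    def feedback_step(clock_mhz, floor, ceiling):
        """One control interval. The correction opposes the excess
        (over budget -> clock down, under budget -> clock up), which
        is exactly what makes this negative rather than positive
        feedback."""
        if power_at(clock_mhz) > POWER_CAP:
            clock_mhz -= STEP
        else:
            clock_mhz += STEP
        return max(floor, min(clock_mhz, ceiling))

    # "Turbo" starts from the advertised floor (800MHz) and boosts upward;
    # "PowerTune" starts from the advertised ceiling (880MHz) and throttles.
    turbo, powertune = 800, 880
    for _ in range(50):
        turbo = feedback_step(turbo, 800, 880)
        powertune = feedback_step(powertune, 800, 880)
    print(turbo, powertune)   # both hover around the same ~833MHz equilibrium

Run long enough, both loops settle around the same equilibrium; whether the user perceives a boosted floor or an enforced ceiling comes down to which number is printed on the box, which is the perception point above.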
  • Soda - Thursday, December 16, 2010 - link

    Has anyone noticed the edge missing from the board's 8-pin power connector?

    Apparently AMD made a mistake in the reference design of the board and didn't account for the space needed by the cooler.

    If you look closely at the power connector in http://images.anandtech.com/doci/4061/6970Open.jpg you'll notice the missing edge.

    For the full story on the matter you can go to http://www.hardwareonline.dk/nyheder.aspx?nid=1060...
    For English speakers, I suggest the Google-translated version here: http://translate.google.com/translate?hl=da&sl...

    There are some pictures there to back up the claim that AMD made this mistake.

    Though AMD hasn't confirmed whether this is a mistake only on the review boards or on all cards of the 69xx series.
  • versesuvius - Thursday, December 16, 2010 - link

    I have a 3870, on a 17 inch monitor, and everything is fine as far as games go. The hard disk gets in the way sometimes, but that is just about it. All the games run fine. No problem at all. Oh, there's more: they run better on the lousy XBOX. Why the new GPU then? Giant monitors? Three of them? Six of them? (The most fun I had on Anandtech was looking at pictures of AT people trying to stabilize them on a wall.) Oh, the "Compute GPU"? Wouldn't that fit on a small PCI card, and act like the old 486 coprocessor, for those who have some use for it? Or is it just a silly excuse for not doing much at all, or rather not giving much to the customers, and still charging the same?

    The "High End"! In an ideal world the prices of things go down, and more and more people can afford them. That lovely capitalist idea was turned on its head sometime in the eighties of the last century, and instead the notion of value was reinvented. You get more value, for the same price. You still have to pay $400 for your graphics card, even though you do not need the "Compute GPU", and you do not need the superduper antialiasing that nobody yet knows how to achieve in software.

    Can we have a cheap 4870? No, that is discontinued. The 58 series? Discontinued. There are hundreds of thousands, or to be sure, millions of people who will pay 50 dollars for one. All ATI or Nvidia need to do is fine-tune the drivers and reduce power consumption. Then again, that must be another "High End" story. In fact the only tale that is being told and retold is "High End"s and "Fool"s (i.e. "We can do whatever we want with the money that you don't have"). Until better, saner times. For now, long live the console. I am going to buy one, instead of this stupid monstrosity and its equally stupid competitive monstrosity. Cheaper, and it gets the job done in more than one way.

    End of Rant.
    God Bless.
  • Necc - Thursday, December 16, 2010 - link

    So True.
  • Ananke - Thursday, December 16, 2010 - link

    Agreed. I have a 5850 and it does work fine, and I got it on day one at a huge discount, but still, it is kind of worthless. Our entertainment increasingly comes from consoles, and a discrete high-end card that commands a price tag above $100 has little use for me. It is a nice touch, but I have no application for it in everyday life, and several months later it is already outdated or discontinued.

    My guess is that graphics integrated into the CPU will take over, and mass-market discrete cards will share the fate of the dinosaurs very soon.
  • Quidam67 - Thursday, December 16, 2010 - link

    Wonderfully subversive commentary. Loved it.

    Still, the thing I like about the high end (I'll never buy it until my mortgage is done with) is that it filters down to the middle/low end.

    Yes, there are lots of discontinued product lines, but for example, I thought the HD5770 was a fantastic product. It gave ample performance for mainstream gamers in a small form factor (you can even get it in single slot) with low heat and power requirements, meaning it was a true drop-in upgrade to your existing rig, with a practical upgrade path to CrossFireX.

    As for the Xbox, that hardware is so outdated now that even the magic of software optimisation (a seemingly lost art in the world of PCs) cannot disguise the fact that new games are not going to look any better, or run any faster, than those that came out at launch. I was watching a GT5 demo the other day, and with all the hype about how realistic it looks (and plays), I really couldn't get past the massive amount of jaggies on screen. Also, there's very limited damage modelling, and in my view that's a nod towards hardware limitations rather than a game-design consideration.
