After nearly a year and a half as AnandTech’s senior GPU editor, and more late-night GPU launches than I care to count, there’s a very specific pattern I’ve picked up on: the GPU market may be competitive everywhere, but it’s the $200-$300 range that really brings out the insanity. I’m not sure if it’s the volume, the profit margins, or just the desire to be seen as affordable, but AMD and NVIDIA seem to pull out all the stops to one-up each other whenever either side plans on launching a new video card in this price range.

Today was originally supposed to be about the newly released GeForce GTX 560 Ti – NVIDIA’s new GF114-based $250 video card. Much as was the case with the launch of AMD’s Radeon HD 6800 series, however, AMD is itching to spoil NVIDIA’s launch with a push of their own. Furthermore they intend to do so on two fronts: directly above the GTX 560 Ti at $259 is the Radeon HD 6950 1GB, and somewhere below it one of many factory overclocked Radeon HD 6870 cards, in our case an XFX Radeon HD 6870 Black Edition. The Radeon HD 6950 1GB is effectively the GTX 560 Ti’s direct competition, while the overclocked 6870 serves as the price spoiler.

It wasn’t always meant to be this way, and indeed 5 days ago things were quite different. But before we get too far, let’s quickly discuss today’s cards.

|  | AMD Radeon HD 6970 | AMD Radeon HD 6950 2GB | AMD Radeon HD 6950 1GB | XFX Radeon HD 6870 Black | AMD Radeon HD 6870 |
|---|---|---|---|---|---|
| Stream Processors | 1536 | 1408 | 1408 | 1120 | 1120 |
| Texture Units | 96 | 88 | 88 | 56 | 56 |
| ROPs | 32 | 32 | 32 | 32 | 32 |
| Core Clock | 880MHz | 800MHz | 800MHz | 940MHz | 900MHz |
| Memory Clock | 1.375GHz (5.5GHz effective) GDDR5 | 1.25GHz (5.0GHz effective) GDDR5 | 1.25GHz (5.0GHz effective) GDDR5 | 1.15GHz (4.6GHz effective) GDDR5 | 1.05GHz (4.2GHz effective) GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit | 256-bit |
| Frame Buffer | 2GB | 2GB | 1GB | 1GB | 1GB |
| FP64 | 1/4 | 1/4 | 1/4 | N/A | N/A |
| Transistor Count | 2.64B | 2.64B | 2.64B | 1.7B | 1.7B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Price Point | $369 | ~$279 | $259 | $229 | ~$219 |

Back when the Radeon HD 6950 launched, AMD told us to expect 1GB cards sometime in the near future as a value option. Because the 6900 series uses fairly new 2Gb GDDR5, such chips are still in short supply and cost more than the very common and very available 1Gb variety. It’s not a massive difference once you tally up the bill of materials on a video card, but for the card manufacturers, saving $10 on RAM means $10 they can mark a card down to snag that many more sales. Furthermore we’re not quite to the point where 2GB is essential in the sub-$300 market, where 2560x1600 monitors are rare, so the performance penalty isn’t a major concern. As a result it was only a matter of time until 1GB 6900 series cards hit the market to fill in the gap until 2Gb GDDR5 comes down in price.

The day has finally come for the Radeon HD 6950 1GB. Truth be told it’s actually a bit anticlimactic: the reference 6950 1GB is virtually identical to the reference 6950 2GB. It’s the same PCB attached to the same vapor chamber cooler, with the same power and heat characteristics. There is one and only one difference: the 1GB card uses eight 1Gb GDDR5 chips, while the 2GB card uses eight 2Gb GDDR5 chips. Everything else is equal, and indeed when the 6950 is not RAM limited, even the performance is equal.
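
Since both versions of the card populate the same 256-bit bus with eight 32-bit GDDR5 chips, the capacity and bandwidth figures follow from a bit of arithmetic. Here's a minimal sketch using the chip densities and clocks from the spec table above (the quad-pumped data rate is a standard property of GDDR5):

```python
# Memory configuration math for the reference 6950 cards: a 256-bit bus
# populated by eight 32-bit GDDR5 chips.
CHIPS = 256 // 32  # 8 chips

def capacity_gb(chip_density_gbit):
    """Total frame buffer in GB, given per-chip density in gigabits."""
    return CHIPS * chip_density_gbit / 8  # 8 bits per byte

def bandwidth_gbs(memory_clock_ghz, bus_width_bits=256):
    """Peak memory bandwidth in GB/s; GDDR5 transfers 4 bits per pin per clock."""
    effective_rate_gbps = memory_clock_ghz * 4  # 1.25GHz -> 5.0Gbps per pin
    return effective_rate_gbps * bus_width_bits / 8

print(capacity_gb(1))       # 1.0 GB  -> 6950 1GB (eight 1Gb chips)
print(capacity_gb(2))       # 2.0 GB  -> 6950 2GB (eight 2Gb chips)
print(bandwidth_gbs(1.25))  # 160.0 GB/s, identical for both cards
```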

The second card we’re taking a quick look at is the XFX Radeon HD 6870 Black Edition, the obligatory factory overclocked Radeon HD 6870. Utilizing XFX’s custom open-air HSF, it’s clocked at 940MHz core and 1150MHz (4.6Gbps data rate) memory, representing a 40MHz (4%) core overclock and a 100MHz (9%) memory overclock. Truth be told it’s not much of an overclock, and if it weren’t for the cooler it wouldn’t be a very remarkable card as far as factory overclocking goes; for that reason it’s almost a footnote today. But it wasn’t meant to be, and that’s where our story begins.
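
For reference, the quoted percentages are simply the clock deltas relative to the stock 6870 (900MHz core, 1.05GHz memory), rounded down; a quick check:

```python
# Factory overclock of the XFX 6870 Black Edition versus a stock 6870.
def oc_percent(stock_mhz, oc_mhz):
    """Overclock expressed as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(f"core:   {oc_percent(900, 940):.1f}%")    # 4.4%, quoted as 4%
print(f"memory: {oc_percent(1050, 1150):.1f}%")  # 9.5%, quoted as 9%
```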

When One Counter Isn’t Enough
Comments (111)

  • 7Enigma - Tuesday, January 25, 2011 - link

    Here's the point: there is no measurable difference with it on or off from a framerate perspective, so in this case it doesn't matter. That should tell you that the only possible difference in this instance would be a possible WORSENING of picture quality, since the GPU wars are #1 about framerate and #2 about everything else. I'm sure a later article will delve into what this setting is for, but right now it clearly has no benefit in the test suite that was chosen.

    I agree with you though that I would have liked a slightly more detailed description of what it is supposed to do...

    For instance, are there any power consumption (and thus noise) differences with it on vs. off?
  • Ryan Smith - Tuesday, January 25, 2011 - link

    For the time being it's necessary that we use Use Application Setting so that newer results are consistent with our existing body of work. As this feature did not exist prior to the 11.1a drivers, using it would impact our results by changing the test parameters - previously it wasn't possible to cap tessellation factors like this, so we didn't run our tests with such a limitation.

    As we rebuild our benchmark suite every 6 months, everything is up for reevaluation at that time. We may or may not continue to disable this feature, but for the time being it's necessary for consistent testing.
  • Dark Heretic - Wednesday, January 26, 2011 - link

    Thanks for the reply Ryan, that's a very valid point on keeping the testing parameters consistent with current benchmark results.

    Would it be possible to actually leave the drivers at default settings for both Nvidia and AMD in the next benchmark suite? I know there will be some inconsistent variations between the two sets of drivers, but it would allow for a more accurate picture at both the hardware and driver level (as intended by Nvidia / AMD when setting defaults).

    I use both Nvidia and AMD cards, and do find differences in picture quality / performance between both sides of the fence. However I also tend to leave drivers at default settings to give both Nvidia and AMD the benefit of knowing what works best with their hardware at the driver level; I think it would allow for a more "real world" set of benchmark results.

    @B3an, perhaps you should have used the phrase "lacking in cognitive function", it's much more polite. You'll have to forgive the oversight of not thinking about the current set of benchmarks overall as Ryan has politely pointed out.
  • B3an - Wednesday, January 26, 2011 - link

    Your post is simply retarded for lack of a better word.

    Ryan is completely right in disabling this feature, even though it has no effect on the results (yet) in the current drivers. And it should always be disabled in the future.

    The WHOLE point of articles like this is to get the results as fair as possible. If you're testing a game and it looks different and uses different settings on one card versus another, how is that remotely fair? What is wrong with you?? Bizarre logic.
    It would be exactly the same as if AMD were to disable AA by default in all games even when the game settings were set to use AA, and the nVidia card then used AA in the game tests while the AMD card did not. The results would be absolutely useless; no one would know which card is actually faster.
  • prdola0 - Thursday, January 27, 2011 - link

    Exactly. We should compare apples to apples. And let's not forget about the FP16 Demotion "optimization" in the AMD drivers that reduces the render target format from R16G16B16A16 to R11G11B10, effectively cutting the per-pixel footprint from 64 bits to 32 bits at the expense of quality. All this when Catalyst AI is turned on. AMD claims it doesn't have any effect on quality, but multiple sources have already confirmed that it is easily visible without much effort in some titles, while in others it isn't. It can affect performance by up to 17%. Just google "fp16 demotion" and you will see plenty of articles about it. (A back-of-the-envelope sketch of the bandwidth math follows the comments.)
  • burner1980 - Tuesday, January 25, 2011 - link

    Thanks for not listening to your readers.

    Why do you have to include an apples-to-oranges comparison again?

    Is it so hard to test non-OC vs. non-OC and OC vs. OC?

    The article itself is fine, but please stop this practice.

    Proposal for another review: compare ALL current factory stock graphics card models with their highest "reasonable" overclock against each other. What value does the customer get when taking OC into (buying) consideration?
  • james.jwb - Tuesday, January 25, 2011 - link

    Quite a good idea if done correctly. Something like 460s and above would be nice to see.
  • AnnonymousCoward - Thursday, January 27, 2011 - link

    Apparently the model number is very important to you. What if every card above 1MHz was called OC? Then you wouldn't want to consider them. But the 6970@880MHz and 6950@800MHz are fine! Maybe you should focus on price, performance, and power, instead of the model name or color of the plastic.

    I'm going to start my own comments complaint campaign: Don't review cards that contain any blue in the plastic! Apples to apples, people.
  • AmdInside - Tuesday, January 25, 2011 - link

    Can someone tell me where to find a 6950 for ~$279? Sorry, but after-rebate prices do not count.
  • Spoelie - Tuesday, January 25, 2011 - link

    If you look at the numbers, the 6870BE is more of a competitor than the article text would make you believe - at least in the games where the NVIDIA cards don't completely trounce the competition.

    Look at the 1920x1200 charts of the following games and tell me the 6870BE is outclassed:
    *crysis warhead
    *metro
    *battlefield (except waterfall? what is the point of that benchmark btw)
    *stalker
    *mass effect2
    *wolfenstein

    If you now look at the remaining games where the NVIDIA card owns:
    *hawx (rather inconsequential at these framerates)
    *civ5
    *battleforge
    *dirt2
    You'll notice that in those games the 6950 is just as outclassed, so you're better off with an NVIDIA card either way.

    It all depends on the games that you pick, but a blanket statement that the 6870BE does not compete is not correct either.
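
On prdola0's FP16 demotion point above, the bandwidth side of the claim is easy to sanity-check. A back-of-the-envelope sketch (the two render target formats come from the comment; the 1920x1200 resolution is only an illustrative assumption):

```python
# Per-pixel cost of the two render target formats discussed above.
# R16G16B16A16 stores four 16-bit channels; R11G11B10 packs three
# reduced-precision channels into one 32-bit word.
BITS_R16G16B16A16 = 16 * 4     # 64 bits per pixel
BITS_R11G11B10 = 11 + 11 + 10  # 32 bits per pixel

def target_mb(bits_per_pixel, width=1920, height=1200):
    """Size of one full-screen render target in megabytes."""
    return width * height * bits_per_pixel / 8 / 1e6

print(target_mb(BITS_R16G16B16A16))  # ~18.4 MB per full-screen pass
print(target_mb(BITS_R11G11B10))     # ~9.2 MB -> half the per-pass traffic
```

Halving the render target footprint halves only that slice of a frame's memory traffic, not the whole frame, which squares with the up-to-17% performance swing cited rather than anything close to 2x.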
