When One Counter Isn’t Enough

Early in the week of January 17th, AMD sent out the customary email letting the press know of some recent changes to AMD’s product lineup. AMD’s partners were launching their factory overclocked cards, and AMD, like a proud papa, had to let the world know, happily mailing out cigars (sample cards) in the process. Meanwhile, on the horizon, AMD would be working with their partners to launch the Radeon HD 6950 1GB in mid-February for around $269-279. The final piece of news was that AMD was posting their Catalyst 11.1a Hotfix drivers for the press to preview ahead of a January 26th launch.

The fact of the matter is that these kinds of announcements are routine, and also very transparent. Given the timing of the arrival of AMD’s sample hardware and the launch date of the new Catalyst driver, it was clear this was meant to garner attention at the same time as NVIDIA’s launch of the GTX 560 Ti. This isn’t meant to be damning for any party – this is just the way the GPU industry operates. NVIDIA did something very similar for the Radeon HD 6800 series launch, shipping the EVGA GeForce GTX 460 1GB FTW to us unannounced while we were returning from AMD’s press conference.

If that were how things had actually played out, however, we wouldn’t be telling this story. For competitive reasons AMD and NVIDIA like to withhold performance and pricing information from everyone for as long as possible so that the other party doesn’t get it. Meanwhile the other party is doing everything they can to get that information as soon as possible, so that they have as much lead time as possible to prepare counters of their own.

AMD’s First GTX 560 Ti Competitor: The XFX Radeon HD 6870 Black Edition

On the morning of Thursday the 20th I was awoken by FedEx, who was delivering a priority overnight package from AMD. At the same time I received an email from AMD announcing that the 6950 1GB was sampling to the press immediately, and that we were under NDA until January 25th.

Something had changed at AMD.

I don’t believe we’ll ever know the full details about what AMD was doing that week – some stories are simply never meant to be told – but it quickly became clear that AMD had to make a very sudden change of plans. On Monday the message from AMD was that the 6870OC was their immediate GTX 560 Ti competitor, and here 3 days later the message had suddenly changed to the 6950 1GB being their GTX 560 Ti competitor.

There are a million different reasons why this could be, but I believe it’s because in that intervening period AMD got access to reliable GTX 560 Ti performance data, if not the price too. With that data in hand they would have quickly seen that the GTX 560 Ti was 10-15% faster than the 6870OC, reducing the 6870OC from a competitor to a price spoiler at best. The 6870OC could not and would not work as AMD’s GTX 560 Ti challenger.

The final piece of the puzzle only came together yesterday afternoon, when AMD announced that the 6950 1GB’s retail launch was getting pushed up from mid-February to January 24th, or in other words yesterday. The 6950 1GB was to be available immediately for $259 – over half a month sooner than expected, and for roughly $20 less than AMD first said it would be.

Based on the performance of the GTX 560 Ti, the 6870OC, and the 6950 1GB, the only reasonable explanation we have at this time is that early last week AMD did an about-face and put everything into launching the 6950 1GB ahead of schedule. Whatever motivated this about-face and however they managed to do it, all indications are that they got Sapphire and XFX to manufacture a steady supply of 1GB cards in time for Newegg to have them up for sale Monday afternoon.

Comments

  • 7Enigma - Tuesday, January 25, 2011 - link

    Here's the point. There is no measurable difference with it on or off from a framerate perspective. So in this case it doesn't matter. That should tell you that the only possible difference in this instance would be a possible WORSENING of picture quality, since the GPU wars are #1 about framerate and #2 about everything else. I'm sure a later article will delve into what the purpose of this setting is, but right now it clearly has no benefit in the test suite that was chosen.

    I agree with you though that I would have liked a slightly more detailed description of what it is supposed to do...

    For instance, are there any power consumption (and thus noise) differences with it on vs. off?
  • Ryan Smith - Tuesday, January 25, 2011 - link

    For the time being it's necessary that we use "Use Application Settings" so that newer results are consistent with our existing body of work. As this feature did not exist prior to the 11.1a drivers, using it would impact our results by changing the test parameters - previously it wasn't possible to cap tessellation factors like this, so we didn't run our tests with such a limitation.

    As we rebuild our benchmark suite every 6 months, everything is up for reevaluation at that time. We may or may not continue to disable this feature, but for the time being it's necessary for consistent testing.
  • Dark Heretic - Wednesday, January 26, 2011 - link

    Thanks for the reply Ryan, that's a very valid point on keeping the testing parameters consistent with current benchmark results.

    Would it be possible to actually leave the drivers at default settings for both Nvidia and AMD in the next benchmark suite? I know there will be some inconsistent variations between the two sets of drivers, but it would allow for a more accurate picture at both the hardware and driver level (as intended by Nvidia / AMD when setting defaults).

    I use both Nvidia and AMD cards, and do find differences in picture quality / performance from both sides of the fence. However, I also tend to leave drivers at default settings to allow both Nvidia and AMD the benefit of knowing what works best with their hardware at the driver level; I think it would allow for a more "real world" set of benchmark results.

    @B3an, perhaps you should have used the phrase "lacking in cognitive function"; it's much more polite. You'll have to forgive the oversight of not thinking about the current set of benchmarks overall, as Ryan has politely pointed out.
  • B3an - Wednesday, January 26, 2011 - link

    Your post is simply retarded, for lack of a better word.

    Ryan is completely right in disabling this feature, even though it has no effect on the results (yet) in the current drivers. And it should always be disabled in the future.

    The WHOLE point of articles like this is to get the results as fair as possible. If you're testing a game and it looks different and uses different settings on one card than on another, how is that remotely fair? What is wrong with you?? Bizarre logic.
    It would be the exact same thing as if AMD were to disable AA by default in all games even when the game settings were set to use AA, and then having the nVidia card use AA in the game tests while the AMD card did not. The results would be absolutely useless; no one would know which card is actually faster.
  • prdola0 - Thursday, January 27, 2011 - link

    Exactly. We should compare apples to apples. And let's not forget about the FP16 demotion "optimization" in the AMD drivers, which reduces the render target format from R16G16B16A16 to R11G11B10, effectively halving the per-pixel bandwidth from 64 bits to 32 bits at the expense of quality. All of this happens when Catalyst AI is turned on. AMD claims it doesn't have any effect on quality, but multiple sources have already confirmed that it is easily visible without much effort in some titles, while in others it isn't. It also affects performance by up to 17%. Just google "fp16 demotion" and you will see plenty of articles about it. (A rough sketch of the per-pixel arithmetic follows below.)
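
    A minimal sketch of that per-pixel arithmetic, assuming a hypothetical 1920x1200 render target (an example figure, not a measured one): R16G16B16A16 stores four 16-bit channels per pixel (64 bits), while R11G11B10 packs three channels into 32 bits, so demoting the format halves the memory traffic for that target at the cost of precision and a dedicated alpha channel.

        /* Sketch of the FP16-demotion bandwidth arithmetic described above.
         * The 1920x1200 resolution is an assumed example, not a measurement. */
        #include <stdio.h>

        int main(void) {
            const long pixels = 1920L * 1200L;            /* example render target size */
            const double mib = 1024.0 * 1024.0;
            double fp16_mib   = pixels * 8 / mib;         /* R16G16B16A16: 8 bytes/pixel */
            double packed_mib = pixels * 4 / mib;         /* R11G11B10:    4 bytes/pixel */

            printf("R16G16B16A16: %.1f MiB per full write\n", fp16_mib);
            printf("R11G11B10:    %.1f MiB per full write (%.0f%% less traffic)\n",
                   packed_mib, 100.0 * (1.0 - packed_mib / fp16_mib));
            return 0;
        }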
  • burner1980 - Tuesday, January 25, 2011 - link

    Thanks for not listening to your readers.

    Why do you have to include an apples-to-oranges comparison again?

    Is it so hard to test non-OC vs. non-OC and OC vs. OC?

    The article itself is fine, but please stop this practice.

    Proposal for another review: Compare ALL current factory stock graphic card models with their highest "reasonable" overclock against each other. What value does the customer get when taking OC into (buying) consideration?
  • james.jwb - Tuesday, January 25, 2011 - link

    Quite a good idea if done correctly. Something like 460s and above would be nice to see.
  • AnnonymousCoward - Thursday, January 27, 2011 - link

    Apparently the model number is very important to you. What if every card above 1MHz was called OC? Then you wouldn't want to consider them. But the 6970@880MHz and 6950@800MHz are fine! Maybe you should focus on price, performance, and power, instead of the model name or color of the plastic.

    I'm going to start my own comments complaint campaign: Don't review cards that contain any blue in the plastic! Apples to apples, people.
  • AmdInside - Tuesday, January 25, 2011 - link

    Can someone tell me where to find a 6950 for ~$279? Sorry, but after-rebate prices do not count.
  • Spoelie - Tuesday, January 25, 2011 - link

    If you look at the numbers, the 6870BE is more of a competitor than the article text would make you believe - in the games where the nvidia cards do not completely trounce the competition.

    Look at the 1920x1200 charts of the following games and tell me the 6870BE is outclassed:
    *crysis warhead
    *metro
    *battlefield (except waterfall? what is the point of that benchmark btw)
    *stalker
    *mass effect2
    *wolfenstein

    If you now look at the remaining games where the NVIDIA card owns:
    *hawx (rather inconsequential at these framerates)
    *civ5
    *battleforge
    *dirt2
    You'll notice in those games that the 6950 is just as outclassed, so you're better off with an nvidia card either way.

    It all depends on the games that you pick, but a blanket statement that the 6870BE does not compete is not correct either.
