Final Words

AMD’s GTX 560 Ti counter-offensive leaves us with a few different thoughts, none of which have much to do with the GTX 560 Ti.

First and foremost we have the newly launched Radeon HD 6950 1GB. Having 2GB of VRAM does have its advantages, but at this point in time there aren’t any games that can exploit this advantage at the common resolutions of 1920x1200, 1920x1080, or 1680x1050. It’s only once we get to 2560x1600 or similarly large Eyefinity resolutions that we see the 1GB 6950 fall behind its 2GB counterpart.

In the long run (e.g. a year or longer) I believe having that extra 1GB of VRAM is going to make a difference at resolutions like 1920x1200, but such prognostication is effectively an argument about a product's future-proofness, which is a difficult argument to make even in the best of times. Perhaps the best argument is one of price: the 6950 1GB starts at $259, while the 6950 2GB can be found for as little as $269, putting a $10 premium on the extra 1GB. For $10 I would suggest taking the plunge; however, if your budget is absolutely critical, it's clear that in most games today you will never notice the difference between a 1GB 6950 and a 2GB 6950.

Our second card presents a more interesting scenario. The factory overclock on the XFX Radeon 6870 Black Edition is not very high, but then neither is the effective price of the overclock. Instead this is a story about a custom cooler, and whether it's worth the roughly $10 premium over the average price of a reference Radeon HD 6870. While I would not call the reference 6870 loud, I also would not call it quiet by any stretch of the word; if anything I would call it cheaply built. If you don't care about noise then the Black Edition brings little to the table, but in a suitable case those of you with sensitive ears will be in for quite a pleasant surprise. Thus while the XFX 6870 comes up short as the true GTX 560 Ti competitor AMD seems to be hoping for, it clearly has other redeeming values.

With AMD’s latest cards squared away, our final thought is on today’s launch in general. If nothing else, hopefully today’s write-up has entertained you, and with any luck we’ve imparted upon you a bit of practical wisdom about how the GPU industry operates. As far as we can gather AMD went through quite a bit of effort to launch a viable GTX 560 Ti competitor today – a feat they appear to have succeeded at. The GPU industry is competitive from top to bottom, but there’s something special about the $200-$300 price range that brings out the insanity on all sides. And we wouldn’t have it any other way.

Comments

  • 7Enigma - Tuesday, January 25, 2011

    Here's the point. There is no measurable difference with it on or off from a framerate perspective, so in this case it doesn't matter. That should tell you that the only possible difference in this instance would be a possible WORSENING of picture quality, since the GPU wars are #1 about framerate and #2 about everything else. I'm sure a later article will delve into what this setting is actually for, but right now it clearly shows no benefit with the test suite that was chosen.

    I agree with you though that I would have liked a slightly more detailed description of what it is supposed to do...

    For instance, are there any power consumption (and thus noise) differences with it on vs. off?
  • Ryan Smith - Tuesday, January 25, 2011

    For the time being it's necessary that we use Use Application Setting so that newer results are consistent with our existing body of work. As this feature did not exist prior to the 11.1a drivers, using it would impact our results by changing the test parameters - previously it wasn't possible to cap tessellation factors like this, so we didn't run our tests with such a limitation.

    As we rebuild our benchmark suite every 6 months, everything is up for reevaluation at that time. We may or may not continue to disable this feature, but for the time being it's necessary for consistent testing.
  • Dark Heretic - Wednesday, January 26, 2011

    Thanks for the reply Ryan, that's a very valid point on keeping the testing parameters consistent with current benchmark results.

    Would it be possible to actually leave the drivers at default settings for both Nvidia and AMD in the next benchmark suite? I know there will be some inconsistent variations between the two sets of drivers, but it would allow for a more accurate picture at both the hardware and driver level (as intended by Nvidia / AMD when setting defaults).

    I use both Nvidia and AMD cards, and do find differences in picture quality / performance from both sides of the fence. However, I also tend to leave drivers at default settings to allow both Nvidia and AMD the benefit of knowing what works best with their hardware on a driver level; I think it would allow for a more "real world" set of benchmark results.

    @B3an, perhaps you should have used the phrase "lacking in cognitive function", it's much more polite. You'll have to forgive the oversight of not thinking about the current set of benchmarks overall as Ryan has politely pointed out.
  • B3an - Wednesday, January 26, 2011

    Your post is simply retarded, for lack of a better word.

    Ryan is completely right in disabling this feature, even though it has no effect on the results (yet) in the current drivers. And it should always be disabled in the future.

    The WHOLE point of articles like this is to get the results as fair as possible. If you're testing a game and it looks different and uses different settings on one card than on another, how is that remotely fair? What is wrong with you?? Bizarre logic.
    It would be the exact same thing as if AMD were to disable AA by default in all games even if the game settings were set to use AA, and then having the nVidia card use AA in the game tests while the AMD card did not. The results would be absolutely useless; no one would know which card is actually faster.
  • prdola0 - Thursday, January 27, 2011

    Exactly. We should compare apples-to-apples. And let's not forget about the FP16 Demotion "optimization" in the AMD drivers that reduces the render target format from R16G16B16A16 to R11G11B10, effectively halving the per-pixel bandwidth from 64 bits to 32 bits at the expense of quality. All this when Catalyst AI is turned on. AMD claims it doesn't have any effect on quality, but multiple sources have already confirmed that it is easily visible without much effort in some titles, while in others it isn't. However, it affects performance by up to 17%. Just google "fp16 demotion" and you will see plenty of articles about it. (A rough sketch of what that format swap looks like at the API level follows the comments below.)
  • burner1980 - Tuesday, January 25, 2011

    Thanks for not listening to your readers.

    Why do you have to include an apples-to-oranges comparison again?

    Is it so hard to test Non-OC vs. Non-OC and OC vs. OC?

    The article itself is fine, but please stop this practice.

    Proposal for another review: Compare ALL current factory stock graphic card models with their highest "reasonable" overclock against each other. What value does the customer get when taking OC into (buying) consideration?
  • james.jwb - Tuesday, January 25, 2011

    Quite a good idea if done correctly. Something like 460s and above would be nice to see.
  • AnnonymousCoward - Thursday, January 27, 2011

    Apparently the model number is very important to you. What if every card above 1MHz was called OC? Then you wouldn't want to consider them. But the 6970@880MHz and 6950@800MHz are fine! Maybe you should focus on price, performance, and power, instead of the model name or color of the plastic.

    I'm going to start my own comments complaint campaign: Don't review cards that contain any blue in the plastic! Apples to apples, people.
  • AmdInside - Tuesday, January 25, 2011

    Can someone tell me where to find a 6950 for ~$279? Sorry, but after-rebate prices do not count.
  • Spoelie - Tuesday, January 25, 2011

    If you look at the numbers, the 6870BE is more of a competitor than the article text would make you believe - in the games where the nvidia cards do not completely trounce the competition.

    Look at the 1920x1200 charts of the following games and tell me the 6870BE is outclassed:
    *crysis warhead
    *metro
    *battlefield (except waterfall? what is the point of that benchmark btw)
    *stalker
    *mass effect2
    *wolfenstein

    If you now look at the remaining games where the NVIDIA card owns:
    *hawx (rather inconsequential at these framerates)
    *civ5
    *battleforge
    *dirt2
    You'll notice in those games that the 6950 is just as outclassed. So you're better off with an NVIDIA card either way.

    It all depends on the games that you pick, but a blanket statement that 6870BE does not compete is not correct either.
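
For readers wondering what the FP16 demotion mentioned in the comments actually amounts to, here is a minimal sketch assuming a Direct3D 11 title: the application requests a full-precision FP16 HDR render target, and the driver optimization in question silently substitutes the narrower R11G11B10 format. The helper function and its parameters below are hypothetical illustrations of the two formats involved, not AMD's driver code.

```cpp
// Minimal sketch (assumptions: Direct3D 11, an already-created ID3D11Device).
// The helper name and the 'demoted' flag are hypothetical; they only show the
// two formats involved in FP16 demotion, not AMD's actual driver logic.
#include <d3d11.h>

HRESULT CreateHdrRenderTarget(ID3D11Device* device, UINT width, UINT height,
                              bool demoted, ID3D11Texture2D** outTex)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width      = width;
    desc.Height     = height;
    desc.MipLevels  = 1;
    desc.ArraySize  = 1;
    // What the game asks for: 16-bit float per channel, 64 bits per pixel.
    // What the "optimization" substitutes: packed 11/11/10 floats, 32 bits per
    // pixel and no alpha channel - hence the bandwidth saving and the potential
    // loss of precision.
    desc.Format     = demoted ? DXGI_FORMAT_R11G11B10_FLOAT
                              : DXGI_FORMAT_R16G16B16A16_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.Usage      = D3D11_USAGE_DEFAULT;
    desc.BindFlags  = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    return device->CreateTexture2D(&desc, nullptr, outTex);
}
```

Because R11G11B10_FLOAT carries fewer mantissa bits per channel and drops alpha entirely, the swap halves render-target bandwidth, which is consistent with the performance gains the comment describes, but it can also surface as banding or missing alpha-dependent effects in titles that rely on the full FP16 target.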
