Final Words

AMD’s GTX 560 Ti counter-offensive leaves us with a few different thoughts, none of which have much to do with the GTX 560 Ti.

First and foremost we have the newly launched Radeon HD 6950 1GB. Having 2GB of VRAM has its advantages, but at this point in time there aren’t any games that can exploit the extra memory at the common resolutions of 1920x1200, 1920x1080, or 1680x1050. It’s only once we get to 2560x1600 or similarly large Eyefinity resolutions that the 1GB 6950 falls behind its 2GB counterpart.
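As a rough illustration of why only the largest resolutions expose the difference, compare raw pixel counts (back-of-the-envelope only; actual VRAM usage depends on each game's buffers and textures):

```python
# Pixel counts for the resolutions discussed in the review.
resolutions = {
    "1680x1050": 1680 * 1050,
    "1920x1080": 1920 * 1080,
    "1920x1200": 1920 * 1200,
    "2560x1600": 2560 * 1600,
}
base = resolutions["1920x1200"]
for name, pixels in sorted(resolutions.items(), key=lambda kv: kv[1]):
    print(f"{name}: {pixels:>9,} pixels ({pixels / base:.0%} of 1920x1200)")
```

2560x1600 works out to roughly 78% more pixels than 1920x1200, which lines up with where the 1GB card starts to trail the 2GB card.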

In the long run (e.g. a year or longer) I believe that extra 1GB of VRAM is going to make a difference even at resolutions like 1920x1200, but such prognostication amounts to an argument about a product’s future-proofing, which is a difficult argument to make even in the best of times. Perhaps the best argument is one of price: the 6950 1GB starts at $259, while the 6950 2GB can be found for as little as $269, putting a $10 premium on the extra 1GB. For $10 I would suggest taking the plunge; however, if your budget is absolutely critical, then in most games right now you will never notice the difference between a 1GB 6950 and a 2GB 6950.

Our second card presents a more interesting scenario. The factory overclock on the XFX Radeon 6870 Black Edition is not very high, but then neither is the effective price of the overclock. Instead this is a story about a custom cooler, and whether, at about $10 over the average price of a reference Radeon HD 6870, it’s worth the premium. While I would not call the reference 6870 loud, I also would not call it quiet by any stretch of the word; if anything I would call it cheaply built. If you don’t care about noise then the Black Edition brings little to the table, but in a suitable case those of you with sensitive ears will be in for quite a surprise. Thus while the XFX 6870 comes up short as the true GTX 560 Ti competitor AMD seems to be hoping for, it clearly has other redeeming values.

With AMD’s latest cards squared away, our final thought is on today’s launch in general. If nothing else, hopefully today’s write-up has entertained you, and with any luck we’ve imparted upon you a bit of practical wisdom about how the GPU industry operates. As far as we can gather AMD went through quite a bit of effort to launch a viable GTX 560 Ti competitor today – a feat they appear to have succeeded at. The GPU industry is competitive from top to bottom, but there’s something special about the $200-$300 price range that brings out the insanity on all sides. And we wouldn’t have it any other way.



Comments

  • rdriskill - Tuesday, January 25, 2011 - link

    Given that it is a lot easier to find a 1920 x 1080 monitor now than it is to find a 1920 x 1200 monitor, would that resolution make more sense to list in these kinds of comparisons? I realise it wouldn't make much of a difference, but it is kind of strange not to see what, at least in my area, is the most common native resolution.
  • james.jwb - Tuesday, January 25, 2011 - link

    Wouldn't mind seeing a 27" resolution (2560x1440) included at the high end, as that resolution pushes the cards much harder and could make all the difference between playable and unplayable. I realize this is more work though :)
  • Ryan Smith - Tuesday, January 25, 2011 - link

    As 16:9 monitors have 90% of the resolution of 16:10 monitors, the performance is very similar. We may very well have to switch from 19x12 to 19x10 because 19x12 monitors are becoming so rare, but there's not a lot of benefit in running both resolutions.

    The same goes for 25x14 vs. 25x16. Though in that case, 25x16 monitors aren't going anywhere.
  • Makaveli - Tuesday, January 25, 2011 - link

    Great review.

    As for the complaints: GTFO. Is it somehow affecting your manhood that there is an overclocked card in the review?

    Some of you really need to get a life!
  • ctbaars - Tuesday, January 25, 2011 - link

    Hey! Don't you talk to Becky that way ....
  • silverblue - Tuesday, January 25, 2011 - link

    It's overclocked, sure, but it's an official AMD product line. If AMD had named it the 6880, I don't think anyone would've questioned it really.
  • Shadowmaster625 - Tuesday, January 25, 2011 - link

    The 6870 has 56 texture units and the 6950 has 88, or 57% more. Yet if you add up all the scores of each you find that the 6950 is only 8% faster on average. This implies a wasted 45% increase in SPs and/or texture units (which one?), as well as about 800 million wasted transistors. Clearly AMD needed to add more ROPs to the 6950. Also, since the memory clock is faster on the 6950, this implies even more wasted transistors. If both cards had the same exact memory bandwidth, they might very well only be 4% apart in performance! AMD's gpu clearly responds much more favorably to an increase in memory bandwidth than it does to increased texture units. It really looks like they're going off the rails and into the weeds. What they need is to increase memory bandwidth to 216GB/s, and increase their ROP-to-SIMD ratio to around 2:1.

    Yes I know about VLIW4... but where is the performance? Improvements should be seen by now. Like what Nvidia did with Civ 5. I'm not seeing anything like that from AMD and we should have been seeing that by now, in spades.
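    The percentages in this comment can be sanity-checked with quick arithmetic (a sketch; the unit counts are from the comment and the 8% figure is the commenter's own average):

    ```python
    # Texture unit counts cited in the comment: 6870 has 56, 6950 has 88.
    tex_6870, tex_6950 = 56, 88
    tex_increase = tex_6950 / tex_6870 - 1  # ~0.571, i.e. "57% more"

    # If the 6950 averages only ~8% faster, the implied fraction of the
    # extra hardware increase that goes unused is:
    perf_increase = 0.08
    wasted = (1 + tex_increase) / (1 + perf_increase) - 1  # ~0.455, i.e. "~45%"
    print(f"Texture units: +{tex_increase:.1%}, implied unused headroom: ~{wasted:.1%}")
    ```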
  • B3an - Wednesday, January 26, 2011 - link

    ....I like how you've completely missed the fact that the 6870 is clocked 100MHz higher on the core, and the 6870 Black is 140MHz higher. You list all these other factors, and memory speeds, but don't even mention or realise that the 6870/Black have considerably higher core clocks than the 6950.
  • Shadowmaster625 - Wednesday, January 26, 2011 - link

    It is probably clocked higher because it has almost a billion fewer transistors. Which raises the question.... what the hell are all those extra transistors there for if they do not improve performance?
  • DarkHeretic - Tuesday, January 25, 2011 - link

    This is my first post; I've been reading Anand for at least a year, and this concerned me enough to actually create an account and post.

    "For NVIDIA cards all tests were done with default driver settings unless otherwise noted. As for AMD cards, we are disabling their new AMD Optimized tessellation setting in favor of using application settings (note that this doesn’t actually have a performance impact at this time), everything else is default unless otherwise noted."

    While I read your concerns about where to draw the line on driver optimisation, Ryan, I disagree with your choice to disable select features from one set of drivers to the next. How many PC users play around with these settings, apart from the enthusiasts among us striving for extra performance or quality?

    Surely it would be far fairer to leave drivers at default settings when benchmarking hardware and/or new sets of drivers? Essentially, driver profiles have been tweaking performance for a while now from both AMD and Nvidia, so where do you draw the line on altering the testing methodology to suit "tweaked drivers"?

    I'll admit, regardless of whether disabling a feature makes a difference to the results or not, it actually made me stop reading the rest of the review, as from my own stance the results have been skewed. No two sets of drivers from AMD or Nvidia will ever be equal (I hope), but deliberately disabling features meant for the benefit of end users just seems completely the wrong direction to take.

    As you are concerned about where AMD is taking their driver features in this instance, I find myself equally concerned about where you are taking your testing methodology.

    I hope you can understand my concerns on this and leave drivers as intended in the future to allow a more neutral review.

