Final Words

So this may come as a surprise to some, but the AMD Radeon HD 4870 1GB Quad CrossFire leads in our benchmarks when focusing on the resolution that matters for this hardware (2560x1600).

While driver issues and a lack of other "stuff" to do (like PhysX and CUDA) matter with GPUs in situations where the hardware doesn't scale, the AMD solution leads the GeForce GTX 295 in more benchmarks (Age of Conan, Left 4 Dead, and Far Cry 2) and ties the NVIDIA solution in one title (Fallout 3). Not shown in our numbers is Race Driver GRID, as a continuing issue with FRAPS gets in the way of recording performance numbers with 4-way NVIDIA solutions. We were able to watch the frame rate, however, and it was clear that the NVIDIA hardware didn't reach the performance levels of the AMD hardware in GRID.

Certainly this isn't a sweeping victory for AMD, and because the outcome is close, it rests incredibly heavily on the benchmarks we chose and were able to run. Different titles may have produced different results, so there is no clear winner in terms of absolute performance; the right choice will depend greatly on title preference. It is worth noting, however, that when Quad GTX 295 leads Quad 4870 1GB, the NVIDIA card comes in at the very top in performance more often than AMD does. But the dark horse in this 4-way focused article is the 3-way high-end NVIDIA setup.

The 3-way GTX 280/285 leads the 4-way GTX 295 in half our tests: it's a wash, and it's either slightly cheaper or slightly more expensive depending on the specific flavor. The 4-way Radeon HD 4870 1GB leads the 3-way GTX 280/285 in only 2 of our 6 tests, though it ties in one of them (Fallout 3 again). If GRID were added back in, the playing field would likely be completely even on that count.

If you want an added twist, moving from 2-way to 4-way, AMD tends to scale better at 2560x1600 than NVIDIA does. Whether or not that's because the 2-way option starts from a lower performance baseline and thus runs into fewer system limitations at the high end, it's still impressive that the playing field is this even.
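
For readers who want to run the comparison themselves, multi-GPU scaling is usually expressed as the measured speedup divided by the ideal N-way speedup. Here is a minimal Python sketch of that calculation; the frame rates below are purely hypothetical placeholders, not our measured numbers:

```python
# Scaling efficiency: how much of the ideal N-way speedup a setup actually delivers.
# All frame rates here are hypothetical illustrations, not benchmark results.

def scaling_efficiency(fps_1way: float, fps_nway: float, n: int) -> float:
    """Fraction of the ideal n-x speedup achieved (1.0 = perfect scaling)."""
    return (fps_nway / fps_1way) / n

# Hypothetical example: a single GPU at 30 fps, four GPUs at 84 fps.
speedup = 84 / 30                               # 2.8x over one GPU
efficiency = scaling_efficiency(30, 84, 4)      # fraction of the ideal 4x
print(f"{speedup:.1f}x speedup, {efficiency:.0%} of ideal 4-way scaling")
```

The same function works for the 2-way case (n=2), which is how a "2-way scales better than 4-way" observation gets quantified.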

So what's the bottom line? Wow ... it's very hard to say that the differentiator is performance alone. But since we had less trouble with 3-way than with 4-way, our very slight preference here is the 3-way GeForce GTX 285. Overclocked hardware will get you even further into the stratosphere. Enjoy the ride.

If you don't happen to have a motherboard that supports 3x double-slot x16 physical PCIe cards, 4-way will have to be the option. In that case, Quad HD 4870 1GB scores points for keeping up with the Joneses, for scaling, and for bang for the buck. In terms of performance per dollar, which some people may not care about at these top-end price points, AMD leads. At the same time, we must consider that heavy investors like things to play with, and PhysX and CUDA add a potential benefit over AMD that some enthusiasts may value.
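
Performance per dollar is simple arithmetic, frames per second divided by price. A quick sketch of the calculation, with made-up prices and frame rates (not our benchmark data):

```python
# Performance per dollar for competing multi-GPU setups.
# Prices and frame rates below are illustrative placeholders, not measured data.

def fps_per_dollar(fps: float, price: float) -> float:
    return fps / price

setups = {
    "Quad HD 4870 1GB (hypothetical)": (80.0, 880.0),
    "Quad GTX 295 (hypothetical)":     (85.0, 1000.0),
}

for name, (fps, price) in setups.items():
    # Scale to fps per $100 so the numbers are easier to compare at a glance.
    print(f"{name}: {fps_per_dollar(fps, price) * 100:.1f} fps per $100")
```

With these placeholder numbers the slightly slower but cheaper setup wins on value, which is the general shape of the argument above.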

So who's got the true halo? Who can provide the best highest-possible-end option? In spite of our leanings, recommendations, and considerations, it's a wash. This one goes down in the history books as a battle for the high end that will come down to brand preference.


  • Razorbladehaze - Thursday, March 05, 2009 - link

    No, I am definitely not wrong. You have obviously never taken any courses in research statistics or methods. No matter what dictionary you pulled this from, experimental procedures remain the same, and these, for FPS, are OBJECTIVE.

    Your logic is OK regarding programming for specific configurations (not really accurate, but that is beside the overall point here); what is important here is that your reasoning is completely wrong. You apply it to one set of programs but not to another, and further deny that the logic you posited even exists within one program set. WTH.

    Yea come back after completing your courses in research methods and experimental procedures.

    Furthermore you may want to buff up on your courses in logic and reasoning.

    To quote the soup NAZI (from Seinfeld):
    NO SOUP FOR YOU!!!!
    Reply
  • Razorbladehaze - Thursday, March 05, 2009 - link

    Ok, after thinking briefly I decided to post again.

    Basically, what you need to understand about the whole testing aspect is that anything that can be operationally defined can be tested objectively.

    Furthermore, here are my ideas on your thoughts relating to 3DMark and game benchmarks.

    3DMark has what many would call good reliability in testing terms, such as test-retest, but so would any good benchmark for games, especially built-in game benchmarks.

    What is most important is validity: that the outcome actually measures what it is supposed to measure.

    3DMark scores are a conglomerate score and actually relate to nothing other than that final score. These measures are taken from data such as a CPU score, FPS score, and other data, correct??

    To quote you "Every scientific experiment strives to remove variables from the testing process..."

    Umm, hello, how does 3DMark determine its outcome score again? Yeah, multiple variables, that's right, MULTIPLE VARIABLES.

    Ok so going back to what is important is Validity: that which is being measured is actually being measured.

    A final outcome score in 3DMark is valid only for comparisons to other 3DMark scores. NO IFs, ANDs, OR BUTTS!!!!

    FPS - in each game is a valid OBJECTIVE data outcome score, as it is the operational definition of GPU rendering used here. Running multiple benchmarking runs in each game provides reliability. Measuring FPS across multiple games with a similar game engine lends concurrent validity to the experiments. This type of testing also allows convergent and divergent validity to be addressed here when using multiple game titles with different engines or similar offshoots.

    So in layman's terms for the world of experimental procedures, 3DMark can suck my balls

    Reply
  • Hrel - Saturday, March 21, 2009 - link

    Yes, 3DMark can only be compared to 3DMark; I already said that, thanks for repeating what a more intelligent person already said, as that seems to be what you do. Classes? So you DO let other people tell you how and what to think. Anyway, 3DMark rates the graphics card, and scores can be compared across a wide range of cards without worrying about variables like with games, where one game is better than another but only on some GPUs. 3DMark actually has a score for JUST the GPU, but I guess you've never used it.

    I already said the only way to get an accurate representation of GPU performance is to test multiple games and 3DMark. It would be best if you could just test every game out there, but no one has that kind of time.

    So yeah, game testing, even when using a game's built-in benchmark, still only measures performance in THAT game, so it means nothing to people who don't play that game, as it can't be used to determine performance in any other game, whereas 3DMark can. One game test could say that the HD4870 is faster, and subsequently a more powerful card than the GTX285, even though we all know it's not, just because that game runs better on AMD hardware. That's all I was saying and all I was trying to get across; I really wasn't looking for a pissing contest with an anonymous internet user.

    A test in one game can't give you an accurate idea of how a GPU will perform in other games; 3DMark, when compared to the scores of other cards and used in conjunction with FPS scores, CAN!!!!
    Reply
  • Razorbladehaze - Thursday, March 05, 2009 - link

    Oh yeah, and in reviewing what I wrote, I glossed over subjective data, and will further punish 3DMark.

    The most common subjective data sets are: ranking data, rating data, interview data, and categorical data.

    The first two, ranking and rating, are really what the outcome score of 3DMark is.

    It is a rating on their own scale and it allows them to rank systems by that score.

    So with this final bit of info, answer whether 3DMark outcomes are really subjective or objective (what does it measure besides its own ranking and rating scales?).
    Reply
  • npp - Sunday, March 01, 2009 - link

    It's just common, healthy sense that tells me something is terribly wrong here. I can't imagine anyone in his right mind spending at least $880 for a (non-professional) graphics solution which draws more than 700W under load and delivers almost no additional performance compared to far cheaper alternatives! (If you look closely at the charts you'll see that some games are perfectly playable even on single-GPU solutions.) Ok, there are some gains, but they are so minimal that the performance/price and performance/power ratios are simply disturbingly low. I once compared the situation to a quad-core CPU that delivers at most, say, a 2.2x speedup over a single core, with dual cores delivering 1.9x. Who would buy such a CPU?

    Of course, I'm not trying to say people should buy such things; it's the vendors who need to do some hard work on that poor scaling. I would never consider buying 4 GPUs if they deliver anything under a 3.5x speedup. It sounds crazy right now, but who knows, there may be interesting things to come.
    Reply
  • SiliconDoc - Wednesday, March 18, 2009 - link

    " (If you look closely at the charts you'll see that some games are perfectly playable on even single-GPU solutions). "

    CORRECTION: All games are perfectly playable on plenty more than just a few single-GPU solutions.

    YES - that's the real truth.
    Reply
  • mastrdrver - Sunday, March 01, 2009 - link

    It would have been nice to see some AA scaling. It was made apparent on the first page that scaling was going to underperform when moving to 4 GPUs from either 2 or 3. I think just about everyone saw that coming, especially with a max of 4x AA.

    With this much power, it would have been interesting to see what happens to scaling when AA gets turned up. Run the normal 4x AA for all the games, then start cranking up the AA and see if that might make the move from 2 to 4 GPUs look better.

    I know there are those who say, and I'll agree to an extent, that there isn't a difference from 4x to something like 12x or 24x edge detect (and whatever NVIDIA uses). Even the move from 4x to 8x is rather small in image improvement. Still, if you're going to drop that kind of money into a set of cards, why not put it to use? While I'm not sure about the 12x to 24x move on edge detect, I do seem to notice things here and there moving from 4x to 8x, and even a few things at 12x AAED. I've seen a handful of sites look at the higher AA levels, but they always do it with 1, or at most 2, GPUs when the game is already stressing them at 4x. Why not take a look at these higher levels of AA when there is an abundance of GPU power that seems to be idle?

    Overall, it would be an interesting look into something that I'm not sure any site has examined in any kind of depth when getting to the power level of 4, or even 3, GPUs.

    So are the games not scaling with more GPU power because games/drivers are not optimized for 3 or 4 GPUs, or are they not scaling because AA levels are left at "lower" levels? (Outside, of course, of being CPU limited because of the abundance of GPU power.)
    Reply
  • xxtypersxx - Sunday, March 01, 2009 - link

    I have noticed a recent trend where people post comments accusing Anandtech of favoring Intel/NVIDIA. Now, I absolutely can't stand bias and stopped reading sites like Tom's long ago because of it, but Anandtech has never seemed even remotely unfair. I think the issue is that people have noticed a lot of praise for Intel and NVIDIA over the past couple of years, but the reason is they have had consistently better performance. When the 4xxx series launched, Anandtech had nothing but praise for it and its superior value, and even criticized NVIDIA's stagnant technology and high prices in many articles. So basically what I'm trying to point out is that it isn't Anandtech that has been biased against AMD/ATI; it's the real-world performance that has been skewed, and they are simply reporting on it. Reply
  • SiliconDoc - Wednesday, March 18, 2009 - link

    Oh good golly. Reviewers are expected to be unbiased, but they almost never are. Unbiased is a cold, boring read for most fans.
    Anand used to have a very severe bias in favor of Intel many years ago.
    Now, you mentioned Tom's was biased, and you left Tom's and are now here, showing of course YOUR BIAS, because if you dare to notice bias here, then you've got to leave HERE and go somewhere else.
    Believe me, if you love ATI, you'll stay for the card reviews, which have a huge bias for ATI in every worded paragraph.
    I am curious, though, how you perceived Tom's bias. I bet he didn't favor the red cards? Was that it?
    I'm very interested in that.
    Would you be so kind as to say what bias you found there?
    Reply
  • deputc26 - Sunday, March 01, 2009 - link

    better change that to "CrossfireX" Reply
