This marks the final planned installment of our multi-GPU exploration. We may (or may not) publish a follow-up that looks at CPU scaling across all of these configurations. What we expect to find is that single GPU setups will not be nearly as affected as multi-GPU setups, which more often run into CPU and other system limitations. We can't guarantee the CPU scaling article, because we still have testing to run and this editor is soon to be the father of a second child. We will keep working to complete that testing, and whether or not we are able to round out this series with a CPU scaling follow-up, we will definitely explore CPU scaling further in future articles.

For now, it is important to remember that much of the diminishing return on what 3 and 4 GPU systems can deliver comes in the form of system-limited performance. With single GPU systems, we expect there is a wide range of CPUs we can select that will deliver nearly the same performance. Putting less money into the CPU than the GPU makes a lot of sense for gamers who don't need the CPU power for other tasks. But does the same hold for multi-GPU systems? Maybe, maybe not. We do know that even with the highest powered CPU we can buy, we still run into a good number of system-limited situations.

One of the things gamers who invest in the highest end multi-GPU systems get is longevity. Gamers with 3-way GTX 285 hardware will be able to go quite a while without upgrading. Of course, this has to be balanced against advancements in technology: will 3x GTX 285 still be worth it once DX11 hardware and games arrive and newer models offer better graphics options? Then again, gamers who want and employ this type of solution are highly likely to upgrade to the highest end hardware as often as possible, so the longevity issue may not be as relevant as it is with more affordable multi-GPU solutions.

On the plus side, gamers who must have the bleeding edge only need to make the significant investment once. Imagine a gamer who bought 3x GTX 285 cards when they came out in January. Within a year we expect new models that will best the GTX 285 in performance, but the GTX 285 hardware will still be worth a significant amount. The used cards can be sold and the proceeds reinvested in new graphics hardware. This isn't as easy or as useful with lower end graphics cards, as you don't get the same return on your investment.

I'm not saying you'll make money or break even this way, but if you aggregate the cost over the long term, you'll spend less on average per purchase than if you make one high end purchase and run it until it's worthless before buying again. As a purely hypothetical illustration: if $1500 worth of cards can be resold for $700 a year later, each upgrade cycle effectively costs $800 rather than the full purchase price. Whether you'd spend more or less in total would require further analysis, though.

Now, we advocate value quite a bit on this site. People who want the absolute highest end can easily look at the graphs and simply know what they want. We don't really see much "value" in these high end parts beyond their being as fast as possible: it's cheaper to buy efficient lower end hardware that delivers hugely playable performance in all the games we tested. A good high end 2-way solution can go a long way, while the diminishing returns of 3- and especially 4-way systems make the cost of the slightly higher performance more than we can recommend in good conscience. But there's a market for it, so we'll take a look at it.

For the majority of our readers, though, this article will reinforce the fact that, while the highest possible end might be nice to dream about, it's really tough to justify the cost, especially when most people are on some sort of budget. So let's get a glimpse of that dream.

  • JarredWalton - Sunday, March 1, 2009 - link

    Fixed, thanks. Note that it's easier to fix issues if you can mention a page, just FYI. :)
  • askeptic - Sunday, March 1, 2009 - link

This is my observation based on their reviews over the last couple of years
  • ssj4Gogeta - Sunday, March 1, 2009 - link

    It's called being fair and not being biased. They did give the due credit and praise to AMD for RV770 and Phenom II. You probably haven't been reading the articles.
  • SiliconDoc - Wednesday, March 18, 2009 - link

    He's a red fan freak-a-doo, with his tenth+ name, so anything he sees is biased against ati.
    Believe me, that one is totally goners, see the same freak under krxxxx names.
    He must have gotten spanked in a fps by an nvidia card user so badly he went insane.
  • Captain828 - Sunday, March 1, 2009 - link

    In the last couple of years, nVidia and Intel have had better performing hardware than the competition.
    So I don't see any bias and the charts don't show any either.
  • lk7200 - Wednesday, March 11, 2009 - link

    Shut the *beep* up f aggot, before you get your face bashed in and cut
    to ribbons, and your throat slit.
  • SiliconDoc - Wednesday, March 18, 2009 - link

    Another name so soon raging red fanboy freak ? Going to fantasize about murdering someone again, sooner rather than later ?
    If ati didn't suck so badly, and be billion dollar losers, you wouldn't be seeing red, huh, loser.
  • JonnyDough - Tuesday, March 3, 2009 - link

    Hmm...X1900 series ring a bell? Methinks you've been drinking...
  • Razorbladehaze - Sunday, March 1, 2009 - link

Wow, what I was really looking forward to here disappeared entirely. I was expecting to see more commentary on the subjective image quality of the benchmarks, and there was even less discussion relating to that than in the past two articles; kind of a bummer.

On a side note, what was shown was what I expected from piecing together a number of other reviews. Nice to see it combined, though.

The only nugget of information I found disturbing is the impression that CUDA is better than what ATI has promoted. This in light of my understanding that nVidia just hired a head tech officer from the university where Stream computing (what ATI uses) took root. Given that CUDA is just an offshoot of this, this hiring would lead me to believe that nVidia will be migrating toward Stream rather than the opposite, especially if GPGPU computing is to become commonplace.

I think it would be in nVidia's best interest to do this, as I am afraid Intel is right and nVidia's future may be bleak if GPGPU computing does not take hold; migrating toward their rival AMD's GPGPU approach would be one strategy to reduce the resources spent exploring this tech.

Well yeah... I think I went way, way off on a tangent on this one, so... yeah, I'm done.
  • DerekWilson - Monday, March 2, 2009 - link

Sorry about the lack of image quality discussion. Our observation is that image quality is not significantly impacted by multi-GPU. There are some instances of stuttering here and there, but mostly in places where performance is already bad or borderline; otherwise, we did note where there were issues.

As far as GPGPU / GPU computing goes, CUDA is a more robust and more widely adopted solution than ATI Stream. CUDA has made more inroads in the consumer space, and especially in the HPC space, than Stream has. There aren't that many differences in the programming model, but CUDA for C does have some advantages over Brook+. I prefer the fact that ATI opens up its ISA down to the metal (alongside a virtual ISA), while NVIDIA only offers a virtual ISA.

The key is honestly adoption, though: the value of the technology only exists insofar as the end user has a use for it. CUDA leads here. OpenCL, in our eyes, will close the gap between NVIDIA and ATI and should put them both on the same playing field.
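
To make the "CUDA for C" programming model Derek describes a little more concrete, here is a minimal SAXPY sketch: a kernel plus the explicit grid/block launch that characterizes CUDA. This is an illustrative example only; the kernel name, sizes, and test values are our own assumptions, not code from either vendor's SDK.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // SAXPY: y[i] = a * x[i] + y[i], computed with one thread per element.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;                  // 1M elements (illustrative size)
        const size_t bytes = n * sizeof(float);

        // Host buffers with simple test data.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Device buffers, explicitly allocated and filled by the programmer.
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f (expect 4.0)\n", hy[0]);  // 2*1 + 2 = 4

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

The explicit thread indexing and launch configuration are the parts most specific to CUDA; as we understand it, Brook+ would instead express this as a kernel mapped over streams, with the runtime handling the iteration domain.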
