Who Scales ... And Timing

In previous articles, we looked at how many tests scale above or below certain thresholds. The picture gets murkier with four GPUs, so rather than pick a single cutoff ourselves, we list several. The chart below shows the number of tests that fail to scale better than the percentage listed at the top of each column. Many tests fail to scale at what we would call a reasonable rate, but different buyers of these parts will have different definitions of reasonable.

As with moving from 1 to 2 GPUs, doubling the GPU count again caps theoretical scaling at 100%. But fewer games scale past 2 GPUs at all, and of those that do, fewer scale near linearly. To top it off, many tests that do scale run straight into a system limitation. A good chunk of games fail to scale past 5%, and fully 13 of the 18 tests fail to scale beyond 50% on every configuration we tested.

Number of tests (out of 18) failing to scale beyond each percentage when moving from 2 to 4 GPUs:

                                        <2.5%  <5%  <10%  <15%  <20%  <25%  <33.3%  <50%
NVIDIA GeForce GTX 295 Quad SLI            5    5     6     7     8     9     10     13
NVIDIA GeForce 9800 GX2 Quad SLI           7    7     8     8     8     8      8     13
ATI Radeon HD 4870 1GB Quad CrossFire      5    5     5     7     9    11     12     13
ATI Radeon HD 4850 Quad CrossFire          6    7     9    10    10    10     12     13
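
To make the thresholds concrete, here is a minimal sketch of how a 2-to-4 GPU scaling percentage is computed and bucketed against the cutoffs above. This is our illustration, not part of the original test harness, and the frame rates in it are hypothetical: scaling is (fps on 4 GPUs / fps on 2 GPUs - 1) x 100, so a perfect doubling yields 100%.

```cpp
#include <cstdio>

int main() {
    // Hypothetical (2-GPU fps, 4-GPU fps) pairs for a few tests.
    const float fps[][2] = { {60.0f, 61.0f}, {55.0f, 70.0f}, {40.0f, 78.0f} };
    // The cutoffs used in the table above, in percent.
    const float cutoffs[] = { 2.5f, 5.0f, 10.0f, 15.0f, 20.0f, 25.0f, 33.3f, 50.0f };
    const int nTests = sizeof(fps) / sizeof(fps[0]);
    const int nCuts = sizeof(cutoffs) / sizeof(cutoffs[0]);
    int failCount[nCuts] = {0};

    for (int t = 0; t < nTests; ++t) {
        // Scaling: percentage gain from doubling the GPU count (100% = linear).
        float scaling = (fps[t][1] / fps[t][0] - 1.0f) * 100.0f;
        for (int c = 0; c < nCuts; ++c)
            if (scaling < cutoffs[c]) ++failCount[c];
    }
    for (int c = 0; c < nCuts; ++c)
        printf("<%.1f%%: %d of %d tests fail to scale\n", cutoffs[c], failCount[c], nTests);
    return 0;
}
```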

Looking at the low end, we can see that a number of tests fail to scale at all. And at 33%, far fewer configurations scale at this rate than when moving from 1 to 2 GPUs. Clearly, 4-way multi-GPU solutions are designed with nothing but maximum performance in mind. Consistent scaling matters less than the fact that these solutions can provide some degree of higher performance in some situations.

We would also note that when paying ridiculous amounts of money for not-quite-as-ridiculous performance gains, the robustness of the solution is of very high importance. No one wants to spend over $1000 on a solution that sometimes scales well and sometimes degrades performance. Neither AMD nor NVIDIA is immune to this, and we would like to see the issue tackled more earnestly than simply noting that SLI and CrossFire can be disabled if trouble arises.

NVIDIA does have an advantage at this level, though. We would love to see AMD get its driver act together and consistently deliver drivers that provide good scaling and performance in newly released AAA titles on launch day. We would also love to see AMD refine its driver development model so that improvements released as hotfixes always make it into the very next WHQL driver (which is currently not the case). Everywhere else this is merely a slight annoyance that people may take or leave. At the highest of the high end, however, a delay in getting good scaling, or the need to juggle newer official drivers against older hotfixes that contain more recent fixes, can prove more than a trifle. For such a high price, NVIDIA delivers a better experience on this count.

Additionally, until OpenCL matures, CUDA is a better GPU computing alternative than what AMD offers, and PhysX can provide additional flexibility now that more titles are beginning to adopt it. In fact, this is the space in which we currently see the most value in CUDA and PhysX: those in the market for hardware this high end will be more interested in niche features that don't yet have broad enough support or large enough impact for us to heartily recommend them as must-haves for everyone.

Technophiles (like myself) who are willing to put this kind of money into hardware often get excited about it on a more than practical level. The technology itself, rather than the experience it delivers, can be a source of enjoyment in its own right. I know I like playing with PhysX and CUDA in spite of the fact that these technologies still need broader support to compel the average gamer.

Performance itself cannot be ignored, and it is indeed of the highest importance in the highest end configurations. We will include the value graphs, but we expect that the line closest to the top of the performance charts is the key factor in decision making when it comes to quad-GPU options. The hassles of maintaining a 4-GPU configuration are not worth it if the system doesn't provide a consistently top-of-the-line experience.

Comments

  • JarredWalton - Sunday, March 1, 2009 - link

    Fixed, thanks. Note that it's easier to fix issues if you can mention a page, just FYI. :)
  • askeptic - Sunday, March 1, 2009 - link

    This is my observation based on their reviews over the last couple of years.
  • ssj4Gogeta - Sunday, March 1, 2009 - link

    It's called being fair and not being biased. They did give the due credit and praise to AMD for RV770 and Phenom II. You probably haven't been reading the articles.
  • SiliconDoc - Wednesday, March 18, 2009 - link

    He's a red fan freak-a-doo, with his tenth+ name, so anything he sees is biased against ati.
    Believe me, that one is totally goners, see the same freak under krxxxx names.
    He must have gotten spanked in a fps by an nvidia card user so badly he went insane.
  • Captain828 - Sunday, March 1, 2009 - link

    In the last couple of years, nVidia and Intel have had better performing hardware than the competition.
    So I don't see any bias and the charts don't show any either.
  • lk7200 - Wednesday, March 11, 2009 - link

    Shut the *beep* up f aggot, before you get your face bashed in and cut to ribbons, and your throat slit.
  • SiliconDoc - Wednesday, March 18, 2009 - link

    Another name so soon raging red fanboy freak ? Going to fantasize about murdering someone again, sooner rather than later ?
    If ati didn't suck so badly, and be billion dollar losers, you wouldn't be seeing red, huh, loser.
  • JonnyDough - Tuesday, March 3, 2009 - link

    Hmm...X1900 series ring a bell? Methinks you've been drinking...
  • Razorbladehaze - Sunday, March 1, 2009 - link

    Wow, what I was really looking forward to here disappeared entirely. I was expecting to see more commentary on the subjective image quality of the benchmarks, and there was even less discussion of that than in the past two articles. Kind of a bummer.

    On a side note, what was shown was what I expected from piecing together a number of other reviews. Nice to see it combined, though.

    The only nugget of information I found disturbing is the impression that CUDA is better than what ATI has promoted. This in light of my understanding that nVidia just hired a head tech officer from the university where Stream computing (what ATI uses) took root. Albeit CUDA is just an offshoot of this, that hiring would lead me to believe that nVidia will migrate towards Stream rather than the opposite, especially if GPGPU computing is to become commonplace.

    I think it would be in nVidia's best interest to do this, as I am afraid Intel is right that nVidia's future may be bleak if GPGPU computing does not take hold, and migrating towards rival AMD's GPGPU approach would reduce the resources needed to explore this tech.

    Well yeah... I think I went way, way off on a tangent on this one, so... yeah, I'm done.
  • DerekWilson - Monday, March 2, 2009 - link

    Sorry about the lack of image quality discussion. It's our observation that image quality is not significantly impacted by multi-GPU rendering. There are some instances of stuttering here and there, but mostly in places where performance is already bad or borderline, and we did note where issues appeared.

    As far as GPGPU / GPU computing goes, CUDA is a more robust and more widely adopted solution than ATI Stream. CUDA has made more inroads in the consumer space, and especially in the HPC space, than Stream has. There aren't that many differences in the programming model, but CUDA for C does have some advantages over Brook+. I prefer the fact that ATI opens up its ISA down to the metal (alongside a virtual ISA), while NVIDIA only offers a virtual ISA.

    The key is honestly adoption though: the value of the technology only exists as far as the end user has a use for it. CUDA leads here. OpenCL, in our eyes, will close the gap between NVIDIA and ATI and should put them both on the same playing field.
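
For readers unfamiliar with what "CUDA for C" looks like in practice, below is a minimal, hypothetical vector-add sketch showing the kernel-plus-launch model discussed in the comment above. This is an editorial illustration, not code from the article or comments; all names and sizes are ours, and it assumes a CUDA-capable GPU and toolchain.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical kernel: each thread adds one element of a and b into c.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Host buffers with trivial test data.
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers; copy the inputs across the bus.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", hc[0]);         // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```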
