Who Scales: How Much?

To calculate this scaling data, we simply looked at the percent performance improvement of two cards over one. Perfect scaling would show as 100%, no improvement as 0%, and a negative number means the multi-GPU solution actually produced worse numbers than the single card. There's a lot of data here, so we'll break it down a bit before we present it all.
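The metric described above is straightforward to express. Here's a minimal sketch with made-up frame rates (not our actual benchmark data):

```python
# Scaling metric: percent improvement of two cards over one.
# 100% = perfect scaling, 0% = no improvement, negative = dual GPU is slower.
def scaling_percent(single_gpu_fps: float, dual_gpu_fps: float) -> float:
    return (dual_gpu_fps - single_gpu_fps) / single_gpu_fps * 100.0

print(scaling_percent(40.0, 72.0))  # 80.0 -> good but imperfect scaling
print(scaling_percent(40.0, 38.0))  # -5.0 -> the second card actually hurt
```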

It is possible to see more than 100% scaling in some tests for a few different reasons. Run-to-run fluctuations in benchmark performance can push results just over 100%, and sometimes optimizations that enable better multi-GPU performance also cut out some work, allowing higher performance than would otherwise have been possible. In one of the cases we test today, single GPU performance is limited at some framerate while multiple GPUs aren't hindered by the same limit. This artificially inflates the scaling percentage.
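The cap effect is easy to see with hypothetical numbers: if one card is held to 60 fps by some limit while two cards run unhindered, the measured scaling percentage overstates the real gain.

```python
# Hypothetical illustration of a framerate cap inflating measured scaling.
# All numbers are made up for the example.
def scaling_percent(single_fps: float, dual_fps: float) -> float:
    return (dual_fps - single_fps) / single_fps * 100.0

uncapped_single = 80.0                        # what one card could do uncapped
capped_single = min(uncapped_single, 60.0)    # what we actually measure
dual = 110.0                                  # two cards, not subject to the cap

print(scaling_percent(capped_single, dual))    # ~83%, inflated by the cap
print(scaling_percent(uncapped_single, dual))  # ~38%, the "true" scaling
```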

When looking at games that scale overall, we end up seeing both Radeon HD 4870 configurations (512MB and 1GB) performing worse than we expected. Granted, the 4870 1GB looks better if we only take 2560x1600 into account, but even then the Radeon HD 4850, GeForce GTX 260 and GTX 280 beat out the 4870 1GB in terms of average performance improvement (when performance improves). When we add in CPU limited cases, the 4870 cards look even worse. Across most of the ways we attempted to analyze the magnitude of performance improvement (averages, geometric means, per game, across games where all cards scaled, etc.), the Radeon HD 4850 and GeForce GTX 260 (and sometimes the GTX 280) did quite well, while the Radeon HD 4870 cards came in near the bottom of the list, with the 1GB often looking worse because it hit harder CPU limits at lower resolutions.
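Averages and geometric means can rank the same configuration differently, which is part of why no single aggregate tells the whole story. A quick sketch with hypothetical per-game scaling percentages shows how the two measures diverge:

```python
import math

# Hypothetical per-game scaling percentages for one configuration
# (not our measured data), aggregated two of the ways mentioned above.
scaling = [85.0, 60.0, 95.0, 10.0]

arith_mean = sum(scaling) / len(scaling)

# A geometric mean operates on ratios, so convert percents to speedup
# factors (85% scaling -> 1.85x), take the mean, then convert back.
factors = [1.0 + s / 100.0 for s in scaling]
geo_mean = (math.prod(factors) ** (1.0 / len(factors)) - 1.0) * 100.0

print(f"arithmetic mean: {arith_mean:.1f}%")
print(f"geometric mean:  {geo_mean:.1f}%")
```

The geometric mean is pulled down harder by the one poorly scaling game, which is why the choice of aggregate can shuffle the rankings.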

Hitting CPU or system limits speaks more to value than to desirability from a performance standpoint, but it's still important to look at all the cases. Configurations with lower baseline single GPU performance have more headroom to scale, but they may not always scale enough to be playable even when they scale well. So it's important to take both value and absolute performance data into account when looking at scaling.

We've put all of this data on our benchmark pages alongside the performance data to make it easier to see in context. There just isn't one good way to aggregate the data, or we would talk about it here. Depending on the type of analysis, we could present the numbers in ways that favor either AMD or NVIDIA, and since there really isn't a "correct" way to do it, we've decided to simply present the data per game and leave it at that.

95 Comments

  • kmmatney - Monday, February 23, 2009 - link

    Especially at the 1920 x 1200 resolution - that resolution is becoming a sweet spot nowadays.
  • just4U - Monday, February 23, 2009 - link

    I disagree. I see people finally moving away from their older 17-19" flat panels directly into 22" wide screens. 24" panels and 1920x1200 resolutions are nowhere near the norm.
  • SiliconDoc - Wednesday, March 18, 2009 - link

    Correct, but he said sweet spot because his/her wallet is just getting bulgy enough to contemplate a move in that direction... so - even he/she is sadly stuck at "the end user resolution"...
    lol
    Yes, oh well. I'm sure everyone is driving a Maserati until you open their garage door... or golly, that "EVO" just disappeared... must have been stolen.
  • DerekWilson - Monday, February 23, 2009 - link

    The 1GB version should perform very similarly to the two 4850 cards in CrossFire.

    The short answer is that the 1GB version won't have what it takes for 2560x1600 but it might work out well for lower resolutions.

    We don't have a 1GB version, so we can't get more specific than that, though this is enough data to make a purchasing decision -- just look at the 4850 CrossFire option and take into consideration the cheaper price on the 1GB X2.
  • politbureau - Tuesday, June 1, 2010 - link

    I realize this is an older article, however I always find it interesting to read when upgrading cards.

    While I find it admirable that Derek has compared the 'older' GTX 280 SLI scaling, it is unfortunate that he hasn't pointed out that it should perform identically to the GTX 285s if the clocks were the same.

    This was also passed over in the "worthy successor" article, where it does not compare clock-for-clock numbers - an obvious test if we want to discover the full value of the die shrink.

    I recently 'upgraded' to 3 GTX 285s from 3 GTX 280s through a warranty program with the manufacturer, and there is little to no difference in performance between the two setups. While cabling is more convenient (no 6-to-8 pin adapters), the 285s won't clock any better than my 280s would, Vantage scores are within a couple hundred points of each other at the same clocks (the 280s actually leading), and the temperature and fan speed of the new cards haven't improved.

    I think this is a valuable point in an article that compares performance per dollar, and while slightly outside the scope of the article, I think it's a probative observation to make.
