Graphics is infinitely parallel. There is always more work to be done, and that work can be broken up into millions of completely independent operations. This is very different from most tasks we see done on the CPU, which don't scale nearly as easily with the number of cores. While we might see small improvements by adding another CPU, we can see nearly double the performance by doubling the number of processors in a graphics card (as long as there are no other bottlenecks, anyway). This is why AMD and NVIDIA have invested so much money into their multiGPU solutions, CrossFire and SLI respectively.

MultiGPU solutions have been around for a few years now, and while we frequently include single card multiGPU solutions in our reviews, we only occasionally take an in-depth look at multiGPU technology. Some time has passed since the last time we studied the issue, and now that we've fully broken in our Core i7 system, 64-bit Vista, and recent graphics drivers, it's time to get to it.

Over the past few weeks we've been benchmarking and analyzing lots of numbers. We've looked at one, two, three, and four GPU systems across multiple games and resolutions. The configurations we chose are current-generation, high-ish-end hardware capable of operating in 3-way and 4-way configurations. Because of the sheer volume of data we collected, we've decided to break our analysis up into multiple articles.

This first article (the one you're reading right now) will cover single and dual GPU configurations (including single card multiGPU hardware). The next article will add 3-way solutions along with comparisons back to single and dual GPU setups. The final article will add in 4-way performance analysis and compare it back to the single, dual and 3-way data. Splitting up the analysis this way will allow us to dive deep into each type of configuration individually without spreading the content too thin. We can keep focus on a specific aspect of multiGPU performance and scaling while still making all the relevant comparisons.

The initial installment also introduces the Sapphire Radeon HD 4850 X2 2GB. Though we expected AMD to push the 4850 X2 out in the same way it launched the 4870 X2, we've only seen one version of the 4850 X2 hit the scene, late last year from Sapphire. In light of what we've seen, we are rather surprised there hasn't been more fanfare behind this part from either AMD or other board makers. The lighter-weight X2 competes more directly in price and performance with the GeForce GTX 280/285, and really fills out AMD's lineup. Overall, the increased RAM in the 4850 X2 2GB enables great performance scaling even at resolutions the 512MB 4850 can't come close to handling.

As for the topics we'll cover, our interest will focus on the scalability of multiGPU solutions and their relative value. Before jumping into the numbers, we'll cover the metrics we use to analyze our data. First, we'll look at scaling and talk about the big picture. Then we'll explain what we've done to calculate a value comparison.
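As a rough illustration of what a value comparison can look like, a basic performance-per-dollar metric divides a card's average frame rate by its street price. This is only a sketch of the general idea, not the article's actual formula; the card names, frame rates, and prices below are made-up illustrative numbers:

```python
# Hypothetical sketch of a performance-per-dollar value metric.
# All frame rates and prices here are invented for illustration,
# not figures from the article's benchmarks.

cards = {
    "Card A": {"avg_fps": 60.0, "price": 300.0},
    "Card B": {"avg_fps": 45.0, "price": 180.0},
}

def perf_per_dollar(avg_fps, price):
    """Frames per second delivered per dollar spent."""
    return avg_fps / price

for name, data in cards.items():
    value = perf_per_dollar(data["avg_fps"], data["price"])
    print(f"{name}: {value:.3f} fps/$")
```

Note that a cheaper, slower card can win this metric outright, which is one reason raw fps/$ numbers need to be read alongside absolute frame rates at each resolution.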

Who Scales: How Often?

95 Comments


  • makdaddy626 - Monday, February 23, 2009 - link

    I really appreciate the article and all the research and work that went into it. Kudos to you for it.

    A small suggestion would be to take into account a minimal playable frame rate in the value and performance per dollar data, where a ZERO would be substituted for the frame rate in instances where a card failed to reach a playable rate for a given game/resolution. I feel this would more accurately reflect the value of the card(s) as 15 FPS in most games presents no value.
  • Mastakilla - Monday, February 23, 2009 - link

    I agree

Minimum framerates should be even more important than average ones...

    Interesting article though! I didn't know the 4850x2 was so competitive...

Only in Crysis does it do worse than the 285, which I had in mind for my new PC...
That does make me wonder if the 285 might be a more future-proof investment...
  • makdaddy626 - Monday, February 23, 2009 - link

Yes, but I meant "minimum" in the sense of what the game needs to be playable, even if you measure "average" - I just don't think it's fair to say that the 9800 GTX shows the highest performance per dollar on the Crysis Warhead 2560x1600 chart when it turns in frame rates of 13.5. To me that is ZERO value for the money because it's not playable. Someone wanting to play at those settings would be wasting EVERY dollar spent on the 9800 GTX.
  • 7Enigma - Tuesday, February 24, 2009 - link

Completely agree. Statistics mean nothing when not taken in proper context. Zero, N/A, or just leaving it blank would be much better. Someone looking to use that card would then click on a lower resolution and see if it is a viable choice. It would reduce the amount of data the reader of the article needs to compare, and make caveats like those in the explanation section about comparing between cards/resolutions etc. almost moot.
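The adjustment these commenters describe is straightforward to express: zero out the performance-per-dollar value of any result below a playable frame rate. This is a sketch of the suggestion, not anything from the article itself, and the 30 fps cutoff is an assumed threshold:

```python
# Sketch of the commenters' suggestion: a card that can't reach a
# playable frame rate at a given setting contributes zero value,
# no matter how cheap it is. The 30 fps threshold is an assumption;
# reasonable cutoffs vary by game and by player.

PLAYABLE_FPS = 30.0  # assumed minimum playable frame rate

def adjusted_value(avg_fps, price, threshold=PLAYABLE_FPS):
    """Performance per dollar, zeroed out below the playable threshold."""
    if avg_fps < threshold:
        return 0.0
    return avg_fps / price

# The 13.5 fps example from the thread yields zero value at any price:
print(adjusted_value(13.5, 150.0))  # 0.0
print(adjusted_value(60.0, 300.0))  # 0.2
```

With this tweak, an unplayable result can no longer top a performance-per-dollar chart, which is exactly the complaint about the 9800 GTX at 2560x1600.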
  • Spivonious - Monday, February 23, 2009 - link

The framerate charts are all but worthless if you're focusing on how performance scales. Why not some line graphs with all three resolutions shown and card models along the x-axis? Then the reader could see how performance is affected by the lower memory of some models and how it scales linearly with the higher-end cards.
  • 7Enigma - Monday, February 23, 2009 - link

    I would have to agree with this. I always enjoyed the broken line graphs that show multiple resolutions and framerates in the same graph. It makes comparisons very easy and more importantly allows EVERYONE to see their particular resolution without having to click on a link for a separate graph.

    It's fine to keep your specialized performance/$ and % increase from a single card the way you have it as I understand what you mean about not comparing between resolutions but for the general frame rate comparisons I preferred the old way.

    One thing I failed to see which I have seen in previous reviews with X-fire/SLI (or was it with tri/quad setups?) is the stuttering that can be present. I thought it was an Anand article but could have been from another site.
  • SiliconDoc - Wednesday, March 18, 2009 - link

    The charts are designed to autopop to 2560x - and we all know the red ragers have screamed the ati cards are great there.
    EVERY CHART pops to the favored ati $2,000.00 monitor resolution.
    It's not an accident, Derek made them.
  • C'DaleRider - Monday, February 23, 2009 - link

    Derek, if you want to impress, and this article does with its research, please invest in some writing manuals and learn some grammar.

    Take this sentence:
    "This unique card really shined and held it's own all the way up to 2560x1600."

    Your use of "IT'S" in this instance is incorrect. IT'S is a contraction for IT IS, not a possessive word, which is ITS.


    Or take this passage, "It's very surprising to us that AMD hasn't pushed this configuration and that Sapphire are the only manufacturer to have put one of these out there."

    Sapphire is a company or organization, I realize that. But in this instance, you're referring to a group in its singular fashion, or as a single unit. That context is seen by the only manufacturer in the sentence.

    That sentence should have read. "It's very surprising to us that AMD hasn't pushed this configuration and that Sapphire IS the only manufacturer to have put one of these out there."

    Here's the rule for that (taken from both MLA and APA handbooks): If the action of the verb is on the group as a whole, treat the noun as a singular noun. If the action of the verb is on members of the group as individuals, treat the noun as a plural noun.

    This means that companies, such as Microsoft, IBM, Sapphire, Ford, etc., when referred to as a whole, collective, single entity, take a singular verb.

    But if you are referring to pieces of the whole, such as "the engineers of Ford are....." or "the programmers at Microsoft are.....", the verb is plural.

    Please invest in some proper English grammar texts and take time to read and learn from them. Your error laden grammar you write and use is quite distracting and detracts from what is supposed to be a professionally run hardware site.

    Hire a proofreader or good copy editor if learning proper grammar is too difficult.

    Otherwise, this was a great article!
  • The0ne - Monday, February 23, 2009 - link

    I don't really mind Anandtech articles as much in terms of presentation, spelling and graphics. Other sites such as Ars Technica, x-bit labs, and so forth are much worse. The first is first since they've started writing articles concerning everything, it seems.

    If I did mind, I'd say stick to the general guidelines for writing manuals, procedures, pamphlets, technical docs, etc. But being online, this isn't the case and won't ever be. Again, I don't mind as much because I do the same thing myself, where I hardly pay attention to spelling or grammar when writing online. It's only when I write short stories or for work that I pay attention. Strange, but comfort sure does make one do these things :)

    And yes, I do write all sorts of articles daily.

  • oldscotch - Monday, February 23, 2009 - link

    "Your error laden grammar you write and use is quite distracting and detracts from what is supposed to be a professionally run hardware site."

    That should read "The error laden grammar you use is quite distracting..." or just "Your error laden grammar is quite distracting..."
    "Your error laden grammar you write and use..." is redundant.

    Perhaps you should learn some grammar yourself before criticizing others about theirs.
