Final Words

I've never felt totally comfortable with single-card multi-GPU solutions. While AMD reached new levels of seamless integration with the Radeon HD 3870 X2, there was always the concern that the performance of your X2 would either be chart-topping or merely midrange depending on how good AMD's driver team was that month. The same is true for NVIDIA GPUs: most games we test have working SLI profiles, but there's always the concern that one won't. It's not such a big deal for us when benchmarking, but it is a big deal if you've just plopped down a few hundred dollars and expect top performance across the board.

Perhaps I'm being too paranoid, but the CrossFire Sideport issue highlighted an important, um, issue for me. I keep getting the impression that multi-GPU is great for marketing but not particularly important when it comes to actually investing R&D dollars into design. With every generation, especially from AMD, I expect to see a much more seamless use of multiple GPUs, but instead we're given the same old solution: we rely on software profiles to ensure that multiple GPUs work well in a system, rather than a hardware solution where two GPUs truly appear, behave and act as one to the software. Maybe it's not in the consumer's best interest for the people making the GPUs to be the same people making the chipsets; it's too easy to use multi-GPU setups to sell more chipsets when the focus should really be on making multiple GPUs more attractive across the board, and just...work. But I digress.

The Radeon HD 4870 X2 is good: it continues to be the world's fastest single-card solution, provided that you're running a game with CrossFire support. AMD's CrossFire support has been quite good in our testing, scaling well in all but Assassin's Creed. Of course, that one is a doubly bitter pill for AMD when combined with the removal of DX10.1 support in the latest patch (which we did test with here). The patch has nothing to do with CrossFire support, of course, but the lack of scaling, plus the fact that 4xAA has the potential to be essentially free on AMD hardware but isn't here, means that test doesn't stack up well for AMD.

In addition to being the fastest single-card solution, the 4870 X2 in CrossFire is also the fastest two-card solution at 2560x1600 in every test we ran but one (once again, Assassin's Creed). It is important to note, though, that 4-way CrossFire was not the fastest solution nearly as often below 2560x1600. This is generally because 4-way CrossFire carries more overhead, which can become the major bottleneck in performance at lower resolutions. It isn't that the 4870 X2 in CrossFire is unplayable at lower resolutions; it's just a waste of money.
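
To see why overhead bites hardest at low resolutions, here's a toy model; the numbers are invented for illustration and are not measurements of AMD's drivers:

```python
# Toy model of multi-GPU frame-rate scaling. Assumes each frame has a
# fixed per-frame cost (driver overhead, inter-GPU synchronization)
# that does NOT divide across GPUs, plus render work that does.
# All numbers are made up for illustration.

def fps(render_ms, overhead_ms, gpus):
    frame_time = render_ms / gpus + overhead_ms  # ms per frame
    return 1000.0 / frame_time

for res, render_ms in [("1680x1050", 8.0), ("2560x1600", 30.0)]:
    one = fps(render_ms, 2.0, 1)
    four = fps(render_ms, 2.0, 4)
    print(f"{res}: 1 GPU {one:5.1f} fps, 4 GPUs {four:5.1f} fps, "
          f"{four / one:.2f}x scaling")
```

With a small per-frame render load (low resolution), the fixed overhead dominates and scaling collapses; with a heavy load (2560x1600), the divisible work dwarfs the overhead and scaling improves.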

We have yet to test 3-way SLI with the newest generation of NVIDIA hardware, and 3-way GTX 260 may indeed give two 4870 X2 cards a run for their money. We also have no doubt that a 3x GTX 280 solution is going to be the highest performing option available (though we lament the fact that anyone would waste so much money on so much power that is, at this point in time, unnecessary).

For now, AMD and NVIDIA have really gone all-in on this generation of hardware. AMD may not have the fastest single GPU, but they have done a good job of shaking up NVIDIA's initial strategy and forcing them to adapt their pricing to keep up. Right now, the consumer can't go wrong with a current-generation solution for less than $300 in either the GTX 260 or the HD 4870. These cards compete very well with each other, and gamers will really have to pay attention to which titles they want greater performance in before they buy.

The GTX 280 is much more reasonable at $450, but you are still paying a premium for the fastest single-GPU solution available. Despite carrying a 50% price premium over the GTX 260 and the 4870, it doesn't deliver anywhere near a 50% return in performance. It is faster than the GTX 260, and most of the time it is faster than the 4870 (though there are times when AMD's $300 part outperforms NVIDIA's $450 part). The bottom line is that if you want performance at a level above the $300 price point in this generation, you're going to get less performance per dollar.
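
A quick sketch of that performance-per-dollar math; the prices are the ones discussed above, but the frame rates are hypothetical placeholders, not our benchmark results:

```python
# Rough performance-per-dollar comparison. Prices are the street prices
# discussed above; frame rates are hypothetical placeholders, not
# benchmark results, chosen only to show the shape of the tradeoff.
cards = {
    "GTX 260": {"price": 300, "fps": 50.0},
    "HD 4870": {"price": 300, "fps": 50.0},
    "GTX 280": {"price": 450, "fps": 60.0},  # ~20% faster for 50% more money
}
for name, c in cards.items():
    print(f"{name}: {100 * c['fps'] / c['price']:.1f} fps per $100")
```

Even with generous assumptions, a card that costs 50% more while delivering roughly 20% more performance ends up well behind on fps per dollar.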

When you start pushing up over $450 and into multi-GPU solutions, you have to be prepared for even more diminished returns on your investment, and the 4870 X2 is no exception. Though it scales well in most cases and leads the pack in single-card performance when it does scale, there is no guarantee that scaling will be present, let alone good, in every game you want to play. AMD is putting a lot into this, and you can expect us to keep pushing them to get performance improvements as near to linear as possible with multi-GPU solutions. But until we have shared framebuffers and real cooperation on rendering frames from a multi-GPU solution, we just aren't going to see the kind of robust, consistent results most people will expect when spending $550 or more on graphics hardware.

Comments

  • helldrell666 - Wednesday, August 13, 2008 - link

    Anandtech hates DAAMIT. Have you checked the review of the 4870/X2 cards at techreport.com?
    The cards scored much better than here.
    I mean, in Assassin's Creed it's well known that ATI cards do much better than NVIDIA's.
    It seems that some sites like Anandtech, TweakTown ("NVIDIATown"), Guru3D, Hexus... do have some good relations with NVIDIA.
    It seems that marketing these days is turning into fraud.



  • Odeen - Wednesday, August 13, 2008 - link

    With the majority of the gaming population still running 32-bit operating systems and bound by the 4GB address-space limitation, it seems that a 2GB video card (which leaves AT MOST 2GB of system RAM addressable, and, in some cases, only 1.25-1.5GB) causes more problems than it solves.

    Are there tangible benefits to having 1GB of RAM per GPU in modern gaming, or does the GPU bog down before textures require such a gargantuan amount of memory? Wouldn't it be more sensible to make the 4870 X2 a 2x512MB card, which is more compatible with 32-bit OSes?
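
For what it's worth, the arithmetic behind that concern looks like this; the aperture sizes are hypothetical and mirror the comment's worst-case assumption that all of the card's memory gets mapped:

```python
# Why a 2GB card pinches a 32-bit OS: system RAM and memory-mapped
# device apertures share one 4GB address space. Aperture sizes below
# are hypothetical (real boards often map only part of video memory);
# they mirror the worst case the comment above describes.
GB = 1024**3
address_space = 4 * GB
video_aperture = 2 * GB    # assume the full 2GB of video memory is mapped
other_mmio = 0.5 * GB      # chipset, PCI devices, BIOS shadowing, etc.
usable_ram = address_space - video_aperture - other_mmio
print(f"System RAM still addressable: {usable_ram / GB:.2f} GB")  # 1.50 GB
```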

  • BikeDude - Wednesday, August 13, 2008 - link

    Because you can't be bothered to upgrade to a 64-bit OS, the rest of the world should stop evolving?

    A 64-bit setup used to be a challenge. Most hardware comes with 64-bit drivers now. The question now is: why bother installing a 32-bit OS on new hardware? Do you have lots of Win16 apps around that you run on a daily basis?
  • Odeen - Thursday, August 14, 2008 - link

    Actually, no. However, a significant percentage of "enthusiast" gamers at whom this card is aimed run Windows XP (with higher performance and less memory usage than Vista), for which 64-bit support is lackluster.

    Vista 64-bit does not allow unsigned non-WHQL drivers to be installed. That means that you cannot use beta drivers, or patched drivers released to deal with the bug-of-the-week.

    Since a lot of "enthusiast" gamers update their video (and possibly sound) card drivers on a regular basis, and cannot wait for the latest drivers to get Microsoft's blessing, 64-bit OSes are not an option for them.

    I'm not saying that the world should stop evolving, but I am looking forward to a single 64-bit codebase for Windows, where the driver signing restriction can be lifted, since ALL drivers will be designed for 64-bit.
  • rhog - Wednesday, August 13, 2008 - link

    Poor Nvidia.
    DT and Anandtech have their heads in the sand if they don't see the writing on the wall for Nvidia. The 4870 X2 is the fastest video card out there, the 4870 is excellent in its price range, and the 4850 is the same in its. The AMD chipsets are excellent (now that the SB750 is out), Intel chipsets have always been a cut above, and both really only support CrossFire, not SLI. Why would anyone buy Nvidia? (This is why they lost a bunch of money last quarter, no surprise.) For example, to get a GTX 280 SLI setup you have to buy an Nvidia chipset for either AMD or Intel processors (the exception may be Skulltrail for Intel?). Neither Nvidia chipset platform is really better than the equivalents from Intel or AMD, so why would you buy them? On top of that, Nvidia is currently having issues with their chips dying. Again, why would you buy Nvidia? I feel the writing is on the wall: Nvidia needs to do something quick to survive. What I also find funny is that many people on this site and on others said AMD was stupid for buying ATI, but in the end it seems that Nvidia is the one who will suffer the most. Give Nvidia a biased review; they need all the help they can get!
  • helldrell666 - Wednesday, August 13, 2008 - link

    AMD didn't get over 40% of the x86 market share even when they had the best CPUs (Athlon 64/X2).
    AMD knew back then that beating Intel (getting over 50% of the x86 market share) wouldn't happen just by having the best product.
    Now Intel has the better CPUs and 86% of the CPU market.
    So, to fight such a beast with huge power, you have to change the battleground.
    AMD bought ATI to get parallel processing technology. Why?
    To get into a new market where there's no Intel.
    Actually, that's not the exact reason.
    Lately NVIDIA introduced CUDA, parallel processing for general-purpose computing, and as we saw, parallel processing is much faster than x86 processing in some tasks.
    For example, in transcoding, the GTX 280, with 933 GFLOPS of theoretical processing power (the number of floating-point operations a GPU can execute per second), was 14 times faster than a QX9770 clocked at 4GHz (see the sketch after this comment for where that 933 figure comes from).
    NVIDIA claims that there are many more areas where parallel processing can take over easily.
    So we have two types of processing, and each one has its advantages over the other.
    What I meant by changing the battleground wasn't the GPU market.
    AMD is working at this very moment on the first parallel+x86 processor:
    a processor that will include x86 and parallel cores working together, handling everything much faster than a pure x86 processor, at least in some tasks. The x86 cores will handle the tasks they are faster at, and the parallel cores will handle the tasks they're faster at.
    Now, Intel claims that geometry can be handled better via x86 processing.
    You can see this as a battleground between Intel and NVIDIA, but it's actually where AMD can win.
    I think we're going to see not only x86+parallel CPUs but also
    x86+parallel GPUs. Simply put in as much processing power of each type as needed to make a GPU or a CPU.
    I think AMD is going to change the microprocessor industry into one where it can win.
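
For reference, the 933 GFLOPS figure quoted in the comment above is NVIDIA's theoretical peak for the GTX 280, and it can be reproduced from the card's published specifications:

```python
# Back-of-the-envelope check of the 933 GFLOPS figure quoted above.
# NVIDIA's theoretical peak for the GTX 280: 240 stream processors,
# each credited with a MAD (2 FLOPs) plus a MUL (1 FLOP) per clock,
# at the 1296MHz shader clock. Real code rarely sustains the extra
# MUL, so peak numbers overstate what applications actually see.
stream_processors = 240
flops_per_clock = 3            # MAD (2) + dual-issue MUL (1)
shader_clock_ghz = 1.296
peak_gflops = stream_processors * flops_per_clock * shader_clock_ghz
print(f"GTX 280 theoretical peak: {peak_gflops:.0f} GFLOPS")  # -> 933
```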
  • lee1210mk2 - Wednesday, August 13, 2008 - link

    Fastest card out - all that matters! - #1
  • Ezareth - Wednesday, August 13, 2008 - link

    I wouldn't be surprised if the test setup was done on a P45, much like TweakTown did for their 4870 X2 CF setup. Doesn't anyone realize that 2x PCIe x8 is not the same as 2x PCIe x16? That is the only thing that really explains the low scores of the CF setups here.
  • Crassus - Wednesday, August 13, 2008 - link

    I think this is actually a positive sign when viewed from a little further away. Remember all the hoopla about "native quad-core" with AMD's Phenom? They stuck with it, and they're barely catching up with Intel (and probably losing out big on yield).

    Here, Sideport apparently doesn't bring the expected benefits, so they cut it out and moved on. No complaints from me; at the end of the day the performance counts, not how you get there. And if disabling it lowers the power requirements a bit, then with the power draw Anand measured, I don't think it's an unreasonable choice. And if it makes the board cheaper, again, I don't mind paying less. :D

    And if AMD/ATI chooses to enable it one or two years down the road, by then we'll probably have moved on by one or two generations, and the gain will be negligible compared to just replacing the cards.

    [rant]
    At any rate, I'm happy with my 7900 GT SLI setup, and I can run the whole thing with a Socket 939 4200+ on a 350W PSU. If power requirements continue to go up like this, I can see the power grid going down if someone hosts a LAN party on my block. We already had brownouts this summer with multiple ACs kicking in at the same time, and it looks like PC PSUs are moving into the same power-draw ballpark. R&D seriously needs to look into GPU power efficiency.
    [/rant]

    My $.02
  • drank12quartsstrohsbeer - Wednesday, August 13, 2008 - link

    My guess (before the reviews came out) was that the Sideport would be used with the unified framebuffer memory. When the unified memory feature didn't work out, there was no need for it.

    I wonder if the non-functioning unified memory was due to technical problems, or if it was disabled for strategic reasons; i.e., since this card already beats NVIDIA's, why use it? This way they can make it a feature of the FireGL and GPGPU cards only.
