Assassin's Creed

Even at 2560 x 1600, the high-end configurations are bumping into a frame rate limiter; any of the very high-end setups are capable of running Assassin's Creed very well.



Oblivion

The GeForce 9800 GTX+ does very well in Oblivion, and a pair of them actually gives the 4870 CF a run for its money, especially given that the GTX+ is a bit cheaper. While this isn't the overall trend, it does illustrate that GPU performance can vary considerably from one application to the next. The Radeon HD 4870 is still faster overall; it just performs on par with the GTX+ in this case.



The Witcher

We've said it over and over again: while CrossFire doesn't scale as consistently as SLI, when it does, it has the potential to outscale SLI, and The Witcher is the perfect example of that. While the GeForce GTX 280 sees performance go up 55% from one to two cards, the Radeon HD 4870 sees a full 100% increase in performance.
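The scaling percentages above are just the two-card to one-card frame rate ratio minus one. A quick sketch of that arithmetic, using made-up frame rates for illustration (not the review's actual numbers):

```python
def scaling(fps_one_card, fps_two_cards):
    """Percent frame rate increase from adding a second GPU."""
    return (fps_two_cards / fps_one_card - 1) * 100

# Hypothetical numbers chosen to mirror the gains described above:
# a GTX 280-style 55% gain vs. a 4870 CrossFire-style 100% gain.
print(round(scaling(40.0, 62.0)))  # 55
print(round(scaling(40.0, 80.0)))  # 100
```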

It is worth noting that we are able to see these performance gains due to a late driver drop by AMD that enables CrossFire support in The Witcher. We do hope that AMD looks at enabling CrossFire in games other than those we test, but we do appreciate the quick turnaround in enabling support - at least once it was brought to their attention.



Bioshock

The Radeon HD 4000 series did very well in Bioshock in our single-GPU tests, but pair two of these things up and we're now setting performance records.




173 Comments


  • jay401 - Wednesday, June 25, 2008 - link

    Good but I just wish AMD would give it a full 512-bit memory bus bandwidth. Tired of 256-bit. It's so dated and it shows in the overall bandwidth compared to NVidia's cards with 512-bit bus widths. All that fancy GDDR4/5 and it doesn't actually shoot them way ahead of NVidia's cards in memory bandwidth because they halve the bus width by going with 256-bit instead of 512-bit. When they offer 512-bit the cards will REALLY shine.
  • Spoelie - Thursday, June 26, 2008 - link

    Except that when R600 had a 512-bit bus, it didn't show any advantage over RV670 with a 256-bit bus. And that was GDDR3 vs. GDDR3, not GDDR5 as in RV770's case.
  • JarredWalton - Thursday, June 26, 2008 - link

    R600 was 512-bit ring bus with 256-bit memory interface (four 64-bit interfaces). Read about it here for a refresher: http://www.anandtech.com/showdoc.aspx?i=2552&p... Besides being more costly to implement, it used a lot of power and didn't actually end up providing provably better performance. I think it was an interesting approach that turned out to be less than perfect... just like NetBurst was an interesting design that turned out to have serious power limitations.
  • Spoelie - Thursday, June 26, 2008 - link

    Except that it was not, that was R520 ;) and R580 is the X19x0 series. That second one proved to be the superior solution over time.

    R600 is the x2900xt, and it had a 1024bit ring bus with 512bit memory interface.
  • DerekWilson - Sunday, June 29, 2008 - link

    yeah, r600 was 512-bit

    http://www.anandtech.com/showdoc.aspx?i=2988&p...

    looking at external bus width is an interesting challenge ... and gddr5 makes things a little more crazy in that clock speed and bus width can be so low with such high data rates ...

    but the 4870 does have 16 memory modules on it ... so that's a bit of a barrier to higher bit width busses ...
  • JarredWalton - Wednesday, June 25, 2008 - link

    I'd argue that the 512-bit memory interface on NVIDIA's cards is at least partly to blame for their high pricing. All things being equal, a 512-bit interface costs a lot more to implement than a 256-bit interface. GDDR5 at 900MHz is effectively the same as GDDR3 at 1800MHz... except no one is able to make 1800MHz GDDR3. Latencies might favor one or the other solution, but latencies are usually covered by caching and other design decisions in the GPU world.
  • geok1ng - Wednesday, June 25, 2008 - link

    The tests showed what I feared: my 8800GT is getting too old to drive my Apple at 2560x1600 even without AA! But the tests also showed that the 512MB of GDDR5 on the 4870 justifies the higher price tag over the 4850, something that the 3870/3850 pair failed to demonstrate. The question remains: will 1GB of GDDR5 dethrone NVIDIA and rule the 30-inch realm of single-GPU solutions?
  • IKeelU - Wednesday, June 25, 2008 - link

    "It is as if AMD and NVIDIA just started pulling out hardware and throwing it at eachother"

    This makes me crack up... I just imagine two bruised and sweaty middle-aged CEOs flinging PCBs at each other, like children in a snowball fight.
  • Thorsson - Wednesday, June 25, 2008 - link

    The heat is worrying. I'd like to see how aftermarket coolers work with a 4870.
  • Final Destination II - Wednesday, June 25, 2008 - link

    http://www.techpowerup.com/reviews/Powercolor/HD_4...

    Look! Compare the Powercolor vs. the MSI.
    Somehow MSI seems to have done a better job with 4dB less.
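The bus-width-versus-memory-clock debate in the thread above boils down to a single product: bytes per transfer times effective transfers per second. A quick sketch of that arithmetic, where the clock figures are illustrative values drawn from the discussion rather than a spec sheet:

```python
def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock):
    """Peak memory bandwidth in GB/s: bus width in bytes x effective data rate.

    transfers_per_clock is 2 for DDR-style GDDR3 and 4 for GDDR5.
    """
    return bus_bits / 8 * (clock_mhz * transfers_per_clock) / 1000

# 256-bit GDDR5 at 900 MHz -> 3600 MT/s effective -> 115.2 GB/s
gddr5_256 = bandwidth_gb_s(256, 900, 4)

# The hypothetical "1800 MHz GDDR3" on the same 256-bit bus lands at the
# same 115.2 GB/s, which is the equivalence the comment above describes.
gddr3_256 = bandwidth_gb_s(256, 1800, 2)

# A 512-bit GDDR3 setup reaches higher totals despite the lower per-pin rate.
gddr3_512 = bandwidth_gb_s(512, 1107, 2)
```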
