Final Words

Due to circumstances quite beyond our control, this will be essentially the third time we've covered the Radeon HD 4850. AMD has managed to make the $200 price point very exciting and competitive, and the less powerful version of RV770 that is the 4850 is a great buy for the performance it delivers.

As for the newcomer, the Radeon HD 4870 is not only based on an efficient architecture (both in terms of performance per area and per watt), it is an excellent buy as well. Of course, we have to put out the usual disclaimer of "it depends on the benchmark you care about," but in our testing we definitely saw this $300 part perform at the level of NVIDIA's $400 GT200 variant, the GTX 260. That clearly puts the 4870 in a performance class beyond its price.

Once again we see tremendous potential in CrossFire. When it works, it scales extremely well, but when it doesn't, the results aren't very good. You may have noticed better CrossFire scaling in BioShock and The Witcher since our Radeon HD 4850 preview just a few days ago. The reason for the improved scaling is a new driver drop AMD provided us yesterday (and quietly made public) that enables CrossFire profiles for both of these games. The correlation between the timing of our review and AMD addressing poor CF scaling in those two games is suspicious. If AMD is truly going to go the multi-GPU route for its high-end parts, it needs to enable more consistent CF support across the board, regardless of whether or not we feature those games in our reviews.

That being said, AMD's strategy has validity, as we've seen here today. A pair of Radeon HD 4850s can come close to the performance of a GeForce GTX 280, and a pair of Radeon HD 4870s is faster across the board, not to mention that the pair should be $50 less than the GTX 280 and will work on motherboards with Intel chipsets. Quite possibly more important than the potential of AMD's multi-GPU strategy is the fact that it may not even be necessary for the majority of gamers: a single Radeon HD 4850 or Radeon HD 4870 is easily enough to run anything out today. We'll still need large monolithic GPUs (or multi-GPU solutions) to help drive the industry forward, but AMD raised the bar for single-card, single-GPU performance through good design, execution, and timing with RV770. Just as NVIDIA picked the perfect time to release its 8800 GT last year, AMD picked the perfect time to release the 4800 series this year.

Like their RV670-based predecessors, the Radeon HD 4850 and 4870 both implement DX10.1 support and enable GPU computing through AMD's CAL SDK and various high-level language constructs that can compile SPMD code down to run on AMD hardware. While these features are great and we encourage developers to embrace them, we aren't going to recommend cards based on features that aren't yet widely used. Did we mention there's a tessellator in there?

On the GPGPU side of things, we love the fact that both NVIDIA and AMD are sharing more information with us, but developers are going to need more hardware detail. As we mentioned in our GT200 coverage, we are still hoping that Intel's entry into the game will stir things up enough to really get us some great low-level information.

We know that NVIDIA and AMD do a whole lot of things in a similar way, but that their compute arrays are vastly different in the way they handle single threads. These architectural differences mean that each vendor's hardware needs its own optimization techniques, which can make writing fast code for both quite a challenge. The future is wide open in terms of how game developers and GPGPU programmers will favor writing code and what effect that will have on the future performance of NVIDIA and AMD hardware.
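To make the single-thread difference concrete, here is a deliberately simplified sketch (not either vendor's real scheduling model): NVIDIA's units issue one scalar operation per thread per cycle, while RV770's units are 5-wide VLIW, so how many independent operations the compiler can find in a single thread matters far more on AMD hardware. The function and figures below are illustrative assumptions, not measured behavior.

```python
# Caricature of per-unit issue-slot utilization. AMD-style units have 5
# slots that must be filled with *independent* operations from one thread;
# NVIDIA-style units have a single scalar slot per thread.

def slot_utilization(independent_ops: int, slots_per_unit: int) -> float:
    """Fraction of a unit's issue slots filled in one cycle, assuming the
    compiler can only co-issue operations that don't depend on each other."""
    return min(independent_ops, slots_per_unit) / slots_per_unit

# A shader that is one long dependency chain (1 independent op at a time):
print(slot_utilization(1, 5))  # AMD-style 5-wide VLIW unit: 0.2
print(slot_utilization(1, 1))  # NVIDIA-style scalar unit: 1.0

# A shader dominated by 4-component vector math (4 independent ops):
print(slot_utilization(4, 5))  # AMD-style: 0.8
print(slot_utilization(4, 1))  # NVIDIA-style: still 1.0
```

This is why scalar-heavy code can run well on one architecture and poorly on the other, and why optimizing for both at once is a challenge.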

For now, the Radeon HD 4870 and 4850 are both solid values and cards we would absolutely recommend to readers looking for hardware at the $300 and $200 price points, respectively. The fact of the matter is that by NVIDIA's standards, the 4870 should be priced at $400 and the 4850 around $250. You can either look at it as AMD giving you a bargain or NVIDIA charging too much; either way, it's healthy competition in the graphics industry once again (after far too long a hiatus).

173 Comments

  • jay401 - Wednesday, June 25, 2008 - link

    Good but I just wish AMD would give it a full 512-bit memory bus bandwidth. Tired of 256-bit. It's so dated and it shows in the overall bandwidth compared to NVidia's cards with 512-bit bus widths. All that fancy GDDR4/5 and it doesn't actually shoot them way ahead of NVidia's cards in memory bandwidth because they halve the bus width by going with 256-bit instead of 512-bit. When they offer 512-bit the cards will REALLY shine.
  • Spoelie - Thursday, June 26, 2008 - link

    Except that when R600 had a 512-bit bus, it didn't show any advantage over RV670 with a 256-bit bus. And that was GDDR3 vs GDDR3, not GDDR5 like in RV770's case.
  • JarredWalton - Thursday, June 26, 2008 - link

    R600 was a 512-bit ring bus with a 256-bit memory interface (four 64-bit interfaces). Read about it here for a refresher: http://www.anandtech.com/showdoc.aspx?i=2552&p... Besides being more costly to implement, it used a lot of power and didn't actually end up providing provably better performance. I think it was an interesting approach that turned out to be less than perfect... just like NetBurst was an interesting design that turned out to have serious power limitations.
  • Spoelie - Thursday, June 26, 2008 - link

    Except that it was not, that was R520 ;) and R580 is the X19x0 series. That second one proved to be the superior solution over time.

    R600 is the x2900xt, and it had a 1024bit ring bus with 512bit memory interface.
  • DerekWilson - Sunday, June 29, 2008 - link

    yeah, r600 was 512-bit

    http://www.anandtech.com/showdoc.aspx?i=2988&p...

    looking at external bus width is an interesting challenge ... and gddr5 makes things a little more crazy in that clock speed and bus width can be so low with such high data rates ...

    but the 4870 does have 16 memory modules on it ... so that's a bit of a barrier to higher bit width busses ...
  • JarredWalton - Wednesday, June 25, 2008 - link

    I'd argue that the 512-bit memory interface on NVIDIA's cards is at least partly to blame for their high pricing. All things being equal, a 512-bit interface costs a lot more to implement than a 256-bit interface. GDDR5 at 900MHz is effectively the same as GDDR3 at 1800MHz... except no one is able to make 1800MHz GDDR3. Latencies might favor one or the other solution, but latencies are usually covered by caching and other design decisions in the GPU world.
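Jarred's GDDR3-vs-GDDR5 equivalence is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the commonly published clock figures for these cards (assumptions on our part, not measurements): GDDR3 transfers data twice per clock, GDDR5 four times per clock.

```python
# Peak theoretical bandwidth = (bus width in bytes) x (effective transfer rate).
# GDDR5 moves data four times per clock vs. GDDR3's two, which is why
# GDDR5 at 900 MHz matches GDDR3 at 1800 MHz.

def bandwidth_gb_s(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak bandwidth in GB/s from bus width (bits) and data rate (MT/s)."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

print(bandwidth_gb_s(256, 4 * 900))   # 4870: 256-bit GDDR5 @ 900 MHz -> 115.2 GB/s
print(bandwidth_gb_s(256, 2 * 1800))  # same number from GDDR3 @ 1800 MHz -> 115.2 GB/s
print(bandwidth_gb_s(512, 2 * 1107))  # GTX 280: 512-bit GDDR3 @ 1107 MHz -> ~141.7 GB/s
```

So the 4870's 256-bit GDDR5 bus lands within about 20% of the GTX 280's 512-bit GDDR3 bus while being much cheaper to route.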
  • geok1ng - Wednesday, June 25, 2008 - link

    The tests showed what I feared: my 8800GT is getting too old to drive my Apple at 2560x1600, even without AA! But the tests also showed that the 512MB of GDDR5 on the 4870 justifies the higher price tag over the 4850, something that the 3870/3850 pair failed to demonstrate. The question remains: will 1GB of GDDR5 dethrone NVIDIA and rule the 30-inch realm of single-GPU solutions?
  • IKeelU - Wednesday, June 25, 2008 - link

    "It is as if AMD and NVIDIA just started pulling out hardware and throwing it at each other"

    This makes me crack up... I just imagine two bruised and sweaty middle-aged CEOs flinging PCBs at each other, like children in a snowball fight.
  • Thorsson - Wednesday, June 25, 2008 - link

    The heat is worrying. I'd like to see how aftermarket coolers work with a 4870.
  • Final Destination II - Wednesday, June 25, 2008 - link

    http://www.techpowerup.com/reviews/Powercolor/HD_4...

    Look! Compare the PowerColor vs. the MSI.
    Somehow MSI seems to have done a better job, at 4 dB less.
