Meet the 5870

The card we’re looking at today is the Radeon HD 5870, based on the Cypress core.

Compared to the Radeon HD 4870, the 5870 has seen some changes to the board design. AMD has now moved to using a full sheath on their cards (including a backplate), very much like the ones NVIDIA has been using since the 9800 GTX. The card measures 10.5” long, an inch longer than the 4890 and the same length as the 4870X2 and NVIDIA’s GTX lineup.

The change in length means that AMD has moved the PCIe power connectors to the top of the card facing upwards, as there’s no longer enough room in the rear. Facing upwards is also a change from the 4870x2, which had them facing the front of the card. This, in our opinion, makes it easier to plug and unplug the PCIe power connectors, since it’s now possible to see what you’re doing.

Since the card has a TDP of 188W, AMD can still get away with using two 6-pin connectors. This is going to be good news for those of you with older power supplies that don’t feature 8-pin connectors, as previously the fastest cards without 8-pin connectors were the 4890 and GTX 285.
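The arithmetic here works out comfortably. As a back-of-the-envelope check, here is a Python sketch; the per-source wattages are the PCIe specification’s limits, the 188W TDP is the figure above, and the `budget` helper is our own naming:

```python
# Rough power-budget check for the 5870's connector layout. The limits
# below are the PCIe spec's ratings, not AMD-specific numbers.
SLOT_W = 75        # a PCIe x16 slot can supply up to 75W
SIX_PIN_W = 75     # each 6-pin PEG connector is rated for 75W
EIGHT_PIN_W = 150  # an 8-pin PEG connector is rated for 150W

def budget(six_pins=0, eight_pins=0):
    """Maximum board power deliverable with a given connector mix."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

tdp = 188
print(budget(six_pins=2))         # 225W available
print(budget(six_pins=2) >= tdp)  # True: two 6-pin connectors suffice
```

With 225W on tap from the slot plus two 6-pin connectors, the 188W TDP fits with room to spare; an 8-pin connector only becomes necessary past that 225W mark.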

Briefly, the 5850 – which we are not testing today – will be slightly smaller than the 5870, coming in at 9.5”. It keeps the same cooler design; however, its PCIe power connectors are back on the rear of the card.

With the 5800 series, DisplayPort is getting a much-needed kick in the pants. A full-size DisplayPort is standard on all 5800 series cards – prior to this it had been largely absent from reference cards. Along with the DisplayPort, the 5870 reference card carries a dedicated HDMI port and a pair of DVI ports.

Making four ports fit on a card isn’t a trivial task, and AMD has taken an interesting direction in making it happen. Rather than putting every port on the same slot of the bracket as the card itself, one of the DVI ports is raised onto the second bracket. AMD could just as easily have equipped these cards with a single DVI port and used an HDMI-to-DVI adapter for the second. The advantage of going this route is that the 5800 series can still drive two VGA monitors via DVI-to-VGA adapters, while the built-in HDMI port means that no special adapters are necessary to get HDMI with audio. The only catch to this port layout is that the card still only has enough TMDS transmitters to drive two ports. So you can use 2x DVI or 1x DVI + HDMI, but not 2x DVI + HDMI. For 3 DVI-derived ports, you will need an active DisplayPort-to-DVI adapter.
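That TMDS constraint is simple enough to express as a rule. Below is an illustrative Python sketch; the port names and the `config_ok` helper are our own invention, not anything exposed by AMD’s driver:

```python
# The 5870 has four physical outputs but only two TMDS transmitters,
# so at most two TMDS-driven outputs (DVI or HDMI) can be active at once.
# DisplayPort is driven separately and doesn't consume a TMDS link.
TMDS_TRANSMITTERS = 2
TMDS_PORTS = {'DVI1', 'DVI2', 'HDMI'}

def config_ok(active_ports):
    """active_ports: a subset of {'DVI1', 'DVI2', 'HDMI', 'DP'}."""
    tmds_needed = len(active_ports & TMDS_PORTS)
    return tmds_needed <= TMDS_TRANSMITTERS

print(config_ok({'DVI1', 'DVI2'}))          # True:  2x DVI
print(config_ok({'DVI1', 'HDMI'}))          # True:  DVI + HDMI
print(config_ok({'DVI1', 'DVI2', 'HDMI'}))  # False: would need 3 TMDS links
print(config_ok({'DVI1', 'DVI2', 'DP'}))    # True:  DP bypasses TMDS
```

The last case is exactly why an active DisplayPort-to-DVI adapter is needed for a third DVI-derived output: an active adapter does its own conversion, so the third display rides the DisplayPort link rather than a nonexistent third TMDS transmitter.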

With the configuration AMD is using, fitting that second DVI port also means that the exhaust vent of the 5800 series cards does not run the full length of the bracket as is common; rather, it’s a hair over half that length. The smaller vent had us concerned about the 5870’s cooling capabilities, but as you’ll see in our temperature data, even with the smaller exhaust vent the load temperatures are no different from those of the 4870 or 4850, at 89C. And this is in spite of the fact that the 5870 is rated 28W higher than the 4870.

With all of these changes also come some changes in the loudness of the 5870 compared to the 4870. The 27W idle power draw means that AMD can reduce the fan speed somewhat, and they say the fan they’re using now is less noticeable (though not necessarily quieter) than the one on the 4870. In our objective testing the 5870 was no quieter than any of the 4800 series cards at idle, measuring 46.6dB, and at load it’s louder than any of those cards at 64dB. In our subjective testing, however, it has less of a whine. Going by the objective data alone, this is a push at idle and a loss at load.

Speaking of whining, we’re glad to report that the samples we received do not have the characteristic VRM whine/singing that has plagued many last-generation video cards. Most of our GTX cards and roughly half of our 4800 series cards generated this noise under certain circumstances, but the 5870 does not.

Finally, let’s talk about memory. Despite doubling just about everything compared to RV770, Cypress and the 5800 series cards did not double their memory bandwidth. Moving from the 4870 and its 900MHz base memory clock, the 5870 only jumps up 33% to 1.2GHz, in effect increasing the ratio of GPU compute elements to memory bandwidth.

Looking back at the RV770, AMD believes that the cards using GDDR5 were not bandwidth starved, and since they had more bandwidth than they needed, it was not necessary to pursue significantly more of it for Cypress. This isn’t something we can easily test, but in our benchmarks the 5870 never doubles the performance of the 4870, in spite of being nearly twice the card. Graphics processing is embarrassingly parallel, but that doesn’t mean it scales perfectly. The difference may be a product of that, or of the smaller increase in memory bandwidth; we can’t tell. What is certain, however, is that we don’t see any hard memory-bandwidth-limited situations: the 5870 always outperforms the 4870 by a good deal more than 33%.
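The 33% figure falls straight out of the memory clocks. As a sketch – the 256-bit bus width is the known spec shared by RV770 and Cypress, GDDR5 moves 4 bits per pin per base-clock cycle, and the helper name is ours:

```python
# Peak memory bandwidth for a GDDR5 interface: base clock x 4 (quad data
# rate) x bus width in bytes. Clocks here are the 4870's and 5870's.
def gddr5_bandwidth_gbs(base_clock_mhz, bus_width_bits):
    """Peak bandwidth in GB/s."""
    return base_clock_mhz * 1e6 * 4 * (bus_width_bits / 8) / 1e9

hd4870 = gddr5_bandwidth_gbs(900, 256)   # 115.2 GB/s
hd5870 = gddr5_bandwidth_gbs(1200, 256)  # 153.6 GB/s
print(hd4870, hd5870)
print(f"{hd5870 / hd4870 - 1:.0%}")      # 33% more bandwidth
```

Since the shader count doubled while bandwidth grew by only a third, each compute element on Cypress has roughly two-thirds the bandwidth of its RV770 counterpart – which is exactly the ratio shift described above.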


  • RubberJohnny - Thursday, September 24, 2009 - link

    Well silicondoc you sure have some hatred for ATI/love for nvidia.

    It's almost as if you work for the green team...

    You seem to have all this time on your hands to go around the net looking for links to spread FUD...sitting on new egg watching these cards come in and out of stock like you have a vested interest in seeing ATI fail...unlike any sane person it appears you want nvidia to have a monopoly on the industry?

    Maybe you are privy to some inside info over at nvidia and know they have nothing to counter the 5870 with?

    Maybe the cash they paid you to spin these BS comments would have been better spent on R&D?
  • SiliconDoc - Thursday, September 24, 2009 - link

    That's a nice personal, grating, insulting ripppp, it's almost funny, too.
    ---
    The real problems remain.
    I bring up this stuff because of course, no one else will, it is almost forbidden. Telling the truth shouldn't be that hard, and calling it fairly and honestly should not be such a burden.
    I will gladly take correction when one of you noticing insulters has any to offer. Of course, that never comes.
    Break some new ground, won't you ?
    I don't think you will, nor do I think anyone else will - once again, that simply confirms my factual points.
    I guess I'll give you a point for complaining about delivery, if that's what you were doing, but frankly, there are a lot of complainers here no different - let's take for instance the ATI Radeon HD 4890 vs. NVIDIA GeForce GTX 275 article here.
    http://www.anandtech.com/video/showdoc.aspx?i=3539">http://www.anandtech.com/video/showdoc.aspx?i=3539
    Boy, the red fans went into rip mode, and Anand came in and changed the articles (Derek's) words and hence "result", from GTX275 wins to ATI4890 wins.
    --
    No, it's not just me, it's just the bias here consistently leans to ati, and wether it's rooting for the underdog that causes it, or the brooding undercurrent hatred that surfaces for "the bigshot" "greedy" "ripoff artist" "nvidia overchargers" "industry controlling and bribing" "profit demon" Nvidia, who knows...
    I'm just not afraid to point it out, since it's so sickening, yes, probably just to me, "I'm sure".
    How about this glaring one I have never pointed out even to this day, but will now:
    ATI is ALWAYS listed first, or "on top" - and of course, NVIDIA, second, and it is no doubt, in the "reviewer's minds" because of "the alphabet", and "here we go in alphabetical order".
    A very, very convenient excuse, that quite easily causes a perception bias, that is quite marked for the readers.
    But, that's ok.
    ---
    So, you want to tell me why I shouldn't laugh out loud when ATI uses NVIDIA cards to develope their "PhysX" competition Bullet ?
    ROFLMAO
    I have heard 100 times here (from guess whom) that the ati has the wanted "new technology", so will that same refrain come when NVIDIA introduces their never before done MIMD capable cores in a few months ? LOL
    I can hardly wait to see the "new technology" wannabes proclaiming their switched fealty.
    Gee sorry for noticing such things, I guess I should be a mind numbed zombie babbling along with the PC required fanning for ati ?
  • silverblue - Thursday, September 24, 2009 - link

    No; if he did work for nVidia, he'd be far better informed and far less prone to using the phrase "red rooster" every five seconds.
  • crackshot91 - Wednesday, September 23, 2009 - link

    Any possibility of benchmarks with a core 2 duo?

    I wanna know if it will be necessary to upgrade to an i5 or i7 (All new mobo) to see big performance gains over my 8800GT. Will a C2D E6750 @ 3.2GHz bottleneck it?
  • Ryan Smith - Wednesday, September 23, 2009 - link

    Our recent Core i7 860 article should do an adequate job of answering that question. Several of the benchmarks were taken right out of this article.
  • therealnickdanger - Wednesday, September 23, 2009 - link

    You dedicated a full page to the flawless performance of its A/V output, but didn't mention it in the "features" part of the conclusion. It's a very powerful feature, IMO. Granted, this card may be a tad too hot and loud to find a home in a lot of HTPCs, but it's still an awesome feature and you should probably append your conclusion... just a suggestion though.

    Ultimately, I have to admit to being a little disappointed by the performance of this card. All the Eyefinity hype and playable framerates at massive 7000x3000 resolutions led me to believe that this single card would scale down and simply dominate everything at the 30" level and below. It just seems logical, so I was taken aback when it was beat by, well, anything else. I expected the 5870 and 5870CF to be at the top of every chart. Oh well.

    Awesome article though! I'm sure there's a 5850 in my future!
  • MrMom - Wednesday, September 23, 2009 - link

    Does anyone have a good explanation why the massive HD5870 is still slower/@par with the GTX295?

    Thanks
  • SiliconDoc - Thursday, September 24, 2009 - link

    Yes, because the ati core "really sucks". It needs DDR5, and much higher MHZ to compete with Nvidia, and their what, over 1 year old core. LOL Even their own 4870x2.
    Or the 3 year old G92 vs the ddr3 "4850" the "topcore" before yesterday. (the ati topcore minus the well done 3m mhz+ REBRAND ring around the 4890)
    That's the sad, actual truth. That's the truth many cannot bear to bring themselves to realize, and it's going to get WORSE for them very soon, with nvidia's next release, with ddr5, a 512 bit bus, and the NEW TECHNOLOGY BY NVIDIA THAT ATI DOES NOT HAVE MIMD capable cores.
    Oh, I can hardly wait, but you bet I'm going to wait, you can count on that 100%.

  • Spoelie - Thursday, September 24, 2009 - link

    because those are 2 480mm² dies, while this is only 1 360mm² die?
  • Griswold - Wednesday, September 23, 2009 - link

    Its one GPU instead of two, maybe?
