Meet the 5870

The card we’re looking at today is the Radeon HD 5870, based on the Cypress core.

Compared to the Radeon HD 4870, the 5870 has seen some changes to the board design. AMD has now moved to using a full sheath on its cards (including a backplate), very much like the ones NVIDIA has been using since the 9800 GTX. The card measures 10.5” long, an inch longer than the 4890 and the same length as the 4870x2 and the NVIDIA GTX lineup.

The change in length means that AMD has moved the PCIe power connectors to the top of the card facing upwards, as there’s no longer enough room in the rear. Facing upwards is also a change from the 4870x2, which had them facing the front of the card. This, in our opinion, makes it easier to plug and unplug the PCIe power connectors, since it’s now possible to see what you’re doing.

Since the card has a TDP of 188W, AMD can still get away with using two 6-pin connectors. This is going to be good news for those of you with older power supplies that don’t feature 8-pin connectors, as previously the fastest cards without 8-pin connectors were the 4890 and GTX 285.
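As a quick sanity check, the connector math works out; here's a back-of-the-envelope sketch using the standard PCI-SIG per-source limits (these are generic spec figures, not AMD-supplied data):

```python
# PCI-SIG power delivery limits per source (spec figures, not AMD data)
PCIE_SLOT_W = 75      # power available through the x16 slot itself
SIX_PIN_W = 75        # per 6-pin auxiliary connector
EIGHT_PIN_W = 150     # per 8-pin auxiliary connector, for comparison

HD5870_TDP_W = 188

# A slot plus two 6-pin connectors covers the 5870's rated TDP
budget_two_six_pin = PCIE_SLOT_W + 2 * SIX_PIN_W
print(budget_two_six_pin)                  # 225W available
print(budget_two_six_pin >= HD5870_TDP_W)  # True: no 8-pin needed
```

With only 37W of headroom, though, it's easy to see why anything much hotter than the 5870 would force a move to an 8-pin connector.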

Briefly, the 5850, which we are not testing today, will be slightly smaller than the 5870, coming in at 9.5”. It keeps the same cooler design; however, the PCIe power connectors return to the rear of the card.

With the 5800 series, DisplayPort is getting a much-needed kick in the pants. A full-size DisplayPort is standard on all 5800 series cards – prior to this it has been largely absent from reference cards. Along with the DisplayPort, the 5870 reference card carries a dedicated HDMI port and a pair of DVI ports.

Making 4 ports fit on a card isn’t a trivial task, and AMD has taken an interesting direction in making it happen. Rather than putting every port on the same slot of the bracket as the card itself, one of the DVI ports is raised onto the second bracket. AMD could have just as easily equipped these cards with only 1 DVI port and used an HDMI-to-DVI adapter for the second port. The advantage of going this direction is that the 5800 series can still drive two VGA monitors when using DVI-to-VGA adapters, while the built-in HDMI port means no special adapters are necessary to get HDMI with audio capabilities. The only catch to this port layout is that the card still only has enough TMDS transmitters for two of these ports. So you can use 2x DVI or 1x DVI + HDMI, but not 2x DVI + HDMI. For 3 DVI-derived ports, you will need an active DisplayPort-to-DVI adapter.
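The transmitter constraint above can be summed up in a few lines. This is an illustrative sketch of the rule as we understand it, not AMD's actual display logic: DVI and HDMI each consume one TMDS transmitter, while DisplayPort drives its own link.

```python
# Illustrative model of the 5870's two-TMDS-transmitter limit.
# DVI and HDMI each need a transmitter; DisplayPort does not.
def fits_tmds(dvi=0, hdmi=0, transmitters=2):
    """Return True if the requested DVI/HDMI mix fits the hardware."""
    return dvi + hdmi <= transmitters

print(fits_tmds(dvi=2))          # True: 2x DVI works
print(fits_tmds(dvi=1, hdmi=1))  # True: DVI + HDMI works
print(fits_tmds(dvi=2, hdmi=1))  # False: needs an active DP adapter
```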

With the configuration AMD is using, fitting that second DVI port also means that the exhaust vent of the 5800 series cards doesn’t run the usual full length of the card; rather, it’s a hair over half the length. The smaller size had us concerned about the 5870’s cooling capabilities, but as you’ll see in our temperature data, even with the smaller exhaust vent the load temperatures are no different from the 4870 or 4850, at 89C. And this is in spite of the fact that the 5870’s TDP is rated 28W higher than the 4870’s.

With all of these changes come some changes in the loudness of the 5870 as compared to the 4870. The 27W idle power load means that AMD can reduce the fan speed somewhat, and they say that the fan they’re using now is less noticeable (but not necessarily quieter) than the one on the 4870. In our objective testing the 5870 was no quieter at idle than any of the 4800 series cards, at 46.6dB, and indeed it’s louder than any of those cards at load, at 64dB. But in our subjective testing it has less of a whine. If you go by the objective data, this is a push at idle and louder at load.

Speaking of whining, we’re glad to report that the samples we received do not have the characteristic VRM whine/singing that has plagued many last-generation video cards. Most of our GTX cards and roughly half of our 4800 series cards generated this noise under certain circumstances, but the 5870 does not.

Finally, let’s talk about memory. Despite doubling just about everything compared to RV770, Cypress and the 5800 series cards did not double their memory bandwidth. Moving from the 4870 and its 900MHz base memory clock, the 5870 only jumps up by 33% to 1.2GHz, in effect increasing the ratio of GPU compute elements to memory bandwidth.
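The bandwidth numbers behind that 33% figure are easy to work out, assuming the 256-bit GDDR5 bus both cards share (GDDR5 transfers 4 bits per pin per base-clock cycle):

```python
BUS_WIDTH_BITS = 256  # memory bus width shared by the 4870 and 5870

def gddr5_bandwidth_gbs(base_clock_mhz):
    """Peak bandwidth in GB/s for a GDDR5 bus at a given base clock."""
    effective_gbps_per_pin = base_clock_mhz * 4 / 1000  # quad data rate
    return effective_gbps_per_pin * BUS_WIDTH_BITS / 8

hd4870 = gddr5_bandwidth_gbs(900)   # 115.2 GB/s
hd5870 = gddr5_bandwidth_gbs(1200)  # 153.6 GB/s
print(hd5870 / hd4870)              # ~1.33, the 33% jump in the text
```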

When looking back at the RV770, AMD believes that they were not bandwidth starved on the cards that used GDDR5. And since they had more bandwidth than they needed, it was not necessary to go for significantly more bandwidth for Cypress. This isn’t something we can easily test, but in our benchmarks the 5870 never doubles the performance of the 4870, in spite of being nearly twice the card. Graphics processing is embarrassingly parallel, but that doesn’t mean it scales perfectly. The difference may be a product of that, or of the limited scaling in memory bandwidth; we can’t tell. What is certain, however, is that we don’t see any hard memory-bandwidth-limited situations: the 5870 always outscores the 4870 by a great deal more than 33%.


327 Comments


  • SiliconDoc - Monday, September 28, 2009 - link

    When the GTX295 still beats the latest ati card, your wish probably won't come true. Not only that, ati's own 4870x2, just recently promoted here as the best value, is a slap in its face.
    It's rather difficult to believe all those crossfire-promoting red ravers suddenly getting a different religion...
    Then we have the no DX11 released yet, and the big, big problem...
    NO 5870'S IN THE CHANNELS, reports are it runs hot and the drivers are beta problematic.
    ---
    So, celebrating a red revolution of market share - is only your smart aleck fantasy for now.
    LOL - Awwww...

  • silverblue - Monday, September 28, 2009 - link

    It's nearly as fast as a dual GPU solution. I'd say that was impressive.

    DirectX 11 comes out in less than a month... hardly a wait. It's not as if the card won't do DX9/10.

    Hot card? Designed to be that way. If it was a real issue they'd have made the exhaust larger.

    Beta problematic drivers? Most ATI launches seem to go that way. They'll be fixed soon enough.
  • SiliconDoc - Monday, September 28, 2009 - link

    Gee, I thought the red rooster said nvidia sales will be low for a while, and I pointed out why they won't be, and you, well you just couldn't handle that.
    I'd say a 60.96% increase in a next gen gpu is "impressive", and that's what Nvidia did just this last time with GT200.
    http://www.anandtech.com/video/showdoc.aspx?i=3334...
    --
    BTW - the 4870 to 4890 move had an additional 3M core transistors, and we were told by you and yours that was not a "rebrand".

    BUT - the G80 move to G92 added 73M core transistors, and you couldn't stop shrieking "rebrand".
    ---
    nearly as fast= second best
    DX11 in a month = not now and too early
    hot card = IT'S OK JUST CLAIM ATI PLANNED ON IT BEING HOT! ROFL, IT'S OK TO LIE ABOUT IT IN REVIEWS, TOO! COCKA DOODLE DOOO!
    beta drivers = ALL ATI LAUNCHES GO THAT WAY, NOT "MOST"
    ----
    Now, you can tell smart aleck this is a paper launch like the 4870, the 4770, and now this 5870 and no 5850, because....
    "YOU'LL PUT YOUR HEAD IN THE SAND AND SCREAM IN CAPS BECAUSE THAT'S HOW YOU ROLL IN RED ROOSTERVILLE!"
    (thanks for the composition Jared, it looks just as good here as when you add it to my posts, for "convenience" of course)
  • ClownPuncher - Monday, September 28, 2009 - link

    It would be awesome if you were to stop posting altogether.
  • SiliconDoc - Monday, September 28, 2009 - link

    It would be awesome if this 5870 was 60.96% better than the last ati card, but it isn't.
  • JarredWalton - Monday, September 28, 2009 - link

    But the 5870 *is* up to 65% faster than the 4890 in the tested games. If you were to compare the GTX 280 to the 9800 GX2, it also wasn't 60% faster. In fact, 9800 GX2 beat the GTX 280 in four out of seven tested games, tied it in one, and only trailed in two games: Enemy Territory (by 13%) and Oblivion (by 3%), making ETQW the only substantial win for the GT200.

    So we're biased while you're the beacon of impartiality, I suppose, since you didn't intentionally make a comparison similar to comparing apples with cantaloupes. Comparing ATI's new card to their last dual-GPU solution is the way to go, but NVIDIA gets special treatment and we only compare it with their single GPU solution.

    If you want the full numbers:

    1) On average, the 5870 is 30% faster than the 4890 at 1680x1050, 35% faster at 1920x1200, and 45% faster at 2560x1600.

    2) Note that the margin goes *up* as resolution increases, indicating the major bottleneck is not memory bandwidth at anything but 2560x1600 on the 5870.

    3) Based on the old article you linked, GTX 280 was on average 5% slower than 9800X2 and 59% faster than the 9800 GTX - the 9800X2 was 6.4% faster than the GTX 280 in the tested titles.

    4) Making the same comparisons, 5870 is only 3.4% faster than the 4870X2 in the tested games and 45% faster than the 4890HD.

    Now, the games used for testing are completely different, so we have to throw that out. DoW2 is a huge bonus in favor of the 5870 and it scales relatively poorly with CF, hurting the X2. But you're still trying to paint a picture of the 5870 as a terrible disappointment when in fact we could say it essentially equals what NVIDIA did with the GTX 280.

    On average, at 2560x1600, if NVIDIA's GT300 were to come out and be 60% faster than the GTX 285, it will beat ATI's 5870 by about 15%. If it's the same price, it's the clear choice... if you're willing to wait a month or two. That's several "ifs" for what amounts to splitting hairs. There is no current game that won't run well on the HD 5870 at 2560x1600, and I suspect that will hold true of the GT300 as well.

    (FWIW, Crysis: Warhead is as bad as it gets, and dropping 4xAA will boost performance by at least 25%. It's an outlier, just like Crysis, since the higher settings are too much for anything but the fastest hardware. "High" settings are more than sufficient.)
  • SiliconDoc - Tuesday, September 29, 2009 - link

    In other words, even with your best fudging and whining about games and all the rest, you can't even bring it with all the lies from the 15-30 percent people are claiming up to 60.96%
    --
    Yes, as I thought.
  • zshift - Thursday, September 24, 2009 - link

    My thoughts exactly ;)

    I knew the 5870 was gonna be great based on the design philosophy that AMD/ATi had with the 4870, but I never thought I'd see anything this impressive. LESS power, with MORE power! (pun intended), and DOUBLE the speed, at that!

    Funny thing is, I was actually considering an Nvidia gpu when I saw how impressive PhysX was on Batman AA. But I think I would rather have near double the frame rates compared to seeing extra paper fluffing around here and there (though the scenes with the scarecrow are downright amazing). I'll just have to wait and see how the GT300 series does, seeing as I can't afford any of this right now (but boy, oh boy, is that upgrade bug itching like it never has before).
  • SiliconDoc - Thursday, September 24, 2009 - link

    Fine, but performance per dollar is on the very low end, often the lowest of all the cards. That's why it was omitted here.
    http://www.techpowerup.com/reviews/ATI/Radeon_HD_5...
    THE LOWEST overall, or darn near it.
  • erple2 - Friday, September 25, 2009 - link

    So what you're saying then is that everyone should buy the 9500 GT and ignore everything else? If that's the most important thing to you, then clearly, that's what you mean.

    I think that the performance per dollar metrics that are shown are misleading at best and terrible at worst. It does not take into account that any frame rates significantly above your monitor refresh are for all intents and purposes wasted, and any frame rates significantly below 30 should by heavily weighted negatively. I haven't seen how techpowerup does their "performance per dollar" or how (if at all) they weight the FPS numbers in the dollar category.

    SLI/Crossfire has always been a lose-lose in the "performance per dollar" category. Curiously, I don't see any of the nvidia SLI cards listed (other than the 295).

    That sounds like biased "reporting" on your part.
