Meet The 5670

Today’s launch is the Redwood-based Radeon HD 5670. The 5670 is a full Redwood card, with all of its functional units enabled and running at its “full” clock speed. The card we’re looking at is clocked at 775MHz core and 1GHz (4GHz data rate) on the GDDR5 RAM. With a 128-bit memory bus, this gives the card 64GB/sec of memory bandwidth.
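That 64GB/sec figure follows directly from the bus width and the effective data rate. As a quick sanity check (variable names are ours, not AMD's):

```python
# Rough sanity check of the 5670's quoted memory bandwidth.
# GDDR5 transfers data four times per memory clock, so a 1GHz
# clock works out to a 4GHz effective data rate.
memory_clock_hz = 1_000_000_000        # 1GHz base memory clock
data_rate = memory_clock_hz * 4        # GDDR5 quad data rate -> 4GHz
bus_width_bits = 128
bus_width_bytes = bus_width_bits // 8  # 16 bytes per transfer

bandwidth_gb_per_sec = data_rate * bus_width_bytes / 1e9
print(bandwidth_gb_per_sec)  # 64.0
```

The same arithmetic explains why the 5700 series, with the same 128-bit bus but a 4.8GHz data rate, lands at 76.8GB/sec.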


AMD stock photo, our sample boards are black and don't have CF connectors

AMD will be launching the card in 512MB and 1GB configurations. The $99 card we’re looking at is the 512MB model, while the 1GB model will run $15-$20 more.

Attached to the card are four 128MB Hynix GDDR5 RAM chips. These chips are specified for a 4GHz data rate, which means AMD is finally pairing a 5000-series card with appropriately fast RAM. Consequently, unlike the 5700 and 5800 series, there won’t be any freebie memory overclocking to take advantage of a gap between the card’s clocks and what the RAM is specified for. What you see is what you get.

As is common for cards targeted at the sub-$100 price range, the 5670 runs sans external power. AMD puts the TDP for the card at 61W, which compares favorably to the 70W of the GT 240 that we saw last week. AMD tells us that they were merely designing this card to be under 75W, and that the 61W TDP of the shipping product is a good bit lower than they had been planning on.

With the lower power usage of this card, the need for a dual-slot cooler (and the 5000 series’ distinctive shroud) is gone. The 5670 is equipped with a slightly larger than normal single-slot blower, which blows air towards the front of the card. We call the cooler slightly larger than normal because AMD has extended the heatsink portion to cover all of the GDDR5 RAM chips on the card, as evidenced by the heatsink jutting out of the top. This is an interesting design choice, since other cards like the 5750 do not apply any cooling to their GDDR5 RAM chips. This does leave us wondering whether cooling the RAM is necessary, or if AMD is doing it for cosmetic reasons.

The card measures 6.61”, and finally drops AMD’s traditional port configuration. By moving to a single slot, AMD has dropped the 2nd DVI port, leaving the card with a DisplayPort, an HDMI port, and a dual-link DVI port. The card will still be able to drive a second DVI monitor using an HDMI-to-DVI adapter, although only over a single link. The 5670 retains full Eyefinity capabilities, with a 3rd monitor hooked up to the DisplayPort for that task. AMD tells us that the Redwood chip can actually drive 4 monitors, but none of the launch cards will be configured for that (not that the 5800 cards were either). AMD’s ideal Eyefinity configuration for this card is to pair it up with a trio of cheap 16:9 19” monitors, although as we’ll see, the card doesn’t really have enough power for gaming like this.

The need for an active DisplayPort adapter is still an issue, however, and at $99 the adapters cost as much as, if not more than, the card itself. At this point the best solution is a DisplayPort-native monitor, but those are still fairly rare and seldom cheap.

73 Comments

  • Spoelie - Friday, January 15, 2010 - link

    Because the GPU can never truly be isolated, the CPU/memory/buses need to perform some work too to keep the GPU fed with data and instructions to process.
  • Slaimus - Thursday, January 14, 2010 - link

    It was not too long ago that the GeForce 6200 debuted at $150. Low-end gaming cards are slowly picking up in price again.
  • dagamer34 - Thursday, January 14, 2010 - link

    When do the low profile 5650/5670 cards come out? I've been waiting for one for my HTPC to bitstream Blu-ray HD codecs.
  • SmCaudata - Thursday, January 14, 2010 - link

    Unless you already have an HTPC, why would anyone get this card? If building a new HTPC you could get a Clarkdale to bitstream the audio codecs.

    Also...why do we care if it is bitstreamed? I have a receiver that can decode this, but it doesn't matter if the digital information is converted to PCM before or after the HDMI cable. The only advantage is to see those lights on the front of my receiver...
  • papapapapapapapababy - Thursday, January 14, 2010 - link

    future what? dx11 at 5fps? no thanks ati, remember the 4770? that was a good sub $100 card, (thanks) this crap is overpriced, $45 or bust.

  • TheManY2K3 - Thursday, January 14, 2010 - link

    Ryan,

    I don't see that any of the applications at 12x10 include data for the 8800GT; however, you are comparing the 8800GT to the HD5670 in most applications.

    Could you include the 8800GT in the 12x10 data, so that we can accurately gauge the performance of the HD5670?
  • Ryan Smith - Thursday, January 14, 2010 - link

    The 8800 GT data was originally collected for past articles, where we started at 16x10. The 8800 GT isn't part of my collection (it's Anand's) so I wasn't able to get 12x10 data in time for this article.
  • silverblue - Thursday, January 14, 2010 - link

    It's probably fair to point out that, in most tests, the 5670 is very close to the 8800, and as such listing it may not mean anything. However, the 1280x1024 tests are also without AA - it might be nice to see the effect of turning AA on with this oldie but goodie as compared to the more modern competition, so including it may make sense. You may think that the higher core clock of the 5670 would give it an advantage without AA but if it goes anything like Batman, this would probably be an incorrect assumption as well.
  • pjladyfox - Thursday, January 14, 2010 - link

    Last I looked ANY Radeon card with the x5xx, x6xx, or x7xx model number was denoted as a mainstream card which is clearly noted here:

    http://en.wikipedia.org/wiki/Radeon#Product_naming...

    By that definition, these cards were designed to run in systems with 350 to 400W power supplies, support HD-quality video, and support games at resolutions no higher than 1440x900 at medium quality settings with 2x AA and 8x anisotropic filtering. By testing them at settings that most people will never run these cards at, the results are for the most part worthless.

    I mean who cares how these cards run at 1920x1200 at high detail settings since we already know they're going to fail anyway? I'm more interested in how these run with all the details on at say 1440x900 or possibly 1680x1050 which are the more common widescreen monitors most people have.

    For that matter where are details about how these cards compare running HD quality video, if the fan speed can be controlled via speedfan, or even if they have fixed some of the video quality issues like black crush when outputting via HDMI?
  • Ryan Smith - Thursday, January 14, 2010 - link

    We traditionally run every game at 3 resolutions, particularly since some games are more GPU-intensive than others. Based on the 12x10 performance, I decided that 10x7 would be silly and instead went up a level to 19x12 - a few games were actually playable even at those high resolutions.

    16x10 is accounted for, and 12x10 is close enough to 14x9 that the results are practically the same.

    HD Video: All the 5000 series cards, except perhaps the Cedar are going to be exactly the same.

    Fan speed: Can be modified (I use it to cool down the cards before extraction after running FurMark)

    Black Crush: I honestly have no idea
