After a holiday break, AMD’s staggered launch of the Evergreen family picks back up today with the launch of the Radeon HD 5670. The 5670 marks the desktop launch of Redwood, the 3rd chip in the Evergreen family, designed to fit in below the Juniper chip that powers the Radeon HD 5700 series.

| | ATI Radeon HD 5750 | ATI Radeon HD 4850 | ATI Radeon HD 4770 | ATI Radeon HD 5670 | ATI Radeon HD 4670 |
|---|---|---|---|---|---|
| Stream Processors | 720 | 800 | 640 | 400 | 320 |
| Texture Units | 36 | 40 | 32 | 20 | 32 |
| ROPs | 16 | 16 | 16 | 8 | 8 |
| Core Clock | 700MHz | 625MHz | 750MHz | 775MHz | 750MHz |
| Memory Clock | 1.15GHz (4.6GHz data rate) GDDR5 | 993MHz (1986MHz data rate) GDDR3 | 800MHz (3200MHz data rate) GDDR5 | 1000MHz (4000MHz data rate) GDDR5 | 1000MHz (2000MHz data rate) GDDR3 |
| Memory Bus Width | 128-bit | 256-bit | 128-bit | 128-bit | 128-bit |
| Frame Buffer | 1GB / 512MB | 1GB / 512MB | 512MB | 1GB / 512MB | 1GB / 512MB |
| Transistor Count | 1.04B | 956M | 826M | 627M | 514M |
| TDP | 86W | 110W | 80W | 61W | 59W |
| Manufacturing Process | TSMC 40nm | TSMC 55nm | TSMC 40nm | TSMC 40nm | TSMC 55nm |
| Price Point | $129 - $149 | $99 - $129 | $129 | $99 / $119 | $60 - $90 |

AMD has been relatively straightforward in designing the Evergreen family: each chip is half of its bigger brother. This means that the Redwood chip and the 5670 are in most ways half of a Juniper/5770: half the stream processors (400), half the ROPs (8), half the texture units (20), etc. The core clock is also slightly lower than on the 5870 and 5770; here we have 775MHz instead of the 850MHz found on those cards. So on paper, the 5670 should deliver slightly less than half the performance of a 5770.
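To put numbers to the on-paper comparison, here is a quick back-of-the-envelope sketch (our own helper function, assuming the usual 2 FLOPs per stream processor per clock for AMD's VLIW5 architecture):

```python
# Peak single-precision throughput for AMD VLIW5 GPUs, assuming
# 2 FLOPs (one multiply-add) per stream processor per clock.
def peak_gflops(stream_processors, core_clock_mhz):
    return stream_processors * 2 * core_clock_mhz / 1000.0

hd5670 = peak_gflops(400, 775)  # 620.0 GFLOPS
hd5770 = peak_gflops(800, 850)  # 1360.0 GFLOPS
print(f"5670/5770 ratio: {hd5670 / hd5770:.3f}")  # ~0.456, slightly under half
```

The ratio comes out to roughly 0.46 rather than a clean 0.50, which is the "slightly less than half" figure: the halved units account for the 0.5, and the lower core clock shaves off the rest.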

The one hardware unit that hasn't been halved is the memory bus – we still have the same 128-bit GDDR5 memory bus as found on the 5770, but here it's clocked at a 4GHz data rate rather than 4.8GHz. So the 5670 has a higher bandwidth-to-compute ratio than the 5770 does.
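The bandwidth arithmetic behind that claim can be sketched as follows (our own function; the peak GFLOPS figures are the standard stream-processors × 2 × clock estimates):

```python
# Memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8.0

hd5670_bw = bandwidth_gbs(128, 4.0)  # 64.0 GB/s
hd5770_bw = bandwidth_gbs(128, 4.8)  # 76.8 GB/s

# Bandwidth per unit of peak compute (GB/s per GFLOPS):
print(hd5670_bw / 620)   # ~0.103 on the 5670
print(hd5770_bw / 1360)  # ~0.056 on the 5770
```

Relative to its compute throughput, the 5670 gets nearly twice the memory bandwidth of the 5770, so it should be less bandwidth-starved than a strict halving would imply.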

In nearly chopping Juniper in half, AMD has brought the transistor count down from 1.04B to 627M. Those transistors occupy 104mm², which is understandably smaller than the 5770, but also smaller than the RV730 GPU that powers the Radeon HD 4670, the card the 5670 replaces. This smaller die brings load power down to 61W, and idle power down to 14W.

While most of the functional units have been halved, the feature set remains otherwise unchanged from the rest of the 5000 series. DirectX 11, UVD2 video decoding, angle-independent anisotropic filtering, HDMI bitstreaming, and supersample anti-aliasing are all accounted for. Eyefinity is also here, using a slightly different port configuration to retain support for three-monitor setups.

At $99, the 5670 is intended to stake out the all-important sub-$100 position for video cards, which is a big price point for price-sensitive buyers and OEMs. Bear in mind that the entire sub-$100 market encompassed 2/3rds of all video card sales last quarter, according to AMD and Mercury Research. Given the low transistor count and small die size of the 5670, we expect that AMD will have a lot of price latitude to work with going forward – as 40nm production costs and GDDR5 costs come down, this board should be cheaper to make than the 4670 ever was.

AMD considers the chief competition for this board to be the NVIDIA GeForce GT 240, which we reviewed last week. However, this price point also brings AMD into competition with last year's parts: the GeForce 9800 GT and the Radeon HD 4850. The former is in good supply, and the latter is still available in sufficient quantities to be a viable alternative. As we'll see, this is by no means a slam-dunk for AMD today.

Coming from CES, we had a chance to talk to vendors about the TSMC 40nm situation, which has been a thorn in AMD's side since the launch of the 4770 last year. What we're hearing is that the situation is improving (which is why 5800 series cards are finally staying in stock), but that it's still not as good as everyone would like. For this launch there are 50,000+ cards, which should be more than enough to satisfy demand, so we don't expect any supply issues with the 5670.


  • JarredWalton - Sunday, January 17, 2010 - link

    As far as the CPU multiplier goes, if you have the i7-720QM the normal multiplier is 12X (133MHz bus * 12 = 1.6GHz). For the i7-820QM the stock multiplier is 13X (1.73GHz). Maximum Turbo on the 720QM is 2.80GHz, so you could potentially see a 21X multiplier, while on the 820QM the maximum Turbo is 3.066GHz, so you'd see up to a 23X multiplier. I don't know if ThrottleStop tells you max and min multipliers or not, but you could even run CPU-Z and just watch to see if the multiplier is changing a lot.
  • SlyNine - Tuesday, January 19, 2010 - link

    Yeah, I have been watching a few programs, including ThrottleStop, RealTemp, RealTemp GT, and i7 Turbo. They all show the max multiplier at 7-9X when gaming under load; even with an external monitor hooked up and the laptop's screen off it doesn't go past 10X. It's worth noting that with the screen brightness turned down and a CPU-only load they stay at 12X, but turn the brightness up and the multiplier falls to 8X.

    The biggest problem is the clock modulation, which I'm trying to test. It definitely correlates with real-world performance: while Task Manager may show the CPU at 100%, ThrottleStop reports a 75% reduction in CPU usage. This also correlates with the delta between the CPU usage Task Manager indicates and the C0 state percentage shown by programs like i7 Turbo and RealTemp. Task Manager will show 100% while the C0% sits at 25%, indicating a 75% reduction while under load.

    Perhaps ThrottleStop just measures the difference between the C0% and what the OS reports.

    I've custom-set all the settings in the advanced power options to be the same on and off battery. When you unplug the system, it runs a great deal faster, albeit at the risk of harming the battery. I've also disabled SpeedStep, with no difference.

    Excel isn't my strong suit (basically I'm going to have to relearn how to use it), but I'm trying to correlate frame rate with the indicated clock modulation. I'm unsure how to record a timeline of FPS, though. It does appear that the FPS accurately reflect when the clock modulation kicks in.
  • satish2685 - Monday, April 01, 2013 - link

    Hi, I would like to purchase an entry-level 1GB DDR3 Asus Radeon HD 5450 graphics card, but I only have a 250W PSU. Is it OK to buy a graphics card that calls for a minimum 400W PSU and connect it to my existing motherboard, or do I need to upgrade my PSU? Advice required. If so, what consequences could I face in the future?
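As an aside, the turbo-multiplier arithmetic JarredWalton walks through in the comments above is easy to verify; a minimal sketch, assuming the nominal 133.33MHz Nehalem-era base clock:

```python
BCLK_MHZ = 133.33  # nominal base clock for mobile Nehalem (Clarksfield) CPUs

def multiplier_for(target_ghz):
    # Core clock = base clock * multiplier, so the multiplier is just the ratio.
    return round(target_ghz * 1000.0 / BCLK_MHZ)

print(multiplier_for(1.60))   # 12 -> i7-720QM stock
print(multiplier_for(2.80))   # 21 -> i7-720QM max Turbo
print(multiplier_for(3.066))  # 23 -> i7-820QM max Turbo
```

This is why a multiplier stuck at 7-9X under load, as reported above, indicates heavy throttling: it corresponds to roughly 0.9-1.2GHz on a chip whose stock clock is 1.6GHz.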
