Conclusion

Compared to AMD’s previous generation of bottom-tier cards, the Radeon HD 5450 doesn’t offer too many surprises. Cards at this end of the spectrum have to give up a great deal of performance to meet their cost, power, and form-factor targets, and the 5450 is no different. It certainly produces playable framerates in most games (and even at high settings in some of them), and it’s going to be a great way to convince IGP users to move up to their first discrete GPU. But at the bottom tier, spending a little more money has always bought a significantly more powerful video card, and that hasn’t changed with the 5450.

The concern we have right now is the same concern we’ve had for most of AMD’s other launches, which is the price. The card we tested is a $60 card, smack-dab in the middle of the territory for the Radeon HD 4550, the DDR2 Radeon HD 4650, and the DDR2 GT 220. We don’t have the DDR2 cards on hand, but the performance gap between bottom-tier cards like the 5450 and those cards is enough that the DDR2 penalty won’t come close to closing the gap. If performance is all you need and you can’t spend another dime, then a last-generation card from the next tier up is going to offer more performance for the money. The 5450 does have DX11, but it’s not fast enough to make practical use of it.

Things are more in the 5450’s favor, however, once we move away from gaming performance. As a passively cooled low-profile card, its competition is the slower GeForce 210 and a handful of Radeon HD 4550s. The 4550 is still the better card from a performance standpoint, but the gap isn’t huge. Meanwhile the 5450 runs cooler and draws less power.

Currently it’s HTPC use that puts the 5450 in the most favorable light. As the Cheese Slices test proved, it’s not quite the perfect HTPC card, but it’s very close. Certainly it’s the best passively cooled card we have tested from an image quality perspective, and it’s the only passive card with audio bitstreaming. If you specifically want or need Vector Adaptive Deinterlacing, the Radeon HD 5670 is still the cheapest/coolest/quietest card that’s going to meet your needs. But for everyone else the 5450 is plenty capable and is as close to being perfect as we’ve seen any bottom-tier card get.

To that end the Sapphire card looks particularly good, since based on our testing they're able to drop the reference 5450's clumsy double-wide heatsink for a single-wide heatsink without the card warming up too much more. For Small Form Factor PCs in particular, it's going to be a better choice than any card that uses the reference heatsink, so long as there's enough clearance for the part of the heatsink on the back side of the card.

Moving away from the 5450 for a moment: along with the Radeon HD 5770, this is the only card in the 5000 series that is directly comparable to a 4000-series card. In fact it’s the most similar of the two, being virtually identical to the 4550 in terms of functional units and memory speeds. With this card we can finally pin down something we couldn’t quite establish with the 5770: clock-for-clock, the 5000 series is slower than the 4000 series.

This is especially evident on the 5450, which has a 50MHz core clock advantage over the 4550 and yet, with everything else held equal, still loses to the 4550 by upwards of 10%. The deficit appears to be worst in shader-heavy games, which leads us to believe the actual cause is the move from DX10.1 shader hardware on the 4000 series to DX11 shader hardware on the 5000 series. In other words, the shaders in particular seem to be what’s slower.
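The clock-for-clock arithmetic can be sketched in a few lines (a hypothetical illustration: the 650MHz/600MHz core clocks are the cards’ reference specifications, and the ~10% figure is the worst-case gap observed in our shader-heavy tests):

```python
# Back-of-the-envelope clock-for-clock comparison (illustrative only).
# Reference core clocks: Radeon HD 5450 at 650MHz, Radeon HD 4550 at 600MHz.
clock_5450 = 650.0  # MHz
clock_4550 = 600.0  # MHz

# Worst-case observed result: the 5450 delivers ~90% of the 4550's
# framerate despite its clock advantage.
fps_ratio = 0.90  # 5450 fps / 4550 fps

clock_ratio = clock_5450 / clock_4550      # ~1.083x clock advantage
per_clock_ratio = fps_ratio / clock_ratio  # per-MHz throughput ratio

print(f"Clock advantage: {clock_ratio:.3f}x")
print(f"Per-clock throughput vs. 4550: {per_clock_ratio:.2f}x")
# i.e. roughly 17% less work per clock in the worst case
```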

AMD made several changes here, including adding features for DX11 and rearranging the caching system for GPGPU use. We aren’t sure whether the slowdown is a hardware issue, or if it’s the shader compiler being unable to fully take advantage of the new hardware. It’s something that’s going to bear keeping an eye on in future driver revisions.

This brings us back to where we are today, with the launch of the 5450. AMD has finally pushed the final Evergreen chip out the door, bringing an end to their 6-month launch plan and bringing DirectX 11 hardware to the entire lineup, from top to bottom, all before NVIDIA could launch a single DX11 card. AMD is still fighting to get more 40nm production capacity, but the situation is improving daily, and even TSMC’s problems didn’t stop AMD from executing the entire launch in 6 months. With the first Cedar card launched, we’re now going to have a chance to see how AMD chooses to fill in the obvious gaps in its pricing structure, and more importantly how NVIDIA will ultimately respond to a fully launched 5000 series.

77 Comments

  • Lifted - Thursday, February 4, 2010 - link

    The first graph on each of the benchmark pages lists a 5670, the second graph lists a 4670. Typo or are you actually using different cards?
  • Ryan Smith - Thursday, February 4, 2010 - link

    It's not a typo. We never ran the 5670 at 1024x768, there was no reason to. It's more than fast enough for at least 1280.

    The 4670 data is from the GT 240 review, which we used 1024 on (because GT 240 couldn't cut the mustard above 1024 at times).
  • 8steve8 - Thursday, February 4, 2010 - link

    should have had the clarkdale igp in there for good measure, if you aren't gaming I'd guess that igp would be the way to go
  • MrSpadge - Thursday, February 4, 2010 - link

    Would have been interesting to compare idle power consumption: Clarkie + IGP vs. Clarkie + 5450.
  • Ryan Smith - Thursday, February 4, 2010 - link

    Testing a Clarkie requires switching out our test rig, so the results wouldn't be directly comparable since it means switching out everything including the processor. Plus we only have a couple of Clarkies, which are currently in use for other projects.

    At this point Clarkie (and any other IGP) is still less than half as fast as 5450.
  • strikeback03 - Thursday, February 4, 2010 - link

    That brings up the point though that with a card this low on the totem pole it might be nice to include a benchmark or two of it paired with similarly low-priced hardware. I understand the reason for generally using the same testbed, but when it is already borderline playable it would be nice to know that it won't get any slower when actually paired with a cheap processor and motherboard.
  • kevinqian - Thursday, February 4, 2010 - link

    Hey Ryan, I'm glad you are the first reviewer to utilize Blaubart's very helpful deinterlacing benchmark. I would just like to note that with ATI, memory bandwidth seems to play a big part in the deinterlacing method as well. For example, the HD 4650 DDR2 can only perform MA deinterlacing, even though it has the same shaders as the (VA-capable) 4670; the only bottleneck there seems to be the DDR2 memory bandwidth. On the other hand, the HD 4550 has DDR3 but is limited to a 64-bit memory interface, so that seems to be the limiting factor.

    I have an old HD 2600 Pro DDR2 AGP. When I OC the memory from 800MHz stock to 1000MHz, VA gets activated by CCC and confirmed in Cheese Slices.

    NVIDIA's deinterlacing algorithms seem to be less memory intensive, as even the GT 220 with DDR2 is able to perform VA-like deinterlacing.
  • Ryan Smith - Thursday, February 4, 2010 - link

    Yeah, I've seen the bandwidth idea thrown around. Unfortunately I don't have any additional suitable low-end cards for testing it.
  • ET - Thursday, February 4, 2010 - link

    I think I remember reading that the interpolation of input values in the pixel shader was moved from fixed function units to being done by the shaders.
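As a back-of-the-envelope check on the bandwidth theory discussed above, the standard peak-bandwidth formula can be applied to the quoted overclock (a sketch only: the 2600 Pro's 128-bit bus width, and treating the quoted 800MHz/1000MHz clocks as the effective data rate, are assumptions rather than figures from the review):

```python
def memory_bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    effective_clock_mhz: data rate in MT/s (already doubled for DDR).
    bus_width_bits: width of the memory interface.
    """
    bytes_per_transfer = bus_width_bits / 8
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

# Assumed 128-bit bus on the HD 2600 Pro, quoted clocks taken as
# the effective data rate:
stock = memory_bandwidth_gbps(800, 128)   # 12.8 GB/s
oc = memory_bandwidth_gbps(1000, 128)     # 16.0 GB/s
print(f"stock: {stock:.1f} GB/s, overclocked: {oc:.1f} GB/s")
```

If a bump of roughly 3GB/s is what flips VA deinterlacing on, that is consistent with bandwidth, rather than shader count, being the gating factor.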
