A Closer Look at RV610 and RV630

The RV6xx parts are similar to the R600 hardware we've already covered in detail, but there are a few major differences between the two classes of hardware. First and foremost, the RV6xx GPUs include full video decode acceleration for MPEG-2, VC-1, and H.264 encoded content through AMD's UVD hardware. There was some confusion over this when R600 first launched, but AMD has since confirmed that the UVD hardware is simply not present in its high end part.

We also have a difference in manufacturing process. R600 is built on an 80nm TSMC process aimed at high speed transistors, while RV610 and RV630 are fabbed on a 65nm TSMC process aimed at lower power consumption. The end result is that these GPUs run much cooler and draw much less power than their big brother, R600.

Transistor speed between these two processes ends up being similar in spite of the focus on power over performance at 65nm. RV610 is built with 180M transistors, while RV630 contains 390M. This is certainly down from the huge transistor count of R600, but nearly 400M is nothing to sneeze at.

Aside from the obvious differences in transistor count and in the number of functional units (shaders, texture units, etc.), the only other major difference is memory bus width. All RV610 based hardware will have a 64-bit memory bus, while RV630 based parts will feature a 128-bit connection to memory; we run a quick bandwidth comparison after the block diagrams below. Here's the layout of each GPU:


[Figure: RV630 Block Diagram]

[Figure: RV610 Block Diagram]
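
To put those bus widths in perspective, here is a quick back-of-envelope bandwidth calculation, a minimal sketch in Python. The effective data rates used (800 MT/s and 1400 MT/s) are illustrative assumptions, not the official memory specs of any particular board:

# Peak theoretical memory bandwidth from bus width and effective data rate.
# The data rates below are illustrative assumptions, not official board specs.

def bandwidth_gb_per_s(bus_width_bits, effective_rate_mt_per_s):
    # (bus width in bytes) * (transfers per second), reported in GB/s
    return (bus_width_bits / 8) * effective_rate_mt_per_s * 1e6 / 1e9

print("64-bit bus  @  800 MT/s: %.1f GB/s" % bandwidth_gb_per_s(64, 800))    # RV610-class bus
print("128-bit bus @ 1400 MT/s: %.1f GB/s" % bandwidth_gb_per_s(128, 1400))  # RV630-class bus

At the same memory speed, halving the bus width halves peak bandwidth, which is one more factor keeping the RV610 parts at the very bottom of the lineup.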


One of the first things that jumps out of these block diagrams is that both RV6xx designs feature only one render back end block. This part of the chip is responsible for alpha (transparency) and fog, final z/stencil buffer operations, sending MSAA samples back up to the shader to be resolved, and ultimately blending fragments and writing out final pixel color. Maximum pixel fill rate is limited by the number of render back ends.

In the case of both current RV6xx GPUs, we can only draw a maximum of 4 pixels per clock (or 8 z/stencil-only operations per clock). While we don't expect extreme resolutions to be run on these parts (at least not in games), we could run into issues with effects that make heavy use of MRTs (multiple render targets), z/stencil buffers, and antialiasing. With the move to DX10, we expect developers to make use of the additional MRTs available to them, and AA does more good at the lower resolutions these cards target than it does at high resolutions. We would really like to see more pixel draw power here. Our performance tests will reflect the fact that AA is not kind to AMD's new parts, both because of the lack of hardware MSAA resolve and because of the single render back end.
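
As a rough illustration of how a single render back end caps color throughput, here is a minimal sketch. The 700MHz core clock, the 60 fps target, and the render target counts are assumptions chosen for illustration, not specs or measurements of any shipping card:

# How a single render back end (4 pixels per clock) bounds color throughput.
# Clock, frame rate, and MRT counts below are illustrative assumptions only.

def peak_fill_mpix(rbe_pixels_per_clock, core_clock_mhz):
    # Peak color fill rate in megapixels per second
    return rbe_pixels_per_clock * core_clock_mhz

def fullscreen_writes_per_frame(fill_mpix, width, height, fps, render_targets=1):
    # How many times the back end could write every pixel of every render
    # target per frame, at best -- a budget for overdraw and MRT passes.
    pixels_per_frame = width * height * render_targets
    return (fill_mpix * 1e6 / fps) / pixels_per_frame

fill = peak_fill_mpix(4, 700)  # hypothetical 700MHz core: 2800 Mpix/s
print("Peak fill rate: %d Mpix/s" % fill)
print("1280x1024 @ 60 fps, 1 target: %.1f full-screen writes/frame"
      % fullscreen_writes_per_frame(fill, 1280, 1024, 60))
print("1280x1024 @ 60 fps, 4 MRTs:   %.1f full-screen writes/frame"
      % fullscreen_writes_per_frame(fill, 1280, 1024, 60, 4))

And that budget only shrinks once MSAA enters the picture, since the samples have to be passed back through the shader for resolve rather than being handled entirely in the back end.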

Among the notable features we will see here are tessellation, which could have an even larger impact on low end hardware by enabling detailed and realistic geometry, and AMD's CFAA filtering options. Unfortunately, we might not see much initial use made of the tessellation hardware, and with the reduced pixel draw and shading power of the RV6xx series, we are a little skeptical of the benefits of CFAA.

From here, let's move on and take a look at what we actually get in retail products.

Comments

  • TA152H - Thursday, June 28, 2007 - link

    Because not everyone is going to run Rainbow Six, duh!!!!!!

    For some people these cards would be fine because they aren't running all the titles here, or are willing to run them at lower resolutions so they don't have to hear some damn egg beater in their computer. Resolution isn't important to everyone; not everyone is some jackass kid who thinks blowing up space aliens at the highest resolution is what life is all about, and plenty of people would be willing to sacrifice some of that for something quieter and cooler. I would have thought that much was obvious.
  • DerekWilson - Thursday, June 28, 2007 - link

    We would have tested the 2400 Pro if we had been able to get a hold of one. AMD was not able to send us a 2400 Pro, so we'll have to wait until we can get one from one of their board partners.
  • DerekWilson - Thursday, June 28, 2007 - link

    I'm gonna disagree.

    DX9 is much more important in these tests. How many people used a 9500 or an FX 5600 to play any serious DX9 games (read hl2 or better)? And how long did they have to wait for it when it finally mattered?

    The reason we do real world tests is because we want to evaluate how the card will behave in normal use. To the customer, the hardware is only as good as the software that runs on it. And right now the software that runs on these parts is almost exclusively DX9.

    It'll be at least a year or so before we see any real meaningful DX10 titles. Remember TRAOD, Tron 2.0 and Halo? Not the best DX9 implementations even if they were among the first.

    DX10 tests are certainly interesting, and definitely relevant. But I think DX9 is much more important right now.
  • TA152H - Thursday, June 28, 2007 - link

    Yes, but you miss the point that these cards were made for DX10. There are already some titles out, and they will become more and more popular, although initially without all the features. It obviously wasn't the focus of the product at all, so why make it yours?

    Let me ask you a simple question. If you were buying a card, even today, would you buy it for the performance of DX9, or DX10? If you had the choice of two cards, one that had obscenely bad DX9 performance, but good DX10, and the other the reverse, which would you choose? I'd choose the one that performs well on DX10, because that's where things are going, and I'd put up with poor DX9 performance while new titles came out. However, these might suck on DX10 too, that's what we need to know.
  • swaaye - Thursday, June 28, 2007 - link

    Well, Radeon 9700 didn't have too much trouble rocking DirectX 8 games. Nor did GeForce FX (hell that's all it was really good for). G80 slaughters other cards at DirectX 9 games. I highly, highly doubt that these new cards are optimized for DirectX 10. How can they be? The first cards of each generation are usually disappointments for the new APIs.
  • TA152H - Thursday, June 28, 2007 - link

    You're missing the point, I'm not saying it will, I'm saying let's see.

    But, let's be realistic, at the price of these cards, they aren't going to be extremely powerful, but they have a great feature set for the price. For a lot of people, these are going to be good cards.

    Having said that, I'm inclined to agree they probably will not have great DX10 performance, but they didn't even test it. Strange, to say the least. Some of their decisions are baffling, and you wonder how much thought they actually put into them, if any.

    I also agree the first generation for a feature set isn't great. I'm not expecting much, but I'll withhold criticism until I see the results. Besides, in the case of the 2400, wouldn't you think that with this type of feature set, for $60 or so, it would be a very good product for a lot of people running Vista? It's not going to be for the alien blasters, of course, but don't you think it's got some market?
  • Tamale - Thursday, June 28, 2007 - link

    you make it sound like you'd never even play any of the games tested in this review. wouldn't you be mad your "midrange" card performed this awful on OLDER technology games?

    i don't understand why anyone WOULDN'T care about dx9 performance when there are so many good dx9 games out there...
  • swaaye - Thursday, June 28, 2007 - link

    And before you rip me apart for bringing up 9700 and telling me how awesome it was for DX9, remember the mid-range 6600 GT beat it handily. Both are designed for the same API.
  • erple2 - Thursday, June 28, 2007 - link

    You're comparing apples to oranges here. Remember, the 9700 was the FIRST DX9 part available from ATI. The 6600GT was the second-gen DX9 part from NVidia. I WILL say that the 9700 was light-years ahead of the competing nVidia DX9 part, the 5800 Ultra.

    Your statement is more or less the same as saying that the 9700 was crap, because the 7600GT handily beat it (ok, I'm slightly exaggerating here...)

    The point is that this is a reversal of the DX9 situation. The 9700 did handily beat the 5800 in DX8 generation games. In this case, the 8800GTX handily beats the 2900XT (the jury's still out on the 8800GTS).

    I view this more as the 2600 being similar to the horribly performing (in DX9) GeForce 5600... At least the 5600 did reasonably well in DX8 games...
  • swaaye - Friday, June 29, 2007 - link

    No, I agree that the HD2600 and 2400 are reminiscent of the FX 5600 and FX5200. They are pretty awful. And I'm not going to sit here and dreamily imagine 3x the performance when they are running more complex DX10 shader code. I think these cards are flops and that's all they will really ever be. For non-gamers and HD video people, the only people who should buy these, they will of course be fine.

    If you want to play games, don't jump to DX10 dreaming. How many years did it take for DX9 to become the only API in use? Years. DX9 arrived in 2002 and only a couple of years ago at best was it becoming the primary API. UT2004, for example, is basically a DX7 engine. Guild Wars arrived with a DX8 renderer.

    DX9 had multiple OS's backing it. DX10 is Vista only. Its adoption rate is likely to really be slowed down due to this and the fact that the only cards with remotely decent DX10 performance are $300+.

    I brought up 9700 and 6600GT just to say that the first generation of cards for a new API is never very good at that API.
