A Closer Look at RV610 and RV630

The RV6xx parts are similar to the R600 hardware we've already covered in detail, but there are a few major differences. First and foremost, the RV6xx GPUs include full video decode acceleration for MPEG-2, VC-1, and H.264 content through AMD's UVD hardware. There was some confusion over this when R600 first launched, but AMD has since confirmed that UVD hardware is not present at all in its high end part.

There is also a difference in manufacturing process. R600 is built on an 80nm TSMC process aimed at high speed transistors, while RV610 and RV630 based cards are fabbed on a 65nm TSMC process aimed at lower power consumption. The end result is that these GPUs will run much cooler and require much less power than their big brother, R600.

Transistor speed between these two processes ends up being similar in spite of the focus on power over performance at 65nm. RV610 is built with 180M transistors, while RV630 contains 390M. This is certainly down from the huge transistor count of R600, but nearly 400M is nothing to sneeze at.

Aside from the obvious differences in transistor count and the number of functional units (shaders, texture units, etc.), the only other major difference is memory bus width. All RV610 based hardware will have a 64-bit memory bus, while RV630 based parts will feature a 128-bit connection to memory. Here's the layout of each GPU:

RV630 Block Diagram

RV610 Block Diagram
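To put those bus widths in perspective, peak memory bandwidth is just bus width times effective memory clock. A minimal sketch follows; note that the memory clocks used here are illustrative assumptions for the math, not confirmed specs for any shipping card.

```python
# Peak memory bandwidth = (bus width in bytes) * effective memory clock.
# The memory clocks below are illustrative assumptions, not official specs.

def peak_bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: int) -> float:
    """Theoretical peak memory bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# RV610 parts use a 64-bit bus; RV630 parts use a 128-bit bus.
print(f"RV610, 64-bit @ 1000MHz effective (assumed): "
      f"{peak_bandwidth_gbps(64, 1000):.1f} GB/s")
print(f"RV630, 128-bit @ 1600MHz effective (assumed): "
      f"{peak_bandwidth_gbps(128, 1600):.1f} GB/s")
```

Whatever the final clocks, the 2:1 bus width ratio means an RV630 card will have twice the bandwidth of an RV610 card at the same memory speed.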

One of the first things that jumps out is that both RV6xx designs feature only one render back end block. This part of the chip is responsible for alpha (transparency) and fog, final z/stencil buffer operations, sending MSAA samples back up to the shader to be resolved, and ultimately blending fragments and writing out final pixel color. Maximum pixel fill rate is limited by the number of render back ends.

In the case of both current RV6xx GPUs, we can only draw a maximum of 4 pixels per clock (or 8 z/stencil-only ops per clock). While we don't expect extreme resolutions to be run on these parts (at least not in games), we could run into issues with effects that make heavy use of MRTs (multiple render targets), z/stencil buffers, and antialiasing. With the move to DX10, we expect developers to make use of the additional MRTs available to them, and lower resolutions also benefit more from AA than higher resolutions do. We would really like to see more pixel draw power here. Our performance tests will reflect the fact that AA is not kind to AMD's new parts, both because of the lack of hardware MSAA resolve and because of the single render back end.
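The fill rate limit above is simple arithmetic: pixels per clock times core clock. Here's a quick sketch; the core clocks are assumptions chosen for illustration, not confirmed specifications.

```python
# Back-of-the-envelope peak fill rate: pixels_per_clock * core_clock.
# Core clocks below are illustrative assumptions, not official specs.

def peak_fill_rate_mpix(pixels_per_clock: int, core_clock_mhz: int) -> int:
    """Theoretical peak fill rate in Mpixels/s (or M z/stencil ops/s)."""
    return pixels_per_clock * core_clock_mhz

# Both RV6xx parts have a single render back end: 4 pixels/clock,
# or 8 z/stencil-only operations per clock.
for name, clock_mhz in [("RV610 @ 700MHz (assumed)", 700),
                        ("RV630 @ 800MHz (assumed)", 800)]:
    print(f"{name}: {peak_fill_rate_mpix(4, clock_mhz)} Mpix/s, "
          f"{peak_fill_rate_mpix(8, clock_mhz)} M z-ops/s")
```

Even at an optimistic clock, 4 pixels per clock leaves these parts with a fraction of the pixel draw power of GPUs with four render back ends.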

Among the notable features here are tessellation, which could have an even larger impact on low end hardware by enabling detailed and realistic geometry cheaply, and CFAA filtering options. Unfortunately, we might not see much initial use made of the tessellation hardware, and with the reduced pixel draw and shading power of the RV6xx series, we are a little skeptical of the benefits of CFAA.

From here, let's move on and take a look at what we actually get in retail products.




  • kilkennycat - Thursday, June 28, 2007 - link

    nVidia is well into development of the 8xxx-family successors. If you don't like any of the current Dx10 offerings, keep your wallets in your pockets till late this year or very early next year. Double-precision floating-point compute paths (think a full-fledged GPGPU, fully capable of mixing and matching GPU-functionality and compute horsepower for particle-physics etc.) with HD-decode hardware-assist integrated in all versions. Likely all on 65nm. And no doubt finally filling in the performance-gap around $200 to quiet the current laments and wailings from all sides.

    Crysis is likely to run just fine in DX9 on your current high-end DX9 cards. Enjoy the game, upgrading your CPU/Motherboard if Crysis and other next-gen games make good use of multiple cores. Defer the expenditure on prettier graphics to a more-opportune, higher-performance (and less expensive) time. Do you really, really want to invest in a first-generation Dx10 card (unless you want the HD-decode for your HTPC)? For high-end graphics cards the 8800-series is getting long-in-the-tooth, and the prices are not likely to fall much further due to the very high manufacturing cost of the giant G80 die, plus the 2900XT is not an adequate answer. All of the major upcoming game titles are fully-compatible with Dx9. Some developers may be bribed(?) by Microsoft to make their games Vista-only to push Vista's lagging sales, but Vista-only or not, no current game is restricted to Dx10-only... that would be true commercial suicide with the current tiny market-penetration of Dx10 hardware.
  • Slaimus - Thursday, June 28, 2007 - link

    Looks like ATI is giving up the high end again. The 2600XT/Pro is priced against the 8600GT/8500GT with the price drop, and the 2400Pro is well below them.

    It will work with the OEMs, but not with game developers and players.

    I guess we will see a cut-down 2900GT or something like that to fill the $150-$350 bracket where they have no DX10 products.
  • Goty - Thursday, June 28, 2007 - link

    Why are there no power consumption tests? I thought AT was all over this performance-per-watt nonsense?
  • smitty3268 - Thursday, June 28, 2007 - link

    Especially after the article made a point of saying that these cards were built to maximize power efficiency rather than speed.
  • avaughan - Thursday, June 28, 2007 - link

    Also missing are noise levels.
  • SandmanWN - Thursday, June 28, 2007 - link

    And overclocking...
  • Regs - Thursday, June 28, 2007 - link

    Just when I thought things were getting better. This whole 6-12 months has been just one long disappointment.

    Mid-low range cards that perform sometimes worse than last generation?

    All these guys are selling now is hardware with a different name. I've never seen such ridiculous stuff in my life. I hope AMD didn't spend too much money on producing these cards. How much money do you have to spend to make a card perform worse than last generation's lineup? Complete lack of innovation and a complete lack of any sense. I just can't make any sense at all out of this.

    I think a 7900GS or a X1800 is the way to go for mid range this year. Though to tell you the truth, I wouldn't give AMD any money right now, and hopefully then they will get rid of their CEO, who seems to not be pulling his weight.
  • TA152H - Thursday, June 28, 2007 - link

    I don't agree with all your reasons, but I agree with Hector Ruiz going. This ass-clown has been plaguing the company for too long, and he has no vision and only a penchant for whining about Intel's anti-competitive practices.

    He really needs to go. Now!
  • defter - Thursday, June 28, 2007 - link

    "How much money do you have to spend to make a card perform worse than last generations line up? Complete lack of innovation and a complete lack of any sense. I just can't make any sense at all out of this."

    They have the same problem that NVidia had with the GeForce FX. They spent a lot of money on an exotic new architecture that turned out to be very inefficient in terms of performance per transistor.
  • DerekWilson - Thursday, June 28, 2007 - link

    Except that this is their second generation of a unified shader architecture. The first incarnation is the XBox360 Xenos.
