A Closer Look at RV610 and RV630

The RV6xx parts are similar to the R600 hardware we've already covered in detail. There are a few major differences between the two classes of hardware. First and foremost, the RV6xx GPUs include full video decode acceleration for MPEG-2, VC-1, and H.264 encoded content through AMD's UVD hardware. There was some confusion over this when R600 first launched, but AMD has since confirmed that UVD hardware is not at all present in their high end part.

We also have a difference in manufacturing process. R600 uses an 80nm TSMC process aimed at high-speed transistors, while RV610 and RV630 based cards are fabbed on a 65nm TSMC process aimed at lower power consumption. The end result is that these GPUs will run much cooler and require much less power than their big brother, the R600.

Transistor speed between these two processes ends up being similar in spite of the focus on power over performance at 65nm. RV610 is built with 180M transistors, while RV630 contains 390M. This is certainly down from the huge transistor count of R600, but nearly 400M is nothing to sneeze at.

Aside from the obvious differences in transistor count and the number of functional units (shaders, texture units, etc.), the only other major difference is in memory bus width. All RV610 based hardware will have a 64-bit memory bus, while RV630 based parts will feature a 128-bit connection to memory. Here's the layout of each GPU:


RV630 Block Diagram

RV610 Block Diagram


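As a rough illustration of what the 64-bit and 128-bit buses mean for memory bandwidth, the short sketch below simply multiplies bus width by an effective memory data rate. The 1600MHz effective clock is a placeholder chosen for comparison only; actual memory speeds vary from board to board.

    # Peak theoretical memory bandwidth = (bus width in bits / 8) * effective data rate.
    # The 1600MHz effective memory clock used below is a placeholder, not a board spec.

    def peak_bandwidth_gb_per_s(bus_width_bits, effective_mem_clock_mhz):
        bytes_per_transfer = bus_width_bits / 8.0
        return bytes_per_transfer * effective_mem_clock_mhz * 1e6 / 1e9

    print(peak_bandwidth_gb_per_s(64, 1600))   # RV610-style 64-bit bus  -> 12.8 GB/s
    print(peak_bandwidth_gb_per_s(128, 1600))  # RV630-style 128-bit bus -> 25.6 GB/s
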
One of the first things to jump out is that both RV6xx based designs feature only one render back end block. This part of the chip is responsible for alpha (transparency) and fog, final z/stencil buffer operations, sending MSAA samples back up to the shader to be resolved, and ultimately blending fragments and writing out final pixel color. Maximum pixel fill rate is limited by the number of render back ends.

In the case of both current RV6xx GPUs, we can only draw a maximum of 4 pixels per clock (or 8 z/stencil-only ops per clock). While we don't expect extreme resolutions to be run on these parts (at least not in games), we could run into issues with effects that make heavy use of MRTs (multiple render targets), z/stencil buffers, and antialiasing. With the move to DX10, we expect developers to make use of the additional MRTs available to them, and lower resolutions benefit from AA more than higher resolutions do. We would really like to see more pixel draw power here. Our performance tests will reflect the fact that AA is not kind to AMD's new parts, due to the lack of hardware MSAA resolve as well as the use of only one render back end.
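
To put the single render back end in concrete terms, here is a minimal sketch of the fill rate arithmetic; the 700MHz core clock is a hypothetical number used only to show how throughput scales with clock speed, not a shipping specification.

    # One render back end: 4 color pixels per clock, or 8 z/stencil-only ops per clock.
    # The 700MHz core clock is hypothetical and only illustrates the scaling.

    def fill_rates(core_clock_mhz, pixels_per_clock=4, z_ops_per_clock=8):
        pixel_rate = core_clock_mhz * pixels_per_clock  # in Mpixels/s
        z_rate = core_clock_mhz * z_ops_per_clock       # in millions of z/stencil ops/s
        return pixel_rate, z_rate

    print(fill_rates(700))  # -> (2800, 5600): 2.8 Gpixels/s color, 5.6 Gops/s z-only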

Among the notable features we will see here are tessellation, which could have an even larger impact on low end hardware by enabling detailed and realistic geometry, and CFAA filtering options. Unfortunately, we might not see much initial use made of the tessellation hardware, and with the reduced pixel draw and shading power of the RV6xx series, we are a little skeptical of the benefits of CFAA.

From here, let's move on and take a look at what we actually get in retail products.

96 Comments

  • IKeelU - Thursday, June 28, 2007 - link

    Wow, now I feel even better about my 8800GTS 320MB purchase.
  • LionD - Thursday, June 28, 2007 - link

    This article scores Radeon X1950Pro approximately 1.5 times lower than iXBT. Why is it so?
  • OCedHrt - Thursday, June 28, 2007 - link

    Are these drivers newer than 7.6?
  • DerekWilson - Thursday, June 28, 2007 - link

    these drivers are beta 7.7
  • erwos - Thursday, June 28, 2007 - link

    I was really hoping that AMD would pull a rabbit out of the hat and release something competitive (read: faster) with the 8600GTS. Clearly, they didn't.

    Now I've got to decide between an 8600GTS and an 8800GTS for my new build. I like the PureVideo features in the 8600GTS, but I'm not sure I'll really need them if I've got a Q6600. Then again, I'm not sure I'll really need the full gaming performance of the 8800GTS either. Bleh.

    Maybe I'll just stick with an 8600GTS for now, and upgrade to the inevitable 8900GTS.
  • autoboy - Thursday, June 28, 2007 - link

    Since we all know these cards suck for games, please make the UVD article really complete. I know you are going to be doing CPU tests, and you are going to test them with core2duos, but I beg you to test these systems for what they are made for, allowing crappy systems to play HD video. Try testing these cards with a sempron @ 1.6Ghz. You could also try finding the lowest possible cpu speed while HD video still plays smoothly by adjusting the multiplier. That could be pretty interesting and help people out who don't overbuild their HTPCs.

    Also, please try to run HQV benchmarks for both DVD and HD DVD for all the cards. We all know the 2600XT will get good scores, but the 2400pro will most likely be the best card for HTPC use (because nobody will ever play games with these crappy cards) and reviewers usually don't review the low end models for HQV scores. Many times they don't score the same as their big brothers. If you can't get a 2400pro, you could underclock a 2400XT.
  • kilkennycat - Thursday, June 28, 2007 - link

    A terrific suggestion. Since it is now very obvious that all of the current sub-$200 DX10 cards from both nVidia and AMD/ATi are really targeted for HTPCs and the "casual" gamer -- the bulk of the PC add-on market. Not all of Anandtech's readers are bleeding-edge gaming "enthusiasts".

    (Derek, I hope you take note of this little thread)
  • Frumious1 - Thursday, June 28, 2007 - link

    I almost agree... just don't listen to that BS about a Sempron CPU! Seriously, are you people running 1.6 GHz Sempron chips with $100 GPUs? I doubt that any single core can handle H.264, even with a good GPU helping out (though it would be somewhat interesting to see if I'm wrong). Considering X2 3600+ chips start at a whopping $63 and the 3800+ is only $5 more, I think that would be a far better choice. Those are the somewhat lower power 65nm chips as well, and the dual cores mean you might actually be able to manage video encoding on your HTPC.

    What, you don't encode video with your HTPC!? I've got an E6600 in mine, because Windows MCE 2005 sucks up about ~3.5GB per hour of high-quality analog video. I can turn those clips into 700MB DivX video with no discernable quality loss, or I can even go to 350MB per hour and still have nearly the same quality. Doing so on a single core Sempron, though? Don't make me laugh! You'd end up spending five hours to encode a thirty minute show. If you record more than two hours of video per day, you could never catch up!
  • autoboy - Thursday, June 28, 2007 - link

    I am perfectly happy with my sempron 1.6Ghz. I have no problem with OTA HD mpeg2, and I can play any downloaded file I've found. It just keeps chugging along at 1.1V using less than 20W at full load, allowing me to put my HTPC in a nearly enclosed space, and run the fans at a low 500rpm. I can't upgrade to dual core on a Socket 754 board, and I'm not about to upgrade an entire system when this little gem of a $50 graphics card will allow me to run the one thing my cpu can't handle, HD-DVDs.

    Also, why would I want to re-encode my TV shows when 500Gig harddrives are only $100? For the rare times I do encode, I use my dual core office PC or my gaming rig, or I could just start it at night and come back tomorrow. I've never been in a big hurry to re-encode old episodes of America's Got Talent.

    Also, you are wrong about the Sempron handling H.264. Mine can handle downloaded 720p content already, and a Chinese site has already confirmed that the UVD can easily run on a Sempron 1.6 with lots of CPU to spare.
  • lumbergeek - Thursday, June 28, 2007 - link

    There you go. Personally, I want to see next week's review of UVD vs. Purevideo. I seriously hope that they include 2400s and 2600s in the review along with 8600s and 8500s. THAT sort of information is what will form the basis for my decision on my next Vid Card. My C2D isn't a gaming machine, but a HTPC. If the 2400 series is as good at video as the 2600, then silent wins - big time.
