Finally. We're finally getting somewhere interesting in the graphics industry. Although they're sure to return, the days of reviewing $600 graphics card after $600 graphics card are on hiatus, and instead we're reviewing a new class of mainstream cards with earth-shattering performance.

NVIDIA's GeForce 8800 GT kicked off the trend, in one fell swoop making almost all of NVIDIA's own product line obsolete thanks to its high performance and low price tag (we'll talk about that last part shortly). But what we saw there wasn't a fluke; it was a preemptive strike against AMD, which has been hard at work on an affordable GPU of its own.

This new product, like the 8800 GT, would be aimed squarely at the $150 - $250 market segment, something both AMD and NVIDIA did a horrible job of addressing with their mainstream releases earlier this year (the 2600 and 8600 both sucked, guys).

Introducing the RV670

AMD's two new graphics cards launching today are both based on a new GPU, referred to internally as the RV670. The basic architecture of the hardware is largely unchanged from R600; some functionality has been added and a great deal of internal bandwidth removed, but other than that this is very much an R600-based part.

The biggest news about this part is that it is fabbed on TSMC's 55nm process. This is a half-node process based on 65nm technology, giving AMD an advantage in die size (and thus cost), and potentially in clock speed and/or power.
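
To put the half-node shrink in perspective, here is a rough, back-of-the-envelope sketch of the ideal die-area savings from a straight 65nm-to-55nm optical shrink; real layouts never scale perfectly, so treat the number as an upper bound:

    # Ideal area scaling for a 65nm -> 55nm optical shrink.
    # Real designs never hit this number; treat it as an upper bound.
    old_node_nm = 65.0
    new_node_nm = 55.0

    area_ratio = (new_node_nm / old_node_nm) ** 2
    print(f"Ideal die area vs. 65nm: {area_ratio:.2f}x")  # ~0.72x, i.e. ~28% smaller

Smaller dies mean more candidate chips per wafer, which is where the cost advantage comes from.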

Historically, AMD's RV series parts have been cost-cut versions of its R series designed for lower-end volume markets, and that's where RV670 started. Right off the bat, half the external and internal memory bandwidth of R600 was cut out. The external memory interface dropped from 512-bit to 256-bit, but AMD stuck with eight memory channels (each narrowed from 64-bit to 32-bit).
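
As a quick illustration of what halving the bus width does on paper, here's a minimal sketch; the 2.0 GT/s effective data rate is a hypothetical figure chosen for comparison, not a spec from either GPU:

    # Peak theoretical memory bandwidth = bus width (in bytes) * effective data rate.
    def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
        """Peak memory bandwidth in GB/s."""
        return (bus_width_bits / 8) * data_rate_gtps

    # The same hypothetical 2.0 GT/s memory on both bus widths:
    print(peak_bandwidth_gbps(512, 2.0))  # R600-style 512-bit bus: 128.0 GB/s
    print(peak_bandwidth_gbps(256, 2.0))  # RV670-style 256-bit bus: 64.0 GB/s

At equal memory speeds the narrower bus moves exactly half the data, which is why memory clocks had to climb (more on that below).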

Internally, the ring bus dropped from 1024-bit to 512-bit. This cut in bandwidth contributed to a significant drop in transistor count from R600's ~720M: RV670 is made up of 666M transistors, and that figure includes the addition of UVD hardware, some power-saving features, the necessary additions for DirectX 10.1, and the normal performance tuning we would expect from another iteration of the architecture.

Processing power remains unchanged from R600; the RV670 features 320 stream processors, 16 texture units and 16 render back-ends. Clock speeds have gone up slightly, and memory speeds have increased tremendously to make up for the narrower memory bus.
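
For a sense of what those 320 stream processors are worth on paper, here's a hedged sketch of peak shader throughput; the 775MHz core clock is an example figure for illustration, not a spec quoted on this page:

    # Peak shader throughput: each stream processor can issue a multiply-add
    # (counted as 2 FLOPs) per clock, so peak = SPs * 2 * core clock.
    stream_processors = 320
    flops_per_sp_per_clock = 2   # one MADD = two floating point operations
    core_clock_ghz = 0.775       # example clock for illustration only

    peak_gflops = stream_processors * flops_per_sp_per_clock * core_clock_ghz
    print(f"Peak shader throughput: {peak_gflops:.0f} GFLOPS")  # ~496 GFLOPS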

The RV670 GPU is also fully PCI Express 2.0 compliant, like NVIDIA's G92, the heart and soul of the GeForce 8800 GT.
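
As a reference point for what PCI Express 2.0 buys you, here's a minimal sketch using the standard PCIe figures (5 GT/s per lane, 8b/10b encoding, so only 8 of every 10 bits on the wire carry data); nothing here is specific to these GPUs:

    # Usable PCIe bandwidth per direction, assuming 8b/10b encoding.
    def pcie_bandwidth_gbs(lanes: int, gt_per_s: float) -> float:
        return lanes * gt_per_s * (8 / 10) / 8  # bits -> bytes

    print(pcie_bandwidth_gbs(16, 2.5))  # PCIe 1.x x16: 4.0 GB/s each way
    print(pcie_bandwidth_gbs(16, 5.0))  # PCIe 2.0 x16: 8.0 GB/s each way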

New Features, You Say? UVD and DirectX 10.1

114 Comments

  • NullSubroutine - Thursday, November 15, 2007

    I would have to disagree. There was at least a 20 to 25 percent difference between XP very high settings and Vista very high settings in Crysis. If you look at any number of games, there is still a performance deficit between XP and Vista; while the gap is shrinking, it is still very pronounced.
  • Anand Lal Shimpi - Thursday, November 15, 2007

    I think the real solution to the XP/Vista issue is to do a separate article looking at driver performance in XP vs. Vista. Derek was working on such a beast before the 8800 GT launched, and as far as I remember he found that with the latest driver releases there's finally performance parity between the OSes (and between the 32-bit/64-bit versions of Vista as well, interestingly enough).

    As far as more titles go, we tried to focus on the big game releases that people were most likely to be upgrading their hardware for. Time is always limited with these things, but do you have any specific requests for games you'd like to see included? As long as they aren't overly CPU limited we can always look at including them.

    I'll have to confirm with Derek, but I believe UVD performance hasn't changed since our last look at UVD with these GPUs: http://www.anandtech.com/showdoc.aspx?i=3047.

    Thanks for the suggestions, I aim to please so keep it coming :)

    Take care,
    Anand
  • NullSubroutine - Friday, November 16, 2007

    The biggest discrepancy I have seen between all the reviews has been the drivers used (especially if you take into consideration the difference from, say, XP to Vista 64-bit, or new to old drivers).

    Many review sites are using the drivers that came with the discs, 8.43 or 8.44, which are supposed to be out November 15th for download (I couldn't find them on AMD's site earlier today). These new drivers (they must be beta drivers) seem to give a huge boost in performance for the 3800 series.

    What I cannot figure out is why they test the 3800 series with 8.43/8.44 but the 2900s with 7.10. So it's hard to tell whether the newer drivers are good for the HD series in general or more specific to the 3800s.

    Has Anand tested the different drivers?
  • Lonyo - Thursday, November 15, 2007

    They can't really test in XP that easily.
    Either they test in Vista, or they test in both Vista AND XP (to be able to run the DX10 benchmarks).
    I expect it's just easier to do all the tests in one OS rather than running half of them in Windows XP and the DX10 tests in Vista.
  • MGSsancho - Thursday, November 15, 2007

    I agree with you on that. I think there will be another UVD article later: nothing but which video cards can offload parts of the video decode, and what minimal CPU is needed for an HTPC to run HD movies.

    XP would be cool.

    But Anand, could you do a 32-bit vs. 64-bit comparison? I know you mentioned it in the article, but can you do 1GB, 2GB, 4GB, and 8GB configs? Maybe current games with a single core (the AMD FX-57, the old king), then a dual core, then a quad core? I bring up 8GB for a reason: nowadays we can get 2GB DIMMs, and some of us use our computers for other tasks, like running a few virtual machines minimized. We minimize our work, game for a 30-minute break, then go back to work. Or maybe we're running Apache for a home website, or any number of other tasks that simply eat up RAM (leaving Firefox open for weeks).
    I'm not asking for a dual-socket god machine, but with current motherboards it's possible to do 8GB of RAM. Thanks for reading this and take care.
  • Locut0s - Thursday, November 15, 2007

    With all the buzz in the CPU world nowadays being about more cores rather than more MHz, it's interesting to see that the latest graphics cards have been all about more MHz and more features. It seems to me that it's in the graphics card world that more cores would make the most sense, given the almost infinite scalability of rendering. Instead of making each generation of GPUs more and more complex than the previous one, why not work on making these GPUs cooperate better? Then your next-generation card could just be 4 or 5 of the current generation's GPUs on the same die or card. Think of it: if they can get the scaling and drivers down pat, then you could churn out blazingly fast cards just by adding more cores. And as long as you are manufacturing the same generation of chip, and doing so at HUGE volumes, the cost per chip should go down too.

    Think this is something we will start to see soon?
  • Gholam - Thursday, November 15, 2007

    In case you haven't noticed, graphics cards have been packing cores by the dozens from the beginning - and lately, by the hundreds.
  • Locut0s - Thursday, November 15, 2007

    Well, yes, I know, but the "cores" they are using are extremely simplified, more so than I was thinking of. Instead, I was thinking of each "core" being able to perform most if not all of the steps in the rendering pipeline.
  • Guuts - Thursday, November 15, 2007

    I think the simple answer is that in the CPU world, they hit a clock speed wall due to thermal issues and had to change their design strategy to offer greater performance, which meant going to multiple cores.

    The GPU makers haven't hit that same wall yet, and it must be cheaper and/or easier to make one high-performing chip than to redesign for multi-GPU boards... though there are some boards that have two GPUs on them acting like SLI/CrossFire in a single-board package.

    I'm sure when GPUs start suffering the same issues, we'll start seeing multi-core graphics cards, and I would assume that NVIDIA and AMD are already researching and planning for that.
  • dustinfrazier - Thursday, November 15, 2007

    Going on a year of NVIDIA dominance, and boy does it feel good. I bought my 8800 GTX pair the first day they were available last year and never expected them to dominate this long. God, I can't wait to see what comes out next for the enthusiasts. I get the feeling it is gonna rock! I really wanna see what both companies have up their sleeves, as I am ready to retire my 8800s.

    I understand that these latest cards are great for the finances and good energy savers, but what does it matter if they already have a hard time keeping up with current next-gen games at reasonable frame rates at 1920x1200 and above? What good does saving money do if all the games you purchase in '08 end up as nothing but a slide show? I guess I just want AMD to release a card that doesn't act like playing Crysis is equivalent to solving the meaning of life. Get on with it. The enthusiasts are ready to buy!
