G92: Funky Naming for a G80 Derivative

If we expected G9x to represent a new architecture powering a GeForce 9 series, we would be wrong. Part of the reasoning NVIDIA gave for moving away from NVxx code names was to bring code name and product name closer to parity (G7x is GeForce 7, G8x is GeForce 8), yet it seems NVIDIA has broken that rule rather early on. Code names are generated automatically, but how we ended up with only three different G8x parts before hitting G9x is certainly a mystery, and one NVIDIA didn't feel like enlightening us on, as it no doubt has to do with unannounced products.

While not a new architecture, the GPU behind the 8800 GT has certainly been massaged quite a bit relative to G80. G92 is fabbed on a 65nm process, and even though it has fewer SPs, less texturing power, and fewer ROPs than G80, it is built from more transistors (754M vs. 681M). This is partly because G92 integrates the updated video processing engine (VP2) and the display engine that previously resided off chip: all the display logic, including the TMDS hardware, is now integrated onto the GPU itself.

In addition to the new features, there have been some enhancements to the architecture that likely added a few million transistors here and there as well. While we were unable to get any really solid details, we were told that lossless compression ratios were increased in order to enable better performance at higher resolutions over the lower bandwidth memory bus attached to G92 on the 8800 GT. We also know that the ratio of texture address units to texture filtering units has increased to 1:1 (as on the 8600 GTS, but in a context where we can actually expect decent performance). This should also improve memory bandwidth utilization and texturing power in general.
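
To put the narrower memory bus in perspective, here is a quick back-of-the-envelope sketch (host-side C++; the ~900MHz GDDR3 clocks are commonly quoted reference figures, not numbers taken from this page): peak bandwidth is just bus width times effective data rate.

    // Rough peak memory bandwidth comparison; clock figures are assumptions
    // based on reference specs, not measurements from this review.
    #include <cstdio>

    int main()
    {
        const double gddr3_rate = 1.8;                     // GT/s effective (double data rate)
        const double gt_bw  = (256.0 / 8.0) * gddr3_rate;  // 8800 GT, 256-bit bus
        const double gtx_bw = (384.0 / 8.0) * gddr3_rate;  // 8800 GTX, 384-bit bus
        printf("8800 GT : %.1f GB/s\n", gt_bw);            // ~57.6 GB/s
        printf("8800 GTX: %.1f GB/s\n", gtx_bw);           // ~86.4 GB/s
        return 0;
    }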

Because NVIDIA had been touting the addition of hardware double precision IEEE 754 floating point to its workstation hardware, due sometime before the end of the year, we suspected that G92 might include this functionality. It seems, however, that the hardware behind that advancement has been pushed back for some reason: G92 does not support double precision floating point in hardware. Double precision is only really useful for workstation and GPU computing applications at the moment, but because NVIDIA designs one GPU for both consumer and workstation applications, it will be interesting to see whether they do anything at all with double precision on the desktop.
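
As an illustration of what hardware double precision would mean in practice, here is a minimal CUDA sketch (our own hypothetical example, not code from NVIDIA or this article): on G92-class hardware, which lacks FP64 units, the CUDA compiler demotes double to float in device code, while GPUs with hardware double precision would run the same kernel at full 64-bit precision.

    // Kernel written with 64-bit floats. Without hardware double precision
    // support (as on G92), 'double' in device code gets demoted to 'float'
    // at compile time; FP64-capable hardware executes it at full precision.
    #include <cuda_runtime.h>

    __global__ void scale(const double *in, double *out, double factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * factor;   // a true FP64 multiply only on FP64 hardware
    }

    int main()
    {
        const int n = 1024;
        double *d_in, *d_out;
        cudaMalloc(&d_in,  n * sizeof(double));
        cudaMalloc(&d_out, n * sizeof(double));
        // ... copy real input data into d_in before a meaningful run ...
        scale<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0, n);
        cudaDeviceSynchronize();
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }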

With every generation, we can expect buffers and on-chip memory to be tweaked based on experience with the previous iteration of the hardware, which could also account for some additional transistors. But regardless of the reason, this GPU packs quite a number of features into a very small area. Integrating all of this into one ASIC is economically possible because of the 65nm process: even though there are more transistors, the physical die takes up much less space than G80's.

Comments

  • AggressorPrime - Monday, October 29, 2007 - link

    I made a typo. Let us hope they are not on the same level.
  • ninjit - Monday, October 29, 2007 - link

    This page has me very confused:
    http://www.anandtech.com/video/showdoc.aspx?i=3140...

    The text of the article goes on as if the GT doesn't really compare to the GTX, except on price/performance:

    quote:

    We would be out of our minds to expect the 8800 GT to even remotely compete with the GTX, but the real question is - how much more performance do you get from the extra money you spent on the GTX over the GT?


    quote:

    But back to the real story, in spite of the fact that the 8800 GT doesn't touch the GTX, two of them will certainly beat it for either equal or less money.



    Yet all the graphs show the GT performing pretty much on par with the GTX, with at most a 5-10fps difference at the highest resolution.

    I didn't understand that last sentence I quoted above at all.
  • archcommus - Monday, October 29, 2007 - link

    This is obviously an amazing card, and I hope it sets a new trend of getting good gaming performance in the latest titles for around $200 like it used to be, unlike the recent trend of having to spend $350+ for high end (not even ultra high end). However, I don't get why a GT part is higher-performing than a GTS; isn't that going against their normal naming scheme a bit? I thought it was typically: Ultra -> GTX -> GTS -> GT -> GS, or something like that.
  • mac2j - Monday, October 29, 2007 - link

    I've been hearing rumors about an Nvidia 9800 card being released in the coming months .... is that the same card with an outdated/incorrect naming convention or a new architecture beyond G92?

    I guess if Nvidia had a next-gen architecture coming, it would explain why they don't mind wiping some of their old products off the board with the 8800 GT, which seems as though it will be a dominant part for the remaining lifetime of this generation of parts.
  • MFK - Monday, October 29, 2007 - link

    After lurking on Anandtech for two layout/design revisions, I have finally decided to post a comment. :D
    First of all hi all!

    Second of all, is it okay that nVidia decided not to introduce a proper next-gen part in favour of this mid-range offering? Okay, so it's good and whatnot, but what I'm wondering, and something the article does not talk about, is what the future value of this card is. Can I expect it to play some upcoming games (Alan Wake?) at 1600 x 1200? I know it's hard to predict, but industry analysts like you guys should have some idea. Also, how long can I expect this card to continue playing games at acceptable framerates? Any idea, anyone?
    Thanks.
  • DerekWilson - Monday, October 29, 2007 - link

    that's a tough call ....

    but really, it's up to the developers.

    UT3 looks great in DX9, and Bioshock looks great in DX10. Crysis looks amazing, but it's a demo, not final code, and it does run very slowly.

    The bottom line is that developers need to balance the amazing effects they show off with playability -- it's up to them. They know what hardware you've got, and they choose whether or not to push the envelope.

    I know that's not an answer, sorry :-( ... it is just nearly impossible to say what will happen.
  • crimson117 - Monday, October 29, 2007 - link

    How much RAM was on the 8800 GT used in testing? Was it 256 or 512?
  • NoBull6 - Monday, October 29, 2007 - link

    From context, I'm thinking 512. Since 512MB cards are the only ones available in the channel, and Derek was hypothesizing about the pricing of a 256MB version, I think you can be confident this was a 512MB test card.
  • DerekWilson - Monday, October 29, 2007 - link

    correct.

    256MB cards do not exist outside NVIDIA at this point.
  • ninjit - Monday, October 29, 2007 - link

    I was just wondering about that too.

    I thought I missed it in the article, but I didn't see it on another run-through.

    I see I'm not the only one who was curious.
