G92: Funky Naming for a G80 Derivative

If we expected G9x to represent a new architecture supporting a GeForce 9 series, we would be wrong. In spite of the fact that part of NVIDIA's stated reason for moving away from NVxx code names was to bring code name and product name closer to parity (G7x is GeForce 7, G8x is GeForce 8), it seems NVIDIA has broken this rule rather early on. Code names are generated automatically, but why we ended up with only three different G8x parts before hitting G9x is certainly a mystery, and one NVIDIA didn't feel like clearing up, as it no doubt has to do with unannounced products.

While not a new architecture, the GPU behind the 8800 GT has certainly been massaged quite a bit from the G80. The G92 is fabbed on a 65nm process, and even though it has fewer SPs, less texturing power, and fewer ROPs than the G80, it's made up of more transistors (754M vs. 681M). This is partly because G92 integrates the updated video processing engine (VP2) and the display engine that previously resided off-chip. Now all the display logic, including TMDS hardware, is integrated onto the GPU itself.

In addition to the new features, there have been some enhancements to the architecture that likely added a few million transistors here and there as well. While we were unable to get any really detailed specifics, we were told that lossless compression ratios were increased in order to enable better performance at higher resolutions over the narrower memory bus attached to the G92 on the 8800 GT. We also know that the proportion of texture address units to texture filtering units has increased to a 1:1 ratio (similar to the 8600 GTS, but in a context where we can actually expect decent performance). This should also improve memory bandwidth usage and texturing power in general.

Because NVIDIA had been touting the addition of hardware double precision IEEE 754 floating point on workstation hardware due sometime before the end of the year, we suspected that G92 might include this functionality. It seems, however, that the hardware behind that advancement has been pushed back for some reason: G92 does not support hardware double precision floating point. This is only really useful for workstation and GPU computing applications at the moment, but because NVIDIA designs one GPU for both consumer and workstation applications, it will be interesting to see if they do anything at all with double precision on the desktop.

With every generation, we can expect buffers and on-chip memory to be tweaked based on experience with the previous iteration of the hardware. This could also account for additional transistors. But regardless of the reason, this GPU packs quite a number of features into a very small area. Integrating these features into one ASIC is economically feasible because of the 65nm process: even though there are more transistors, the physical die takes up much less space than the G80's.


  • Spacecomber - Monday, October 29, 2007 - link

    Test
  • EateryOfPiza - Monday, October 29, 2007 - link

    What kind of G92 variants can we expect by Christmas 07?

    Or Summer 08?
  • mpc7488 - Monday, October 29, 2007 - link

    ardOCP is reporting that nVidia is increasing the 8800GTS stream processors to 112.
  • Spacecomber - Monday, October 29, 2007 - link

    Testing ;-)
  • Spacecomber - Monday, October 29, 2007 - link

    It appears that it was the bracketed h that was hiding all subsequent text. It needed a bracketed /h to close that "feature".
  • mpc7488 - Monday, October 29, 2007 - link

    Haha - thanks. I guess if anyone wants the explanation of the stream processors they can highlight the 'hidden message'.
  • mpc7488 - Monday, October 29, 2007 - link

    I'm not sure why the first post lost my text unless it was the bracket I used around the H - but HardOCP is reporting that nVidia is changing the 8800GTS 640 MB to have 112 stream processors.
  • mpc7488 - Monday, October 29, 2007 - link

    Great article Derek - I think you can tell you're mildly excited about this product :)

    Is there a reason that you didn't do any tests with anti-aliasing? I would assume that this would show more deviation between the 8800GTX and the 8800GT?
  • chizow - Monday, October 29, 2007 - link

    Nice job as usual Derek!

    Just wondering though, if you were able to test the cards at the same clock speeds. The GT by default has @100MHz advantage on the core over the GTS, which is a common reason the GTS falls so far behind in head to head testing. I expect the GT to have more OC'ing headroom than the GTS anyways, but it would be nice to see an apples to apples comparison to reveal the impact of some of the architecture changes from G80 to G92. Of note, the GT has fewer ROPs and a smaller memory bus but gains 1:1 address/filter units and 16 more stream processors.

    Also, I saw an early review that showed massive performance gains when the shader processor was overclocked on the GT; much bigger gains than significant increases to the core/memory clocks. Similar testing with the GTS/GTX don't yield anywhere near that much performance gain when the shader core clock is bumped up.

    Lastly, any idea when the G92 8800GTS refresh is going to be released? With a 640MB GTS this seems more of a lateral move to an 8800GT, although a refreshed GTS with 128SP and all the other enhancements of the G92 should undoubtedly be faster than the GTX...and maybe even the Ultra once overclocked.
  • Hulk - Monday, October 29, 2007 - link

    I'm looking to build a HTPC and this would be a great card if it does video decoding?
