NVIDIA Chipsets

Below is our breakdown of the GPU guide for NVIDIA video cards:

NVIDIA Graphics Chips Overview
Chip | Core | Core Clock (MHz) | RAM Clock (MHz)* | Pixel Pipelines | Textures/Pipeline** | Vertex Pipelines*** | Memory (MB) | Bus Width (bits)

DirectX 9.0c with PS3.0 and VS3.0 Support
GF 6600 | NV43 | 300 | 550 | 8 | 1 | 3 | 128/256 | 128
GF 6600GT | NV43 | 500 | 1000 | 8 | 1 | 3 | 128/256 | 128
GF 6800LE | NV40 | 320 | 700 | 8 | 1 | 5 | 128 | 256
GF 6800LE | NV41 | 320 | 700 | 8 | 1 | 5 | 128 | 256
GF 6800 | NV40 | 325 | 700 | 12 | 1 | 5 | 128 | 256
GF 6800 | NV41 | 325 | 700 | 12 | 1 | 5 | 128 | 256
GF 6800GT | NV40 | 350 | 1000 | 16 | 1 | 6 | 256 | 256
GF 6800U | NV40 | 400 | 1100 | 16 | 1 | 6 | 256 | 256
GF 6800UE | NV40 | 450 | 1200 | 16 | 1 | 6 | 256 | 256

DirectX 9 with PS2.0+ and VS2.0+ Support
GFFX 5200LE | NV34 | 250 | 400 | 4 | 1 | 1 | 64/128 | 64
GFFX 5200 | NV34 | 250 | 400 | 4 | 1 | 1 | 64/128/256 | 128
GFFX 5200U | NV34 | 325 | 650 | 4 | 1 | 1 | 128 | 128
GFFX 5500 | NV34 | 270 | 400 | 4 | 1 | 1 | 128/256 | 128
GFFX 5600XT | NV31 | 235 | 400 | 4 | 1 | 1 | 128/256 | 128
GFFX 5600 | NV31 | 325 | 500 | 4 | 1 | 1 | 128/256 | 128
GFFX 5600U | NV31 | 350 | 700 | 4 | 1 | 1 | 128/256 | 128
GFFX 5600U FC | NV31 | 400 | 800 | 4 | 1 | 1 | 128 | 128
GFFX 5700LE | NV36 | 250 | 400 | 4 | 1 | 3 | 128/256 | 128
GFFX 5700 | NV36 | 425 | 500 | 4 | 1 | 3 | 128/256 | 128
GFFX 5700U | NV36 | 475 | 900 | 4 | 1 | 3 | 128/256 | 128
GFFX 5700U GDDR3 | NV36 | 475 | 950 | 4 | 1 | 3 | 128 | 128
GFFX 5800 | NV30 | 400 | 800 | 4 | 2 | 2 | 128 | 128
GFFX 5800U | NV30 | 500 | 1000 | 4 | 2 | 2 | 128 | 128
GFFX 5900XT/SE | NV35 | 400 | 700 | 4 | 2 | 3 | 128 | 256
GFFX 5900 | NV35 | 400 | 850 | 4 | 2 | 3 | 128/256 | 256
GFFX 5900U | NV35 | 450 | 850 | 4 | 2 | 3 | 256 | 256
GFFX 5950U | NV38 | 475 | 950 | 4 | 2 | 3 | 256 | 256

DirectX 8 with PS1.3 and VS1.1 Support
GF3 Ti200 | NV20 | 175 | 400 | 4 | 2 | 1 | 64/128 | 128
GeForce 3 | NV20 | 200 | 460 | 4 | 2 | 1 | 64 | 128
GF3 Ti500 | NV20 | 240 | 500 | 4 | 2 | 1 | 64 | 128
GF4 Ti4200 128 | NV25 | 250 | 444 | 4 | 2 | 2 | 128 | 128
GF4 Ti4200 64 | NV25 | 250 | 500 | 4 | 2 | 2 | 64 | 128
GF4 Ti4200 8X | NV28 | 250 | 514 | 4 | 2 | 2 | 128 | 128
GF4 Ti4400 | NV25 | 275 | 550 | 4 | 2 | 2 | 128 | 128
GF4 Ti4600 | NV25 | 300 | 600 | 4 | 2 | 2 | 128 | 128
GF4 Ti4800 SE | NV28 | 275 | 550 | 4 | 2 | 2 | 128 | 128
GF4 Ti4800 | NV28 | 300 | 650 | 4 | 2 | 2 | 128 | 128

DirectX 7
GeForce 256 DDR | NV10 | 120 | 300 | 4 | 1 | 0.5 | 32/64 | 128
GeForce 256 SDR | NV10 | 120 | 166 | 4 | 1 | 0.5 | 32/64 | 128
GF2 MX200 | NV11 | 175 | 166 | 2 | 2 | 0.5 | 32/64 | 64
GF2 MX | NV11 | 175 | 333 | 2 | 2 | 0.5 | 32/64 | 64/128
GF2 MX400 | NV11 | 200 | 333 | 2 | 2 | 0.5 | 32/64 | 128
GF2 GTS | NV15 | 200 | 333 | 4 | 2 | 0.5 | 32/64 | 128
GF2 Pro | NV15 | 200 | 400 | 4 | 2 | 0.5 | 32/64 | 128
GF2 Ti | NV15 | 250 | 400 | 4 | 2 | 0.5 | 32/64 | 128
GF2 Ultra | NV15 | 250 | 460 | 4 | 2 | 0.5 | 64 | 128
GF4 MX4000 | NV19 | 275 | 400 | 2 | 2 | 0.5 | 64/128 | 64
GF4 MX420 | NV17 | 250 | 333 | 2 | 2 | 0.5 | 64 | 64
GF4 MX440 SE | NV17 | 250 | 333 | 2 | 2 | 0.5 | 64/128 | 128
GF4 MX440 | NV17 | 275 | 400 | 2 | 2 | 0.5 | 32/64 | 128
GF4 MX440 8X | NV18 | 275 | 500 | 2 | 2 | 0.5 | 64/128 | 128
GF4 MX460 | NV17 | 300 | 550 | 2 | 2 | 0.5 | 64 | 128
* RAM clock is the effective clock speed, so 250 MHz DDR is listed as 500 MHz.
** Textures/Pipeline is the number of unique texture lookups per pipeline. ATI has implementations that can look up 3 textures, but two of the lookups must come from the same texture.
*** The number of vertex pipelines is an estimate on certain architectures. NVIDIA says its GFFX cards have a "vertex array", but in practice it performs as shown.
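
As a quick illustration of how the table columns relate (a back-of-the-envelope sketch of our own, not part of the original chart), peak memory bandwidth follows from the effective RAM clock and the bus width, and theoretical pixel fillrate from the core clock and the pixel pipeline count:

# Back-of-the-envelope sketch: derive theoretical peak numbers from the
# table columns above. Real-world performance is always lower.

def memory_bandwidth_gb_s(effective_ram_mhz, bus_width_bits):
    # The effective clock already includes the DDR doubling (see note * above).
    return effective_ram_mhz * 1e6 * (bus_width_bits / 8) / 1e9

def pixel_fillrate_mpixels(core_mhz, pixel_pipelines):
    # Assumes one pixel per pipeline per clock.
    return core_mhz * pixel_pipelines

# GF 6800GT row: 350 MHz core, 1000 MHz effective RAM, 16 pipes, 256-bit bus.
print(memory_bandwidth_gb_s(1000, 256))   # 32.0 GB/s
print(pixel_fillrate_mpixels(350, 16))    # 5600 Mpixels/s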

The caveats are very similar on the NVIDIA side of things. In terms of DirectX support, NVIDIA offers DX7, DX8.0, DX9, and DX9.0c parts. Unlike the X800 cards, which support an unofficial DX spec, DX9.0c is an official Microsoft standard. On the flip side, the SM2.0a features of the FX line went almost entirely unused, and its 32-bit floating point precision (as opposed to the 24-bit precision ATI uses) appears to be part of the reason for the inferior DX9 performance of the FX series. The benefit of DX8.1 over DX8.0 was that a few more operations were added to the hardware, so tasks that would have required two passes under DX8.0 can be done in one pass under DX8.1.

When DX8 cards were all the rage, DX8.1 support was something of a non-issue: DX8 games were hard to come by, and most developers opted for the more widespread 8.0 spec. Now, however, games like Far Cry and the upcoming Half-Life 2 have made DX8.1 support a little more useful. The reason is that every subsequent version of DirectX is a superset of the older versions, so every DX9 card must include both DX8.0 and DX8.1 functionality. In the beta of Counter-Strike: Source, GeForce FX cards default to the DX8.1 rendering path as the best compromise between quality and speed, while GeForce 3 and 4 Ti cards use the DX8.0 path.

Going back to ATI for a minute, it becomes a little clearer why ATI's SM2.0b isn't an official Microsoft standard. SM3.0 already supersedes it as a standard, and yet certain features of SM2.0b as ATI defines it are not present in SM3.0, for example the new 3Dc normal map compression. Only time will tell if this feature gets used with current hardware, but it will likely be included in a future version of DirectX, so it could come in useful.

In contrast to ATI, where the card generations are fairly distinct entities, the NVIDIA cards show a lot more overlap. The GF3 cards only show a slight performance increase over the GF2 Ultra, and that only in more recent games. Back in the day, there really wasn't much incentive to leave the GF2 Ultra and "upgrade" to the GF3, especially considering the cost, and many people simply skipped the GF3 generation. Similarly, those who purchased the GF4 Ti line were left with little reason to upgrade to the FX line, as the Ti4200 remains competitive in most games all the way up to the FX 5600. The FX line is only really able to keep up with - and sometimes beat - the GF4 Ti cards when DX8.1 or DX9 features are used, or when antialiasing and/or anisotropic filtering are enabled.

Speaking of antialiasing.... The GF2 line lacked support for multi-sample antialiasing and relied on the more simplistic super-sampling method. "Simplistic" here means easier to implement - super-sampling is actually much more demanding on memory bandwidth, which made it less useful in practice. The GF3 line brought the first consumer cards with multi-sample antialiasing, and NVIDIA went one step further by creating a sort of rotated-grid method called Quincunx, which offered superior quality to 2xAA while incurring less of a performance hit than 4xAA. However, as the geometric complexity of games increased - something DX7 promised and yet failed to deliver for several years - none of these cards were able to perform well with antialiasing enabled. The GF4 line refined antialiasing support slightly - even the GF4 MX line got hardware antialiasing support, although there it was more of a checklist feature than something most people would actually enable - but for the most part it remained the same as in the GF3. The GFFX line continued with the same basic antialiasing support, and it was only with the GeForce 6 series that NVIDIA finally improved the quality of its antialiasing by switching to a rotated grid. At present, the differences in implementation and quality of antialiasing on ATI and NVIDIA hardware are almost impossible to spot in practical use. ATI does support 6X multi-sample antialiasing, of course, but that generally brings too much of a performance hit to use except in older games.
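
To make the super-sampling versus multi-sampling comparison concrete, here is a simplified cost model (our own illustration; it ignores Z traffic and any color compression): super-sampling shades and textures every sub-sample, while multi-sampling shades each pixel once, and only the stored coverage samples scale with the AA level.

# Simplified cost sketch for the two AA approaches. "samples" is the AA
# level, e.g. 4 for 4xAA. Z traffic and compression are ignored.

def aa_cost(width, height, samples, mode):
    pixels = width * height
    if mode == "supersample":
        # GF2-era SSAA: the frame is rendered at samples-times the resolution,
        # so every sub-sample is a full texture/shading evaluation and write.
        return {"texture_reads": pixels * samples, "color_writes": pixels * samples}
    if mode == "multisample":
        # GF3+ MSAA: each pixel is shaded/textured once; only the stored
        # coverage samples scale with the AA level.
        return {"texture_reads": pixels, "color_writes": pixels * samples}
    raise ValueError(mode)

print(aa_cost(1024, 768, 4, "supersample"))  # {'texture_reads': 3145728, 'color_writes': 3145728}
print(aa_cost(1024, 768, 4, "multisample"))  # {'texture_reads': 786432, 'color_writes': 3145728}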

Anisotropic filtering for NVIDIA was a different story. First introduced with the GF2 line, it was extremely limited and rather slow - the GF2 could only provide 2xAF, called 8-tap filtering by NVIDIA because it uses 8 samples. The GeForce 3 added support for up to 8xAF (32-tap), along with performance improvements over the GF2 when anisotropic filtering was enabled. The GF2 line was also better optimized for 16-bit color performance, while the GF3 and later parts all manage 32-bit color with a much less noticeable performance hit, likely thanks to the same enhancements that allow for better anisotropic filtering.
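
NVIDIA's "tap" naming is simply sample counting: a bilinear lookup takes 4 texture samples, and each degree of anisotropy multiplies that, which is how 2xAF becomes "8-tap" and 8xAF "32-tap". A tiny sketch of that bookkeeping (our own illustration):

# Sketch of NVIDIA's "tap" terminology: taps = bilinear samples (4) x degree of AF.

def af_taps(af_degree, bilinear_taps=4):
    return bilinear_taps * af_degree

for af in (2, 4, 8, 16):
    print(f"{af}xAF = {af_taps(af)}-tap filtering")
# 2xAF = 8-tap, 4xAF = 16-tap, 8xAF = 32-tap, 16xAF = 64-tap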

As games became more complex, the cost of doing "real" anisotropic filtering became too great, and so there were optimizations and accusations of cheating by many parties. The reality is that NVIDIA used a more correct distance calculation than ATI: d = sqrt(x^2 + y^2 + z^2), compared to the linear approximation d = ax + by + cz. The latter is substantially faster, but the results are less correct: it gives the right answer only at certain angles, while other angles end up with a lower level of AF. Unfortunately for those who desire maximum image quality, NVIDIA resolved the discrepancy in AF performance by switching to ATI's distance calculation for the GeForce 6 line. The GeForce 6 line also marks the introduction of 16xAF (64-tap) by NVIDIA, although it is nearly impossible to spot the difference in quality between 8xAF and 16xAF without some form of image manipulation. So things have now been sorted out as far as "cheating" accusations go. It is probably safe to say that in modern games, the GF4 and earlier chips are not able to handle anisotropic filtering well enough to warrant enabling it.
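
The difference between the two formulas is easy to see numerically. The sketch below (our own illustration; the weights in the approximation are placeholders for illustration only) compares the Euclidean distance with the cheaper weighted-sum form: the two can agree along a principal axis but diverge at in-between angles, which is exactly where the angle-dependent AF behavior comes from.

import math

def euclidean_distance(x, y, z):
    # The "correct" form: d = sqrt(x^2 + y^2 + z^2)
    return math.sqrt(x * x + y * y + z * z)

def weighted_approximation(x, y, z, a=1.0, b=1.0, c=1.0):
    # The cheaper form: d ~ a|x| + b|y| + c|z| (weights a, b, c are
    # placeholders; they are not the actual hardware coefficients).
    return a * abs(x) + b * abs(y) + c * abs(z)

# Along a principal axis the two can agree exactly...
print(euclidean_distance(1, 0, 0), weighted_approximation(1, 0, 0))  # 1.0 1.0
# ...but at a 45-degree angle the approximation overshoots, so a different
# level of filtering ends up being selected for that surface.
print(euclidean_distance(1, 1, 0), weighted_approximation(1, 1, 0))  # 1.414... 2.0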

NVIDIA also uses various versions of the same chip in its high-end parts. The 6800 cards at present all use the same NV40 chip; certain chips have some of their pipelines deactivated and are then sold in lower-end cards. Rumors about the ability to "mod" vanilla 6800 chips into 16-pipeline versions exist, but success rates are not yet known and are likely low, due again to the size of the chips. NVIDIA plans to release a modified chip, the NV41, which will only have 12 pixel pipelines and 5 vertex pipelines, in order to reduce manufacturing costs and improve yields.

Comments

  • MODEL 3 - Wednesday, September 8, 2004 - link

    A lot of mistakes for a professional hardware review site the size of Anandtech. I will only mention the de facto mistakes, since I have doubts about more. I am actually surprised by the amount of mistakes in this article. I mean, since I live in Greece (not the center of the world in 3D technology or the hardware market), I always thought that the editors at the best hardware review sites in the world (like Anandtech) have at least the basic knowledge related to technology, and that they research and double-check whether their articles are correct. I mean, they get paid, right? I mean, if I can find their mistakes so easily (I have no technology-related degree, although I was a purchase and product manager at the best Greek IT companies), they must be doing something very, very wrong indeed. Now onto the mistakes:
    ATI:
    X700 6 vertex pipelines: Actually this is no mistake, since I have no information about this new part, but it seems strange if the X700 will have the same (6) vertex pipelines as the X800XT. I guess it would be more logical to have half as many (3) (like the 6600GT vs. the 6800 Ultra) or twice as many as the X600 (4). We will see.
    Radeon VE 183/183: The actual speed was 166/166 SDR 128-bit for ATI parts and as low as 143/143 for 3rd-party bulk parts.
    Radeon 7000 PCI 166/333: The actual speed was 166/166 SDR 128-bit for ATI parts and as low as 143/143 for 3rd-party bulk parts (note that Anandtech suggests 166 DDR and the correct value is 166 SDR).
    Radeon 7000 AGP 183/366 32/64 (MB): The actual speed was 166/166 SDR for ATI parts and as low as 143/143 for 3rd-party bulk parts (note that Anandtech suggests 166 DDR and the correct value is 166 SDR); also, at launch and for a whole year (if ever), a 64MB part didn't exist.
    Radeon 7200 64-bit RAM bus: The 7200 was exactly the same as the Radeon DDR, so the RAM bus width was 128-bit.
    ATI has unofficial DX9 with SM2.0b support: Actually ATI has official DX9.0b support, and Microsoft certified this "in between" version of DX9. When they enable their 2.0b features, they don't fail WHQL compliance, since 2.0b is an official Microsoft version (get it?). Features like 3Dc normal map compression are activated only in OpenGL mode, but 3Dc compression is not part of DX9.0b.
    NVIDIA:
    GF 6800LE with 8 pixel pipelines has, according to Anandtech, 5 vertex pipelines: Actually this is no mistake, since I have no information about this part, but since the 6800GT/Ultra is built with four (4) quads of 4 pixel pipelines each, isn't it more logical for the 6800LE, with half the quads, to have half the pixel pipelines (8) AND half the vertex pipelines (3)?
    GFFX 5700 3 vertex pipelines: The GFFX 5700 has half the number of pixel AND vertex pipelines of the 5900, so if you convert the vertex array of the 5900 into 3 vertex pipes (which is correct), then the 5700 would have 1.5.
    GF4 4600 300/600: The actual speed is 300/325 DDR 128-bit.
    GF2MX 175/333: The actual speed is 175/166 SDR 128-bit.
    GF4MX series 0.5 vertex shader: Actually the GF4MX series had twice the number of vertex shaders of the GF2, so the correct number of vertex shaders is 1.
    According to Anandtech, the GF3 cards only show a slight performance increase over the GF2 Ultra, and that is only in more recent games: Actually the GF3 (Q1 '01) was based on 0.18-micron technology and yields were extremely low. In reality, GF3 parts arrived in acceptable quantity in Q3 '01 with the GF3 Ti series on 0.15-micron technology. If you check the performance in OpenGL games at and after Q3 '01 and DX8 games at and after Q3 '02, you will clearly see the GF3 having double the performance of the GF2 clock for clock (GF3 Ti500 vs. GF2 Ultra).

    Now, the rest of the article is not bad and I also appreciate the effort.
  • JarredWalton - Wednesday, September 8, 2004 - link

    Sorry, ViRGE - I actually took your suggestion to heart and updated page 3 initially, since you are right about it being more common. However, I forgot to modify the DX7 performance charts. There are probably quite a few other corrections that should be made as well....
  • ViRGE - Tuesday, September 7, 2004 - link

    Jared, like I said, you're technically right about how the GF2 MX could be outfitted with either 128bit SDR or 64bit SDR/DDR, but you said it yourself that the cards were mostly 128bit SDR. Obviously any change won't have an impact, but in my humble opinion, it would be best to change the GF2 MX to better represent what historically happened, so that if someone uses this chart as a reference for a GF2 MX, they're more likely to be getting the "right" data.
  • BigLan - Tuesday, September 7, 2004 - link

    Good job with the article

    Love the office reference...

    "Can I put it in my mouth?"
  • darth_beavis - Tuesday, September 7, 2004 - link

    Sorry, now it's suddenly working. I don't know what my problem is (but I'm sure it's hard to pronounce).
  • darth_beavis - Tuesday, September 7, 2004 - link

    Actually it looks like none of them have labels. Is anandtech not mozilla compatible or something. Just use jpgs pleaz.
  • darth_beavis - Tuesday, September 7, 2004 - link

    Why are there no descriptions for the columns in the chart on pg 2? Are we just supposed to guess what the numbers mean?
  • JarredWalton - Tuesday, September 7, 2004 - link

    Yes, Questar, laden with errors. All over the place. Thanks for pointing them out so that they could be corrected. I'm sure that took you quite some time.

    Seriously, though, point them out (other than omissions, as making a complete list of every single variation of every single card would be difficult at best) and we will be happy to correct them provided that they actually are incorrect. And if you really want a card included, send the details of the card, and we can add that as well.

    Regarding the ATI AIW (All In Wonder, for those that don't know) cards, they often varied from the clock and RAM speeds of the standard chips. Later models may have faster RAM or core speeds, while earlier models often had slower RAM and core speeds.
  • blckgrffn - Tuesday, September 7, 2004 - link

    Questar - if you don't like it, leave. The article clearly stated its bounds and did a great job. My $.02 - the 7500 AIW is 64 meg DDR only, unsure of the speed however. Do you want me to check that out?
  • mikecel79 - Tuesday, September 7, 2004 - link

    #22 The GeForce 256 was released in October of 1999, so this covers roughly the last 5 years of chips from ATI and NVIDIA. If it were to include all other manufacturers, it would be quite a bit longer.

    How about examples of this article being "laden with errors" instead of just stating it?
