Building NVIDIA's GT200

Here's a Streaming Processor, which NVIDIA calls an SP:

NVIDIA calls an individual SP a single processing core, which is actually true. It is a fully pipelined, single-issue, in-order microprocessor complete with two ALUs and an FPU. An SP doesn't have any cache, so it's not particularly great at anything other than cranking through tons of mathematical operations. Since an SP spends most of its time working on pixel or vertex data, the fact that it doesn't have a cache doesn't really matter. Aside from the name similarity, one NVIDIA SP is a lot like a very simplified version of an SPE in the Cell microprocessor (or maybe the SPE is like a really simple version of one of NVIDIA's SMs, which we'll get to in a minute). While a single SPE in Cell has seven execution units, a single NVIDIA SP only has three.
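
As a rough illustration of the kind of work that keeps an SP busy, here's a minimal CUDA sketch (our own hypothetical example, not NVIDIA code): each thread does nothing but a multiply and an add on its own element, exactly the sort of cache-free arithmetic an SP's ALUs and FPU are built for.

    // Hypothetical sketch: one thread's worth of math-heavy, cache-free work,
    // the kind of instruction stream an individual SP churns through.
    __global__ void madd(const float *a, const float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] * b[i] + 1.0f;   // multiply + add; no data reuse, so no cache needed
    }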

By itself an SP is fairly useless, but NVIDIA builds GPUs, and if you add up enough of these little monsters you can start to get something productive, given that graphics rendering is a highly parallelizable task.

Here's a Streaming Multiprocessor, which NVIDIA abbreviates as SM:

An SM is an array of SPs, eight to be specific, along with two more processors called Special Function Units (SFUs). Each SFU has four FP multiply units which are used for transcendental operations (e.g. sine, cosine) and interpolation, the latter being used in some of the calculations for things like anisotropic texture filtering. Although NVIDIA isn't specific in saying so, we assume that each SFU is also a fully pipelined, single-issue, in-order microprocessor. There's an MT issue unit that dispatches instructions to all of the SPs and SFUs in the group.
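
To make the SFU's role concrete, here's a hedged CUDA sketch (again our own hypothetical example): the fast __sinf()/__cosf() intrinsics are the sort of transcendental requests that get handed off to the SFU hardware instead of tying up the regular SPs.

    // Hypothetical sketch: transcendental math expressed with CUDA's fast
    // intrinsics, which trade a little precision for SFU-serviced throughput.
    __global__ void rotate2d(float *x, float *y, float angle, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float s = __sinf(angle);      // transcendental op
            float c = __cosf(angle);      // transcendental op
            float xi = x[i], yi = y[i];
            x[i] = xi * c - yi * s;
            y[i] = xi * s + yi * c;
        }
    }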

In addition to the processor cores in an SM, there's a very small instruction cache, a read-only data cache and 16KB of read/write shared memory. These cache sizes are kept purposefully small because, unlike on a conventional desktop microprocessor, the datasets we're trying to cache here are small. Each SP ends up working on an individual pixel, and despite the move to 32-bit floating point values, there's only so much data associated with a single pixel. The 16KB memory is akin to Cell's local stores in that it's not a cache but a software-managed data store, so that latency is always predictable. With this many cores in a single SM, control and predictability are very important to making the whole thing work efficiently.
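
That software-managed store is what CUDA exposes as __shared__ memory. The sketch below (a minimal example of ours, assuming a 256-thread block) shows the programmer explicitly staging data into the 16KB store and synchronizing, rather than hoping a cache controller gets it right.

    #define TILE 256

    // Minimal sketch: data is staged into the SM's 16KB shared memory by hand,
    // so access latency is predictable instead of depending on cache hits.
    __global__ void reverse_tile(const float *in, float *out)
    {
        __shared__ float tile[TILE];              // lives in the per-SM shared memory
        int i = blockIdx.x * TILE + threadIdx.x;

        tile[threadIdx.x] = in[i];                // software-managed load into the store
        __syncthreads();                          // whole block waits until staging is done
        out[i] = tile[TILE - 1 - threadIdx.x];    // read back in a different order
    }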

Take one more step back and you've got a Texture/Processor Cluster (TPC):


The G80/G92 TPC (left) vs. the GT200 TPC (right)

NVIDIA purposefully designed its GPU architecture to be modular, so a single TPC can be made up of any number of SMs. In the G80 architecture a TPC was made up of two SMs; with the GT200 architecture it now has three SMs.

The components of the TPC, however, haven't changed; a TPC is made up of SMs, some control logic and a texture block. Remember that an SM is a total of 8 SPs and 2 SFUs, so that brings the total up to 24 SPs and 6 SFUs (must...not...type...STFU) per cluster in GT200 (up from 16 SPs and 4 SFUs in G80). The texture block includes texture addressing and filtering logic as well as an L1 texture cache.

The modular theme continues with the Streaming Processor Array (SPA), which is composed of a number of TPCs:


The GT200 SPA, that's 240 SPs in there if you want to count them

In G80 the SPA was made up of 8 TPCs, but with GT200 we've moved up to 10. Note that each TPC now has 3 SMs vs. 2, so the overall processing power of GT200 has increased by 87.5% over G80.
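
If you want to check that math yourself, the arithmetic is straightforward; here's a tiny host-side snippet (ours, not NVIDIA's) that reproduces the SP counts and the 87.5% figure from the per-SM and per-TPC numbers above.

    #include <cstdio>

    // Back-of-the-envelope check of the SP counts quoted above.
    int main()
    {
        const int sps_per_sm = 8;
        const int g80_sps   = 8  * 2 * sps_per_sm;   // 8 TPCs x 2 SMs  = 128 SPs
        const int gt200_sps = 10 * 3 * sps_per_sm;   // 10 TPCs x 3 SMs = 240 SPs

        printf("G80: %d SPs, GT200: %d SPs, increase: %.1f%%\n",
               g80_sps, gt200_sps,
               100.0 * (gt200_sps - g80_sps) / g80_sps);   // prints 87.5%
        return 0;
    }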


And here's G80/G92, only 128 SPs thanks to two SMs per TPC and 8 TPCs

At the front end of the GPU we've got schedulers and control logic to distribute workloads to the entire array of processing cores. At the other end we've got L2 texture caches and raster operations processors (ROPs) that handle final filtering and output of data to the frame buffer.
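
From a programmer's point of view, that front-end scheduling is what happens when you launch a grid of thread blocks: the blocks get farmed out across however many SMs the chip has (16 on G80, 30 on GT200). Here's a hedged host-side sketch, reusing the hypothetical madd kernel from earlier and assuming d_a, d_b and d_out are existing device allocations of n floats.

    // Hypothetical sketch: the grid of blocks is what the GPU's front-end
    // scheduler distributes across the array of SMs.
    void launch_madd(const float *d_a, const float *d_b, float *d_out, int n)
    {
        int threadsPerBlock = 256;                                  // each block runs on a single SM
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;   // enough blocks to cover n elements
        madd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_out, n);      // hand the grid to the scheduler
        cudaDeviceSynchronize();                                    // wait for the whole grid to drain
    }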

The culmination of all of this is that the new GT200 GPU, the heart of the GeForce GTX 280 and 260, features 240 SPs, 480KB of local memory (16KB per SM across 30 SMs), an even smaller amount of cache, and is built on TSMC's 65nm process using 1.4 billion transistors.


1.4 Billion Transistors.  It wants vertex data.  Really bad.


754 Million Transistors

There are more transistors in this chip than there are people in China, and it's the largest, most compute-dense chip we've ever reviewed.

Comments

  • strikeback03 - Tuesday, June 17, 2008 - link

    So are you blaming nvidia for games that require powerful hardware, or just for enabling developers to write those games by making powerful hardware?
  • InquiryZ - Monday, June 16, 2008 - link

    Was AC tested with or without the patch? (the patch removes a lot of performance on the ATi cards..)
  • DerekWilson - Monday, June 16, 2008 - link

    the patch only affects performance with aa enabled.

    since the game only allows aa at up to 1680x1050, we tested without aa.

    we also tested with the patch installed.
  • PrinceGaz - Monday, June 16, 2008 - link

    nVidia say they're not saying exactly what GT200 can and cannot do to prevent AMD bribing game developers to use DX10.1 features GT200 does not support, but you mention that

    "It's useful to point out that, in spite of the fact that NVIDIA doesn't support DX10.1 and DX10 offers no caps bits, NVIDIA does enable developers to query their driver on support for a feature. This is how they can support multisample readback and any other DX10.1 feature that they chose to expose in this manner."

    Now whilst it is driver dependent and additional features could be enabled (or disabled) in later drivers, it seems to me that all AMD or anyone else would have to do is go through the whole list of DX10.1 features and query the driver about each one. Voila- an accurate list of what is and isn't supported, at least with that driver.
  • DerekWilson - Monday, June 16, 2008 - link

    the problem is that they don't expose all the features they are capable of supporting. they won't mind if AMD gets some devs on board with something that they don't currently support but that they can enable support for if they need to.

    what they don't want is for AMD to find out what they are incapable of supporting in any reasonable way. they don't want AMD to know what they won't be able to expose via the driver to developers.

    knowing what they already expose to devs is one thing, but knowing what the hardware can actually do is not something nvidia is interested in sharing.
  • emboss - Monday, June 16, 2008 - link

    Well, yes and no. The G80 is capable of more than what is implemented in the driver, and also some of the implemented driver features are actually not natively implemented in the hardware. I assume the GT200 is the same. They only implement the bits that are actually being used, and emulate the operations that are not natively supported. If a game comes along that needs a particular feature, and the game is high-profile enough for NV to care, NV will implement it in the driver (either in hardware if it is capable of it, or emulated if it's not).

    What they don't want to say is what the hardware is actually capable of. Of course, ATI can still get a reasonably good idea by looking at the pattern of performance anomalies and deducing which operations are emulated, so it's still just stupid paranoia that hurts developers.
  • B3an - Monday, June 16, 2008 - link

    @ Derek - I'd really appreciate this if you could reply...

    Games are tested at 2560x1600 in these benchmarks with the 9800GX2, and some games are even playable.
    Now when i do this with my GX2 at this res, a lot of the time even the menu screen is a slide show (often under 10FPS). Especially if any AA is enabled. Some games that do this are Crysis, GRID, UT3, Mass Effect, ET:QW... with older games it does not happen, only newer stuff with higher res textures.

    This never happened on my 8800GTX to the same extent. So i put it down to the GX2 not having enough memory bandwidth and enough usable VRAM for such high resolution.

    So could you explain how the GX2 is getting 64FPS @ 2560x1600 with 4x AA in ET: Quake Wars? As well as other games at that res + AA.
  • DerekWilson - Monday, June 16, 2008 - link

    i really haven't noticed the same issue with menu screens ... except in black and white 2 ... that one sucked and i remember complaining about it.

    to be fair i haven't tested this with mass effect, grid, or ut3.

    as for menu screens, they tend to be less memory intensive than the game itself. i'm really not sure why it happens when it does, but it does suck.

    i'll ask around and see if i can get an explanation of this problem and if i can i'll write about why and when it will happen.

    thanks,
    Derek
  • larson0699 - Monday, June 16, 2008 - link

    "Massiveness" and "aggressiveness"?

    I know the article is aimed to hit as hard as the product it's introducing us to, but put a little English into your English.

    "Mass" and "aggression".

    FWIW, the GTX's numbers are unreal. I can appreciate the power-saving capabilities during lesser load, but I agree, GT200 should've been 55nm. (6pin+8pin? There's a motherboard under that SLI setup??)
  • jobrien2001 - Monday, June 16, 2008 - link

    Seems Nvidia finally dropped the ball.

    -Power consumption and the price tag are really bad.
    -Performance isn't as expected.
    -Huge die

    I'm gonna wait for a die shrink or buy an ATI. The 4870 with GDDR5 seems promising from the early benchmarks... and for $350? Who in their right mind wouldn't buy one.
