The GF100 Recap

NVIDIA first unveiled its GF100 (then called Fermi) architecture last September. If you've read our Fermi and GF100 architecture articles, you can skip this part. Otherwise, here's a quick refresher on what makes this chip tick.

First, let’s refresh the basics. NVIDIA’s GeForce GTX 480 and 470 are based on the GF100 chip, the gaming version of what was originally introduced last September as Fermi. GF100 goes into GeForce cards and Fermi goes into Tesla cards, but fundamentally the two chips are the same.

At a high level, GF100 just looks like a bigger GT200; however, a lot has changed, starting at the front end. Prior to GF100, NVIDIA had a large unified front end that handled all thread scheduling for the chip, as well as setup, rasterization and z-culling. Here’s the diagram we made for GT200 showing that:

NVIDIA's GT200

The grey boxes up top were shared by all of the compute clusters in the chip below. In GF100, the majority of that unified front end is chopped up and moved further down the pipeline. With the exception of the thread scheduling engine, everything else decreases in size, increases in quantity, and moves down closer to the execution hardware. It makes sense: the larger these chips get, the harder it is to have big unified blocks feeding everything.

In the old days NVIDIA took a bunch of cores, gave them a cache, some shared memory and a couple of special function units and called the whole construct a Streaming Multiprocessor (SM). The GT200 took three of these SMs, added texture units and an L1 texture cache (as well as some scheduling hardware) and called it a Texture/Processor Cluster. The old GeForce GTX 280 had 10 of these TPCs and that’s what made up the execution engine of the GPU.

NVIDIA's GF100


With GF100, the TPC is gone. It’s now a Graphics Processing Cluster (GPC), made up of much larger SMs. Each SM now has 32 cores and there are four SMs per GPC. Each GPC gets its own raster engine, instead of the entire chip sharing a larger front end. There are four GPCs on a GF100 (however, no GF100 shipping today has all of its SMs enabled, in order to improve yields).

Each SM also has what NVIDIA is calling a PolyMorph engine. This engine is responsible for all geometry execution and hardware tessellation, something NVIDIA expects to be heavily used in DX11 and future games. Between NV30 (GeForce FX 5800) and GT200 (GeForce GTX 280), the geometry performance of NVIDIA’s hardware only increased roughly 3x, while shader performance increased by over 150x. GF100, by contrast, has 8x the geometry performance of GT200, a figure NVIDIA tells us it has measured in its labs. This is where NVIDIA hopes to have the advantage over AMD, assuming game developers scale up geometry and tessellation use as much as NVIDIA is counting on.

NVIDIA also clocks the chip quite differently than before. In the GT200 days we had a core clock, a shader clock and a memory clock. The core clock is now almost completely out of the picture: only the ROPs and L2 cache operate on a separate clock domain. Everything else runs at a derivative of the shader clock. The execution hardware runs at the full shader clock speed, while the texture units, PolyMorph and raster engines all run at 1/2 the shader clock speed.
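To make those ratios concrete, here's a quick sketch using the GTX 480's reference clocks as assumed inputs (roughly 1401MHz shader, 700MHz ROP/L2); the ratios come from NVIDIA, the specific clock values are our example figures:

```cuda
// Sketch of GF100's clock domains, assuming GTX 480 reference clocks
// (1401MHz shader, 700MHz ROP/L2). Everything besides the ROPs and L2
// runs at a fixed ratio of the shader clock.
#include <cstdio>

int main() {
    const double shader_mhz = 1401.0;  // assumed shader (hot) clock
    const double rop_l2_mhz = 700.0;   // assumed ROP/L2 clock domain

    const double core_mhz  = shader_mhz;        // CUDA cores: full shader clock
    const double fixed_mhz = shader_mhz / 2.0;  // texture, PolyMorph, raster engines

    std::printf("CUDA cores:               %.1f MHz\n", core_mhz);
    std::printf("Texture/PolyMorph/raster: %.1f MHz\n", fixed_mhz);
    std::printf("ROPs + L2:                %.1f MHz\n", rop_l2_mhz);
    return 0;
}
```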

Cores and Memory

While we’re looking at GF100 today through gaming-colored glasses, NVIDIA is also trying to build an army of GPU compute cards. In serving that master, GF100’s architecture also differs tremendously from its predecessors.

All of the processing done at the core level is now to IEEE spec. That’s IEEE 754-2008 for floating point math (same as RV870/5870) and full 32-bit for integers. In the past, 32-bit integer multiplies had to be emulated; the hardware could only do 24-bit integer muls. That silliness is now gone. Fused multiply-add is also included. The goal was to avoid doing any cheesy tricks to implement math: everything should be industry-standards compliant and give you the results you’d expect. Double precision floating point (FP64) performance is improved tremendously. The peak 64-bit FP execution rate is now 1/2 the 32-bit FP rate; it used to be 1/8 (AMD's is 1/5).
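As a back-of-the-envelope illustration of what those ratios mean, here's a sketch of peak arithmetic rates counting an FMA as two operations per core per clock; the 480-core count and 1401MHz shader clock are assumed GTX 480 figures, and the 1/2-rate FP64 number is the architectural peak (shipping GeForce boards may be capped below it):

```cuda
// Back-of-the-envelope peak arithmetic rates for a 480-core GF100 at an
// assumed 1401MHz shader clock. FMA counts as 2 FLOPs per core per clock.
#include <cstdio>

int main() {
    const int    cores      = 480;    // GTX 480: 15 SMs x 32 cores
    const double shader_ghz = 1.401;  // assumed shader clock

    const double fp32_gflops = cores * 2.0 * shader_ghz;  // ~1345 GFLOPS
    const double fp64_gflops = fp32_gflops / 2.0;         // architectural 1/2 rate

    std::printf("Peak FP32: %.0f GFLOPS\n", fp32_gflops);
    std::printf("Peak FP64 (architectural 1/2 rate): %.0f GFLOPS\n", fp64_gflops);
    return 0;
}
```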


GT200 SM

In addition to the cores, each SM has a Special Function Unit (SFU) used for transcendental math and interpolation. In GT200 this SFU had two pipelines; in GF100 it has four. While NVIDIA increased general math horsepower by 4x per SM, SFU resources only doubled. The infamous missing MUL has also been pulled out of the SFU, so we shouldn’t have to quote peak single and dual-issue arithmetic rates for NVIDIA GPUs any longer.


GF100 SM

NVIDIA’s GT200 had 16KB of shared memory in each SM. This didn’t function as a cache; it was software-managed memory. GF100 increases the size to 64KB, which can now operate as a real L1 cache. In order to maintain compatibility with CUDA applications written for G80/GT200, the 64KB can be configured as either 16KB/48KB or 48KB/16KB of shared memory/L1 cache. GT200 did have a 12KB L1 texture cache, but that was mostly useless for CUDA applications; it remains intact for graphics operations. All four GPCs share a large 768KB L2 cache.
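On the CUDA side, that split is exposed as a per-kernel cache preference in the runtime API. A minimal sketch (the kernel itself is a hypothetical placeholder):

```cuda
// Minimal sketch: choosing between the 48KB L1 / 16KB shared and
// 16KB L1 / 48KB shared configurations on Fermi. The kernel is a
// hypothetical placeholder.
#include <cuda_runtime.h>

__global__ void scaleKernel(float *data) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data[idx] *= 2.0f;
}

int main() {
    // Ask for the larger L1 cache (48KB L1 / 16KB shared) for this kernel...
    cudaFuncSetCacheConfig(scaleKernel, cudaFuncCachePreferL1);
    // ...or prefer shared memory (48KB shared / 16KB L1) instead:
    // cudaFuncSetCacheConfig(scaleKernel, cudaFuncCachePreferShared);

    float *d_data;
    cudaMalloc(&d_data, 256 * sizeof(float));
    scaleKernel<<<1, 256>>>(d_data);
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```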

Each SM has four texture units, each capable of 1 texture address and 4 texture sample ops per clock. GF100 has more texture sampling units but fewer texture addressing units than GT200. All texture hardware runs at 1/2 the shader clock rather than the core clock.

NVIDIA Architecture Comparison        G80   G92   GT200   GF100   GF100 Full*
Streaming Processors per TPC/GPC       16    16      24     128         128
Texture Address Units per TPC/GPC       4     8       8      16          16
Texture Filtering Units per TPC/GPC     8     8       8      64          64
Total SPs                             128   128     240     480         512
Total Texture Address Units            32    64      80      60          64
Total Texture Filtering Units          64    64      80     240         256
*There are currently no full implementations of GF100; the column to the left (GF100) is the GTX 480.

 

Last but not least, this brings us to the ROPs. The ROPs have been reorganized: there are now 48 of them in 6 partitions of 8, with a 64-bit memory channel serving each partition. The ROPs now share the L2 cache with the rest of GF100, whereas under GT200 they had their own L2 cache. Each ROP can do 1 regular 32-bit pixel per clock, 1 FP16 pixel over 2 clocks, or 1 FP32 pixel over 4 clocks, giving GF100 the ability to retire 48 regular pixels per clock. The ROPs are clocked together with the L2 cache.
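A quick sketch of what that works out to for the GTX 480, assuming its 48 ROPs run at a 700MHz ROP/L2 clock (the clock is our assumed figure):

```cuda
// Rough ROP throughput sketch: 48 ROPs at an assumed 700MHz ROP/L2 clock.
// FP16 pixels take 2 clocks each, FP32 pixels take 4.
#include <cstdio>

int main() {
    const int    rops    = 48;
    const double rop_ghz = 0.700;  // assumed ROP/L2 clock

    std::printf("32-bit pixels: %.1f Gpixels/s\n", rops * rop_ghz);        // 33.6
    std::printf("FP16 pixels:   %.1f Gpixels/s\n", rops * rop_ghz / 2.0);  // 16.8
    std::printf("FP32 pixels:   %.1f Gpixels/s\n", rops * rop_ghz / 4.0);  // 8.4
    return 0;
}
```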

Threads and Scheduling

While NVIDIA’s G80 didn’t start out as a compute chip, GF100/Fermi was clearly built with general-purpose compute in mind from the start. Previous architectures required that all SMs in the chip work on the same kernel (function/program/loop) at the same time. If the kernel wasn’t wide enough to occupy all of the execution hardware, that hardware went idle and efficiency dropped as a result. Remember, these chips are only powerful when they’re operating near 100% utilization.

In this generation the scheduler can execute threads from multiple kernels in parallel, which allows NVIDIA to scale the number of cores in the chip without decreasing efficiency.
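A minimal CUDA sketch of what this looks like from the programmer's side: two small, independent kernels (both hypothetical) launched into separate streams can share the chip on Fermi instead of running back to back:

```cuda
// Minimal sketch of concurrent kernel execution on Fermi: two small,
// independent (hypothetical) kernels launched into separate streams can
// overlap on the GPU instead of running serially.
#include <cuda_runtime.h>

__global__ void kernelA(float *x) { x[threadIdx.x] += 1.0f; }
__global__ void kernelB(float *y) { y[threadIdx.x] *= 2.0f; }

int main() {
    float *d_x, *d_y;
    cudaMalloc(&d_x, 64 * sizeof(float));
    cudaMalloc(&d_y, 64 * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Neither launch fills the chip, so on GF100 they may run concurrently;
    // on GT200/G80 they would have been serialized.
    kernelA<<<1, 64, 0, s1>>>(d_x);
    kernelB<<<1, 64, 0, s2>>>(d_y);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```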


GT200 (left) vs. GF100 (right)

With its stronger compute focus, GF100 also improves the switch time between graphics mode and CUDA mode by a factor of 10. It’s now fast enough to switch back and forth between modes multiple times within a single frame, which should allow for more elaborate GPU-accelerated physics.

NVIDIA’s GT200 was a thread monster. The chip supported over 30,000 threads in flight. With GF100, NVIDIA scaled that number down to roughly 24K as it found that the chips weren’t thread bound but rather memory bound. In order to accommodate the larger shared memory per SM, max thread count went down.

                        GF100   GT200   G80
Max Threads in Flight   24576   30720   12288
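Those totals fall straight out of the per-SM limits. Here's a quick sanity check assuming 48 resident warps per SM on GF100, 32 on GT200 and 24 on G80 (the per-SM warp limits are taken from NVIDIA's compute capability specs, not from the table above):

```cuda
// Sanity-check of the max-threads-in-flight figures from per-SM limits
// (per-SM resident warp counts assumed from CUDA compute capability specs).
#include <cstdio>

int main() {
    const int warp_size = 32;
    // {SM count, max resident warps per SM}
    const int gf100[2] = {16, 48};  // full GF100
    const int gt200[2] = {30, 32};
    const int g80[2]   = {16, 24};

    std::printf("GF100: %d threads\n", gf100[0] * gf100[1] * warp_size);  // 24576
    std::printf("GT200: %d threads\n", gt200[0] * gt200[1] * warp_size);  // 30720
    std::printf("G80:   %d threads\n", g80[0] * g80[1] * warp_size);      // 12288
    return 0;
}
```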

 

NVIDIA groups 32 threads into a unit called a warp (a term borrowed from weaving, where the warp is a group of parallel threads). In GT200 and G80, half of a warp was issued to an SM every clock cycle; in other words, it took two clocks to issue a full 32-thread warp to a single SM.

In previous architectures, the SM dispatch logic was closely coupled to the execution hardware. If you sent threads to the SFU, the entire SM couldn't issue new instructions until those instructions were done executing. If the only execution units in use were in your SFUs, the vast majority of your SM in GT200/G80 went unused. That's terrible for efficiency.

Fermi fixes this. There are two independent dispatch units at the front end of each SM in Fermi. These units are completely decoupled from the rest of the SM. Each dispatch unit can select and issue half of a warp every clock cycle. The threads can be from different warps in order to optimize the chance of finding independent operations.

There's a full crossbar between the dispatch units and the execution hardware in the SM. Each unit can dispatch threads to any group of units within the SM (with some limitations).

The one remaining inflexibility in NVIDIA's threading architecture is that every thread in a warp must be executing the same instruction at the same time. If they are, then you get full utilization of your resources. If they aren't, then some units go idle.
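A short CUDA sketch of what that looks like in code (the kernel is hypothetical): when a branch splits the threads within a warp, the hardware runs both paths serially with part of the warp masked off, while a branch that falls on warp boundaries costs nothing extra:

```cuda
// Hypothetical kernel illustrating warp divergence. Threads execute in
// groups of 32 (a warp); if a branch splits a warp, both sides run
// serially with the inactive lanes masked off.
#include <cuda_runtime.h>

__global__ void divergenceExample(float *out) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Divergent: odd and even lanes of the same warp take different paths,
    // so every warp executes both branches one after the other.
    if (tid % 2 == 0)
        out[tid] = 1.0f;
    else
        out[tid] = 2.0f;

    // Non-divergent: all 32 lanes of any given warp agree on the branch,
    // so each warp runs only one side at full utilization.
    if ((tid / 32) % 2 == 0)
        out[tid] += 10.0f;
    else
        out[tid] += 20.0f;
}

int main() {
    float *d_out;
    cudaMalloc(&d_out, 128 * sizeof(float));
    divergenceExample<<<1, 128>>>(d_out);  // 128 threads = 4 warps
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```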

A single SM can execute:

GF100           FP32   FP64   INT   SFU   LD/ST
Ops per clock     32     16    32     4      16

 

If you're executing FP64 instructions, the entire SM can only run at 16 ops per clock, and you can't dual-issue FP64 and SFU operations.

The good news is that the SFU doesn't tie up the entire SM anymore. One dispatch unit can send 16 threads to the array of cores, while another can send 16 threads to the SFU. After two clocks, the dispatchers are free to send another pair of half-warps out again. As I mentioned before, in GT200/G80 the entire SM was tied up for a full 8 cycles after an SFU issue.

The flexibility is nice, or rather, the inflexibility of GT200/G80 was horrible for efficiency and Fermi fixes that.
