The GF100 Recap

NVIDIA first unveiled its GF100 (then called Fermi) architecture last September. If you've read our Fermi and GF100 architecture articles, you can skip this part. Otherwise, here's a quick refresher on what makes this chip tick.

First, the basics. NVIDIA’s GeForce GTX 480 and 470 are based on the GF100 chip, the gaming version of what was originally introduced last September as Fermi. GF100 goes into GeForce cards and Fermi goes into Tesla cards, but fundamentally the two chips are the same.

At a high level GF100 just looks like a bigger GT200, but a lot has changed, starting at the front end. Prior to GF100, NVIDIA had a large unified front end that handled all thread scheduling for the chip, as well as setup, rasterization and z-culling. Here’s the diagram we made for GT200 showing that:

NVIDIA's GT200

The grey boxes up top were shared by all of the compute clusters in the chip below. In GF100, the majority of that unified front end is chopped up and moved further down the pipeline. With the exception of the thread scheduling engine, everything else decreases in size, increases in quantity and moves down closer to the execution hardware. It makes sense. The larger these chips get, the harder it is to have big unified blocks feeding everything.

In the old days NVIDIA took a bunch of cores, gave them a cache, some shared memory and a couple of special function units and called the whole construct a Streaming Multiprocessor (SM). The GT200 took three of these SMs, added texture units and an L1 texture cache (as well as some scheduling hardware) and called it a Texture/Processor Cluster. The old GeForce GTX 280 had 10 of these TPCs and that’s what made up the execution engine of the GPU.

NVIDIA's GF100


With GF100, the TPC is gone. It’s now a Graphics Processing Cluster (GPC) made up of much larger SMs. Each SM now has 32 cores and there are four SMs per GPC. Each GPC gets its own raster engine, instead of the entire chip sharing a larger front end. There are four GPCs on a GF100 (though no GF100 shipping today has all of its SMs enabled, a concession to yields).

Each SM also has what NVIDIA calls a PolyMorph engine. This engine is responsible for all geometry execution and hardware tessellation, something NVIDIA expects to be heavily used in DX11 and future games. Between NV30 (GeForce FX 5800) and GT200 (GeForce GTX 280), the geometry performance of NVIDIA’s hardware only increased by roughly 3x, while shader performance increased by over 150x. GF100, by contrast, has 8x the geometry performance of GT200, a figure NVIDIA tells us it has measured in its labs. This is where NVIDIA hopes to have the advantage over AMD, assuming game developers do scale up geometry and tessellation use as much as NVIDIA is counting on.

NVIDIA also clocks the chip much differently than before. In the GT200 days we had a core clock, a shader clock and a memory clock. The core clock is now almost completely out of the picture: only the ROPs and L2 cache operate on a separate clock domain, while everything else runs at a derivative of the shader clock. The execution hardware runs at the full shader clock, and the texture units, PolyMorph and raster engines all run at half the shader clock.
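
To put the clock domains in perspective, here's a minimal sketch using the GTX 480's published clocks (roughly 700MHz core/ROP, 1401MHz shader) as illustrative inputs; it simply derives the half-rate domains described above.

```cpp
// Sketch: GF100's clock domains derived from the shader ("hot") clock.
// Clock values are the GTX 480's published figures, used only as an example.
#include <cstdio>

int main() {
    const double shader_mhz = 1401.0; // CUDA cores run at the full shader clock
    const double core_mhz   = 700.0;  // separate domain: ROPs and L2 cache

    // Texture units, PolyMorph and raster engines run at half the shader clock.
    const double half_shader_mhz = shader_mhz / 2.0;

    std::printf("Execution hardware       : %.0f MHz\n", shader_mhz);
    std::printf("Texture/PolyMorph/Raster : %.1f MHz\n", half_shader_mhz);
    std::printf("ROPs + L2 cache          : %.0f MHz\n", core_mhz);
    return 0;
}
```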

Cores and Memory

While we’re looking at GF100 today through gaming-colored glasses, NVIDIA is also trying to build an army of GPU compute cards. In serving that master, GF100’s architecture also differs tremendously from its predecessors.

All of the processing done at the core level is now to IEEE spec. That’s IEEE 754-2008 for floating point math (same as RV870/5870) and full 32-bit for integers. In the past, 32-bit integer multiplies had to be emulated; the hardware could only do 24-bit integer muls. That silliness is now gone. Fused multiply-add is also included. The goal was to avoid any cheesy tricks to implement math: everything should be industry standards compliant and give you the results you’d expect. Double precision floating point (FP64) performance is improved tremendously. Peak 64-bit FP execution rate is now 1/2 of the 32-bit rate; it used to be 1/8 (AMD's is 1/5).
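
Since a fused multiply-add counts as two floating point operations per core per clock, peak throughput is a simple product. A back-of-the-envelope sketch, assuming the GTX 480's 480 cores and 1401MHz shader clock (the 1/2-rate FP64 number is the architectural peak; shipping GeForce boards may be capped below it):

```cpp
// Back-of-the-envelope peak FLOPS for GF100, assuming one FMA
// (2 floating point ops) per core per shader clock.
#include <cstdio>

int main() {
    const double cores      = 480;     // GTX 480
    const double shader_ghz = 1.401;   // shader clock

    const double fp32_gflops = cores * 2.0 * shader_ghz;  // ~1345 GFLOPS
    // Architecturally, FP64 runs at 1/2 the FP32 rate (GT200 was 1/8);
    // shipping GeForce boards may be limited below this.
    const double fp64_gflops = fp32_gflops / 2.0;

    std::printf("Peak FP32: %.0f GFLOPS\n", fp32_gflops);
    std::printf("Peak FP64 (architectural): %.0f GFLOPS\n", fp64_gflops);
    return 0;
}
```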


GT200 SM

In addition to the cores, each SM has a Special Function Unit (SFU) used for transcendental math and interpolation. In GT200 this SFU had two pipelines; in GF100 it has four. So while NVIDIA increased general math horsepower by 4x per SM, SFU resources only doubled. The infamous missing MUL has been pulled out of the SFU, so we shouldn’t have to quote peak single and dual-issue arithmetic rates for NVIDIA GPUs any longer.


GF100 SM

NVIDIA’s GT200 had 16KB of shared memory in each SM. This didn’t function as a cache; it was software-managed memory. GF100 increases the size to 64KB, and it can now operate as a real L1 cache. In order to maintain compatibility with CUDA applications written for G80/GT200, the 64KB can be configured as either 16KB/48KB or 48KB/16KB of shared memory/L1 cache. GT200 did have a 12KB L1 texture cache, but that was mostly useless for CUDA applications; it remains intact for graphics operations. All four GPCs share a large 768KB L2 cache.
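
For CUDA developers, the split is requested per kernel through the runtime. A minimal sketch is below; the kernel itself is a hypothetical placeholder, but cudaFuncSetCacheConfig and the cudaFuncCachePrefer* values are the CUDA runtime's mechanism for choosing one configuration or the other.

```cpp
// Sketch: requesting a 64KB shared-memory/L1 split on a Fermi-class SM.
// The kernel is hypothetical; cudaFuncSetCacheConfig is the real API call.
#include <cuda_runtime.h>

__global__ void stencilKernel(const float *in, float *out, int n)
{
    __shared__ float tile[256];                      // data staged in shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    if (i < n) out[i] = tile[threadIdx.x];
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    // Shared-memory-heavy kernels can ask for 48KB shared / 16KB L1;
    // cache-friendly code could request cudaFuncCachePreferL1 (16KB/48KB) instead.
    cudaFuncSetCacheConfig(stencilKernel, cudaFuncCachePreferShared);

    stencilKernel<<<n / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```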

Each SM has four texture units, each capable of 1 texture address and 4 texture sample ops. Compared to GT200, GF100 has more texture sampling units but fewer texture addressing units. All texture hardware runs at 1/2 the shader clock, not the core clock.

NVIDIA Architecture Comparison         G80   G92   GT200   GF100   GF100 Full*
Streaming Processors per TPC/GPC        16    16      24     128     128
Texture Address Units per TPC/GPC        4     8       8      16      16
Texture Filtering Units per TPC/GPC      8     8       8      64      64
Total SPs                              128   128     240     480     512
Total Texture Address Units             32    64      80      60      64
Total Texture Filtering Units           64    64      80     240     256
*There are currently no full implementations of GF100; the GF100 column reflects the GTX 480

 

Last but not least, this brings us to the ROPs. The ROPs have been reorganized: there are now 48 of them in six partitions of eight, with a 64-bit memory channel serving each partition. The ROPs now share the L2 cache with the rest of GF100, whereas under GT200 they had their own L2 cache. Each ROP can do 1 regular 32-bit pixel per clock, 1 FP16 pixel over 2 clocks, or 1 FP32 pixel over 4 clocks, giving GF100 the ability to retire 48 regular pixels per clock. The ROPs are clocked together with the L2 cache.
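
Those per-clock rates translate directly into fill rate. A rough sketch, assuming the GTX 480's 48 ROPs and roughly 700MHz ROP/L2 clock as illustrative inputs:

```cpp
// Rough pixel fill rate implied by the ROP organization. The 48 ROPs and
// ~700MHz ROP/L2 clock are the GTX 480's figures, used only as an example.
#include <cstdio>

int main() {
    const double rops    = 48;
    const double clk_ghz = 0.700;   // ROP/L2 clock domain

    std::printf("32-bit pixels: %.1f Gpixels/s\n", rops * clk_ghz);        // 1 pixel per clock
    std::printf("FP16 pixels  : %.1f Gpixels/s\n", rops * clk_ghz / 2.0);  // 1 pixel per 2 clocks
    std::printf("FP32 pixels  : %.1f Gpixels/s\n", rops * clk_ghz / 4.0);  // 1 pixel per 4 clocks
    return 0;
}
```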

Threads and Scheduling

While NVIDIA’s G80 didn’t start out as a compute chip, GF100/Fermi was clearly built with general purpose compute in mind from the start. Previous architectures required that all SMs in the chip work on the same kernel (function/program/loop) at the same time. If the kernel wasn’t wide enough to occupy all of the execution hardware, that hardware went idle and efficiency dropped as a result. Remember, these chips are only powerful when they’re operating near 100% utilization.

In this generation the scheduler can execute threads from multiple kernels in parallel, which allows NVIDIA to scale the number of cores in the chip without sacrificing efficiency.


GT200 (left) vs. GF100 (right)
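
From the developer's side, this is what concurrent kernel execution looks like in CUDA: independent kernels issued into separate streams. The kernels below are hypothetical stand-ins; the point is simply that on GT200-class hardware they would run back to back, while GF100 can keep both on the chip at once.

```cpp
// Sketch: two small, independent kernels launched into separate CUDA streams.
// Neither fills the GPU on its own; Fermi's scheduler can run them concurrently.
#include <cuda_runtime.h>

__global__ void scaleKernel(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

__global__ void offsetKernel(float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += 1.0f;
}

int main() {
    const int n = 1 << 13;   // deliberately small grids
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Issued into different streams, the two kernels are free to overlap on GF100.
    scaleKernel<<<n / 256, 256, 0, s1>>>(x, n);
    offsetKernel<<<n / 256, 256, 0, s2>>>(y, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```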

With its stronger compute focus, GF100 also improves the switch time between graphics and CUDA modes by a factor of 10. It’s now fast enough to switch back and forth between modes multiple times within a single frame, which should allow for more elaborate GPU accelerated physics.

NVIDIA’s GT200 was a thread monster. The chip supported over 30,000 threads in flight. With GF100, NVIDIA scaled that number down to roughly 24K as it found that the chips weren’t thread bound but rather memory bound. In order to accommodate the larger shared memory per SM, max thread count went down.

                        GF100   GT200   G80
Max Threads in Flight   24576   30720   12288
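
Those ceilings fall directly out of the per-SM thread limit multiplied by the SM count. A quick sanity check, assuming the commonly cited per-SM limits (768 threads for G80, 1024 for GT200, 1536 for GF100) and a full 16-SM GF100:

```cpp
// Sanity check: max threads in flight = per-SM thread limit x number of SMs.
// Per-SM limits are the commonly cited values for each generation.
#include <cstdio>

int main() {
    std::printf("G80  : %d\n", 768  * 16);  // 12288
    std::printf("GT200: %d\n", 1024 * 30);  // 30720
    std::printf("GF100: %d\n", 1536 * 16);  // 24576 (full 16-SM chip)
    return 0;
}
```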

 

NVIDIA groups 32 threads into a unit called a warp (the term is borrowed from weaving, where a warp is a group of parallel threads). In GT200 and G80, half of a warp was issued to an SM every clock cycle; in other words, it took two clocks to issue a full 32 threads to a single SM.

In previous architectures, the SM dispatch logic was closely coupled to the execution hardware. If you sent threads to the SFU, the entire SM couldn't issue new instructions until those instructions were done executing. If the only execution units in use were the SFUs, the vast majority of the SM in GT200/G80 went unused. That's terrible for efficiency.

Fermi fixes this. There are two independent dispatch units at the front end of each SM in Fermi. These units are completely decoupled from the rest of the SM. Each dispatch unit can select and issue half of a warp every clock cycle. The threads can be from different warps in order to optimize the chance of finding independent operations.

There's a full crossbar between the dispatch units and the execution hardware in the SM. Each unit can dispatch threads to any group of units within the SM (with some limitations).

One inflexibility that remains in NVIDIA's threading architecture is that every thread in a warp must be executing the same instruction at the same time. If they are, you get full utilization of your resources. If they aren't, some units go idle.
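
The practical consequence for CUDA code is branch divergence within a warp. The kernels below are hypothetical examples: in the first, adjacent threads take different sides of a branch, so each warp serializes the two paths; in the second, the branch is aligned to warp boundaries and every warp stays on a single path.

```cpp
// Sketch of warp divergence: the SIMT constraint described above in code form.
#include <cuda_runtime.h>

__global__ void divergentKernel(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Odd and even threads sit in the same 32-thread warp, so the warp
    // executes both paths one after the other with half its lanes idle.
    if (i % 2 == 0)
        out[i] = in[i] * in[i];
    else
        out[i] = sqrtf(in[i]);
}

__global__ void coherentKernel(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Branching on a warp-aligned quantity keeps all 32 lanes together,
    // so each warp still issues a single instruction stream.
    if ((i / 32) % 2 == 0)
        out[i] = in[i] * in[i];
    else
        out[i] = sqrtf(in[i]);
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    divergentKernel<<<n / 256, 256>>>(out, in, n);
    coherentKernel <<<n / 256, 256>>>(out, in, n);
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```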

A single SM can execute:

GF100           FP32   FP64   INT   SFU   LD/ST
Ops per clock     32     16    32     4      16

 

If you're executing FP64 instructions the entire SM can only run at 16 ops per clock. You can't dual issue FP64 and SFU operations.

The good news is that the SFU doesn't tie up the entire SM anymore. One dispatch unit can send 16 threads to the array of cores, while another can send 16 threads to the SFU. After two clocks, the dispatchers are free to send another pair of half-warps out again. As I mentioned before, in GT200/G80 the entire SM was tied up for a full 8 cycles after an SFU issue.

The flexibility is nice, or rather, the inflexibility of GT200/G80 was horrible for efficiency and Fermi fixes that.

Comments

  • deputc26 - Friday, March 26, 2010 - link

    "GTX 480 only has 11% more memory bandwidth than the GTX 285, and the 15% less than the GTX 285."

    and holy server lag batman.
  • 529th - Friday, March 26, 2010 - link

    Thanks for the review :)
  • ghost2code - Saturday, March 27, 2010 - link

    I'm really impressed by this article; the author did a great job ;) But about Fermi: it seems to be a really good product for scientific work, but for gamers I'm not so sure. The price tag, power consumption, noise! That's all too much for only 10-15% more performance than the cheaper and much more reasonable Radeon. I guess Fermi needs some final touches from Nvidia and for now it's not a final, well tested product. Temps around 100 aren't good for the PCB, GPU and all the electronics, and I don't believe it won't matter for the lifespan and stability of the card. I'm glad Fermi finally came but I'm disappointed, at least for now.
  • LuxZg - Saturday, March 27, 2010 - link

    I just don't know why GTX480 is compared to HD5870, and same for GTX470 vs HD5850.. GTX470 is right in the middle between two single-GPU Radeons, and just the same can be said for GTX480 sitting right in between HD5970 & HD5870.

    Prices of these cards as presented by nVidia/ATI:
    HD5970 - 599$
    GTX480 - 499$
    HD5870 - 399$
    GTX470 - 349$
    HD5850 - 299$

    I know GTX480 is single GPU, so by this logic you'll compare it to HD5870. But GTX480 is top of the line nVidia graphics card, and HD5970 is top of the line ATI card. Besides, ATI's strategy for last 3 product cycles is producing small(er) chips and go multi-GPU, while nVidia wants to go single-monolithic-GPU way.. So following this logic, indeed GTX480 should be compared to HD5970 rather than HD5870.

    Anyway, conclusion of this article is all fine, telling both strengths and the weaknesses of solutions from both camps, but I believe readers weren't told straightforward enough that these cards don't cost the same... And HD5970 was left out of most of the comparisons (textual ones).

    If I personally look at these cards, they are all worth their money. nVidia cards are probably more future-proof with their commitment to future tech (tessellation, GPGPU) but AMD cards are better for older and current (and close future) titles. And they are less hot, and less noisy, which most gamers do pay a lot of attention to. Not to say - this is first review of new card in which no one mentioned GPU overclocking. I'm guessing that 90+C temperatures won't allow much better clocks in the near future ;)
  • Wwhat - Sunday, March 28, 2010 - link

    In regards to the temperature and noise: there's always watercooling to go to, I mean if you have so much money to throw at the latest card you might as well throw in some watercooling too.
    It's too pricey for me though, I guess I'll wait for the 40nm process to be tweaked; spending so much money on a gfx card is silly if you know a while later something new will come around that's way better, and it's just not worth committing so much money to it in my view.
    It's a good card though (when watercooled), nice stuff in it and faster on all fronts, but it also seems an early sample of new roads nvidia went down and I expect they will have much improved stuff later on (if still in business)
  • LuxZg - Tuesday, March 30, 2010 - link

    Like I've said before - if you want FASTEST (and that's usually what you want if you have money to throw away), you'll be buying HD5970. Or you'll be buying HD5970+water cooling as well..
  • ViRGE - Saturday, March 27, 2010 - link

    I'm not sure where you're getting that the HD5970 is a $600 card. In the US at least, that's a $700 card (or more) everywhere.
  • wicko - Sunday, March 28, 2010 - link

    Honestly I don't even know if it should be mentioned at all even if it is 600, because there is almost no stock anywhere.
  • LuxZg - Tuesday, March 30, 2010 - link

    Oh, don't make me laugh, please! :D In that case this review shouldn't be up at all, or it should be called "PREview".. or have you actually seen any stock of GTX470/480 around?
  • LuxZg - Sunday, March 28, 2010 - link

    It's AMD's & nVidia's recommended prices, and you can see them all in Anandtech's own articles:
    http://www.anandtech.com/video/showdoc.aspx?i=3783 (nvidia prices)
    http://www.anandtech.com/video/showdoc.aspx?i=3746 (ATI single-gpu cards)
    http://www.anandtech.com/video/showdoc.aspx?i=3679 (ATI single/dual GPU cards)

    It is not my fault that your US shops bumped up the price in the complete absence of competition in the high end market. But the US is not the only market in the world either.

    You want to compare with real world prices? Here, prices from Croatia, Europe..

    HD5970 - 4290kn = 591€ (recommended is 599$, which is usually 599€ in EU)
    GTX480 - not listed, recommended is 499$/€
    HD5870 - 2530kn = 348€ (recommended is 399$/399€ in EU)
    GTX470 - not listed, recommended is 349$/€
    HD5850 - 1867kn = 257€ (recommended is 299$/299€ in EU)

    So let's say that European prices for GTX will be a bit lower than recommended ones, GTX480 would still be ~120-130€ pricier than HD5870, and HD5970 would be same ~120-130€ more expensive than GTX480.
    As for the lower priced nVidia card, it's again firmly in the middle between HD5850 & HD5870.

    Point is that there's no clear price comparison at the moment, and the article's conclusion should be clear on that.
    Person that wants the FASTEST CARD will stretch for another 100$/€ to buy HD5970. Especially since this means lower noise, lower consumption, and lower heat. This all combined means you can save a few $/€ on PSU, case, cooling, and earplugs, throwing HD5970 in the arm reach of the GTX480 (price-wise) while allowing for better speeds.

    As for GTX470, again, lower consumption/heat/noise with ATI cards which means less expenses for PSU/cooling, and saving money on electrical bills. For me, well worth the 50€/$ difference in price, in fact, I'd rather spend 50$/€ more to buy HD5870 which is faster, less noisy, doesn't require me to buy new PSU (I own HD4890, which was overclocked for a while, so HD5870 would work fine just as well), and will save me 50W per hour of any game I play.. which will all make it CHEAPER than GTX470 in the long run.

    So let's talk again - why isn't the conclusion made a bit more straightforward for end users, and why is the HD5970 completely gone from the conclusion??
