A More Efficient Architecture

GPUs, like CPUs, work on streams of instructions called threads. While high-end CPUs work on as many as eight complex threads at a time, GPUs handle many more threads in parallel.

The table below shows just how many threads each generation of NVIDIA GPU can have in flight at the same time:

                         Fermi   GT200   G80
  Max Threads in Flight  24576   30720   12288


Fermi can't actually keep as many threads in flight as GT200. NVIDIA found that on GT200 the majority of compute workloads were bound by shared memory capacity, not thread count. Thus in Fermi the thread count went down and the shared memory size went up.
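The totals in the table above fall out of two per-chip limits multiplied together: the number of threads each SM can keep resident and the number of SMs on the die. As a quick sanity check (the per-SM thread limits and SM counts below are the commonly cited figures for each chip, not numbers taken from this article):

```python
# Threads in flight = resident threads per SM x number of SMs.
# Per-SM limits and SM counts are the commonly cited figures (assumption).
chips = {
    "G80":   {"threads_per_sm": 768,  "sms": 16},
    "GT200": {"threads_per_sm": 1024, "sms": 30},
    "Fermi": {"threads_per_sm": 1536, "sms": 16},
}

for name, c in chips.items():
    in_flight = c["threads_per_sm"] * c["sms"]
    print(f"{name}: {in_flight} threads in flight")
# G80: 12288, GT200: 30720, Fermi: 24576 -- matching the table
```

Note how Fermi's lower total comes entirely from halving the SM count relative to GT200 while only growing the per-SM limit by 50%.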

NVIDIA groups 32 threads into a unit called a warp (borrowed from weaving, where the warp is a group of parallel threads). In GT200 and G80, half of a warp was issued to an SM every clock cycle; in other words, it took two clocks to issue a full 32 threads to a single SM.

In previous architectures, the SM dispatch logic was closely coupled to the execution hardware. If you sent threads to the SFU, the entire SM couldn't issue new instructions until those instructions were done executing. If the only execution units in use were in your SFUs, the vast majority of your SM in GT200/G80 went unused. That's terrible for efficiency.

Fermi fixes this. There are two independent dispatch units at the front end of each SM in Fermi. These units are completely decoupled from the rest of the SM. Each dispatch unit can select and issue half of a warp every clock cycle. The threads can be from different warps in order to optimize the chance of finding independent operations.

There's a full crossbar between the dispatch units and the execution hardware in the SM. Each unit can dispatch threads to any group of units within the SM (with some limitations).

The rigid part of NVIDIA's threading architecture is that every thread in a warp must execute the same instruction at the same time. If they do, you get full utilization of your execution resources. If they don't, some units go idle.
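The cost of that rigidity is easy to model: when a warp's threads disagree on a branch, the hardware runs each side of the branch serially, with the non-participating lanes masked off. A toy model (the cycle counts are hypothetical, chosen just to show the shape of the penalty):

```python
WARP_SIZE = 32

def divergent_cost(threads_taking_branch, if_cycles, else_cycles):
    """Cycles a warp spends on an if/else, with divergent paths serialized."""
    cost = 0
    if threads_taking_branch > 0:          # some lanes run the if-path
        cost += if_cycles
    if threads_taking_branch < WARP_SIZE:  # the rest run the else-path
        cost += else_cycles
    return cost

# All 32 threads agree: only one path executes.
print(divergent_cost(32, 10, 10))  # 10 cycles
# A single straggler forces both paths to run back to back.
print(divergent_cost(31, 10, 10))  # 20 cycles
```

A single divergent thread doubles the warp's cost here, which is why keeping branches coherent across a warp matters so much.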

A single SM can execute:

  Fermi          FP32   FP64   INT   SFU   LD/ST
  Ops per clock    32     16    32     4      16


If you're executing FP64 instructions the entire SM can only run at 16 ops per clock. You can't dual issue FP64 and SFU operations.
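Those per-clock rates translate directly into peak throughput. A rough sketch of the math (the 16-SM count and the 1.5GHz shader clock are placeholder assumptions, since final clocks hadn't been announced; FMA counts as two floating-point operations):

```python
SMS = 16               # Fermi SM count (assumption)
SHADER_CLOCK = 1.5e9   # Hz -- placeholder, final clocks unannounced

def peak_gflops(ops_per_clock_per_sm):
    # Each op is an FMA = multiply + add = 2 floating-point operations.
    return ops_per_clock_per_sm * 2 * SMS * SHADER_CLOCK / 1e9

print(f"FP32 peak: {peak_gflops(32):.0f} GFLOPS")  # 1536
print(f"FP64 peak: {peak_gflops(16):.0f} GFLOPS")  # 768, exactly half of FP32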

The good news is that the SFU doesn't tie up the entire SM anymore. One dispatch unit can send 16 threads to the array of cores, while another can send 16 threads to the SFU. After two clocks, the dispatchers are free to send another pair of half-warps out again. As I mentioned before, in GT200/G80 the entire SM was tied up for a full 8 cycles after an SFU issue.
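The win from decoupled dispatch can be sketched as a simple issue-timeline model, using the cycle counts from the article (8 cycles of SM occupancy per SFU warp on GT200/G80, 2 clocks to issue a warp of ALU work). This is an illustration of the scheduling difference, not a cycle-accurate model of either chip:

```python
def gt200_issue_cycles(sfu_warps, alu_warps):
    """GT200/G80: an SFU issue ties up the whole SM for 8 cycles,
    so SFU and ALU work serialize."""
    return sfu_warps * 8 + alu_warps * 2

def fermi_issue_cycles(sfu_warps, alu_warps):
    """Fermi: two dispatch units feed the SFUs and cores independently,
    so total time is bounded by the busier pipeline."""
    return max(sfu_warps * 8, alu_warps * 2)

# A mixed workload of 4 SFU warps and 4 ALU warps:
print(gt200_issue_cycles(4, 4))  # 40 cycles -- ALUs idle during SFU work
print(fermi_issue_cycles(4, 4))  # 32 cycles -- ALU work hidden under SFU work
```

In the mixed case, all of the ALU work disappears under the SFU latency on Fermi, which is exactly the utilization win the article describes.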

The flexibility is nice, or rather, the inflexibility of GT200/G80 was horrible for efficiency and Fermi fixes that.

415 Comments

  • adilakkus - Friday, October 30, 2009 - link

    RayTracing is the way forward.


    http://adilakkus.blogspot.com
  • Fortesting - Wednesday, October 07, 2009 - link

    See this article:

    http://www.semiaccurate.com/2009/10/06/nvidia-kill...
  • Zool - Tuesday, October 06, 2009 - link

    Maybe CUDA is better for Tesla cards in supercomputers, which are closed platforms, but for anything else commercial, OpenCL will be better.
    This is a CUDA vs OpenCL test from SiSoftware: http://www.sisoftware.net/index.html?dir=qa&lo...
    The conclusion from that article: "We see little reason to use proprietary frameworks like CUDA or STREAM once public drivers supporting OpenCL are released - unless there are features your code depends on that are not included yet; even then, they will most likely be available as extensions (similar to OpenGL) pretty soon."
    It wouldn't be bad to see those kinds of tests on AnandTech. Something like GPUs vs CPUs tests with the same code.
  • Zool - Monday, October 05, 2009 - link

    I don't know about others, but the 8x increase in DP, which is one of the PR stunts, doesn't seem like much if you don't compare it to the weak GT200 DP numbers. The 5870 has something over 500 GFlops DP, and the GT200 had around 80 GFlops DP (though the Quadro and Tesla cards had higher shader clocks, I think). They will be happy if they reach 1.5 times the Radeon 5800's DP performance. In this PDF from NVIDIA's site http://www.nvidia.com/content/PDF/fermi_white_pape... they write that ECC will have a performance penalty from 5% to 20% (on Tesla cards you will have the option to turn it on/off; on GT cards it will be turned off).
  • Zool - Monday, October 05, 2009 - link

    I also want to add: if DP has increased 8 times from GT200, that's, let's say, around 650 GFlops; and if DP is half of the SP performance in Fermi (as they state), then I get 1300 GFlops SP (at the same clock speeds). For GT200 they stated 933 GFlops. Something is wrong here, maybe?
  • Zool - Monday, October 05, 2009 - link

    Actually, they state 30 FMA ops per clock for 240 CUDA cores in GT200 and 256 FMA ops per clock for 512 CUDA cores in Fermi. Which means, clock for clock and core for core, they increased DP performance 4 times.
  • SymphonyX7 - Saturday, October 03, 2009 - link

    Hi. I'm a long-time Anandtech reader (roughly 4 years already). I registered yesterday just because I wanted to give SiliconDoc a piece of my mind, but thankfully I ended up being rational and not replying anymore.

    Now that he's gone, I just want to know what you guys think of Fermi being another big chip. Is it safe to assume that Nvidia is losing more money than ATI on high-end models simply because the GTX chips are much bigger than their ATI counterparts? More so now that the HD 58xx cards have been released, which are faster overall than any of Nvidia's single-GPU solutions. Nvidia will be forced to further lower the price of their GTX cards. I'm still boggled as to why Nvidia would still cling to really big chips rather than go ATI's "efficiency" route. From what I'm reading, this card may focus more on professional applications than raw performance in games. Is it possible that this may simply be a technology demonstrator in the making, in addition to something that will "reassure" the market to prevent it from going ATI? I don't know why they should differentiate this much if it's intended to compete with ATI's offerings, unless that isn't entirely their intention...
  • Nakomis - Saturday, October 03, 2009 - link

    Boy, can I tell you I really wish SilDoc was still here. Anyone have his email address? I wanted to send him this:


    http://rss.slashdot.org/~r/Slashdot/slashdot/~3/9J...
  • - Saturday, October 03, 2009 - link

    There was no benchmark, not even a demo, during the so-called demonstration! This is very pathetic, and it looks like Nvidia won't even meet the December timeframe. To debug a chip that doesn't work properly might cost many months. To manufacture a chip, another 12 weeks. To develop the infrastructure, including drivers and card manufacturers, another few months. Therefore, late Q1 2010 or even 6/2010 might become realistic for a true launch and not a paper launch. What we could see at this demonstration was no more than the paper launch of the paper launch.
  • Nate0007 - Friday, October 09, 2009 - link

    Hi, I fully agree with you 100%.
    You seem to be one of very FEW people that actually see that or get it.
    You know what I cannot seem to understand?

    How can a few hundred or so people, who are knowledgeable about what they are about to see, or at least why they are attending the demonstration, just sit there and listen to one person stand up and make claims about his product with no proof?
    I understand how things are supposed to be, but have we all just become so naive as to believe whatever is pushed onto us through media (i.e. TV, radio, blogs, magazines, etc.)?
    I am not saying that what Jen-Hsun showed was NOT a real demo of a working Fermi card; I am just saying that there was, and still is, NO proof of any sort from anyone who was able to actually confirm or deny that it was.
    Until Nvidia actually shows a working sample of Fermi, even a rough demo model of it, so long as it's actually real, I will not believe it.
    There is a huge difference between someone making claims on the forums of sites like this or on blogs, and someone holding a news conference claiming what they have achieved.

    Next thing you know, someone will stand up and say they have discovered how to time travel and then show a video of just that.

    There is a difference between facts and reality.
