The GF104/GF110 Refresher: Different Architecture & Different Transistors

For all practical purposes GF100 is the Fermi base design, but for sub-high-end cards in particular NVIDIA has made a number of changes since we first saw the Fermi architecture a year and a half ago. For those of you who don’t regularly keep up with the latest NVIDIA hardware releases, we’re going to quickly recap what makes GF114 and the GTX 560 Ti different from the original GF100/GF110 Fermi architecture, and in turn what makes GF114 different from GF104 through NVIDIA’s transistor optimizations. If you’re already familiar with all of this, feel free to skip ahead.

With that said, let’s start with the architecture. The GF100/GF110 design is ultimately the compute and graphics monster that NVIDIA meant for Fermi to be. It has fantastic graphics performance, and it also has extremely solid GPU computing performance in the right scenarios, which is why GF100/GF110 is the backbone of not just NVIDIA’s high-end video cards, but also their Tesla line of GPU computing cards.

But Fermi’s compute characteristics only make complete sense at the high end: large institutions utilizing GPU computing have no need for weaker GPUs in their servers, and meanwhile home users don’t need features like ECC or full-speed FP64 (at least not at this time) so much as they need a more reasonably priced graphics card. As a result only the high-end GF100/GF110 GPUs feature Fermi’s base design, while GF104 and later GPUs use a tweaked design that strips away some aspects of Fermi’s GPU compute hardware while leaving much of the graphics hardware intact.

NVIDIA GF104 SM

With GF104 we saw the first GPU to use NVIDIA’s streamlined Fermi architecture – the design that forms the basis of GF104/GF106/GF108/GF114 – and with it a number of firsts from the company. Chief among these was superscalar execution, the first time we’d seen such a design in an NVIDIA part. Superscalar execution allows NVIDIA to take advantage of Instruction Level Parallelism (ILP) – executing the next instruction in a thread when it doesn’t depend on the previous instruction’s result – and it makes this streamlined design notably different from GF100/GF110. Ultimately the streamlined design is more efficient than GF100/GF110 on average, but with a wider spread between its best and worst cases; that tradeoff doesn’t necessarily make sense for GPU computing purposes, but it does for mainstream graphics.
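
To make the ILP distinction concrete, here’s a minimal CUDA sketch of our own (purely illustrative – the kernel names and numbers are ours, not NVIDIA’s). The first kernel is a dependent chain where each operation must wait on the previous result, offering no ILP; the second does the same amount of math as four independent streams, which a superscalar scheduler like GF104’s can co-issue from within a single thread.

```cuda
// Illustrative sketch: dependent chain vs. independent instructions.
__global__ void serial_chain(float *out, const float *in)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float a = in[i];
    // Each line depends on the one before it: no ILP to extract.
    a = a * 2.0f + 1.0f;
    a = a * 2.0f + 1.0f;
    a = a * 2.0f + 1.0f;
    a = a * 2.0f + 1.0f;
    out[i] = a;
}

__global__ void ilp_friendly(float *out, const float *in)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Assumes 'in' holds four values per output element.
    float a = in[4 * i + 0], b = in[4 * i + 1];
    float c = in[4 * i + 2], d = in[4 * i + 3];
    // Four independent multiply-adds: a superscalar scheduler can
    // issue more than one of these per clock from the same thread.
    a = a * 2.0f + 1.0f;
    b = b * 2.0f + 1.0f;
    c = c * 2.0f + 1.0f;
    d = d * 2.0f + 1.0f;
    out[i] = a + b + c + d;
}
```

On GF100/GF110, extra throughput comes almost entirely from running more threads; the streamlined design can additionally co-issue independent instructions like these from one thread, which is where its higher average efficiency comes from.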

Meanwhile, starting with GF110, NVIDIA began revising the low-level design of their GPUs for production purposes. NVIDIA’s choice of transistors with GF10x was suboptimal: leaky transistors ended up in functional units – and parts thereof – where they weren’t wanted, limiting the number of functional units NVIDIA could enable and the overall performance they could achieve within the power envelopes they were targeting.

For GF110 NVIDIA focused on better matching transistor types to what each block needed, reducing leakage in parts of the chip that didn’t require fast & leaky transistors. This meant not only swapping fast, leaky transistors for slower, less leaky ones where speed wasn’t critical, but also introducing a third, mid-grade transistor to bridge the gap between the fast and slow types. With three speed grades of transistors, NVIDIA could reserve the leakiest transistors for the spots that truly needed them and conserve power everywhere else.


A typical CMOS transistor: thin gate dielectrics lead to leakage
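
As a rough illustration of the tradeoff, consider the toy host-side leakage model below (entirely made-up numbers of our own; NVIDIA doesn’t publish these figures). If only a minority of a block’s transistors sit on timing-critical paths, reserving the fast, leaky devices for those paths and filling the rest with mid-grade and slow transistors cuts static power substantially.

```cuda
// Toy host-side model (our illustration, hypothetical values) of why
// mixing transistor speed grades saves power: static (leakage) power
// rises steeply with transistor speed, but only timing-critical paths
// actually need the fast, leaky devices.
#include <cstdio>

struct Grade { const char *name; float rel_speed; float rel_leakage; };

int main()
{
    // Hypothetical relative numbers for three speed grades.
    const Grade fast_grade = {"fast/leaky", 1.00f, 1.00f};
    const Grade mid_grade  = {"mid-grade",  0.85f, 0.40f};
    const Grade slow_grade = {"slow/tight", 0.70f, 0.15f};

    // Suppose 20% of a block's transistors sit on critical paths.
    // Compare an all-fast design against a mix that reserves the
    // fast devices for those paths.
    float all_fast = 1.00f * fast_grade.rel_leakage;
    float mixed    = 0.20f * fast_grade.rel_leakage
                   + 0.30f * mid_grade.rel_leakage
                   + 0.50f * slow_grade.rel_leakage;

    printf("relative leakage, all fast transistors: %.2f\n", all_fast);
    printf("relative leakage, mixed speed grades:   %.2f\n", mixed);
    return 0;
}
```

Under these assumed numbers the mixed design leaks roughly 40% as much as an all-fast design at the same target clocks – the same qualitative win NVIDIA described for GF110.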

GF110 wasn’t the only chip to see this kind of optimization, however; the rest of the GF11x line is getting the same treatment. GF114 is in a particularly interesting position since, as a smaller GPU, its predecessor GF104 wasn’t as badly affected. While we can’t speak to whether leakage kept NVIDIA from enabling additional functional units, GF104 reached the clockspeeds and voltages NVIDIA was targeting without any issues at stock voltage. In short, while GF100 suffered notably from leakage, GF104 either didn’t suffer from it or did a good job of hiding it. For this reason GF114 doesn’t necessarily stand to gain the same benefit.

As we touched on in our introduction, NVIDIA is putting their gains here into performance rather than power consumption. The official TDP is going up 10W, while performance is going up anywhere between 10% and 40%. This is the only difference compared to GF104, as GF114 does not contain any architectural changes (GF110’s architectural changes were themselves backported from GF104). Everything we see today will be the result of a better-built chip.

Comments

  • auhgnist - Tuesday, January 25, 2011 - link

    The 1920x1080 graph is wrong – it looks like the 2560x1600 graph was mistakenly used instead.
  • Ryan Smith - Tuesday, January 25, 2011 - link

    Fixed. Thanks.
  • Marlin1975 - Tuesday, January 25, 2011 - link

    The 6950 1GB looks good.

    I am guessing the 560 will either drop in price very quickly or the 6950 will sell better.
  • Lolimaster - Tuesday, January 25, 2011 - link

    The 560 is not impressive at all; the 6950 1GB is a good value over the 2GB 6950. And if 1GB is all you want, I think the 6870 offers more bang for the buck.
  • cactusdog - Tuesday, January 25, 2011 - link

    Wow, plenty of good options from AMD and Nvidia. Since the introduction of Eyefinity and 3D Surround, we don't need to spend a fortune to play the latest games. For most users with one monitor, a $250 card gives excellent performance.
  • tech6 - Tuesday, January 25, 2011 - link

    Like top end desktop CPUs, the high end GPU really seems to be increasingly irrelevant for most gamers as the mid-range provides plenty of performance for a fraction of the cost.
  • Nimiz99 - Tuesday, January 25, 2011 - link

    I was just curious about the 2.8 FPS on Crysis by the Radeon HD 5970 - is that reproducible/consistent?
    I am just curious, b/c on the first graph of average frame-rate it leads the pack; if it fluctuates that badly I would definitely like a little bit more background on it.

    'Preciate the response,
    Nimiz
  • Ryan Smith - Tuesday, January 25, 2011 - link

    No, it's highly variable. With only 1GB of effective VRAM, the Radeon cards are forced to texture swap - the minimum framerate is chaotic at best and generally marks how long the worst texture swap took. With swapping under the control of AMD's drivers, the resulting minimum framerate ends up being quite variable.
  • Shadowmaster625 - Tuesday, January 25, 2011 - link

    Can somebody explain why 1GB is not enough, when 1GB is enough memory to store over 160 frames at 24 bits at 1920x1080? At 60fps, 1GB should be able to supply a constant uncompressed stream of frames for almost 3 whole seconds. Seems like more than enough memory to me. Sounds like somebody is just haphazardly wasting vast amounts of space for no reason at all. Sort of like Windows with its WinSXS folder. Let's just waste a bunch of space because we can!
  • ciukacz - Tuesday, January 25, 2011 - link

    Are you streaming your benchmark video through YouTube? Because I am rendering mine in realtime, which requires loading all the textures, geometry, etc.
