The GF104/GF110 Refresher: Different Architecture & Different Transistors

For all practical purposes GF100 is the Fermi base design, but for sub-high-end cards in particular NVIDIA has made a number of changes since we first saw the Fermi architecture a year and a half ago. For those of you who don’t regularly keep up with the latest NVIDIA hardware releases, we’re going to quickly recap what makes GF114 and GTX 560 Ti different from both the original GF100/GF110 Fermi architecture, and in turn what makes GF114 different from GF104 through NVIDIA’s transistor optimizations. If you’re already familiar with this, feel free to skip ahead.

With that said, let’s start with architecture. The GF100/GF110 design is ultimately the compute and graphics monster that NVIDIA meant for Fermi to be. It has fantastic graphics performance, but it also has extremely solid GPU computing performance in the right scenarios, which is why GF100/GF110 is the backbone of not just NVIDIA’s high-end video cards, but also their Tesla line of GPU computing cards.

But Fermi’s compute characteristics only make complete sense at the high-end: large institutions utilizing GPU computing have no need for weaker GPUs in their servers, while home users don’t need features like ECC or full-speed FP64 (at least not at this time) so much as they need a more reasonably priced graphics card. As a result only the high-end GF100/GF110 GPUs feature Fermi’s base design, while GF104 and later parts use a tweaked design that strips away some aspects of Fermi’s GPU compute hardware while leaving much of the graphics hardware intact.

NVIDIA GF104 SM

With GF104 we saw the first GPU to use NVIDIA’s streamlined Fermi architecture, the design that forms the basis of GF104/GF106/GF108/GF114, and with it a number of firsts from the company. Chief among these was the use of a superscalar architecture, the first time we’ve seen such a design in an NVIDIA part. Superscalar execution allows NVIDIA to take advantage of Instruction Level Parallelism (ILP) – executing the next instruction in a thread when it doesn’t rely on the result of the previous instruction – making this streamlined design notably different from GF100/GF110. Ultimately this design is more efficient than GF100/GF110 on average, while having a wider range of best- and worst-case scenarios – a tradeoff that doesn’t necessarily make sense for GPU computing purposes, but does for mainstream graphics.
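To make the ILP tradeoff concrete, here is a toy dual-issue model (purely illustrative – this is not NVIDIA’s actual scheduler, and the register-tracking scheme is invented for the sketch): two adjacent instructions can issue in the same cycle only when the second doesn’t read what the first writes. A dependent chain exposes no ILP, while independent instruction pairs cut the cycle count in half.

```python
# Toy model of superscalar dual-issue. Each instruction names the
# registers it reads and writes; two adjacent instructions co-issue
# only if the second does not read what the first writes. That
# independence between adjacent instructions is the ILP that a
# superscalar design can exploit.

def cycles_dual_issue(program):
    """Count cycles assuming up to two instructions issue per cycle."""
    cycles, i = 0, 0
    while i < len(program):
        cycles += 1
        if i + 1 < len(program):
            first, second = program[i], program[i + 1]
            # co-issue only when the second instruction is independent
            if first["writes"].isdisjoint(second["reads"]):
                i += 2
                continue
        i += 1
    return cycles

# Dependent chain: r0 += r1; r0 += r2; r0 += r3; r0 += r4
chain = [{"writes": {"r0"}, "reads": {"r0", f"r{k}"}} for k in (1, 2, 3, 4)]

# Independent pairs: r0 += r1; r5 += r6; r0 += r2; r5 += r7
pairs = [
    {"writes": {"r0"}, "reads": {"r0", "r1"}},
    {"writes": {"r5"}, "reads": {"r5", "r6"}},
    {"writes": {"r0"}, "reads": {"r0", "r2"}},
    {"writes": {"r5"}, "reads": {"r5", "r7"}},
]

print(cycles_dual_issue(chain))  # 4 cycles: no instruction can pair up
print(cycles_dual_issue(pairs))  # 2 cycles: each pair co-issues
```

The dependent chain takes one cycle per instruction no matter how wide the issue hardware is, which is why a superscalar design has a wider spread between best and worst cases than a purely scalar one.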

Meanwhile, starting with GF110 NVIDIA began revising the low-level design of their GPUs for production purposes. NVIDIA’s choice of transistors with GF10x was suboptimal: leaky transistors ended up in functional units (and parts thereof) where NVIDIA didn’t want them, limiting the number of functional units they could utilize and the overall performance they could achieve within the power envelopes they were targeting.

For GF110 NVIDIA focused on better matching the types of transistors they used with what a block needed, allowing them to reduce leakage on parts of the chip that didn’t require such fast & leaky transistors. This meant not only replacing fast, leaky transistors with slower, less leaky transistors in parts of the chip that didn’t need the speed, but also introducing a third, mid-grade transistor to bridge the gap between the fast and slow types. With three speed grades of transistors, NVIDIA only had to use the leakiest transistors where they were truly needed, and could conserve power elsewhere.
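The selection logic described above can be sketched as a toy model (the grade names and speed/leakage numbers here are invented for illustration – they are not NVIDIA figures): for each block, pick the least-leaky transistor grade that still meets that block’s timing requirement.

```python
# Hypothetical (relative speed, relative leakage) for each grade;
# faster transistors leak more, which is the tradeoff in question.
GRADES = {"slow": (1.0, 1.0), "mid": (1.3, 2.5), "fast": (1.6, 6.0)}

def pick_grade(required_speed):
    """Choose the least-leaky grade that meets a block's speed need."""
    viable = [(leak, name) for name, (speed, leak) in GRADES.items()
              if speed >= required_speed]
    if not viable:
        raise ValueError("no grade fast enough")
    return min(viable)[1]  # lowest leakage among viable grades

# Critical shader datapaths need speed; control logic does not.
print(pick_grade(1.5))  # fast
print(pick_grade(1.2))  # mid
print(pick_grade(0.9))  # slow
```

The middle case is the point of the third grade: with only fast and slow transistors available, a block needing 1.2x speed would be forced onto the leakiest grade, wasting power.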


A typical CMOS transistor: Thin gate dielectrics lead to leakage

GF110 wasn’t the only chip to see this kind of optimization however, and the rest of the GF11x line is getting the same treatment. GF114 is in a particularly interesting position: as a smaller GPU, its predecessor GF104 wasn’t as badly affected by leakage in the first place. While we can’t speak to whether leakage kept NVIDIA from enabling additional functional units, at the clockspeeds and voltages NVIDIA was targeting we never had any issues at stock voltage. In short, while GF100 suffered notably from leakage, GF104 either didn’t suffer from it or did a good job of hiding it. For this reason GF114 doesn’t necessarily stand to gain the same benefit.

As we touched on in our introduction, NVIDIA is putting their gains here into performance rather than power consumption. The official TDP is going up 10W, while performance is going up anywhere between 10% and 40%. This is the only difference compared to GF104, as GF114 does not contain any architectural changes (GF110’s changes were backported from GF104). Everything we see today will be a result of a better-built chip.
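A quick sanity check on that tradeoff (the 160W base TDP below is an assumed placeholder for illustration, not a figure from this article): even with the TDP rising 10W, a 10–40% performance gain still comes out ahead in performance per watt.

```python
# Relative performance-per-watt when TDP rises 10W but performance
# rises 10-40%. The 160W baseline is a hypothetical round number.
base_tdp = 160.0          # assumed predecessor board TDP, in watts
new_tdp = base_tdp + 10   # article: official TDP rises by 10W

for gain in (1.10, 1.40):                  # 10% and 40% perf gains
    perf_per_watt = gain * base_tdp / new_tdp  # old card = 1.0x
    print(f"{gain:.2f}x perf -> {perf_per_watt:.2f}x perf/W")
```

Under these assumptions the worst case (10% faster) is still roughly a 3–4% perf/W improvement, and the best case is over 30%, so spending the transistor gains on performance rather than power doesn’t cost efficiency.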


87 Comments


  • MeanBruce - Wednesday, January 26, 2011 - link

    Wonder if you can tune the fans separately in SmartDoctor? Damn cool Asus!
  • Burticus - Tuesday, January 25, 2011 - link

    I picked up a GTX460 768mb for $150 last summer. I assume the GTX560 will be down to that price point by this coming summer. I am very happy with the GTX460 except in Civ 5 and I think I am CPU limited there (Phenom II x3).

    So when this thing hits $150 I will sell my GTX460 on fleabay for $100 and upgrade, I guess. I wish I could buy one and stick it in my 360....
  • JimmiG - Tuesday, January 25, 2011 - link

    Looks like the video card market is picking up the pace again, which is both a good thing and not. I guess my GTX460 1GB from only 6 months ago now officially sucks and is only usable as a doorstop...a crippled, half-broken, semi-functional video card such as it is.

    On the other hand, it's great that technology is moving so fast. It just means that instead of buying a new video card and keeping it for 1.5 - 2 years, you once again have to upgrade every couple of months if you want to stay on top.

    Also, regardless of the marketing, anything below a 570 *sucks* for gaming above 1680x1050. Look at the results of Stalker, Metro 2033 and Warhead. You need to drop to 1680x1050 before the 560 Ti manages near 60 FPS, which is the minimum for smooth gameplay.
  • Soldier1969 - Tuesday, January 25, 2011 - link

    Anything below $400 is a poor man's card, period. I wouldn't stoop to that level of card. Running 2560 x 1600 over DisplayPort at max settings, there is no substitute!
  • omelet - Wednesday, January 26, 2011 - link

    Congratulations.
  • silverblue - Thursday, January 27, 2011 - link

    I'm sorry to say, but knowing the 560 Ti is going to be a weaker and hence far cheaper part than the 580, why did you give it any thought? :)
  • otakuon - Tuesday, January 25, 2011 - link

    The GTX 460 is still the best card in nVidia's lineup with regards to price for performance. The 560 is just nVidia's standard interim update to keep itself relevant. I see no need for current GTX 460 owners to rush out and buy this card (or anyone who wants to replace a Fermi card for that matter) when the 600 series will be out this summer and will most likely have a new architecture.
  • DeerDance - Tuesday, January 25, 2011 - link

    The 6850 beats them in price/performance; they start at $150 at Newegg.
  • DeerDance - Tuesday, January 25, 2011 - link

    I was kinda surprised by the final thoughts.
    Out of 34 FPS charts in the games, the 6950 won 17, the GTX 560 won 12, and 5 were within 1 frame of each other (4 of those favoring the 6950), so I wonder why the final thoughts gave the edge to the GTX 560.
  • omelet - Wednesday, January 26, 2011 - link

    He may have just done an average of the percentage differences between the two. So if, for instance, the 560 won by 50% in one test and lost by 10% in each of two tests, that method would call the 560 10% faster, even though it was slower in 2/3 of the tests.

    Don't get me wrong, I don't think the conclusion is accurate (I think 6950 looks more powerful overall from the benchmarks), I'm just saying how I think he might have come to his conclusion.
