The graph below is one of transistor count, not die size. Inevitably, on the same manufacturing process, a significantly higher transistor count translates into a larger die size. But for the purposes of this article, all I need to show you is a representation of transistor count.

See that big circle on the right? That's Fermi. NVIDIA's next-generation architecture.

NVIDIA astonished us with GT200 tipping the scales at 1.4 billion transistors. Fermi is more than twice that at 3 billion. And literally, that's what Fermi is - more than twice a GT200.

At the high level the specs are simple. Fermi has a 384-bit GDDR5 memory interface and 512 cores. That's more than twice the processing power of GT200 but, just like RV870 (Cypress), it's not twice the memory bandwidth.
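
To put that bandwidth comparison in rough numbers: memory bandwidth is simply bus width times effective data rate. The sketch below uses the GTX 280's published memory specs for GT200 and a purely hypothetical GDDR5 data rate for Fermi, since NVIDIA hasn't finalized clocks; treat the Fermi figure as illustrative only.

    # Rough memory bandwidth arithmetic: bus width (bits) / 8 * data rate (GT/s) = GB/s.
    # The GT200 numbers are the GTX 280's published specs; the Fermi data rate is an
    # assumed placeholder, not an announced clock.
    def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
        return bus_width_bits / 8 * data_rate_gt_s

    gt200 = bandwidth_gb_s(512, 2.214)  # 512-bit GDDR3 at ~2.2 GT/s -> ~141.7 GB/s
    fermi = bandwidth_gb_s(384, 4.0)    # 384-bit GDDR5 at an assumed 4.0 GT/s -> 192 GB/s

    print(f"GT200: {gt200:.1f} GB/s, Fermi (assumed clocks): {fermi:.1f} GB/s")
    print(f"Ratio: {fermi / gt200:.2f}x")  # ~1.36x: more bandwidth, but nowhere near 2x

Under those assumptions Fermi lands around 1.4x GT200's bandwidth, which is why doubling the core count doesn't come with double the memory bandwidth.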

The architecture goes much further than that, but NVIDIA believes that AMD has shown its cards (literally) and is very confident that Fermi will be faster. The questions are at what price and when.

The price is a valid concern. Fermi is a 40nm GPU just like RV870 but it has a 40% higher transistor count. Both are built at TSMC, so you can expect that Fermi will cost NVIDIA more to make than ATI's Radeon HD 5870.
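
A back-of-the-envelope calculation shows why. On the same 40nm process, die area scales roughly with transistor count, and a larger die means fewer candidates per 300mm wafer, before yield (which also worsens as dies grow). The sketch below assumes Cypress's published 334 mm² die and simple linear area scaling for Fermi's 40% higher transistor count; the actual Fermi die size hasn't been disclosed, so these are illustrative figures only.

    import math

    # Classic dies-per-wafer estimate for a circular wafer; ignores defect yield.
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        return (math.pi * r**2 / die_area_mm2
                - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    cypress_area = 334.0                      # AMD's published RV870 die size
    fermi_area = cypress_area * (3.0 / 2.15)  # assume area scales with transistors: ~466 mm^2

    print(f"Estimated Fermi die:      {fermi_area:.0f} mm^2")
    print(f"Cypress candidates/wafer: {dies_per_wafer(cypress_area):.0f}")  # ~175
    print(f"Fermi candidates/wafer:   {dies_per_wafer(fermi_area):.0f}")    # ~121

Roughly 30% fewer candidate dies per wafer, before yield even enters the picture, is why the cost question matters.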

The timing question is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations, and price points have yet to be finalized. NVIDIA only recently got working chips back, and it will be at least two months before I see the first samples. Widespread availability won't come until at least Q1 2010.

I asked two people at NVIDIA why Fermi is late: Ujesh Desai, NVIDIA's VP of Product Marketing, and Jonah Alben, NVIDIA's VP of GPU Engineering. Ujesh responded: because designing GPUs this big is "fucking hard".

Jonah elaborated, as I will attempt to do here today.

416 Comments

  • wumpus - Thursday, October 01, 2009 - link

    Do you see any sign of commercial software support? Anybody Nvidia can point to and say "they are porting $important_app to OpenCL"? I haven't heard a mention. That pretty much puts Nvidia's GPU computing schemes solely in the realm of academia (where you can use grad students as cheap, highly skilled labor). If they could sell something like an FEA package for Pro/ENGINEER or SolidWorks, the things would fly off the shelves (at least I know companies who would buy them, though that might be a local bias).

    If you have to code it yourself, that leaves either academia (which mostly just needs to look at hardware costs) or existing supercomputer users. The existing commercial users have both hardware and software (otherwise they would be "potential users"), and are unlikely to want to rewrite the software unless it is really, really cheaper. Try to imagine all the salaries involved in running the big, big jobs Nvidia is going after and tell me that the hardware is a good place to save money (at the cost of changing *everything*).

    I'd say Nvidia is not only hurting the graphics side (with all sorts of extra transistors that are in the way and are only there for double precision), but they also aren't giving anyone outside academia any reason to use OpenCL. Maybe they have enough customers who want systems much bigger than $400k, but they will need enough of them to justify designing a >400mm² chip (plus the academics, who are buying these because they don't have a lot of money).
    Reply
  • hazarama - Saturday, October 03, 2009 - link

    "Do you see any sign of commercial software support? Anybody Nvidia can point to and say "they are porting $important_app to openCL"? I haven't heard a mention. That pretty much puts Nvidia's GPU computing schemes solely in the realm of academia"

    Maybe you should check out Snow Leopard...
    Reply
  • samspqr - Friday, October 02, 2009 - link

    Well, I do HPC for a living, and I think it's too early to push GPU computing so hard: I've tried to use it and gave up because it required too much effort (and I didn't know exactly how much I would gain in my particular applications).

    I've also tried to promote GPU computing among some peers who are even more hardcore HPC users, and they didn't pick it up either.

    If even your typical physicist is scared by the complexity of the tool, it's too early.

    (as I'm told, there was a time when similar efforts were needed in order to use the mathematical coprocessor...)
    Reply
  • Yojimbo - Sunday, October 04, 2009 - link

    "If even your typical physicist is scared by the complexity of the tool, it's too early."

    This sounds good, but it's not accurate. Physicists are interested in physics, and most are not too keen on learning some new programming technique unless it is obvious that it will make a big difference for them. Even then, adoption is likely to be slow due to inertia. Nvidia is trying to break that inertia by pushing GPU computing. First they need to put the hardware in place, and then they need to convince people to use it and put the software in place. They don't expect it to work like a switch. If they think the tools are in place to make it viable, then now is the time to push, because it will ALWAYS require a lot of effort to make the switch.
    Reply
  • jessicafae - Saturday, October 03, 2009 - link

    Fantastic article.

    I do bioinformatics / HPC, and in our field too we have had several good GPU ports for a handful of algorithms, but nothing great enough to drive us to add massive numbers of GPU racks to our clusters. With OpenCL becoming available this year, the programming model is dramatically improved, and we will see a lot more research and prototype code being ported to OpenCL.

    I feel we are still in the research phase of GPU computing for HPC (workstations, a few GPU racks, lots of software development work). I am guessing it will be 2+ years until GPU/stream/OpenCL algorithms warrant widespread adoption of GPUs in clusters. I think a telling example is the RIKEN 12-petaflop supercomputer, which is switching to a purely scalar processor approach (100,000 SPARC64 VIIIfx chips with 800,000 cores):
    http://www.fujitsu.com/global/news/pr/archives/mon...
    Reply
  • Thatguy97 - Thursday, May 28, 2015 - link

    oh fermi how i miss ya hot underperforming ass
    Reply
