As we mentioned last week, we’re currently down in San Jose, California covering NVIDIA’s annual GPU Technology Conference. If Intel has IDF and Apple has the Worldwide Developers Conference, then GTC is NVIDIA’s annual powwow to rally their developers and discuss their forthcoming plans. The comparison to WWDC is particularly apt, as GTC is a professional conference focused on development and business use of the compute capabilities of NVIDIA’s GPUs (e.g. the Tesla market).

NVIDIA has been pushing GPUs as computing devices for a few years now, as they see it as the next avenue of significant growth for the company. GTC is fairly young – the show emerged from NVISION and its first official year was just last year – but it’s clear that NVIDIA’s GPU compute efforts are gaining steam. The number of talks and the number of vendors at GTC are up compared to last year, and according to NVIDIA’s numbers, so is the number of registered developers.

We’ll be here for the next two days meeting with NVIDIA and other companies and checking out the show floor. Much of this trip is about getting a better grasp on just where things stand for NVIDIA’s still-fledgling GPU compute efforts, especially on the consumer front. There, GPU compute usage has been much flatter than we were hoping for at this time last year, when NVIDIA and AMD announced and released their next-generation GPUs alongside the launch of APIs such as DirectCompute and OpenCL, which are intended to let developers write applications against common APIs rather than targeting the vendor-specific CUDA or Brook+/Stream. If nothing else, we’re hoping to see where our own GPU computing coverage needs to go – we want to add more compute tests to our GPU benchmarks, but is the market at the point yet where there’s going to be significant GPU compute usage in consumer applications? That's what we’ll be finding out over the next two days.
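For readers wondering what writing against one of those common APIs actually looks like, here is a minimal OpenCL SAXPY sketch in C. This is our own toy example, not vendor sample code: error handling is omitted, and it assumes an OpenCL-capable GPU with drivers installed. The point of the API is that this same source compiles at runtime and runs unchanged on NVIDIA and AMD hardware alike.

```c
#include <stdio.h>
#include <CL/cl.h>

/* The kernel is compiled at runtime by whichever vendor's driver is present */
const char *src =
    "__kernel void saxpy(float a, __global const float *x, __global float *y) {\n"
    "    int i = get_global_id(0);\n"
    "    y[i] = a * x[i] + y[i];\n"
    "}\n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Build the kernel for whatever GPU we found */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    enum { N = 1024 };
    float x[N], y[N], a = 2.0f;
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = (float)i; }

    cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof x, x, NULL);
    cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, sizeof y, y, NULL);

    clSetKernelArg(k, 0, sizeof a, &a);
    clSetKernelArg(k, 1, sizeof bx, &bx);
    clSetKernelArg(k, 2, sizeof by, &by);

    /* Launch one work-item per element, then read the result back */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, by, CL_TRUE, 0, sizeof y, y, 0, NULL, NULL);

    printf("y[10] = %f\n", y[10]); /* expect 2*1 + 10 = 12 */
    return 0;
}
```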

Jen-Hsun Huang Announces NVIDIA’s Next Two GPUs

While we’re only going to be on the show floor Wednesday and Thursday, GTC unofficially kicked off Monday, and the first official day of the show was Tuesday. Tuesday started off with a two-hour keynote speech by NVIDIA’s CEO Jen-Hsun Huang, which, in keeping with the theme of GTC, focused on the use of NVIDIA GPUs in business environments.

As it did at GTC 2009, NVIDIA is using the show as a chance to announce its next-generation GPUs. GTC 2009 saw the announcement of the Fermi family, with NVIDIA first releasing the details of the GPU’s compute capabilities there before shifting the focus to gaming at CES 2010. This year NVIDIA announced the next two GPU families the company is working on, albeit not in as much detail as we got about Fermi in 2009.


The progression of NVIDIA's GPUs from a Tesla/Compute Standpoint

The first GPU is called Kepler (after Johannes Kepler, the astronomer and mathematician), and it will be released in the second half of 2011. At this point the GPU is still a good year out, which is why NVIDIA isn’t talking about its details just yet. For now they’re merely talking about performance in an abstract manner: Kepler should offer 3-4 times the double precision floating point performance per watt of Fermi. With GF100 NVIDIA basically hit the wall for power consumption (which is part of the reason current Tesla parts run 448 of 512 CUDA cores), so we’re basically looking at NVIDIA having to earn its performance improvements without increasing power consumption. They’re also going to have to earn their keep in sales, as NVIDIA is already talking about Kepler costing 2 billion dollars to develop, and it’s not out for another year.

The second GPU is Maxwell (named after James Clerk Maxwell, the physicist and mathematician), and it will be released sometime in 2013. Compared to Fermi it should offer 10-12 times the DP FP performance per watt, which works out to roughly another 3x increase over Kepler.
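To put those multipliers in rough absolute terms, here is a quick back-of-the-envelope calculation. We're using the Tesla C2050 (roughly 515 GFLOPS peak DP at a 238W TDP) as the Fermi baseline; those baseline figures are our own assumption, not anything from NVIDIA's slide.

```c
#include <stdio.h>

int main(void) {
    /* Fermi baseline: Tesla C2050, ~515 peak DP GFLOPS at 238W TDP (our assumption) */
    double fermi = 515.0 / 238.0;  /* ~2.2 DP GFLOPS per watt */

    /* Apply NVIDIA's claimed per-watt multipliers over Fermi */
    printf("Fermi:   %.1f DP GFLOPS/W\n", fermi);
    printf("Kepler:  %.1f - %.1f DP GFLOPS/W\n", 3.0 * fermi, 4.0 * fermi);
    printf("Maxwell: %.1f - %.1f DP GFLOPS/W\n", 10.0 * fermi, 12.0 * fermi);

    /* Midpoint ratio Maxwell/Kepler: 11 / 3.5, i.e. the "another 3x" above */
    printf("Maxwell over Kepler: ~%.1fx\n", 11.0 / 3.5);
    return 0;
}
```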


NVIDIA GPUs and manufacturing processes up to Fermi

NVIDIA still has to release the finer details of these GPUs, but we do know that Kepler and Maxwell are tied to the 28nm and 22nm processes respectively. So if nothing else, this gives us strong guidance on when they’ll be coming out: production-quality 28nm fabrication suitable for GPUs is still a year out, and 22nm is probably a late 2013 release at this rate. What’s clear is that NVIDIA is not going to take a tick-tock approach as stringently as Intel does – Kepler and Maxwell will each launch against a new process – but this only covers the GPUs for NVIDIA’s compute efforts. It’s likely the company will still emulate tick-tock to some degree by producing old architectures on new processes first, similar to how NVIDIA’s first 40nm products were the GT21x GPUs. In that scenario we’re talking about low-end GPUs destined for life in consumer video cards, so the desktop/graphics side of NVIDIA isn’t bound to this schedule the way the Fermi/compute side is.

At this point the biggest question is going to be the architecture. NVIDIA has invested heavily in its current architecture ever since the G80 days, and even Fermi built upon that. It’s a safe bet that these next two GPUs will maintain the same (super)scalar design for the CUDA cores, but beyond that anything is possible. These figures also don’t tell us anything about what the GPUs’ performance will be like in single precision floating point or gaming. If NVIDIA focuses almost exclusively on DP, we could see GPUs that are significantly faster there while not being much better at anything else. Conversely, they could build more of everything, in which case these GPUs would be 3-4 times faster at more than just DP.

Gaming, of course, is a whole other can of worms. NVIDIA certainly hasn’t forgotten about gaming, but GTC is not the place for it. Whatever the gaming capabilities of these GPUs are, we won’t know for quite a while. After all, NVIDIA still hasn’t launched GF108 for the low end.

Wrapping things up, don’t be surprised if Kepler details continue to trickle out over the next year. NVIDIA took some criticism for introducing Fermi months before it shipped, but it seems to have worked out well for the company anyhow. So a repeat performance wouldn’t be all that uncharacteristic for them.

And on that note, we’re out of here. We’ll have more tomorrow from the GTC show floor.

Comments

  • wumpus - Wednesday, September 22, 2010 - link

    Methinks you ought to look a bit more closely at those marketing slides.

    Fermi is listed as 2009. Since this is the year it was paper launched, one should assume that implies 2011 is the scheduled date for paper launching Kepler. How do you show prototypes of powerpoint slides anyway?
  • Ryan Smith - Wednesday, September 22, 2010 - link

    Keep in mind that Fermi was supposed to make retail in 2009. It got delayed, but that didn't significantly impact the rest of the GPUs in development. Kepler has been intended to be a 2011 part for a long time now. If it gets held up like Fermi, then it will be due to new issues, such as with the 28nm process.
  • AnandThenMan - Wednesday, September 22, 2010 - link

    Fermi was obviously delayed and horribly late. So then why does Jensen stand up there with a slide that still says it was a 2009 product? It makes it very difficult to believe the slide in general, doesn't it?

    As far as Kepler being "intended" as a 2011 part for "a long time" (source?), good intentions are not going to help it get to market. If Nvidia decides to put all their eggs again in TSMC's basket and hope for the best (again), they are setting themselves up for another failure.
  • Dark_Archonis - Wednesday, September 22, 2010 - link

    You mean like how ATI didn't need TSMC for their 5xxx cards ... oh wait, they used the same TSMC process just like Nvidia did. Furthermore, they're going to use the same 28nm TSMC process that Nvidia is going to use in 2011.

    The reason the slide shows Fermi as a 2009 product is because it IS a 2009 product based on Nvidia's internal schedule. The design being finished internally at a company and the product being released to consumers are two different things. Technically, Nvidia DID have the Fermi cards ready to go in late 2009. Unexpected heat and power issues, along with TSMC's 40nm problems, are what ended up delaying the Fermi cards.

    Fermi was late to market, but the completed design was on-time internally within Nvidia.

    Also, it's foolish to think that the exact same problems that affected Fermi will affect Kepler.
  • AnandThenMan - Wednesday, September 22, 2010 - link

    So then if Nvidia's "internal schedule" does not reflect actual availability, then we can add months onto the timelines of that slide. And what does it mean to say "technically" Nvidia had Fermi cards in 2009? That makes no sense at all.

    And so what if a design was completed, that helps Nvidia how? Designs don't generate revenue, actual products do. I don't think it's foolish to anticipate Kepler will be delayed, unless you mean the "design" will be on time, internally.
  • Dark_Archonis - Wednesday, September 22, 2010 - link

    Again, it is foolish, because you are making a BIG assumption that Kepler will have the same flaws or weaknesses as Fermi does. You are also assuming TSMC will have as much trouble with its 28nm process as they did with their 40nm process.

    In other words, far too many assumptions you are making.

    As for Fermi being ready in late 2009, that is exactly what it means. The design was complete and ready, but like I said several issues led to the cards coming late to market.
  • fteoath64 - Thursday, September 23, 2010 - link

    Good point! Getting the chip into manufacturing will have its own issues to resolve, i.e. heat and power consumption issues. Just hope that the delays are not going to be past a quarter or two.

    They need to milk Fermi for what it's worth in the market, and it seems like they're already doing that with shipped products. I hope they can keep power levels under control.
  • Touche - Wednesday, September 22, 2010 - link

    "The reason the slide shows Fermi as a 2009 product is because it IS a 2009 product based on Nvidia's internal schedule. The design being finished internally at a company and the product being released to consumers are two different things. Technically, Nvidia DID have the Fermi cards ready to go in late 2009. Unexpected heat and power issues, along with TSMC's 40nm issues is what ended up delaying the Fermi cards."

    This is just...just...a giant LOL! If it can't be manufactured - it isn't ready, at all! Intel, AMD, Nvidia...they have designs ready for products that are due out in several years. Those products are as "ready to go" now as much as Fermi was in 2009.
  • Dark_Archonis - Thursday, September 23, 2010 - link

    I don't see why this is a giant lol.

    That Nvidia slide was not designed for the average gamer or consumer. That slide was shown at the GPU Technology Conference, and someone simply took a pic of it. That slide doesn't even focus on performance directly, but efficiency and double precision instead.

    That slide is simply an illustration of Nvidia's internal timeline/schedule. So yes, Fermi being listed under 2009 makes sense under Nvidia's *internal* timeline, which is what that slide shows.

    Yes the slide does not make sense for anyone else, like the rest of us. That slide is not meant for us though, so it's rather pointless to point and laugh at a slide of Nvidia's internal timeline.

    You're right, other companies have designs completed far in advance of being on sale to consumers, yet how come nobody here is making fun of those companies and their future timelines? We've seen slides from other companies as well.

    With so much negative criticism focused specifically at Nvidia and nobody else, one really has to wonder about the motives of certain posters.
  • arnavvdesai - Wednesday, September 22, 2010 - link

    Hi Anand,
    Have you heard anything from AMD about when they plan to move their graphics chipsets to a newer process? Also, who are they planning to use to manufacture their graphics chipsets, considering the yield issues they had at the beginning of the 5xxx series on 40nm?
    Also, with monitors continuing to be stagnant at 2560 and 1080p, what kind of improvements are these designers planning to bring? I mean, Nvidia is planning to double or triple their FP calculation abilities, but what about AMD? Also, with DX12 most likely coming up if Microsoft sticks to their schedule of releasing Win8 in 2011, what features are those folks planning to make available in next-gen APIs?
