As we mentioned last week, we’re currently down in San Jose, California covering NVIDIA’s annual GPU Technology Conference. If Intel has IDF and Apple has the Worldwide Developers Conference, then GTC is NVIDIA’s annual powwow to rally their developers and discuss their forthcoming plans. The comparison to WWDC is particularly apt, as GTC is a professional conference focused on development and business use of the compute capabilities of NVIDIA’s GPUs (e.g. the Tesla market).

NVIDIA has been pushing GPUs as computing devices for a few years now, as they see it as the next avenue of significant growth for the company. GTC is fairly young – the show emerged from NVISION and its first official year was just last year – but it’s clear that NVIDIA’s GPU compute efforts are gaining steam. The number of talks and the number of vendors at GTC is up compared to last year, and according to NVIDIA’s numbers, so is the number of registered developers.

We’ll be here for the next two days meeting with NVIDIA and other companies and checking out the show floor. Much of this trip is about getting a better grasp on just where things stand for NVIDIA’s still-fledgling GPU compute efforts, especially on the consumer front. Consumer GPU compute usage has been much flatter than we were hoping for at this time last year, when NVIDIA and AMD announced/released their next-generation GPUs alongside the launch of APIs such as DirectCompute and OpenCL, which are intended to let developers write an application against a common API rather than targeting CUDA or Brook+/Stream. If nothing else, we’re hoping to see where our own efforts in covering GPU computing need to lie – we want to add more compute tests to our GPU benchmarks, but is the market at the point yet where there’s going to be significant GPU compute usage in consumer applications? That’s what we’ll be finding out over the next two days.
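
To give an idea of what writing against one of these common APIs looks like in practice, here is a minimal OpenCL sketch of our own; it is purely illustrative (the kernel, buffer sizes, and names are made up for this example and were not shown at GTC), with error handling omitted for brevity. The point is that the same host code and kernel source run on any vendor’s OpenCL implementation, with the driver compiling the kernel for whatever GPU happens to be installed.

#include <stdio.h>
#include <CL/cl.h>

/* OpenCL C kernel source, compiled at runtime by whichever driver is present. */
static const char *kSource =
    "__kernel void scale(__global float *data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from source for whatever GPU is installed. */
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "scale", NULL);

    float data[256];
    for (int i = 0; i < 256; i++) data[i] = (float)i;

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    size_t global = 256;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]); /* expect 20.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}

Compiled against any vendor’s OpenCL SDK (linking -lOpenCL), the same program would target an NVIDIA or AMD GPU alike, which is exactly the vendor neutrality that CUDA and Brook+/Stream don’t offer.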

Jen-Hsun Huang Announces NVIDIA’s Next Two GPUs

While we’re only going to be on the show floor Wednesday and Thursday, GTC unofficially kicked off Monday, and the first official day of the show was Tuesday. Tuesday started off with a two-hour keynote speech by NVIDIA’s CEO Jen-Hsun Huang, which, in keeping with the theme of GTC, focused on the use of NVIDIA GPUs in business environments.

Not unlike GTC 2009, NVIDIA is using the show as a chance to announce their next-generation GPUs. GTC 2009 saw the announcement of the Fermi family, with NVIDIA first releasing the details of the GPU’s compute capabilities there before moving on to focus on gaming at CES 2010. This year NVIDIA announced the next two GPU families the company is working on, albeit not in as much detail as we got for Fermi in 2009.


The progression of NVIDIA's GPUs from a Tesla/Compute Standpoint

The first GPU is called Kepler (as in Johannes Kepler the mathematician), and it will be released in the 2nd half of 2011. At this point the GPU is still a good year out, which is why NVIDIA is not talking about its details just yet. For now they’re merely talking about performance in an abstract manner: Kepler should offer 3-4 times the double precision floating point performance per watt of Fermi. With GF100 NVIDIA basically hit the wall for power consumption (this is part of the reason current Tesla parts run with 448 of 512 CUDA cores enabled), so we’re looking at NVIDIA having to earn their performance improvements without increasing power consumption. They’re also going to have to earn their keep in sales, as NVIDIA is already talking about Kepler costing 2 billion dollars to develop, and it’s not out for another year.

The second GPU is Maxwell (named after James Clerk Maxwell, the physicist/mathematician), and it will be released sometime in 2013. Compared to Fermi it should offer 10-12 times the DP FP performance per watt, which works out to roughly another 3x increase over Kepler.
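
To put those multipliers in rough absolute terms, here’s a quick back-of-the-envelope calculation. The Fermi baseline is our own assumption – roughly 515 GFLOPS of peak double precision from a Tesla C2050 at a 238W TDP – and not a figure NVIDIA quoted at the show, so treat the results as ballpark numbers only.

/* Rough implied DP GFLOPS-per-watt figures. The Fermi baseline (Tesla C2050:
   ~515 GFLOPS peak DP at a 238W TDP) is our assumption, not NVIDIA's number. */
#include <stdio.h>

int main(void) {
    double fermi = 515.0 / 238.0; /* ~2.2 DP GFLOPS/W */
    printf("Fermi (C2050):    %.1f DP GFLOPS/W\n", fermi);
    printf("Kepler (3-4x):    %.1f - %.1f DP GFLOPS/W\n", 3.0 * fermi, 4.0 * fermi);
    printf("Maxwell (10-12x): %.1f - %.1f DP GFLOPS/W\n", 10.0 * fermi, 12.0 * fermi);
    return 0;
}

In other words, if those targets hold and Tesla parts stay in the same ~225-250W envelope, a Kepler Tesla card would land somewhere around 1.5-2 TFLOPS of double precision, and Maxwell somewhere in the 5-6 TFLOPS range.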


NVIDIA GPUs and manufacturing processes up to Fermi

NVIDIA still has to release the finer details of the GPUs, but we do know that Kepler and Maxwell are tied to the 28nm and 22nm processes respectively. So if nothing else, this gives us strong guidance on when they’ll be coming out, as production-quality 28nm fabrication suitable for GPUs is still a year out and 22nm is probably a late 2013 release at this rate. What’s clear is that NVIDIA is not going to take a tick-tock approach as stringently as Intel does – Kepler and Maxwell are going to launch against new processes – but this only applies to the GPUs for NVIDIA’s compute efforts. It’s likely the company will still emulate tick-tock to some degree, producing old architectures on new processes first, similar to how NVIDIA’s first 40nm products were the GT21x GPUs. In that scenario we’re talking about low-end GPUs destined for life in consumer video cards, so the desktop/graphics side of NVIDIA isn’t bound to this schedule like the Fermi/compute side is.

At this point the biggest question is going to be what the architecture is. NVIDIA has invested heavily in their current architecture ever since the G80 days, and even Fermi built upon that. It’s a safe bet that these next two GPUs are going to maintain the same (super)scalar design for the CUDA cores, but beyond that anything is possible. This also doesn’t say anything about what the GPUs’ performance is going to be like under single precision floating point or gaming. If NVIDIA focuses almost exclusively on DP, we could see GPUs that are significantly faster at that while not being much better at anything else. Conversely they could build more of everything and these GPUs would be 3-4 times faster at more than just DP.

Gaming of course is a whole other can of worms. NVIDIA certainly hasn’t forgotten about gaming, but GTC is not the place for it. Whatever the gaming capabilities of these GPUs are, we won’t know for quite a while. After all, NVIDIA still hasn’t launched GF108 for the low-end.

Wrapping things up, don’t be surprised if Kepler details continue to trickle out over the next year. NVIDIA took some criticism for introducing Fermi months before it shipped, but it seems to have worked out well for the company anyhow. So a repeat performance wouldn’t be all that uncharacteristic for them.

And on that note, we’re out of here. We’ll have more tomorrow from the GTC show floor.

Comments

  • Cerb - Wednesday, September 22, 2010 - link

    Have you considered getting rid of it? And which one is it (Zotac, Palit, eVGA, by some chance?)? Crashing to the desktop w/ corruption sounds bad. Like bitching at the maker and RMAing it bad.

    <-- GB "Windforce" GTX 460 1GB; no problems in Win7 64-bit or Arch Linux 64-bit.
  • kilkennycat - Wednesday, September 22, 2010 - link

    Re: Your suspect GTX460.

    First, make sure that you are running the card with factory-default clocks and using the factory-default auto fan-control !! Next, PLEASE CHECK THAT THE HEAT-SINK IS FIRMLY SCREWED DOWN TO THE CIRCUIT-BOARD AND THAT NO SCREWS ARE MISSING. A very early batch of GTX460s from an unnamed manufacturer had a little (??) problem in this area....

    Otherwise, desperate erratic problems need desperate resolution........

    DISCLAIMER: You run these tests at your own risk !! Test #2 and Test #3 below are NOT recommended for problem-free situations !!

    Run Furmark 1.8.2 in the following sequence, gradually increasing the stress.
    1. Benchmarking (only)
    2. Stability Test (only)
    3. Stability Test plus Xtreme Burning Mode (only).

    Allow >3 minutes of cooldown between the tests. If the card is physically A-OK, then you will be able to complete all 3 tests without experiencing crashes or glitches.

    For each of the above tests:-

    Run full-screen at your desired resolution and watch the on-screen Furmark temperature plot (GPU Core Temperature). It should very rapidly go up and either (a) gradually flatten out at a temperature less than 100degreesC or (b) reach ~100degreesC and suddenly flatten as the GF104 goes into thermal-throttling... you will then also see a corresponding frame-rate drop-off in Furmark's on-screen parameter readout. (Unlike the GTX260's cooler, many of the current GTX460 air-coolers do not have sufficient thermal-mass/surface-area/airflow to prevent the GF104 reaching its built-in thermal throttle point in Furmark, particularly in the Stability-plus-Xtreme Burning Mode... this is potentially true of the factory-overclocked cards, especially the "external exhaust" variety.)

    DO NOT leave the card running at the maximum temperature in either TEST#2 or TEST#3 for any length of time!! Seriously stresses the voltage regulators.

    ==================================

    If the display crashes while the card is still cold, first carefully check for a problem elsewhere.... power-supply (unlikely since you are replacing the ~equal-wattage GTX260), driver corruption, etc... If the display is OK while the card is still cold but shows glitches or crashes during the heat-cycles, I suggest that you RMA the card.
  • TheJian - Thursday, September 23, 2010 - link

    They made two GTX 260's: the Core 216 and the old one. I still think this guy doesn't own one, but great troubleshooting for anyone that just replaced a graphics card. If that's all he changed, he's only looking at the card or the PSU in almost all cases if the failures started immediately after the change.

    Too much missing info to nail it down (same app or game crashing, crashing in everything, when did they start, etc.), but your procedure should tell him more. But for most people a quick trip to Fry's buying the same thing can avoid most testing :) 30 days to check if it's the card with nothing to do other than USE it to test. If it's fine after the switch, return the bad card (assuming under 30 days on the bad card), or return the new card and RMA now that you know it's bad. Costs you nothing (costs Fry's...oops), solves the problem. If both crash, I'd probably replace my PSU (everything dies) if I didn't have any other quick replacements handy. Of course I have a PHD PCI and Quicktech Pro so I have other options :) The Fry's idea is quick for most people and easy ;) But totally unscrupulous. :)

    Same idea for PSU...LOL. Test and return. "I just couldn't get it to run, must be incompatible with my board". I even gave him an excuse for the return in both cases...LOL. Is my solution desperate? :)
  • shawkie - Friday, September 24, 2010 - link

    I've just realised that actually the card I replaced in my home PC was a GTS 250, not a GTX 260 at all. In my defence, I've been using the GTX 260 a lot at work and actually did buy a 192-core version for my home PC, only to find there wasn't room for it. That probably explains the "hotter and louder" and also means I might be looking at a power supply problem. Or it might be a dodgy card - it's one of the first and it is of the external exhaust variety. Thanks for all the advice - it'll probably come in handy when I get around to looking into the problem.
  • TheJian - Thursday, September 23, 2010 - link

    Smells like your PC doesn't have an adequate PSU or you got a bad card. They overclock like mad. It should use about 10 fewer watts than the GTX 260 at LOAD or IDLE, according to AnandTech. Again, you must have a bad card. It's 10-20 degrees cooler than the GTX 260 at LOAD or IDLE, or AnandTech lied in their recent article on the 460 (doubtful).

    http://www.anandtech.com/show/3809/nvidias-geforce...

    Same article...Go ahead...Scroll down a bit more and you see your card is apparently completely out of touch with reality. Because from the charts it sure looks like all the GTX 460's are a few dB less noisy than a GTX 260, either at LOAD or IDLE again (except for Zotac's card, which seems to have a bad fan or something, it's so far from all the others).

    Nvidia and AMD both have drivers that don't crash anywhere near as often as you're saying, so again, bad card or not enough PSU. Get a new card or PSU, and quit hating a great product (best at $200, or Anandtech and everyone else writes BS reviews...). I have doubts about you owning this card. Why would you keep something and not RMA it when reviews show it makes a GTX 260 look like crap? You weren't skeptical? Currently I'd bet money you don't own one.

    Too lazy to correct my previous post's comment about Amazon finally shipping my 5830. It was supposed to say 5850 (for $260, which is why they waited so long). Still a good deal a few months later. I complained for months, which you can check on the XFX 5850 complaint page at Amazon if needed :)
  • AnnonymousCoward - Wednesday, September 22, 2010 - link

    Dark_Archonis, AMD's latest architecture blows away Nvidia's when you consider power consumption per performance (gaming).
  • shawkie - Wednesday, September 22, 2010 - link

    Are these figures for the £250 GeForces or the £2500 Teslas? It makes quite a big difference since there's a factor of about 8 difference in double precision floating point performance between the two.
  • Ryan Smith - Wednesday, September 22, 2010 - link

    To be clear, Tesla parts. GeForce cards have their DP performance artificially restricted (although if they keep the same restriction ratio, then these improvements will carry over by the same factors).
  • Griswold - Wednesday, September 22, 2010 - link

    At heise.de they asked Huang about 3D performance for the 2013 part. Looks like Huang lost all of his newly gained humbleness and is starting to let his head swell once again.

    The answer was: at least 10 times the 3D performance of the GF100. Right...
  • stmok - Wednesday, September 22, 2010 - link

    When Intel does their IDF, they have something to show for it.
    (Sandy Bridge, Light Peak, etc.)

    When AMD books a hotel room nearby, they have something to demonstrate.
    (The Bobcat-based Zacate compared to a typical Intel based notebook.)

    Even a Chinese company called Nufront was able to demonstrate a dual-core 2GHz Cortex A9 based system prototype in the same week!
    => http://www.youtube.com/watch?v=0Gfs5ujSw1Q

    ...And what does Nvidia bring on their first day at GTC? Codenames and vague performance promises of future products on a graph. (Yes, I can mark points on a graph to represent a half-parabola too!)
    => http://en.wikipedia.org/wiki/Parabola

    You know what would be awesome? Actually demonstrating a working prototype of "Kepler"!

    Even when questioned about Tegra-based solutions not being in consumer products: Talk. Talk. Talk.
    => http://www.youtube.com/watch?v=00wUypuruKE
    => http://www.youtube.com/watch#!v=tAHqmNmbN8U
