Final Words

Is NVIDIA in trouble? In the short term there are clearly causes for worry. AMD’s Eric Demers often tells me that the best way to lose a fight is by not showing up. NVIDIA effectively didn’t show up to the first DX11 battles, and that’s going to hurt. But as I said in the Things Get Better Next Year section, they do get better next year.

Fermi devotes a significant portion of its die to features that are designed for a market that currently isn’t generating much revenue. That needs to change in order for this strategy to make sense.

NVIDIA told me that we should see exponential growth in Tesla revenues after Fermi, but what does that mean? I don’t expect that the sort of customers buying Tesla boards and servers will be lining up on day 1. I’d say best case scenario, Tesla revenues should see a bump one to two quarters after Fermi’s launch.

Nexus, ECC, and better double precision performance will all make Fermi more attractive in the HPC space than Cypress. The question is how much revenue that will generate in the short term.


Nexus enables full NVIDIA GPU debugging from within Visual Studio. Not so useful for PC gaming, but very helpful for Tesla

Then there’s the mobile space. NVIDIA could do very well with Tegra. NVIDIA is an ARM licensee, and that takes care of the missing CPU piece of the puzzle. Unlike the PC space, x86 isn’t the dominant player in the mobile market. NVIDIA has a head start in the ultra mobile space much like it does in the GPU computing space. Intel is a bit behind with its Atom strategy. NVIDIA could use this to its advantage.

The transition needs to be a smooth one. The bulk of NVIDIA’s revenues today come from PC graphics cards. There’s room for NVIDIA in the HPC and ultra mobile spaces, but it’s not revenue that’s going to accumulate overnight. The changes in focus we’re seeing from NVIDIA today are in line with what it’d have to do in order to establish successful businesses outside of the PC industry.

And don’t think the PC GPU battle is over yet either. It took years for NVIDIA to be pushed out of the chipset space, even after AMD bought ATI. Even if the future of PC graphics is Intel and AMD GPUs, it’s going to take a very long time to get there.

Comments

  • sbuckler - Wednesday, October 14, 2009 - link

    Not all doom and gloom: http://www.brightsideofnews.com/news/2009/10/13/nv...

    Which also puts them in the running for the next Wii I would have thought?
  • Zapp Brannigan - Thursday, October 15, 2009 - link

    Unlikely. Tegra is basically just an ARM11 processor, allowing full backwards compatibility with the current ARM9 and ARM7 processors in the DS. If Nintendo wants full backwards compatibility with the Wii 2, then they'll have to stick with the current IBM/ATI combo.
  • papapapapapapapababy - Wednesday, October 14, 2009 - link

    1) ati launches crappy cards, 2) anand realizes "crappy cards, we might need nvidia after all" 3) anand does some nvidia damage control. 4) damage control sounds like wishful thinking to me 5) lol

    "While RV770 caught NVIDIA off guard, Cypress did not". XD
    NVIDIA knew they were going to -FAIL- and made a conscious decision to KEEP FAILING? Guys, guys, they were caught off guard AGAIN. It does not matter if they knew it! IT IS STILL A BIG FAIL! THEY KNEW? What kind of nonsense is that? BTW they could not shrink anything at launch except that OEM garbage... what makes you think that fermi is going to be any different?
  • whowantstobepopular - Wednesday, October 14, 2009 - link

    "1) ati launches crappy cards, 2) anand realizes "crappy cards, we might need nvidia after all" 3) anand does some nvidia damage control"

    ROFL

    Maybe Anand wrote this article to lay to rest the last vestiges of SiliconDoc's recent rantings.

    Seriously, Anand and team...

    You guys do a fine job of thoroughly covering the latest official developments in the enthusiast PC space. You're doing the right thing by sticking to info that's confirmed. Charlie, Fudo and others are covering the rumours just fine, so what we really need is what we get here at Anandtech: Thorough, prompt reviews of tech products that have just released, and interesting commentary on PC market developments and directions (such as the above article).

    I like the fact that you add a little bit of your own interpretation into these sorts of commentaries, and at the same time make sure we know what is fact and what is interpretation.
    I guess I see it this way: You've been commentating on this IT game for quite a few years now, and the articles show it. There are plenty of references to parallels between current situations and historic ones, and these are both interesting and informative. This is one of many aspects of the articles here at Anandtech that make me (and others, it seems) keep coming back. Your knowledge of the important points in IT history is confidence inspiring when it comes to weighing up the value of your commentaries.

    Finally, I have to commend the way that everyone on the Anandtech team appears to read through the comments under their articles. It's rather encouraging when suggestions and corrections for articles are noted and acted upon promptly, even when it involves extra work (re-running benchmarks, creating new graphs etc.). And the touch of humour that comes across in some of the replies (and articles) from the team makes a good comedic interlude during an otherwise somewhat bland day at work.

    Keep up the good work Anandtech!

  • Transisto - Wednesday, October 14, 2009 - link

    I like this place. . .
  • shotage - Wednesday, October 14, 2009 - link

    Thumbs up to this post. These are my thoughts and sentiments also. Thank you to all @ Anandtech for excellent reading! Comments included ;)
  • Shayd - Wednesday, October 14, 2009 - link

    Ditto, thanks!
  • Pastuch - Wednesday, October 14, 2009 - link

    Fantastic post. I couldn't have said it better myself.
  • Scali - Wednesday, October 14, 2009 - link

    We'll have to see. nVidia competed just fine against AMD with the G80 and G92. The biggest problem with GT200 was that they went 65 nm rather than 55 nm, but even so, they still held up against AMD's parts because of the performance advantage. G92 in particular was hugely successful: incredible performance at a good price. Yes, the chip was larger than a 3870, but who cared?

    Don't forget that GT200 is based on a design that is now 3 years old, which is ancient. Going to GDDR5 alone will make the chip significantly smaller and less complex, because you only need half the bus width for the same memory bandwidth.
    Then there's probably tons of other optimizations that nVidia can do to the execution core to make it more compact and/or more efficient.

    I saw someone who estimated the number of transistors per shader processor based on the current specs of Fermi, compared to G80/G92/GT200. The result was that they were all around 5.5M transistors per SP, I believe. So that means that effectively nVidia gets the extra flexibility 'for free'.
    Combine that with the fact that 40 nm allows them to scale to higher clockspeeds, and allows them to pack more than twice the number of SPs on a single chip, and the chip as a whole will probably be more efficient anyway, and it seems very likely that this chip will be a great performer.
    And if you have the performance, you dictate the prices. It will then be the salvage parts and the scaled down versions of this architecture that will do the actual competing against AMD's parts, and those nVidia chips will obviously be in a better position to compete on price than the 'full' Fermi.
    If Fermi can make the 5870 look like a 3870, nVidia is golden.
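    The per-SP estimate above is easy to sanity-check as a back-of-the-envelope calculation. The sketch below uses the commonly quoted transistor and shader-processor counts for each GPU (these figures, and the simplifying assumption that every transistor scales with SP count, are mine, not from the comment):

    ```python
    # Back-of-the-envelope: millions of transistors per shader processor (SP).
    # Transistor and SP counts are the commonly quoted public figures;
    # treat them as approximate.
    gpus = {
        "G80":   (681e6, 128),   # ~681M transistors, 128 SPs
        "G92":   (754e6, 128),   # ~754M transistors, 128 SPs
        "GT200": (1.4e9, 240),   # ~1.4B transistors, 240 SPs
        "Fermi": (3.0e9, 512),   # ~3.0B transistors, 512 SPs
    }

    for name, (transistors, sps) in gpus.items():
        per_sp = transistors / sps / 1e6  # millions of transistors per SP
        print(f"{name:>5}: {per_sp:.1f}M transistors per SP")
    ```

    All four come out in the 5.3M to 5.9M range, which is broadly consistent with the ~5.5M-per-SP figure the comment cites.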
