Final Words

Is NVIDIA in trouble? In the short term there are clearly causes for worry. AMD’s Eric Demers often tells me that the best way to lose a fight is by not showing up. NVIDIA effectively didn’t show up to the first DX11 battles, and that’s going to hurt. But as I said in the Things Get Better Next Year section, they do get better next year.

Fermi devotes a significant portion of its die to features that are designed for a market that currently isn’t generating much revenue. That needs to change in order for this strategy to make sense.

NVIDIA told me that we should see exponential growth in Tesla revenues after Fermi, but what does that mean? I don’t expect the sort of customers who buy Tesla boards and servers to be lining up on day 1. Best case scenario, I’d say Tesla revenues see a bump one to two quarters after Fermi’s launch.

Nexus, ECC, and better double precision performance will all make Fermi more attractive than Cypress in the HPC space. The question is how much revenue that will generate in the short term.


Nexus enables full NVIDIA GPU debugging from within Visual Studio. Not so useful for PC gaming, but very helpful for Tesla
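
To make the HPC angle concrete, here’s a minimal sketch of the kind of double precision code this market runs: a DAXPY kernel in CUDA C. Everything here is illustrative (none of it comes from NVIDIA’s Tesla or Nexus materials); it’s simply the sort of FP64-bound loop a Tesla customer would want to set a breakpoint in from Visual Studio via Nexus, with ECC guarding the DRAM underneath it.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// DAXPY (y = a*x + y) in double precision -- the kind of FP64-bound HPC
// kernel that Fermi's improved double precision hardware targets.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<double> hx(n, 1.0), hy(n, 2.0);

    double *dx, *dy;
    cudaMalloc(&dx, n * sizeof(double));
    cudaMalloc(&dy, n * sizeof(double));
    cudaMemcpy(dx, hx.data(), n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy.data(), dy, n * sizeof(double), cudaMemcpyDeviceToHost);

    std::printf("y[0] = %.1f (expect 5.0)\n", hy[0]);  // 3.0 * 1.0 + 2.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

Pre-Fermi GPUs ran double precision at a small fraction of single precision throughput, which is why a loop like this is a reasonable proxy for the workloads Fermi’s FP64 improvements are aimed at.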

Then there’s the mobile space. NVIDIA could do very well with Tegra. NVIDIA is an ARM licensee, and that takes care of the missing CPU piece of the puzzle. Unlike the PC space, x86 isn’t the dominant player in the mobile market. NVIDIA has a head start in the ultra mobile space, much like it does in the GPU computing space, while Intel is a bit behind with its Atom strategy. NVIDIA could use this to its advantage.

The transition needs to be a smooth one. The bulk of NVIDIA’s revenue today comes from PC graphics cards. There’s room for NVIDIA in the HPC and ultra mobile spaces, but that revenue isn’t going to accumulate overnight. The changes in focus we’re seeing from NVIDIA today are in line with what it would have to do to establish successful businesses outside of the PC industry.

And don’t think the PC GPU battle is over yet, either. It took years for NVIDIA to be pushed out of the chipset space, even after AMD bought ATI. Even if the future of PC graphics is Intel and AMD GPUs, it’s going to take a very long time to get there.

Comments

  • iwodo - Wednesday, October 14, 2009 - link

    Why has no one thought of an Nvidia chipset using PCI-Express 8x?

    Couldn’t you theoretically make an mGPU with I/O functions (the only things left are SATA, USB, and Ethernet) and another PCI-Express 8x link, so the mGPU communicates with another Nvidia GPU over its own lanes without going back to the CPU?
  • chizow - Wednesday, October 14, 2009 - link

    quote:

    Let’s look at what we do know. GT200b has around 1.4 billion transistors and is made at TSMC on a 55nm process. Wikipedia lists the die at 470mm^2, that’s roughly 80% the size of the original 65nm GT200 die. In either case it’s a lot bigger and still more expensive than Cypress’ 334mm^2 40nm die.

    Anand, why perpetuate this myth of comparing die sizes and prices across different process nodes? Surely someone with intimate knowledge of the semiconductor industry like yourself isn't claiming a single TSMC 300mm wafer on 40nm costs the same as one on 55nm or 65nm?

    A wafer is just sand with some copper interconnects; the raw material price means nothing for the end price tag. Cost is determined by the capitalization of the assets used to manufacture the goods, not by the raw materials involved. The uncapitalized investment in the new 40nm process obviously exceeds that of 55nm or 65nm, so prices would need to be higher to compensate (a rough sketch of the per-die math follows this comment).

    I can't think of anything "old" that costs more than the "new" despite the "old" being larger. If you think so, I have about 3-4 100 lb CRT TVs I want to sell you for current LCD prices. ;)

    In any case, I think the concerns about selling GT200b parts are a bit unfounded and mostly meant to justify the channel supply deficiency. We already know the lower bound of GT200b pricing: the GTX 260 has been selling for $150 or less with rebates for quite some time already. If anything, the somewhat artificial supply deficiency has kept demand for Nvidia parts high.

    I think it was more of a calculated risk by Nvidia to limit its exposure to excess inventory in the channel, which was reportedly a big issue during the 65nm G92/GT200 to 55nm G92b/GT200b transition. There were also some rumors about Nvidia moving to more of a JIT delivery system to avoid some of the purchasing discounts major partners were exploiting: they basically waited until the last day of the quarter for Nvidia to discount and unload inventory in an effort to beef up its quarterly results.
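
For readers who want to see the mechanics behind the die-cost argument above, here is a rough back-of-the-envelope sketch. The wafer prices are placeholder assumptions (not TSMC figures), and the gross dies-per-wafer formula is a standard approximation that ignores defect yield; only the 470mm² and 334mm² die sizes come from the article.

```cuda
// Host-only code; no GPU required. Kept in CUDA C to match the earlier example.
#include <cstdio>
#include <cmath>

// Gross dies per wafer, standard approximation: wafer area / die area,
// minus an edge-loss term along the circumference. Ignores yield entirely.
static int diesPerWafer(double waferDiamMM, double dieAreaMM2)
{
    const double kPi = 3.14159265358979;
    double r = waferDiamMM / 2.0;
    return (int)(kPi * r * r / dieAreaMM2
                 - kPi * waferDiamMM / std::sqrt(2.0 * dieAreaMM2));
}

int main()
{
    // Placeholder wafer prices -- assumptions for illustration only.
    const double wafer55nm = 4000.0;   // mature 55nm process
    const double wafer40nm = 5500.0;   // newer 40nm process, priced higher

    int gt200b  = diesPerWafer(300.0, 470.0);  // GT200b: 470mm^2 on 55nm
    int cypress = diesPerWafer(300.0, 334.0);  // Cypress: 334mm^2 on 40nm

    std::printf("GT200b : %3d gross dies/wafer -> $%.0f per die\n",
                gt200b, wafer55nm / gt200b);
    std::printf("Cypress: %3d gross dies/wafer -> $%.0f per die\n",
                cypress, wafer40nm / cypress);
    return 0;
}
```

Under these assumed prices the smaller 40nm die comes out roughly comparable per gross die despite the pricier wafer, which is why both sides of the argument ultimately hinge on the actual (unknown) wafer prices and yields.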
  • MadMan007 - Wednesday, October 14, 2009 - link

    Possibly the most important question for desktop PC discrete graphics, for gamers who aren't worried about business analysis, is: what will the rollout of the Fermi architecture to non-high-end, non-high-cost graphics cards look like?

    Is NV going to basically shaft that market by going with cut-down GT200-series DX10.1 chips like the GT220? (OK, that's a little *too* low-end, but I mean the architecture.) As much as we harped on G92 renaming, at least it was competitive versus the HD 4000 series in certain segments, and the large GT200s, the GTX 260 in particular, were OK after price cuts. The same is very likely not going to be true for DX10.1 GT200 cards, especially when you consider less tangible things like DX11, which people will feel better buying anyway.

    Answer that question and you'll know the shape of desktop discrete graphics for this generation.
  • vlado08 - Wednesday, October 14, 2009 - link

    I think that Nvidia is preparing Fermi to meet Larrabee. They're probably confident that AMD isn't a big threat; they know them very well and know what they're capable of and what to expect, but they don't know their new opponent. Everybody knows that Intel has very strong financial power, and if they want something they do it; sometimes it takes more time, but eventually they see everything through. They are a force not to be underestimated. If Nvidia has any time advantage, they should use it.
  • piesquared - Wednesday, October 14, 2009 - link

    quote:

    Other than Intel, I don’t know of any company that could’ve recovered from NV30.

    How about recovering from both Barcelona AND R600?
  • Scali - Thursday, October 15, 2009 - link

    AMD hasn't recovered yet. They've been making losses quarter after quarter, and I expect more losses in the future now that Nehalem has gone mainstream.
    Some financial experts have put AMD on their list of companies most likely to go bankrupt in 2010:
    http://www.jiltin.com/index.php/finance/economy/bi...
  • bhougha10 - Wednesday, October 14, 2009 - link

    The one thing that is not considered in these articles is the real world. In the real world, people were not waiting for the ATI 5800s to come out, but they are waiting for the GTX 300s to come out. The perception in the marketplace is that NVIDIA is the name brand and ATI is the generic.
    It is not a big deal at all that the GTX 300s are late. Nvidia had the top cards for 3/4 of the year; it is only healthy that ATI has the lead for 1/4 of the year. The only part that is bad for NVIDIA is that they don't have this stuff out for Christmas, and I am not sure that is even a big deal. Even so, these high-end cards are not gifts, they are budget items (i.e., like I plan to wait till the beginning of next year to buy this or that).
    Go do some online gaming if you think this is all made up. You will have your AMD fanboys, but the percentage is low (sorry, didn't want to say fanboy).
    These new 210/220 GTs sound like crap, but the people buying them won't know that; they will pay the money and be none the wiser that they could have gotten a better value. They only thought of the NVIDIA name brand.
    Anyway, I say this in regards to the predictions of NVIDIA's eventual demise. It's the same reason these financial analysts can't predict the market well.

    Another great article; a lot of smart guys working for these tech sites.
  • Zool - Wednesday, October 14, 2009 - link

    So you think that the company with the top card and the name brand is the winner? Even with the top cards for 3/4 of the year, they lost quite a lot of money on predicted margins with the Radeon 4000s on the market. They couldn't even bring GT200 down to midrange cards; the price for them was too high. And you think that with that monster GT300 it will be better?
    They will sell it, sure, but at what cost to them?
  • bhougha10 - Wednesday, October 14, 2009 - link

    AMD has lost money for the last 3 years (as far back as I looked), and not just a little money, a lot of money. NVIDIA, on the other hand, lost a little money last year. So I'm not sure of the point of that last reply.
    The point of the original post was that there is an intrinsic value or novelty that comes with a company. AMD has very little novelty. This is just the facts. I have AMD and NVIDIA stock; I want them both to win. It's good for everyone if they do.
