Final Words

Is NVIDIA in trouble? In the short term there is clearly cause for worry. AMD's Eric Demers often tells me that the best way to lose a fight is by not showing up. NVIDIA effectively didn't show up to the first DX11 battles, and that's going to hurt. But as I said in the "things get better next year" section, they do get better next year.

Fermi devotes a significant portion of its die to features that are designed for a market that currently isn’t generating much revenue. That needs to change in order for this strategy to make sense.

NVIDIA told me that we should see exponential growth in Tesla revenues after Fermi, but what does that mean? I don't expect the sort of customers who buy Tesla boards and servers to be lining up on day one. Best case scenario, I'd say Tesla revenues see a bump one to two quarters after Fermi's launch.

Nexus, ECC, and better double precision performance will all make Fermi more attractive in the HPC space than Cypress. The question is how much revenue that will generate in the short term.


Nexus enables full NVIDIA GPU debugging from within Visual Studio. Not so useful for PC gaming, but very helpful for Tesla

Then there's the mobile space. NVIDIA could do very well with Tegra. NVIDIA is an ARM licensee, which takes care of the missing CPU piece of the puzzle. Unlike in the PC space, x86 isn't the dominant player in the mobile market. NVIDIA has a head start in the ultra mobile space, much as it does in the GPU computing space, while Intel is a bit behind with its Atom strategy. NVIDIA could use this to its advantage.

The transition needs to be a smooth one. The bulk of NVIDIA's revenues today come from PC graphics cards. There's room for NVIDIA in the HPC and ultra mobile spaces, but that revenue isn't going to accumulate overnight. The changes in focus we're seeing from NVIDIA today are in line with what it would have to do to establish successful businesses outside of the PC industry.

And don't think the PC GPU battle is over yet, either. It took years for NVIDIA to be pushed out of the chipset space, even after AMD bought ATI. Even if the future of PC graphics is Intel and AMD GPUs, it's going to take a very long time to get there.

Comments

  • neomatrix724 - Wednesday, October 14, 2009 - link

    Were you looking at the same cards as everyone else? AMD has always aimed for the best price for performance. nVidia has always won hands down on performance...but those wins came at the expense of costlier cards.

    AMD hit one out of the park with their new cards. OpenCL, Eyefinity and a strong improvement over previous cards is a very strong feature set. I'm not sure about Fermi and I'm curious to see where nVidia is going with it...but their moves have been confusing me lately.
  • shin0bi272 - Thursday, October 15, 2009 - link

    Actually, nvidia hasn't always won. Their entire first two generations of DX9 cards were slower than ATi's, because nvidia boycotted the meetings on the DX9 specs and made a card based on the beefier specs they wanted. That card (the 5800) turned out to be 20% slower than the ATi 9700 Pro. The trend sort of continued for a couple of years, but nvidia got closer with the 5900 and eked out a win with the 6800 a little later. Keep in mind I haven't owned an ATi card for gaming since the 9700 Pro (and that was a gift), so I am in no way an ATi fan, but facts are facts. Nvidia has made great cards, but not always the fastest.
  • Griswold - Wednesday, October 14, 2009 - link

    That didn't make a lot of sense...
  • vlado08 - Wednesday, October 14, 2009 - link

    I am wondering about WDDM 2.0 and multitasking on the GPU. Are they coming soon? Maybe Fermi is better prepared for it?
  • Scali - Wednesday, October 14, 2009 - link

    WDDM 2.0 is not part of Windows 7, so we'll need to wait at least another Windows generation before that becomes available. By then Fermi will most probably have been replaced by a newer generation of GPUs anyway.
    Multitasking on the GPU is possible for the first time on Fermi, as it can run multiple GPGPU kernels concurrently (I believe up to 16 different kernels).
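    In host-code terms, that concurrency is expressed through CUDA streams. A minimal sketch of the launch pattern (kernel names, data, and launch sizes are made up for illustration; this assumes a Fermi-class GPU and the CUDA runtime API):

```cuda
// Illustrative only: two independent kernels launched into separate
// CUDA streams. Launches in different streams carry no implicit
// ordering, so a Fermi-class GPU is free to execute them concurrently;
// earlier GPUs would simply run them back to back.
#include <cuda_runtime.h>

__global__ void scaleKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

__global__ void offsetKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Same grid shape for both; the distinct streams are what give
    // the hardware permission to overlap the two kernels.
    scaleKernel<<<(n + 255) / 256, 256, 0, s0>>>(a, n);
    offsetKernel<<<(n + 255) / 256, 256, 0, s1>>>(b, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

    Launches placed in the same stream would still serialize; it's the separate streams that expose the independence to the scheduler.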
  • vlado08 - Wednesday, October 14, 2009 - link

    You are right that we are going to wait, but what about Microsoft, and what about Nvidia? They should be working on it. Nvidia probably doesn't want to be late again; maybe they want to be first this time, seeing where things are going. If their hardware is more prepared for WDDM 2.0 today, then they will have more time to gain experience and to polish their drivers. ATi (AMD) had a "hard"-only launch of DirectX 11: they are missing the "soft" part of it (drivers not ready). ATI needs to win, so they have to make DX11 work, and they are putting a lot of effort into it, while Nvidia is skipping this DX11 battle and getting ready for the next one. Everything is getting more complex and needs more time to mature. We are also getting more demanding and less forgiving. So for the next Windows to be ready in 2 or 3 years, they need to start now, at least with planning.
  • Mills - Wednesday, October 14, 2009 - link

    Couldn't these 'extra transistors' be utilized in games as well, similar to how NVIDIA handled PhysX? In other words, incorporate NVIDIA-specific game enhancements that utilize these functions in NVIDIA sponsored titles?

    Is it too late to do this? Perhaps they will just extend the PhysX API.

    Though PhysX has been out for quite some time, there are only 13(?) PhysX-supported titles. NVIDIA had better pick up its game here if it plans to leverage PhysX to out-value ATI. Does anyone know if any big-name titles have announced PhysX support?
  • Griswold - Wednesday, October 14, 2009 - link

    PhysX is a sinking ship, didn't you get the memo?
  • shin0bi272 - Thursday, October 15, 2009 - link

    nvidia says that switching between graphics and CUDA contexts is going to be 10x faster in Fermi, meaning that PhysX performance will more than double.
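    As a quick back-of-envelope check of that "more than double" logic (the numbers below are made up, not NVIDIA's): the speedup from a 10x faster switch depends entirely on how much of the frame time the switching consumed in the first place.

```cuda
// Back-of-envelope check with hypothetical numbers. If a frame spends
// `sw` ms switching between graphics and CUDA contexts and `k` ms in
// the physics kernels themselves, making the switch 10x faster yields
// a speedup of (sw + k) / (sw/10 + k).
#include <cstdio>

int main() {
    const double sw = 6.0;  // hypothetical ms per frame spent switching
    const double k  = 4.0;  // hypothetical ms per frame in physics kernels
    const double speedup = (sw + k) / (sw / 10.0 + k);
    // Prints ~2.17x. Performance only "more than doubles" when switching
    // overhead is the majority of the PhysX frame cost (here 60% of it).
    printf("speedup: %.2fx\n", speedup);
    return 0;
}
```

    Below roughly 56% switching overhead the gain falls short of 2x, so the claim implies switching really was the dominant cost.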
  • Scali - Thursday, October 15, 2009 - link

    Yup, and that's a hardware feature, which applies equally to any language, be it C/C++ for CUDA, OpenCL, or DirectCompute.
    So not only will PhysX benefit, but also Bullet or Havok, or whatever other GPU-accelerated physics library might surface.
