Chipsets: One Day You're In and the Next, You're Out

Presently, NVIDIA’s chipset business is far from dead. Its chipsets are in nearly every single Apple computer on the market, not to mention countless other OEM designs. I’m not sure how much money NVIDIA is making from these chipsets, but they are selling.


NVIDIA won Apple's chipset business; Intel was not happy

Long term, I don’t see much of a future for NVIDIA’s chipset business. NVIDIA has said that it has no interest in pursuing an LGA-1156 chipset given Intel’s legal threats, and even if NVIDIA had a license to produce DMI chipsets, I’m not sure it would make sense.


NVIDIA's Advantage: A single-chip GeForce 9400M instead of a dated Intel solution

Once the ‘dales (Clarkdale and Arrandale) hit, every single mainstream CPU from Intel is going to come with graphics on-package. Go out one more generation and Sandy Bridge brings the graphics on-die. AMD is doing the same thing starting in 2012.

It’s taken longer than expected, but there’s honestly no need for a third-party chipset maker anymore. Most of the performance differentiation in chipsets has moved onto the CPU die anyway; all that’s left are SATA, USB, and a bunch of validation that no one likes doing. NVIDIA is much better off building a discrete GeForce 9400M GPU at low cost and selling that. There’s much less headache involved in selling discrete GPUs than in selling chipsets, and graphics is NVIDIA’s only value-add when it comes to chipsets - everyone knows how to integrate a USB controller by now. I’d say the same about SATA, but AMD still has some AHCI silliness that it needs to sort out.

NVIDIA committed to supporting existing products in the channel and continues to poke fun at AMD with lines like this:

“On AMD platforms, we continue to sell a higher quantity of chipsets than AMD itself. MCP61-based platforms continue to be extremely well positioned in the entry CPU segments where AMD CPUs are most competitive vs. Intel”

As successful as NVIDIA’s AMD chipsets are today, AMD is telling us that nearly all OEM designs going forward will use AMD chipsets. Again, NVIDIA’s chipset business is quite healthy today, but I don’t see much of a future in it - not that that’s a bad thing.

The only reason NVIDIA’s chipset business has lasted this long is that AMD and Intel couldn’t get their houses in order quickly enough. AMD is finally there and Intel is getting there, although it remains to be seen how well the next generation of Atom platforms will work in practice.


A pair of Ion motherboards we reviewed

The main reason Ion got traction in the press was that it could play Blu-ray content. If Intel had done the right thing from the start and paired Atom with a decent chipset, NVIDIA would never have had the niche for Ion to fit into.

106 Comments

  • AnandThenMan - Wednesday, October 14, 2009 - link

    Leave it to Scali to regurgitate the same old same old.
  • TGressus - Wednesday, October 14, 2009 - link

    It's always the same, man. When ATI/AMD is down, people get interested in their comeback story too.

    I've always wondered why people bother to "take a side". How'd that work out with Blu-Ray? Purchased many BD-R DL recently?

    Personally, I'd like to see more CPU and GPU companies. Not less.
  • Scali - Thursday, October 15, 2009 - link

    What comeback story?
    My point was that it wouldn't be the first time that the bigger, more expensive GPU was the best bang for the buck.
    It isn't about taking sides or comebacks at all.
    I'm interested in Fermi because I'm a technology enthusiast and developer. It sounds like an incredible architecture. It has nothing to do with the fact that it happens to have the 'nVidia' brand attached to it. If it was AMD that came up with this architecture, I'd be equally interested.
    But let's just view it from a neutral, technical point of view. AMD didn't do all that much to its architecture this time, apart from extending it to support the full DX11 feature set. It will not do C++, it doesn't have a new cache hierarchy approach, it won't be able to run multiple kernels concurrently, etc. There just isn't as much to be excited about.
    Intel however... now their Larrabee is also really cool. I'm excited to see what that is going to lead to. I just like companies that go off the beaten path and try new approaches, take risks. That's why I'm an enthusiast. I like new technology.
    At the end of the day, if both Fermi and Larrabee fail, I'll just buy a Radeon. Boring, but safe.
  • Scali - Wednesday, October 14, 2009 - link

    "Fermi devotes a significant portion of its die to features that are designed for a market that currently isn’t generating much revenue."

    The word 'devotes' is in sharp contrast with what Fermi aims to achieve: a more generic programmable processor.
    In a generic processor, you don't really 'devote' anything to anything, your execution resources are just flexible and can be used for many tasks.
    Even today's designs from nVidia do the same. The execution units can be used for standard D3D/OpenGL rendering, but they can also be used for PhysX (gaming market), video encoding (different market), Folding@Home (different market again), PhotoShop (another different market), HPC (yet another market), to name but a few things.
    So 'devoted', and 'designed for a market'? Hardly.
    Sure, the gaming market may generate the most revenue, but nVidia is starting to tap into all these other markets now. It's just added revenue, as long as the gaming performance doesn't suffer. And I don't see any reason for Fermi's gaming performance to suffer. I think nVidia's next generation is going to outperform AMD's offerings by a margin.
  • wumpus - Thursday, October 15, 2009 - link

    Go back and read the white paper. Nvidia plans to produce a chip that performs double-precision floating point multiplies at roughly half the rate of single-precision ones. This means that they have doubled the amount of transistors in the multipliers so that they can keep up with the rest of the chip in double mode (one double or two singles both produce 8 bytes that need to be routed around the chip).

    There is no way to deny that this takes more transistors. Simply put, if each letter represents a 16-bit limb, two single-precision multiplies break down as:

    (a0)(a1) * (b0)(b1) = 2^16 * a0*(b0)(b1) + a1*(b0)(b1)
    (c0)(c1) * (d0)(d1) = 2^16 * c0*(d0)(d1) + c1*(d0)(d1)

    where each term expands into two 16x16 partial products: four per single, eight for the pair. But one double-precision multiply is

    (a0)(a1)(a2)(a3) * (b0)(b1)(b2)(b3) =
    2^48 * a0*(b0)(b1)(b2)(b3)
    + 2^32 * a1*(b0)(b1)(b2)(b3)
    + 2^16 * a2*(b0)(b1)(b2)(b3)
    + a3*(b0)(b1)(b2)(b3)

    which expands into sixteen partial products - twice the work of the two singles. Of course, the entire chip isn't multipliers, but they make up a huge chunk. Somehow I don't think either ATI or nvidia is going to say exactly what percentage of the chip is made up of multipliers. I do expect that it is steadily going down, and if such arrays keep being made, they will all eventually go full double precision (and possibly full IEEE 754, with all the rounding that entails).
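
    To make the counting concrete, here is a minimal C sketch of the argument above (the 16-bit limbs and loop structure are purely illustrative - no claim that any GPU's multiplier array is actually organized this way):

        #include <stdint.h>
        #include <stdio.h>

        /* Schoolbook multiplication with 16-bit limbs (stored least
         * significant first): every limb of A pairs with every limb of B,
         * so an n-limb multiply generates n*n 16x16-bit partial products. */
        static int schoolbook_partials(int n_limbs) {
            return n_limbs * n_limbs;
        }

        int main(void) {
            int per_single = schoolbook_partials(2); /* 32-bit single: 4  */
            int per_double = schoolbook_partials(4); /* 64-bit double: 16 */

            /* The tradeoff in question: one DP multiply vs. two SP ones. */
            printf("two singles: %d partials\n", 2 * per_single); /* 8  */
            printf("one double:  %d partials\n", per_double);     /* 16 */

            /* Sanity check: sum the four shifted partials of a single
             * 2-limb (32-bit) multiply. */
            uint16_t a[2] = { 0x5678, 0x1234 }; /* a = 0x12345678 */
            uint16_t b[2] = { 0xBEEF, 0xDEAD }; /* b = 0xDEADBEEF */
            uint64_t acc = 0;
            for (int i = 0; i < 2; i++)
                for (int j = 0; j < 2; j++)
                    acc += (uint64_t)a[i] * b[j] << (16 * (i + j));
            printf("product = 0x%llx\n", (unsigned long long)acc);
            return 0;
        }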
  • Scali - Saturday, October 17, 2009 - link

    My point is that the transistors aren't 'dedicated' to DP.
    They just make each single unit capable of both SP and DP. So the same logic that is used for DP is also re-used for SP, and as such the unit isn't dedicated. It's multi-functional.

    Besides, they probably didn't just double up the transistor count to get from SP to DP.
    I think it's more likely that they'll use a scheme like Intel's SSE units. In Intel's case you can either process 4 packed SP floats in parallel, or 2 packed DP floats, with the same unit. This would also make it more logical why the difference in speed is a factor of 2.
    Namely, if you take the x87 unit, it can always process only one number at a time, but SP isn't twice as fast as DP. Since you always use a full DP unit, SP only benefits from early-out, which doesn't gain that much on most operations (e.g. add/sub/mul).
    So I don't think that Fermi is just a bunch of full DP ALUs which will run with 'half the transistors' when doing SP math. Rather, I think they will just 'split' the DP units in some clever way so that they can process two SP numbers at a time (or fuse two SP units to process one DP number, however you look at it). This only requires you to double up a relatively small part of the logic; mostly you just split up your internal registers.
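
    For illustration, a minimal C sketch of that SSE-style packing using the standard intrinsics (this shows the ISA-level idea only, not nVidia's actual hardware - that part is my speculation):

        #include <stdio.h>
        #include <emmintrin.h> /* SSE2 intrinsics */

        int main(void) {
            /* One 128-bit register holds four packed singles... */
            __m128 sa = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
            __m128 sb = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
            __m128 sr = _mm_mul_ps(sa, sb);  /* 4 SP multiplies at once */

            /* ...or two packed doubles, processed by the same-width unit. */
            __m128d da = _mm_set_pd(2.0, 1.0);
            __m128d db = _mm_set_pd(4.0, 3.0);
            __m128d dr = _mm_mul_pd(da, db); /* 2 DP multiplies at once */

            float  s[4]; _mm_storeu_ps(s, sr);
            double d[2]; _mm_storeu_pd(d, dr);
            printf("SP: %g %g %g %g\n", s[0], s[1], s[2], s[3]);
            printf("DP: %g %g\n", d[0], d[1]); /* hence the factor of 2 */
            return 0;
        }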
  • Zool - Wednesday, October 14, 2009 - link

    Maybe, but you forget one thing. ATI could pull out a 5890 (with faster clocks and maybe a 384-bit memory bus) without problem in Q1 2010, or a whole new chip somewhere in Q2 2010.
    So it doesn't change the fact that nVidia is late. In this position it will be hard for nVidia if ATI can always make the first move.
  • Scali - Wednesday, October 14, 2009 - link

    A 5890 doesn't necessarily have to be faster than Fermi. AMD's current architecture isn't THAT strong. It's the fastest GPU on the market, but then again, it's the only high-end GPU that leverages 40 nm and GDDR5. So it's not all that surprising.
    Fermi will not only leverage 40 nm and GDDR5, but also aim at a scale above AMD's architecture.

    AMD may make the first move, but it doesn't have to be the better move.
    Assuming Fermi performance is in order, I very much believe that nVidia made the right move. Where AMD just patched up their DX10.1 architecture to support DX11 features, nVidia goes way beyond DX11 with an entirely new architecture.
    The only thing that could go wrong with Fermi is that it doesn't perform well enough, but it's too early to say anything about that now. Other than that, Fermi will mark a considerable technological lead of nVidia over AMD.
  • tamalero - Sunday, October 18, 2009 - link

    And you know this... based on what facts?
    The "can of whoopass" from nvidia's marketing?
  • AnandThenMan - Wednesday, October 14, 2009 - link

    "The only thing that could go wrong with Fermi is that it doesn't perform well enough"

    Really? You really believe that? So if it has monstrous power draw, is extremely expensive, arrives 6 months late (even longer for the scaled-down parts), has low yields, etc., that's a-okay? Not to mention that a new architecture always faces software challenges before anything can make the most of it.
