Intel's Mooly Eden just disclosed Ivy Bridge's transistor count to a captive audience at IDF this morning: 1.4 billion transistors. That's presumably for the quad-core version, compared to 1.16B in Sandy Bridge. Don't pay attention to the die shot above; it's not an accurate representation of Ivy Bridge. Intel is holding off on showing the die until closer to launch. Why? Likely to avoid early disclosure of how much of the die is dedicated to the GPU. As you'll see in our Ivy Bridge architecture piece later today, the lion's share of those transistors goes to improvements in the GPU.

Update: Intel has provided us with updated numbers for both Ivy Bridge and Sandy Bridge. Transistor counts are only comparable when they're counted the same way. Intel is saying that SNB is 1.16B and Ivy is 1.4B. The gap between these numbers is only about 20%, which is much more in line with what we'd expect from a tick. I'm waiting on a response from Intel as to why the SNB number went up since launch.
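The "only 20%" figure follows directly from Intel's comparable counts; a quick sanity check of the arithmetic (a minimal sketch, using the 1.16B and 1.4B numbers from the update):

```python
# Verify the transistor-count gap quoted in the update.
# Comparable counts per Intel: Sandy Bridge 1.16B, Ivy Bridge 1.4B.
snb = 1.16e9
ivb = 1.40e9

increase_pct = (ivb - snb) / snb * 100
print(f"Ivy Bridge has {increase_pct:.1f}% more transistors than Sandy Bridge")
# -> roughly 20.7%, consistent with the ~20% gap cited above
```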

Update 2: This is why 1.16B and 995M are both correct for Sandy Bridge.


16 Comments


  • fic2 - Wednesday, September 14, 2011 - link

    Except that Intel doesn't get to hit the Cha-ching button on the discrete graphics.
  • sinigami - Thursday, September 15, 2011 - link

    "Looks like they're doing it the right way
    to me - incentivizing you to buy a CPU
    more powerful (and expensive) than
    what you need, or conversely, making
    you pay for the GPU upgrade that you
    won't even use.

    Cha-ching!!! "

    That really applies to AMD's A8 chip, as the top Llano A8 is really the only one with a GPU powerful enough to play most games above the 30fps mark. All the reviews I've seen of the A6 say it falls just short of what you need to game comfortably at low res. So, to me personally, the only time I've ever really been compelled to move up to a more expensive version within a CPU family is with Llano and the A8.

    Currently, both of Intel's IGP versions suck at current games, so there's really not enough [GPU performance] "incentivizing" to make the jump compelling for those who may want to do the occasional gaming. Now, that MAY change with Ivy Bridge...

    Of course, anyone and everyone who is a hardcore, serious gamer will be going Sandy Bridge with a fat card. AMD just can't play in this realm.

    The only reason to go A8 is so you can game without a card, as long as you don't need major CPU power. Of course, that is a pretty nice reason :-)
  • gramboh - Wednesday, September 14, 2011 - link

    The thing is, the PC gamer/GPU add-in market is small; the market for people who will use this chipset and forgo discrete graphics, on both desktop and mobile, is HUGE in comparison.

    I agree it would be nice if the top-end parts traded die space for more CPU features, for me personally, but it is probably not profitable to design those parts for the consumer market given the way things are trending.
  • DanNeely - Wednesday, September 14, 2011 - link

    Unfortunately, people like us aren't a large enough market to justify a variant die with all the extra engineering it entails. Not using the IGP gives us more thermal headroom for Turbo Boost and overclocking, but that's it.
  • douglaswilliams - Wednesday, September 14, 2011 - link

    Anand,

    Someday in the next few years the number of transistors on a single die will be greater than the total number of people in the world.

    I think the Mayans predicted the world will come to an end at that point or something.
  • ClagMaster - Wednesday, September 14, 2011 - link

    Do the extra transistors mean I can browse the internet quicker, or do they just make room for a larger, more bloated version of Windows?

    I have to admit I am a little blown away by this. These processors are far more powerful than some of the Cray supercomputers I used in the 1980s.

    Simpler computers (like CDC 7600s) did not keep me from designing some really powerful hardware.

    What I see with the more powerful computers today is inflated models that predict essentially the same behavior and arrogant young engineers who think the computers allow them to ignore design margins for stuff they do not know about.

    I think of it as a sort of digital self-abuse. And it makes people stupid too.
