Die Size and Transistor Count

At IDF last year we got word of Ivy Bridge's transistor count (1.4 billion); today we also know its die size: 160mm2. That's roughly 75% the size of a quad-core Sandy Bridge, but with 20% more transistors.

This marks the first time since 2006 that Intel is offering a high-end desktop CPU with a die this small. I'm excluding the six-core parts from the discussion since that line isn't really aimed at the same market anymore. The chart is even more insane when you consider that the Ivy Bridge die includes an integrated GPU alongside four of the highest performance x86 cores Intel has ever shipped. Remove the GPU and Ivy Bridge is even smaller than Conroe: a hypothetical GPU-less Ivy Bridge would measure in at roughly 113mm2 on Intel's 22nm process, making it smaller than any high-end Intel CPU since the days of the Pentium III.

CPU Specification Comparison

CPU                           Manufacturing Process  Cores  Transistor Count  Die Size
AMD Bulldozer 8C              32nm                   8      1.2B              315mm2
Intel Ivy Bridge 4C           22nm                   4      1.4B              160mm2
Intel Sandy Bridge E (6C)     32nm                   6      2.27B             435mm2
Intel Sandy Bridge E (4C)     32nm                   4      1.27B             294mm2
Intel Sandy Bridge 4C         32nm                   4      1.16B             216mm2
Intel Lynnfield 4C            45nm                   4      774M              296mm2
Intel Sandy Bridge 2C (GT1)   32nm                   2      504M              131mm2
Intel Sandy Bridge 2C (GT2)   32nm                   2      624M              149mm2
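A quick back-of-the-envelope check of the table above shows what the 22nm shrink buys: whole-die transistor density roughly 1.6x that of 32nm Sandy Bridge. A minimal sketch (figures taken straight from the table; this is a crude whole-die average, since die area also includes the GPU and I/O):

```python
# Crude whole-die transistor density from the spec comparison table
# (transistor count in billions, die size in mm^2).
chips = {
    "Ivy Bridge 4C (22nm)":   (1.40, 160),
    "Sandy Bridge 4C (32nm)": (1.16, 216),
    "Bulldozer 8C (32nm)":    (1.20, 315),
}

for name, (transistors_b, area_mm2) in chips.items():
    density = transistors_b * 1e9 / area_mm2  # transistors per mm^2
    print(f"{name}: {density / 1e6:.2f}M transistors/mm^2")

# Ivy Bridge works out to ~8.75M transistors/mm^2 vs ~5.37M for
# quad-core Sandy Bridge: about 1.6x the density.
```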

Ivy Bridge is tiny, but what does this mean? For starters, it means the obvious: Intel has little competition in the desktop space. I'm always hard on AMD in my meetings with them for this reason alone. A less than competitive AMD means we get a less aggressive Intel.

More importantly, however, a tiny Ivy means that Intel could have given us a much bigger GPU without breaking the bank. I hinted at this possibility in our Ivy Bridge architecture article. Unfortunately, at the time only Apple was interested in a hypothetical Ivy Bridge GT3, and rumor has it that Otellini wasn't willing to make a part that only one OEM would buy in large quantities. We will eventually get the GPU that Apple wanted, but it'll be next year, with Haswell GT3. And the GPU that Apple really, really wanted? That'll be GT4, with Broadwell in 2014.

All of this being said, however, we must keep in mind that Ivy Bridge is both faster than Sandy Bridge and no more expensive. Given the supply and pricing constraints that have accompanied TSMC's 28nm process, the fact that Intel is able to ramp 22nm and ship the first products without any price increase is something we shouldn't take for granted.

173 Comments

  • ijozic - Thursday, April 26, 2012 - link

Maybe because people who prefer to have the IPS screen would also like to have support for graphics switching, for decent battery life while not doing anything GPU intensive. This was the one thing I expected from the Ivy Bridge upgrade, and nada.
  • uibo - Monday, April 23, 2012 - link

    Does anyone know if the 24Hz issue has been resolved?
  • uibo - Monday, April 23, 2012 - link

    nevermind just saw the htpc perspective review
  • anirudhs - Monday, April 23, 2012 - link

I didn't notice that issue. Over 1000 seconds (16 mins 40 secs), 23.976 fps * 1000 = 23,976 frames, while 24 fps * 1000 = 24,000 frames. So that's about one second of mismatch for every 1000 seconds. I could not notice this discrepancy while playing a Blu-ray on my PC. Could you?
  • Old_Fogie_Late_Bloomer - Monday, April 23, 2012 - link

    Okay, well, I'm pretty sure that you would notice two seconds of discrepancy between audio and video after half an hour of viewing, or four seconds after an hour, or eight seconds by the end of a two-hour movie.

    However, the issue is actually more like having a duplicated frame every 40 seconds or so, causing a visible stutter, which seems like it would be really obnoxious if you started seeing it. I don't use the on-board SB video, so I can't speak to it, but clearly it is an issue for many people.
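The "one duplicated frame every 40 seconds or so" figure falls straight out of the refresh-rate mismatch. A minimal sketch, assuming film content at exactly 24000/1001 fps (the precise value behind "23.976") shown on a display refreshing at exactly 24.000Hz:

```python
# Film content runs at 24000/1001 fps; "23.976" is a rounded value.
content_fps = 24000 / 1001   # ~23.976 frames per second
display_hz = 24.0            # display refreshing slightly too fast

# The display shows one surplus refresh relative to the content every
# 1 / (display_hz - content_fps) seconds; the player fills that gap by
# repeating a frame, which is the visible stutter being discussed.
surplus_per_sec = display_hz - content_fps      # ~0.024 frames/second
repeat_interval = 1 / surplus_per_sec           # seconds between repeats
print(f"one repeated frame every {repeat_interval:.1f} seconds")
# -> one repeated frame every 41.7 seconds
```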
  • JarredWalton - Monday, April 23, 2012 - link

I watch Hulu and Netflix streams on a regular basis. They do far more than "stutter" one frame out of every 960. And yet, I'm fine with their quality, and so are millions of other viewers. I think the crowd that really gets irritated by the 23.976 FPS problem is diminishingly small. Losing A/V sync would be a horrible problem, but AFAIK that's not what happens, so really it's just a little 0.04 second "hitch" every 40 seconds.
  • Old_Fogie_Late_Bloomer - Monday, April 23, 2012 - link

    Well, I can certainly appreciate that argument; I don't really use either of those services, but I know from experience they can be glitchy. On the other hand, if I'm watching a DVD (or <ahem> some other video file <ahem>) and it skips even a little bit, I know that I will notice it and usually it drives me nuts.

    I'm not saying that it's a good (or, for that matter, bad) thing that I react that way, and I know that most people would think that I was being overly sensitive (which is cool, I guess, but people ARE different from one another). The point is, if the movie stutters every 40 seconds, there are definitely people who will notice. They will especially notice if everything else about the viewing experience is great. And I think it's understandable if they are disappointed at a not insignificant flaw in what is otherwise a good product.

    Now, if my math is right, it sounds like they've really got the problem down to once every six-and-a-half minutes, rather than every 40 seconds. You know, for me, I could probably live with that in an HTPC. But I certainly wouldn't presume to speak for everyone.
  • anirudhs - Tuesday, April 24, 2012 - link

    I will get a discrete GPU and then do a comparison.
  • anirudhs - Monday, April 23, 2012 - link

    a discrete GPU! I could use a bump in transcoding performance for my ever-growing library of Blu-Rays.
  • chizow - Monday, April 23, 2012 - link

    Looks like my concerns a few years ago with Intel's decision to go on-package and eventually on-die GPU were well warranted.

It seems as if Intel will be directing much of the benefit of smaller process nodes, the additional transistor budget and power savings, toward improving GPU performance rather than CPU performance.

    I guess we will have to wait for IVB-E before we get a real significant jump in performance in the CPU segment, but I'm really not that optimistic at this point.
