Fat Pockets, Dense Cache, Bad Pun

Whenever Intel introduces a new manufacturing process, the first thing we see it used on is a big chip of cache. The good ol’ SRAM test vehicle is a great way to iron out early bugs in the manufacturing process, and at the end of 2007 Intel demonstrated its first 32nm SRAM test chip.


Intel's 32nm SRAM test vehicle

The 291Mbit chip was made up of over 1.9 billion transistors switching at 4GHz, built on Intel’s 32nm process. The important number to look at is the cell size: the physical area a single bit of cache occupies. At 45nm that cell size was 0.346 µm² (for desktop processors; Atom uses a slightly larger cell), compared to 0.370 µm² for AMD’s 45nm SRAM cell. At 32nm you can cut the area nearly in half, down to 0.171 µm² for a 6T SRAM cell. This means that in the same die area Intel can fit twice the cache, or the same amount of cache in half the area. Given that Core i7 is a fairly large chip at 263 mm², I’d expect Intel to take the die size savings and run with them, perhaps with a modest increase in L3 cache size.
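
To put those cell sizes in perspective, here’s a rough back-of-the-envelope sketch (Python, purely illustrative). It counts only the raw 6T data cells for a hypothetical L3 cache size and ignores tags, ECC, redundancy and peripheral circuitry, so real caches take up noticeably more area:

```python
# Rough estimate of raw SRAM array area from published 6T cell sizes.
# Only the data cells are counted; tags, ECC, redundancy and peripheral
# circuitry all add area on top of this, so real caches are larger.

CELL_45NM_UM2 = 0.346   # Intel 45nm 6T SRAM cell (desktop)
CELL_32NM_UM2 = 0.171   # Intel 32nm 6T SRAM cell

def raw_array_area_mm2(cache_mb: float, cell_um2: float) -> float:
    bits = cache_mb * 1024 * 1024 * 8      # MB -> bits
    area_um2 = bits * cell_um2             # total cell area in um^2
    return area_um2 / 1_000_000            # um^2 -> mm^2

if __name__ == "__main__":
    for mb in (8, 12):                     # hypothetical L3 sizes
        a45 = raw_array_area_mm2(mb, CELL_45NM_UM2)
        a32 = raw_array_area_mm2(mb, CELL_32NM_UM2)
        print(f"{mb} MB L3: ~{a45:.1f} mm^2 at 45nm vs ~{a32:.1f} mm^2 at 32nm")
```

An 8MB array works out to roughly 23 mm² of raw cells at 45nm versus about 11.5 mm² at 32nm, which is where the “twice the cache in the same area” claim comes from.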

A big reason we’re even getting this disclosure today is how healthy the 32nm process is. Below we have a graph of defect density (the number of defects per unit area of silicon) vs. time; volume manufacturing can’t start until you’re at the lowest part of that graph - the tail that starts to flatten out:

Intel’s 45nm process ramped and matured very well, as you can see from the chart. It reached lower defect densities than both 65nm and 90nm, and did so faster than either process. Intel’s 32nm process is on track to outperform even that.
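
Why is defect density so important? Because for a given die size, yield falls off roughly exponentially with it. Intel doesn’t publish absolute numbers and no yield model is given here, so the sketch below uses a simple first-order Poisson model (Y = e^(-D·A)) with made-up defect densities, just to illustrate the sensitivity:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-D * A).

    D = defect density in defects/cm^2, A = die area in cm^2.
    Real fabs use more elaborate models (Murphy, negative binomial),
    but the exponential sensitivity to D and A is the same.
    """
    die_area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

if __name__ == "__main__":
    DIE_AREA_MM2 = 263.0  # Core i7 die size quoted above
    # Hypothetical defect densities: early ramp vs. a maturing process
    for d in (1.0, 0.5, 0.1):
        print(f"D = {d:.1f}/cm^2 -> yield ~ {poisson_yield(d, DIE_AREA_MM2):.0%}")
```

With a 263 mm² die, dropping from 1.0 to 0.1 defects/cm² takes yield from single digits to well over 70% in this toy model, which is why the tail of that defect density curve is where volume manufacturing begins.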

Two Different 32nm Processes?

With Intel now getting into the SoC (System on a Chip) business, each process node will have two derivatives - one for CPUs and one for SoCs. This started at 45nm with the P1266.8 process, used for Intel’s consumer electronics chips and Moorestown CPUs, and will continue at 32nm with the P1269 process.

There are two major differences between the CPU and SoC versions of a given manufacturing process. One, the SoC version is optimized for low leakage while the CPU version is optimized for high drive current. Remember the graph comparing leakage vs. drive current at 45nm and 32nm? The P1268 CPU process will exploit the arrows pointing to the right (more drive current), while the P1269 SoC process will attempt to push leakage current down.

The second difference is that certain SoC circuits require higher than normal voltages, so you need a process that can tolerate those voltages. Remember that with an SoC it’s not always just Intel IP being used; many third parties will contribute to the chips that eventually make their way into smartphones and other ultraportable devices.

The buck doesn’t stop here; in two more years we’ll see the introduction of P1270, Intel’s 22nm process. But before we get there, there’s a little stop called Sandy Bridge. Let’s talk about microprocessors for a bit now, shall we?

64 Comments

  • Oyvind - Wednesday, April 15, 2009 - link

    7 - 10 posts are about the fact that today's software doesn't use more than 2 cores in an efficient way. Well, 2 - 3 years ago there was close to none. Did Valve, Epic and others build frameworks for using multiple CPUs before the hardware base was in place? The answer is no. Do most big software houses today put a big effort into scaling over more cores? The answer is yes. Should Intel/AMD wait until the software houses catch up? I don't understand it, but the vocal majority seems to answer this with a yes?
    My question: when the big software houses are done with their multi-CPU frameworks, don't you believe they will then scale over n numbers of CPUs? User input, rendering/GPU stuff, AI at n times today's depth, etc. All real-life architecture is parallel; software is not yet, but hopefully that will change.

    ----
    If life is good and you have insanely too much money, you stop developing, you don't need to prioritize, and you slowly fall back into your pillow. Yes, AMD is fighting uphill, but if they manage to survive, nature has proven that fighting uneven odds will give you a medium to long term edge (ok, if you survive). Tons of money don't save anything. Not sure they survive, but if they don't, a new company with clever engineers will rise somewhere in the future. Yes, we need competition, and there always will be.
  • mattigreenbay - Friday, March 6, 2009 - link

    This is the end of AMD. Unless this turns out like the P4 (not likely), AMD will have to release their process first or soon after [or better yet, a 16nm ultra-fast processor, and while I'm still dreaming, make it free] and have it perform better (also not likely). Poor AMD. I was going to buy a Phenom II, but Intel seems the way to go, future-wise. AMD will be liquidated, as well as VIA, and Intel will go back to selling way overpriced processors that perform worse than an i386 [Windows 7 certified].
  • mattigreenbay - Friday, March 6, 2009 - link

    But it'll come with a free super fast Intel GPU. (bye bye Nvidia too) :(

  • arbiter378 - Sunday, November 22, 2009 - link

    Intel doesn't make fast GPUs. Even when they tried with that AGP GPU, ATI and Nvidia killed it. They won't let a new player into the graphics market without a fight. Lastly, Intel has been trying to beat AMD for 40-something years, and they're still not even close to beating them. Now that AMD has acquired ATI, they have superior graphics patents.
  • LeadSled - Friday, February 20, 2009 - link

    What is really amazing is the shrink process timetable. It looks like they will meet the timetable for our first quantum dot processors, which are theorized to arrive at the 1.5nm process node, by the year 2020.
  • KeepSix - Saturday, February 14, 2009 - link

    I guess I can't blame them for changing sockets all the time, but I'm not sure if I'll be switching any time soon. My Q6600 hasn't gone past 50% usage yet, even under extreme multi-tasking (editing HD video, etc.)

    I'd love to build an i7 right now, but I just can't justify it.
  • Hrel - Thursday, February 12, 2009 - link

    > On the mainstream quad-core side, it may not make sense to try to upgrade to 32nm quad-core until Sandy Bridge at the end of 2010. If you buy Lynnfield this year, chances are that you won’t feel a need to upgrade until late 2010/2011.

    So if you buy a quad-core, 8-thread, 3.0 GHz processor, you will "NEED" to upgrade in one year?! What?! It doesn't make sense to upgrade just for the sake of having the latest. Upgrade when your computer can't run the programs you need it to anymore, or when you have the extra money and you'll see at least a 30 percent increase in performance. You should be good for at least 2 years with Lynnfield, and probably 4 or 5.
  • QChronoD - Thursday, February 12, 2009 - link

    He's saying that the people who have no qualms about throwing down a grand on just the processor are going to want to upgrade to the 32nm next year.

    However for the rest of us that don't shit gold, picking up a Lynnfield later this year will tide us over until 2011 fairly happily.
  • AnnonymousCoward - Thursday, February 12, 2009 - link

    > However for the rest of us that don't shit gold, picking up a Lynnfield later this year will tide us over until 2011 fairly happily.

    My C2D@3GHz will hold me over to 2011...
  • MadBoris - Thursday, February 12, 2009 - link

    I watch roadmaps from time to time and I know where AMD has potential.

    Simplify the damn roadmap, platforms, chipsets, sockets!

    Seriously, I need a spread sheet and calculator to keep it all straight.

    Glad Anand gave kind of a summary of where and when it makes sense to upgrade, but I just don't have the patience to filter through it all to the point where I have a working knowledge of it.

    One thing AMD has been good at in the past, if they continue it, is keeping upgrades simple. I don't want a new motherboard and new socket on nearly every CPU upgrade. I'm not sure if mobo makers love it or hate it; obviously they get new sales, but it's kind of nuts.
    This alone, knowing I have some future-proofing on the mobo, makes CPU upgrades appealing and easy and something I would take advantage of.

    As far as the GPU/CPU goes, it's nothing I will need for years to come. We will have to wait until it permeates the market before it gets used by devs, just like multicore. It will at least take consoles implementing it before game devs start utilizing it, and even then it's liable to take a lot of steps back in performance (it's only hype now)...
