When a CPU becomes a sieve

The real problem is leakage power, and the Intel power graph below illustrates this perfectly.


Fig 2. "Leakage power grows exponentially ".

As you can see, dynamic power - the power that does the useful work - has increased relatively slowly despite the increase in CPU complexity. Leakage power, however, increases exponentially rather than linearly, and it has quickly grown from a "minor nuisance" into a "circuit-killing monster".
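
To put rough numbers on that distinction, the sketch below uses the usual first-order formulas: dynamic (switching) power scales with C·V²·f, while leakage power is simply the supply voltage times the leakage current and is burned even when nothing switches. All component values are hypothetical, picked only to show that leakage can reach the same order of magnitude as the useful switching power.

```python
# Back-of-the-envelope sketch of the two power components discussed above.
# Every number here is illustrative, not a measurement of any real CPU.

def dynamic_power(c_eff, vdd, freq, activity=0.2):
    """Switching power: P_dyn ~ activity * C_eff * Vdd^2 * f."""
    return activity * c_eff * vdd ** 2 * freq

def leakage_power(vdd, i_leak):
    """Static power: P_leak ~ Vdd * I_leak, present even when idle."""
    return vdd * i_leak

# Hypothetical 90 nm-class chip: 50 nF effective switched capacitance,
# 1.3 V core voltage, 3.6 GHz clock, 30 A of total leakage current.
p_dyn = dynamic_power(c_eff=50e-9, vdd=1.3, freq=3.6e9)
p_leak = leakage_power(vdd=1.3, i_leak=30)

print(f"dynamic ~ {p_dyn:.0f} W, leakage ~ {p_leak:.0f} W")  # ~61 W vs ~39 W
```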

Leakage is comparable to a small hole in a firefighter's hose. The more pressure there is (i.e. the higher the core voltage), the bigger the hole gets, and thus the more water leaks onto the ground. The thinner the walls of the hose (i.e. the smaller the process technology), the quicker the holes grow, and the more water you lose, the harder the pumps must work to deliver the same amount of water to extinguish the fire. If the pumps overheat, you had better throttle them down, or they will stop working after a while.

Power leakage happens because a part of the current that is supposed to make our transistors switch leaks away into the substrate and, finally, into the ground. There are several leakage currents, but the two most important ones are the gate oxide tunnelling current and sub-threshold leakage.[3]


Fig 3. I3 is the gate oxide tunnelling current, I2 is the sub-threshold leakage current

Gate oxide tunnelling currents (I3) become more important with smaller process technology, as the gate oxide that is supposed to insulate the transistor becomes thinner and thinner. As a result, current that should flow through the transistor's channel leaks away through the gate oxide - the oxide becomes a sieve instead of being the "wall of the tube".

Sub-threshold leakage (I2) is the leakage current that flows through the transistor when it is supposed to be turned off. To understand this, we have to go back to basic transistor technology.

Normally, a gate voltage of at least x volts - the threshold voltage - is needed to get current flowing through the transistor. This way, the transistor is used as a switch with a binary function: at or above the threshold voltage = ON = 1, below the threshold voltage = OFF = 0.

The point that you have to remember is this: ideally, as long as the threshold voltage is not reached, no current should run through the transistor. However, as transistors and interconnects get smaller and smaller (smaller process technology), the insulation between drain and source gets worse and worse. As a result, a small leakage current (I2) gets through the transistor even though the threshold voltage is not reached (the transistor is off).
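
The reason this gets worse with every shrink is that the threshold voltage has to come down along with the supply voltage, and the off-state current depends exponentially on that threshold. The snippet below sketches the standard first-order relation; the threshold voltages and slope factor are assumptions for illustration, not figures from any particular process.

```python
import math

# Rough model of sub-threshold leakage: the current through an "off" transistor
# grows exponentially as the threshold voltage Vth is lowered.
# I_off ~ I0 * exp(-Vth / (n * vT)); all parameters below are illustrative.

VT_THERMAL = 0.026   # thermal voltage kT/q at room temperature, ~26 mV
N_FACTOR = 1.5       # sub-threshold slope factor (typically around 1.2 - 1.6)

def off_current(vth, i0=1.0):
    """Relative off-state current for a given threshold voltage (in volts)."""
    return i0 * math.exp(-vth / (N_FACTOR * VT_THERMAL))

# Lowering Vth from 0.45 V to 0.25 V, as faster processes tend to do...
ratio = off_current(0.25) / off_current(0.45)
print(f"leakage grows roughly {ratio:.0f}x")  # about 170x with these numbers
```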

That subthreshold leakage has become a major problem was made clear by Shekhar Borkar [5] (Intel Fellow, Director of Circuit Research), who illustrated it with the logarithmic graph below.


Fig 4. Subthreshold leakage - notice the logarithmic scale!

Subthreshold leakage was only a small problem at the time of Willamette: at 180 nm, it wasted just a few watts. The graph extrapolates along Moore's law: every two years, the number of transistors doubles. As you can see, without countermeasures it would make little sense to build devices on 45 nm technology - they would simply leak too much power, up to 100 Watts!
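
To see how a few watts can turn into a hundred, note that two things compound per generation: the transistor count roughly doubles (Moore's law) and each individual transistor leaks more. The toy calculation below uses made-up growth factors purely to illustrate the compounding that the graph shows; it is not Intel's data.

```python
# Illustrative compounding of total sub-threshold leakage across process nodes.
# The growth factors are assumptions chosen to mimic the trend, not real data.

nodes = ["180 nm", "130 nm", "90 nm", "65 nm", "45 nm"]

leakage_watts = 2.0  # assume a couple of watts of leakage at 180 nm (Willamette era)
for node in nodes:
    print(f"{node}: ~{leakage_watts:.0f} W of sub-threshold leakage")
    # per shrink: ~2x more transistors, each leaking ~1.3x more
    leakage_watts *= 2 * 1.3

# Output runs from ~2 W at 180 nm to ~91 W at 45 nm - the same order of
# magnitude as the "up to 100 Watts" figure above, absent countermeasures.
```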

And subthreshold leakage is only part of the leakage problem. Together with gate oxide tunnelling, CPUs built on 65 nm technology would leak more power than they need to make the transistors switch. It is comparable to a fuel tank with so many holes that more gasoline leaks onto the ground than the fuel pump can deliver to the engine.

Let us check the third and last problem for high performance CPUs.

Wire delay

It is hard to imagine that the little wires - the metal interconnects - between transistors can be a limiting factor. About twenty years ago, transistor switching speeds were pretty low and wire delays were completely ignored. However, as process technology improved, transistors became capable of switching much faster. Right now, the fastest transistors in the labs can attain 100 GHz and more (the record being around 300-500 GHz). So, transistor switching speed still has a lot of headroom.

The tiny wires between the individual transistors are still not the problem. However, functional blocks are also wired to the TLBs (Translation Lookaside Buffers) and caches, and it is these global wires that are the real problem - they are a lot longer. If their RC delay is too high, the clock speed has to be reduced to get a working CPU.

The speeds at which signals travel through the global wires (from logic blocks to the caches, for example) are quite a bit lower than what the maximum speed (the speed of light) allows. The reason is the resistance (R, in Ohm) and the capacitance (C) of the wire. As the whole CPU is made with a smaller process technology, the wires shrink along with it. You probably remember from your physics lessons that resistance increases as the cross section of the wire gets smaller and as the wire gets longer. So, if you shrink a wire, the benefit of the shorter length is completely negated by the smaller cross section of the wire. You could make the wires thicker, but that is not easy and it would increase the capacitance of the wire. The result is that wire delay stays, more or less, the same (in nanoseconds).
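
A quick first-order model makes the "shrinking does not help" argument concrete: the resistance rises as the cross section shrinks, the capacitance falls with the shorter length, and the RC product ends up roughly where it started. The resistivity, capacitance per metre and wire dimensions below are assumptions for illustration only.

```python
# Why shrinking a wire does not shrink its RC delay (first-order sketch).
# The wire is treated as a lumped RC; all numbers are illustrative.

def wire_rc(length, width, height, rho=1.7e-8, cap_per_m=2e-10):
    """RC product of a wire: R = rho * L / (W * H), C ~ cap_per_m * L."""
    resistance = rho * length / (width * height)
    # capacitance per unit length stays roughly constant, so C scales with L
    capacitance = cap_per_m * length
    return resistance * capacitance

old = wire_rc(length=1.0e-3, width=500e-9, height=500e-9)  # "old" process
new = wire_rc(length=0.7e-3, width=350e-9, height=350e-9)  # all dims scaled by 0.7

print(f"old RC ~ {old:.1e} s, new RC ~ {new:.1e} s")  # essentially identical
```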

However, gate switching speed improves a lot with smaller transistors (say, by 100%). So, while the RC delay improves by a very small percentage (or not at all), gates might switch up to 100% faster (a simplified example) as process technology improves. The RC delay of the global wires thus becomes more and more of a bottleneck that makes it hard to bump up the clock speed. Modern integrated circuits (ICs), such as CPUs, must be partitioned so that a signal only needs to travel a distance that it can cover in slightly less than one clock period.
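
To see why partitioning is unavoidable, consider how far a signal can physically get in one clock period. The sketch below assumes a Prescott-class clock and an on-chip signal speed that is only a fraction of the speed of light (the exact fraction is an assumption); even with generous numbers, a global signal cannot cross a modern die within a single cycle.

```python
# How far can a signal travel in one clock period? (illustrative numbers)

CLOCK_HZ = 3.8e9        # a Prescott-class clock
LIGHT_SPEED = 3.0e8     # m/s, the absolute upper bound
ON_CHIP_SPEED = 3.0e7   # assume RC-limited global wires are ~10x slower

period = 1.0 / CLOCK_HZ  # ~263 ps
print(f"clock period: {period * 1e12:.0f} ps")
print(f"light in vacuum would cover: {LIGHT_SPEED * period * 1000:.0f} mm")   # ~79 mm
print(f"an RC-limited wire covers:   {ON_CHIP_SPEED * period * 1000:.1f} mm") # ~7.9 mm

# A die is easily 10-15 mm across, so a global signal cannot be expected to
# cross the whole chip in one cycle - hence the partitioning described above.
```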


Comments
  • AnnoyedGrunt - Tuesday, February 8, 2005 - link

    It's possible that 22 was referring solely to the grammar of the sentence, which could potentially make more sense if it was rewritten as, "while other applications will REQUIRE exponential investments in develpment....."

    Very good article overall, but some portions could be polished a bit perhaps to make it easier for people only slightly familiar with processor details (people like myself) to understand.

    Really looking forward to part 2!

    -D'oh!
  • JarredWalton - Tuesday, February 8, 2005 - link

    23 - Not at all. Have you ever tried writing multi-threaded code? If it take 12 months to write and debug a single-threaded program that handles a task, and you try to do the same thing in multi-threaded code, I would expect 24 to 36 months to get everything done properly.

    Let's not even get into the discussion of the fact that not all code really *can* benefit from multi-threadedness. I had a similar conversation with several others in the Dual Core AMD Roadmap article. You can read the comments there for additional insight, I hope:

    http://www.anandtech.com/talkarticle.aspx?i=2303
  • cosmotic - Tuesday, February 8, 2005 - link

    "while the other applications will see exponential investments in development time to achieve the same performance increase." Thats a really stupid statement.
  • cosmotic - Tuesday, February 8, 2005 - link

    That first image really sucks. You should at least make it look decent. It looks like crap now.
  • IceWindius - Tuesday, February 8, 2005 - link

    Math hurts, and thus my head hurts.......


    Either way, Intel finally admits they fucked up and AMD spanked them for it. Justice is served.
  • faboloso112 - Tuesday, February 8, 2005 - link

    only about halfway through the article but this is a damn good article.

    not a fanboi of any sort but i certainly do hate intel's pr team.

    i think the reason amd has done well for itself is because it doesn't pride itself nor relies of fake product specs and their exaggerated capabilities and scalability...unlike intel...and ill admit...i got cought up in the hype too with the whole 10ghz thing at the time because based on moore's law and how things had been going w/ the clock speed jumps...i thought one day it would be possible...but look at where the prescott stands now...and look at how instead of blabbing about 10ghz..they talk of multi-core cpu.

    i think ill stop talking now and return to the article...
  • erikvanvelzen - Tuesday, February 8, 2005 - link

    i eat these sort of articles about cpu's, memory and the like which have references to hardware which i actually use.

    If you like this, check out these articles by John 'Hannibal' Stokes @ arstechnica.com:
    http://arstechnica.com/cpu/index.html
    http://arstechnica.com/articles/paedia/cpu.ars
  • jbond04 - Tuesday, February 8, 2005 - link

    AWESOME article, Johan. Good to see someone do some real research regarding the Prescott processor. Keep up the good work!
  • Oxonium - Tuesday, February 8, 2005 - link

    Johan used to write very good articles for Ace's Hardware. I'm glad to see him writing those same high-quality articles for Anandtech. Keep up the good work!
  • BlackMountainCow - Tuesday, February 8, 2005 - link

    Wow, very interesting read. Finally some stuff based on real facts and not some "Prescott just sux" stuff. Two thumbs up!
