Most modern-day Intel CPUs run at or above 1V depending on clock speed. For years core voltages fell steadily as Intel transitioned to lower-power manufacturing processes, but in recent history they have remained almost flat. While actual transistor switching happens well below 1V, it's very difficult to accurately detect transistor state (on or off) at such low voltages. At what point are you measuring an actual switch vs. just noise? Mapping your high (or on) signal to a higher voltage makes it easier to tell when the transistor is actually on: 1V looks very different from 400mV, while 500mV doesn't.
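The intuition above can be illustrated with a toy simulation (this is just a sketch of the statistics involved, not how real detection circuitry works; the 100mV noise figure is a made-up assumption):

```python
import random

def misread_rate(v_high, v_threshold, noise_v=0.1, trials=100_000):
    """Fraction of noisy 'high' samples that fall below the detection threshold."""
    random.seed(0)  # deterministic for repeatability
    misses = 0
    for _ in range(trials):
        sample = v_high + random.gauss(0, noise_v)  # add Gaussian noise
        if sample < v_threshold:
            misses += 1
    return misses / trials

# With ~100mV of noise, a 1V 'high' almost never dips below a 400mV
# threshold, while a 500mV 'high' does so a significant fraction of the time.
print(misread_rate(1.0, 0.4))  # essentially zero
print(misread_rate(0.5, 0.4))  # a sizeable fraction
```

The 1V case sits six noise standard deviations above the threshold; the 500mV case sits only one above it, which is exactly why a larger voltage swing buys reliable detection.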

Intel has done a lot of research into running CPUs near their threshold voltage (NTV), the voltage at which transistors actually begin conducting current. A lot of work has to be done to make this happen, but the end result is tremendous power savings. The chip Intel showed off today can run at less than 10mV (presumably when idle) and operate somewhere in the 400 - 500mV range when higher performance is needed.
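The reason voltage scaling pays off so dramatically follows from the classic CMOS dynamic power relationship, P = a·C·V²·f. A quick sketch (the capacitance, frequency, and NTV operating point below are hypothetical figures chosen only to show the shape of the math, not Intel's actual numbers):

```python
def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """Classic CMOS switching power: P = activity * C * V^2 * f."""
    return activity * c_farads * v_volts**2 * f_hz

# Hypothetical chip: same switched capacitance, clock dropped 10x at NTV.
nominal = dynamic_power(1e-9, 1.0, 3e9)    # 1.0V at 3GHz
ntv     = dynamic_power(1e-9, 0.45, 3e8)   # 450mV at 300MHz
print(nominal / ntv)  # ~49x reduction in dynamic power
```

Because voltage enters squared, dropping from 1V to 450mV alone cuts dynamic power by roughly 5x even before the frequency reduction, which is why NTV operation trades clock speed for outsized energy savings.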

In testing NTV, Intel turned to its original Pentium architecture as the basis for the chip. The result was a chip that didn't require any heatsink to operate. This NTV Pentium won't be productized, but the research will be used for future Intel many-core and ultra-mobile CPUs. Operating at lower voltages is important to both ends of the spectrum: whether you have dozens of cores or a handful of them in a phone, NTV operation would result in huge performance or battery life gains.

21 Comments

  • DEEPAYAN - Thursday, September 15, 2011 - link

    That is some impressive R&D. Pinpointing the exact voltage at which switching happens is an enormous task.
  • Quikstyle - Thursday, September 15, 2011 - link

    Actually, the very definition of threshold voltage differs from fab to fab based on how they calculate it. Current flow through the transistor is not simply ON and OFF; it ramps up with the applied gate voltage from zero to saturated along a curve. The threshold voltage (Vt) value is based upon the interpretation of that graph, not an absolute value, and there are different beliefs on how it should be calculated. Sounds like Intel has characterized and perfected their process to such a high state that they can repeatedly hit the same Vt target with very tight control limits.
  • soliozuz - Saturday, September 17, 2011 - link

    Yeah, very interesting indeed, and the fact that processors won't require heatsinks is absolute madness. I can already see how much thinner they will be if active cooling is no longer required.
  • marc1000 - Thursday, September 15, 2011 - link

    It is fun how they always come back to the Pentium architecture for this kind of experiment, time after time. Seems like someone there really likes that CPU!
  • Arnulf - Friday, September 16, 2011 - link

    It is (A) relatively simple (low element count), so it is easier to debug, and (B) contains just about the complete instruction set used by applications nowadays, minus the extensions (MMX, SSE*, etc.). Why not go for it when making proof-of-concept chips?
  • fic2 - Thursday, September 15, 2011 - link

    Wonder what fab node they are doing this on. I can't imagine how small an original Pentium would be if fabbed at 32nm or 22nm. The original Pentium had 3.1 million transistors (up to 3.3M w/MMX), fabbed at 800nm with a die size of 294mm^2.
    Just taking the simplified approach: 800nm/32nm = 25, 294mm^2/25 = 58.8mm^2inch.

    Clarkdale is 384M transistors and 81mm^2 on 32nm.
    Doing the simplified 384M/3.1M = 123.87. 81mm^2/123.87 = 7.28mm^2.

    Pretty sure that neither of these is the correct way to do this but the 2nd is probably closer.
  • doylecc - Thursday, September 15, 2011 - link

    You forgot that 800nm and 32nm are in one dimension only. To get the area of the die, you must square the ratio: (800nm/32nm)^2 = (25)^2 = 25 x 25 = 625.
    Using your figures: (294mm^2)/625 = 0.47mm^2 die size for the original Pentium if @ 32nm.

    Clarkdale @ 384M transistors and 81mm^2 on 32nm gives:
    384M/3.1M = 123.87.
    81mm^2/123.87 = 0.654mm^2 die size for original Pentium if @ 32nm.

    0.47mm^2 is very close to 0.654mm^2 and the difference can be accounted for by differences in transistor sizes and the area occupied by components other than transistors.
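    The die-shrink arithmetic in the thread above can be checked with a short script (a sketch using the figures quoted by the commenters; real layouts don't scale this ideally, since I/O pads, analog blocks, and wiring don't shrink with transistor pitch):

```python
def shrunk_area(area_mm2, old_node_nm, new_node_nm):
    """Linear features scale by the node ratio, so area scales by its square."""
    return area_mm2 / (old_node_nm / new_node_nm) ** 2

# Original Pentium: 294 mm^2 at 800 nm, naively shrunk to 32 nm.
print(round(shrunk_area(294, 800, 32), 2))  # 0.47 mm^2

# Cross-check via Clarkdale's density: 81 mm^2 spread over 384M transistors,
# scaled down to the Pentium's 3.1M transistors.
print(round(81 / (384 / 3.1), 2))           # 0.65 mm^2
```

    The two estimates (0.47 vs. 0.65 mm^2) land in the same ballpark, as the comments conclude.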
  • marraco - Saturday, September 17, 2011 - link

    I'm sure that 625 Pentium processors can do far more work than any 32 nm processor (for highly parallelizable tasks).
  • futrtrubl - Thursday, September 15, 2011 - link

    BTW, 294/25 = 11.76, but the units are wrong... don't even know where the inches came from.
    Don't forget you are dealing in area (mm^2), not linear distance, so it would be 294/(25*25) = 0.4704 mm^2.
    Now that's fricken tiny... but given that even if you can make transistors that small you might still need some extra magic to get them to work, I'd say the Clarkdale calculation would be closer... wait, that calc isn't right either: 81/123.87 = 0.65 mm^2.
    So the two agree well; a Pentium today would be less than a mm on a side... wowsers.
  • Jaybus - Friday, September 16, 2011 - link

    My guess is a P54C core, since those have already been moved to a modern process for the SCC. This research very likely came out of the SCC project. The P54C was built on a 600 nm process. If moved to a 32 nm process, the scaling would be (600/32)^2 = around 350 cores in the same area as one 600 nm process core. Of course, a better approach would be to use perhaps 48 such cores and spend the rest of the area on an on-die network switch, shared message-passing RAM, and DRAM controllers. Oh wait... that IS the SCC.
