Power Efficiency Improvements

When Intel introduced its 22nm tri-gate transistors, it claimed an 18% increase in performance at 1V compared to its 32nm process. At the same switching speed, however, Intel's 22nm transistors can run at 75 - 80% of the voltage of their 32nm counterparts. Ivy Bridge's process alone should account for some pretty significant power savings. In addition to process, however, there are a few architectural changes in IVB that will reduce power consumption.

Lower System Agent Voltages

Sandy Bridge introduced the System Agent, a name that used to refer to the uncore of a processor but now refers to the display output, memory controller, DMI and PCI Express interfaces. As of Sandy Bridge, the L3 cache was no longer included in the uncore and thus it wasn't a part of the System Agent.

The System Agent operates on a separate voltage plane from the rest of the chip. On Ivy Bridge, Intel now offers even lower System Agent voltage options for the lower voltage SKUs, which in turn helps power optimize those SKUs.

More Accurate Voltage Characterization

Today Intel defines three different voltages for every Sandy Bridge CPU: LFM, nominal and turbo. LFM is the lowest frequency the CPU can run at (e.g. completely idle), nominal is the frequency it is specified to run at (e.g. 3.3GHz for a 2500K) and turbo is the highest available turbo frequency (e.g. 3.7GHz for a 2500K). Intel determines the lowest voltage possible for each one of those frequencies. Sandy Bridge obviously runs at more than just three frequencies; there are many intermediate frequencies it may run at depending on the current workload. The voltages at those intermediate frequencies are interpolated from the three points I mentioned above.

With Ivy Bridge, Intel characterizes even more points along the frequency curve. Intel didn't reveal exactly how many points, but it's more than three. A curve is then fit to the frequency/voltage data, and depending on IVB's operating frequency a more accurate voltage point is calculated. The result of all this seemingly simple work is a reduction in core voltage at these intermediate frequencies. Voltage changes have a cubic effect on power, so even a small reduction here can have a tangible impact. One of the points that wasn't previously characterized was the max all-core turbo frequency. Ivy Bridge should be more power efficient in cases where you have all cores active.
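To make the idea concrete, here's a toy sketch of the difference between interpolating voltage from a few characterized points and characterizing an intermediate point directly. All numbers are made up for illustration; Intel hasn't published its actual voltage tables.

```python
def lerp(x, x0, y0, x1, y1):
    """Linear interpolation between two characterized (frequency, voltage) points."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical characterized points: (frequency in GHz, minimum stable voltage in V)
lfm     = (1.6, 0.85)   # lowest frequency mode
nominal = (3.3, 1.05)   # rated frequency
turbo   = (3.7, 1.15)   # max turbo

# Sandy Bridge-style approach: interpolate intermediate frequencies from a few points
f = 2.4  # an intermediate operating frequency
v_interp = lerp(f, *lfm, *nominal)

# Ivy Bridge-style approach: suppose direct characterization found a lower
# safe voltage at this frequency (hypothetical measured value)
v_actual = 0.91

# Dynamic power scales roughly with f * V^2, so at a fixed frequency even a
# small voltage drop yields a noticeable power reduction
power_ratio = (v_actual / v_interp) ** 2
print(f"interpolated: {v_interp:.3f}V, characterized: {v_actual:.3f}V")
print(f"dynamic power at {f}GHz: {power_ratio:.1%} of the interpolated-voltage case")
```

In this made-up example, shaving roughly 35mV off the interpolated voltage cuts dynamic power at that frequency by about 7%, which is the kind of gain the finer characterization is after.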

Power Aware Interrupt Routing

This next feature is pretty neat. Ivy Bridge has logic to route interrupt requests to cores that are already awake rather than to those asleep in their lowest power states. This approach can obviously save a lot of power, but it may rob those active cores of some performance. IVB will allow prioritizing performance as well. Interrupts can thus be routed as they are today, or optimally for power savings.
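A toy model of such a routing policy might look like the following. This is an assumption about how a power-aware policy could behave, not Intel's implementation: in power-savings mode, sleeping cores are left asleep and the interrupt goes to the least-loaded awake core; in performance mode, the least-loaded core overall is chosen, even if it must be woken.

```python
from dataclasses import dataclass

@dataclass
class Core:
    cid: int
    asleep: bool      # in a deep sleep state
    load: float       # 0.0 (idle) .. 1.0 (fully busy)

def route_interrupt(cores, prefer_power_savings=True):
    """Pick a core to service an interrupt under the chosen policy."""
    if prefer_power_savings:
        awake = [c for c in cores if not c.asleep]
        if awake:
            # Keep sleeping cores asleep: use the least-loaded awake core
            return min(awake, key=lambda c: c.load)
    # Performance mode (or everything asleep): pick the least-loaded core
    # overall, waking it if necessary
    return min(cores, key=lambda c: c.load)

cores = [Core(0, asleep=False, load=0.7),
         Core(1, asleep=True,  load=0.0),
         Core(2, asleep=False, load=0.3),
         Core(3, asleep=True,  load=0.0)]

print(route_interrupt(cores, prefer_power_savings=True).cid)   # 2: awake and lightly loaded
print(route_interrupt(cores, prefer_power_savings=False).cid)  # 1: idle, even though asleep
```

The trade-off is visible in the example: power-savings mode piles work onto already-awake cores, while performance mode spreads it out at the cost of waking a sleeping core.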

Configurable TDP

I already wrote about what this is, but if you missed our Pipeline post on it I'll briefly recap. All CPUs ship with a rated thermal design power (TDP) that tells OEMs what sort of cooling the chip requires. Traditionally that TDP value remained static and the CPU could do whatever it wanted as long as it didn't exceed that value. Ivy Bridge introduces configurable TDP, which allows the platform to increase the CPU's TDP if given additional cooling, or decrease the TDP to fit into a smaller form factor.

The cTDP up mode is obviously for docked notebooks. You can imagine an Ivy Bridge notebook with an optional dock that could enhance the cooling capabilities of the machine. When undocked the notebook's processor would operate at a max TDP of 17W, for example, but toss it in a dock with additional cooling and the TDP would jump up to 33W. It's up to the OEMs to decide how they want to take advantage of this feature. It could be something as simple as a mobile dock with more fans, or something as complex as a modular water cooling solution with a bigger radiator in the dock. I haven't seen any concepts of systems that take advantage of Ivy Bridge's cTDP up support, but that's the theory.

What about cTDP down? Take the same 17W Ivy Bridge CPU from above but now drop the TDP to 13W, which in turn limits clock speed and voltage. Why would you want to do this? From the OEM perspective, Intel's TDP choices may seem arbitrary. Downward configurable TDP allows OEMs to get a lower power configuration without forcing Intel to create a new SKU. OEMs can do this today through undervolting/underclocking of their own, but the cTDP down spec will at least give OEMs a guarantee of performance/power.
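The platform-side decision can be sketched as a simple lookup. The wattages below come from Intel's published cTDP table; the function, SKU keys, and the triggers for each mode (dock detection, chassis constraints) are illustrative assumptions about how an OEM might wire this up.

```python
# cTDP operating points in watts (from Intel's published table)
CTDP_MODES = {
    "ulv": {"down": 13, "nominal": 17, "up": 33},   # Ivy Bridge ULV
    "xe":  {"down": 45, "nominal": 55, "up": 65},   # Ivy Bridge XE
}

def select_tdp(sku, docked=False, small_form_factor=False):
    """Pick a TDP (in watts) for the current platform state."""
    modes = CTDP_MODES[sku]
    if docked:               # extra cooling available: cTDP up
        return modes["up"]
    if small_form_factor:    # thermally constrained chassis: cTDP down
        return modes["down"]
    return modes["nominal"]

print(select_tdp("ulv", docked=True))             # 33
print(select_tdp("ulv", small_form_factor=True))  # 13
print(select_tdp("xe"))                           # 55
```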

Configurable TDP obviously only applies to mobile Ivy Bridge. In particular, the ultra-low-voltage (ULV) and Extreme Edition (XE) parts will support cTDP. The cTDP values are listed in the table below:

Ivy Bridge Configurable TDP
                  cTDP Down   Nominal   cTDP Up
Ivy Bridge ULV    13W         17W       33W
Ivy Bridge XE     45W         55W       65W

As far as I'm concerned, the most interesting are the 17W ULV Ivy Bridge parts. Today you sacrifice clock speed to get into the form factor of a MacBook Air. A clever OEM armed with Ivy Bridge might be able to deliver a cooling dock that would give you the best of both worlds: an ultra portable chassis on the go, and higher clock speeds while docked.

Comments

  • shiznit - Saturday, September 17, 2011 - link

    Intel's APU is more integrated than AMD's
  • TypeS - Monday, September 19, 2011 - link

    Guess you're one of those fanboys who just couldn't come back down from the high of AMD's time in the spotlight with the Athlon 64?

    For someone who speaks of facts, you need to go check the architecture of both the SNB and Llano/Brazos cores before you say AMD has the more integrated approach.

    AMD is just using marketing nonsense with calling their new CPU an "APU", just like when they called the Phenom X4s "true quad cores".
  • JonnyDough - Monday, September 19, 2011 - link

    Marketing fluff is Intel's bag, right? Maybe you forget the whole "clock speed" fiasco: selling P4s while claiming they were faster than the competition, although they were not...

    At least consumers eventually caught on and OEMs began looking at AMD processors as well. :)

    You all sound like fanboys though, who really cares who's right? We should just be excited about the TECH!
  • Kaihekoa - Saturday, September 17, 2011 - link

    Lol, with Intel's capital and recruiting of experienced GPU engineers, that "two year lead" will evaporate faster than boiling water. I don't know where you're getting your delusions of the mainstream market lapping up AMD's CPU/GPU combination marketing and products, but the average computer user doesn't use or need anything more than Intel's current generation of graphics. And as others have mentioned, Intel's design is more integrated than AMD's on an engineering/design level.

    Yes, they have the more powerful GPU, but you have to be an idiot to think it's more integrated than Ivy Bridge. CPU performance and graphics good enough to power 2D and 3D accelerated media are the yardstick for PC performance for the vast majority of users. You're truly deluding yourself if you think the average computer user is playing The Witcher 2 and Deus Ex on their PCs with cards more powerful than IVB's. Even now with AMD's two year advantage, guess who owns the market for systems with a combined CPU/GPU? For integrated graphics? Wintel.

    Am I an Intel fanboy? No, the last desktop system I built had an AMD CPU and discrete GPU, but you can't logically deny how well their business is doing now, and you'd be a fool to think they would overlook the mainstream demand for a high-end APU. In the future, when the market needs/wants it, Intel will have something equivalent to or better than AMD/ATI.
  • Zoomer - Saturday, September 17, 2011 - link

    Let's not forget drivers and game support, not to mention IQ. Last I checked, Intel graphics drivers were still pretty horrible.
  • iwodo - Sunday, September 18, 2011 - link

    Exactly. Designing hardware is easy. Throw money and engineers at it and you could be there in no time, especially with the expertise at Intel.

    Software, on the other hand, takes time, no matter how many engineers you put on it. Drivers are the problem Intel has to overcome.
  • JonnyDough - Monday, September 19, 2011 - link

    I agree. Software is key. Intel is good at parts of it, AMD is better when it comes to keeping up with game developers. However, business markets make enthusiast markets look miniscule. Still, both are great competitors and we consumers just keep winning. :)
  • iwodo - Sunday, September 18, 2011 - link

    I forgot to add, there is a reason why Nvidia has more software engineers than hardware engineers.
  • medi01 - Sunday, September 18, 2011 - link

    It's actually the other way round. Pretty much any CPU starting from about 2008 is "more than good enough" for most users.
  • BSMonitor - Sunday, September 18, 2011 - link

    "In spite of the marketing hype from Intel it looks like they've conceded that AMD has the better system approach with APUs for mainstream consumers and laptops. CPU performance alone is no longer a valid yardstick for PC performance thanks to AMD's advance thinking and Llano. "

    This is utter nonsense. All AMD has done is transfer 400 of its shader units onto the CPU die. What you have with AMD is a 4-5 year old GPU combined with a 3 year old CPU.

    Both sides of the coin yield a huge YAWN from anyone looking for real performance.
