Power Improvements

Although Haswell’s platform power is expected to drop considerably in mobile, particularly with Haswell U and Y SKUs (Ultrabooks and ultrathins/tablets), there are benefits to desktop Haswell parts as well.

There’s finer grained power gating, lower chipset power, and the CPU cores can transition between power states about 25% quicker than in Ivy Bridge - allowing the power control unit to be more aggressive in selecting lower power modes. We’ve also seen considerable reductions in platform power consumption at the motherboard level. Using ASUS’ Z77 Deluxe and Z87 Deluxe motherboards for the Haswell, Ivy and Sandy Bridge CPUs, I measured significant improvements in idle power consumption:

Idle Power

These savings are beyond what I’d expect from Haswell alone. Intel isn’t the only one squeezing out efficiency in the absence of any low hanging fruit. The motherboard makers are aggressively polishing their designs in order to grow their marketshare in a very difficult environment.

Under load, there’s no escaping the fact that Haswell can burn more power in pursuit of higher performance:

Load Power - x264 HD 5.0.1 Benchmark

Here I’m showing an 11.8% increase in power consumption, and in this particular test the Core i7-4770K is 13% faster than the i7-3770K. Power consumption goes up, but so does performance per watt.
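That perf/watt claim is easy to sanity check from the two measured deltas. A minimal sketch in Python; the ratios are taken straight from the numbers above:

```python
# Relative deltas for the i7-4770K vs. the i7-3770K in the x264 test above.
perf_gain = 1.13    # 13% faster
power_gain = 1.118  # 11.8% more power under load

# Performance per watt improves whenever performance grows faster than power.
perf_per_watt = perf_gain / power_gain
print(f"perf/watt vs. Ivy Bridge: {perf_per_watt:.3f}x")  # ~1.011x, i.e. ~1% better
```

A modest gain, but the direction matters: Haswell spends its extra watts at better than break-even efficiency in this test.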

The other big part of the Haswell power story is what Intel is calling FIVR: Haswell’s Fully Integrated Voltage Regulator. Through a combination of on-die and on-package circuitry (mostly inductors on-package), Haswell assumes responsibility for distributing voltages to individual blocks and controllers (e.g. PCIe controller, memory controller, processor graphics, etc...). With FIVR it’s easy to implement many more voltage rails, which is why Intel doubled the number of internal rails. More independent rails mean finer grained control over the power delivered to the various blocks of Haswell.

Thanks to a relatively high input voltage (on the order of 1.8V), it’s possible to generate quite a bit of current on-package and efficiently distribute power to all areas of the chip. Voltage ramps are 5 - 10x quicker with FIVR than with a traditional on-board voltage regulator implementation.
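The efficiency argument behind the high input voltage comes down to ohmic losses: for a fixed power budget, the current drawn scales as P/V, and losses in the delivery path scale as I²R. A quick illustrative sketch; the wattage and path resistance here are hypothetical figures chosen only to show the scaling, not Intel’s specifications:

```python
# For a fixed delivered power P, current I = P / V, and resistive loss
# in the delivery path is I^2 * R. Both figures below are hypothetical.
P = 84.0    # watts delivered to the die (illustrative)
R = 0.001   # ohms of delivery-path resistance (illustrative)

for v_in in (1.0, 1.8):
    i = P / v_in        # current drawn at this input voltage
    loss = i * i * R    # I^2 R loss in the delivery path
    print(f"{v_in:.1f} V input: {i:5.1f} A, {loss:5.2f} W lost")
```

At the same delivered power, the 1.8V input draws a bit over half the current of a 1.0V rail, cutting the modeled delivery loss from about 7.1W to about 2.2W in this sketch.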

In order to ensure broad compatibility with memory types, there’s a second input voltage for DRAM as well.

FIVR also comes with a reduction in board area and component cost. I don’t expect this to be a huge deal for desktops, where the space and cost savings are basically non-existent as a share of the total, but it’ll mean a lot for mobile.

No S0ix for Desktop

You’ll notice that I didn’t mention any of the aggressive platform power optimizations in my sections on Haswell power management; that’s because they largely don’t apply here. The new active idle (S0ix) states are not supported by any of the desktop SKUs. Only the forthcoming Y and U series parts support S0ix.


208 Comments


  • jeffkibuule - Saturday, June 01, 2013 - link

    I wouldn't say that Pentium 4 was terrible, but their 2004-2006 exercise of continually pumping up clocks was misguided.
  • Nfarce - Saturday, June 01, 2013 - link

    Exactly. As someone who still has my P4 Northwood 3.06GHz (with HT) as a general use PC, I loved it. It served as my main gaming and photo/video editing PC back in the day, and was only replaced with a C2D E8400 overclock build four and a half years ago (which was replaced two years ago with a SB 2500k build). Anyone who says the P4 was terrible is either an AMD fanboy trolling or never had one at the time.
  • bji - Saturday, June 01, 2013 - link

    By any reasonable metric, P4s were pretty bad. Glad you like yours but that's mostly because even back in the P4 days CPUs were already "fast enough" most of the time for most tasks and you probably would have liked a Pentium M or Athlon just as well. P4s started out with very weak performance and were improved a decent amount during the lifetime of the architecture, but they were never spectacular performers vs. the competition and they were always extremely hot and power hungry. Also Rambus memory was a joke.

    More on topic, I'm not surprised that Haswell isn't significantly faster than Ivy Bridge. I said when Sandy Bridge came out that the x86 architecture would never get 50% faster per core than Sandy Bridge. With the combination of nearing the end of the road for process shrinking, the architecture itself already having been optimized to such a degree that any additional significant gains come at an extremely high transistor and R&D cost, the declining importance of the x86 market as mobile devices become more prominent, and the "already much more than fast enough" aspect of modern CPUs for the vast majority of what they're used for, it's pretty clear that we'll never see significant increases in x86 speed again. There just isn't enough money available in the market to fund the extremely high costs necessary to significantly increase speed in a market where fast enough was achieved years ago.

    I'll stand by my statement of ~2 years ago: x86 will top out at 50% faster than Sandy Bridge per core.
  • nunomoreira10 - Saturday, June 01, 2013 - link

    Maybe not on the common instruction set, which Intel has already addressed on Haswell; just wait for the software to update to AVX2 and you will see how slow Sandy Bridge is by comparison.
  • klmccaughey - Monday, June 03, 2013 - link

    @bji: Totally agree. We are in the halcyon days and I can't see the likes of the 4770k getting significantly more powerful any time soon. I believe it will take a huge technology breakthrough in terms of fab materials, along the lines of optical or biological chips. At least 10 years away.

    The corollary to this is that we don't actually really need any more power. We already have the level of "good enough" for the GPU (in gaming terms). In terms of compute power, progress is definitely continuing along the concurrency paradigm, which is where it should be; it makes sense. Programmers (like myself) are proceeding along these lines to get more power.

    I think we are at either a pivotal point or a point of divergence again in computer technology. It's very exciting and interesting for me :)
  • jmelgaard - Sunday, June 02, 2013 - link

    Wait, what... I must be an AMD fanboy then (although I love Intel and have never owned an AMD >.<, lol)...

    Honestly, the P4 platform was terrible in many aspects, and yes, I did own one - several, actually (2.266, 2.4, 2.8)... But having a dual Pentium III 1GHz at the time as well made it pretty obvious to me how bad the P4 really was... Granted, all those P4s were at lower clocks than yours...

    But nothing is so bad as not to be good for something; after all, Intel's generations after the P4 have all been pretty amazing...

    More on the topic though, I am a bit dismayed and disappointed that power consumption under load goes up compared to the last generation... Great that idle power goes down that much, but I would rather see the exact same performance as 3rd gen and a huge power reduction... After all, performance-wise I am still more than satisfied with my i970... I don't feel like I need more juice, so I would rather save some bucks on the electrical bill... Obviously there will be different minds about that part... Just saying what I feel...
  • Donkey2008 - Monday, June 03, 2013 - link

    Weird how you keep saying how "bad" it was in its time, yet you present no actual facts to back that up. About the only bad thing I ever saw with the P4 was high temps, which any decent HSF fixed.
  • bji - Monday, June 03, 2013 - link

    It was so bad that Intel had to pay vendors not to buy the competitor's chips, an action that they were later sued for and settled to the tune of $1.25 billion.

    The P4 started out very badly; it was very power hungry and had weak performance compared to the competition. Intel was also the only company able to make chip sets for it (can't remember if there were technical or legal reasons behind this or both), and they refused to support any memory but Rambus (for a long time), further hurting their cause by propping up a company that is pretty much the dregs of submarine patent lawsuit filth.

    I can't think of any way in which the P4 was better than its competition of the day except that it had Intel's sleazy business practices behind it, if you consider that "better". It certainly played better in the marketplace, ethics notwithstanding.

    You may have been happy with your P4 because it did what you needed it to do. Awesome. Nobody is saying that the P4 didn't work or that it couldn't actually fulfill the duties of a CPU, we're just saying that compared to its contemporaries, it kinda blew chunks.
  • superjim - Wednesday, June 05, 2013 - link

    I had two P4 chips (2.4 Northwood and 3.0 Prescott) along with many Athlon XP systems (Palomino, Thoroughbred and Barton) and the Athlons beat the P4s in nearly every metric. Then came the Athlon 64 to solidify AMD's crown. It wasn't until the Core 2 (Conroe) chips that Intel came screaming back, and it has held the lead since.
  • Donkey2008 - Monday, June 03, 2013 - link

    "Anyone who says the P4 was terrible is either an AMD fanboy trolling or never had one at the time. "

    +5

    My Northwood 3GHz was as fast, stable and solid as any CPU I have ever owned. It performed slightly slower than an equivalent A64, but nothing noticeable to the human eye. Maybe these people who bag on it have bionic eyes.
