The New Sleep States: S0ix

A handful of PC industry players got together and defined the various operating modes that ACPI PCs can be in. If everyone plays by the same rules there are no surprises, which is good for the entire ecosystem.

System-level power states are denoted S0 through S5; the higher the S-number, the deeper the sleep state. The table below defines each state:

ACPI Sleeping State Definitions

State | Description
S0 | Awake; the system is fully operational.
S1 | Low wake latency sleeping state. No system context is lost; hardware maintains all context.
S2 | Similar to S1, but CPU and system cache context is lost.
S3 | All system context is lost except system memory (CPU, cache, and chipset context are all lost).
S4 | Lowest-power, longest wake latency sleeping state supported by ACPI. The hardware platform powers off all devices; platform context is maintained.
S5 | Similar to S4, except the OS doesn't save any context; a complete boot is required upon wake.

S0 is an operational system, while S1/S2 are various levels of idle that are transparent to the end user. S3 is otherwise known as Suspend to RAM (STR), while S4 is commonly known as hibernate or Suspend to Disk (this one is less frequently abbreviated for some reason...).
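
If you want to see these states on an actual machine, a minimal sketch along the following lines works on Linux (the sysfs paths and the s2idle/deep naming are assumptions about a typical modern kernel, not something from Intel's documentation):

```python
# Minimal sketch: list the sleep states a Linux kernel reports as supported.
from pathlib import Path

def read_states(path):
    p = Path(path)
    return p.read_text().split() if p.exists() else ["<not exposed>"]

# "mem" roughly corresponds to S3 (suspend to RAM), "disk" to S4 (hibernate).
print("Sleep targets:     ", read_states("/sys/power/state"))

# Where present, mem_sleep distinguishes "s2idle" (an S0ix-style
# suspend-to-idle path) from "deep" (classic S3).
print("mem sleep variants:", read_states("/sys/power/mem_sleep"))
```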

These six system states have served the PC well over the years. The addition of S3 gave us fast resume from sleep, something that's often exploited when you're on the go and need to quickly transition between using your notebook and carrying it around. The ultra mobile revolution, however, brought a new requirement: the ability to transact data while in an otherwise deep sleep state.

Your smartphone and tablet both fetch email, grab Twitter updates, and receive messages and calls while in their sleep state. The prevalence of always-on wireless connectivity in these devices makes all of this easy, but the PC/smartphone/tablet convergence guarantees that if the PC doesn't adopt similar functionality it won't survive in the new world.

The solution is connected standby, or active idle, a feature supported by Haswell and Clovertrail as well as by all currently shipping ARM based smartphones and tablets. Today, transitioning into S3 sleep is initiated by closing the lid on your notebook or telling the OS to go to sleep. With Haswell (and Clovertrail), Intel introduced new S0ix active idle states (there are several, e.g. S0i1 and S0i3). These states promise to deliver the same power consumption as S3 sleep, but with a wake-up time quick enough to get back into full S0 should you need to do something with your device.

If these states sound familiar it's because Intel first told us about them with Moorestown:

In Moorestown it takes 1ms to get out of S0i1 and 3ms to get out of S0i3, and I would expect Haswell's wakeup latencies to be similar. From the standpoint of a traditional CPU design, even 1ms is an eternity, but from the end user's perspective a 1 - 3ms wakeup delay is hardly noticeable, especially when access latency is dominated by so many other factors in the chain (e.g. the network).
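
A quick back-of-the-envelope check makes the same point; the round trip figure below is an assumption picked for illustration, not a measurement:

```python
# Illustrative only: how much of the user-visible delay the S0ix exit adds.
wake_latency_ms = 3.0   # assumed worst-case S0i3 exit latency (Moorestown figure)
network_rtt_ms = 80.0   # assumed round trip to fetch, say, a new email header
total_ms = wake_latency_ms + network_rtt_ms
print(f"Wake-up is {wake_latency_ms / total_ms:.1%} of the total delay")  # ~3.6%
```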

What specifically happens in these active idle power states? In the past Intel focused on driving power down for all of the silicon it owned: the CPU, graphics core, chipset and even WiFi. In order to make active idle a reality, Intel's reach had to extend beyond the components it makes.

With Haswell U/ULT parts, Intel will actually go in and specify recommended components for the rest of the platform. I'm talking about everything from voltage regulators to random microcontrollers on the motherboard. Beyond actual component "suggestions", Intel will also list recommended firmware for these components. Intel gave one example where an embedded controller on a motherboard was using 30 - 50mW of power. Through some simple firmware changes Intel was able to drop this particular controller's power consumption down to 5mW. It's not rocket science, but this is Intel's way of doing some of the work that its OEM partners should have been doing for the past decade. Apple has done some of this on its own (which is why OS X based notebooks still enjoy tangibly longer idle battery life than their Windows counterparts), but Intel will now be offering this guidance to many of its key OEM partners in a significant way.
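
To put tens of milliwatts in perspective, here's a rough illustration; the battery capacity and baseline idle draw are assumptions chosen for the arithmetic, not Intel's numbers:

```python
# Rough illustration: what a ~40mW saving on one controller buys at idle.
battery_wh = 50.0             # assumed Ultrabook battery capacity
idle_draw_w = 5.0             # assumed total platform idle power before the fix
ec_saving_w = 0.045 - 0.005   # controller dropping from ~45mW to 5mW

hours_before = battery_wh / idle_draw_w
hours_after = battery_wh / (idle_draw_w - ec_saving_w)
print(f"Idle battery life: {hours_before:.2f} h -> {hours_after:.2f} h")
# One controller barely moves the needle on its own; dozens of such tweaks
# across the motherboard are what add up to a meaningful gain.
```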

Intel's focus extends beyond power consumption - it also needs to understand the latency tolerance of everything else in the system. The shift to active idle states is a new way of thinking. In the early days of client computing there was a real focus on allowing all off-CPU controllers to work autonomously. Years of evolution along those lines resulted in platforms where anything and everything could transact data whenever it wanted to.

By knowing how latency tolerant all of the controllers and components in the system are, hardware and OS platform power management can begin to align traffic better. Rather than everyone transacting data whenever it's ready, all of the components in the system can coalesce their transfers so that the system wakes up for a short period of time to do work, then quickly returns to sleep. The result is a system that's asleep most of the time, punctuated by short bursts of activity, rather than one that's constantly kept awake by small transactions. The diagram below helps illustrate the potential power savings.
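
A toy model captures the idea; every number below is made up for illustration, none of it comes from Intel. Each device event forces the platform awake for a short service window, and deferring events to common boundaries reduces how often the platform has to wake at all:

```python
# Toy model of wakeup coalescing; all timings are assumed values.
import math

def count_wakeups(event_times_ms, service_ms=5.0, align_ms=None):
    """Count how many distinct awake periods are needed to service all events."""
    if align_ms is not None:
        # Defer each event to the next alignment boundary so work arrives in bursts.
        event_times_ms = sorted({math.ceil(t / align_ms) * align_ms
                                 for t in event_times_ms})
    wakeups, awake_until = 0, -1.0
    for t in sorted(event_times_ms):
        if t > awake_until:   # platform was asleep; it has to wake up again
            wakeups += 1
        awake_until = max(awake_until, t + service_ms)
    return wakeups

events = [3, 40, 41, 95, 160, 161, 162, 230]   # ms: scattered device traffic
print("scattered wakeups:", count_wakeups(events))                # -> 5
print("coalesced wakeups:", count_wakeups(events, align_ms=100))  # -> 3
```

Fewer, longer sleep periods are what make the deeper idle states worthwhile, since their entry and exit costs only pay off when the platform can stay down for a while.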

Windows 8 is pretty much a requirement to get the full benefits, although with the right drivers in place you'll see some improvement on Windows 7 as well. As most of these platform level power enhancements are targeted at 3rd generation Ultrabooks/tablets, it's highly unlikely you'll see Windows 7 ship on any of them.

All of these platform level power optimizations really focus on components on the motherboard and shaving mWs here and there. There's still one major consumer of power budget that needs addressing as well: the display.

For years Intel has been talking about Panel Self Refresh (PSR) as the holy grail of improving notebook battery life. The concept is simple: even when what's on your display isn't changing (staring at text, looking at your desktop, etc.), the CPU and GPU still have to wake up to refresh the panel 60 times a second. The refresh process isn't incredibly power hungry, but it's more wasteful than it needs to be given that no useful work is actually being done.

One solution is PSR. By including a little bit of DRAM on the panel itself, the display could store a copy of the frame buffer. In the event that nothing was changing on the screen, you could put the entire platform to sleep and refresh the panel by looping the same frame data stored in the panel's DRAM. The power savings would be tremendous, as it'd allow your entire notebook/tablet/whatever to enter a virtual off state. You could get even more creative and start doing selective PSR, where only parts of the display are updated and the rest remains in self-refresh mode (e.g. following a cursor, animating a live tile, etc.).
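
For a rough sense of how much identical data gets pushed around 60 times a second without PSR, a small calculation helps; the panel resolution and pixel format below are assumptions, not a specific Haswell design:

```python
# Rough estimate of scan-out traffic for a completely static screen.
width, height, bytes_per_pixel, refresh_hz = 1600, 900, 4, 60  # assumed panel

frame_bytes = width * height * bytes_per_pixel
per_second = frame_bytes * refresh_hz
print(f"Frame buffer:      {frame_bytes / 2**20:.1f} MiB")
print(f"Repeated scan-out: {per_second / 2**20:.0f} MiB/s of unchanged pixels")
# With PSR that loop runs out of the panel's local DRAM instead, letting the
# GPU's display pipeline, memory and link power down between real updates.
```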

Display makers have been resistant to PSR because it increases their bill of materials cost by adding DRAM to the panel. The race to the bottom that we've seen in the LCD space made it unlikely that any of the panel vendors would jump at the opportunity to make their products more expensive. Intel believes that this time things will be different: half of the Haswell ULT panel vendors will be enabled with Panel Self Refresh over eDP. That doesn't mean we'll see PSR used in those machines, but it's hopefully a good indication.

Similar to what we've seen from Intel in the smartphone and tablet space, you can expect reference platforms built around Haswell that show OEMs exactly what they need to put down on a motherboard to deliver the sort of idle power consumption necessary to compete in the new world. It's not clear to me how Intel will enforce these guidelines, although it has a number of tools at its disposal - logo certification being the most obvious.

Comments

  • Kevin G - Saturday, October 6, 2012 - link

    Losing just the MacBook Air isn't going to hurt Intel much as a whole, but it is doubtful that Apple would just move that product line to ARM. The rest of the line up would likely follow. The results by the numbers would hurt Intel, but nothing that would doom the company. Intel does have the rest of the PC industry to fall back upon... except the PC market is shrinking.

    Apple is one of Intel's best gateways into the ultra mobile market. Apple has made indications that they want to merge iOS and OS X over the long term, which would likely result in dropping either ARM or x86 hardware to simplify the line up.

    WinRT is also a threat to Intel and
  • Kevin G - Saturday, October 6, 2012 - link

    (Hrm... got cut off there)

    WinRT is also a threat to Intel but WinRT has next to zero market share. The threat here is any success it obtains. Apple on the other hand controls ~75% of the tablet market last I checked.

    Android is a bit neutral to Intel as manufacturers can transition between ARM and x86 versions with relative ease. Intel will just have to offer competitive hardware at competitive prices here. The sub 10W Haswell parts are going to be competitive but price is a great unknown. The ARM SoCs are far cheaper than what Intel has traditionally been comfortable with. So even if Intel were to acquire all of the Android tablet market, it would be a minority at this time and over the short term (even in the best case scenario, it'd take time for Android based tablets to surpass the iPad in terms of market share).

    So ultimately it would be best for Intel to snag Apple's support due to their dominant market share in the tablet space and influential position in the smart phone space.
  • andrewaggb - Friday, October 5, 2012 - link

    Agree with others. Best Anandtech article I've read in a long time.

    Most articles lack the detail and insights that this one has.
  • mrdude - Friday, October 5, 2012 - link

    Great article. Great depth, great info and very thorough. Hats off :)

    But I couldn't shake the feeling that I was missing perhaps the most important bit of information: price.

    Obviously, Intel isn't going to give that away 9 months before the presumed launch date -- though in typical fashion we'll see it leaked early. It still is the biggest question regarding Haswell's, and in turn Intel's, success against ARM.

    I think most consumers are already at that good enough stage, where your Tegra 3 or Snapdragon S4 can fulfill all of their computing needs on a tablet or a phone. The biggest drawback for productivity purposes isn't necessarily the "lack of CPU performance" but rather the lack of a proper keyboard/mouse, gaming, along with a rare application or two that's still locked to x86 (Office rings a bell, though not for long). Or I should say, these were drawbacks. Not any longer.

    So is Intel going to cut their margins and go for volume? Or are they just going to keep their massive margins and price themselves out of contention? Apple carries with itself a brand name that people want. It's become more than a gadget; it's a fashion accessory. People don't mind paying the Apple tax. I don't think I ever will, but at least I can notice the trend. The Intel brand doesn't carry with it the same cult following and neither does x86. Unless Intel is willing to compete with ARM on price, lowering the cost of their products below Apple's, I don't think the substantial increases in efficiency and performance will matter all that much.
  • name99 - Friday, October 5, 2012 - link

    "Sandy Bridge made ports 2 & 3 equal class citizens, with both capable of being used for load or store address calculation. In the past you could only do loads on port 2 and store addresses on port 3. Sandy Bridge's flexibility did a lot for load heavy code, which is quite common. Haswell's dedicated store address port should help in mixed workloads with lots of loads and stores."

    The rule of thumb numbers are, on "ordinary" integer type code:
    1/6 instructions are branches
    1/6 are writes
    2/6 are reads
    2/6 are ALU

    This makes it more obvious why Intel moved as it did.
    You want to sustain as close to 4 ops/cycle as you can.
    This means that your order of adding abilities should be exactly what Intel has done:
    - first two ALUs
    - next two memory ops per cycle (ideally any mix of loads/stores, but what Intel gave us is one load + one store per cycle)

    - next two loads per cycle

    - next make sure the branches aren't throttled (because back-to-back branches are common, and you want branches resolved ASAP)
    - next make the load-store system wide enough to sustain a MAC per cycle (two loads+store)

    It's hard to see what is left to complain about at this level.
    And of course we have better lock performance. So what's left?

    What I think still has substantial room for improvement (correct me if I'm wrong) is
    (a) TLB coverage
    (b) TLB efficiency.

    TLB coverage could be improved with a 2nd level TLB but (as far as I know) Intel doesn't go in for that, unlike POWER.
    By TLB efficiency, I mean not needing to lose performance due to different address spaces. Unfortunately Intel seems screwed here. The POWER segment scheme (especially the 64-bit scheme) is REALLY powerful here in allowing multiple address spaces to coexist, so that multiple shared libraries, the main app code, IO, and memory mapped files, can all have persistent simultaneous TLB entries. (Note that this has nothing to do with the Intel segment scheme --- different technology, to solve a different problem.)

    As far as I know, right now all Intel has is a single ASID representing a process. Better than no ASID, and having to flush the TLB on every context switch; but not especially good at sharing entries --- so (again as far as I know) shared libraries or shared mem-mapped files being used by multiple processes, even when they are mapped to the same address, have to have separate TLB entries, each one with a different ASID corresponding to the process calling them.
  • name99 - Friday, October 5, 2012 - link

    Stupid me. I should have read the entire article. So we do have a (nicely sized) 2nd level TLB.

    I guess my only remaining complaint now is that ASIDs are too coarse a tool.
    In principle you could get around some of the problems I mention using dedicated large pages for some particular purposes (e.g. to cover the OS code and data, the equivalent of the frame buffer for modern windowing systems, and some pool of common shared libraries).
    Does anyone know the extent to which both Windows and OSX actually make use of dedicated large pages in this way?
  • Peanutsrevenge - Friday, October 5, 2012 - link

    Great article Anand, but when will Anand cloning be incorporated in CPU designs so we can all have one of you at home to pull out and extract information from @ will ? ?

    Although, with that said, I was already made aware of much of this recently from listening in to some random guys babbling about tech stuff on a podcast ;)
  • Rectified - Friday, October 5, 2012 - link

    Anand, you write the best tech articles on the web. As a graduate student in computer engineering, I appreciate the practical yet technical analyses you write on the industry. Keep it up!
  • Crazy1 - Friday, October 5, 2012 - link

    I like the concept of Panel Self Refresh, yet I feel that Intel could implement this themselves. I'm not an expert, but couldn't a buffer be placed on the CPU package between the GPU and panel? This may not be as efficient as if the panel makers did it themselves and it would probably only work when using the IGP (when it would most likely have the greatest impact), but at least it is a step in the right direction.

    Additionally, Great Article! Anandtech provides some of the most thorough technology articles. Keep it up.
  • random2 - Saturday, October 6, 2012 - link

    " If all mainstream client computing moves to smartphones,..........."

    Seriously? The idea of all mainstream computing done on nothing but smartphones seems to stretch the imagination just a bit much. There isn't even the most basic of businesses that doesn't have a computer (built with mainstream components, as most small and medium sized businesses' machines are) and business software. Don't forget the PC gamers and people who like larger viewing and typing surfaces. Or the fact that in eight years, home and business PCs will be blindingly fast, with larger displays with much greater pixel density, possibly clear screen touch surfaces, likely alternative interfaces beyond just a keyboard and mouse, and incredible computing and rendering power.

    The likelihood of the general populace turning all their computing needs over to a palm sized PC strikes me as a kind of weird fantasy in which people learn to love minute typing interfaces and squinting at high density displays fit into 3.5 by 4.5 inches for long periods of the day without interruption. No, to push the idea of micro computing one must discount all of the other advances in the computer/electronics industries in order to make their pet theory viable.
