Decoupled L3 Cache

With Nehalem, Intel introduced an on-die L3 cache behind smaller, low-latency private L2 caches. At the time, Intel maintained two separate clock domains for the CPU (core and uncore) and a third for what was then an off-die integrated graphics core. The core clock drove the CPU cores, while the uncore clock controlled the speed of the L3 cache. Intel believed its L3 cache wasn't incredibly latency sensitive and could run at a lower frequency and burn less power. Core CPU performance typically mattered more to most workloads than L3 cache performance, so Intel was comfortable with the tradeoff.

In Sandy Bridge, Intel revised its beliefs and moved to a single clock domain for the core and uncore, while keeping a separate clock for the now on-die processor graphics. Intel now felt that race-to-sleep was a better philosophy for dealing with the L3 cache, and it would rather keep things simple by running everything at the same frequency. There were obvious performance benefits, but also one major downside: with the CPU cores and L3 cache running in lockstep, there was concern over what would happen if the GPU needed to access the L3 cache while the CPU (and thus the L3 cache) was in a low-frequency state. The options were either to force the CPU and L3 cache into a higher frequency state together, or to keep the L3 cache at a low frequency even when it was in demand, to avoid waking the CPU cores. Ivy Bridge added a small graphics L3 cache to mitigate the situation, but ultimately the inability to give the on-die GPU independent access to the big, primary L3 cache without these power concerns remained a real problem for the design team.

When it came time to define Haswell, the engineers went back to Nehalem's three clock domains. Ronak (Nehalem and Haswell architect, insanely smart guy) tells me that the switching between designs is simply a product of the team learning more about the architecture and understanding the best balance. What it tells me is that these guys are still human and don't always have the right answer for the long term without some trial and error.

The three clock domains in Haswell are roughly the same as they were in Nehalem; they just all happen to be on the same die. The CPU cores all run at one frequency, the on-die GPU runs at a second, and the L3 cache plus ring bus now sit in their own independent frequency domain.

Now that CPU requests to the L3 cache have to cross a frequency boundary, there will be a latency impact on L3 accesses. Sandy Bridge had an amazingly fast L3 cache; Haswell's L3 accesses will be slower.
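To put a rough number on that boundary-crossing cost yourself, the standard tool is a pointer-chasing microbenchmark: a chain of dependent loads through a randomly permuted buffer sized to land in the L3. Below is a minimal sketch in C; the 8 MB working set and the iteration count are my assumptions for a typical quad-core part, not figures from Intel.

    /* Sketch: estimate L3 load-to-use latency with a pointer chase.
     * Assumptions: an 8 MB working set (bigger than L2, smaller than
     * L3 on the parts discussed here) and a random walk to defeat the
     * prefetchers. Build with: gcc -O2 chase.c -o chase */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ENTRIES (8u * 1024 * 1024 / sizeof(void *))
    #define ITERS   100000000u

    int main(void) {
        void **buf  = malloc(ENTRIES * sizeof(void *));
        size_t *ord = malloc(ENTRIES * sizeof(size_t));
        for (size_t i = 0; i < ENTRIES; i++) ord[i] = i;
        for (size_t i = ENTRIES - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = ord[i]; ord[i] = ord[j]; ord[j] = t;
        }
        /* Link the shuffled slots into one cycle: each load's address
         * depends on the previous load, so latency can't be hidden. */
        for (size_t i = 0; i < ENTRIES; i++)
            buf[ord[i]] = &buf[ord[(i + 1) % ENTRIES]];

        void **p = &buf[ord[0]];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < ITERS; i++)
            p = (void **)*p;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.2f ns per load (sink: %p)\n", ns / ITERS, (void *)p);
        return 0;
    }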

The benefit is obviously power. If the GPU needs to fire up the ring bus to send or receive data, it no longer has to drive up the CPU core frequency as well. Furthermore, Haswell's power control unit can dynamically allocate budget between all areas of the chip when power limited.
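There's no way to steer that budget from software, but on a recent Linux kernel you can at least watch it move via the RAPL energy counters the powercap driver exposes. A small sketch, assuming the common sysfs layout (intel-rapl:0 as the package domain, intel-rapl:0:0 as the cores; check the "name" files on your machine, and note these counters often require root to read):

    /* Sketch: sample package vs. core energy via Linux's RAPL
     * powercap interface and derive average power over one second. */
    #include <stdio.h>
    #include <unistd.h>

    static long long read_uj(const char *path) {
        FILE *f = fopen(path, "r");
        long long uj = -1;
        if (f) { fscanf(f, "%lld", &uj); fclose(f); }
        return uj;   /* cumulative microjoules, -1 on error */
    }

    int main(void) {
        const char *pkg  = "/sys/class/powercap/intel-rapl:0/energy_uj";
        const char *core = "/sys/class/powercap/intel-rapl:0:0/energy_uj";
        long long p0 = read_uj(pkg), c0 = read_uj(core);
        sleep(1);                          /* one-second sample window */
        long long p1 = read_uj(pkg), c1 = read_uj(core);
        printf("package %.2f W, cores %.2f W, rest-of-chip %.2f W\n",
               (p1 - p0) / 1e6, (c1 - c0) / 1e6,
               ((p1 - p0) - (c1 - c0)) / 1e6);
        return 0;
    }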

Although L3 latency is up in Haswell, there's more access bandwidth offered to each slice of the L3 cache. There are now dedicated pipes for data and non-data accesses to the last-level cache.
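Bandwidth can be checked with the same do-it-yourself approach as latency: stream through an L3-resident buffer and count bytes per second. Again, the 4 MB buffer and pass count below are my assumptions, chosen to keep the working set out of the L2 but inside the L3:

    /* Sketch: sustained read bandwidth from an L3-resident buffer.
     * One 8-byte load per 64-byte line still forces every line to be
     * transferred from the L3. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BYTES  (4u * 1024 * 1024)
    #define PASSES 2000

    int main(void) {
        uint64_t *buf = malloc(BYTES);
        uint64_t acc = 0;
        memset(buf, 1, BYTES);              /* touch once so lines sit in L3 */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int p = 0; p < PASSES; p++)
            for (size_t i = 0; i < BYTES / sizeof(uint64_t); i += 8)
                acc += buf[i];              /* one load per cache line */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        volatile uint64_t sink = acc;       /* keep the loads alive */

        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f GB/s (%llu)\n", (double)BYTES * PASSES / s / 1e9,
               (unsigned long long)sink);
        return 0;
    }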

Haswell's memory controller is also improved, with better write throughput to DRAM. Intel has been quietly telling the memory makers to push for even higher DDR3 frequencies in anticipation of Haswell.
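Write throughput is easy to eyeball too: memset a buffer far larger than the L3 so the stores stream out to DRAM. The 256 MB size below is an arbitrary choice, and results will depend on whether your libc's large-block memset uses non-temporal stores:

    /* Sketch: DRAM write throughput via large memsets. Assumption:
     * 256 MB is far larger than any L3 here, so stores miss the
     * cache hierarchy and land in DRAM. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BYTES  (256ull * 1024 * 1024)
    #define PASSES 20

    int main(void) {
        char *buf = malloc(BYTES);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int p = 0; p < PASSES; p++)
            memset(buf, p, BYTES);          /* streaming stores to DRAM */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f GB/s write\n", (double)BYTES * PASSES / s / 1e9);
        free(buf);
        return 0;
    }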

Comments

  • dishayu - Friday, October 5, 2012

    I derived immense pleasure reading the article. Thank you, Anand. Big ups for the comprehensive read.
    My thoughts:
    I think Intel really dropped the ball by not having unlinked clocks for each core, like Qualcomm has for its S4 Pro processors. There are so many times when, for instance, I have a page open with some animated GIFs. They are strictly single-threaded processes and they won't let the processor go to an idle state. This is a very, VERY common occurrence that can, IMO, only be solved by adopting unlinked clocks for each core: three cores can stay in a sleep state (almost perpetually) and the processor runs on a single core at a lowered frequency. THAT would be power efficient.
  • dagamer34 - Friday, October 5, 2012

    Uhh... isn't turning off unused cores and overclocking the remaining core within TDP to perform single-threaded tasks exactly what the Turbo Boost introduced in Sandy Bridge does?
  • know of fence - Friday, October 5, 2012

    Reducing power is great and also inevitable, but Intel's move to compete against everything and everybody is alarming. With everyone trying to follow/please Apple, it means nothing good for the consumer: throw-away luxury electronics for exceptionally well-groomed masses.
    Also, isn't it too early to be hyping this stuff?
  • A5 - Friday, October 5, 2012

    Intel has to compete against ARM to keep them from taking over the "good-enough" computing space.

    As for the rest of it, you're not making any sense.
  • jjj - Friday, October 5, 2012

    The ARM problem is not about the product but about price; long term, CPU/SoC ASPs will drop hard now that there is competition. Servers will keep them on life support for a while, but without fundamental changes to their business model they can't make it.
    Intel should remember how they won the market.
  • dishayu - Friday, October 5, 2012

    It's about both. Intel does not have sufficiently low-power parts at all, regardless of the price point.
  • mrdude - Friday, October 5, 2012

    Regardless of whether they step foot into that end of the spectrum or not (and by Anand's analysis that's more likely with Broadwell and on?), they still need to compete on price.

    It's one thing to make a chip; it's quite another to make it competitive with respect to pricing. What works against a distant AMD won't work against ARM.
  • DesDizzy - Sunday, October 7, 2012

    I agree. This seems to be something that most people overlook when addressing the Wintel monopoly. The costs of Wintel products are high within the PC/laptop space, while the prices of ARM devices/apps are cheap within the smartphone/tablet space. How does Wintel square this circle without damaging their business model?
  • Krysto - Friday, October 5, 2012

    You may not agree with Charlie, Anand, but reality seems to agree with him:

    http://www.techradar.com/news/computing/apple/appl...

    I really don't know how you can think Apple would ever start using Intel chips in their iPads when Apple has already proven they want to make their own chips with the A6.

    Also, according to Charlie, Haswell will be something like 40% more expensive than IVB. Atom tablets already seem to start at around $800. So I wish Intel good luck with that. Ultrabooks and Win8 hybrids won't drop in price any time soon.

    http://semiaccurate.com/2012/10/03/oems-call-intel...
  • Penti - Friday, October 5, 2012

    I don't know how you could fail so hard at reading comprehension; Anand only said the same flying-spaghetti-monster-damn form factor, nothing else. There also must be an ecosystem, but if you can run the same app on a tablet as well as a desktop on x86, with more performance than ARM, why wouldn't you see vendors use it? It is a full system, even capable of building itself. It's not about killing ARM. Intel still uses ARM; they need fairly high-performance RISC chips for stuff like baseband. They had a large market in smartphones before 2006 and made the choice to sell it because they had Atom in their lineup. They didn't forget about it.

    It's Microsoft tablets that cost 500-900 dollars even on Atom, but they only need to compete with Windows RT, which is totally retarded as far as corporate customers go and not the same system as 8 Pro; it doesn't run the same software. An Android tablet could use a Z2460 (and the coming Z2580, and after that Valleyview SoCs) and build a 240-dollar tablet. There is no price difference to be had as far as hardware is concerned. Windows 8 tablets are a whole other form factor and device to begin with; most will have a keyboard and multitouch trackpad.

    He only talks about the same form factor, size and battery life here. In the Microsoft ecosystem there is really no reason to go with Windows RT-powered ARM devices, which don't have better performance and run no third-party desktop (Win32/full Windows SDK) software. They also lack the same features in other areas, which makes them devices instead of general computing platforms. Remember, they offer both here. Hell, the built-in email is even worse than the one built into Android since version 3.0 or so; it's a lot worse than third-party mail clients on Android, and worse than the mail clients in BlackBerry 10, Symbian, iOS and so on. If you're replacing a desktop you're not going with ARM here, not on a Windows device at least. Anand only talks about a new breed of DTR tablets and ultraportables that will fit the same form factor and battery life scenarios as ARM tablets. Apple certainly doesn't need to participate here.

    Intel certainly has sales to be made if they move Haswell down to low-power Atom territory when it comes out later next year. Such a chip could be the only computing device you have (smartphone + hybrid tablet PC), replacing desktops, ARM/Atom tablets, media PCs for your TV (just stream with Miracast), et cetera. ARM devices would just be cheaper, less capable devices there. But these are still different targets: Haswell still targets servers (the enterprise market), desktops, and notebooks with larger form factors and power budgets, as well as more portable stuff, while Atom is still for the handheld stuff you use with one hand. ARM has moved quite fast, but they have no reason to target high-performance applications or build 100 W SoCs that are fast without parallel computing. Applications like high-performance routers, for example, still use licensed and custom MIPS and PowerPC chips. There are plenty of markets where a full-featured ARM Cortex or x86 won't work either. ARM is just moving into the multimedia field, replacing custom architectures in TVs and displacing MIPS, PPC, etc. If Apple builds a very large custom CPU architecture compatible with the ARM ISA for workstations, notebooks, etc., they will just be in the same position they were with PowerPC and have to compete with the high-performance chips that most can't compete with, even with much larger resources than Apple has. Apple and Samsung have no reason to do so outside handheld devices, low-power servers, consumer-oriented routers and streaming media boxes, which leaves plenty of room for Intel and all the rest. Plus, WiFi and wireless baseband is a huge market in and of itself, and there it doesn't matter what the application-processor architecture is. Stuff like ARM has competed because you could replace previous products with it easily, thus taking some of the SoC market away from others, but that coincides with the choice to do so.
