Announcement Three: Skylake-X's New L3 Cache Architecture

(AKA I Like Big Cache and I Cannot Lie)

SKU madness aside, there's more to this launch than just the number of cores at what price. Deviating somewhat from their usual pattern, Intel has made some interesting changes to several elements of Skylake-X that are worth discussing. Next is how Intel is implementing the per-core cache.

In previous generations of HEDT processors (as well as the Xeon processors), Intel implemented a three-stage cache before hitting main memory. The L1 and L2 caches were private to each core, while the L3 cache was an inclusive last-level cache covering all cores. Inclusive means, at a high level, that any data in the L2 is duplicated in the L3, such that if a cache line is evicted out of the L2 it will still be present in the L3 if it is needed again, rather than requiring a trip all the way out to DRAM. The relative sizes matter as well: with an inclusive L2-to-L3 arrangement, the L3 cache is usually several multiples of the L2, in order to store all of the L2 data plus extra. Intel typically had 256 KB of L2 cache per core, and anywhere between 1.5 MB and 3.75 MB of L3 per core, which gave both caches plenty of room and performance. It is worth noting at this point that the L2 cache sits closer to the logic of the core, where space is at a premium.
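
To make the inclusive arrangement concrete, here is a minimal toy model (my own illustrative sizes and a plain LRU policy, not Intel's actual replacement scheme, and it skips back-invalidation on L3 evictions): a line pushed out of the small L2 is still served from the L3 instead of going all the way to DRAM.

```python
# Toy inclusive L2/L3: every L2 fill is mirrored into the L3, so an L2
# eviction still leaves a copy in the L3. Sizes are illustrative only.
from collections import OrderedDict

class InclusiveHierarchy:
    def __init__(self, l2_lines=4, l3_lines=16):
        self.l2 = OrderedDict()   # small private cache, LRU order
        self.l3 = OrderedDict()   # larger shared cache, LRU order
        self.l2_lines, self.l3_lines = l2_lines, l3_lines

    def access(self, line):
        if line in self.l2:
            self.l2.move_to_end(line)
            return "L2 hit"
        if line in self.l3:                 # evicted from L2 earlier, still here
            self.l3.move_to_end(line)
            self._fill_l2(line)
            return "L3 hit (saved a DRAM trip)"
        self._fill_l3(line)                 # inclusive: install in L3...
        self._fill_l2(line)                 # ...and in L2
        return "DRAM access"

    def _fill_l2(self, line):
        self.l2[line] = True
        if len(self.l2) > self.l2_lines:
            self.l2.popitem(last=False)     # L2 victim remains in the L3

    def _fill_l3(self, line):
        self.l3[line] = True
        if len(self.l3) > self.l3_lines:
            self.l3.popitem(last=False)     # (a real inclusive cache would also
                                            # back-invalidate the L2 copy here)

h = InclusiveHierarchy()
for addr in [0, 1, 2, 3, 4, 0]:             # address 0 falls out of the tiny L2,
    print(addr, h.access(addr))             # but the final access still hits in L3
```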

With Skylake-X, this cache arrangement changes. When Skylake-S was originally launched, we noted that the L2 cache had a lower associativity as it allowed for more modularity, and this is that principle in action. Skylake-X processors will have their private L2 cache increased from 256 KB to 1 MB, a four-fold increase. This comes at the expense of the L3 cache, which is reduced from ~2.5MB/core to 1.375MB/core.
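
As a quick back-of-the-envelope comparison using the figures above (simple arithmetic only, ignoring associativity and latency):

```python
# Per-core cache budget: previous HEDT arrangement vs Skylake-X
old_l2, old_l3 = 0.25, 2.5      # MB per core (256 KB L2, ~2.5 MB L3 slice)
new_l2, new_l3 = 1.0, 1.375     # MB per core (1 MB L2, 1.375 MB L3 slice)

print(f"old: {old_l2 + old_l3:.3f} MB/core (private {old_l2}, shared {old_l3})")
print(f"new: {new_l2 + new_l3:.3f} MB/core (private {new_l2}, shared {new_l3})")
# old: 2.750 MB/core, new: 2.375 MB/core -- total cache per core shrinks a
# little, but far more of it is now private, lower-latency L2.
```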

With such a large L2 cache, the L3 is no longer inclusive of the L2 and is now ‘non-inclusive’. Intel is using this terminology rather than ‘exclusive’ or ‘fully-exclusive’, as the L3 will still retain some features that a pure victim cache does not have, such as prefetching. What this means, however, is more work for snooping and for keeping track of where cache lines are. Cores will have to snoop other cores’ L2 caches to find the most recent copy of a line, with DRAM as the fallback (which may hold stale data). In previous generations the L3 cache was always that backup, but now this changes.
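
As a conceptual sketch of the difference in lookup behaviour (my simplification; real hardware uses snoop filters and the mesh interconnect rather than literally polling every peer L2 in turn):

```python
# Conceptual lookup order only, not Intel's actual coherence protocol.

def inclusive_lookup(line, my_l2, shared_l3):
    # Old arrangement: the inclusive L3 holds a copy of everything in any L2,
    # so it is the single backstop before DRAM.
    if line in my_l2:
        return "local L2"
    if line in shared_l3:
        return "shared L3"
    return "DRAM"

def non_inclusive_lookup(line, my_l2, peer_l2s, shared_l3):
    # Skylake-X style: a line may live only in another core's private L2,
    # so peer L2s have to be snooped; DRAM may be stale until writeback.
    if line in my_l2:
        return "local L2"
    for i, peer in enumerate(peer_l2s):
        if line in peer:
            return f"snooped from core {i}'s L2"
    if line in shared_l3:
        return "shared L3"
    return "DRAM"

# Example: line 42 was recently modified by core 1 and lives only in its L2
print(non_inclusive_lookup(42, my_l2={7}, peer_l2s=[{3}, {42}], shared_l3={9}))
```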

The good element of this design is that a larger L2 will increase the hit rate and decrease the miss rate. Depending on the level of associativity (which has not been disclosed yet, at least not in the basic slide decks), a general rule I have heard is that doubling the cache size decreases the miss rate by a factor of sqrt(2), and is good for a 3-5% IPC uplift in a regular workflow. Thus here’s a conundrum for you: the L2 is now four times larger, so its hit rate should be roughly a factor of 2 better, leading to something like an 8-13% IPC increase, and that means it is not the same performance as Skylake-S. It may be the same microarchitecture outside the caches, but we end up in a situation where performance will differ.
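
A minimal sketch of that rule-of-thumb arithmetic (treating the sqrt(2)-per-doubling figure as a rough heuristic, not a measured result):

```python
import math

# Rule of thumb from above: each doubling of cache size cuts the miss rate
# by roughly a factor of sqrt(2).
old_kb, new_kb = 256, 1024
doublings = math.log2(new_kb / old_kb)         # 256 KB -> 1 MB is 2 doublings
miss_rate_factor = math.sqrt(2) ** doublings   # roughly 2x fewer L2 misses

print(f"doublings: {doublings:.0f}")
print(f"L2 miss rate cut by a factor of ~{miss_rate_factor:.1f}")
# The net IPC effect (pegged above at something like 8-13%) also depends on
# associativity, the workload's working set, and the smaller non-inclusive L3.
```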

Fundamental Realisation: Skylake-S IPC and Skylake-X IPC will be different.

This is something that fundamentally requires in-depth testing. Combine this with the change in the L3 cache, and it is hard to predict the outcome without being a silicon design expert. I am not one of those, but it's something I want to look into as we approach the actual Skylake-X launch.

More things to note on the cache structure. There are many ways to do it, and one that I initially imagined was a partitioned cache strategy: the cache layout could be the same as in previous generations, but with partitions of the L3 designated as L2. This makes life difficult, because then you have a portion of the L2 at the same latency as the L3, which brings a lot of headaches if the L2 latency has a wide variation. This method would be easy for silicon layout, but hard to implement well. Looking at the HCC silicon representation in our slide deck, it is clear that there is no monolithic L3 covering all the cores – each core has its own slice. That being the case, we now have an L2 at approximately the same size as the L3, at least per core. Given these two points, I fully suspect that Intel is running a physical L2 at 1 MB, which will give the design the high hit rate and consistent low latency it needs. This will be one feather in the cap for Intel.

Comments

  • ddriver - Friday, June 2, 2017 - link

    "I would be willing to bet that between 2-4 of those can replace your entire farm and still give you better FLOP/$."

    Not really. Aside from the 3770Ks running at 4.4 GHz, most of the performance actually comes from GPU compute. You can't pack those tiny stock rackmount systems with GPUs. Not that 256 cores @ 2.9 GHz would come anywhere near 256 cores @ 4.4 GHz, even if they had the I/O to accommodate the GPUs.

    And no, Intel is NO LONGER better at FLOPS/$. Actually, it may never have been, considering how cheap AMD processors are. AMD was simply too slow and too power-inefficient for me until now.

    And since the launch of Ryzen, AMD offers 50-100% better FLOPS/$, so it is a no-brainer, especially when performance is not only so affordable but actually ample.

    Your whole post narrative basically says "Intel fanboy in disguise". I guess it is back to the drawing board for you.
  • Meteor2 - Saturday, June 3, 2017 - link

    Ddriver is our friendly local troll; best ignored and not fed.
  • trivor - Saturday, June 3, 2017 - link

    Whether you're a large corporation with a billion-dollar IT budget and dedicated IT staff, or a SOHO (Small Office Home Office) user with a very limited budget, everyone is looking for bang for the buck. While most people on this site are enthusiasts, we all have some kind of budget to keep. We're all looking for the sweet spot for gaming (the intersection of CPU/GPU for the resolution we want), and more and more, a fairly quiet system is important (even more so for an HTPC). While some corporations might be tied to certain vendors (Microsoft, Dell, Lenovo, etc.), they don't necessarily care what components are inside, because it is the vendor that will be warranting the system. For pure home users, not all of these systems are for us. Ryzen 5/7, i5/i7, and maybe i9 are the CPUs and SoCs for us. Anything more than that will not help our gaming performance or even other tasks (video editing/encoding), because even multi-core-aware programs (Handbrake) can't necessarily use 16-20 cores. The absolute sweet spot right now is the CPUs around $200 (Ryzen 5 1600/1600X, Core i5), because you can get a very nice system in the $600 range. That will give you good performance in most games and other home-user tasks.
  • swkerr - Wednesday, May 31, 2017 - link

    There may be brand loyalty on the retail side, but it does not exist in the corporate world. Data center managers will look at total cost of ownership. Performance per watt will be key, as well as the cost of the CPU and motherboard. What the corporate world is loyal to is the brand of server, and if Dell/HP etc. make AMD-based servers, then they will add them if the total cost of ownership looks good.

  • Namisecond - Wednesday, May 31, 2017 - link

    Actually, even for the consumer retail side, there isn't brand loyalty at the CPU level (excepting a very vocal subset of the small "enthusiast" community). Brand loyalty is at the PC manufacturer level: Apple, Dell, HP, Lenovo, etc.
  • bcronce - Tuesday, May 30, 2017 - link

    "But at that core count you are already limited by thermal design. So if you have more cores, they will be clocked lower. So it kind of defeats the purpose."

    TDP scales with the square of the voltage. Reduce the voltage 25%, reduce the TDP by almost 50%. Voltage scales non-linearly with frequency. Near the high end of the stock frequency, you're gaining 10% clock for a 30% increase in power consumption because of the large increase in voltage to keep the clock rate stable.
  • ddriver - Tuesday, May 30, 2017 - link

    The paragraph next to the one you quoted explicitly states that lower clocks are where you hit the peak of the power/performance ratio curve. Even to an average AT reader, it should be clear that lowered clocks come with lowered voltage.

    There is no "magic formula" like, for example, the quadratic rate of intensity decay for a point light source. TDP vs voltage vs clocks is a function of process scale, maturity, leakage and operating environment. It is, however, true that the more you push above the optimal spot, the less performance you will get for every extra watt.
  • boeush - Tuesday, May 30, 2017 - link

    "More cores would be beneficial for servers, where the chips are clocked significantly lower, around 2.5 Ghz, allowing to hit the best power/performance ratio by running defacto underclocked cores.

    But that won't do much good in a HEDT scenario."

    I work on software development projects where one frequently compiles/links huge numbers of files into a very large application. For such workloads, you can never have enough cores.

    Similarly, I imagine any sort of high-resolution (4k, 8k, 16k) raytracing or video processing workloads would benefit tremendously from many-core CPUs.

    Ditto for complex modelling tasks, such as running fluid dynamics, heat transfer, or finite element stress/deformation analysis.

    Ditto for quantum/molecular simulations.

    And so on, and on. Point being, servers are not the only type of system to benefit from high core counts. There are many easily-parallelizable problems in the engineering, research, and general R&D spheres that can benefit hugely.
  • ddriver - Tuesday, May 30, 2017 - link

    The problem is that the industry wants to push HEDT as gaming hardware. They could lower clocks and voltages and add more cores, which would be beneficial to pretty much anything time-consuming like compilation, rendering, encoding or simulations, as all of those lend themselves very well to multithreading and scale up nicely.

    But that would be too detrimental to gaming performance, so they would lose gamers as potential HEDT customers. Those buyers would go for the significantly cheaper, lower-core-count, higher-clocked CPUs instead, and a higher-margin market would be lost.
  • Netmsm - Thursday, June 1, 2017 - link

    "AMD will not and doesn't need to launch anything other than 16 core. Intel is simply playing the core count game, much like it played the Mhz game back in the days of pentium4."
    Exactly ^_^ That's it.
