Announcement Three: Skylake-X's New L3 Cache Architecture

(AKA I Like Big Cache and I Cannot Lie)

SKU madness aside, there's more to this launch than just the number of cores at each price point. Deviating somewhat from its usual pattern, Intel has made some interesting changes to several elements of Skylake-X that are worth discussing. Next up is how Intel is implementing the per-core cache.

In previous generations of HEDT processors (as well as the Xeon processors), Intel implemented a three-stage cache before hitting main memory. The L1 and L2 caches were private to each core, while the L3 was an inclusive last-level cache covering all cores. This, at a high level, means that any data in L2 is duplicated in L3, such that if a cache line is evicted from L2 it will still be present in the L3 if it is needed again, rather than requiring a trip all the way out to DRAM. The relative sizes matter as well: with an L3 that is inclusive of the L2, the L3 is usually several multiples of the L2's size, in order to store all the L2 data plus a good deal more. Intel typically had 256 KB of L2 cache per core, and anywhere between 1.5 MB and 3.75 MB of L3 per core, which gave both caches plenty of room and performance. It is worth noting at this point that the L2 cache is closer to the logic of the core, where space is at a premium.
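
To make the inclusive relationship concrete, here is a minimal, illustrative Python sketch of the textbook behaviour described above (not Intel's actual policy, and no replacement logic is modelled): every line brought into L2 is also installed in L3, so a line evicted from L2 can still be served from L3 instead of DRAM.

```python
# Illustrative model of an *inclusive* L2/L3 pair (textbook behaviour, not Intel's design).
class InclusiveHierarchy:
    def __init__(self):
        self.l2 = {}   # private, per-core cache
        self.l3 = {}   # shared last-level cache

    def fill(self, addr, data):
        # A line fetched from DRAM is installed in both levels,
        # so everything in L2 is duplicated in L3 (inclusion).
        self.l3[addr] = data
        self.l2[addr] = data

    def evict_from_l2(self, addr):
        # Dropping a line from L2 does not remove it from the inclusive L3.
        self.l2.pop(addr, None)

    def read(self, addr):
        if addr in self.l2:
            return "L2 hit"
        if addr in self.l3:
            return "L3 hit"        # a later access avoids the trip to DRAM
        return "DRAM access"

h = InclusiveHierarchy()
h.fill(0x1000, "line")
h.evict_from_l2(0x1000)
print(h.read(0x1000))              # "L3 hit": the inclusive L3 still holds the line
```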

With Skylake-X, this cache arrangement changes. When Skylake-S was originally launched, we noted that the L2 cache had a lower associativity because it allowed for more modularity, and this is that principle in action. Skylake-X processors will have their private L2 cache increased from 256 KB to 1 MB, a four-fold increase. This comes at the expense of the L3 cache, which is reduced from ~2.5MB/core to 1.375MB/core.
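
As a quick back-of-the-envelope check on those figures (using the ~2.5 MB/core number quoted above), the total private-plus-shared cache per core actually shrinks slightly; the trade is a little less total capacity in exchange for much more of it sitting in the faster private L2:

```python
# Per-core cache budget before and after, in MB (figures from the text above)
old_l2, old_l3 = 0.25, 2.5      # previous HEDT: 256 KB L2 + ~2.5 MB L3 slice
new_l2, new_l3 = 1.0, 1.375     # Skylake-X:    1 MB L2  + 1.375 MB L3 slice

print(old_l2 + old_l3)          # 2.75  MB per core
print(new_l2 + new_l3)          # 2.375 MB per core
```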

With such a large L2 cache, the L2-to-L3 relationship is no longer inclusive but 'non-inclusive'. Intel is using this terminology rather than 'exclusive' or 'fully-exclusive', as the L3 will still have some features that are not present in a pure victim cache, such as prefetching. What this will mean, however, is more work for snooping and for keeping track of where cache lines are. Cores will snoop other cores' L2 caches to find updated data, with DRAM as a backup (which may be out of date). In previous generations the L3 cache was always the backup, but now this changes.
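
A rough sketch of how a read might resolve under the new arrangement (again illustrative only; Intel has not detailed the exact coherence protocol): on an L2 miss, the other cores' private L2s have to be snooped, and the L3 no longer guarantees a copy of every line.

```python
# Illustrative lookup order with a non-inclusive L3 (not Intel's documented protocol).
def read(addr, my_l2, other_l2s, l3, dram):
    if addr in my_l2:
        return my_l2[addr]            # fast path: private L2 hit
    for l2 in other_l2s:              # extra snoop work: the only up-to-date copy may
        if addr in l2:                # now live in another core's private L2
            return l2[addr]
    if addr in l3:                    # non-inclusive L3: it may or may not hold the line
        return l3[addr]
    return dram[addr]                 # last resort; as noted above, the DRAM copy
                                      # can be out of date until coherence catches up

# Example: the line exists only in another core's L2, so the snoop finds it there.
print(read(0x40, my_l2={}, other_l2s=[{0x40: "fresh"}], l3={}, dram={0x40: "stale"}))
```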

The good element of this design is that a larger L2 will increase the hit rate and decrease the miss rate. Depending on the level of associativity (which has not been disclosed yet, at least not in the basic slide decks), a general rule I have heard is that doubling the cache size decreases the miss rate by a factor of sqrt(2), and is good for a 3-5% IPC uplift in a regular workload. So here's a conundrum for you: if the larger L2 roughly halves the miss rate, leading to something in the region of an 8-13% IPC increase, then this is not the same performance as Skylake-S. It may be the same microarchitecture outside the caches, but we get a situation where performance will differ.
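
Putting that rule of thumb into numbers (a sketch of the heuristic above, not a measurement): 256 KB to 1 MB is two doublings, so the miss rate should fall by roughly sqrt(2) twice, i.e. be cut in half.

```python
# Rule-of-thumb arithmetic for the quadrupled L2 (heuristic only, not measured data).
from math import log2, sqrt

doublings = log2(1024 / 256)                 # 2.0: 256 KB -> 1 MB is two doublings
miss_rate_reduction = sqrt(2) ** doublings   # ~2.0: miss rate roughly halved

# Compounding the quoted 3-5% per doubling gives roughly 6-10%, the same ballpark
# as the high-single-digit to low-double-digit uplift discussed above.
ipc_low, ipc_high = 1.03 ** doublings, 1.05 ** doublings
print(f"miss rate / {miss_rate_reduction:.1f}, "
      f"IPC +{(ipc_low - 1) * 100:.0f}% to +{(ipc_high - 1) * 100:.0f}%")
```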

Fundamental Realisation: Skylake-S IPC and Skylake-X IPC will be different.

This is something that fundamentally requires in-depth testing. Combine this with the change in the L3 cache, and it is hard to predict the outcome without being a silicon design expert. I am not one of those, but it's something I want to look into as we approach the actual Skylake-X launch.

More things to note on the cache structure. There are several ways Intel could have done this, and the one I initially imagined was a partitioned cache strategy: keep the cache layout the same as previous generations, but designate partitions of the L3 as L2. This makes life difficult, because you then have a portion of the L2 sitting at L3 latency, and a wide variation in L2 latency brings a lot of headaches. That method would be easy for silicon layout, but hard to implement well. Looking at the HCC silicon representation in our slide deck, it's clear that there is no monolithic L3 covering all the cores – each core has its own slice. That being the case, we now have an L2 at approximately the same size as the L3, at least per core. Given these two points, I fully suspect that Intel is running a genuine physical 1 MB L2, which will give the design the high hit rate and consistent low latency it needs. This will be one feather in the cap for Intel.

203 Comments


  • PUN - Tuesday, May 30, 2017 - link

    AMD will ALWAYS undercut Intel on pricing, regardless of relative performance. Now they can compete with similar performance at a lower price, giving consumers a choice.
  • Bullwinkle J Moose - Tuesday, May 30, 2017 - link

    Time to stop bragging about who has the best A.I. and start putting it to use

    Which "Intel" core count is the best "Value" for Gamers / Media Production / General Business / etc

    On day 1 (when ALL core counts are finally available to the public), what is the optimum core count for software that is currently available in each category of usage

    Is 4/6/8 core the best value for home use?
    Is 14/16/18 core the best value for certain business case usage?
    Or would 10/12/14 be the best all around value for other business classes?

    Time to call Microsoft and ask what the top 20 software applications are for each usage case scenario for Windows 10 and see who has the best A.I. to figure this out

    Best value vs core count on day 1 availability could be VERY different from 6 month / 1 year and 3 year mark after launch, so keep us updated!

    Thanks
  • edzieba - Tuesday, May 30, 2017 - link

    For the vast majority of consumer desktop applications (gaming, office, web browsing) you're looking at a spread of workloads from single-threaded to just about benefiting from a 4th core. Outside of video encoding, 3DCG rendering, or data analytics, there aren't that many workloads that really benefit from throwing more cores at them; the handful that do are embarrassingly parallel and trivially parallelised, and have already moved over to GPGPU.

    We've heard the siren song of "We'll add more cores, and the parallelism will come!" many times over the last decade, and it has yet to happen.
  • Gothmoth - Tuesday, May 30, 2017 - link

    i run 4 heavy apps in parallel most of the time.. if i give each of them 4 cores it works very well.
    and yes i do video editing and 3d rendering.

    people who run a lot of VMs will love more cores also.
    this is not for word processing.. but there are enough people who can make use of 16 or more cores.
  • Threska - Friday, June 2, 2017 - link

    VMs are kind of the sneak that'll get more cores onto people's machines. In the enterprise, VMs are used a lot, but on the desktop, not so much. But there are advantages to VMs that would benefit a desktop user, especially in this day and age of a hostile internet, not to mention better software delivery.
  • theuglyman0war - Thursday, June 8, 2017 - link

    outside of billion dollar industries...
    ok?
  • Gothmoth - Tuesday, May 30, 2017 - link

    why is there basically no coverage of AMD at computex on anandtech??

    it's 10 articles about intel for one about AMD?

    where is an article about the displayed x399 boards?

    am i wrong in my perception?
  • fanofanand - Wednesday, May 31, 2017 - link

    Not wrong at all, I have seen just about nothing.
  • Meteor2 - Saturday, June 3, 2017 - link

    Not wrong. It's very weird how AMD have suddenly vanished from Anandtech's coverage. I don't think they even reported the Threadripper announcement a few weeks back.
  • Gothmoth - Sunday, June 4, 2017 - link

    good to know i'm not imagining this.

    seems like intel is paying anandtech a lot to not report much about AMD.
    back to the 90s and intel's shady tactics... so soon?

    and where are the critical voices about the crippled x299?

    linus on x299:

    https://www.youtube.com/watch?v=TWFzWRoVNnE
