Die Size Estimates and Arrangements

On the previous page, we showed pictures of ring bus and mesh arrangements. With a ring bus, the silicon layout of the cores and the interconnect can be fairly regular, and the requirements are not that stringent: put the cores in a circle (or overlapping circles) and away you go. With a mesh, things get a little more rigid.

The mesh diagrams on the previous page are all presented as rectangles in x*y arrangements. To increase the core count you have to add a full row or a full column, whereas with a ring it is fairly straightforward to just add another pair of cores into the loop (which is what happened over the last few generations). Adding only a pair of cores to a mesh means you end up with more corners and more edges – not all cores are 'equal', and there can be performance penalties as a result. An arrangement where x = y is usually the best bet, as the quick sketch below illustrates. This lets us make some predictions about how Intel's silicon is lining up.
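To put a rough number on that, here is a back-of-the-envelope sketch (our own illustration, not anything from Intel): for a fixed number of mesh tiles, the worst-case core-to-core distance in an x*y grid is (x-1)+(y-1) hops, which is smallest when x and y are as close to equal as possible.

```python
# Back-of-the-envelope sketch (our illustration, not Intel data):
# for a fixed number of mesh tiles, compare the worst-case hop count
# of different x*y arrangements. Fewer hops = lower worst-case latency.

def arrangements(tiles):
    """Yield every (x, y) grid with x*y == tiles and x <= y."""
    for x in range(1, int(tiles ** 0.5) + 1):
        if tiles % x == 0:
            yield x, tiles // x

def worst_case_hops(x, y):
    """Maximum Manhattan distance between two tiles in an x*y mesh."""
    return (x - 1) + (y - 1)

for tiles in (16, 20, 30):
    options = [(worst_case_hops(x, y), f"{x}x{y}") for x, y in arrangements(tiles)]
    print(tiles, "tiles:", ", ".join(f"{name} -> {hops} hops" for hops, name in sorted(options)))

# 16 tiles: 4x4 -> 6 hops, 2x8 -> 8 hops, 1x16 -> 15 hops
# The squarer the grid, the lower the worst-case core-to-core distance.
```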

A side note for discussion: if we had a 100x100 core arrangement, the cores in the middle would see significant latency just getting anywhere near external memory at the edges. One way around that would be to extend the 2D mesh into a 3D mesh.

Three things come to our aid in discussing the LCC and HCC silicon. The first was the original Skylake-X announcement back at Computex: one of Intel's slides had an image of the basic floorplan of the HCC silicon to be used for the high core-count Skylake-X processors:

At the time, we were a bit stumped by this image. By counting the regular structures, we can see a 4x5 arrangement, suggesting a 20-core chip. On closer inspection, two of the 'cores' were different: in the second column, the top and bottom segments did not look like cores at all. At the time we postulated that, given the size of the AVX-512 units, this might be where they were located. The second piece of information came with Intel's mesh announcement.

Here’s the diagram:

This is meant to be a pseudo mock-up of a theoretical n-core processor using the mesh topology. At the top are the socket links, along with the PCIe root complexes. On the left and right are the DRAM controllers, each essentially taking up the same area as a core and also using one of the mesh networking stops.
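As a rough visual aid, here is a simplified mock-up (our own sketch of the arrangement the diagram describes, not Intel's actual floorplan): I/O across the top row, and a core-sized memory controller tile on each side, with everything else being cores.

```python
# Simplified mock-up of the mesh floorplan described above (illustrative only):
# top row = socket (UPI) links and PCIe root complexes,
# one core-sized tile on each side = DRAM/memory controller (IMC),
# everything else = cores, each sitting on its own mesh stop.

def mock_floorplan(columns=6, rows=5):
    grid = []
    grid.append(["UPI/PCIe"] * columns)          # I/O row across the top
    for r in range(1, rows):
        row = ["core"] * columns
        if r == rows // 2:                        # put the IMC tiles mid-height
            row[0] = "IMC"                        # left-edge memory controller
            row[-1] = "IMC"                       # right-edge memory controller
        grid.append(row)
    return grid

for row in mock_floorplan():
    print(" | ".join(f"{tile:>8}" for tile in row))
```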

So scoot back to that HCC die image, and zoom in on one of those odd-looking 'cores':

What we can see is three regular blue/green vertical areas in each of these segments, which means three memory channels on each side, for a total of six. Skylake-X only has four memory channels, but leaks have shown that the new Skylake-SP processors have six memory channels by design, so here they are. In the 4x5 grid, then, we have 18 cores and two memory controller tiles.

Back when Skylake-X was announced at Computex, I wrote that we were expecting the LCC silicon to be a 12-core design. At that time, we were still expecting Intel to use a ring-bus topology, and as I mentioned before, adding two cores to a ring bus is fairly easy at the expense of peak core-to-core latency. Now that we know Intel is using a mesh, the situation is quite different.

Twelve cores could quite easily fit into LCC silicon in a 3x4 arrangement, but that does not leave any room for the memory controller tiles feeding the six channels that the enterprise Xeons are all meant to have. If we add two 'extra' core-sized areas to the 12-core design, we need a total of 14 segments. Using the x*y arrangement required above, the only way 14 segments works is a 2x7 arrangement (see the sketch below). If that were the case, the DRAM controllers would essentially fill a whole row, or sit at opposite ends of a column. And if one of the x*y dimensions is two, it makes more sense to use a ring bus any day of the week for power, die area, and simplicity.
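To make that tile accounting explicit, here is a short sketch of the reasoning (an illustration of the argument above, not Intel data): take the target core count, add two tiles for the memory controllers, and list the x*y grids that fit.

```python
# Quick sketch of the tile accounting argument above (illustration only):
# tiles needed = cores + 2 memory controller tiles, then list which
# x*y grids give exactly that many tiles (ignoring 1-wide strips).

def grids_for(cores, extra_tiles=2):
    tiles = cores + extra_tiles
    return [(x, tiles // x) for x in range(2, tiles) if tiles % x == 0 and x <= tiles // x]

print("12 cores ->", grids_for(12))   # 14 tiles: only (2, 7) - one dimension is 2
print("18 cores ->", grids_for(18))   # 20 tiles: (2, 10) or (4, 5) - 4x5 matches the HCC die shot
print("28 cores ->", grids_for(28))   # 30 tiles: (2, 15), (3, 10) or (5, 6) - 5x6 matches XCC
```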

So that means the 12-core SKU, the Core i9-7920X, is likely derived from the 18-core HCC silicon, which would also explain why that CPU has been delayed until August.

Die Sizes

At this point in time, the Skylake-X processors based on the LCC silicon have been in the hands of a few people. At Computex there were several extreme overclocking events (using sub-zero coolants) dedicated to the new processors. One element of recent extreme overclocking is delidding the processor – removing the integrated heat spreader in order to replace the thermal interface material underneath.

In general, removing the IHS is not recommended without practice and experience, but for some processors in the past we have seen sizeable temperature benefits from replacing the standard thermal interface material (TIM) that Intel uses. The discussion of whether Intel should be offering a standard goopy TIM or the indium-tin solder that it used to use (and AMD uses) is one I've run on AnandTech before, but there's a really good guide from Roman Hartung, who overclocks under the name der8auer. I'm trying to get him to agree to post it on AnandTech with SKL-X updates so we can discuss it here, but it really is some nice research. You can find the guide over at http://overclocking.guide.

However, removing the IHS also means we can measure the silicon die.

The 10-core LCC die, which is a 3x4 design, measures in at 14.3 x 22.4 mm, or ~322 mm².
Using this, and working from Intel's 4x5 HCC diagram (and assuming it has not been stretched), we get 21.6 x 22.4 mm = ~484 mm² for the high core count design.

That leaves the extreme core count (XCC) option. Using the x*y strategy again, Intel could run a 5x5 design, which gives 25 tiles and 23 cores – an unlikely number. Next up is a 5x6 design, which gives 30 tiles and 28 cores, and it's no secret that many leaks are pointing to a 28-core XCC processor at this point.

There’s also the fact that Intel provided this die shot at the Intel Manufacturing Day a few weeks ago, clearly showing the 5x6 arrangement:

Doing the basic math on a 5x6 design gives us a 21.6 x 32.3 mm = ~698 mm² die size for XCC.
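As a quick sanity check on the arithmetic, the sketch below recomputes the core counts and die areas from the dimensions above (our estimates, not official figures; the small difference on LCC comes from rounding the measured dimensions):

```python
# Die size arithmetic from the dimensions above (estimates, not official figures).
# Core count per die = grid tiles minus the two memory controller tiles.

dies = {
    #        (cols, rows, width_mm, height_mm)
    "LCC": (3, 4, 14.3, 22.4),
    "HCC": (4, 5, 21.6, 22.4),
    "XCC": (5, 6, 21.6, 32.3),
}

for name, (x, y, w, h) in dies.items():
    cores = x * y - 2                  # two tiles are memory controllers, not cores
    area = w * h
    print(f"{name}: {x}x{y} grid, {cores} cores, {w} x {h} mm = {area:.0f} mm^2")

# LCC: 3x4 grid, 10 cores, 14.3 x 22.4 mm = 320 mm^2  (measured as ~322 mm^2)
# HCC: 4x5 grid, 18 cores, 21.6 x 22.4 mm = 484 mm^2
# XCC: 5x6 grid, 28 cores, 21.6 x 32.3 mm = 698 mm^2
```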

Skylake-SP Die Sizes
       Arrangement       Dimensions (mm)    Die Area
LCC    3x4 (10-core)     14.3 x 22.4        322 mm²
HCC    4x5 (18-core)     21.6 x 22.4        484 mm²
XCC    5x6 (28-core)     21.6 x 32.3        698 mm²

Compared to other chips with Intel's mesh architecture, Knights Landing comes in at 646 mm² (minus MCDRAM), and sources put Knights Corner at 720 mm².

Comments

  • Ian Cutress - Monday, June 19, 2017 - link

    Prime95
  • AnandTechReader2017 - Tuesday, June 20, 2017 - link

    Are you sure the numbers are correct? The i7-6950X on your graph here shows less than the 135W in your original review of it under an all-core load.
  • Ian Cutress - Tuesday, June 20, 2017 - link

    We're running a new test suite, different OSes, updated BIOSes, with different metrics/data gathering (might even be a different CPU, as each one is slightly different). There's going to be some differences, unfortunately.
  • gerz1219 - Monday, June 19, 2017 - link

    Power draw isn't relevant in this space. High-end users who work from a home office can write off part of their electric bill as a business expense. Price/performance isn't even that much of an issue for many users in this space for the same reason -- if you're using the machine to earn a living, a faster machine pays for itself after a matter of weeks. The only thing that matters is performance. I don't understand why so many gamers read reviews for non-gamer parts and apply gamer complaints.
  • demMind - Monday, June 19, 2017 - link

    This kind of response keeps popping up and is highly short-sighted. Price for performance matters at the high end, especially if you use it for your livelihood.

    If you go large scale, movie rendering studios will definitely be going with whatever can soften the blow to a large-scale project. This is a FUD response.
  • Spunjji - Tuesday, June 20, 2017 - link

    Power efficiency will matter again when Intel leads in it. I've been watching the same see-saw on the graphics side with nVidia: they lead in it now, so now it's the most important factor.

    Marketing works, folks.
  • JKflipflop98 - Thursday, June 22, 2017 - link

    Ah, AMD fanbots. Always with the insane conspiracy theories.
  • AnandTechReader2017 - Tuesday, June 20, 2017 - link

    Power draw is important, as well as temps; it will allow you to push to higher clocks and cut costs.
    Say your work had to get 500 of these machines: if you can use a cheaper PSU, a cheaper CPU, and lower power use, the saving could be quite extreme. We're talking 95W vs 140W, roughly a 50% increase versus the Ryzen. That's quite a bit in the long run.

    I run 4 high-end desktops in my household, so the power draw saving would be quite advantageous for me. It all depends on circumstances; information is king.

    Ian posted that everything is running at stock speeds; each version overclocked, with power draw, would also be interesting, as would the difference different RAM clock speeds make (there was a huge fiasco with people claiming nice performance increases by using higher RAM clocks with the Ryzen CPUs – how much is Intel's new line-up influenced? Can we cut costs and spend more on the GPU/monitor/keyboard/pretty much anything else?)
  • psychok9 - Sunday, July 23, 2017 - link

    It's scandalous... not one graph about temperature!? I suspect that if it had been an AMD CPU we would have mass hysteria and daily news... >:(
    I'm looking at the i7-7820X and trying to understand how I can manage it with an AIO.
  • cknobman - Monday, June 19, 2017 - link

    Nope, this CPU is a turd IMO.
    Intel cheaped out on thermal paste again and this chip heats up big time.
    Only 44 PCIe lanes, shoddy performance, and a rushed launch.

    Only a sucker would buy now before seeing AMD Threadripper, and that is exactly why, and for whom, Intel released these things so quickly.
