Announcement Two: High Core Count Skylake-X Processors

The twist in the story of this launch comes with the next batch of processors. Our pre-briefing contained something unexpected: Intel is bringing the high core count silicon from the enterprise side down to consumers. I’ll cover the parts first and then discuss why this is happening.

The HCC die for Skylake-X is set to be either 18 or 20 cores. I say ‘or’ because there’s a small issue with what we had originally thought. If you had asked me six months ago, I would have said that the upcoming HCC die, based on some information I had and a few sources, would be an 18-core design. As with other HCC designs in previous years, where the LCC design uses a single ring bus around all the cores, the HCC design would offer a dual ring bus, potentially lopsided, designed to keep the average L3 cache latency reasonable with so many cores rather than turning into one big racetrack (insert joke about Honda race engines). Despite this, Intel shared a die image of the upcoming HCC implementation, as in this slide:

It is clear that there are repeated segments: four rows of five, indicating a dual ring bus arrangement. A quick glance might suggest a 20-core design, but look at the top and bottom segments of the second column from the left: these are laid out slightly differently from the others. Are they actual cores? Are they different because they support AVX-512 (a topic discussed later), or are they non-cores, providing die area for something else? So is this an 18-core silicon die or a 20-core silicon die? We’ve asked Intel for clarification, but were told to await more information when the processor is launched. Answers on a tweet @IanCutress, please.
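To see why one big ring becomes a racetrack, here is a toy model, entirely my own sketch rather than Intel’s actual interconnect, comparing the average hop count between stops on a single 18-stop bidirectional ring against two 9-stop rings joined by an assumed two-hop bridge:

```c
/* Toy model of average stop-to-stop hop distance on a bidirectional ring.
 * Illustrative only: it assumes a uniform cost per ring stop and a fixed
 * two-hop penalty for crossing between rings in the dual-ring case. */
#include <stdio.h>

/* Average shortest-path hops between two distinct stops on a
 * bidirectional ring of n stops. */
static double avg_ring_hops(int n) {
    long total = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (i == j) continue;
            int d = (j - i + n) % n;            /* clockwise distance   */
            total += d < n - d ? d : n - d;     /* take the shorter way */
        }
    }
    return (double)total / ((double)n * (n - 1));
}

int main(void) {
    int n = 18;              /* stops on one big ring               */
    double bridge = 2.0;     /* assumed penalty for switching rings */

    double single = avg_ring_hops(n);
    /* Roughly half of all accesses stay local, half cross the bridge. */
    double dual = avg_ring_hops(n / 2) + 0.5 * bridge;

    printf("single ring, %d stops : avg %.2f hops\n", n, single);   /* ~4.76 */
    printf("dual ring, 2x%d stops : avg %.2f hops\n", n / 2, dual); /* ~3.50 */
    return 0;
}
```

Under those assumptions the dual ring shaves roughly a hop and a quarter off the average access, which is the whole argument for splitting the die into two rings despite the extra design complexity.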

So with the image of the silicon out of the way, here are the three parts that Intel is planning to launch. As before, all processors support hyperthreading.

Skylake-X Processors (High Core Count Chips)
                 Core i9-7940X    Core i9-7960X    Core i9-7980XE
Cores/Threads    14/28            16/32            18/36
Clocks           TBD              TBD              TBD
L3 Cache         TBD              TBD              TBD
PCIe Lanes       TBD (likely 44)  TBD (likely 44)  TBD (likely 44)
Memory Freq      TBD              TBD              TBD
TDP              TBD              TBD              TBD
Price (tray)     $1399            $1699            $1999

As before, let us start from the bottom of the HCC processors. The Core i9-7940X will be a harvested HCC die featuring fourteen cores, running in the same LGA2066 socket, with a tray price of $1399, continuing the $100/core strategy, and likely around $1449-$1479 at retail. No numbers have been provided for frequencies, turbo, power, DRAM or PCIe lanes, although given that it is a member of the Core i9 family, we would expect DDR4-2666 support and 44 PCIe lanes.

Next up is the Core i9-7960X, which is perhaps the name we would have expected for the high-end LCC processor. As with the 14-core part, we have almost no information beyond the core count (sixteen for the 7960X), the socket (LGA2066) and the price: $1699 tray ($1779 retail?). Again, we would expect support for at least DDR4-2666 memory and 44 PCIe lanes, but the frequencies remain unknown.

The Core i9-7980XE sits atop the stack as the halo part, looking down on all those beneath it. Like an unruly dictator, it gives nothing away: all we have is the core count (eighteen), the fact that it will sit in the LGA2066 socket, and the tray price at a rather cool $1999 (~$2099 retail). When this processor will hit the market, no one really knows at this point. I suspect even Intel doesn’t know.

Analysis: Why Offer HCC Processors Now?

The next statement shouldn’t be controversial, but some will see it this way: AMD and ThreadRipper.

ThreadRipper is AMD’s ‘super high-end desktop’ processor, going above the eight cores of the Ryzen 7 parts with a full sixteen cores of their high-end microarchitecture. Where Ryzen 7 competed against Broadwell-E, ThreadRipper has no direct competition, unless we look at the enterprise segment.

Just to be clear, Skylake-X as a whole is not a response to ThreadRipper. Skylake-X, as far as we understand, was expected to be LCC only: up to 12 cores and sitting happy. Compared to AMD’s Ryzen 7 processors, Intel’s Broadwell-E had an advantage in the number of cores, the size of the cache, and the instructions per clock, and enjoyed high margins as a result. Intel had the best, and could charge more. (Whether you thought paying $1721 for a 10-core BDW-E made sense compared to a $499 8-core Ryzen with fewer PCIe lanes is something you voted on with your wallet.) Pretty much everyone in the industry, at least the ones I talk to, expected more of the same: Intel could launch the LCC version of Skylake-X, move up to 12 cores, keep similar pricing, and reap the rewards.

When AMD announced ThreadRipper at its Financial Analyst Day in early May, I fully suspect that the Intel machine went into overdrive (if it hadn’t already). If AMD had a 16-core part in the ecosystem, even at 5-15% lower IPC than Intel, it would be likely that Intel’s 12-core part would no longer be the halo product. Other factors come into play, of course: we don’t know all the details of ThreadRipper, such as frequencies, and Intel has a much wider ecosystem of partners than AMD. But Intel sells A LOT of its top-end HEDT processor. I wouldn’t be surprised if the 10-core $1721 part was the bestselling Broadwell-E processor. So if AMD took that crown, Intel would lose a position it has held for a decade.

So imagine the Intel machine going into overdrive. What would be going through their heads? Competing on performance-per-dollar? Pushing frequencies? Back in the days of the frequency race, you could just slap a new TDP on a processor and bin harder. In a core count race, you actually need physical cores to provide that performance: against a 16-core part, a 12-core part needs a 33%+ per-core advantage (16/12 ≈ 1.33) just to stay level. I suspect the only way to provide a product in the same vein was to bring the HCC silicon to consumers.

Of course, I would suspect that inside Intel there was pushback. The HCC (and XCC) silicon is the bread and butter of the company’s server line. By offering it to consumers, there is a chance that the business Intel normally gets from small and medium businesses, or those that buy single or double-digit numbers of systems, might decide to save a lot of money by going the consumer route. There would be no feasible way for Intel to sell HCC-based processors to end-users at enterprise pricing and expect everyone to be happy.

Knowing what we know from working with Intel over many years, I suspect that the HCC route was the most viable option. Intel could still sell a premium part, and sell lots of them, but some revenue would shift from enterprise to consumer. It would also knock back any threat from AMD if the ecosystem comes into play as well.

As it stands, Intel has two processors lined up to take on ThreadRipper: the sixteen-core Core i9-7960X at $1699 and the eighteen-core Core i9-7980XE at $1999. A ThreadRipper package pairs two eight-core Zeppelin dies: a single Zeppelin has a TDP of 95W at 3.6 GHz to 4.0 GHz, so two Zeppelin dies together could have a TDP of 190W at 3.6 GHz to 4.0 GHz, though we know that AMD bins its top silicon heavily, so it could easily come down to 140W at 3.2-3.6 GHz. This means that Intel is going to have to compete with those sorts of numbers in mind: if AMD brings ThreadRipper out to play at around 140W at 3.2 GHz, then the two Core i9s listed above have to be there as well. Typically Intel doesn’t clock all its HCC processors that high, unless they are the super-high-end workstation designs.
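As a back-of-the-envelope check on that binning argument, the textbook dynamic-power relation P ∝ f·V² lands in the same ballpark; the voltages below are my own illustrative assumptions, not AMD’s actual figures:

```c
/* Sketch of how dropping clocks and voltage pulls TDP down, using the
 * standard dynamic-power scaling P ~ f * V^2. The voltages are assumed
 * values for illustration, not AMD specifications. */
#include <stdio.h>

int main(void) {
    double p1 = 190.0;            /* two 95W Zeppelin dies at base clock  */
    double f1 = 3.6, v1 = 1.20;   /* assumed stock point: 3.6 GHz, 1.20V  */
    double f2 = 3.2, v2 = 1.05;   /* assumed binned point: 3.2 GHz, 1.05V */

    /* Power scales linearly with frequency and with the square of voltage. */
    double p2 = p1 * (f2 / f1) * (v2 * v2) / (v1 * v1);
    printf("scaled TDP estimate: %.0f W\n", p2);   /* prints ~129 W */
    return 0;
}
```

A roughly 11% clock reduction plus a modest voltage drop takes the estimate from 190W to around 130W, which is why a 140W, 3.2 GHz ThreadRipper is entirely plausible from binned silicon.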

So despite an IPC advantage and an efficiency advantage in the Skylake design, Intel has to push every button here. Another unknown is AMD’s pricing: what would happen if ThreadRipper comes out at $999-$1099?

But I ask our readers this:

Do you think Intel would be launching consumer grade HCC designs for HEDT if ThreadRipper didn’t exist?

For what it is worth, kudos all around: AMD for shaking things up, and Intel for upping the game. This is what we’ve missed in consumer processor technology for a number of years.

(To be fair, I predicted AMD’s 8-core to be $699 or so. To see one launched at $329 was a nice surprise).

I’ll add another word that is worth thinking about. AMD’s ThreadRipper uses two Zeppelin dies, with each Zeppelin having two CCXes of four cores apiece. As observed in Ryzen, the cache-to-cache latency when a core needs data in another part of the cache is not consistent. Intel’s HCC silicon, if it implements a dual ring bus design, has similar issues due to the way the cores are grouped. For users that have heard of NUMA (non-uniform memory access), it is a tricky thing to code for and even trickier to code well for, and almost all the software that supports NUMA is enterprise grade. With both of these designs coming into consumer, and next-to-zero NUMA-aware code in consumer applications (including games), there might be a learning period in performance. Either that, or we will see software pinning itself to particular groups of cores in order to evade the issue entirely.
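As a minimal sketch of that pinning idea on Linux, assuming cores 0-3 share one CCX or ring segment (the core numbering here is hypothetical and machine-dependent):

```c
/* Minimal core-pinning sketch using the Linux-specific
 * pthread_setaffinity_np(). Confining a worker thread to cores 0-3,
 * assumed here to sit on one CCX/ring segment, keeps its cache traffic
 * local and sidesteps cross-die or cross-ring latency. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    /* Latency-sensitive work runs here, confined to its core group. */
    return NULL;
}

int main(void) {
    pthread_t t;
    cpu_set_t set;

    CPU_ZERO(&set);
    for (int core = 0; core < 4; core++)   /* cores 0-3: one assumed group */
        CPU_SET(core, &set);

    pthread_create(&t, NULL, worker, NULL);
    if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
        fprintf(stderr, "failed to set thread affinity\n");
    pthread_join(t, NULL);
    return 0;
}
```

Game engines and renderers could do the same per worker pool; it is a blunt instrument, but it works without any NUMA-aware allocation logic.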

Comments

  • ddriver - Friday, June 2, 2017

    "I would be willing to bet that between 2-4 of those can replace your entire farm and still give you better FLOP/$."

    Not really. Aside from the 3770Ks running at 4.4 GHz, most of the performance actually comes from GPU compute. You can't pack those tiny stock rackmount systems with GPUs. Not that 256 cores @ 2.9 GHz would come anywhere near 256 cores @ 4.4 GHz, even if they had the I/O to accommodate the GPUs.

    And no, Intel is NO LONGER better at FLOPS/$. Actually, it may never have been, considering how cheap AMD processors are. AMD was simply too slow and too power inefficient for me until now.

    And since the launch of Ryzen, AMD offers 50-100% better FLOPS/$, so it is a no-brainer, especially when performance is not only so affordable but actually ample.

    Your whole post narrative basically says "intel fanboy in disguise". I guess it is back to the drawing board for you.
  • Meteor2 - Saturday, June 3, 2017

    Ddriver is our friendly local troll; best ignored and not fed.
  • trivor - Saturday, June 3, 2017

    Whether you're a large corporation with a $billion IT budget and dedicated IT staff, or a SOHO (Small Office Home Office) user with a very limited budget, everyone is looking for bang for the buck. While most people on this site are enthusiasts, we all have some kind of budget to keep. We look for the sweet spot for gaming (the intersection of CPU/GPU for the resolution we want), and increasingly a fairly quiet system (even more so for an HTPC) is important. While some corporations might be tied to certain vendors (Microsoft, Dell, Lenovo, etc.), they don't necessarily care what components are inside, because it is the vendor that will be warranting the system. For pure home users, not all of these systems are for us. Ryzen 5/7, i5/i7, and maybe i9 are the CPUs and SoCs for us. Anything more than that will not help our gaming performance or even other tasks (video editing/encoding), because even multi-core-aware programs (Handbrake) can't necessarily use 16-20 cores. The absolute sweet spot right now is the CPUs around $200 (Ryzen 5 1600/1600X, Core i5), because you can get a very nice system in the $600 range. That will give you good performance in most games and other home-user tasks.
  • swkerr - Wednesday, May 31, 2017

    There may be brand loyalty on the retail side, but it does not exist in the corporate world. Data center managers will look at total cost of ownership. Performance per watt will be key, as well as the cost of the CPU and motherboard. What the corporate world is loyal to is the brand of server, and if Dell/HP etc. make AMD-based servers, then they will add them if the total cost of ownership looks good.

  • Namisecond - Wednesday, May 31, 2017

    Actually, even on the consumer retail side, there isn't brand loyalty at the CPU level (excepting a very vocal subset of the small "enthusiast" community). Brand loyalty is at the PC manufacturer level: Apple, Dell, HP, Lenovo, etc.
  • bcronce - Tuesday, May 30, 2017

    "But at that core count you are already limited by thermal design. So if you have more cores, they will be clocked lower. So it kind of defeats the purpose."

    TDP scales with the square of the voltage. Reduce the voltage 25%, reduce the TDP by almost 50%. Voltage scales non-linearly with frequency. Near the high end of the stock frequency, you're gaining 10% clock for a 30% increase in power consumption because of the large increase in voltage to keep the clock rate stable.
  • ddriver - Tuesday, May 30, 2017

    The paragraph next to the one you quoted explicitly states that lower clocks are where you hit the peak of the power/performance ratio curve. Even to an average AT reader, it should be implied that lowered clocks come with lowered voltage.

    There is no "magic formula" like for example the quadratic rate of intensity decay for a point light source. TDP vs voltage vs clocks in a function of process scale, maturity, leakage and operating environment. It is however true that the more you push above the optimal spot the less performance you will get for every extra watt.
  • boeush - Tuesday, May 30, 2017

    "More cores would be beneficial for servers, where the chips are clocked significantly lower, around 2.5 Ghz, allowing to hit the best power/performance ratio by running defacto underclocked cores.

    But that won't do much good in a HEDT scenario."

    I work on software development projects where one frequently compiles/links huge numbers of files into a very large application. For such workloads, you can never have enough cores.

    Similarly, I imagine any sort of high-resolution (4k, 8k, 16k) raytracing or video processing workloads would benefit tremendously from many-core CPUs.

    Ditto for complex modelling tasks, such as running fluid dynamics, heat transfer, or finite element stress/deformation analysis.

    Ditto for quantum/molecular simulations.

    And so on, and on. Point being, servers are not the only type of system to benefit from high core counts. There are many easily-parallelizable problems in the engineering, research, and general R&D spheres that can benefit hugely.
  • ddriver - Tuesday, May 30, 2017

    The problem is that the industry wants to push HEDT as gaming hardware. They could lower clocks and voltages and add more cores, which would be beneficial to pretty much anything time-consuming like compilation, rendering, encoding or simulations, as all of those lend themselves very well to multithreading and scale up nicely.

    But that would be too detrimental to gaming performance, so they would lose gamers as potential customers for HEDT. They'd go for the significantly cheaper, lower core count, higher clocked CPU. So a higher-margin market would be lost.
  • Netmsm - Thursday, June 1, 2017

    "AMD will not and doesn't need to launch anything other than 16 core. Intel is simply playing the core count game, much like it played the Mhz game back in the days of pentium4."
    Exactly ^_^ That's it.
