Explaining the Jump to Using HCC Silicon

When Intel makes its enterprise processors, it has historically produced three silicon designs:

  • LCC: Low Core Count
  • HCC: High Core Count (sometimes called MCC)
  • XCC: Extreme Core Count (sometimes called HCC, to confuse)

The idea is that moving from LCC to XCC, the silicon contains more cores (and sometimes more features), and it is more cost effective to maintain three different designs than to build one big design and disable parts of it to cover the whole range. The LCC silicon is significantly smaller than the XCC silicon, allowing Intel to achieve a better production cost per die.

Skylake-SP Die Sizes (from chip-architect.com)

  Silicon   Core Arrangement   Dimensions (mm)   Die Area (mm²)
  LCC       3x4 (10-core)      14.3 x 22.4       322 mm²
  HCC       4x5 (18-core)      21.6 x 22.4       484 mm²
  XCC       5x6 (28-core)      21.6 x 32.3       698 mm²
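
To put those die areas in context, here is a rough back-of-the-envelope sketch (not Intel's actual yield model): it estimates how many candidate dies fit on a standard 300 mm wafer for each die size using a common dies-per-wafer approximation, ignoring defect density and edge exclusion. The areas are taken from the table above.

/*
 * Back-of-the-envelope sketch, not Intel's yield model: estimate candidate
 * dies per 300 mm wafer using the common approximation
 *     dies/wafer ~= (pi * r^2) / A  -  (pi * d) / sqrt(2 * A)
 * where A is the die area, d the wafer diameter and r its radius.
 * Defect density and edge exclusion are ignored.
 * Build: gcc -O2 die_count.c -o die_count -lm
 */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

static double dies_per_wafer(double die_area_mm2, double wafer_diameter_mm) {
    double r = wafer_diameter_mm / 2.0;
    return (PI * r * r) / die_area_mm2
         - (PI * wafer_diameter_mm) / sqrt(2.0 * die_area_mm2);
}

int main(void) {
    const char  *name[] = { "LCC", "HCC", "XCC" };
    const double area[] = { 322.0, 484.0, 698.0 };  /* mm^2, from the table above */

    for (int i = 0; i < 3; i++)
        printf("%s: %5.0f mm^2 -> roughly %3.0f candidate dies per 300 mm wafer\n",
               name[i], area[i], dies_per_wafer(area[i], 300.0));
    return 0;
}

With those assumptions the 322 mm² LCC die yields on the order of 180 candidates per 300 mm wafer against roughly 75 for the 698 mm² XCC die, which is the production-cost argument for keeping three separate designs.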

In the enterprise space, Intel uses each of the three designs throughout its Xeon processor stack, ranging from four-core parts (usually cut-down versions of the LCC silicon) all the way up to 28-core parts (using XCC) for this generation. The enterprise platform has more memory channels, support for error-correcting and high-density memory, the ability to communicate with multiple processors, and several other RAS (reliability, availability, serviceability) features that are important in these markets. These are typically disabled for the prosumer platform.

In the past, Intel has only translated the LCC silicon into the prosumer platform. This was driven by a number of reasons.

  • Cost: if users needed XCC, they had to pay the extra and Intel would not lose high-end sales.
  • Software: Enterprise software is highly optimized for the core count, and systems are built especially for the customer. Prosumer software has to work on all platforms, and is typically not so multi-threaded.
  • Performance: Large, multi-core silicon often runs at a low frequency to compensate. This can be suitable for an enterprise environment, but a prosumer environment requires responsiveness and users expect a good interactive experience.
  • Platform Integration: Some large silicon might have additional design rules above and beyond what the smaller silicon requires, typically relating to power or features. In order to support this, a prosumer platform would require additional engineering/cost or lose flexibility.

So what changed at Intel in order to bring HCC silicon to the HEDT prosumer platform?

The short answer that many point to is AMD. This year AMD launched its own high-end desktop platform, based on its Ryzen Threadripper processors. Putting up to 16 of its new high-performance cores in a processor for $999 was somewhat unexpected, especially with the processor beating Intel's top prosumer processors in some (not all) of the key industry benchmarks. The cynical might suggest that Intel had to move to the HCC strategy in order to stay at the top, even if its best processor will cost twice as much as AMD's.

Of course, transitioning a processor from the enterprise stack to the prosumer platform is not an overnight process, and many analysts have noted that Intel has likely considered this option for several generations: testing it internally at least, and watching the market to decide when (or if) it would be a good time to do so. The same analysts point to Intel's initial lack of specifications aside from core count when these processors were first announced several months ago: specifications that would historically have been narrowed down at that point if they had been in the original plans. It is likely that the feasibility of introducing the HCC silicon was already being studied, but actually moving that silicon to retail was a late addition to counter a threat to Intel's top spot. That being said, to say Intel had never considered it would perhaps be a jump too far.

The question now becomes if the four areas listed above would all be suitable for prosumers and HEDT users:

  • Cost: Moving the 18-core part to the $1999 mark is unprecedented for a consumer processor, so it will be interesting to see what the uptake will be. This does cut into Intel’s professional product line, where the equivalent processor is nearer $3500, but there are enough ‘cuts’ on the prosumer part for Intel to justify the difference: memory channels (4 vs 6), multi-processor support (1 vs 4), and ECC/RDIMM support (no vs yes). What the consumer platform does get in return is overclocking support, which the enterprise platform does not have.
  • Software: Intel introduced its concept of ‘mega-tasking’ with the last-generation HEDT platform, designed to encompass users and prosumers who run multiple software packages at once: encoding, streaming, content creation, emulation, etc. Its argument now is that even if software cannot fully scale beyond a few cores, a user can run multiple instances or several different software packages simultaneously without any slow-down. The solution here is more a redefinition of the problem than anything else, and one that could have applied to previous generations as well (see the sketch after this list).
  • Performance: Unlike enterprise processors, Intel is pushing the frequency on the new HCC parts for consumers. This translates into a slightly lower base frequency but a much higher turbo frequency, along with support for Turbo Max. In essence, software that requires responsiveness can still take advantage of the high frequency turbo modes, as long as the software is running solo. The disadvantage is going to be in power consumption, which is a topic later in the review.
  • Platform Integration: Intel ‘solved’ this by creating one consumer platform suitable for nine processors with three different designs (Kaby Lake-X, Skylake-X LCC and Skylake-X HCC). The Kaby Lake-X and Skylake-X parts have different power delivery methods, support different numbers of memory channels, and offer different numbers of PCIe lanes / IO. When this was first announced, there was substantial commentary that it made the platform overly complex and would lead to confusion (it led to at least one broken processor in our testing).
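
As a minimal illustration of the ‘mega-tasking’ argument above (a sketch under the assumption that each instance is an independent, roughly single-threaded job, not Intel's own methodology): launching several copies of a non-scaling workload can still fill a many-core CPU.

/*
 * Sketch of the 'mega-tasking' scaling argument: run N independent instances
 * of the same single-threaded workload as separate processes. POSIX only.
 * The busy loop is an arbitrary stand-in for e.g. an encode job.
 * Build: gcc -O2 megatask.c -o megatask ; run: time ./megatask 8
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void busy_work(void) {
    volatile double x = 0.0;              /* volatile so the loop is not optimised away */
    for (long i = 0; i < 200000000L; i++)
        x += (double)i * 1e-9;
}

int main(int argc, char **argv) {
    int instances = (argc > 1) ? atoi(argv[1]) : 4;

    for (int i = 0; i < instances; i++) {
        if (fork() == 0) {                /* child: one independent "instance" */
            busy_work();
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                /* parent: wait for all instances */
        ;
    printf("ran %d independent instances\n", instances);
    return 0;
}

On a part with enough cores, the wall-clock time for 1, 4, or 8 instances should stay roughly flat, which is the behaviour the marketing term describes; a single instance, of course, gets no faster.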

Each of these areas has either been marked as solved, or redefined out of being an issue (whether or not a user agrees with the redefinition). 

Comments

  • Spunjji - Monday, September 25, 2017 - link

    Super relevant, because they indicate how badly thermally limited the CPU is - which is hella good info to have if you're, say, considering delidding a $1999 processor because the manufacturer used toothpaste under the IHS.
  • tricomp - Monday, September 25, 2017 - link

    Poor AMD...
    No chance they are going to meet the (even more) CPU demand after this article is posted..
    I am trying to purchase at least 7 systems for my customers in my country, but there's nowhere I can find these beasts here..
  • iwod - Monday, September 25, 2017 - link

    I wish someone could do an article on that too. GF doesn't seem to be the limitation here. GF should, in theory, have more than enough capacity in its Fab 8 for AMD. Unless GF has some other big customers, AMD should really be pumping out as many units as possible.
  • Atom11 - Monday, September 25, 2017 - link

    Can we please see one test (!), if you could possibly manage it, that shows the advantage of AVX-512 compared to AVX2 when doing:

    1.) matrix multiply
    2.) FFT
    3.) convolution
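
(Not the requested benchmark, but a minimal sketch of the kind of kernel such a comparison would time: the same dot product, which is the inner loop of matrix multiply and convolution, written with AVX2 and with AVX-512F intrinsics. It assumes a CPU and compiler with AVX-512F support; timing is left out for brevity.)

/* Build: gcc -O2 -mavx2 -mfma -mavx512f avx_dot.c -o avx_dot */
#include <immintrin.h>
#include <stdio.h>
#include <stdlib.h>

static float dot_avx2(const float *a, const float *b, size_t n) {
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i + 8 <= n; i += 8)          /* 8 floats per iteration */
        acc = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),
                              _mm256_loadu_ps(b + i), acc);
    float tmp[8], sum = 0.0f;
    _mm256_storeu_ps(tmp, acc);
    for (int i = 0; i < 8; i++) sum += tmp[i];      /* horizontal sum */
    return sum;
}

static float dot_avx512(const float *a, const float *b, size_t n) {
    __m512 acc = _mm512_setzero_ps();
    for (size_t i = 0; i + 16 <= n; i += 16)        /* 16 floats per iteration */
        acc = _mm512_fmadd_ps(_mm512_loadu_ps(a + i),
                              _mm512_loadu_ps(b + i), acc);
    float tmp[16], sum = 0.0f;
    _mm512_storeu_ps(tmp, acc);
    for (int i = 0; i < 16; i++) sum += tmp[i];     /* horizontal sum */
    return sum;
}

int main(void) {
    const size_t n = 1 << 20;                       /* 1M elements */
    float *a = malloc(n * sizeof(float));
    float *b = malloc(n * sizeof(float));
    for (size_t i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
    printf("AVX2:    %.1f\n", dot_avx2(a, b, n));   /* both should print 2097152.0 */
    printf("AVX-512: %.1f\n", dot_avx512(a, b, n));
    free(a); free(b);
    return 0;
}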
  • ZeDestructor - Monday, September 25, 2017 - link

    Give us a comparison to AVX1 and SSE4 too!
  • Gothmoth - Monday, September 25, 2017 - link

    Threadripper delivers 80+% of the performance for less than 50% of the price.... you don't have to be a genius to see which is the better deal (price in Germany: TR 1950X = 950 euro, 7980XE = 2300 euro)
  • Spunjji - Monday, September 25, 2017 - link

    Don't let that stop them equivocating about how companies who need that power yet somehow have no need for ECC don't care about cost because something something software TCO blah blah.
  • spdragoo - Monday, September 25, 2017 - link

    I'm trying really, really hard to think of a company that, at some point or another, doesn't say, "Equipment X may outperform Equipment Y, but the extra cost to buy Equipment X is too much, we'll just make do with Y instead." Especially since 100% of companies have a limit on their budgets.

    What's that, you say? Multi-billion dollar corporations don't have to worry about the money they spend? Someone apparently didn't pay attention in their Econ 200 class, or their Introduction to Accounting coursework.

    By definition, every business has a *finite* amount of money they can spend, based on a) how much money they collect from their customers, b) how much they can recoup on the sale of assets (tangible or intangible), & c) how much they can get from "other sources" (mostly bank loans or by selling stock shares, or sometimes government grants, but you might find the occasional situation where a generous benefactor just bequeaths money to a company...but I doubt you'll even see that happen to 1% of the companies out there -- & no, venture capitalists pouring money into a company is *not* a situation where they "give the money away", they're getting something for their money, usually stock shares or guarantees of repayment of the loans). Of that money, some of it is earmarked for employee compensation (not just the executives, but the office drones & lower-level employees that do 99% of the actual work), some of it goes towards taxes, some of it pays for rental payments, some for loan payments, some for utilities (telephone, Internet, electricity, gas, water, etc.), some of it may get set aside for "emergencies", some gets earmarked for dividends to the shareholders, etc. That means that a (relatively) small portion is set aside for "equipment replacement". Now, if the company is lucky, the lion's share of that budget is for IT-related equipment...but that covers more than just the office drones' machines, that covers everything: server racks, storage services, cloud vendor payments, etc.

    And that is where the price comes into play. For probably 90% of office users out there, not only is Threadripper an overpowered product, so are these products. Heck, we're in the middle of an upgrade from Windows 7 to Windows 10, & they're taking the opportunity to replace our old Sandy Bridge i5 machines with Skylake i7 machines. Sure, they're running faster now...but the main reason they're running faster is because we went from 32-bit Windows to 64-bit Windows, so our PCs now have 8GB of RAM instead of 4GB. That helps with our workload...which primarily revolves around MS Office & using browsers to access & modify a number of massive databases. Having an 8C/16T CPU, let alone a 16C/32T CPU, wouldn't provide any boost for us, since the primary slowdown is on the server side.

    These are going to be expensive systems for specialized purposes...& those individual companies are going to look at their budgets very closely, as well as the performance benchmarks, before deciding to purchase these systems. Sure, they may hold the performance crown...but not by that big of a margin, & especially when compared to the margin that gives them the "most expensive price" crown.
  • BrokenCrayons - Monday, September 25, 2017 - link

    Human labor is more expensive than hardware. The 20% additional performance for $1000 more can be earned back quickly by the increased productivity of your workforce (assuming your management staff is effective enough to keep the employees gainfully employed of course and that's certainly not always the case).
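
(A rough worked example of that argument, with entirely assumed numbers rather than anything from the review: a $1000 price premium, a $50/hour fully loaded labour cost, two hours a day of CPU-bound waiting, and 20% higher throughput on that work.)

/* Rough payback sketch for the "hardware vs labour" argument above.
 * All inputs are assumptions for illustration, not measured figures. */
#include <stdio.h>

int main(void) {
    const double extra_cost      = 1000.0;  /* price premium of the faster CPU, USD     */
    const double labour_per_hour = 50.0;    /* fully loaded cost of one employee, USD/hr */
    const double hours_waiting   = 2.0;     /* hours/day spent waiting on CPU-bound work */
    const double speedup         = 0.20;    /* 20% higher throughput on that work        */

    /* Time, and therefore money, saved each working day. */
    double hours_saved_per_day = hours_waiting * (speedup / (1.0 + speedup));
    double saved_per_day       = hours_saved_per_day * labour_per_hour;

    printf("Saved per day: $%.2f\n", saved_per_day);
    printf("Break-even after ~%.0f working days\n", extra_cost / saved_per_day);
    return 0;
}

Under those assumptions the premium is recovered in roughly 60 working days; change the inputs and the conclusion changes with them.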
  • vladx - Tuesday, September 26, 2017 - link

    Indeed the difference in price is pretty much negligible in a professional setting.
