In the past couple of weeks, we have been re-testing and re-analysing our review of the second-generation Ryzen 2000 series. The extra time spent writing and looking at the results and the state of the market led me down some interesting lines of thought about how the competitive landscape is set to look over the next 12-18 months.

Based on our Ryzen 2000-series review, it was clear that Intel’s 8-core Skylake-X product is not up to the task. The Core i7-7820X wins in memory-bandwidth-limited tests because it has quad-channel memory against dual-channel competition, but it falls behind in almost every other test and costs almost double the other chips in benchmarks where the results are equal. It also only has 28 PCIe lanes, rather than the 40 that this class of chip had two generations ago, or the 60 that AMD puts on its HEDT Threadripper processors.
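
To put rough numbers on that bandwidth advantage, here is a minimal sketch of peak theoretical memory bandwidth, assuming DDR4-2666 on both platforms purely for illustration (not the exact review configuration):

```python
# Peak theoretical memory bandwidth: channels x transfer rate x bytes per transfer.
# DDR4-2666 is an illustrative assumption, not the exact review configuration.
TRANSFER_RATE_MT_S = 2666     # mega-transfers per second
BYTES_PER_TRANSFER = 8        # one 64-bit channel moves 8 bytes per transfer

def peak_bandwidth_gb_s(channels: int) -> float:
    return channels * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000

print(f"Dual channel: {peak_bandwidth_gb_s(2):.1f} GB/s")   # ~42.7 GB/s
print(f"Quad channel: {peak_bandwidth_gb_s(4):.1f} GB/s")   # ~85.3 GB/s
```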

Intel uses its monolithic low-core-count (LCC) Xeon silicon, which has 10 cores in its floor plan, for the 6, 8, and 10-core Skylake-X processors. AMD is currently highly competitive at 8 cores in the consumer space, at a much lower price point, making it hard for Intel to justify its 8-core Skylake-X design. Intel is also set to launch 8-core mainstream processors later this year, and is expected to extend its consumer ring-bus design from six cores to eight cores to do so, rather than transpose the 8-core LCC design onto the latest Coffee Lake microarchitecture updates.

Because of all this, I am starting to be of the opinion that we will not see Intel release another LCC Xeon die in the high-end desktop space in the future. AMD’s Threadripper HEDT processors sit mainly at 12 and 16 cores, and we saw Intel ‘had to’* release its mid-range core-count silicon design (which Intel calls high core count, or HCC) to compete.

*Officially, Intel doesn’t consider its launch of 12-18 core Core i7/Core i9 processors a ‘response’ to AMD launching 16-core Threadripper processors. Many in the industry, given the way the information came to light piecemeal and without a unified message, disagree.

In this high-end desktop space, looking to the future, AMD is only ever going to push higher and harder, and AMD has room to grow. The Infinity Fabric, between different dies on the same package, is now a tried and tested technology, allowing AMD to scale out its designs in future products. The next product on the block is Threadripper 2, a minor update over the original Threadripper but built on 12nm, presumably with higher frequencies and better latencies as well. We expect to see a similar 3-10% uplift over the last generation, and it is likely to top out at 16 cores in a single package when it arrives later this year.


A der8auer delid photo

With AMD turning the screw, especially with rumors of more high-performance cores in the future, several things are going to have to happen at Intel for it to compete:

  1. We will only see HCC-based processors for HEDT to begin with,
  2. The base LCC design will be relegated to low-end Xeons,
  3. Intel will design its next big microarchitecture update with EMIB* in mind, and
  4. To compete, Intel will have to put at least two dies on a single package.

*EMIB: Embedded Multi-Die Interconnect Bridge, essentially a way to connect two chips at high bidirectional speed without a bulky full-size interposer, by embedding a micro-interposer in the package substrate. We currently see this technology on Intel’s Core with Radeon RX Vega (‘Kaby Lake-G’) processors in the latest Intel NUC.

The next generation of server-class Xeon processors, called Cascade Lake-SP, is expected either this year or early next (Intel hasn’t stated which), and we believe it to be a minor update over the current Skylake-SP. Thus for CL-SP, options (1) and (2) could happen then. If Intel takes the mainstream Coffee Lake platform up to 8 cores, the high-end desktop is likely to start at 10 cores. The simple way to do this is to take the HCC silicon design (which could go up to 18 cores) and cut it down as necessary for each processor. Unless Intel updates the LCC design to 12 cores (not really feasible given the way the new inter-core mesh interconnect works, image below), Intel should leave the LCC silicon for the low-core-count Xeons and only put HCC chips in the high-end desktop space.


Representation of Intel's Mesh topology for its SP-class processors

Beyond CL-SP, for future generations, options (3) and (4) are the smarter paths to take. EMIB adds packaging cost, but using two smaller dies should have a knock-on effect of better yields and a more cost-effective implementation. Intel could also leave out EMIB and use a standard intra-package connection across the substrate, as AMD does.
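
To illustrate the yield side of that argument, here is a minimal sketch using a simple Poisson defect-density model; the defect density and die areas below are illustrative assumptions, not Intel’s actual figures:

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas are illustrative assumptions only.
DEFECTS_PER_CM2 = 0.2

def die_yield(area_cm2: float) -> float:
    return math.exp(-DEFECTS_PER_CM2 * area_cm2)

big_die = 4.8            # one large monolithic die, in cm^2 (assumed)
small_die = big_die / 2  # two half-size dies on one package

print(f"Monolithic die yield: {die_yield(big_die):.1%}")    # ~38%
print(f"Half-size die yield:  {die_yield(small_die):.1%}")  # ~62%
# Known-good smaller dies can be binned before packaging, so a single defect
# only scraps half the silicon area, improving the cost per good package.
```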

But one question is whether Intel’s current library of interconnects, i.e. the ones that are competitors or analogues to AMD’s Infinity Fabric, is up to the task. Intel currently uses its UPI technology to connect processors in two-socket, four-socket, and eight-socket platforms. Intel also uses it in the upcoming Xeon+FPGA products to combine two chips in a single package via an intra-package connection, but this comes at the expense of limiting those Xeon Gold processors to two sockets rather than four (a consequence of the Xeon Gold design having only three UPI links). We will have to see whether Intel can appropriately migrate UPI (or another technology) across EMIB and over multiple dies in the same package. In Intel’s favor, those dies might not need to be identical, unlike AMD’s, but as mentioned, AMD already has its Infinity Fabric in the market and selling today.
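
As a back-of-the-envelope sketch of that link-budget trade-off: the three-link figure comes from the paragraph above, while treating the on-package FPGA as consuming one UPI link is my assumption for illustration.

```python
# A fully connected (glueless) multi-socket topology needs each CPU to have a
# direct link to every other socket, i.e. (sockets - 1) links per CPU.
def links_needed_per_cpu(sockets: int) -> int:
    return sockets - 1

XEON_GOLD_UPI_LINKS = 3   # per the paragraph above
FPGA_LINKS = 1            # assumption: the on-package FPGA occupies one UPI link

remaining = XEON_GOLD_UPI_LINKS - FPGA_LINKS
for sockets in (2, 4):
    needed = links_needed_per_cpu(sockets)
    verdict = "OK" if remaining >= needed else "not possible"
    print(f"{sockets}-socket fully connected: need {needed}, have {remaining} -> {verdict}")
```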

The question is whether Intel has had this in mind. We have seen ‘leaks’ in the past of Intel combining two reasonably high core count chips into a single package, but we have never seen such products in the market. If these designs are floating around Intel, which I’m sure they are, are they only for 10nm? Given the 10nm delays, is Intel still waiting it out, or will it back-port the design to 14nm as those delays grow?

Intel’s Dr. Murthy Renduchintala, in a recent JP Morgan investment call, was clear that 10nm high-volume manufacturing is set for 2019 (he didn’t say when in the year), but that Intel is learning how to get more design wins within a node rather than waiting for new ones. I would not be surprised if this is one project that gets brought to 14nm in order to stay competitive.

If Intel hasn’t done it by the time AMD launches Zen 2 on 7nm, the 10-15 year one-sided seesaw will tip the other way in the HEDT market.

Based on previous discussions with one member of the industry, I do not doubt that Intel might still win on absolute, raw, money-is-no-object performance with its best high-end $10k+ parts. They are very good at that, and they have the money and expertise for these super-halo, super-high-bin monolithic parts. But if AMD makes the jump to Zen 2 and 7nm before Intel comes to market with a post-Cascade Lake product on 10nm, then AMD is likely to have the better, more aggressive, and more investment-friendly product portfolio.

Competition is good.

Comments

  • Arbie - Friday, June 1, 2018 - link

    Isn't the phrase you wanted "cost-no-object"? Or is "money-no-cost" used in Britain?
  • CajunArson - Friday, June 1, 2018 - link

    "Intel also uses it in the upcoming Xeon+FPGA products to combine two chips in a single package using an intra-package connection, but it comes at the expense of limiting those Xeon Gold processors down to two sockets rather than four"

    Oh yeah, Intel is SO FAR BEHIND AMD because their on-package FPGA products are "cripplingly limited" to the exact same number of sockets that the highest-end AMD Zen2 based Epyc servers will support when they finally launch.

    Oh, and those servers won't exactly have FPGAs or anything other than vanilla x86 cores on them either.

    Your fanboy is showing, Cutress.
  • MrSpadge - Friday, June 1, 2018 - link

    Please reread. Ian's statement is completely judgement free. And it's true: this design is "limiting" those Xeons to 2 rather than 4 socket configurations in the truest sense of the word: they simply can't be used for 4 sockets, so they're limited. That's the cost Intel currently has to pay for a multi chip package.

    Nowhere does Ian say this would be bad or inferior to AMD.
  • Alexvrb - Friday, June 1, 2018 - link

    You have to lick Intel's boots or else you're an AMD fanboy. PROVE ME WRONG.
  • close - Monday, June 4, 2018 - link

    Ian is certainly no AMD fanboy. Don't forget that he "forgot" to cover the AMD Threadripper/EPYC launch last year for more than 2 weeks. Which is about 2 weeks longer than any other tech website worth mentioning.

    And when he finally covered it along with some other AMD review the Anandtech front page was immediately flooded with more than 2 dozen meaningless press releases of random Intel motherboard launches and other such irrelevant news that pushed the AMD articles to the bottom in less than half a day.

    So definitely not an AMD fanboy. If anything he's being a little soft on Intel by not pointing out that one of Intel's marching hymns was that they cater for the 4S market. So not being able to do that leaves them in the unenviable position of being substantially more expensive than AMD while catering to the exact same use cases.

    Fortunately there's always some "incentive" program just round the corner to keep OEMs from jumping too quickly on the AMD bandwagon until Intel has something to respond with.

    And I'm sure they will. Contrary to what many editors on plenty of tech sites insist on claiming, the "improvement pool" isn't tapped out. Fabrication might be more challenging, but on the architecture side there's plenty of room to improve. And Intel was always able to do that, just not willing due to lack of competition.
  • Vayra - Monday, June 4, 2018 - link

    What arguments do you have to support the statement that the improvement pool is not tapped out then?

    The reason CPU performance and IPC bumps have stagnated is certainly not because the pool is full, and AMD releasing Zen with a very similar IPC supports that idea. So you must have one hell of a source to say otherwise right now.
  • close - Tuesday, June 5, 2018 - link

    That's not how the burden of proof works. So I expect all those Ziff Davis "journalists" who claimed "IPC can't improve" or "it's too expensive to put more than 4-6 cores on a die" to support the claims, not for everyone else to try to rebuff them.

    Time has proven again and again that the reason for stagnation is almost always lack of competition and insisting on profit margins.

    Plus there are fewer physical limitations to how well you can design a CPU architecture than, say, building atom-scale transistors.
  • FullmetalTitan - Saturday, June 2, 2018 - link

    That statement had nothing at all to do with AMD though. It was just an observation that Intel sacrificed external communication lanes to set up the intra-package connect with the FPGA, when they had the option of a design tweak to maintain the same count of UPI connections externally.

    The closest that ever gets to EPYC is that the 2-socket solution for EPYC servers follows a similar mentality of using Infinity Fabric lanes for socket-to-socket communication, such that single-socket and dual-socket EPYC platforms have the same number of external lanes exposed to the system.
  • FreckledTrout - Friday, June 1, 2018 - link

    Thanks Ian, I really liked this article. Intel should have killed LCC on HEDT say 2-3 years ago, but then they "should" have also had 6-8 core mainstream parts. With no competition, the milking of 4-core 14nm mainstream parts persisted far longer than it would have with some healthy competition.

    In our datacenter the use of the halo / very large systems is fading away rather quickly. We are moving to cloud native tech where the mantra is lots of smaller commodity machines in clusters for scaling and availability. There are very few use cases anymore for very large single server systems. I think this is going to bode very well for AMD. I suspect when Zen 2 lands the big vendors like Amazon, Microsoft, Google will be all in on AMD especially if they beat Intel to 10nm.
  • euskalzabe - Friday, June 1, 2018 - link

    I find your choice of the exculpatory paragraph "*Officially Intel doesn’t consider..." quite interesting. This is an editorial; like any article published on Anandtech, it's your opinion and analysis, so the fact that you felt you had to placate Intel by saying that this "response" isn't official to them tells me a lot about how vulnerable you feel against them.

    What does any reader here care about what Intel considers official or not? We care about their hardware, not what they want their PR to look like. You would never say that AMD's Vega was not officially a "response" to the GTX 1080, because it's obvious that it is; it's competition. The same situation applies to Intel vs AMD, so your explanatory paragraph comes across as plainly bizarre.

    If anything, it points to Intel's menacing position in the market, if it makes you feel like you can't state your opinion on your own website without clarifying for them and walking on eggshells. Further, it reinvigorates my preference to buy AMD (and I say that while still annoyed I had to buy an i5-8400 for my last build that I can't wait to replace, and feel stupid I traded my 480 for a 1060, although I did make money thanks to the mining craze).
