Conclusion & End Remarks

Today’s launch of the new 3rd gen Xeon Scalable processors is a major step forward for Intel and the company’s roadmap. Ice Lake SP had been baking in the oven for a very long time: originally planned for a 2020 release, it only entered production this January, so finally seeing the chips in silicon and in hand has been a relief.

Generationally Impressive

Technically, Ice Lake SP is an impressive and major generational leap for Intel’s enterprise line-up. Manufactured on a new 10nm process node, it brings a new core microarchitecture, faster memory with more memory channels, PCIe 4.0, new accelerator capabilities and VNNI instructions, and security improvements – and these are just the tip of the iceberg of what Ice Lake SP brings to the table.

In terms of generational performance uplifts, we saw some major progress today with the new Xeon 8380. With 40 cores at a higher TDP of 270W, the new flagship chip is a veritable beast with large increases in performance in almost all workloads. Major architectural improvements such as the new memory bandwidth optimisations are amongst what I found to be most impressive for the new parts, showcasing that Intel still has a few tricks up its sleeve in terms of design.

This being Intel’s first super-large 10nm chip design, efficiency was one of the big unknowns for the new generation line-up. On the Xeon 8380, a 40-core part at 270W, we saw a +18% increase in performance per watt compared to the 28-core 205W Xeon 8280. This grew to a +36% perf/W advantage when limiting the ICX part to 205W as well. On the other hand, our mid-stack Xeon 6330 sample showed very little advantage over the Xeon 8280, even though both are 28-core 205W designs. Given this mix of good and bad results, we’ll have to delay a definitive verdict on the process node improvements until we can test more SKUs, as the current variations are quite large.
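The perf/W comparisons above come down to simple arithmetic. As a minimal sketch, here is the shape of the calculation – note that the aggregate scores below are hypothetical placeholders chosen only so the ratio matches the +18% figure quoted above; they are not the review's actual measurements, and only the TDP values come from the text:

```python
def perf_per_watt(score: float, tdp_w: float) -> float:
    """Performance-per-watt metric: aggregate score divided by TDP. Higher is better."""
    return score / tdp_w

def uplift(new: float, old: float) -> float:
    """Relative improvement of `new` over `old`, e.g. 0.18 means +18%."""
    return new / old - 1.0

# Hypothetical scores (NOT measured data); TDPs are the parts' rated values.
xeon_8280 = perf_per_watt(score=100.0, tdp_w=205.0)   # 28-core baseline
xeon_8380 = perf_per_watt(score=155.7, tdp_w=270.0)   # 40-core flagship

print(f"perf/W uplift: {uplift(xeon_8380, xeon_8280):+.0%}")  # → +18%
```

The same helper with the ICX part power-limited to 205W would yield the +36% figure; the point is simply that a higher TDP part can still come out ahead once performance is normalised to power.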

Per-core and single-thread performance of the new parts don’t quite achieve what I imagine Intel would have hoped for from the design’s IPC gains alone. The IPC gains are there and they’re notable; however, the new parts also lose out on frequency, meaning actual performance doesn’t move much, although we did see smaller increases. Interestingly enough, this is roughly the same conclusion we came to when we tested Intel's Ice Lake notebook platform back in August 2019.

The Competitive Hurdle Still Stands

As impressive as the new Xeon 8380 is from a generational and technical standpoint, what really matters at the end of the day is how it fares against the competition. I’ll be blunt here: nobody really expected the new ICL-SP parts to beat AMD or the new Arm competition – and they didn’t. The competitive gap had been gigantic, with silly scenarios in which competing 1-socket systems would outperform Intel’s 2-socket solutions. Ice Lake SP gets rid of those more embarrassing situations and narrows the performance gap significantly; however, the gap remains, and it is still undeniable.

Our access for today’s review was limited to the flagship Xeon 8380 and the mid-stack Xeon 6330; in the current competitive landscape, both chips lose out in absolute performance as well as price/performance compared to AMD’s line-up.

Intel has been pushing the software optimisation side of things very hard, trying to differentiate itself with novel technologies such as PMem (Optane DC persistent memory, essentially Optane memory modules), which we unfortunately didn’t have time to cover for this piece. Indeed, we saw a larger focus on “whole system solutions” which take advantage of Intel’s broader product portfolio strengths in the enterprise market. The push for the new accelerator technologies means Intel needs to work closely with partners and optimise public codebases to take advantage of these non-standard solutions, which might be a hurdle for deployments such as cloud services where interoperability is important. While the theoretical gains can be large, anyone rolling a custom local software stack might see only limited benefit unless they are already experts with Intel's accelerator portfolio.

There’s also the looming Intel roadmap. While we are delighted to finally see Ice Lake SP reach the market, Intel is promising the upcoming Sapphire Rapids chips for later this year, on a new platform with DDR5 and PCIe 5.0. Intel is set to have Ice Lake Xeon and Sapphire Rapids Xeon in the market concurrently, with the idea of managing both, especially for customers that adopt leading-edge hardware as soon as it is available. It will be interesting to see the scale of the Ice Lake roll-out with this in mind.

At the end of the day, Ice Lake SP is a success. Performance is up, and performance per watt is up. I'm sure that if we were able to test Intel's acceleration enhancements more thoroughly, we would be able to corroborate some of the results and hype that Intel wants to generate around its product. But even as a success, it’s not a traditional competitive success. The generational improvements are there and they are large, and as long as Intel is the market share leader, this should translate into upgraded systems and deployments throughout the enterprise industry. Still, Intel remains in a tough competitive situation overall, given the high quality of what the rest of the market is enabling.
