In the news cycle today, Intel is announcing an update to the deployment plans for its next-generation Xeon Scalable platform, known as Sapphire Rapids. Sapphire Rapids is the main platform behind the upcoming Aurora supercomputer, and is set to feature support for leading-edge technologies such as DDR5, PCIe 5.0, CXL, and Advanced Matrix Extensions. Today's announcement is Intel reaffirming its commitment to bringing Sapphire Rapids to market for wide availability in the first half of 2022; in the meantime, early customers are already operating with early silicon for testing and optimization.

In a blog post by Lisa Spelman, CVP and GM of Intel's Xeon and Memory Group, Intel is getting ahead of the news wave by announcing that additional validation time is being incorporated into the product development cycle, allowing its top-tier partners and customers to streamline optimizations and, ultimately, deployments. To that end, Intel is working with those top-tier partners today with early silicon, typically ES0 or ES1 in Intel's internal designations, with those partners helping validate the hardware against their wide-ranging workloads. As former Intel CTO Mike Mayberry stated at the 2020 VLSI conference, Intel's hyperscale partners end up testing 10-100x more use cases and edge cases than Intel can validate itself, so working with them becomes a critical part of the launch cycle.

As validation continues, Intel works with its top-tier partners on the specific monetizable goals and features they have requested, so that when the time comes for production (Q1 2022), ramp (Q2 2022), and a full public launch (1H 2022), those key partners are already benefiting from working closely with Intel. Intel has stated that as more information about Sapphire Rapids becomes public, such as at upcoming events like Hot Chips in August or Intel's own event in October, there will be a distinct focus on the benchmarks and metrics that customers rely upon for monetizable workflows, which is in part what this cycle of deployment assists with.

Top-tier partners getting early silicon 12 months in advance, and then deploying final silicon before launch, is nothing new. It happens for all server processors regardless of vendor, so by the time a product gets a proper public launch, those hyperscalers and HPC customers have already had it for six months. In that time, those relationships allow the CPU vendors to optimize the final details to which the general public and enterprise customers are often more sensitive.

It should be noted that a 1H 2022 launch for Sapphire Rapids hasn't always been the date in Intel's presentations. In 2019, Ice Lake Xeon was a 2020 product and Sapphire Rapids a 2021 product. Ice Lake slipped to 2021, but Intel was still promoting that it would deliver Sapphire Rapids to the Aurora supercomputer by the end of 2021. In an interview with Lisa Spelman in April this year, we asked about the close proximity of the delayed Ice Lake to Sapphire Rapids; Spelman stated that Intel expected a fast follow-on between the two platforms. AnandTech is under the impression that this is because Aurora has been delayed repeatedly, and that 'end of 2021' was a hard deliverable in Intel's latest contract with Argonne for the machine. At Computex 2021, Spelman announced in Intel's keynote that Sapphire Rapids would launch in 2022, and today's announcement reiterates that. We expect general availability to fall more within the end of Q2/Q3 timeframe.

It's still coming later than expected; however, it does space out the Ice Lake/Sapphire Rapids transition a bit more. Whether this constitutes an additional delay depends on your perspective: Intel contends that it is nothing more than a validation extension, whereas we are aware that others may ascribe the commentary to something more fundamental, such as manufacturing. The level of manufacturing capacity Intel has for its 10nm process, and particularly the 10nm ESF variant that Sapphire Rapids is built on, is not well known beyond the 'three ramping fabs' announced earlier this year. Intel appears to be of the opinion that it makes sense to work more closely with its key hyperscaler and HPC customers, who accounted for 50-60%+ of all Xeons sold in the previous generation, as a priority before a wider market launch, in order to focus on their monetizable workflows. (Yes, I realize I've said monetizable a few times now; ultimately it's all a function of revenue generation.)

As part of today’s announcement, Intel also lifted the lid on two new Sapphire Rapids features.

First is Advanced Matrix Extensions (AMX). AMX has technically been announced before, and there is already plenty of programming documentation about it, but today Intel is confirming that Sapphire Rapids will be the first pairing for the technology. The focus of AMX is matrix multiply, enabling more machine learning compute performance for training and inference in Intel's key 'megatrend markets', such as AI, 5G, and cloud. Also part of today's AMX disclosures is some level of performance: Intel states that early Sapphire Rapids silicon with AMX, at a pure hardware level, is delivering at least a 2x performance increase over Ice Lake Xeon silicon with AVX-512. Intel was keen to point out that this is early silicon without any additional software enhancements for Sapphire Rapids. AMX will form part of Intel's next-gen DL Boost portfolio at launch.
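To give a sense of what AMX accelerates: it operates on two-dimensional tile registers, performing low-precision dot products (e.g. int8 inputs accumulated into int32) a tile at a time. Below is a simplified NumPy sketch of that tiled computation pattern; the tile dimensions are assumptions drawn from public documentation (tiles are up to 16 rows by 64 bytes), and real AMX code uses intrinsics and an in-register byte interleaving that this model deliberately ignores.

```python
import numpy as np

# Assumed tile shapes: an int8 A-tile is 16x64, an int8 B-tile is 64x16,
# and the int32 accumulator tile is 16x16.
TILE_M, TILE_K, TILE_N = 16, 64, 16

def tiled_int8_matmul(A, B):
    """Multiply signed-int8 matrices tile by tile, accumulating into
    int32 -- the computation pattern AMX's tile dot-product hardware
    performs on its 2D tile registers (a software model, not AMX itself)."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.int32)
    for i in range(0, M, TILE_M):
        for j in range(0, N, TILE_N):
            for k in range(0, K, TILE_K):
                # Load one tile of A and one of B, widen to int32,
                # and accumulate the partial product into the C tile.
                a = A[i:i+TILE_M, k:k+TILE_K].astype(np.int32)
                b = B[k:k+TILE_K, j:j+TILE_N].astype(np.int32)
                C[i:i+TILE_M, j:j+TILE_N] += a @ b
    return C
```

The point of the tiling is data reuse: each tile of A and B is loaded once and contributes to a whole block of C, which is where the hardware gets its throughput advantage over lane-wide vector instructions like AVX-512.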

The second feature is the integration of a Data Streaming Accelerator (DSA). Intel has had documentation about DSA on the web since 2019, describing it as a high-performance data copy and transformation accelerator for streaming data from storage and memory to other parts of the system through a DMA-remapping hardware unit/IOMMU. DSA has been a request from specific hyperscaler customers, who are looking to deploy it within their own internal cloud infrastructure, and Intel is keen to point out that some customers will use DSA, some will use Intel's new Infrastructure Processing Unit, and some will use both, depending on what level of integration or abstraction they are interested in.
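The programming model behind an engine like DSA is descriptor-based: software enqueues small records describing each copy or transform, and the accelerator drains the queue asynchronously while the CPU cores do other work. The toy Python sketch below illustrates only that submit/drain pattern; the class and field names are invented for illustration and do not reflect Intel's actual descriptor format or driver interface.

```python
from dataclasses import dataclass, field

@dataclass
class CopyDescriptor:
    # Hypothetical descriptor: names a source, a destination, and a length.
    # Real DSA descriptors also carry operation codes, flags, and addresses.
    src: bytes
    dst: bytearray
    length: int

@dataclass
class WorkQueue:
    pending: list = field(default_factory=list)

    def submit(self, desc: CopyDescriptor) -> None:
        # The CPU enqueues work and moves on immediately.
        self.pending.append(desc)

    def drain(self) -> None:
        # Stands in for the accelerator processing descriptors
        # independently of the CPU cores.
        for d in self.pending:
            d.dst[:d.length] = d.src[:d.length]
        self.pending.clear()
```

The win is that the cores pay only the cost of building and submitting descriptors; the bulk data movement happens off to the side, which is exactly the kind of infrastructure offload the hyperscalers asked for.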

Yesterday we learned that Intel will be offering versions of Sapphire Rapids with integrated HBM to every customer, with the first deployment of those parts going to Aurora. As mentioned, Intel is confirming that it will disclose more details at Hot Chips in August and at its own Innovation event in October. According to today's press release, some details about the architecture may also come before then.

34 Comments

  • ThereSheGoes - Tuesday, June 29, 2021

    Sounds like someone is drinking the marketing cool aid. Late is late is late.
  • at_clucks - Tuesday, June 29, 2021

    No, they're launching the totally marketable tech that boosts performance right up until they finish selling the generation, discover that was a massive security hole, deactivate it wiping out years worth of performance increases, and then launch the new generation Crapfire Rapids CPU which will bring some totally marketable tech that boosts performance right up until they finish selling the generation...
  • lilo777 - Wednesday, June 30, 2021

    Too many folks here recently have been working hard trying to turn AnandTech into wccftech.
  • at_clucks - Wednesday, June 30, 2021

    Is it working?
  • at_clucks - Wednesday, June 30, 2021

    Clearly not... as disgusting as the Disqus commenting system is, at least it has an edit button.

    On a different topic, Intel has a propensity towards touting various performance improvements that will make everything great again only to later realize they were half-assed and sacrificed security for a short lived benchmarking gain (that stuff that looks good on those marketing slides everyone eats up). The customers are left holding the castrated chips that they paid top dollar for, and are encouraged to go for the new generation which has the next generation of various performance improvements that will make everything great again only to later realize... But I'm sure AMX and DSA on SR won't have the same fate. ;)

    https://www.anandtech.com/show/6355/intels-haswell...

    https://www.phoronix.com/scan.php?page=news_item&a...
  • mode_13h - Thursday, July 1, 2021

    > Crapfire Rapids

    Hilarious.

    I'll probably start referring to Sapphire Rapids as "Tire Fire Rapids" or maybe "Sapphire Tar Pits", if it starts to go the same way as Ice Lake SP.
  • Gondalf - Wednesday, June 30, 2021

    Being better than AMD offering by a wide margin, it is not late.
    AMD have not an answer since Zen 4 will come out at the end next year.
  • mode_13h - Thursday, July 1, 2021

    AMD could surprise everybody with a Zen3 EPYC that has stacked SRAM cache and new IO die. Not saying they will, but there are options besides Zen4.
  • Qasar - Thursday, July 1, 2021

    " Being better than AMD offering by a wide margin, it is not late. " until its released, its just opinion and speculation, specially if gondalf posts anything about it, or intel, is usually just anti amd BS from him anyway.
  • arashi - Tuesday, July 20, 2021

    Gondaft is just Dylan Patel's smurf.
