As part of AMD's Q1'2024 earnings announcement this week, the company is offering a brief status update on some of their future products set to launch later this year. Most important among these is an update on their Zen 5 CPU architecture, which is expected to launch for both client and server products later this year.

Highlighting their progress so far, AMD is confirming that EPYC "Turin" processors have begun sampling, and that these early runs of AMD's next-gen datacenter chips are meeting the company's expectations.

"Looking ahead, we are very excited about our next-gen Turin family of EPYC processors featuring our Zen 5 core," said Lisa Su, chief executive officer of AMD, at the conference call with analysts and investors (via SeekingAlpha). "We are widely sampling Turin, and the silicon is looking great. In the cloud, the significant performance and efficiency increases of Turin position us well to capture an even larger share of both first and third-party workloads."

Overall, it looks like AMD is on track to solidify its position, and perhaps even grow its datacenter market share, with its EPYC Turin processors. According to AMD, the company's server partners are developing 30% more designs for Turin than they did for Genoa. This underscores how AMD's partners are preparing for further market share growth on the back of AMD's ongoing success, not to mention the improved performance and power efficiency that the Zen 5 architecture should offer.

"In addition, there are 30% more Turin platforms in development from our server partners, compared to 4th Generation EPYC platforms, increasing our enterprise and with new solutions optimized for additional workloads," Su said. "Turin remains on track to launch later this year."

AMD's EPYC 'Turin' processors will be drop-in compatible with existing SP5 platforms (i.e., they will come in an LGA 6096 package), which should facilitate a faster ramp and broader adoption of the platform by both cloud giants and server makers. In addition, AMD's next-generation EPYC CPUs are expected to feature more than 96 cores and a more versatile memory subsystem.

Source: AMD Q1'24 Earnings Call (via SeekingAlpha)


41 Comments


  • FreckledTrout - Monday, May 6, 2024 - link

    That isn't completely true. Intel's ability to be competitive with TSMC will improve massively. Part of the problem is that they were an internal division, so they really didn't have to operate in a lean way. I'm certain Intel will sort all of that out and the fight will become more of a technological one.
  • GeoffreyA - Thursday, May 2, 2024 - link

    So, owning fabs is some sort of p***s-measuring contest?
  • FreckledTrout - Monday, May 6, 2024 - link

    Yes. Considering there are exactly three fab companies in the world that can make these cutting-edge chips.
  • Dante Verizon - Tuesday, May 7, 2024 - link

    One*
  • deltaFx2 - Thursday, May 2, 2024 - link

    No, Intel was superior because it had volume. Volume in the client (PC) market recovered the NRE cost of CPU design, allowing the same design to be sold in servers at far lower cost than server-only players (Sun, DEC, IBM, etc.) could manage. Binning for yield and performance also requires volume. This is still true today: a mask set for a design on a sub-3nm node costs double-digit millions of dollars, and developing the IP that turns into that mask is an order of magnitude more expensive. Then there are fab/packaging/test costs that come down with volume.
    Like Intel, AMD has volume. Arm design efforts are distributed across several vendors, each of whom is paying this NRE cost independently. For cloud vendors, the additional cost may be offset by not having to pay Intel/AMD profit margins, and may help them negotiate better deals for the x86 parts they do procure. But even for them, volume matters, or these efforts will get killed, especially considering the new focus on efficiency at big tech.
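The amortization argument in the comment above can be sketched with a few lines of Python. All of the dollar figures below are hypothetical round numbers chosen for illustration, not actual industry data:

```python
# Illustrative NRE amortization: fixed design cost spread over units shipped,
# showing why per-chip cost falls sharply with volume.
# All figures are hypothetical, not sourced from AMD, Intel, or TSMC.

def per_unit_cost(nre_dollars: float, marginal_cost: float, volume: int) -> float:
    """Total cost per chip once the fixed NRE is amortized over the volume."""
    return nre_dollars / volume + marginal_cost

NRE = 500_000_000       # design + mask set + validation (hypothetical)
MARGINAL = 100          # per-chip wafer/packaging/test cost (hypothetical)

for volume in (1_000_000, 10_000_000, 100_000_000):
    cost = per_unit_cost(NRE, MARGINAL, volume)
    print(f"{volume:>11,} units -> ${cost:,.2f} per chip")
```

Under these made-up numbers, a vendor shipping 100M units pays roughly $105 per chip while one shipping 1M units pays $600, which is the commenter's point about why each Arm vendor paying its NRE independently is at a structural disadvantage.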
  • Blastdoor - Tuesday, May 7, 2024 - link

    Recovering the cost of the design is *trivial* compared to recovering the cost of developing the manufacturing process and building/equipping the fabs. So I agree that volume was key, but volume was key because it enabled the economies of scale necessary to support the R&D and capital costs of manufacturing. Intel has rarely had the best designs, but up until the mid-to-late 2010s they had the best manufacturing process, and that almost always helped them overcome design deficiencies.

    When it comes to the volume needed to sustain manufacturing R&D and capital costs, AMD falls far short. That's why AMD doesn't own fabs anymore. That worked out ok for AMD for a while because they paired a better x86 design with TSMC's manufacturing to beat weak Intel designs on weak manufacturing. But Intel has improved designs and is on the cusp of pulling ahead in manufacturing. If Intel actually pulls it off, I think AMD is toast.

    TSMC has the volume in spades, thanks largely to Apple.
    Samsung and Intel are barely hanging in there. Part of the reason Intel is hanging in there is US government support. But if Intel can parlay the combo of traditional x86, AI, and foundry business into sufficient volume, then they can become self-sustaining and once again take the lead on process.
  • Drivebyguy - Thursday, May 2, 2024 - link

    AMD has had ARM designs for servers before and could do so again. They're not married to x86.

    Also, part (by no means all) of Apple's success with high-performance ARM designs depends on the integration of memory and CPU in the same package, something which won't be a mainstream answer in servers.

    Don't get me wrong, I've seen what even the mid-range M3s can do and been slightly terrified. (Comparing a gaming notebook with an NVIDIA GPU and a top-end AMD CPU against an M3 in a colleague's work laptop, he got maybe 8x the performance running an LLM locally? Anecdotal, and I'm hazy on whether my machine was making full use of the GPU for the LLM, but still.)

    But AMD has plenty of other assets besides the x86 license and will follow the market where it goes.
  • GeoffreyA - Friday, May 3, 2024 - link

    The M3 has a Neural Engine, from which the gains are likely coming.
  • meacupla - Friday, May 3, 2024 - link

    The M3's NPU is around 18 TOPS across all the variants.
    It's on the slower side, and is even beaten by Apple's own A17, which does 35 TOPS.
  • GeoffreyA - Friday, May 3, 2024 - link

    The memory bandwidth then?
