DDR5 & AMD EXPO Memory: Memory Overclocking, AMD's Way

The final major feature being introduced with the AM5 platform is DDR5 memory support. Like AM4, which was introduced alongside AMD’s shift from DDR3 to DDR4, socket AM5 is being rolled out to bring DDR5 support to the platform.

In fact, socket AM5 only brings DDR5 support. Unlike rival Intel, who opted to support both DDR4 and DDR5 memory with their Alder Lake (12th Gen Core) CPUs, AMD is only supporting DDR5 on the AM5 platform. This is a true platform limitation, and there is no going back.

Like other engineering decisions, this marks a trade-off on AMD’s part. In the short term, it is going to drive up the total cost of an AM5 system relative to a theoretical AM5 system with DDR4 memory; DDR5 simply costs more right now. But at the same time, it simplifies things over the long run of the platform, especially since AMD is planning on supporting it through 2025. There will be no such thing as a DDR4 AM5 motherboard, and AMD need not bake DDR4 support into any of the Ryzen memory controllers.

Ultimately, with AMD starting the DDR5 transition roughly a year after Intel, the company’s expectations are that DDR5 prices are going to continue falling fast enough that they’re going to reach parity with DDR4 before too long. So why implement DDR4 support if it’s only going to be necessary for a short period of time?

As for the memory speeds and capacities supported, while AM5 enforces the use of DDR5, it’s ultimately the individual memory controllers that determine the rest. For AMD’s Ryzen 7000 desktop processors, which are based on the Zen 4 Raphael design, the chips offer official (JEDEC) support for speeds of up to DDR5-5200 in a 1 DIMM Per Channel (DPC) configuration. But, like all other DDR5 products we’ve seen thus far, 2 DPC comes with a significant penalty; in that case the maximum JEDEC speed is reduced to just DDR5-3600.

So as was the case with Intel’s Alder Lake platform, system builders are going to need to put a lot more thought into how they go about adding memory, and how they’re going to handle future memory expansion, if at all. While Ryzen 7000 can drive a 2 DPC/4 DIMM setup, you’re going to lose 31% of your memory bandwidth if you go that route. So for peak performance, it’ll be best to treat Ryzen 7000 as a 1 DPC platform.
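
To put that figure in context, peak theoretical DDR5 bandwidth is just the transfer rate multiplied by the width of the 128-bit (16-byte) memory bus, so the penalty for dropping from DDR5-5200 at 1 DPC to DDR5-3600 at 2 DPC can be worked out directly. A minimal sketch (illustrative Python, not from AMD’s documentation):

# Peak theoretical DDR5 bandwidth: transfers per second x bus width in bytes.
# Ryzen 7000 exposes a 128-bit (16-byte) memory bus, split into 32-bit sub-channels.
def peak_bandwidth_gbs(transfer_rate_mts, bus_bytes=16):
    return transfer_rate_mts * 1e6 * bus_bytes / 1e9  # GB/s

one_dpc = peak_bandwidth_gbs(5200)  # DDR5-5200, 1 DIMM per channel
two_dpc = peak_bandwidth_gbs(3600)  # DDR5-3600, 2 DIMMs per channel

print(f"1 DPC: {one_dpc:.1f} GB/s, 2 DPC: {two_dpc:.1f} GB/s")
print(f"Bandwidth given up at 2 DPC: {1 - two_dpc / one_dpc:.0%}")  # ~31%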

Meanwhile, for system builders looking at reliability and data integrity as opposed to performance, AMD has confirmed that Ryzen 7000 also supports ECC memory. Unfortunately, the compatibility situation is essentially unchanged from the AM4 platform, which is to say that while the CPU supports ECC memory, it’s going to be up to motherboard manufacturers to properly validate it against their boards. For boards that aren’t doing validation, AMD can’t guarantee ECC is going to work. Though it’s largely a moot point for today’s launch anyhow: while DDR5 ECC UDIMMs exist, they are in very short supply.

Also, while we didn’t expect it to be supported to begin with, AMD has confirmed that Ryzen 7000 won’t support RDIMMs/LRDIMMs. So it’s unbuffered DIMMs all the way.

Overclocking Memory Ratios

JEDEC standard speeds aside, the Ryzen 7000 series will also support memory overclocking. And thanks to a combination of the switch to DDR5 memory, changes to AMD’s memory controllers, and changes to AMD’s power delivery infrastructure, the rules have changed.

On Ryzen 5000, the ideal configuration for memory overclocking was to run the fabric clock, memory controller, and memory clock all in sync at the same frequencies. This made DDR4-3600 the typical “sweet spot” for the platform, as going faster would typically require running parts of the CPU out of sync so that they could stay within their own attainable clockspeeds.

But for Ryzen 7000, AMD has loosened things up a bit. Ryzen 7000 systems can still get improved memory performance even when the fabric clock is allowed to go out of sync with the memory controller. As a result, most overclockers can just leave that clock set to Auto, and instead focus on keeping the memory and memory controller clocks in sync in a 1:1 ratio.

Specifically, when the fabric clock is set to Auto, it typically runs at 2000MHz. Meanwhile the memory and memory controller clocks will be running anywhere between 2400MHz and 3000MHz, depending on the speed of the RAM kit used. Ultimately, the goal for the best performance is to get the fabric clock to 2000MHz and then keep the memory/MC clock at 3000MHz or less. Otherwise, if the memory clock exceeds 3000MHz (DDR5-6000), the memory controller will drop to a 1:2 ratio with the memory clock, which will incur a performance hit.
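
Put another way, the relationship between the advertised DDR5 rating, the actual memory clock, and the memory controller ratio follows a simple rule. The sketch below models the behavior described above; it is a simplification for illustration, not AMD’s actual firmware logic:

# Simplified model of the Ryzen 7000 clock relationships described above
# (illustrative only, not AMD firmware logic).
FCLK_AUTO_MHZ = 2000        # typical fabric clock when left on Auto
UCLK_1TO1_LIMIT_MHZ = 3000  # past this, the memory controller drops to 1:2

def clock_plan(ddr5_rating):
    memclk = ddr5_rating / 2                 # DDR5-6000 -> 3000MHz memory clock
    if memclk <= UCLK_1TO1_LIMIT_MHZ:
        uclk, ratio = memclk, "1:1"
    else:
        uclk, ratio = memclk / 2, "1:2"      # expect a latency/performance hit here
    return {"FCLK": FCLK_AUTO_MHZ, "MEMCLK": memclk, "UCLK": uclk, "ratio": ratio}

print(clock_plan(6000))  # sweet spot: memory controller stays in sync at 1:1
print(clock_plan(6400))  # beyond DDR5-6000: controller falls back to 1:2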

It should be noted that AMD’s idea of optimal memory speeds here is high memory clocks with low memory latencies, rather than pushing the absolute fastest memory clocks. On good chips it should be possible to drive Ryzen 7000 at speeds above DDR5-6000, but the latency hit from things falling out of sync will be significant – enough so that it’s likely going to be a performance regression for most workloads.

Overclocking with EXPO

But most users doing memory overclocking are likely to simply rely on factory overclocked memory kits with pre-programmed profiles. And this is where AMD is rolling out their own standard for those memory kit profiles: EXPO.

AMD EXPO stands for EXtended Profiles for Overclocking, and is designed to provide users with high-end memory overclocking when used in conjunction with AMD's Ryzen 7000 series processors. It fills the same role as Intel's preexisting XMP (Extreme Memory Profile) technology found on most consumer-level memory kits designed for desktop Intel platforms, but as an open standard with an emphasis on providing the best settings for AMD platforms.

The premise of AMD EXPO is that it's a one-click DDR5 overclocking function for AM5 motherboards. On the surface, EXPO is essentially a set of XMP-like profiles specifically designed for AMD's Ryzen 7000 (Zen 4) processors.

The major impetus behind EXPO is two-fold. The first is simple: Intel doesn’t share XMP. There’s no published specification for it, and while AMD has reverse engineered it to some extent, they can’t be sure of what’s going on (especially with DDR5/XMP 3.0). So rather than deal with the potential compatibility issues and inefficiencies, they’re just going their own way. The second benefit of EXPO is that memory kit manufacturers can then create memory profiles that are AMD-specific, potentially using tighter sub-timings that are possible in conjunction with AMD processors, but not with Intel’s.

It is worth noting that, despite the existence of EXPO, DDR5 memory with XMP profiles will be supported on Ryzen 7000 platforms. Still, AMD is very clearly pushing customers towards using EXPO DIMMs to get the best performance out of their systems.

As for EXPO itself, like most other AMD standards, the company is making this an open and royalty free standard (XMP is believed to have royalties, but how much has never been officially published). So memory kit partners will be able to implement EXPO profiles without the blessing of AMD, or needing to pay AMD for the privilege.

With that said, EXPO will be a self-certification program. So AMD is not charging anything for it, but at the same time they are not doing much in the way of extra work to validate support for it.

In lieu of that, memory kit manufacturers will be required to publish their self-certification reports. These reports will lay out in detail what memory was tested on what systems, and with what timings and voltages. The idea here being that openness goes both ways, and that buyers should be able to see the complete configuration settings a profile calls for. The detailed data is in some respects overkill, but it also means that if memory kit manufacturers opt for a high-clocked kit with tight primary timings and loose secondary timings, potential customers will be able to see the full set of timings in advance.
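
As a rough illustration of the kind of information one of these reports exposes, a single entry might capture something like the following. The field names and values here are hypothetical, chosen only to show the shape of the data, and are not AMD's actual report format:

# Hypothetical example of what a single self-certification entry might record.
# Field names and values are illustrative, not AMD's actual report schema.
expo_certification_entry = {
    "kit": "Example 2 x 16GB DDR5 kit",           # placeholder, not a real SKU
    "tested_platforms": ["Ryzen 9 7950X + X670E motherboard"],
    "profile": {
        "speed": "DDR5-6000",
        "primary_timings": "30-38-38-96",         # CL-tRCD-tRP-tRAS, illustrative
        "voltages": {"VDD": 1.35, "VDDQ": 1.35},  # volts, illustrative
    },
}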

As with manual memory overclocking, AMD expects the sweet spot for EXPO kits to be DDR5-6000. In an example profile provided for a 2 x 16GB G.Skill memory kit, that kit runs at DDR5-6000 CL30, with a VDD voltage of 1.35v. It’s kits like these that AMD expects to provide the best performance, offering rather low memory latencies in conjunction with a more modest increase in memory frequency.
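
The “low latency” half of that equation is easy to quantify: absolute CAS latency in nanoseconds is the CL value divided by the memory clock, which in turn is half the DDR5 transfer rate. A quick sketch comparing AMD’s DDR5-6000 CL30 example against a slower kit (the CL40 figure is an assumption for illustration, not from AMD’s materials):

# First-word CAS latency in nanoseconds: CL cycles / memory clock (MHz) * 1000.
# The memory clock is half the DDR5 transfer rate.
def cas_latency_ns(transfer_rate_mts, cl):
    return cl / (transfer_rate_mts / 2) * 1000

print(f"DDR5-6000 CL30: {cas_latency_ns(6000, 30):.1f} ns")  # AMD's EXPO example kit
print(f"DDR5-5200 CL40: {cas_latency_ns(5200, 40):.1f} ns")  # illustrative comparison kit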

The specific performance gains will vary depending on the workloads. But for gaming tasks, some of the most DRAM latency-sensitive workloads, AMD is touting performance gains of up to 11% at 1080p. Otherwise, at more GPU-limited resolutions and settings, the gains will be understandably lower.


Comments

  • Tomatotech - Friday, September 30, 2022 - link

    Nice idea but you’re swimming against the flow of history. The trend is always to more tightly integrate various components into smaller and smaller packages. Apple have moved to onboard RAM in the same package as the CPU, which has brought significant bandwidth advantages and seems to have boosted iGPU performance to the level of low-end dGPUs.

    The main takeaway from your metaphor of the 650w dGPU with a 55w mainboard and 100-200w CPU is that high-end dGPUs are now effectively separate computers in their own right - especially as a decent one can be well over 50% of the cost of the whole PC - and are being constrained by having to fit into the PC in terms of physical space, power supply capacity, and cooling capacity.

    It’s a shrinking market on both the low end and high end for home use of dGPU, given these innovations and constraints and I don’t know where it’s going to go from here.

    Since I got optic fibre, I’ve started renting cloud based high-end dGPU and it has been amazing albeit the software interface has been frustrating at times. With symmetric gigabit service and 1-3ms ping, it’s like having it under my desk. I worked out that for unlimited hours and given the cost of electricity, it would take 10 years for my cloud rental costs to match the cost of buying and running a home high end dGPU.

    Not everyone has optic fibre of course but globally it’s rolling out year by year so the trend is clear again.
  • Castillan - Wednesday, September 28, 2022 - link

    "

    clang version 10.0.0
    clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
    24bd54da5c41af04838bbe7b68f830840d47fc03)

    -Ofast -fomit-frame-pointer
    -march=x86-64
    -mtune=core-avx2
    -mfma -mavx -mavx2
    "

    ...and then later the article says:

    "The performance increase can be explained by a number of variables, including the switch from DDR4 to DDR5 memory, a large increase in clock speed, as well as the inclusion of the AVX-512 instruction set, albeit using two 256-bit pumps."

    The problem here being that those arguments to Clang will NOT enable AVX-512. Only AVX2 will be enabled. I verified this on an AVX512 system.

    To enable AVX512, at least at the most basic level, you'll want to use "-mavx512f ". There's also a whole stack of other AVX512 capabilities, which are enabled with "-mavx512dq -mavx512bw -mavx512vbmi -mavx512vbmi2 -mavx512vl" but some may not be supported. It won't hurt to include those on the command line though, until you try to compile something that makes use of those specific features, and then you'll see a failure if the platform doesn't support those extensions.
  • Ryan Smith - Friday, September 30, 2022 - link

    Correct. AVX-512 is not in play here. That is an error in analysis on our part. Thanks!
  • pman6 - Thursday, September 29, 2022 - link

    Intel supports 8K60 AV1 decode.

    Does Ryzen 7000 support 8K60?
  • GeoffreyA - Monday, October 3, 2022 - link

    The Radeon Technology Group is getting 16K ready.
  • yhselp - Thursday, September 29, 2022 - link

    I'd love to see you investigate memory scaling on the Zen 4 core.
  • Myrandex - Thursday, September 29, 2022 - link

    The table on page four mentions "Quad Channel (128-bit bus)" for memory support. Does that mean we could have a 4 memory slot solution, with one memory module per channel, with four channel support? This way to drastically increase memory bandwidth all while maintaining those fast DDR5 frequencies?
  • Ryan Smith - Friday, September 30, 2022 - link

    No. That configuration would be no different than a 2 DIMM setup in terms of bandwidth or capacity. Slotted memory is all configured as DIMMs; as in Dual Inline Memory Modules.
  • GeoffreyA - Friday, September 30, 2022 - link

    All in all, excellent work, AMD, on the 7950X. Undoubtedly shocking performance. Even that dubious AVX-512 benchmark where Intel used to win, Zen 4 has taken command of it. However, lower your prices, AMD, and don't be so greedy. Little by little, you are becoming Intel. Don't be evil.

    Thanks, Ryan and Gavin, for the review and all the hard work. Much appreciated. Have a great week.
  • Footman36 - Friday, September 30, 2022 - link

    Yawn. I really don't see what the big fuss is about. I currently run a 5600X and was interested to see how the 7600X compared, and while it does look like a true uplift in performance over the 5600X, I would have to factor in the cost of a new motherboard and DDR5 RAM! On top of that, the comparison is not exactly apples to apples in the testing. The 7600X has a turbo speed of 5.3GHz, the 5600X 4.6GHz. The 7600X runs with DDR5-5200 and the 5600X with DDR4-3200, and the 7600X has a 105W TDP versus 65W for the 5600X. If you take a look at the final page where the 7950X is tested in ECO mode, which effectively supplies 65W instead of 105W, you lose 18% performance. If we try to do apples to apples and run the 7600X in ECO mode to match the 65W of the 5600X, then lower the boost to 4.6GHz, the performance of the two CPUs looks very similar. Perhaps not the way I should be analyzing the results, but just my observation....
