DDR5 & AMD EXPO Memory: Memory Overclocking, AMD's Way

The final major feature being introduced with the AM5 platform is DDR5 memory support. Like AM4, which was introduced alongside AMD’s shift from DDR3 to DDR4, socket AM5 is being rolled out to bring DDR5 support to the platform.

In fact, socket AM5 only brings DDR5 support. Unlike rival Intel, who opted to support both DDR4 and DDR5 memory with their Alder Lake (12th Gen Core) CPUs, AMD is only supporting DDR5 on the AM5 platform. This is a true platform limitation, and there is no going back.

Like other engineering decisions, this marks a trade-off on AMD’s part. In the short term, it is going to drive up the total cost of an AM5 system relative to a hypothetical AM5 system with DDR4 memory; DDR5 simply costs more right now. But at the same time, it simplifies things over the long run of the platform, especially since AMD is planning on supporting AM5 through 2025. There will be no such thing as a DDR4 AM5 motherboard, and AMD need not bake DDR4 support into any of its Ryzen memory controllers.

Ultimately, with AMD starting the DDR5 transition roughly a year after Intel, the company’s expectations are that DDR5 prices are going to continue falling fast enough that they’re going to reach parity with DDR4 before too long. So why implement DDR4 support if it’s only going to be necessary for a short period of time?

As for memory speeds and capacities supported, while AM5 enforces the use of DDR5, ultimately it’s the individual memory controllers that determine the rest. For AMD’s Ryzen 7000 desktop processors, which are based on the Zen 4 Raphael design, these chips offer support for official (JEDEC) speeds of up to DDR5-5200 in a 1 DIMM Per Channel (DPC) configuration. But, like all other DDR5 products we’ve seen thus far, 2 DPC comes with a significant penalty; in that case the maximum JEDEC speed is reduced to just DDR5-3600.

So as was the case with Intel’s Alder Lake platform, system builders are going to need to put a lot more thought into how they go about adding memory, and how they’re going to handle future memory expansion, if at all. While Ryzen 7000 can drive a 2 DPC/4 DIMM setup, you’re going to lose 31% of your memory bandwidth if you go that route. So for peak performance, it’ll be best to treat Ryzen 7000 as a 1 DPC platform.
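To put a number on that penalty, here is a quick back-of-the-envelope sketch of where the ~31% figure comes from, assuming peak theoretical bandwidth simply scales with the transfer rate (64 bits per DIMM channel, two channels). The function name is illustrative only:

```python
# Peak theoretical DDR5 bandwidth: transfers/sec x 8 bytes per channel x channels.
# Rates are the JEDEC maximums quoted for Ryzen 7000 at 1 DPC vs. 2 DPC.

def peak_bandwidth_gbs(mt_per_sec: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for a given DDR5 transfer rate (MT/s)."""
    return mt_per_sec * bytes_per_transfer * channels / 1000

one_dpc = peak_bandwidth_gbs(5200)   # DDR5-5200, 1 DIMM per channel
two_dpc = peak_bandwidth_gbs(3600)   # DDR5-3600, 2 DIMMs per channel

loss = 1 - two_dpc / one_dpc
print(f"1 DPC: {one_dpc:.1f} GB/s, 2 DPC: {two_dpc:.1f} GB/s, penalty: {loss:.0%}")
# -> 1 DPC: 83.2 GB/s, 2 DPC: 57.6 GB/s, penalty: 31%
```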

Meanwhile, for system builders looking at reliability and data integrity as opposed to performance, AMD has confirmed that Ryzen 7000 also supports ECC memory. Unfortunately, the compatibility situation is essentially unchanged from the AM4 platform, which is to say that while the CPU supports ECC memory, it’s going to be up to motherboard manufacturers to properly validate it against their boards. For boards that aren’t doing validation, AMD can’t guarantee ECC is going to work. Though it’s largely a moot point for today’s launch anyhow, since although DDR5 ECC UDIMMs exist, they are in very short supply.

Also, while we didn’t expect it to be supported to begin with, AMD has confirmed that Ryzen 7000 won’t support RDIMMs/LRDIMMs. So it’s unbuffered DIMMs all the way.

Overclocking Memory Ratios

JEDEC standard speeds aside, the Ryzen 7000 series will also support memory overclocking. And thanks to a combination of the switch to DDR5 memory, changes to AMD’s memory controllers, and changes to AMD’s power delivery infrastructure, the rules have changed.

On Ryzen 5000, the ideal configuration for memory overclocking was to run the fabric clock, memory controller, and memory clock all in sync at the same frequencies. This made DDR4-3600 the typical “sweet spot” for the platform, as going faster would typically require running parts of the CPU out of sync so that they could stay within their own attainable clockspeeds.

But for Ryzen 7000, AMD has loosened things up a bit. Ryzen 7000 systems can still get improved memory performance even when the fabric clock is allowed to go out of sync with the memory controller. As a result, most overclockers can just leave that clock set to Auto, and instead focus on keeping the memory and memory controller clocks in sync in a 1:1 ratio.

Specifically, when the fabric clock is set to Auto, it’s typically run at 2000MHz. Meanwhile the memory and memory controller clocks will be running at anywhere between 2400MHz and 3000MHz, depending on the speed of the RAM kit used. Ultimately, the goal for the best performance is to get the fabric clock to 2000MHz, and then push the memory/memory controller clock as high as possible while keeping it at or below 3000MHz. Otherwise, if the memory clock exceeds 3000MHz (DDR5-6000), then the memory controller will drop to a 1:2 ratio with the memory frequency, which will incur a performance hit.
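As a rough illustration of those rules (and nothing more), the sketch below maps a kit’s DDR5 transfer rate onto the resulting memory, memory controller, and fabric clocks as described above; the function name and structure are purely illustrative, not an actual AMD tool or API:

```python
# A rough sketch of the clock relationships described above. MCLK is the real
# memory clock (half the DDR transfer rate), UCLK is the memory controller
# clock, and FCLK is the Infinity Fabric clock (2000MHz when left on Auto).

def ryzen7000_clocks(ddr_rate: int, fclk_auto: int = 2000) -> dict:
    mclk = ddr_rate // 2          # e.g. DDR5-6000 -> 3000 MHz
    if mclk <= 3000:
        uclk = mclk               # 1:1 -- the preferred configuration
    else:
        uclk = mclk // 2          # above DDR5-6000 the controller falls to 1:2
    return {"MCLK": mclk, "UCLK": uclk, "FCLK": fclk_auto}

print(ryzen7000_clocks(6000))  # {'MCLK': 3000, 'UCLK': 3000, 'FCLK': 2000}
print(ryzen7000_clocks(6400))  # {'MCLK': 3200, 'UCLK': 1600, 'FCLK': 2000} -- latency penalty
```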

It should be noted that AMD’s idea of optimal memory speeds here is high memory clocks with low memory latencies, rather than pushing the absolute fastest memory clocks. On good chips it should be possible to drive Ryzen 7000 at speeds above DDR5-6000, but the latency hit from things falling out of sync will be significant – enough so that it’s likely going to be a performance regression for most workloads.

Overclocking with EXPO

But most users doing memory overclocking are likely to simply rely on factory overclocked memory kits with pre-programmed profiles. And this is where AMD is rolling out their own standard for those memory kit profiles: EXPO.

AMD EXPO stands for EXtended Profiles for Overclocking, and it is designed to provide users with high-end memory overclocking when used in conjunction with AMD's Ryzen 7000 series processors. It fills the same role as Intel's preexisting XMP (Extreme Memory Profile) technology found on most consumer-level memory kits designed for desktop Intel platforms, but as an open standard with an emphasis on providing the best settings for AMD platforms.

The premise of AMD EXPO is that it is a one-click DDR5 overclocking function for AM5 motherboards. On the surface, EXPO is essentially a set of XMP-like profiles specifically designed for AMD's Ryzen 7000 (Zen 4) processors.

The major impetus behind EXPO is twofold. The first reason is simple: Intel doesn’t share XMP. There’s no published specification for it, and while AMD has reverse engineered it to some extent, they can’t be sure of what’s going on (especially with DDR5/XMP 3). So rather than deal with the potential compatibility issues and inefficiencies, they’re just going their own way. The second benefit of EXPO is that memory kit manufacturers can create AMD-specific memory profiles, potentially using tighter sub-timings that are possible in conjunction with AMD processors, but not with Intel’s.

It is worth noting that, despite the existence of EXPO, DDR5 memory with XMP profiles will still be supported on Ryzen 7000 platforms. That said, AMD is very clearly pushing customers towards using EXPO DIMMs to get the best performance out of their AMD systems.

As for EXPO itself, like most other AMD standards, the company is making this an open and royalty free standard (XMP is believed to have royalties, but how much has never been officially published). So memory kit partners will be able to implement EXPO profiles without the blessing of AMD, or needing to pay AMD for the privilege.

With that said, EXPO will be a self-certification program. So AMD is not charging anything for it, but at the same time they are not doing much in the way of extra work to validate support for it.

In lieu of that, memory kit manufacturers will be required to publish their self-certification reports. These reports will lay out in detail what memory was tested on what systems, and with what timings and voltages. The idea here being that openness goes both ways, and that buyers should be able to see the complete configuration settings a profile calls for. The detailed data is in some respects overkill, but it also means that if memory kit manufacturers opt for a high-clocked kit with tight primary timings and loose secondary timings, potential customers will be able to see those full timings in advance.

As with manual memory overclocking, AMD expects the sweet spot for EXPO kits to be DDR5-6000. In an example profile provided for a 2 x 16GB G.Skill memory kit, that kit runs at DDR5-6000 CL30, with a VDD voltage of 1.35v. It’s kits like these that AMD expects to provide the best performance, offering rather low memory latencies in conjunction with a more modest increase in memory frequency.
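It’s easy to see why such a kit appeals on the latency front with a bit of arithmetic: absolute CAS latency is the CL figure counted in memory clock cycles, and the memory clock is half the DDR transfer rate. A short sketch of that calculation follows; the DDR5-5200 CL42 comparison point is an assumed JEDEC-class kit for illustration, not a figure from AMD:

```python
# Back-of-the-envelope CAS latency in nanoseconds: the memory clock is half the
# DDR transfer rate, and the absolute latency is CL cycles at that clock.

def cas_latency_ns(ddr_rate: int, cl: int) -> float:
    memory_clock_mhz = ddr_rate / 2           # DDR5-6000 -> 3000 MHz
    return cl / memory_clock_mhz * 1000       # cycles x nanoseconds-per-cycle

print(cas_latency_ns(6000, 30))   # 10.0 ns -- AMD's example EXPO kit
print(cas_latency_ns(5200, 42))   # ~16.2 ns -- an assumed JEDEC-class comparison kit
```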

The specific performance gains will vary depending on the workload. But for gaming tasks, which are among the most DRAM latency-sensitive workloads, AMD is touting performance gains of up to 11% at 1080p. Otherwise, at more GPU-limited resolutions and settings, the gains will be understandably lower.
