Marking an important milestone in computer memory development, today the JEDEC Solid State Technology Association is releasing the final specification for its next mainstream memory standard, DDR5 SDRAM. The latest iteration of the DDR standard that has been driving PCs, servers, and everything in between since the late 90s, DDR5 once again extends the capabilities of DDR memory, doubling the peak memory speeds while greatly increasing memory sizes as well. Hardware based on the new standard is expected in 2021, with adoption starting at the server level before trickling down to client PCs and other devices later on.

Originally planned for release in 2018, today’s release of the DDR5 specification puts things a bit behind JEDEC’s original schedule, but it doesn’t diminish the importance of the new memory specification. As with every iteration of DDR before it, the primary focus for DDR5 is once again on improving memory density as well as speeds. JEDEC is looking to double both, with maximum memory speeds set to reach at least 6.4Gbps while the capacity of a single, packed-to-the-rafters LRDIMM will eventually reach 2TB. All the while, there are several smaller changes to either support these goals or to simplify certain aspects of the ecosystem, such as on-DIMM voltage regulators as well as on-die ECC.

JEDEC DDR Generations

                        DDR5                DDR4      DDR3      LPDDR5
Max Die Density         64 Gbit             16 Gbit   4 Gbit    32 Gbit
Max UDIMM Size (DSDR)   128 GB              32 GB     8 GB      N/A
Max Data Rate           6.4 Gbps            3.2 Gbps  1.6 Gbps  6.4 Gbps
Channels                2                   1         1         1
Total Width (Non-ECC)   64-bits (2x32-bit)  64-bits   64-bits   16-bits
Banks (Per Group)       4                   4         8         16
Bank Groups             8/4                 4/2       1         4
Burst Length            BL16                BL8       BL8       BL16
Voltage (Vdd)           1.1v                1.2v      1.5v      1.05v
Vddq                    1.1v                1.2v      1.5v      0.5v

Going Bigger: Denser Memory & Die-Stacking

We’ll start with a brief look at capacity and density, as this is the most straightforward change to the standard compared to DDR4. Designed to span several years (if not longer), DDR5 will allow for individual memory chips up to 64Gbit in density, which is 4x higher than DDR4’s 16Gbit density maximum. Combined with die stacking, which allows up to 8 dies to be stacked as a single chip, a 40-element LRDIMM can reach an effective memory capacity of 2TB. Or for the more humble unbuffered DIMM, this would mean we’ll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.
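As a sanity check, those capacity figures can be reproduced with some back-of-the-envelope math. Note the chip counts below (32 data packages on the LRDIMM, 16 data chips on a dual-rank non-ECC UDIMM) are illustrative assumptions about DIMM layout, not figures from the spec itself:

```python
# Rough DDR5 capacity arithmetic. Package/chip counts are assumptions
# for illustration; real DIMM layouts vary by vendor.

GBIT_IN_GB = 1 / 8          # one gigabit expressed in gigabytes

max_die_gbit = 64           # DDR5 maximum per-die density
dies_per_stack = 8          # maximum 8-high die stacking

# Capacity of one fully stacked package: 64 Gbit x 8 dies = 64 GB
package_gb = max_die_gbit * GBIT_IN_GB * dies_per_stack

# Hypothetical LRDIMM with 32 data packages (the remaining packages of
# a 40-element DIMM would hold ECC, which doesn't add usable capacity)
lrdimm_gb = package_gb * 32                     # 2048 GB, i.e. 2 TB

# Hypothetical dual-rank non-ECC UDIMM: 8 x8 chips per rank, 2 ranks,
# single (unstacked) 64 Gbit dies
udimm_gb = max_die_gbit * GBIT_IN_GB * 8 * 2    # 128 GB
```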

Of course, the DDR5 specification’s peak capacities are meant for later in the standard’s lifetime, when chip manufacturing catches up to what the spec allows. To start things off, memory manufacturers will be using today’s attainable densities (8Gbit and 16Gbit chips) to build their DIMMs. So while the speed improvements from DDR5 will be fairly immediate, the capacity improvements will be more gradual as manufacturing densities improve.

Going Faster: One DIMM, Two (Smaller) Channels

The other half of the story for DDR5 is about once again increasing memory bandwidth. Everyone wants more performance (especially with DIMM capacities growing), and unsurprisingly, this is where a lot of work was put into the specification in order to make this happen.

For DDR5, JEDEC is looking to start things off much more aggressively than usual for a DDR memory specification. Typically a new standard picks up where the last one left off, such as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. However for DDR5 JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4.

Of course, sly enthusiasts will note that DDR4 already goes above the official maximum of 3.2Gbps (sometimes well above), and it’s likely that DDR5 will eventually go a similar route. The underlying goal, regardless of specific figures, is to double the amount of bandwidth available today from a single DIMM. So don’t be too surprised if SK Hynix indeed hits their goal of DDR5-8400 later this decade.

Underpinning these speed goals are changes at both the DIMM and the memory bus in order to feed and transport so much data per clock cycle. The big challenge, as always for DRAM speeds, comes from the lack of progress in DRAM core clock rates. Dedicated logic is still getting faster, and memory busses are still getting faster, but the capacitor-and-transistor-based DRAM underpinning modern memory still can’t clock higher than a few hundred megahertz. So in order to get more from a DRAM die – to maintain the illusion that the memory itself is getting faster and to feed the actually faster memory busses – more and more parallelism has been required. And DDR5 for its part ups the ante once more.

The big change here is that, similar to what we’ve seen in other standards like LPDDR4 and GDDR6, a single DIMM is being broken down into 2 channels. Rather than one 64-bit data channel per DIMM, DDR5 will offer two independent 32-bit data channels per DIMM (or 40-bit when factoring in ECC). Meanwhile the burst length for each channel is being doubled from 8 transfers (BL8) to 16 transfers (BL16), meaning that each channel will still deliver 64 bytes per operation. Compared to a DDR4 DIMM, then, a DDR5 DIMM running at twice the rated memory speed (identical core speeds) will deliver two 64-byte operations in the time it takes a DDR4 DIMM to deliver one, doubling the effective bandwidth.

Overall, 64 bytes remains the magic number for memory operations as this is the size of a standard cache line. A larger burst length on DDR4-style memory would have resulted in 128-byte operations, which is too big for a single cache line, and at best, would have resulted in efficiency/utilization losses should a memory controller not want two lines’ worth of sequential data. By comparison, since DDR5’s two channels are independent, a memory controller can request 64 bytes from separate locations, making it a better fit to how processors actually work and avoiding the utilization penalty.
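The channel-width and burst-length arithmetic works out neatly, and the resulting bandwidth doubling can be sketched in a few lines (the data rates and widths come from the article; the helper names are my own):

```python
def bytes_per_burst(channel_width_bits: int, burst_length: int) -> int:
    """Bytes delivered by one burst on one channel."""
    return channel_width_bits // 8 * burst_length

# Both generations deliver one 64-byte cache line per burst...
ddr4_burst = bytes_per_burst(64, 8)    # one 64-bit channel, BL8
ddr5_burst = bytes_per_burst(32, 16)   # one 32-bit channel, BL16

# ...but a DDR5 DIMM has two such independent channels, and its bus
# runs twice as fast, so per-DIMM peak bandwidth doubles at the spec
# maximums. A DIMM still moves 8 bytes per transfer in total.
def dimm_peak_gb_s(data_rate_gt_s: float, total_bus_bits: int = 64) -> float:
    return data_rate_gt_s * total_bus_bits / 8

ddr4_peak = dimm_peak_gb_s(3.2)   # 25.6 GB/s at DDR4-3200
ddr5_peak = dimm_peak_gb_s(6.4)   # 51.2 GB/s at DDR5-6400
```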

The net impact for a standard PC desktop then would be that instead of today’s DDR4 paradigm of two DIMMs filling two channels for a 2x64-bit setup, a DDR5 system will functionally behave as a 4x32-bit setup. Memory will still be installed in pairs – we’re not going back to the days of installing 32-bit SIMMs – but now the minimum configuration is for two of DDR5’s smaller channels.

This structural change also has some knock-on effects elsewhere, particularly to maximize usage in these smaller channels. DDR5 introduces a finer-grained bank refresh feature, which will allow for some banks to refresh while others are in use. This gets the necessary refresh (capacitor recharge) out of the way sooner, keeping latencies in check and making unused banks available sooner. The maximum number of bank groups is also being doubled from 4 to 8, which will help to mitigate the performance penalty from sequential memory access.

Rapid Bus Service: Decision Feedback Equalization

In contrast to finding ways to increase parallelism within a DRAM DIMM, increasing the bus speed is both simpler and harder: simple in concept, harder in execution. At the end of the day, to double DDR’s memory speeds, DDR5’s memory bus needs to run at twice the rate of DDR4’s.

There are several changes to DDR5 to make this happen, but surprisingly, there aren’t any massive, fundamental changes to the memory bus such as QDR or differential signaling. Instead, JEDEC and its members have been able to hit their targets with a slightly modified version of the DDR4 bus, albeit one that has to run at tighter tolerances.

The key driver here is the introduction of decision feedback equalization (DFE). At a very high level, DFE is a means to reduce inter-symbol interference by using feedback from the memory bus receiver to provide better equalization. And better equalization, in turn, allows for the cleaner signaling needed for DDR5’s memory bus to run at higher transfer rates without everything going off the rails. Meanwhile this is further helped by several smaller changes in the standard, such as the addition of new and improved training modes to help DIMMs and controllers compensate for minute timing differences along the memory bus.

Simpler Motherboards, More Complex DIMMs: On-DIMM Voltage Regulation

Along with the core changes to density and memory speeds, DDR5 also once again improves on DDR memory’s operating voltages. At-spec DDR5 will operate with a Vdd of 1.1v, down from 1.2v for DDR4. Like past updates this should improve the memory’s power efficiency relative to DDR4, although the power gains thus far aren’t being promoted as heavily as they were for DDR4 and earlier standards.
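As a rough illustration of why the lower Vdd helps, CMOS switching power scales with the square of voltage (the classic CV²f model). This is first-order math under an all-else-equal assumption, not a JEDEC efficiency claim:

```python
# First-order estimate only: dynamic power ~ C * V^2 * f, so at equal
# capacitance and frequency the savings come from the V^2 term alone.
vdd_ddr4 = 1.2
vdd_ddr5 = 1.1

savings = 1 - (vdd_ddr5 / vdd_ddr4) ** 2
print(f"{savings:.1%}")   # -> 16.0% lower switching power, all else equal
```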

JEDEC is also using the introduction of the DDR5 memory standard to make a fairly important change to how voltage regulation works for DIMMs. In short, voltage regulation is being moved from the motherboard to the individual DIMM, leaving DIMMs responsible for their own voltage regulation needs. This means that DIMMs will now include an integrated voltage regulator, and this goes for everything from UDIMMs to LRDIMMs.

JEDEC is dubbing this “pay as you go” voltage regulation, and is aiming to improve/simplify a few different aspects of DDR5 with it. The most significant change is that by moving voltage regulation on to the DIMMs themselves, voltage regulation is no longer the responsibility of the motherboard. Motherboards in turn will no longer need to be built for the worst-case scenario – such as driving 16 massive LRDIMMs – simplifying motherboard design and reining in costs to a degree. Of course, the flip side of this argument is that it moves those costs over to the DIMM itself, but then system builders are at least only having to buy as much voltage regulation hardware as they have DIMMs, and hence the PAYGO philosophy.

According to JEDEC, the on-DIMM regulators will also allow for better voltage tolerances in general, improving DRAM yields. And while no specific promises are being made, the group is also touting the potential for this change to (further) reduce DDR5’s power consumption relative to DDR4.

As the implementation details for these voltage regulators will be up to the memory manufacturers, JEDEC hasn’t said too much about them. But it sounds like there won’t be a one-size-fits-all solution between clients and servers, so client UDIMMs and server (L)RDIMMs will have separate regulators/PMICs, reflecting their power needs.

DDR5 DIMMs: Still 288 Pins, But Changed Pinouts

Finally, as already widely demonstrated by earlier vendor prototypes, DDR5 will be keeping the same 288-pin count as DDR4. This mirrors the DDR2 to DDR3 transition, where the pin count likewise held steady at 240 pins.

Don’t expect to use DDR5 DIMMs in DDR4 sockets, however. While the pin count isn’t changing, the pinout is, in order to accommodate DDR5’s new features – and in particular its dual channel design.

The big change here is that the command and address bus is being shrunk and partitioned, with the pins being reallocated to the data bus for the second memory channel. Instead of a single 24-bit CA bus, DDR5 will have two 7-bit CA busses, one for each channel. Seven bits is well under half of the old bus, of course, so in exchange command encoding becomes a bit more complex for memory controllers.

Sampling Now, Adoption Starts in the Next 12-18 Months

Wrapping things up for today’s announcement, like other JEDEC specification releases, today is less of a product launch and more about the development committee setting the standard loose for its members to use. The major memory manufacturers, who have been participating in the DDR5 development process since the start, have already developed prototype DIMMs and are now looking to wrap things up and bring their first commercial hardware to market.

The overall adoption curve for DDR5 is expected to be similar to earlier DDR standards. That is to say that JEDEC expects DDR5 to start showing up in devices in 12 to 18 months as hardware is finalized, and increase from there. And while the group doesn’t give specific product guidance, they have been very clear that they expect servers to once again be the driving force behind early adoption, especially with the major hyperscalers. Neither Intel nor AMD has officially announced platforms that will use the new memory, but at this point that’s only a matter of time.

Meanwhile, expect DDR5 to have as long of a lifecycle as DDR4, if not a bit longer. Both DDR3 and DDR4 have enjoyed roughly seven-year lifecycles, and DDR5 should enjoy the same degree of stability. And while seeing several years out with perfect clarity isn’t possible, at this point JEDEC is thinking that, if anything, DDR5 will end up with a longer shelf-life than DDR4, thanks to the ongoing maturation of the technology industry. Of course, this is the same year that Apple has dropped Intel for its CPUs, so by 2028 anything is possible.

At any rate, expect to see the major memory manufacturers continue to show off their prototype and commercial DIMMs as DDR5 gets ready to launch. With adoption set to kick off in earnest in 2021, it sounds like next year should bring some interesting changes to the server market, and eventually the client desktop market as well.

Source: JEDEC

Comments

  • Ryan Smith - Tuesday, July 14, 2020 - link

    Unfortunately nothing about this precludes laptops coming half-filled with memory. Vendors can still put 1 SO-DIMM in a laptop, leaving it with only 64-bits of its 128-bit memory bus filled. Reply
  • PeachNCream - Tuesday, July 14, 2020 - link

    I must be misunderstanding the new two channels per single DIMM thing. Does that not apply to SODIMMs? I get that soldered down RAM would allow maybe something to fall outside the DDR5 specs, but your article implies that there is 128 bits worth of data moving across the memory bus from a single stick of RAM. Reply
  • Ryan Smith - Tuesday, July 14, 2020 - link

    The channels are now half-sized. It's 2 32-bit channels per DIMM, instead of 1 64-bit channel per DIMM. So you will still need two DIMMs to fill a 128-bit bus. Reply
  • Santoval - Tuesday, July 14, 2020 - link

    The article has a table mentioning that LPDDR5 is only single channel and just 16 bits wide. Which is weird since LPDDR4(X), its predecessor, was also split in two channels per DIMM (2x16 bit). Assuming the LPDDR5 bit is accurate* and that is what you mean by "SODIMMs" then forget the two channels per DIMM. By the way, the two channels per DIMM are *internal*; they do not require separate memory controllers. This is how a laptop with 4x16 bit LPDDR4X is run by a SoC with two memory controllers. That's dual channel externally (i.e. from the SoC) but quad channel internally.

    *I looked up LPDDR5 quickly at Wikipedia and it doesn't mention if it reverted to a single channel. However single channel LPDDR5 at 6.4 Gbps (the spec's top limit) would have an identical speed to dual channel LPDDR4(X) at 3.2 Gbps which has already well been surpassed. So I guess the table is incorrect and LPDDR5 also has two 16 bit channels per DIMM.
    Reply
  • Ryan Smith - Tuesday, July 14, 2020 - link

    Bear in mind that LPDDR has no concept of DIMMs. It's strictly a solder-down memory interface.

    Anyhow, the channel size difference between 4 and 5 is mostly semantics. Officially, according to a back-and-forth discussion we had with Samsung, the smallest unit of organization in LPDDR5 is a single 16-bit channel. This is as opposed to LPDDR4, where the smallest unit was two 16-bit channels. As a result, they classify LPDDR5 as 1x16 instead of 2x16.

    Chips will still come with multiple channels per chip. And in fact I'm not aware of anything smaller than a 32-bit (2x16) LPDDR5 chip.
    Reply
  • back2future - Wednesday, July 15, 2020 - link

    Is internal DDR5 memory refresh method independent from memory controller, while one single 16bit data channel always has access to one half of memory cells or is refresh influenced by parameters from SoC memory controller? Thx Reply
  • dotjaz - Thursday, July 16, 2020 - link

    " your article implies that there is 128 bits worth of data moving across the memory bus from a single stick of RAM."

    Where? Are we even reading the same article? The article EXPLICITLY said "two independent 32-bit data channels per DIMM", it's not implying anything, it flat out told you 2x32-bit per DIMM.
    Reply
  • Santoval - Tuesday, July 14, 2020 - link

    Not quite. Cheap laptops of the future will have LPDDR5. It just premiered in some flagship smartphones, and I believe it is commonly set up as dual channel. However, unless the table comparing the various memories in the article is inaccurate*, it can also work in dual channel mode (unlike its predecessor LPDDR4X). If that's the case that's what cheap laptops from 2021 onward will have.
    *I'm about to retire for the day, so I can't check out if it is right now..
    Reply
  • Dragonstongue - Tuesday, July 14, 2020 - link

    typo but will use to my advantage

    "with adoption starting at the sever level before trickling down to client PCs and other devices later on. "

    ----- you are likely quite very correct on sever(e) level, as DDR2-3-4 .. all of memory standards prior to mass market stable production >? in regards to pricing.

    who knows, maybe this time will be different with vendors well on their way to having a full assortment of speed bins, prices, kits and all that fun stuff

    maybe them makes might have smartened up a wee tad, that is, have the DIE used to make the memory as small as possible to increase yield hopefully meeting or beating expected % loss and all that

    long story, keep price as low as can be reasonably managed, as if it is "expected" to be all that and a cup of cakes, it will be quite likely flying off the shelves, only seems like yesterday was DDR4 launched where DDR3 was here for a long enough while (started of wicked @#$ expensive for the quite low speeds compared to now, whereas DDR4 beyond the not able to keep shelves stocked, makers once again monopolostic pricing (curb down the amount produced to keep price as high as can be, till get nailed some hefty fines (which is a joke..hurts me and you, them massive corps, slap on the wrist..considering there is what, like 4 maybe 5 memory makers these days, but only 3 major players overall (Samsung, Micron, Hynix (Elpida branding now as well? forgets)

    anyways.

    def feels less "snappy" with DDR4 over fast DDR3 (latency or something?) but when it gets going, it is wicked quick overall.. imagine DDR5 will be similar, small hit overall latency (for the system to "gear up") but when it does, that much faster, lower power use as well (guess that depends on raw amp vs just "volt")

    anywho

    enough word wall from me for another post..my bad
    Reply
  • Duncan Macdonald - Tuesday, July 14, 2020 - link

    Probably going to be an easier transition for AMD than Intel. As the memory access is via the I/O die in AMD CPUs, this can be modified without impacting the compute dies. Intel with its monolithic setup has to redo the whole die. Reply
