The Future of DDR4

DDR4 was first launched in the enthusiast space for several reasons. On the server side, any opportunity to use lower power and drive cooling costs down is a positive, so aiming at Xeons and high-end consumer platforms was priority number one. The big players in the datacenter space most likely had hardware in and running for several months before the consumer arms got hold of it. Being such a new element in the twisting dynamic of memory, the modules command a premium and the big purchasers got first pick. The downside comes when that focus shifts to consumers, where budgets are tighter and some of the intended benefits of DDR4, such as lower power, are not that important. When we first published our Haswell-E piece, the cost of a 4x4GB kit of JEDEC DRAM for even a basic eight-core system was over $250, and not much has changed since. Memory companies have lower stock levels, driving up the cost, and will only make and sell more if people start buying. At this point, Haswell-E and DDR4 are really restricted to early adopters or those with a professional requirement to go down this route.

DDR4 will start to get interesting when we see it at the mainstream consumer level. This means when regular Core i3/i5 desktops come into being, and eventually SO-DIMM variants in notebooks. The big question, as always, is when. If you believe the leaks, all arrows point towards a launch with Skylake on the Intel side, after Broadwell. Most analysts are also in this category, with the question being how long the Broadwell platform on desktops will last. The 14nm process node had plenty of issues, meaning that only in Q1 2015 have we started to see more Core M (Broadwell-Y) products in notebooks and the launch of Broadwell-U, aimed at the AIO and mini-PC (such as the NUC and BRIX) market as well as laptops. This staggered launch would suggest that Broadwell on desktops should be due in the next few months, but there is no official indication as to when Skylake will hit the market, or in what form first. As always, Intel does not comment on unreleased products when asked.

On the AMD side of the equation, despite talk of a Kaveri refresh popping up in our forums and discussions about Carrizo focusing only on the sub-45W market with Excavator cores, we look to the talk surrounding Zen, K12 and everything that points to AMD’s architecture refresh with Jim Keller at the helm sometime around 2016. In a recent round table talk, Jim Keller described Zen as scaling from tablet to desktop but also probing servers. One would hope (as well as predict and imagine) that AMD is aiming for DDR4 with the platform. It makes sense to approach the memory subsystem of the new architecture from this angle, although for any official confirmation we might have to wait a few months at the earliest, when AMD starts releasing more information.

When DDR4 comes to the desktop we will start to see a shift in the proportions of the market that DDR4 and DDR3 each take. The bulk memory market for desktop designs and mini-PCs will be a key demographic, one that should shift towards an even DDR3/DDR4 split, and we can hope to see price parity before then. If we are to see mainstream DDR4 adoption, the bulk markets have to be interested in the performance of the platforms that specifically require DDR4, but those platforms also have to remain price competitive. It essentially means that companies like G.Skill, which rely on DRAM sales for the bulk of their revenue, have to make predictions about the performance of platforms like Skylake in order to tell their investors how quickly DDR4 will take over the market. It could be the difference between 10% and 33% adoption by the end of 2015.

One of the questions that sometimes appears with DDR4 is ‘what about DDR5?’. It looks like there are no plans to ever develop a DDR5 version, for a number of reasons.

Firstly, and perhaps a minor point, is the nature of the DRAM interface. It relies on a parallel connection, and if other standards are indicative of the direction of travel, it should probably be upgraded to a serial connection, similar to how PCI/PCI Express and PATA/SATA have evolved in order to increase throughput while at the same time decreasing pin counts and making designs easier for the same bandwidth.
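As a rough illustration of the pin-count argument, here is a back-of-the-envelope sketch (in Python) comparing throughput per signal pin for a parallel DDR4 data bus against a serial PCIe 3.0 lane. The rates are the standard published peaks; the comparison deliberately ignores DDR4’s command/address/strobe pins (counting them would widen the gap further) and all protocol overheads, so treat it as indicative rather than definitive.

```python
# Back-of-the-envelope: throughput per signal pin, parallel vs serial.
# Uses published peak rates only; ignores command/address/strobe pins
# and protocol overhead, so the numbers are indicative, not exact.

# DDR4-2133: 2133 MT/s across a 64-bit parallel data bus (64 DQ pins).
ddr4_pins = 64
ddr4_gbs = 2133e6 * (64 / 8) / 1e9            # ~17.1 GB/s per channel

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding; a lane uses four
# signal pins (differential TX and RX pairs) and runs both directions at once.
pcie_pins = 4
pcie_gbs = 8e9 * (128 / 130) / 8 / 1e9 * 2    # ~1.97 GB/s, both directions combined

print(f"DDR4-2133: {ddr4_gbs:.1f} GB/s over {ddr4_pins} data pins "
      f"-> {ddr4_gbs / ddr4_pins:.3f} GB/s per pin")
print(f"PCIe 3.0:  {pcie_gbs:.2f} GB/s over {pcie_pins} pins per lane "
      f"-> {pcie_gbs / pcie_pins:.3f} GB/s per pin")
```

On these numbers the serial link delivers roughly twice the throughput per pin, and it scales by simply adding lanes, which is the crux of the argument above.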

Secondly, and more importantly, there are the other memory standards currently being explored in research labs. Rather than copy a piece from ExtremeTech verbatim, I’ll summarize it here. The three standards of interest, whilst mostly mobile-focused, are:

Wide I/O 2: Designed to be placed on top of processors directly, using a large number of I/O pins via through-silicon vias (TSVs) and keeping frequencies down in order to reduce heat generation. This has benefits in industries where space is at a premium, saving some PCB area in exchange for processor Z-height.

Hybrid Memory Cube (HMC): Similar to current monolithic DRAM dies but using stacked slices over a logic base, allowing for much higher density and much higher bandwidth within a single module. This also increases energy efficiency per bit, but introduces higher cost and requires higher power consumption per module.

High Bandwidth Memory (HBM): This is almost a combination of the two above, aimed specifically at graphics, offering multiple DRAM dies stacked on or near the memory controller to increase density and bandwidth. It is perhaps best described as a specialized implementation of Wide I/O 2, and should afford up to 128 GB/s of bandwidth per stack on a 1024-bit bus, with 4-8 DRAM dies in each stack, giving 256 GB/s from a pair of stacks on a single interface.

[Image from ExtremeTech]
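To put those figures in context, peak theoretical DRAM bandwidth is simply bus width multiplied by transfer rate. The minimal sketch below applies that formula to first-generation HBM and to a standard dual-channel DDR4-2133 configuration for comparison; it illustrates the arithmetic only, not real-world throughput.

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Peak theoretical bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts * 1e6 / 1e9

# Dual-channel DDR4-2133: two 64-bit channels at 2133 MT/s.
ddr4_dual = 2 * peak_bandwidth_gbs(64, 2133)    # ~34.1 GB/s

# First-generation HBM: a 1024-bit bus per stack at 1000 MT/s.
hbm_stack = peak_bandwidth_gbs(1024, 1000)      # 128 GB/s per stack
hbm_pair = 2 * hbm_stack                        # 256 GB/s for a pair of stacks

print(f"Dual-channel DDR4-2133: {ddr4_dual:5.1f} GB/s")
print(f"HBM, one stack:         {hbm_stack:5.1f} GB/s")
print(f"HBM, two stacks:        {hbm_pair:5.1f} GB/s")
```

The several-fold jump over a conventional dual-channel DDR4 setup is why the standard is pitched at graphics first.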

Moving some of the memory power consumption onto the processor directly brings thermal issues to consider, which means that memory bandwidth and cost might be improved at the expense of operating frequencies. Adding packages onto the processor also introduces a heavy element of cost, which might leave these specialist technologies to the super-early adopters to begin with.

Given the time from DDR4 first being considered to it actually entering the desktop market, we can safely say that DDR4 will become the standard memory option over the next four years, just as DDR3 is right now. Beyond DDR4 is harder to predict, and depends on how Intel/AMD want to approach a solution that offers higher memory bandwidth, and at what cost. Both companies will be looking at how their integrated graphics are performing, as that will ultimately be the biggest beneficiary of the design. AMD has some leverage in the discrete GPU space and will be able to transfer any knowledge gained over to the CPU side, but Intel has a big wallet. Both Intel and AMD have experimented with eDRAM/eSRAM as extra cache levels, with Crystal Well and the Xbox One respectively, which puts less stress on external memory when it comes to processor graphics. This leads me to predict that DDR4 will be in the market longer than DDR3 or DDR2.

If any of the major CPU/SoC manufacturers want to invest heavily in Wide I/O 2, HBM or HMC, we will have to wait. If it changes what we see on the desktop, the mini-PC or the laptop, we might have to wait even longer.

Comments

  • galta - Thursday, February 5, 2015 - link

    Yes, yes, it is wrong: whoever spends money on "enthusiast" RAM has more money than brains, except in some very specific situations.
    The golden rule is to buy nice standard RAM from a reputable brand and use the savings to beef up your CPU/GPU or whatever.
  • Murloc - Thursday, February 5, 2015 - link

    yeah, but e.g. with Corsair RAM I always bought the mainstream XMS ones instead of the Value Select sticks, but given that I haven't done any tweaking in my last rig, I might just as well have bought the cheaper ones without the heatsinks.

    Maybe in my next build I will do that if there is a significant price difference.
  • galta - Thursday, February 5, 2015 - link

    You just proved my point: Crucial is pretty reputable and they have no-frills RAM that is generally the cheapest on the market.
    Corsair is always fancy ;)
  • Kidster3001 - Friday, February 6, 2015 - link

    The word "Enthusiast" with respect to computers is synonymous with "Spends more than they need to because they want to." If you're making the Price/Performance/Cost purchase then you are not an Enthusiast. Every year I spend money on computer stuff that I do not need. Why? Because I am an Enthusiast. You may consider this "wasting money", perhaps it is. I don't "need" my 30" monitor or my three SSD's or my fancy gaming keyboard and mouse. I did spend money on them though. It's my hobby and that's what hobbies are for.... spending money you don't need to spend.

    Stick with your cost-conscious, consumer-friendly computer parts. They are good and will do what you need them to do. Just don't ever try to call yourself an Enthusiast. You'll never have the tingly feeling of powering up something that is really cool, expensive and just plain fun. Yeah, it costs more money, but in reality that's half the fun. The tingly feeling goes away in a month or so. That's when you get to go "waste" more money on something else. :-)
  • sadsteve - Friday, February 6, 2015 - link

    Hm, I don't necessarily agree with you on size. With the size of digital photos today, a large amount of RAM gives you a lot more editing cache when Photoshopping. I would also imagine it's useful for video editing (which I don't do). For all my regular computer use, yeah, 16GB of RAM is not too useful.
  • Gunbuster - Thursday, February 5, 2015 - link

    So a 4x4 2133 kit for $200 or a 3333 kit for $800 and 2% more speed in only certain scenarios. Yeah seems totally worth $600 extra.

    You could buy an extra Nvidia card or two AMD cards for that and damn sure get more than a 2-10% speed boost.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Shhh! We all have to pretend 5 or 10 dollars, or maybe 25 or 50, is very, very, very important when it comes to grading the two warring red and green video cards against each other!
  • just4U - Thursday, February 5, 2015 - link

    Is there no way for memory makers to come up with solutions where they improve the latencies rather than the frequencies? The big numbers are all well and good at one end, but the higher you go at the other end offsets the gains... at least that's the way it appears to me.
  • menting - Thursday, February 5, 2015 - link

    there is. The latency is due to physical constraints, so you can improve it by stacking (the technology for this is just starting to mature), or by reducing the distance a signal needs to travel, which is done with a smaller process size as well as by shortening the signal distance (smaller arrays, shorter digit lines, etc). But shortening the signal distance comes at the cost of lower DRAM density and/or more power, so companies don't really do it, since it's more profitable to make higher density and/or lower power DRAM. The only low latency DRAM I know of is RLDRAM, which draws pretty high power and is fairly expensive.
  • ZeDestructor - Thursday, February 5, 2015 - link

    That, and with increasingly large CPU caches, latency is less and less of an issue as well.
