Generally speaking, LRDIMMs are a lot more attractive than their quad ranked RDIMM counterparts of the same capacity. Because the capacitive load of the memory chips degrades the signal integrity of a memory channel, both the clock speed and the number of chips per channel are limited. To make this clearer, the following table summarizes the relation between DPC (DIMMs Per Channel), CPU generation (Sandy Bridge or Ivy Bridge), DIMM type, and DIMM clock speed (in MHz). We based this table on the technical server manuals and recommendations of HP, Dell, and Cisco. Low voltage DDR3 works at 1.35V; "normal" DDR3 DIMMs work at 1.5V.

Memory type                    | 2DPC (SB) | 2DPC (IVB) | 3DPC (SB) | 3DPC (IVB)
Dual Rank RDIMM - 1600         | 1600      | 1600       | 1066      | 1066/1333 (*)
Dual Rank RDIMM - 1866         | 1600      | 1866       | 1066      | 1066/1333 (*)
Quad Rank RDIMM - 1333         | 1333      | 1333       | N/A       | N/A
LRDIMM - 1866                  | 1600      | 1866       | 1333      | 1333
LV 16GB RDIMM - 1333 (1.35V)   | 1333      | 1333       | N/A       | N/A
LV 16GB LRDIMM - 1600 (1.35V)  | 1600      | 1600       | 1333      | 1333

(*) Some servers support 1333 MHz; others limit the speed to 1066 MHz.

The new Ivy Bridge CPU supports 1866 MHz DIMMs, both LRDIMMs and RDIMMs, at up to 2DPC. Load reduced DIMMs support up to 3DPC at 1333 MHz, while in most servers RDIMMs are limited to 1066 MHz at 3DPC. However, the main advantage of LRDIMMs is still capacity: you get twice as much capacity at 1866 MHz, as dual ranked RDIMMs are limited to 16GB while LRDIMMs offer 32GB with the same load. 64GB LRDIMMs are now available, but currently (Q4 2013) few servers seem to support them. Notice also that only LRDIMMs support low voltage (1.35V) operation at 3DPC.
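
If you want to encode those limits in a quick configuration check, the speed matrix is small enough to capture in a few lines of code. The Python sketch below is our own illustration, not a vendor tool: the names are arbitrary, the values are copied from the table above, and the "1066/1333" entries are conservatively represented by their 1066 MHz lower bound.

```python
# Maximum supported DDR3 speed in MHz per DIMM type, CPU generation
# (SB = Sandy Bridge, IVB = Ivy Bridge) and DIMMs per channel.
# Values copied from the table above; None means the configuration is
# not supported.
MAX_MHZ = {
    #                                2DPC SB, 2DPC IVB, 3DPC SB, 3DPC IVB
    "Dual Rank RDIMM - 1600":        (1600,    1600,     1066,    1066),
    "Dual Rank RDIMM - 1866":        (1600,    1866,     1066,    1066),
    "Quad Rank RDIMM - 1333":        (1333,    1333,     None,    None),
    "LRDIMM - 1866":                 (1600,    1866,     1333,    1333),
    "LV 16GB RDIMM - 1333 (1.35V)":  (1333,    1333,     None,    None),
    "LV 16GB LRDIMM - 1600 (1.35V)": (1600,    1600,     1333,    1333),
}

def max_dimm_speed(dimm_type: str, cpu: str, dpc: int):
    """Return the maximum DIMM clock in MHz, or None if unsupported."""
    column = {("SB", 2): 0, ("IVB", 2): 1, ("SB", 3): 2, ("IVB", 3): 3}[(cpu, dpc)]
    return MAX_MHZ[dimm_type][column]

print(max_dimm_speed("LRDIMM - 1866", "IVB", 3))          # 1333
print(max_dimm_speed("Quad Rank RDIMM - 1333", "SB", 3))  # None (not supported)
```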

The quad ranked 32GB RDIMMs support only 2DPC and are limited to 1333 MHz. With 40% more speed at 2DPC at the same capacity, and 50% more capacity per server at 3DPC, LRDIMMs are simply a vastly superior offering at the same cost. So we can safely forget about quad ranked RDIMMs.
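
As a quick sanity check of those percentages, here is the arithmetic in a short Python sketch. The DIMM speeds and the 2DPC/3DPC limits come from the table; the four memory channels per socket are our assumption for a typical Xeon E5 platform, not a figure from the article.

```python
# Back-of-the-envelope check of the LRDIMM vs quad rank RDIMM claims above.
lrdimm_mhz, quad_rank_rdimm_mhz = 1866, 1333
print(f"Speed advantage at 2DPC: {lrdimm_mhz / quad_rank_rdimm_mhz - 1:.0%}")  # ~40%

channels_per_socket = 4   # assumption: typical Xeon E5 socket
dimm_gb = 32              # both the quad rank RDIMM and the LRDIMM are 32GB
quad_rank_gb = channels_per_socket * 2 * dimm_gb  # 2DPC limit -> 256 GB per socket
lrdimm_gb    = channels_per_socket * 3 * dimm_gb  # 3DPC       -> 384 GB per socket
print(f"Capacity advantage at 3DPC: {lrdimm_gb / quad_rank_gb - 1:.0%}")       # 50%
```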

Server Memory Worth the Price Premium?

Comments

  • JohanAnandtech - Friday, December 20, 2013 - link

    First of all, if your workload is read intensive, more RAM will almost always be much faster than any flash cache. Secondly, it greatly depends on your storage vendor whether adding more flash can be done at "dramatically lower cost". The tier-one vendors still charge an arm and a leg for flash cache, while the server vendors are working at much more competitive prices. I would say that in general it is cheaper and more efficient to optimize RAM caching versus optimizing your storage (unless you are write limited).
  • blaktron - Friday, December 20, 2013 - link

    Not only are you correct, but significantly so. Enterprise flash storage at decent densities is more costly PER GIG than DDR3. Not only that, but you need the 'cadillac' model SANs to support more than 2 SSDs. Not to mention fabric management is a lot more resource intensive and more prone to error.

    Right now, the best bet (like always) to get performance is to stuff your servers with memory and distribute your workload. Because it's poor network architecture that creates bottlenecks in any environment where you need to stuff more than 256GB of RAM into a single box.
  • hoboville - Friday, December 20, 2013 - link

    Another thing about HPC is that, as long as a processor has enough RAM to work on its dataset on the CPU/GPU before it needs more data, the quantity of RAM is enough. Saving on RAM can let you buy more nodes, which gives you more performance capacity.
  • markhahn - Saturday, January 4, 2014 - link

    headline should have been: if you're serving static content, your main goal is to maximize ram per node. not exactly a shocker eh? in the real world, at least the HPC corner of it, 1G/core is pretty common, and 32G/core is absurd. hence, udimms are actually a good choice sometimes.
  • mr map - Monday, January 20, 2014 - link

    Very interesting article, Johan!

    I would very much like to know what specific memory model (brand, model number) you are referring to regarding the 32GB LRDIMM—1866 option.
    I have searched to no avail.
    Johan? / Anyone?
    Thank you in advance!
    / Tomas
  • Gasaraki88 - Thursday, January 30, 2014 - link

    A great article as always.
  • ShirleyBurnell - Tuesday, November 5, 2019 - link

    I don't know why people are still going after server hardware. I mean, it's the 21st century. Now everything is in the cloud, where you have the ability to scale your server anytime you want to. I mean, hosting providers like AWS, DigitalOcean, Vultr hosting https://www.cloudways.com/en/vultr-hosting.php, etc. have made it very easy to rent your server.
