We reviewed several types of server memory back in August 2012. You still have the same three choices: LRDIMMs, RDIMMs, and UDIMMs. But the situation has changed significantly since then. One of those changes is the introduction of Ivy Bridge EP: the latest Intel Xeon has better support for LRDIMMs and runs at higher memory speeds (up to 1866 MHz).

But the biggest change is that the price gap between LRDIMMs and RDIMMs has shrunk considerably. Just a year ago, a 32GB LRDIMM cost $2000 or more, while a more "sensible" 16GB RDIMM cost around $300-$400. You paid about three times more per GB to get the highest-capacity DIMMs in your servers. Many servers could benefit from more memory, but that kind of pricing made LRDIMMs an option only for IT projects where hardware costs were dwarfed by other costs such as consulting and software licenses. Fifteen months in IT is like half a decade in other industries; just look at the table below.

Memory type         Low voltage   Ranks   Price ($) Q4 2013   Price ($) per GB
8GB RDIMM-1600      yes           Dual    153                 19
16GB RDIMM-1600     yes           Dual    243                 15
8GB RDIMM-1866      no            Dual    169                 21
16GB RDIMM-1866     no            Dual    257                 16
32GB RDIMM-1333     yes           Quad    808                 25
32GB LRDIMM-1333    yes           Dual*   822                 26
32GB LRDIMM-1866    no            Dual*   822                 26

(*) Quad rank, but the electrical load of a dual-rank DIMM

If you need a refresher on UDIMMs, RDIMMs, and LRDIMMs, check out our technical overview here. The price per GB of LRDIMMs is now only about 60% higher than that of the best RDIMMs. Quad-rank 32GB RDIMMs used to be a lot cheaper than their load-reduced competition, but that difference is now negligible.
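As a quick sanity check, the per-GB column and the roughly 60% LRDIMM premium can be recomputed from the street prices in the table. A minimal sketch (the module list simply restates the table above; the premium is computed against the 16GB 1866 RDIMM, which is the assumption that best matches the "best RDIMMs" wording):

```python
# Recompute the price-per-GB column from the Q4 2013 street prices.
modules = [
    ("8GB RDIMM-1600",    8, 153),
    ("16GB RDIMM-1600",  16, 243),
    ("8GB RDIMM-1866",    8, 169),
    ("16GB RDIMM-1866",  16, 257),
    ("32GB RDIMM-1333",  32, 808),
    ("32GB LRDIMM-1333", 32, 822),
    ("32GB LRDIMM-1866", 32, 822),
]

for name, gb, price in modules:
    print(f"{name:18s} ${price / gb:5.2f}/GB")
# e.g. the 32GB LRDIMMs work out to $25.69/GB, which rounds to the $26 in the table.

# LRDIMM premium per GB versus the 16GB 1866 RDIMM:
lrdimm_per_gb = 822 / 32
rdimm_per_gb = 257 / 16
print(f"premium: {lrdimm_per_gb / rdimm_per_gb - 1:.0%}")
```

Note that the table's per-GB figures are whole-dollar roundings, which is why the premium looks like 26 vs. 16 but works out to roughly 60% rather than 62.5%.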

DIMM Limitations
Comments

  • JohanAnandtech - Friday, December 20, 2013 - link

First of all, if your workload is read-intensive, more RAM will almost always be much faster than any flash cache. Secondly, it greatly depends on your storage vendor whether adding more flash can be done at a "dramatically lower cost". The tier-one vendors still charge an arm and a leg for flash cache, while the server vendors are working at much more competitive prices. I would say that in general it is cheaper and more efficient to optimize RAM caching rather than optimizing your storage (unless you are write-limited).
  • blaktron - Friday, December 20, 2013 - link

    Not only are you correct, but significantly so. Enterprise flash storage at decent densities is more costly PER GIG than DDR3. Not only that, but you need the 'cadillac' model SANs to support more than 2 SSDs. Not to mention fabric management is a lot more resource intensive and more prone to error.

Right now, the best bet (like always) to get performance is to stuff your servers with memory and distribute your workload, because it's poor network architecture that creates bottlenecks in any environment where you need more than 256GB of RAM in a single box.
  • hoboville - Friday, December 20, 2013 - link

Another thing about HPC is that, as long as a processor has enough RAM to work on its dataset on the CPU/GPU before it needs more data, the quantity of RAM is sufficient. Saving on RAM can let you buy more nodes, which gives you more performance capacity.
  • markhahn - Saturday, January 04, 2014 - link

headline should have been: if you're serving static content, your main goal is to maximize RAM per node. not exactly a shocker, eh? in the real world, at least the HPC corner of it, 1GB/core is pretty common, and 32GB/core is absurd. hence, udimms are actually a good choice sometimes.
  • mr map - Monday, January 20, 2014 - link

    Very interesting article, Johan!

I would very much like to know what specific memory model (brand, model number) you are referring to regarding the 32GB LRDIMM-1866 option.
I have searched to no avail.
Johan? / Anyone?
Thank you in advance!
/ Tomas
  • Gasaraki88 - Thursday, January 30, 2014 - link

A great article as always.
