Micron is announcing today their next generation of NVDIMM-N modules combining DDR4 DRAM with NAND flash memory to support persistent memory usage models. The new 32GB modules double the capacity of Micron's previous NVDIMMs and boost the speed rating to DDR4-2933 CL21, faster than what current server platforms support.

Micron is not new to the Non-Volatile DIMM market: their first DDR3 NVDIMMs predated JEDEC standardization. The new 32GB modules were preceded by 8GB and 16GB DDR4 NVDIMMs. Micron's NVDIMMs are type N, meaning they function as ordinary ECC DRAM DIMMs but include NAND flash to back up data to in the event of a power loss. This is in contrast to the NVDIMM-F type, which offers pure flash storage. During normal system operation, Micron's NVDIMMs use only the DRAM. When the system experiences a power failure or signals that one is imminent, the module's onboard FPGA-based controller takes over to manage saving the contents of the DRAM to the module's 64GB of SLC NAND flash. During a power failure, the module can be powered either through a cable to an external AGIGA PowerGEM capacitor module, or by battery backup supplied through the DIMM slot's 12V pins.

Micron says the most common use cases for their NVDIMMs are high-performance journaling and log storage for databases and filesystems. In these applications, a 2S server will typically be equipped with a total of about 64GB of NVDIMMs, so the new Micron 32GB modules allow these systems to use just a single NVDIMM per CPU, leaving more slots free for traditional RDIMMs. Both operating systems and applications need special support for the persistent memory provided by NVDIMMs: the OS to handle restoring saved state after a power failure, and applications to manage which portions of their memory should be allocated from the persistent portion of the overall memory pool. Applications can either access the NVDIMM's memory through block storage APIs, or map it directly into their address space.
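As a rough illustration of the direct-mapping approach, the sketch below memory-maps a file and persists writes explicitly. A regular file and its path are stand-ins here for a DAX-exposed persistent memory region; on a real pmem-enabled system the flush step would be a cache-line flush (e.g. libpmem's `pmem_persist`) rather than `msync`.

```python
import mmap
import os

# Hypothetical stand-in for a DAX-mapped persistent memory region; on a
# real pmem system this might be a file on a DAX-mounted filesystem
# (e.g. /mnt/pmem/journal), mapped the same way.
PATH = "journal.bin"
SIZE = 4096

# Create and size the backing file.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map it into the process address space: loads and stores now go
# straight to the mapped region instead of through read()/write().
buf = mmap.mmap(fd, SIZE)
buf[0:16] = b"txn:0001 commit\n"

# With real pmem the application must explicitly make stores durable;
# flushing the mapping (msync) is the portable analogue used here.
buf.flush()

buf.close()
os.close(fd)
```

The point of the direct-mapped model is that the journal write above is ordinary memory traffic; only the explicit persist step involves the storage stack at all.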

Micron is currently sampling the new 32GB NVDIMMs but did not state when they will be available in volume.

Conspicuously absent from Micron's announcement today is any mention of the third kind of memory they make: 3D XPoint non-volatile memory. Micron will eventually be putting 3D XPoint memory onto DIMMs and into SSDs under their QuantX brand, but so far they have been lagging far behind Intel in announcing and shipping specific products. NVDIMMs based on 3D XPoint memory may not match the performance of DRAM modules or these NVDIMM-N modules, but they will offer higher storage density at a much lower cost and without the hassle of external batteries or capacitor banks. Until those are ready, Micron is smart to nurture the NVDIMM ecosystem with their DRAM+flash solutions.

Source: Micron

56 Comments

  • CajunArson - Monday, November 13, 2017

    Oh look, "ddriver" insulting technologies he clearly doesn't understand again from the comfort of his mom's basement.
  • ddriver - Monday, November 13, 2017

    Oh look another mediocrity cheerleader eager to pretend to be smart by displaying profound lack of the even basic understanding it requires to see how all this is entirely redundant and even a bad thing to anyone but its seller.

    U so smart. U mu hero.
  • peevee - Monday, November 13, 2017

    The only problem with it is that the capacitors are external. So the DIMM itself is insufficient to maintain the data.
  • theeldest - Monday, November 13, 2017

    Dell PowerEdge 14G has a battery available to provide power to up to 12 (maybe 16?) NVDIMMs. So it's integrated into the system and works exactly as you'd expect.
  • Hereiam2005 - Monday, November 13, 2017

    Thing is, there is this application called an in-memory database, where the entire database is stored within the DIMMs. About 3TB a node.
    Let's say there's a power failure. If you have a super SSD with 3000MB/s of bandwidth, you have to keep the entire system alive for 1000 seconds, or nearly 17 minutes, to back up your entire memory. That's 17 minutes you don't have.
    On the other hand, if you put the SLC cache on the DIMM, 1) you don't have to keep the entire system up, just the DIMM itself is enough, 2) you only need to back up the data on one single DIMM per SLC cache instead of all of them, and 3) you bypass the entire CPU and motherboard, enabling you to have monster bandwidth between the DIMM and the cache with far less power required.
    Yeah, these things will eventually fail. But the pros outweigh the cons. Unless you can solve all those problems without the SSD cache, NVDIMMs are here to stay.
    Just because you can't see the need for these doesn't mean it is not useful to someone else.
  • ddriver - Monday, November 13, 2017

    So in your expert opinion, you are gonna spend $100,000 on RAM but put a single SSD in that system? Yeah, that makes perfect sense, after all you spent your budget on RAM ;)

    IMO such applications would actually rely on much faster storage solutions than your "super ssd" - current enterprise SSDs are twice as fast and more. For example the Ultrastar SN260 pushes above 6 GB/s. So that's only 500 seconds. A tad over 8 minutes. And you can put a few of those in parallel too. Two of those will cut the time to 4 minutes, four to just 2. You put 150k in a server and put in a power backup solution that cannot even last 4 minutes? You are clearly doing it wrong. I'd put a power generator on such a machine as well. Not just a beefy UPS.

    But that doesn't even have to take that long, because in-memory databases can do async flushing of writes with negligible performance impact and tremendous returns.

    You DON'T wait for a power failure and then commit the whole thing to flash. You periodically commit modifications, and when power is lost, you only flush what remains. It won't take more than a few seconds, even with very liberally configured flush cycles. It will usually take less than a second.

    Nobody keeps in-memory databases willy-nilly without flushing data to persistent storage, not only for cases of power loss, but also for cases of component failure. Components do fail, DRAM modules included. And when that happens, your 3 TB database will be COMPLETELY lost, even with them precious pseudo NV DIMMs. As I already said - pointless.

    But hey, don't beat yourself up, at least you tried to make a point, unlike pretty much everyone else. That's a good sign.
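The back-of-envelope flush times traded in this thread are easy to sanity-check; the short script below recomputes them for a 3TB node at the three rates mentioned (the SN260's ~2.2 GB/s write speed, the hypothetical 3 GB/s drive, and the ~6 GB/s read-side figure), assuming decimal drive-marketing units.

```python
# Recompute the flush-time estimates from the comments above.
TB = 1000**4  # decimal terabyte, matching drive-marketing units
GB = 1000**3

db_size = 3 * TB  # ~3TB of in-memory database per node

for rate_gbps in (2.2, 3.0, 6.0):
    seconds = db_size / (rate_gbps * GB)
    print(f"{rate_gbps} GB/s -> {seconds:.0f} s (~{seconds / 60:.1f} min)")

# 2.2 GB/s -> 1364 s (~22.7 min)
# 3.0 GB/s -> 1000 s (~16.7 min)
# 6.0 GB/s -> 500 s (~8.3 min)
```

At 3 GB/s the 1000-second figure works out to just under 17 minutes, and doubling the rate halves it, which is the scaling both commenters rely on.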
  • Hereiam2005 - Monday, November 13, 2017

    1) The SN260 tops out at 2.2GBps write speed. Check your .
    2) The last thing you'd want to do after spending 100k$ on RAM is to spend another 100k$ on SSDs that you don't need.
    3) The whole point of an in-memory database is the fact that it might be updating more frequently than what an array of SSDs can handle. Something like high frequency trading. So no flushing, unfortunately.
    4) There's redundancy, in case of component failure. Still, redundancy can only do so much.
    5) RAM fails far less frequently than PSUs and other electronics.
    6) Yeah, power backup solutions are bulky and fail often, even when not in use. If your machine fails and your generator/PSU fails on you too, you are SOL.
    7) If you flush the content of RAM onto SSDs, you have to keep the entire system online. That's 1000 watts per node. If you flush onto an NVDIMM, you only need to keep the DIMMs alive - 5 watts per DIMM at most, or about 150W per 32-DIMM system. That's why small supercaps are sufficient. There are many NVDIMM solutions that are self-contained within a single node - that is not something you can do with power backup/generator solutions.
    8) NVDIMMs/supercapacitors are far more compact and reliable than PSU/power generator solutions, less power hungry, and less expensive than SSD solutions. What more do you want?
    I get it, it is pointless to you. Don't extrapolate that to everybody else, please.
  • Hereiam2005 - Monday, November 13, 2017

    I forgot to add, the time to flush the contents of an NVDIMM to the on-DIMM SSD is about 40 sec, which is constant no matter how much RAM you have. And that is just the baseline, which will be improved with better NVDIMMs in the future, without having to change the underlying hardware.
    With SSDs the write speed peaks around 2 GB/s, and is generally capped by the speed of PCIe itself - that's why I used a generous 3GB/s figure for an imaginary SSD that is not yet on the market.
  • III-V - Monday, November 13, 2017

    God, you are such a fucking idiot. Of course the filthy consumer peasant doesn't understand enterprise hardware, lol.
  • ddriver - Monday, November 13, 2017

    Granted, god did mess up making the monumental mistake of creating you, but still, I don't think he deserves to be called "a fucking idiot", especially by you of all people.
