Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
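
As a rough sanity check, the implied overhead can be worked out from the visible package count. The per-package figures below are inferences (64GB per package, i.e. four 128Gb dies), not Intel-confirmed numbers:

```python
# Back-of-the-envelope check: ten packages on a 512GB module.
# Assumption (not confirmed by Intel): 64GB per package, four 128Gb dies each.
packages = 10
gb_per_package = 64
raw_gb = packages * gb_per_package              # 640GB raw
usable_gb = 512

overhead = (raw_gb - usable_gb) / usable_gb     # extra bits per usable bit
ecc_dram_overhead = 8 / 64                      # ECC DIMM: 72 bits stored per 64
ratio = overhead / ecc_dram_overhead            # how many times ECC DRAM's overhead
```

With these assumptions the module carries 25% extra capacity against ECC DRAM's 12.5%, i.e. twice the overhead, matching the figure above.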

The Optane DC Persistent Memory modules are currently sampling and will be shipping for revenue later this year, but only to select customers. Broad availability is planned for 2019. In a similar strategy to how Intel brought Optane SSDs to market, Intel will be offering remote access to systems equipped with Optane DC Persistent Memory so that developers can prepare their software to make full use of the new memory. Intel is currently taking applications for access to this program. The preview systems will feature 192GB of DRAM and 1TB of Optane DC Persistent Memory, plus SATA and NVMe SSDs. The preview program will run from June through August. Participants will be required to keep their findings secret until Intel gives permission for publication.

Intel is not officially disclosing whether it will be possible to mix and match DRAM and Optane DC Persistent Memory on the same memory controller channel, but the 192GB DRAM capacity for the development preview systems indicates that they are equipped with a 16GB DRAM DIMM on every memory channel. Also not disclosed in today's briefing: power consumption, clock speeds, specific endurance ratings, and whether Optane DC Persistent Memory will be supported across the Xeon product line or only on certain SKUs. Intel did vaguely promise that Optane DIMMs will be operational for the normal lifetime of a DIMM, but we don't know what assumptions Intel is making about workload.
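
The DIMM-count inference is simple arithmetic; the dual-socket, six-channel layout is our assumption, consistent with current Xeon-SP platforms rather than anything Intel has stated:

```python
# 192GB of DRAM split into 16GB DIMMs, one per channel.
# Assumption (ours, not Intel's): a dual-socket system, as on Xeon-SP.
dram_total_gb = 192
dimm_gb = 16
dimms = dram_total_gb // dimm_gb        # one DIMM per populated channel
sockets = 2
channels_per_socket = dimms // sockets  # channels per CPU
```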

Intel has been laying the groundwork for application-level persistent memory support for years through their open-source Persistent Memory Development Kit (PMDK) project, known until recently as NVM Library. This project implements the SNIA NVM Programming Model, an industry standard for the abstract interface between applications and operating systems that provide access to persistent memory. The PMDK project currently includes libraries to support several usage models, such as a transactional object store or log storage. These libraries build on top of existing DAX capabilities in Windows and Linux for direct memory-mapped access to files residing on persistent memory devices.
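
PMDK itself is a C library, but the underlying programming model — persistent memory exposed to applications as a memory-mapped file — can be sketched in a few lines. The Python below is purely illustrative: on a real DAX mount the stores would reach 3D XPoint media directly, and PMDK's pmem_persist() and transactional APIs would replace the plain flush. The path is a hypothetical stand-in, not a real DAX mount.

```python
import mmap
import os
import tempfile

# Illustration of the SNIA NVM Programming Model: the application maps a
# file and manipulates its contents with ordinary loads and stores. On a
# DAX filesystem this bypasses the page cache; here we use a temp file as
# a stand-in for a file on persistent memory.
path = os.path.join(tempfile.gettempdir(), "pmem_demo")
size = 4096

with open(path, "wb") as f:
    f.truncate(size)                  # create a fixed-size backing file

fd = os.open(path, os.O_RDWR)
buf = mmap.mmap(fd, size)             # direct load/store access to the file
buf[0:5] = b"hello"                   # a plain store, no write() syscall
buf.flush()                           # analogous to libpmem's pmem_persist()
buf.close()
os.close(fd)
```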

Optane SSD Endurance Boost

The existing enterprise Optane SSD DC P4800X initially launched with a write endurance rating of 30 drive writes per day (DWPD) for three years, and when it hit widespread availability Intel extended that to 30 DWPD for five years. Intel is now preparing to introduce new Optane SSDs with a 60 DWPD rating, still based on first-generation 3D XPoint memory. Another endurance rating increase isn't too surprising: Intel has been accumulating real-world reliability information about their 3D XPoint memory, and they have been under some pressure from competition like Samsung's Z-NAND, which also offers 30 DWPD using more conventional flash memory.
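
For scale, here is what a 60 DWPD rating amounts to over a five-year warranty for a hypothetical drive at the P4800X's 375GB launch capacity (the capacity of the new 60 DWPD models has not been disclosed):

```python
# Total bytes written implied by a 60 DWPD, five-year rating.
# Capacity is hypothetical: the P4800X's 375GB launch capacity.
capacity_tb = 0.375
dwpd = 60
years = 5
total_pb = capacity_tb * dwpd * 365 * years / 1000  # petabytes written
```

That works out to roughly 41PB of writes over the warranty period, on a single 375GB drive.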

72 Comments

  • eddman - Wednesday, May 30, 2018 - link

    3D XPoint =/= Optane
    PCIe vs. DIMM: apples vs. oranges

    Also: https://www.theregister.co.uk/2016/09/29/xpoint_pr...
  • Lolimaster - Wednesday, May 30, 2018 - link

    It's not about the raw transfer rate, IT'S ABOUT LATENCY. Z-NAND and regular NVMe both have latencies of roughly 50 microseconds or more. Optane is already 10x better in that scenario.
  • tuxRoller - Wednesday, May 30, 2018 - link

    You're wrong, but understandably so.
    There are two unknowns, afaics. One is the overhead imposed by NVMe, while the other is the media access time of XPoint.
    An estimate of the first one can be had by diffing the results of a loopback ramdisk against an NVMe RAM device (I'm not even sure such a thing exists). If an NVMe RAM device isn't feasible, you can just test against the ramdisk. That would at least give us an idea of the block layer overhead.
    With those results in hand you could then test an XPoint device, and the results should provide, at a minimum, a ceiling for XPoint latency.
  • tomatotree - Thursday, May 31, 2018 - link

    DRAM-backed NVMe devices exist, at least in labs. They were used when the NVMe spec was being developed, to make sure it had headroom to support devices faster than NAND, which weren't yet ready for prime time. Not sure if anyone ever productized such a device though. They still had latencies in the 10us range, due to the latency of the PCIe interrupt, the OS syscall, and the time it takes for the CPU to wake up and service the interrupt on completion (though polling drivers can avoid this, at the cost of higher CPU usage). Getting rid of those latencies is the whole motivation for putting XPoint on the DRAM bus in the first place.
  • Billy Tallis - Thursday, May 31, 2018 - link

    Lite-On recently introduced an NVMe drive for servers that provides ~200GB of flash storage and a few GB of DRAM-backed storage that gets saved to flash if there's a power failure. The intention is that the flash is used as a boot drive, and rather than let that PCIe port be underutilized after boot, they give the SSD some extra DRAM and make it a fast journal device.
  • tuxRoller - Friday, June 01, 2018 - link

    Sorry, yes, I meant as a product. I've read about these, indirectly, on the lkml, and the lowest latencies were around 5us, but I don't recall the specifics of the system being mentioned.
    Regardless, you'll read no disagreements from me. My only reason for posting was to mention a few areas of uncertainty, the union of which is likely to contain the really key data wrt XPoint.
  • nagi603 - Wednesday, May 30, 2018 - link

    Finally, enough RAM for Chrome... until a new version comes out :D
  • Lolimaster - Wednesday, May 30, 2018 - link

    Just change the Chrome shortcut to "process per site" and you'll fix Chrome, especially when you have many tabs originating from the same site. For me it went from unusable to smooth.

    40-50 tabs with the default settings is a nightmare.
  • Lolimaster - Wednesday, May 30, 2018 - link

    Performance-wise, at least in bandwidth, it's something around DDR2-667/800. Latency is still 100x too high.

    Optane was supposed to have latency in the range of hundreds of nanoseconds (0.1-0.99 microseconds); right now it's in the range of 6-10 microseconds (6000-10000 nanoseconds).
  • Old_Fogie_Late_Bloomer - Wednesday, May 30, 2018 - link

    Either your numbers are wrong or your math is. If the target latency was 0.1-1us and the actual latency is 6-10us, then the discrepancy is closer to 10x.

    Unless you're disingenuously suggesting that the 10us absolute worst case can be compared to the 0.1us absolute best case, I guess. Anyway, my point is that if you want to convince people that there's a "100x" problem, your numbers don't support your case.
