Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
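The overhead arithmetic above can be checked directly. The sketch below assumes ten 64GB 3D XPoint packages per 512GB module; the per-package capacity is our assumption, not something Intel has confirmed:

```python
# Hypothetical breakdown of a 512GB Optane DC Persistent Memory module,
# assuming ten 64GB 3D XPoint packages (e.g. four 128Gb/16GB dies each).
packages = 10
gb_per_package = 64
raw_gb = packages * gb_per_package            # 640GB raw capacity
usable_gb = 512
overhead = (raw_gb - usable_gb) / usable_gb   # 0.25 -> 25% overhead

# A standard ECC DRAM DIMM adds one check chip per eight data chips:
ecc_dram_overhead = 1 / 8                     # 12.5% overhead

print(overhead / ecc_dram_overhead)           # -> 2.0, i.e. twice ECC DRAM
```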

The Optane DC Persistent Memory modules are currently sampling and will be shipping for revenue later this year, but only to select customers. Broad availability is planned for 2019. In a similar strategy to how Intel brought Optane SSDs to market, Intel will be offering remote access to systems equipped with Optane DC Persistent Memory so that developers can prepare their software to make full use of the new memory. Intel is currently taking applications for access to this program. The preview systems will feature 192GB of DRAM and 1TB of Optane Persistent Memory, plus SATA and NVMe SSDs. The preview program will run from June through August. Participants will be required to keep their findings secret until Intel gives permission for publication.

Intel is not officially disclosing whether it will be possible to mix and match DRAM and Optane Persistent Memory on the same memory controller channel, but the 192GB DRAM capacity for the development preview systems indicates that they are equipped with a 16GB DRAM DIMM on every memory channel. Also not disclosed in today's briefing: power consumption, clock speeds, specific endurance ratings, and whether Optane DC Persistent Memory will be supported across the Xeon product line or only on certain SKUs. Intel did vaguely promise that Optane DIMMs will be operational for the normal lifetime of a DIMM, but we don't know what assumptions Intel is making about workload.
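The 192GB figure works out cleanly if the preview systems are two-socket machines with six memory channels per socket; that topology is our assumption, though it matches expectations for the next-generation Xeon platform:

```python
# Hypothetical preview-system DRAM layout: two sockets, six channels each,
# one 16GB DIMM per channel (the platform topology is an assumption).
sockets = 2
channels_per_socket = 6
dimm_gb = 16
total_gb = sockets * channels_per_socket * dimm_gb
print(total_gb)  # -> 192, matching the quoted DRAM capacity
```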

Intel has been laying the groundwork for application-level persistent memory support for years through their open-source Persistent Memory Development Kit (PMDK) project, known until recently as NVM Library. This project implements the SNIA NVM Programming Model, an industry standard for the abstract interface between applications and operating systems that provide access to persistent memory. The PMDK project currently includes libraries to support several usage models, such as a transactional object store or log storage. These libraries build on top of existing DAX capabilities in Windows and Linux for direct memory-mapped access to files residing on persistent memory devices.
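Real persistent-memory code would use PMDK's C libraries on a DAX filesystem, but the underlying programming model is simple: map a file into the address space, update it with ordinary loads and stores, then flush to make the stores durable. The Python sketch below illustrates that load/store-plus-flush pattern on an ordinary file, with `msync` standing in for the cache-flush durability step that real persistent memory uses:

```python
import mmap
import os
import tempfile

# Illustration of the memory-mapped persistence model PMDK builds on.
# On a DAX filesystem backed by persistent memory, stores reach the media
# directly and a CPU cache flush makes them durable; here an ordinary file
# and flush()/msync() stand in for that durability point.
path = os.path.join(tempfile.mkdtemp(), "pmem_demo")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)                # size the backing file

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)      # map the file into our address space
    m[0:5] = b"hello"                    # update it with ordinary stores
    m.flush()                            # durability point
    m.close()

print(open(path, "rb").read(5))          # -> b'hello', survives the process
```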

Optane SSD Endurance Boost

The existing enterprise Optane SSD DC P4800X initially launched with a write endurance rating of 30 drive writes per day (DWPD) for three years, and when it hit widespread availability Intel extended that to 30 DWPD for five years. Intel is now preparing to introduce new Optane SSDs with a 60 DWPD rating, still based on first-generation 3D XPoint memory. Another endurance rating increase isn't too surprising: Intel has been accumulating real-world reliability information about their 3D XPoint memory, and they have been under some pressure from competition like Samsung's Z-NAND, which also offers 30 DWPD using more conventional flash-based memory.
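For a sense of scale, the sketch below works out what those ratings mean in total bytes written, assuming the 750GB P4800X capacity and a five-year warranty for the new rating; neither the capacity nor the warranty period of the 60 DWPD parts has been announced:

```python
# Total writes implied by a DWPD rating, assuming a 750GB drive and a
# five-year warranty for the 60 DWPD parts (both are assumptions).
capacity_tb = 0.75

def total_petabytes(dwpd, years):
    # DWPD = full-capacity writes per day, every day of the warranty
    return dwpd * capacity_tb * 365 * years / 1000

print(round(total_petabytes(30, 5), 1))   # current P4800X rating -> 41.1 PB
print(round(total_petabytes(60, 5), 1))   # upcoming 60 DWPD rating -> 82.1 PB
```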

73 Comments

  • eastcoast_pete - Wednesday, May 30, 2018 - link

    Disregard the last two sentences of the above - stupid mobile interface + touch typing. What I meant to say in my last two sentences is that having your precious database always updated in non-volatile memory might be worth it if your business depends on it. For home use, this is still years away from being useful.
  • Jon Tseng - Thursday, May 31, 2018 - link

    You hit the nail on the head. The most obvious use case at present is running large (terabyte-scale and potentially petabyte-scale) databases in-memory. HANA is an obvious example.

    I don't think the non-volatile part is a big thing (let's face it, any datacenter worth its salt will have a decent UPS system), but the directly accessible memory + performance increase vs. NAND is the game changer for this workload.
  • peevee - Friday, June 1, 2018 - link

    Non-huge real databases (meaning ones with ACID guarantees) will also benefit, because ACID requires write-through, which limits transaction rates to storage performance (especially latency).
    There is real potential for improvement here if the OS maps persistent pages directly into the database process's memory and the database software is aware of their persistence, instead of treating them as a dumb RAM disk that piles unnecessary overhead on top of relatively fast DDR4 (fast relative to disks, still very slow relative to CPUs).

    The next step would be replacing DRAM with smaller, more-expensive-per-GB SRAM on a much faster bus (stacked with the CPU for shorter lines) and having 3D XPoint pick up the rest. At least on single-CPU machines. And replacing the last-level cache (up to 30MB on Xeons) with more cores.
  • duploxxx - Thursday, May 31, 2018 - link

    So now I will have to buy the next Xeon generation, which will be a poor update, just to get Optane support? And there will again be less room for memory on the Intel side, since these modules replace DIMMs, where capacity is already less than the AMD counterpart and artificially limited depending on CPU type... way to go, Intel.

    Oh wait, I can already buy it today: it's called HPE Persistent Memory...

    Let's be honest about the implementation: OS support for this is limited, and the use case is limited. It has a future, but a limited one because of the introduction of NVMe. NVMe slots are expandable; memory channels are always limited.
  • Landos - Thursday, May 31, 2018 - link

    Are there applications here for scientific computing as well? Problems like computational chemistry and physics that require operations on very large, non-sparse matrices, or grid-based solvers?
  • flgt - Thursday, May 31, 2018 - link

    Seems like everyone here is stuck on general compute. Intel is providing yet another piece of hardware to crush very specific tasks. If you’re one of the big boys and can afford the engineers to put these systems together, it’s not a big deal to pay the premium to Intel. It’s another nice diversification area for Intel.
  • eva02langley - Thursday, May 31, 2018 - link

    Seems to me that Intel is trying to solve one of the biggest problems of quantum computing: memory capacity. I see potential in this area at least.
  • pogostick - Thursday, May 31, 2018 - link

    This tech puts a bad taste in my mouth. For some reason it "feels" short lived. Is this a workaround for limited IO availability? I'm thinking 48 dedicated PCIe lanes, directly connected to the CPU, for a dozen NVMe drives in RAID 0.
  • Billy Tallis - Thursday, May 31, 2018 - link

    If it's a workaround for anything, it's the difficulty of improving the latency of an interrupt-driven block storage protocol. NVMe was designed to be more or less the lowest latency storage protocol possible to layer over PCIe, and that combination still adds substantial overhead when you're using a storage medium as fast as 3D XPoint. The memory bus is the only place you can attach storage and avoid that overhead.
  • peevee - Friday, June 1, 2018 - link

    PCIe interrupt processing latencies (through MSI) on Intel were about 500ns 10 years ago on old platforms with separate MCH. See
    https://www.intel.com/content/dam/www/public/us/en...

    I am sure that now, with newer PCIe versions and newer, faster CPUs, it can only be lower.

    Of course NVMe introduces its own overhead. But even 30-microsecond latencies are far more than just one interrupt.
