Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
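As a rough sanity check on that inference, the arithmetic works out as follows. The package count and the eight-of-ten data split are assumptions based on the photos, not anything Intel has confirmed:

```python
# Hypothetical back-of-the-envelope check: ten identical 3D XPoint packages,
# with only eight packages' worth of capacity exposed to the user.
usable_gb = 512
packages = 10
data_packages = 8  # assumed split; Intel has not disclosed the layout

raw_gb = usable_gb * packages / data_packages       # 640.0 GB raw
optane_overhead = (raw_gb - usable_gb) / usable_gb  # 0.25
ecc_dimm_overhead = 1 / 8                           # the 9th chip on a standard ECC DIMM

print(raw_gb, optane_overhead / ecc_dimm_overhead)  # 640.0 2.0
```

Under those assumptions, the 640GB raw figure and the factor-of-two overhead relative to ECC DRAM both fall out directly.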

The Optane DC Persistent Memory modules are currently sampling and will be shipping for revenue later this year, but only to select customers. Broad availability is planned for 2019. In a similar strategy to how Intel brought Optane SSDs to market, Intel will be offering remote access to systems equipped with Optane DC Persistent Memory so that developers can prepare their software to make full use of the new memory. Intel is currently taking applications for access to this program. The preview systems will feature 192GB of DRAM and 1TB of Optane Persistent Memory, plus SATA and NVMe SSDs. The preview program will run from June through August. Participants will be required to keep their findings secret until Intel gives permission for publication.

Intel is not officially disclosing whether it will be possible to mix and match DRAM and Optane Persistent Memory on the same memory controller channel, but the 192GB DRAM capacity for the development preview systems indicates that they are equipped with a 16GB DRAM DIMM on every memory channel. Also not disclosed in today's briefing: power consumption, clock speeds, specific endurance ratings, and whether Optane DC Persistent Memory will be supported across the Xeon product line or only on certain SKUs. Intel did vaguely promise that Optane DIMMs will be operational for the normal lifetime of a DIMM, but we don't know what assumptions Intel is making about workload.
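The 192GB figure is consistent with, for example, a two-socket system populating every channel with one 16GB DIMM. The socket and channel counts below are assumptions for illustration, not disclosed specifications:

```python
# Assumed topology: two sockets, six DDR4 channels per socket,
# one 16GB DRAM DIMM per channel.
sockets = 2
channels_per_socket = 6
dimm_capacity_gb = 16

total_dram_gb = sockets * channels_per_socket * dimm_capacity_gb
print(total_dram_gb)  # 192
```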

Intel has been laying the groundwork for application-level persistent memory support for years through their open-source Persistent Memory Development Kit (PMDK) project, known until recently as NVM Library. This project implements the SNIA NVM Programming Model, an industry standard for the abstract interface between applications and operating systems that provide access to persistent memory. The PMDK project currently includes libraries to support several usage models, such as a transactional object store or log storage. These libraries build on top of existing DAX capabilities in Windows and Linux for direct memory-mapped access to files residing on persistent memory devices.
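To illustrate the programming model rather than the PMDK API itself, here is a minimal sketch of DAX-style memory-mapped persistence using an ordinary file and `mmap` as a stand-in for a file on a DAX filesystem. PMDK's libpmem follows the same shape, but replaces the `flush()`/`msync` step with user-space cache-flush instructions:

```python
# Sketch of the SNIA NVM Programming Model access pattern: map a file,
# store to it directly, then flush to reach the persistence point.
import mmap, os

path = "pmem_demo.bin"  # hypothetical scratch file standing in for a pmem-backed file
with open(path, "wb") as f:
    f.write(b"\0" * 4096)          # size the region up front

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"              # plain stores land directly in the mapping
    m.flush()                      # msync here; pmem_persist() in libpmem
    m.close()

with open(path, "rb") as f:
    readback = f.read(5)
os.remove(path)
print(readback)                    # b'hello'
```

On a real DAX mount backed by Optane DC Persistent Memory, no page cache sits between the stores and the media, which is what makes the flush step the only persistence barrier needed.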

Optane SSD Endurance Boost

The existing enterprise Optane SSD DC P4800X initially launched with a write endurance rating of 30 drive writes per day (DWPD) for three years; when it reached widespread availability, Intel extended that to 30 DWPD for five years. Intel is now preparing to introduce new Optane SSDs with a 60 DWPD rating, still based on first-generation 3D XPoint memory. Another endurance rating increase isn't too surprising: Intel has been accumulating real-world reliability data on its 3D XPoint memory, and it is under some pressure from competitors like Samsung's Z-NAND, which also offers 30 DWPD using more conventional flash-based memory.
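For a sense of scale, a DWPD rating converts to total bytes written as capacity × DWPD × days. Using the 750GB P4800X capacity point as an assumed example at the new rating:

```python
# Assumed example: a 750GB drive (an existing P4800X capacity point)
# at the new 60 DWPD rating over a five-year term.
capacity_gb = 750
dwpd = 60
years = 5

total_writes_gb = capacity_gb * dwpd * 365 * years
print(total_writes_gb / 1e6, "PB")  # 82.125 PB over the drive's rated life
```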

72 Comments

  • jordanclock - Wednesday, May 30, 2018 - link

    So, just in raw numbers, this seems like it has a long way to go. Correct me if I'm wrong, but we're looking at a tenth the bandwidth and a thousand times the latency, based on the best-case scenario of 2.5GB/s and 20μs.

    Still, I'm sure there will be an improvement over going out to disk for some workloads, even if the performance of the actual XPoint chips isn't improved over NVMe drives or the like.
  • CajunArson - Wednesday, May 30, 2018 - link

    "Correct me if I'm wrong, but we're looking at a tenth the bandwidth and a thousand times the latency, based on the best case scenario of 2.5GB/s and 20μs."

    Well considering you just pulled those numbers out of your backside with no evidence to support them whatsoever, you might want to not jump to stupid conclusions.
  • jordanclock - Wednesday, May 30, 2018 - link

    I actually pulled them from very generous estimates at improvements over the numbers we have seen for the P4800X, the fastest XPoint implementation so far.

    https://www.anandtech.com/show/11930/intel-optane-...

    AnandTech's benchmarks show a bit over 2GB/s and about 30μs. Again, assuming improvements have been made to the controller or memory packages, the reduced overhead compared to a PCI-Express bus, and any tweaking for better response times, I think 2.5GB/s and 20μs are reasonable numbers.

    So these were not pulled from my backside and you're just a rude ass.
  • CajunArson - Wednesday, May 30, 2018 - link

    Yeah, assuming that the PCIe P4800X that trivially saturates its PCIe connection means that the inherent bandwidth of Optanes is limited to that of a PCIe connection kind of shows that you really don't understand what this technology is all about.

    Maybe you should go back to the phone reviews.
  • CajunArson - Wednesday, May 30, 2018 - link

    As a followup to my earlier reply, you literally just said that the fastest HBM2 solutions on the market are pathetically limited to 16GB/sec of bandwidth because a CPU connected to a $10,000 GPU over a PCIe connection can only get data from the HBM2 memory at 16GB/sec.
  • jordanclock - Wednesday, May 30, 2018 - link

    The P4800X uses a PCI-E 3.0 x4 interface, which would have a max bandwidth of around 4GB/s, not 2GB/s. So there is plenty of headroom available there if the P4800X were capable of higher throughput.

    Also I said nothing about HBM2. At all. I think you're confusing me with someone else you're trolling. That's a completely different memory interface and those numbers have nothing to do with the maximum bandwidth of an Optane DIMM.
  • jordanclock - Wednesday, May 30, 2018 - link

    Actually, you know what, you're right. The PCI-E 3.0 x4 bus would be limited to 2GB/s, so it is possible that the P4800X is bottlenecked AND the XPoint DIMMs could perform higher.

    But you found the absolute worst way to say it and just come across as another combative troll.
  • p1esk - Wednesday, May 30, 2018 - link

    More importantly, how much of that 30μs latency for the P4800X comes from the PCIe bus?
  • jordanclock - Wednesday, May 30, 2018 - link

    I'm going to guess better than half? Between the clock rate of the bus, the physical distance, and logical overhead.
  • Samus - Thursday, May 31, 2018 - link

    I think the important thing to focus on here is that Optane was never meant to replace SSDs or DRAM. It combines the benefits of both. But it doesn't completely combine the benefits of DRAM, which is still faster.
