Final Words: Is 3D XPoint Ready?

The Intel Optane SSD DC P4800X is a very high-performing enterprise SSD, but more importantly it is the first shipping product using Intel's 3D XPoint memory technology. After a year and a half of talking up 3D XPoint, Intel has finally shipped something. The P4800X proves that 3D XPoint memory is real and that it really works. The P4800X is just a first-generation product, but it's more than sufficient to establish 3D XPoint memory as a serious contender in the storage market.

If your workload matches its strengths, the P4800X offers performance that no other storage product can currently provide: high-throughput random access under very strict latency requirements. The latency quality of service the Optane SSD achieves on both reads and writes, especially under heavy mixed read/write workloads, is a significant margin ahead of anything else on the market.

At 50/50 reads/writes, latency QoS for the DC P4800X is 30x better than the competition
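
For readers curious how tail-latency QoS numbers like these are typically gathered, below is a minimal sketch rather than the exact methodology or settings used in this review: it runs a 50/50 4kB random read/write job with fio and pulls the completion-latency percentiles out of fio's JSON output (field names as in fio 3.x). The device path, queue depth, and runtime are placeholder values.

    #!/usr/bin/env python3
    # Minimal sketch (not this review's methodology): run a 50/50 4kB random
    # read/write job with fio and report the completion-latency percentiles
    # from its JSON output. Device path, queue depth and runtime are
    # placeholders; note that writing to a raw block device is destructive.
    import json
    import subprocess

    DEVICE = "/dev/nvme0n1"  # placeholder: the drive under test

    fio_cmd = [
        "fio",
        "--name=qos-mixed",
        f"--filename={DEVICE}",
        "--ioengine=libaio",
        "--direct=1",              # bypass the page cache
        "--rw=randrw",
        "--rwmixread=50",          # 50% reads / 50% writes
        "--bs=4k",
        "--iodepth=16",
        "--runtime=60",
        "--time_based",
        "--percentile_list=99:99.9:99.99:99.999",
        "--output-format=json",
    ]

    result = subprocess.run(fio_cmd, capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]

    for direction in ("read", "write"):
        # fio 3.x reports completion latency in nanoseconds under "clat_ns"
        percentiles = job[direction]["clat_ns"]["percentile"]
        for level, ns in sorted(percentiles.items(), key=lambda kv: float(kv[0])):
            print(f"{direction:5s} p{float(level):g}: {ns / 1000:.1f} us")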

The Intel Optane SSD DC P4800X is not the fastest SSD ever on every single test. It is based on a revolutionary technology, but however high expectations were, a first-generation product rarely takes over the world unless it arrives ubiquitous and cheap on day one. The Optane SSD is ultimately an expensive niche product. If you don't need high-throughput random access with the strictest latency requirements, the P4800X may not be the best choice: it is very expensive compared to most flash-based SSDs.

With the Optane SSD and 3D XPoint memory now clearly established as useful and usable, the big question is how broad their appeal will be. The original announcements around Optane promised a lot, and this initial product delivers on only some of those metrics, so to some extent the P4800X may have to grow its own market and re-teach partners what Optane is capable of today. Working with developers and partners is going to be key here: Intel has to perform outreach and entice software developers to write applications that rely on extremely fast storage. That said, there are plenty of market segments that can never get enough storage performance, so anything beyond what is available today will be more than welcome.

There's still much more we would like to know about the Optane SSD and the 3D XPoint memory it contains. Since our testing was remote, we have not yet had the chance to look under the drive's heatsink, or to measure the power efficiency of the Optane SSD and compare it against other SSDs. We are awaiting an opportunity to get a drive in hand, and expect some of the secrets under the hood to be exposed in due course as drives filter through the ecosystem.

Comments

  • melgross - Tuesday, April 25, 2017 - link

    You're making the mistake those who know nothing make, which is surprising for you. This is a first-generation product. It will get much faster, and much cheaper, as time goes on. NAND will stagnate. You also have to remember that Intel never made the claim that this was as fast as RAM, or that it would be. The closest they came was to say that this would be in between NAND and RAM in speed. And yes, for some uses, it might be able to replace RAM. But that could be several generations down the road, possibly 5 years or so.
  • tuxRoller - Sunday, April 23, 2017 - link

    I'm not sure I understand you.
    You talk about "pages", but, I hope, the reviewer was only using direct I/O, so there would be no page cache.
    It's very unclear where you are getting this "~100x" number. NVMe-connected DRAM has a plurality of hits around 4-6 us (depending on software), but it also has a distributed latency curve. However, I don't know what the latency is at the 99.999th percentile. The point is that even with DRAM's sub-100ns latency, it's still not staying terribly close to the theoretical minimum latency of the bus.
    Btw, it's not just the controller. A very large amount of latency comes from the block layer itself (amongst other things).
  • Santoval - Tuesday, June 6, 2017 - link

    It is quite possible that Intel artificially weakened the P4800X's performance and durability in order to avoid internal competition with their SSD division (they already did the same with Atoms). If your new technology is *too* good it might make your other, more mainstream technology look bad in comparison and you could see a big drop in sales. Or it might have a "deflationary" effect, where customers delay buying in hope of lower prices later. This way they can also have a clearer storage hierarchy, business-segment wise, where their mainstream products are good, and their niche ones are better but not too good.

    I am not suggesting that it could ever compete with DRAM, just that the potential of 3D XPoint technology might actually be closer to what they mentioned a year ago than the first products they shipped.
  • albert89 - Friday, April 21, 2017 - link

    Intel won't be reducing the price of Optane, but rather will be giving the average consumer a watered-down version which will be charged at a premium but perform only slightly better than the top SSDs. The conclusion? Another overpriced ripoff from Intel.
  • TheinsanegamerN - Thursday, April 20, 2017 - link

    The fastest SSD on the consumer market is the 960 Pro, which can hit 3.2GB/s read under certain circumstances.

    That is the equivalent of single-channel DDR-400 from 2001, and DDR had far lower latencies to boot.

    We are a long, long way from replacing RAM with storage.
  • ddriver - Friday, April 21, 2017 - link

    What makes the biggest impression is that it took a completely different review format to make this product look good. No doubt strictly following Intel's own review guidelines. And of course, not a shred of real-world application. Enter hypetane - the paper dragon.
  • ddriver - Friday, April 21, 2017 - link

    Also, bandwidth is only one side of the coin. XPoint is 30-100+ times more latent than DRAM, meaning the CPU will have to wait 30-100+ times longer before it has data to compute, and DRAM is already too slow in this aspect, so you really don't want to go any slower.

    I see a niche for hypetane - RAM-less systems sporting very slow CPUs. Only a slow CPU will not be wasted on having to wait on working memory. Server CPUs don't really need to crunch that much data either, if any, which is paradoxical, seeing how Intel will only enable AVX-512 on Xeons, so it appears that the "amazingly fast" and overpriced hypetane is at home only in simple low-end servers, possibly paired with those many-core Atom chips. Even overpriced, it would be kind of a decent deal, as it offers about 3 times the capacity per dollar of DRAM; paired with wimpy Atoms it could make for a decent, simple, low-cost, frequent-access server.
  • frenchy_2001 - Friday, April 21, 2017 - link

    You are missing the usefulness of it entirely.
    Yes, it is a niche product.
    And I even agree, Intel is hyping it and offering it to consumers with minimal benefit (besides Intel's bottom line).
    But it realistically slots between NAND and DRAM.
    This review shows that it has lower latency than NAND and higher density than DRAM.
    This is the play.

    You say it cannot replace DRAM, and for most usage (by far) you are right. However, for a small niche that works with very big data sets (like finance or exploration), having more memory, although slower, will still be much faster than memory + swap (to slower NAND storage).

    Let me repeat, this is a niche product, but it has its uses.
    Intel marketing is hyping it and trying to use it where its tradeoffs (particularly price) make little sense, but the technology itself is good (if limited).
  • wumpus - Sunday, April 23, 2017 - link

    Don't be so sure that latency is keeping it from being used as [secondary] main memory. A 4GB machine can actually function (more or less) for office duty and some iffy gaming. I'd strongly suspect that a 4-8GB stack of HBM (preferably the low-cost 512-bit systems, as the CPU really only wants 512-bit chunks of memory at a time) with the rest backed by 3D XPoint would still be effective even at this high latency. Any improvement is likely to remove latency as something that would stop it (and current software can use the current stack [with PCIe connection] to treat 3D XPoint as "swappable RAM").

    The endurance may well keep this from happening (it is on par with SLC).

    The other catch is that this is a pretty steep change to the entire memory system. Expect Intel to have huge internal fights over what the memory map should look like, where the HBM goes (does Intel pay to manufacture an expensive CPU module or foist it on down the line), and whether you even use HBM (if Raven Ridge does, I'd expect that Intel would have to if they tried to use XPoint as main memory). The big question is what the "cache line" of the DRAM would be: the current stack only works with 4k, the CPU "wants" 512 bits, and HBM is closer to 4k. 4k looks like a no-brainer, but you still have to put in a funky L5/buffer that deals with the huge cache line, or waste a ton of [top-level, not sure if L3 or L4] cache by giving it 4k cache lines.
  • melgross - Tuesday, April 25, 2017 - link

    What is it with you and RAM? This isn't a RAM replacement for most any use. Intel hasn't said that it is. Why are you insisting on comparing it to RAM?
