Final Words: Is 3D XPoint Ready?

The Intel Optane SSD DC P4800X is a very high-performing enterprise SSD, but more importantly it is the first shipping product using Intel's 3D XPoint memory technology. After a year and a half of talking up 3D XPoint, Intel has finally shipped something. The P4800X proves that 3D XPoint memory is real and that it really works. The P4800X is just a first-generation product, but it's more than sufficient to establish 3D XPoint memory as a serious contender in the storage market.

If your workload matches its strengths, the P4800X offers performance that no other storage product can currently match. This means high-throughput random access under very strict latency requirements: the quality of service Optane achieves for latency on both reads and writes, especially in heavy environments with a mixed read/write workload, is a significant margin ahead of anything available on the market.
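Latency QoS figures like these are high-percentile cutoffs over the completion times of individual I/Os in a run, rather than averages. As a minimal sketch (with made-up sample data, not the review's measurements), a percentile latency can be computed like this:

```python
# Sketch: latency "QoS" is typically reported as a high percentile (e.g. the
# 99.999th) of per-I/O completion times. The sample data below is hypothetical.
import random


def latency_percentile(latencies_us, pct):
    """Return the latency (in microseconds) at the given percentile."""
    ranked = sorted(latencies_us)
    # index of the first sample at or above the requested percentile
    idx = min(len(ranked) - 1, int(len(ranked) * pct / 100.0))
    return ranked[idx]


if __name__ == "__main__":
    random.seed(0)
    # hypothetical run: most I/Os complete around 10 us, a few outliers slower
    samples = [random.gauss(10, 1) for _ in range(100_000)]
    samples += [random.uniform(50, 200) for _ in range(100)]
    print(f"99th percentile:     {latency_percentile(samples, 99):.1f} us")
    print(f"99.999th percentile: {latency_percentile(samples, 99.999):.1f} us")
```

The point of quoting a 99.999th percentile rather than a mean is that a drive with rare-but-huge outliers can have a good average and still miss service-level targets; this is exactly the metric where the P4800X pulls ahead.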


At 50/50 reads/writes, latency QoS for the DC P4800X is 30x better than the competition

The Intel Optane SSD DC P4800X is not the fastest SSD ever on every single test. It's based on a revolutionary technology, but no matter how high expectations were, very rarely does a first-generation product take over the world unless it becomes ubiquitous and cheap on day one. The Optane SSD is ultimately an expensive niche product. If you don't need high throughput random access with the strictest latency requirements, the Optane SSD DC P4800X may not be the best choice. It is very expensive compared to most flash-based SSDs.

With the Optane SSD and 3D XPoint memory now clearly established as useful and usable, the big question is how broad its appeal will be. The original announcements around Optane promised a lot, and this initial product delivers on only some of those metrics, so to some extent the P4800X may have to grow its own market and teach partners what Optane is capable of today. Working with developers and partners is going to be key here: Intel has to perform outreach and entice software developers to write applications that rely on extremely fast storage. That said, there are plenty of market segments that can never get enough storage performance, so anything above what is available in the market today will be more than welcome.

There's still much more we would like to know about the Optane SSD and the 3D XPoint memory it contains. Since our testing was remote, we have not yet had the chance to look under the drive's heatsink, or to measure the power efficiency of the Optane SSD and compare it against other SSDs. We are awaiting an opportunity to get a drive in hand, and expect some of the secrets under the hood to be exposed in due course as drives filter through the ecosystem.

117 Comments

  • lilmoe - Thursday, April 20, 2017 - link

    With all the Intel hype and PR, I was expecting the charts to be a bit more, um, flat? Looking at the deltas from start to finish of each benchmark, it looks like the drive has lots of characteristics similar to current flash based SSDs for the same price.

    Not impressed. I'll wait for your hands on review before bashing it more.
  • DrunkenDonkey - Thursday, April 20, 2017 - link

    This is what the reviews don't explain and leave people in total darkness. You think your shiny new samsung 960 pro with 2.5g/s will be faster than your dusty old 840 evo barely scratching 500? Yes? Then you are in for a surprise - graphs look great, but check on loading times and real program/game benches and see it is exactly the same. That is why SSD reviews should always either divide to sections for the different usage or explain in great simplicity and detail what you need to look for in a PART of the graph. This one is about 8-10 times faster than your SSD so it IS impressive a lot, but price is equally impressive.
  • lilmoe - Friday, April 21, 2017 - link

    Yes, that's the problem with readers. They're comparing this to the 960 Pro and other M.2 and even SATA drives. Um.... NO. You compare this with similar form factor SSDs with similar price tags and heat sinks.

    And no, even QD1 benches aren't that big of a difference.
  • lilmoe - Friday, April 21, 2017 - link

    "And no, even QD1 benches aren't that big of a difference"
    This didn't sound right, I meant to say that even QD1 isn't very different **compared to enterprise full PCIe SSDs*** at similar prices.
  • sor - Friday, April 21, 2017 - link

    You're crazy. This thing is great. The current weak spot of NAND is on full display here, and XPoint is decimating it. We all know SSDs chug when you throw a lot of writes at them; all of AnandTech's "performance consistency" benchmarks show that IOPS take a nose dive if you benchmark for more than a few seconds. XPoint doesn't break a sweat and is orders of magnitude faster.

    I'm also pleasantly surprised at the consistency of sequential. A lot of noise was made about their sequential numbers not being as good as the latest SSDs, but one thing not considered is that SSDs don't hit that number until you get to high queue depths. For individual transfers xpoint seems to actually come closer to max performance.
  • tuxRoller - Friday, April 21, 2017 - link

    I think the controllers have a lot to do with the perf.
    Its perf profile is eerily similar to the P3700's in too many cases.
  • Meteor2 - Thursday, April 20, 2017 - link

    So... what is a queue depth? And what applications result in short or long QDs?
  • DrunkenDonkey - Thursday, April 20, 2017 - link

    Queue depth is the number of concurrent accesses to the drive in flight at the same time.

    For desktop/gaming you are looking at 4k random read (95-99% of the time), QD=1
    For movie processing you are looking at sequential read/write at QD=1
    For light file server you are looking at both higher blocks, say 64k random read and also sequential read, at QD=2/4
    For heavy file server you go for QD=8/16
    For light database you are looking for QD=4, random read/random write (depends on db type)
    For heavy database you are looking for QD=16/more, random read/random write (depends on db type)
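The queue depths in the list above can be made concrete with a small sketch. This is a hypothetical illustration, not a real benchmark harness: it keeps a given number of 4K random reads in flight against a scratch file using a thread pool, where the worker count plays the role of queue depth. It assumes a POSIX system (`os.pread` is not available on Windows).

```python
# Sketch (hypothetical, not a real benchmark): issue 4K random reads against
# a scratch file at a chosen queue depth. "Queue depth" here is modeled as
# the number of reads kept in flight simultaneously by the thread pool.
import os
import random
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096                  # 4K random reads, the desktop-workload case above
FILE_SIZE = 16 * 1024 * 1024  # 16 MiB scratch file


def run_at_queue_depth(path, qd, n_reads=256):
    """Keep `qd` reads in flight at once; return total bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # block-aligned random offsets within the file
        offsets = [random.randrange(0, (FILE_SIZE - BLOCK) // BLOCK) * BLOCK
                   for _ in range(n_reads)]
        with ThreadPoolExecutor(max_workers=qd) as pool:
            results = pool.map(lambda off: os.pread(fd, BLOCK, off), offsets)
            return sum(len(r) for r in results)
    finally:
        os.close(fd)


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(FILE_SIZE))
        path = f.name
    try:
        for qd in (1, 4, 16):  # desktop, light server, heavy server
            total = run_at_queue_depth(path, qd)
            print(f"QD={qd}: read {total} bytes")
    finally:
        os.unlink(path)
```

At QD=1 each read must complete before the next is issued, so per-I/O latency dominates; at higher depths the drive can work on many requests at once, which is why throughput charts climb with queue depth.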
  • Meteor2 - Thursday, April 20, 2017 - link

    Thank you!
  • bcronce - Thursday, April 20, 2017 - link

    A heavy file server only has such a small queue depth if using spinning rust, to keep down latency. When using SSDs, file servers have QDs in 64-256 range.
