Test Configurations

So while the Intel Optane SSD DC P4800X is technically launching today, 3D XPoint memory is still in short supply. Only the 375GB add-in card model has shipped, and only as part of an early limited release program. The U.2 version of the 375GB model and the 750GB add-in card model are planned for a Q2 release, while the U.2 750GB model and the 1.5TB model are expected in the second half of 2017. Intel's biggest enterprise customers, such as the Super Seven, have had access to Optane devices throughout the development process, but broad retail availability is still a little ways off.

Citing the current limited supply, Intel has taken a different approach to review sampling for this product; their general desire for secrecy regarding the low-level details of 3D XPoint has also likely been a factor. Instead of shipping us the Optane SSD DC P4800X to test on our own system, as is normally the case with our storage testing, this time around Intel has only provided us with remote access to a DC P4800X system housed in their data center. Their Non-Volatile Memory Solutions Group maintains a pool of servers to provide partners and customers with access to the latest storage technologies, and Intel's software partners have been using these systems for months to develop and optimize applications to take advantage of Optane SSDs.

Intel provisioned one of these servers for our exclusive use during the testing period, and equipped it with a 375GB Optane SSD DC P4800X and an 800GB SSD DC P3700 for comparison. The P3700 was the U.2 version of the drive and was connected through a PLX PEX 9733 PCIe switch. The Optane SSD under test was initially going to be a U.2 version connected to the same backplane, but Intel found that the PCIe switch was introducing inconsistency in access latency on the order of a microsecond or two, which is a problem when trying to benchmark a drive with a best-case latency of roughly 8µs. Intel swapped the U.2 Optane SSD for an add-in card version that uses PCIe lanes directly from the processor, but the P3700 was still potentially subject to whatever problems the PCIe switch may have caused. Clearly, there's some work to be done to ensure the ecosystem is ready to take full advantage of the performance promised by Optane SSDs, but debugging such issues is beyond the scope of this review.

Intel NSG Marketing Test Server
CPU: 2x Intel Xeon E5-2699 v4
Motherboard: Intel S2600WTR2
Chipset: Intel C612
Memory: 256GB total, Kingston DDR4-2133 CL11 16GB modules
OS: Ubuntu Linux 16.10, kernel 4.8.0-22

The system was running a clean installation of Ubuntu 16.10, with no Intel or Optane-specific software or drivers installed, and the rest of the system configuration was as expected. We had full administrative access to tweak the software to our liking, but chose to leave it mostly in its default state.

Our benchmarking consists of a variety of synthetic workloads generated and measured using fio version 2.19. There are quite a few operating system and fio options that can be tuned, but we generally left them alone: for example, the NVMe driver was not switched to polling mode, CPU affinity was not manually set, and power management and CPU turbo clock speeds were not tweaked. There is work underway to switch fio over to nanosecond-precision time measurement, but it has not yet reached a usable state. Our tests only record latencies in microsecond increments, so mean latencies that report fractional microseconds are simply weighted averages reflecting, for example, how many operations were closer to 8µs than to 9µs.

All tests were run directly on the SSD with no intervening filesystem. Real-world applications will almost always be accessing the drive through a filesystem, but will also be benefiting from the operating system's cache in main RAM, which is bypassed with this testing methodology.
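
To give a concrete sense of how such a test is set up, below is a minimal fio job file sketch for a QD1 4kB random read latency test against the raw block device. This is an illustration of the approach rather than our exact configuration: the device path, run time, and percentile list are placeholders, and the real runs covered many more block sizes and queue depths.

    [global]
    # raw block device, no intervening filesystem (placeholder path)
    filename=/dev/nvme0n1
    # O_DIRECT I/O bypasses the operating system's page cache
    direct=1
    # default asynchronous I/O engine; NVMe polling mode not enabled
    ioengine=libaio
    # run for a fixed time rather than a fixed amount of data
    time_based
    runtime=60

    [qd1-randread]
    rw=randread
    bs=4k
    iodepth=1
    numjobs=1
    # report tail latency percentiles alongside the mean
    percentile_list=50:90:99:99.9:99.99:99.999

Running fio against a job file like this reports mean and percentile completion latencies, which fio 2.19 records in microsecond increments as noted above.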

To provide an extra point of comparison, we also tested the Micron 9100 MAX 2.4TB on one of our own systems, which uses a Xeon E3-1240 v5 processor. To avoid unfairly disadvantaging the Micron 9100, most of the tests were limited to at most 4 threads. Our test system was running the same Linux kernel as the Intel NSG marketing test server and used a comparable configuration, with the Micron 9100 connected directly to the CPU's PCIe lanes rather than through the PCH.

AnandTech Enterprise SSD Testbed
CPU: Intel Xeon E3-1240 v5
Motherboard: ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset: Intel C232
Memory: 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
OS: Ubuntu Linux 16.10, kernel 4.8.0-22

Because this was not a hands-on test of the Optane SSD on our own equipment, we were unable to conduct any power consumption measurements. Due to the limited time available for testing, we were also unable to conduct any systematic testing of write endurance or of the impact of extra overprovisioning on performance. We hope to have the opportunity to conduct a full hands-on review later in the year to address these topics.

Due to time constraints, we were also unable to cover Intel's new Memory Drive Technology software, an optional add-on that can be purchased with the Optane SSD. Memory Drive Technology is a minimal virtualization layer that presents the Optane SSD to software as if it were RAM: the hypervisor exposes to the guest OS a pool of memory equal to the amount of available DRAM plus up to 320GB of the Optane SSD's 375GB capacity. The hypervisor manages data placement to automatically cache hot data in DRAM, so applications and the guest OS cannot explicitly address or allocate the Optane storage. We may get a chance to look at this in the future, as it is an interesting example of how multi-tiered storage will affect the enterprise market over the next few years.

Comments

  • ddriver - Friday, April 21, 2017

    *450 ns, by which I mean lower by 450 ns. And the current xpoint controller is nowhere near hitting the bottleneck of PCIE. It would take a controller that is at least 20 times faster than the current one to even get to the point where PCIE is a bottleneck. And even faster to see any tangible benefit from connecting xpoint directly to the memory controller.

    I'd rather have some nice 3D SLC (better than xpoint in literally every aspect) on PCIE for persistent storage RAM in the dimm slots. Hyped as superior, xpoint is actually nothing but a big compromise. Peak bandwidth is too low even compared to NVME NAND, latency is way too high and endurance is way too low for working memory. Low queue depths performance is good, but credit there goes to the controller, such a controller will hit even better performance with SLC nand. Smarter block management could also double the endurance advantage SLC already has over xpoint.
  • mdriftmeyer - Saturday, April 22, 2017

    ddriver is spot on. Just to clarify an earlier comment: he's correct, and IntelUser2000 is out of his league.
  • mdriftmeyer - Saturday, April 22, 2017

    Spot on.
  • tuxRoller - Friday, April 21, 2017

    We don't know how much slower the media is than dram right now.
    We know that using dram over nvme has similar (though much better worst case) perf to this.
    See my other post regarding polling and latency.
  • bcronce - Saturday, April 22, 2017

    Re-reading, I see it says "typical" latency is under 10us, placing it in spitting distance of DDR3/4. It's the 99.9999th percentile that is 60us for Q1. At Q16, 99.999th percentile is 140us. That means it takes only 140us to service 16 requests. That's pretty much the same as 10us.

    Read Q1 4KiB bandwidth is only about 500MiB/s, but at Q8, it's about 2GiB which puts it on par with DDR4-2400.
  • ddriver - Saturday, April 22, 2017

    "placing it in spitting distance of DDR3/4"

    I hope you do realize that dram latency is like 50 NANOseconds, and 1 MICROsecond is 1000 NANOseconds.

    So 10 us is actually 200 times as much as 50 ns. Thus making hypetane about 200 times slower in access latency. Not 200%, 200X.
  • tuxRoller - Saturday, April 22, 2017

    Yes, the dram media is that fast but when it's exposed through nvme it has the latency characteristics that bcronce described.
  • wumpus - Sunday, April 23, 2017

    That's only on a page hit. For the type of operations that 3dxpoint is looking at (4k or so) you won't find it on an open page and thus take 2-3 times as long till it is ready.

    That still leaves you with ~100x latency. And we are still wondering if losing the PCIe controller will make any significant difference to this number (one problem is that if Intel/Micron magically fixed this, the endurance is only slightly better than SLC and would quickly die if used as main memory).
  • ddriver - Sunday, April 23, 2017

    Endurance for the initial batch postulated from intel's warranty would be around 30k PE cycles, and 50k for the upcoming generation. That's not "only slightly better than SLC" as SLC has 100k PE cycles endurance. But the 100k figure is somewhat old, and endurance goes down with process node. So at a comparable process, SLC might be going down, approaching 50k.

    It remains to be seen, the lousy industry is penny pinching and producing artificial NAND shortages to milk people as much as possible, and pretty much all the wafers are going into TLC, some MLC and why oh why, QLC trash.

    I guess they are saving the best for last. 3D SLC will address the lower density, samsung currently has 2 TB MLC M2, so 1 TB is perfectly doable via 3D SLC. I am guessing samsung's z-nand will be exactly that - SLC making a long overdue comeback.
  • tuxRoller - Sunday, April 23, 2017

    The endurance issue is, imho, the biggest concern right now.
