Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
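The pacing of this burst test follows directly from the duty-cycle target. As a rough sketch (hypothetical code, not the article's actual test harness), the idle time after each burst is sized so the drive is busy only 20% of the wall-clock time:

```python
# Hypothetical sketch of the burst-test pacing: 8 bursts of 128 MB,
# issued as 128 kB reads at queue depth 1, with idle time between bursts
# sized to keep the drive's overall duty cycle at 20%.

BURST_BYTES = 128 * 1024 * 1024   # 128 MB per burst
OP_BYTES = 128 * 1024             # 128 kB per read operation
BURSTS = 8                        # 8 bursts -> 1 GB transferred in total
DUTY_CYCLE = 0.20                 # drive busy 20% of wall-clock time

def idle_time_for(busy_seconds, duty_cycle=DUTY_CYCLE):
    """Idle time after a burst so that busy / (busy + idle) == duty_cycle."""
    return busy_seconds * (1.0 - duty_cycle) / duty_cycle

ops_per_burst = BURST_BYTES // OP_BYTES   # 1024 sequential reads per burst
total_bytes = BURSTS * BURST_BYTES        # 1 GB total

# A burst that keeps the drive busy for 0.5 s is followed by 2.0 s of idle time.
```

This is why a fast drive does not shorten the test much: quicker bursts earn proportionally shorter idle periods, keeping the duty cycle constant.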

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read results are bizarre, with the 32GB caching configuration coming in second only to the Optane SSD 900P while the large Optane M.2 modules perform much worse as cache devices than as standalone drives. The caching performance from the 64GB Optane Memory M10 is especially disappointing, with less than a third of the performance that the drive delivers as a standalone device. Some SSD caching software attempts to have sequential I/O bypass the cache to leave the SSD ready to handle random I/O, but this test is not a situation where such a strategy would make sense. Without more documentation from Intel about their proprietary caching algorithms, and with no way to query the Optane Memory drivers about the cache status, it's hard to figure out what's going on here. Aside from the one particularly bad result from the M10 as a cache, all of the Optane configurations do at least score far above the SATA SSD.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
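Note that the headline number from this test is not the peak throughput at high queue depths but an average of the low-queue-depth results. A minimal sketch of the scoring, with made-up throughput numbers (hypothetical helper, matching the description above):

```python
# Hypothetical scoring sketch: queue depths 1 through 32 are all measured,
# but the reported performance score is the average of QD1, QD2 and QD4 only.

def sustained_score(results_by_qd):
    """results_by_qd maps queue depth -> throughput (MB/s).

    Returns the average of the QD1, QD2 and QD4 results, which weights
    the score toward realistic low-queue-depth consumer workloads.
    """
    low_qd = (1, 2, 4)
    return sum(results_by_qd[qd] for qd in low_qd) / len(low_qd)

# Illustrative (made-up) measurements, MB/s:
measured = {1: 900.0, 2: 1200.0, 4: 1400.0, 8: 1450.0, 16: 1460.0, 32: 1465.0}
score = sustained_score(measured)  # averages 900, 1200 and 1400
```

Under this scoring, a drive that only shines at QD8 and above gets little credit, which is deliberate: consumer workloads rarely sustain deep queues.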

Sustained 128kB Sequential Read

The sustained sequential read test results make more sense. The 32GB cache configuration isn't anywhere near large enough for this test's 64GB dataset, but the larger Optane M.2 modules offer good performance as standalone drives or as cache devices. The 64GB Optane Memory M10 scores worse as a cache drive, which is to be expected since the test's dataset doesn't quite fit in the cache.

Using a 118GB Optane M.2 module as a cache seems to help with sequential reads at QD1, likely due to some prefetching in the caching software. The 64GB cache handles the sustained sequential read workload better than either of the sustained random I/O tests, but it is still slower than the SSD alone at low queue depths. Performance from the 32GB cache is inconsistent but usually still substantially better than the hard drive alone.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

As with the random write tests, the cache configurations show higher burst sequential write performance than testing the Optane M.2 modules as standalone SSDs. This points to driver improvements that may include mild cheating through the use of a RAM cache, but the performance gap is small enough that there doesn't appear to be much if any data put at risk. The 64GB and 118GB caches have similar performance with the 64GB slightly ahead, but the 32GB cache barely keeps up with a SATA drive.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

The rankings on the sustained sequential write test are quite similar, but this time the 118GB Optane SSD 800P has the lead over the 64GB Optane Memory M10. The performance advantage of the caching configurations over the standalone drive performance is smaller than for the burst sequential write test, because this test writes far more data than could be cached in RAM.

Aside from some differences at QD1, the Optane M.2 modules offer basically the same performance when used as caches or as standalone drives. Since this test writes no more than 32GB at a time without a break and all of the caches tested are that size or larger, the caching software can always stream all of the writes to just the Optane module without having to stop and flush dirty data to the slower hard drive. If this test were lengthened to write more than 32GB at a time or if it were run on the 16GB Optane Memory, performance would plummet partway through each phase of the test.
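The capacity argument above can be made concrete with a simplified, hypothetical model of a write-back cache: while a write burst fits in the cache, every write lands on the fast Optane module, and only once the cache fills must writes wait on flushes to the slow hard drive.

```python
# Simplified, hypothetical model of a write-back cache absorbing a
# sequential write burst. This is not Intel's actual caching algorithm,
# just an illustration of the capacity math described in the text.

GB = 1024 ** 3

def bytes_absorbed_at_full_speed(burst_bytes, cache_bytes):
    """Bytes the cache can accept before flushing to the HDD must begin."""
    return min(burst_bytes, cache_bytes)

burst = 32 * GB  # this test writes at most 32GB per phase

# A 32GB (or larger) cache absorbs the entire burst without flushing,
# so the workload never touches the hard drive mid-phase.
full_cache = bytes_absorbed_at_full_speed(burst, 32 * GB)

# A 16GB cache (as on the smallest Optane Memory module) would fill up
# halfway through each phase, and performance would plummet from there.
small_cache = bytes_absorbed_at_full_speed(burst, 16 * GB)
```

The model ignores background flushing during idle time between queue depths, which is exactly what the test's one-minute rest periods allow; the point is only that the burst itself never outruns a 32GB-or-larger cache.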

Comments

  • FunBunny2 - Wednesday, May 16, 2018 - link

    One of the distinguishing points, so to speak, of XPoint is its byte-addressable protocol, but I've found nothing about the advantages, or whether (it seems so) the OS has to be (heavily?) modified to support such files. Anyone know?
  • Billy Tallis - Wednesday, May 16, 2018 - link

    The byte-addressability doesn't provide any direct advantages when the memory is put behind a block-oriented storage protocol like NVMe. But it does simplify the internal management the SSD needs to do, because modifying a chunk of data doesn't require re-writing other stuff that isn't changing. NVDIMMs will provide a more direct interface to 3D XPoint, and that's where the OS and applications need to be heavily modified.
  • zodiacfml - Friday, May 18, 2018 - link

    Quite impressive, but for the price of a 32GB Optane drive I can have a 250GB SSD.

    The Optane might improve performance for fractions of a second over SSDs for applications, but it won't help during program/driver installations or Windows updates, which need more speed.

    I'd reconsider it for a 64 GB Optane as a boot drive for the current price of the 32GB.
  • RagnarAntonisen - Sunday, May 20, 2018 - link

    You've got to feel for Intel. They spend a tonne of cash on projects like Larrabee, Itanium and Optane and the market and tech reviewers mostly respond with a shrug.

    And then everyone complains they're being complacent when it comes to CPU design. Mind you, they clearly were - CPU performance increased at a glacial rate until AMD released a competitive product, and then there was a big jump from 4 cores to 6 in mainstream CPUs with Coffee Lake. Still, if the competition was so far behind, you can afford to direct R&D dollars to other areas.

    Still it all seems a bit unfair - Intel get criticised when they try something new and when they don't.

    And Itanium, Larrabee and Optane all looked like good ideas on paper. It was only when they had a product that it became clear that it wasn't competitive.
  • Adramtech - Sunday, May 20, 2018 - link

    since when is a 1st or 2nd Gen product competitive? I'm sure if they don't have a path to reach competitiveness, the project will be scrapped.
  • Keljian - Tuesday, May 29, 2018 - link

    While I don't doubt the tests are valid, I would really like to see a test with say PrimoCache - with the blocksize set to 4k. I have found in my own testing that Optane (with PrimoCache using optane as an L2 @ 4k) is very worthwhile even for my Samsung 950 pro.
  • Keljian - Tuesday, May 29, 2018 - link

    https://hardforum.com/threads/intel-900p-optane-wo... - Here are my benchmark findings for the 850 evo and 950 pro using the 32gb optane as L2 cache. You'll notice the 4k speeds stand out.
  • denywinarto - Tuesday, May 29, 2018 - link

    Thinking of using this with a 12TB HGST as a game disk drive for an iSCSI-based server. The data read is usually the same since they're only game files, but occasionally a new game gets added. Would it be a better option compared to RAID? SSDs are too expensive.
  • Lolimaster - Monday, October 1, 2018 - link

    Nice to use the 16GB as pagefile, chrome/firefox profile/cache
  • Lolimaster - Tuesday, October 2, 2018 - link

    It's better to use them as extra ram/pagefile or scratch disk.
