Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
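
For readers who want to approximate this workload on their own hardware, the burst pattern maps fairly directly onto a short script driving fio. The sketch below is illustrative rather than our exact test harness: the device path is a placeholder, and the idle calculation simply pads each burst so the duty cycle stays at 20%.

```python
import subprocess
import time

DEVICE = "/dev/nvme0n1"   # placeholder scratch drive; don't point this at a disk holding data you care about
BURSTS = 8                # 8 bursts x 128MB = 1GB read in total
DUTY_CYCLE = 0.20         # the drive should be busy only 20% of the wall-clock time

for i in range(BURSTS):
    start = time.time()
    # One burst: 128MB of 128kB sequential reads at queue depth 1
    subprocess.run([
        "fio", "--name=burst-read", f"--filename={DEVICE}",
        "--rw=read", "--bs=128k", "--iodepth=1", "--direct=1",
        "--ioengine=libaio", "--size=128M",
        f"--offset={i * 128}m",   # step through the test data one burst at a time
    ], check=True)
    busy = time.time() - start
    # Idle four times as long as the burst took so the overall duty cycle stays at 20%
    time.sleep(busy * (1 / DUTY_CYCLE - 1))
```

The burst sequential write test later in this review is structurally identical; swapping --rw=read for --rw=write is the only change of substance.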

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Optane Memory H10 is much lower than what the high-end TLC-based drives provide, but it is competitive with the other low-end NVMe drives that are limited to PCIe 3 x2 links. The Optane Memory caching is only responsible for about a 10% speed increase over the raw QLC speed, so this is obviously not one of the scenarios where the caching drivers can effectively stripe access between the Optane and NAND.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
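
The queue depth sweep can be sketched the same way, with the headline number taken as the average of the low queue depths. Again, this is a simplified approximation of the methodology rather than the actual test suite: the device path is a placeholder, and the bandwidth field read from fio's JSON output ("bw", reported in KiB/s) may vary between fio versions.

```python
import json
import statistics
import subprocess

DEVICE = "/dev/nvme0n1"             # placeholder target drive
QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]
bandwidth = {}                      # MB/s per queue depth

for qd in QUEUE_DEPTHS:
    # Up to one minute or 32GB of 128kB sequential reads, confined to a 64GB span
    result = subprocess.run([
        "fio", "--name=sustained-read", f"--filename={DEVICE}",
        "--rw=read", "--bs=128k", f"--iodepth={qd}", "--direct=1",
        "--ioengine=libaio", "--size=64G", "--io_size=32G", "--runtime=60",
        "--output-format=json",
    ], check=True, capture_output=True, text=True)
    job = json.loads(result.stdout)["jobs"][0]
    bandwidth[qd] = job["read"]["bw"] * 1024 / 1e6   # fio reports bw in KiB/s

# Headline score: the average of QD1, QD2 and QD4
score = statistics.mean(bandwidth[qd] for qd in (1, 2, 4))
print(f"Sustained sequential read: {score:.0f} MB/s")
```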

Sustained 128kB Sequential Read

On the longer sequential read test, the Optane caching is still not effectively combining the performance of the Optane and NAND halves of the H10. However, when reading back data that was not written sequentially, the Optane cache is a significant help.

The Optane cache is a bit of a hindrance to sequential reads at low queue depths on this test, but at QD8 and higher it provides some benefit over using just the QLC.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write speed of the 32GB Optane portion on its own is quite poor, so this is a case where the QLC NAND significantly helps the Optane side of the H10. The SLC write cache on the H10's QLC side is competitive with those on the TLC-based drives, but when the caching software gets in the way, the H10 ends up with SATA-like performance.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
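
A sketch of the write pass looks almost the same as the read sweep above; the notable structural addition is the idle window after each queue depth. The fixed one-minute cooldown and the device path below are, as before, illustrative assumptions.

```python
import subprocess
import time

DEVICE = "/dev/nvme0n1"   # placeholder scratch drive; this workload overwrites data

for qd in [1, 2, 4, 8, 16, 32]:
    # Up to one minute or 32GB of 128kB sequential writes, confined to a 64GB span
    subprocess.run([
        "fio", "--name=sustained-write", f"--filename={DEVICE}",
        "--rw=write", "--bs=128k", f"--iodepth={qd}", "--direct=1",
        "--ioengine=libaio", "--size=64G", "--io_size=32G", "--runtime=60",
    ], check=True)
    # Up to a minute of idle time so the drive can cool off and run garbage collection
    time.sleep(60)
```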

Sustained 128kB Sequential Write

The story is pretty much the same on the longer sequential write test, though some of the other low-end NVMe drives have fallen far enough that the Optane Memory H10's score isn't a complete embarrassment. However, the QLC portion on its own is still doing a better job of handling sustained sequential writes than the caching configuration.

There's no clear trend in performance for the H10 during the sustained sequential write test. It is mostly performing between the levels of the QLC and Optane portions, which means the caching software is getting in the way rather than allowing the two halves to work together and deliver better performance than either one individually. It's possible that with more idle time to clear out the Optane and SLC caches we would see drastically different behavior here.

Comments

  • Alexvrb - Monday, April 22, 2019 - link

    "The caching is managed entirely in software, and the host system accesses the Optane and QLC sides of the H10 independently. "

    So, it's already got serious baggage. But wait, there's more!

    "In practice, the 660p almost never needed more bandwidth than an x2 link can provide, so this isn't a significant bottleneck."

    Yeah OK, what about the Optane side of things?
  • Samus - Tuesday, April 23, 2019 - link

    They totally nerf'd this thing with 2x PCIe.
  • PeachNCream - Tuesday, April 23, 2019 - link

    Linux handles Optane pretty easily without any Intel software through bcache. I'm not sure why Anandtech can't test that, but maybe just a lack of awareness.

    https://www.phoronix.com/scan.php?page=article&...
  • Billy Tallis - Tuesday, April 23, 2019 - link

    Testing bcache performance won't tell us anything about how Intel's caching software behaves, only how bcache behaves. I'm not particularly interested in doing a review that would have such a narrow audience. And bcache is pretty thoroughly documented so it's easier to predict how it will handle different workloads without actually testing.
  • easy_rider - Wednesday, April 24, 2019 - link

    Is there a reliable review of the 118GB Intel Optane SSD in the M.2 form factor? Does it make sense to hunt it down and put it as a system drive in a dual-M.2 laptop?
  • name99 - Thursday, April 25, 2019 - link

    "QLC NAND needs a performance boost to be competitive against mainstream TLC-based SSDs"

    The real question is what dimension, if any, does this thing win on?
    OK, it may not be the fastest out there? But does it, say, provide approximately leading edge TLC speed at QLC prices, so it wins by being cheap?
    Because just having a cache is meaningless. Any QLC drive that isn't complete garbage will have a controller-managed cache created by using the QLC flash as SLC; and the better controllers will slowly degrade across the entire drive, always maintaining an SLC cache, but also using the entire drive (till it's filled up) as SLC, then switching blocks to MLC, then to TLC, and only when the drive is approaching capacity, using blocks as QLC.

    So the question is not "does it give cached performance to a QLC drive", the question is does it give better performance or better price than other QLC solutions?
  • albert89 - Saturday, April 27, 2019 - link

    Didn't I tell ya? Optane's capacity was too small for many yrs and compatible with a very tiny number of devices/hardware/OS. She played the game of hard to get and now no guy wants her.
  • peevee - Monday, April 29, 2019 - link

    "The caching is managed entirely in software, and the host system accesses the Optane and QLC sides of the H10 independently. Each half of the drive has two PCIe lanes dedicated to it."

    Fail.
  • ironargonaut - Monday, April 29, 2019 - link

    "While the Optane Memory H10 got us into our Word document in about 5 seconds, the TLC-based 760P took 29 seconds to open the file. In fact, we waited so long that near the end of the run, we went ahead and also launched Google Chrome with it preset to open four websites. "

    https://www.pcworld.com/article/3389742/intel-opta...

    Win
  • realgundam - Saturday, November 16, 2019 - link

    What if you have a normal 660p and an Optane stick? would it do the same thing?
