Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
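
For readers who want to reproduce something similar at home, the sketch below approximates this burst workload with fio driven from Python. This is not the script used for our testing; the target path is a placeholder, and the use of fio with libaio and a sleep-based 20% duty cycle are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Rough approximation of the burst sequential read test: eight 128MB bursts
of 128kB reads at QD1, with idle time between bursts to keep the duty cycle
near 20%. Not the review's actual tooling; paths and parameters are assumed."""
import json
import subprocess
import time

TARGET = "/path/to/testfile"   # placeholder: a file or device holding ~16GB of data
BURSTS = 8
BURST_MB = 128

speeds = []
for i in range(BURSTS):
    start = time.time()
    out = subprocess.run(
        ["fio", "--name=burst_read", f"--filename={TARGET}",
         "--rw=read", "--bs=128k", "--iodepth=1",
         "--ioengine=libaio", "--direct=1",
         f"--offset={i * BURST_MB}M", f"--size={BURST_MB}M",
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    busy = time.time() - start
    bw_kib = json.loads(out.stdout)["jobs"][0]["read"]["bw"]  # KiB/s
    speeds.append(bw_kib / 1024)                              # MiB/s
    time.sleep(busy * 4)   # idle for 4x the busy time -> roughly 20% duty cycle

print(f"Average burst read speed: {sum(speeds) / len(speeds):.1f} MiB/s")
```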

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Optane Memory H10 is much lower than what the high-end TLC-based drives provide, but it is competitive with the other low-end NVMe drives that are limited to PCIe 3 x2 links. The Optane Memory caching is only responsible for about a 10% speed increase over the raw QLC speed, so this is obviously not one of the scenarios where the caching drivers can effectively stripe access between the Optane and NAND.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
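
The sketch below shows how such a queue depth sweep could be approximated with fio, again as an illustration rather than our actual test suite: the target path and the choice of power-of-two queue depths from 1 to 32 are assumptions, and the score is computed as the average of the QD1, QD2 and QD4 results as described above.

```python
#!/usr/bin/env python3
"""Approximation of the sustained sequential read sweep: each queue depth runs
for up to one minute or 32GB, whichever comes first, and the reported score
averages QD1, QD2 and QD4. Not the review's actual tooling."""
import json
import subprocess

TARGET = "/path/to/testfile"   # placeholder: a ~64GB pre-filled test region
results = {}

for qd in (1, 2, 4, 8, 16, 32):
    out = subprocess.run(
        ["fio", "--name=seq_read", f"--filename={TARGET}",
         "--rw=read", "--bs=128k", f"--iodepth={qd}",
         "--ioengine=libaio", "--direct=1",
         "--size=32G", "--runtime=60",   # stops at 32GB or 60 seconds, whichever is first
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    results[qd] = json.loads(out.stdout)["jobs"][0]["read"]["bw"] / 1024  # MiB/s

score = sum(results[qd] for qd in (1, 2, 4)) / 3
print(f"Sustained sequential read score (avg of QD1/2/4): {score:.1f} MiB/s")
```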

Sustained 128kB Sequential Read

On the longer sequential read test, the Optane caching is still not effectively combining the performance of the Optane and NAND halves of the H10. However, when reading back data that was not written sequentially, the Optane cache is a significant help.

The Optane cache is a bit of a hindrance to sequential reads at low queue depths on this test, but at QD8 and higher it provides some benefit over using just the QLC.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write speed of 32GB of Optane on its own is quite poor, so this is a case where the QLC NAND is significantly helping the Optane on the H10. The SLC write cache on the H10's QLC side is competitive with those on the TLC-based drives, but when the caching software gets in the way the H10 ends up with SATA-like performance.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
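
A corresponding write sweep could look like the sketch below, which differs from the read version only in the transfer direction and the idle period after each queue depth. As before, this is an illustrative approximation rather than our actual tooling, and the target path is a placeholder.

```python
#!/usr/bin/env python3
"""Approximation of the sustained sequential write sweep: same structure as the
read sweep, but writing instead of reading, with idle time after each queue
depth so the drive can cool off and perform garbage collection."""
import json
import subprocess
import time

TARGET = "/path/to/testfile"   # placeholder: a ~64GB test region

for qd in (1, 2, 4, 8, 16, 32):
    out = subprocess.run(
        ["fio", "--name=seq_write", f"--filename={TARGET}",
         "--rw=write", "--bs=128k", f"--iodepth={qd}",
         "--ioengine=libaio", "--direct=1",
         "--size=32G", "--runtime=60",   # stops at 32GB or 60 seconds, whichever is first
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    bw = json.loads(out.stdout)["jobs"][0]["write"]["bw"] / 1024  # MiB/s
    print(f"QD{qd}: {bw:.1f} MiB/s")
    time.sleep(60)   # idle so the drive can flush caches and do garbage collection
```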

Sustained 128kB Sequential Write

The story is pretty much the same on the longer sequential write test, though some of the other low-end NVMe drives have fallen far enough that the Optane Memory H10's score isn't a complete embarrassment. However, the QLC portion on its own is still doing a better job of handling sustained sequential writes than the caching configuration.

There's no clear trend in performance for the H10 during the sustained sequential write test. It is mostly performing between the levels of the QLC and Optane portions, which means the caching software is getting in the way rather than allowing the two halves to work together and deliver better performance than either one individually. It's possible that with more idle time to clear out the Optane and SLC caches we would see drastically different behavior here.

Comments

  • SaberKOG91 - Monday, April 22, 2019 - link

    Nothing special about my usage on my laptop. Running Linux, so I'm sure journals and other logs are a decent portion of the background activity. I also consume a fair bit of streaming media, so caching to disk is also very likely. This machine gets actively used an average of 10-12 hours a day and is usually only completely off for about 8-10 hours. I also install about 150MB of software updates a week, which is pretty much on par with, say, Windows Update. I also use Spotify, which definitely racks up some writes.

    I can't speak to the endurance of that drive, but it is also MLC instead of TLC.

    I would argue that it means that the cost per GB of QLC is now low enough that the manufacturing benefit of smaller dies for the same capacity is worth it. Most consumer SSDs are 250-500GB regardless of technology.

    I'm not referring to a few faulty units or infant mortality. I can't remember the exact news piece, but there were reports of unusually high failure rates in the first generation of Optane cache modules. I also wasn't amused when AnandTech's review sample of the first consumer cache drive died before they finished testing it. You're also assuming that the only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor led to premature failure, regardless of TBW. It's also worth noting that you may accelerate drive death if you exceed the rated DWPD.
  • RSAUser - Tuesday, April 23, 2019 - link

    I'm at about 3TB after nearly 2 years, and that's with adding new software like Android, etc., swapping between technologies constantly, and wiping my drive once a year.
    I also have Spotify, game on it, etc.

    Is there something wrong with your usage if you have that many writes? I have 32GB of RAM so there's very little caching, though; that could be the difference.
  • IntelUser2000 - Tuesday, April 23, 2019 - link

    "You're also assuming that they only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor lead to premature failure, regardless of TBW."

    I certainly did not. It was in reply to your original post.

    Yes, write endurance is a small part of a drive failing. If it's failing due to other reasons well before the warranty is up, then they should move to remedy this.
  • Irata - Tuesday, April 23, 2019 - link

    You are forgetting the sleep state on laptops. That alone will result in a lot of data being written to the SSD.
  • jeremyshaw - Sunday, July 14, 2019 - link

    Or they have a laptop with "Modern Standby," which is code for:

    A subpar idle state that drops into hibernation (flushing RAM to the SSD - I have 32GB of RAM) whenever the system drains too much power in this "S3 standby replacement."
  • voicequal - Monday, April 22, 2019 - link

    "Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."

    What is your source for this comment?
  • SaberKOG91 - Monday, April 22, 2019 - link

    Anandtech killed their review sample when Optane first came out. Happened other places too.
  • voicequal - Tuesday, April 23, 2019 - link

    Link? Anandtech doesn't do endurance testing, so I don't think it's possible to conclude that failures were the result of worn out media.
  • FunBunny2 - Wednesday, April 24, 2019 - link

    "Since our Optane Memory sample died after only about a day of testing, we cannot conduct a complete analysis of the product or make any final recommendations. "

    here: https://www.anandtech.com/show/11210/the-intel-opt...
  • Mikewind Dale - Monday, April 22, 2019 - link

    I don't understand the purpose of this product. For light duties, the Optane will be barely faster than the SLC cache, and the limitation to PCIe x2 might make the Optane slower than a x4 SLC cache. And for heavy duties, the PCIe x2 is definitely a bottleneck.

    So for light duties, a 660p is just as good, and for heavy duties, you need a Samsung 970 or something similar.

    Add in the fact that this combo Optane+QLC has serious hardware compatibility problems, and I just don't see the purpose. Even in the few systems where the Optane+QLC worked, it would still be much easier to just install a 660p and be done with it. Adding an extra software layer is just one more potential point of failure, and there's barely any offsetting benefit.
