The idea behind the Optane Memory H10 is intriguing. QLC NAND needs a performance boost to be competitive against mainstream TLC-based SSDs, and Intel's 3D XPoint memory is still by far the fastest non-volatile storage on the market. Unfortunately, too many factors weigh down the H10's potential. It is effectively two separate SSDs on one card, so the NAND side of the drive still needs its own DRAM, which adds to the cost. The caching is entirely software-managed, so the NAND SSD controller and the Optane controller cannot coordinate with each other, and Intel's caching software sometimes struggles to make good use of both portions of the drive simultaneously.

Some of these challenges are exacerbated by benchmarking conditions; our test suite was designed with SLC write caching in mind, not two layers of cache that sometimes function more like a RAID 0 array. None of our synthetic benchmarks managed to trigger that bandwidth aggregation between the Optane and NAND portions of the H10. Intel cautions that it has only optimized its caching algorithms for real-world storage patterns, and it is easy to see how some of our tests diverge from those patterns in ways that may matter significantly. (In particular, many of our tests only give the system the opportunity to use block-level caching, but Intel's software can also perform file-level caching.) This only emphasizes that the Optane Memory H10 is not a one-size-fits-all storage solution.
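Intel has not published the details of its caching algorithm, but the basic shape of software-managed block-level caching is straightforward. The sketch below is purely illustrative, assuming a simple LRU promotion policy and a hypothetical 4-block fast tier; it is not Intel's actual implementation.

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU block cache: hot blocks live on the fast tier
    (Optane-like), everything else stays on the slow tier (QLC-like).
    Purely illustrative -- not Intel's actual caching algorithm."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block number -> present on fast tier

    def access(self, block):
        """Return True on a fast-tier hit, False on a slow-tier miss."""
        if block in self.cache:
            self.cache.move_to_end(block)  # refresh LRU position
            return True
        # Miss: promote the block, evicting the least recently used one.
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)
        self.cache[block] = True
        return False

cache = BlockCache(capacity_blocks=4)
trace = [1, 2, 3, 1, 2, 3, 4, 5, 1]   # hypothetical block access pattern
hits = sum(cache.access(b) for b in trace)
print(hits)  # 3 of 9 accesses are served from the fast tier
```

The point of the sketch is that a block-level cache only ever sees logical block addresses, not files; heuristics that operate at the file level, as Intel's software reportedly can, have more context about what is worth keeping on the fast tier.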

For the heaviest, most write-intensive workloads, putting a small Optane cache in front of the QLC NAND only postpones the inevitable performance drops. In some cases, trying to keep the right data in the cache causes more performance issues than it solves. However, the kind of real-world workloads that generate that much IO are unlikely to run well on a 15W notebook CPU anyway. The Optane cache doesn't magically transform a low-end SSD into a top-of-the-line drive, and the Optane Memory H10 is probably never going to be a good choice for desktops, which can easily accommodate a wider range of storage options than a thin ultrabook can.

On lighter workloads more typical of what an ultrabook is good for, the Optane Memory H10 is generally competitive with other low-end NVMe offerings, and under favorable conditions it can be more responsive than any NAND flash-only drive. For everyday use, the H10 is certainly preferable to a QLC-only drive, but against TLC-based drives it's a tough sell. We haven't had the chance to perform detailed power measurements of the Optane Memory H10, but there's little chance it can provide better battery life than the best TLC-based SSDs.

If Intel is serious about making QLC+Optane caching work well enough to compete against TLC-only drives, they'll have to do better than the Optane Memory H10. TLC-only SSDs will almost always have a more consistent performance profile than a tiered setup. The Optane cache on the H10 doesn't soften the rough edges enough to make it suitable for heavy workloads, and it doesn't enhance the performance on light workloads enough to give the H10 a significant advantage over the best TLC drives. When the best-case performance of even a QLC SSD is solidly in "fast enough" territory thanks to SLC caching, the focus should be on improving the worst case, not on optimizing use cases that already feel almost instantaneous.

Optane has found great success in some segments of the datacenter storage market, but in the consumer market it's still looking for the right niche. QLC NAND is also still relatively unproven, though recently it has finally started to deliver on the promise of meaningfully lower prices. The combination of QLC and Optane might still be able to produce an impressive consumer product, but it will take more work from Intel than this relatively low-effort product.



Comments

  • SaberKOG91 - Monday, April 22, 2019 - link

    Nothing special about my usage on my laptop. I'm running Linux, so I'm sure journals and other logs are a decent portion of the background activity. I also consume a fair bit of streaming media, so caching to disk is also very likely. This machine gets actively used an average of 10-12 hours a day and is usually only completely off for about 8-10 hours. I also install about 150MB of software updates a week, which is pretty much on par with, say, Windows Update. I also use Spotify, which definitely racks up some writes.

    I can't speak to the endurance of that drive, but it is also MLC instead of TLC.

    I would argue that it means that the cost per GB of QLC is now low enough that the manufacturing benefit of smaller dies for the same capacity is worth it. Most consumer SSDs are 250-500GB regardless of technology.

    I'm not referring to a few faulty units or infant mortality. I can't remember the exact news piece, but there were reports of unusually high failure rates in the first generation of Optane cache modules. I also wasn't amused when Anandtech's review sample of the first consumer cache drive died before they finished testing it. You're also assuming that the only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor leads to premature failure, regardless of TBW. It's also worth noting that you may accelerate drive death if you exceed the rated DWPD.
  • RSAUser - Tuesday, April 23, 2019 - link

    I'm at about 3TB after nearly 2 years, and that's with adding new software like Android, etc., swapping between technologies constantly, and wiping my drive once a year.
    I also have Spotify, game on it, etc.

    There might be something wrong with your usage if you have that much write volume. I have 32GB of RAM, so very little caching to disk, which could be the difference.
  • IntelUser2000 - Tuesday, April 23, 2019 - link

    "You're also assuming that the only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor leads to premature failure, regardless of TBW."

    I certainly did not. It was in reply to your original post.

    Yes, write endurance is a small part of why drives fail. If drives are failing for other reasons well before the warranty runs out, then they should move to remedy this.
  • Irata - Tuesday, April 23, 2019 - link

    You are forgetting the sleep state on laptops. That alone will result in a lot of data being written to the SSD.
  • jeremyshaw - Sunday, July 14, 2019 - link

    Or they have a laptop with "Modern Standby," which is code for:

    a subpar idle state that falls back to hibernation (flushing RAM to the SSD; I have 32GB of RAM) whenever the system drains too much power in this "Standby S3 replacement."
  • voicequal - Monday, April 22, 2019 - link

    "Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."

    What is your source for this comment?
  • SaberKOG91 - Monday, April 22, 2019 - link

    Anandtech killed their review sample when Optane first came out. Happened other places too.
  • voicequal - Tuesday, April 23, 2019 - link

    Link? Anandtech doesn't do endurance testing, so I don't think it's possible to conclude that failures were the result of worn out media.
  • FunBunny2 - Wednesday, April 24, 2019 - link

    "Since our Optane Memory sample died after only about a day of testing, we cannot conduct a complete analysis of the product or make any final recommendations. "

  • Mikewind Dale - Monday, April 22, 2019 - link

    I don't understand the purpose of this product. For light duties, the Optane will be barely faster than the SLC cache, and the limitation to PCIe x2 might make the Optane slower than an x4 SLC cache. And for heavy duties, the PCIe x2 link is definitely a bottleneck.

    So for light duties, a 660p is just as good, and for heavy duties, you need a Samsung 970 or something similar.

    Add in the fact that this combo Optane+QLC has serious hardware compatibility problems, and I just don't see the purpose. Even in the few systems where the Optane+QLC worked, it would still be much easier to just install a 660p and be done with it. Adding an extra software layer is just one more potential point of failure, and there's barely any offsetting benefit.
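Several commenters above debate write endurance in terms of DWPD and TBW. As a back-of-the-envelope sketch of how those ratings relate (all figures below are hypothetical, chosen only to show the arithmetic, not taken from any specific drive's datasheet):

```python
# Rough endurance math linking DWPD (drive writes per day), capacity,
# and TBW (total terabytes written). All figures are hypothetical.

def tbw_from_dwpd(dwpd, capacity_gb, warranty_years):
    """Total terabytes written implied by a DWPD rating over the warranty."""
    return dwpd * capacity_gb * 365 * warranty_years / 1000

# A hypothetical 512 GB drive rated 0.3 DWPD over a 5-year warranty:
tbw = tbw_from_dwpd(0.3, 512, 5)
print(round(tbw, 1))  # 280.3 TBW

# RSAUser's ~3 TB written in ~2 years works out to roughly:
gb_per_day = 3 * 1000 / (2 * 365)
print(round(gb_per_day, 1))  # 4.1 GB/day
```

At a few GB per day, even a modest TBW rating lasts decades, which is why the commenters above focus on failure modes other than media wear-out.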
