The Intel Optane Memory H10 Review: QLC and Optane In One SSD
by Billy Tallis on April 22, 2019 11:50 AM EST
Our primary system for consumer SSD testing is a Skylake desktop. This is equipped with a Quarch XLC Power Module for detailed SSD power measurements and is used for our ATSB IO trace tests and synthetic benchmarks using FIO. This system predates all of the Optane Memory products, and Intel and their motherboard partners did not want to roll out firmware updates to provide Optane Memory caching support on Skylake generation systems. Using this testbed, we can only access the QLC NAND half of the Optane Memory H10.
As usual for new Optane Memory releases, Intel sent us an entire system with the new Optane Memory H10 pre-installed and configured. This year's review system is an HP Spectre x360 13t notebook with an Intel Core i7-8565U Whiskey Lake processor and 16GB of DDR4. In previous years Intel has provided desktop systems for testing Optane Memory products, but the H10's biggest selling point is that it is a single M.2 module that fits in small systems, so the choice of a 13" notebook this year makes sense. Intel has confirmed that the Spectre x360 will soon be available for purchase with the Optane Memory H10 as one of the storage options.
The HP Spectre x360 13t has only one M.2 type-M slot, so in order to test multi-drive caching configurations or anything involving SATA, we made use of the Coffee Lake and Kaby Lake systems Intel provided for previous Optane Memory releases. For application benchmarks like SYSmark and PCMark, the scores are heavily influenced by the differences in CPU power and RAM between these machines so we have to list three sets of scores for each storage configuration tested. However, our AnandTech Storage Bench IO trace tests and our synthetic benchmarks using FIO produce nearly identical results across all three of these systems, so we can make direct comparisons and each test only needs to list one set of scores for each storage configuration.
Intel-provided Optane Memory Review Systems

| Platform | Kaby Lake | Coffee Lake | Whiskey Lake |
|----------|-----------|-------------|--------------|
| CPU | Intel Core i5-7400 | Intel Core i7-8700K | Intel Core i7-8565U |
| Motherboard | ASUS PRIME Z270-A | Gigabyte Aorus H370 Gaming 3 WiFi | HP Spectre x360 13t |
| Chipset | Intel Z270 | Intel H370 | |
| Memory | 2x 4GB DDR4-2666 | 2x 8GB DDR4-2666 | 16GB DDR4-2400 |
| Case | In Win C583 | In Win C583 | |
| Power Supply | Cooler Master G550M | Cooler Master G550M | HP 65W USB-C |
| OS | Windows 10 64-bit, version 1803 | Windows 10 64-bit, version 1803 | Windows 10 64-bit, version 1803 |
Intel's Optane Memory caching software is Windows-only, so our usual Linux-based synthetic testing with FIO had to be adapted to run on Windows. The configuration and test procedure is as close as practical to our usual methodology, but a few important differences mean the results in this review are not directly comparable to those from our usual SSD reviews or the results posted in Bench. In particular, it is impossible to perform a secure erase or NVMe format from within Windows except in the rare instance where a vendor provides a tool that only works with their drives. Our testing usually involves erasing the drive between major phases in order to restore performance without waiting for the SSD's background garbage collection to finish cleaning up and freeing up SLC cache. For this review's Windows-based synthetic benchmarks, the tests that write the least amount of data were run first, and those that require filling the entire drive were saved for last.
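As a rough illustration of that adaptation, a FIO job on Windows mainly swaps the I/O engine and the device path; the sketch below is a hypothetical minimal job file, not our actual test configuration (the drive number and job parameters are illustrative):

```ini
; Hypothetical FIO job adapted for Windows: libaio is replaced by the
; windowsaio engine, and the raw drive is addressed through a
; \\.\PhysicalDrive path instead of a /dev node.
[global]
ioengine=windowsaio
direct=1
thread=1
filename=\\.\PhysicalDrive1

[randread-qd1]
rw=randread
bs=4k
iodepth=1
time_based=1
runtime=60
```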
Optane Memory caching also requires using Intel's storage drivers. Our usual procedure for Windows-based tests is to use Microsoft's own NVMe driver rather than bother with vendor-specific drivers. The tests of Optane caching configurations in this review were conducted with Intel's drivers, but all single-drive tests (including tests of just one side of the Optane Memory H10) use the Windows default driver.
Our usual Skylake testbed is set up to test NVMe SSDs in the primary PCIe x16 slot connected to the CPU. Optane Memory caching requires that the drives be connected through the chipset, so there's a small possibility that congestion on the x4 DMI link could affect the fastest drives, but the H10 is unlikely to come close to saturating this connection.
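For context, the DMI 3.0 link is electrically equivalent to PCIe 3.0 x4, so its practical ceiling can be worked out from the lane rate and line encoding:

```python
# Rough bandwidth ceiling of the DMI 3.0 link (electrically PCIe 3.0 x4).
gt_per_s = 8           # PCIe 3.0 transfer rate per lane, in GT/s
lanes = 4
encoding = 128 / 130   # 128b/130b line-encoding efficiency

# Divide by 8 to convert gigabits to gigabytes.
bandwidth_GBps = gt_per_s * lanes * encoding / 8
print(f"{bandwidth_GBps:.2f} GB/s")  # 3.94 GB/s
```

A QLC-based drive like the H10, whose sequential throughput tops out well below that figure, leaves plenty of headroom on the link.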
We try to include detailed power measurements alongside almost all of our performance tests, but this review is missing most of those. Our current power measurement equipment is unable to supply power to an M.2 slot in a notebook and requires a regular PCIe x4 slot for the power injection fixture. We have new equipment on the way from Quarch to remedy this limitation and will post an article about the upgrade after taking the time to re-test the drives in this review with power measurement on the HP notebook.
Comments
Flunk - Monday, April 22, 2019
This sounded interesting until I read "software solution" and "split bandwidth". Intel seems really intent on forcing Optane into products regardless of whether they make sense.
Maybe it would have made sense at the SSD price points of this time last year, but now it just seems like a pointless exercise.
PeachNCream - Monday, April 22, 2019
Who knew Optane would end up acting as a bandage fix for QLC's garbage endurance? I suppose it's better than nothing, but 0.16 DWPD is terrible. The 512GB model would barely make it to 24 months in a laptop without significant configuration changes (caching the browser to RAM, disabling the swap file entirely, etc.).
IntelUser2000 - Monday, April 22, 2019
The H10 is a mediocre product, but the endurance claims are overblown.
Even if the rated lifespan is a total of 35TB, you'd be perfectly fine. The 512GB H10 is rated for 150TB.
The number of users who would even reach 20TB in 5 years is a minority. When I was actively using the system, my X25-M registered less than 5TB in 2 years.
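Back-of-envelope arithmetic using the figures quoted in this thread (the 150TBW rating claimed above and a ~5TB-per-2-years usage rate) bears this out; this is just a sanity check, not Intel's rating methodology:

```python
# Sanity-check of the endurance figures quoted in the comments above.
RATED_TBW = 150            # claimed rating for the 512GB H10, in TB written
usage_tb_per_year = 5 / 2  # ~5TB written over 2 years, as reported

# Years of that usage before the rated endurance is exhausted:
years = RATED_TBW / usage_tb_per_year
print(f"{years:.0f} years")  # 60 years

# Implied drive-writes-per-day over a 5-year warranty period:
capacity_tb = 0.512
dwpd = RATED_TBW / (capacity_tb * 365 * 5)
print(f"{dwpd:.2f} DWPD")  # 0.16 DWPD
```

The 0.16 DWPD figure matches the one cited earlier in the thread; the disagreement is over whether real-world write rates stay anywhere near 5TB per 2 years.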
PeachNCream - Monday, April 22, 2019
Your usage is extremely light. Endurance is a real-world problem. I've already dealt with it a couple of times with MLC SSDs.
IntelUser2000 - Monday, April 22, 2019
SSDs are over 50% of the storage sold in notebooks. They're firmly reaching the mainstream there.
I would say instead that most of *your* customers are too demanding. The vast majority of folks would use less than me.
The market agrees too, which is why we went from MLC to TLC, and now we have QLC coming.
Perhaps you are confusing write endurance with physical stress endurance, or even natural MTBF-related endurance.
PeachNCream - Monday, April 22, 2019
I haven't touched on any usage but my own so far. The drives' own software identified the problems, so if there is confusion about the failures, that's in the domain of the OEM. (Note that those drives don't fail gracefully in a way that lets data be recovered, either. It's a pretty ugly end to reach.) As for the move from MLC to TLC and now QLC: that's driven by cost sensitivity at given capacities and largely ignores endurance.
IntelUser2000 - Monday, April 22, 2019
I get the paranoia. The world does that to you; you unconsciously become paranoid about everything.
However, for most folks endurance is not a problem. The circuitry in the SSD will likely fail from natural causes before the write endurance is reached. Everything dies. But people are excessively worried about NAND SSD write endurance because it's a fixed metric.
It's like knowing the date of your death.
PeachNCream - Friday, May 3, 2019
That's not really a paranoia thing. Your attempt to bait someone into an argument where you can then toss out insults is silly.
SaberKOG91 - Monday, April 22, 2019
That's a naive argument. Most SSDs of 250GB or larger are rated for at least 100TBW under a 3-year warranty. 75TBW on a 5-year warranty is an insult.
I think you underestimate how much demand the average user puts on their system, especially when things like anti-virus and web browsers are constantly making lots of little writes in the background.
The market is going from TLC to QLC because of density, not reliability. We had all the same reliability issues going from MLC to TLC and from SLC to MLC. Each transition took manufacturers years to reach the same durability as the previous technology, all while the previous generation continued to improve even further. Moving to denser tech means smaller dies for the same capacity, or higher capacity per unit area, which is good for everyone. But these drives don't even look to have the 0.20 DWPD or 5-year warranty of other QLC flash products.
I am a light user who doesn't keep a lot of photos or video, and this laptop has already seen 1.3TBW in only 3 months. My work desktop has over 20TBW from the last 5 years. My home desktop, where I compile software, has over 12TBW in its first year. My gaming PC has 27TBW on a 5-year-old drive. So while I might agree that 75TBW seems like a lot, if I were to simplify my life down to one machine I'd easily hit 20TBW a year, or 8TBW a year even without the compile machine.
That all said, you're still ignoring that many Micron and Samsung drives have been shown to go way beyond their rated lifespans, whereas Optane has such a horrible lifespan at these densities that reviewers destroyed drives just benchmarking them. Since the Optane here acts as a persistent cache, what happens to these drives when the Optane dies? At the very least, performance will tank. At worst, the drive is hosed.
IntelUser2000 - Monday, April 22, 2019
Something is very wrong with your drive, or you are not really a "light user".
1300GB in 3 months works out to about 14GB of writes per day. If you use your computer 7 hours a day, that's 2GB written per hour. The computer I had my SSD in was used 8-12 hours every day for those two years, and it was a gaming PC, and a primary one at that.
Perhaps the X25-M drive I had is particularly good in this respect, but the difference seems too large.
Anyway, moving to denser cells just means that consumer-level workloads do not need the write endurance MLC offers, and lower prices are preferred.
"Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."
Maybe you are referring to the few faulty units at the beginning? Any device can fail in the first 30 days; that's completely unrelated to *write endurance*. The first-gen modules are rated for 190TBW. Even if reviewers hammered a drive for a full year (which is unrealistic, since it's just for a benchmark), they would have had to average over 500GB of writes per day. Maybe you should verify your claims yourself.
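The write-rate figures traded in this exchange are easy to sanity-check; a quick sketch using the numbers quoted above (back-of-envelope only):

```python
# Sanity-check of the write-rate figures quoted in the thread above.

# 1.3TBW in roughly 3 months (the laptop mentioned earlier):
per_day_gb = 1300 / 90
print(f"{per_day_gb:.1f} GB/day")       # 14.4 GB/day
print(f"{per_day_gb / 7:.1f} GB/hour")  # 2.1 GB/hour over 7 active hours

# Writes needed to exhaust a 190TBW first-gen Optane module in one year:
required_gb_per_day = 190_000 / 365
print(f"{required_gb_per_day:.0f} GB/day")  # 521 GB/day, sustained
```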