Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
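
To make the workload concrete, below is a minimal Python sketch of this burst read pattern. The device path, the use of plain buffered reads, and the timing loop are illustrative assumptions; it mimics the access pattern rather than reproducing our actual test harness.

```python
# Rough sketch of the burst sequential read pattern: 8 bursts of 128MB,
# issued as 128kB reads at QD1, with idle time padding each burst out to
# a 20% duty cycle. DEVICE is a placeholder -- point it at a test file or
# block device you can afford to hammer. Without O_DIRECT the OS page cache
# will inflate the numbers, so treat this as an illustration of the access
# pattern rather than a substitute for the real benchmark.
import os
import time

DEVICE = "/dev/nvme0n1"      # hypothetical test target
CHUNK = 128 * 1024           # 128kB per read
BURST = 128 * 1024 * 1024    # 128MB per burst
BURSTS = 8                   # 1GB total
DUTY_CYCLE = 0.20

fd = os.open(DEVICE, os.O_RDONLY)
offset = 0
speeds = []
for burst in range(BURSTS):
    start = time.perf_counter()
    for _ in range(BURST // CHUNK):
        data = os.pread(fd, CHUNK, offset)
        offset += len(data)
    busy = time.perf_counter() - start
    speeds.append(BURST / busy / 1e6)  # MB/s for this burst
    # Idle long enough that the busy time is 20% of the total period.
    time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)
os.close(fd)

print(f"average burst speed: {sum(speeds) / len(speeds):.0f} MB/s")
```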

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Optane Memory H10 is much lower than what the high-end TLC-based drives provide, but it is competitive with the other low-end NVMe drives that are limited to PCIe 3 x2 links. The Optane Memory caching is only responsible for about a 10% speed increase over the raw QLC speed, so this is obviously not one of the scenarios where the caching drivers can effectively stripe access between the Optane and NAND.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
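
The sketch below illustrates how the queue depth sweep and scoring fit together, with queue depth approximated by running one reader thread per outstanding command. The device path, the doubling queue depth steps, and the threading approach are assumptions for illustration only, not our actual test harness.

```python
# Sketch of the sustained sequential read sweep: queue depths from 1 to 32,
# each run until 32GB has been read or one minute has elapsed, with the
# headline score taken as the average of the QD1, QD2 and QD4 results.
import os
import threading
import time

DEVICE = "/dev/nvme0n1"        # hypothetical test target
CHUNK = 128 * 1024             # 128kB per read
SPAN = 64 * 1024**3            # test confined to 64GB of data
LIMIT_BYTES = 32 * 1024**3     # stop after 32GB...
LIMIT_SECONDS = 60             # ...or one minute, whichever comes first

def run_qd(qd):
    """Approximate throughput (MB/s) at one queue depth using qd reader threads."""
    fd = os.open(DEVICE, os.O_RDONLY)
    done = 0
    lock = threading.Lock()
    deadline = time.perf_counter() + LIMIT_SECONDS

    def worker(tid):
        nonlocal done
        idx = tid  # thread i reads chunks i, i+qd, i+2*qd, ...
        while time.perf_counter() < deadline:
            offset = (idx * CHUNK) % SPAN
            data = os.pread(fd, CHUNK, offset)
            idx += qd
            with lock:
                done += len(data)
                if done >= LIMIT_BYTES:
                    return

    start = time.perf_counter()
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(qd)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    os.close(fd)
    return done / elapsed / 1e6

results = {qd: run_qd(qd) for qd in (1, 2, 4, 8, 16, 32)}
score = sum(results[qd] for qd in (1, 2, 4)) / 3
print(results)
print(f"headline score (QD1/2/4 average): {score:.0f} MB/s")
```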

Sustained 128kB Sequential Read

On the longer sequential read test, the Optane caching is still not effectively combining the performance of the Optane and NAND halves of the H10. However, when reading back data that was not written sequentially, the Optane cache is a significant help.

The Optane cache is a bit of a hindrance to sequential reads at low queue depths on this test, but at QD8 and higher it provides some benefit over using just the QLC.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write speed of 32GB of Optane on its own is quite poor, so this is a case where the QLC NAND is significantly helping the Optane on the H10. The SLC write cache on the H10's QLC side is competitive with those on the TLC-based drives, but when the caching software gets in the way the H10 ends up with SATA-like performance.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

The story is pretty much the same on the longer sequential write test, though some of the other low-end NVMe drives have fallen far enough that the Optane Memory H10's score isn't a complete embarrassment. However, the QLC portion on its own is still doing a better job of handling sustained sequential writes than the caching configuration.

There's no clear trend in performance for the H10 during the sustained sequential write test. It is mostly performing between the levels of the QLC and Optane portions, which means the caching software is getting in the way rather than allowing the two halves to work together and deliver better performance than either one individually. It's possible that with more idle time to clear out the Optane and SLC caches we would see drastically different behavior here.

Comments

  • Flunk - Monday, April 22, 2019

    This sounded interesting until I read "software solution" and "split bandwidth". Intel seems really intent on forcing Optane into products regardless of whether they make sense or not.

    Maybe it would have made sense with SSDs at the price points they were at this time last year, but now it just seems like a pointless exercise.
  • PeachNCream - Monday, April 22, 2019

    Who knew Optane would end up acting as a bandage fix for QLC's garbage endurance? I suppose it's better than nothing, but 0.16 DWPD is terrible. The 512GB model would barely make it to 24 months in a laptop without significant configuration changes (caching the browser to RAM, disabling the swap file entirely, etc.).
  • IntelUser2000 - Monday, April 22, 2019

    The H10 is a mediocre product, but the endurance concerns are overblown.

    Even if the rated lifespan is a total of 35TB, you'd be perfectly fine. The 512GB H10 is rated for 150TB.

    The users who would even reach 20TB in 5 years are in the minority. When I was actively using the system, my X25-M registered less than 5TB in 2 years.
  • PeachNCream - Monday, April 22, 2019

    Your usage is extremely light. Endurance is a real-world problem. I've already dealt with it a couple of times with MLC SSDs.
  • IntelUser2000 - Monday, April 22, 2019

    SSDs are over 50% of the storage sold in notebooks. They're firmly reaching the mainstream there.

    I would say instead that most of *your* customers are too demanding. The vast majority of folks would write less than I do.

    The market agrees too, which is why we went from MLC to TLC, and now we have QLC coming.

    Perhaps you are confusing write-endurance with physical stress endurance, or even natural MTBF related endurance.
  • PeachNCream - Monday, April 22, 2019

    I haven't touched on any usage but my own so far. The drives' own software identified the problems, so if there is confusion about failures, that's in the domain of the OEM. (Note: those drives don't fail gracefully in a way that lets data be recovered, either. It's a pretty ugly end to reach.) As for the move from MLC to TLC and now QLC -- that's driven by cost sensitivity for given capacities and largely ignores endurance.
  • IntelUser2000 - Monday, April 22, 2019

    I get the paranoia. The world does that to you. You unconsciously become paranoid about everything.

    However, for most folks endurance is not a problem. The circuitry in the SSD will likely fail from natural causes before the write endurance is reached. Everything dies. But people are excessively worried about NAND SSD write endurance because it's a fixed metric.

    It's like knowing the date of your death.
  • PeachNCream - Friday, May 3, 2019

    That's not really a paranoia thing. Your attempt to bait someone into an argument where you can then toss out insults is silly.
  • SaberKOG91 - Monday, April 22, 2019

    That's a naive argument. Most SSDs of 250GB or larger are rated for at least 100TBW on a 3 year warranty. 75TBW on a 5 year warranty is an insult.

    I think you underestimate how much demand the average user makes of their system. Especially when you have things like anti-virus and web browsers making lots of little writes in the background, all the time.

    The market is going from TLC to QLC because of density, not reliability. We had all the same reliability issues going from MLC to TLC and from SLC to MLC. It took manufacturers years after each transition to reach the same durability level as the previous technology, all while the previous generation continued to improve even further. Moving to denser tech means smaller dies for the same capacity, or higher capacity per unit area, which is good for everyone. But these drives don't even appear to offer the 0.20 DWPD or 5-year warranty of other QLC flash products.

    I am a light user who doesn't have a lot of photos or video, and this laptop has already seen 1.3TBW in only 3 months. My work desktop has over 20TBW from the last 5 years. My home desktop where I compile software has over 12TBW in its first year. My gaming PC has 27TBW on a 5-year-old drive. So while I might agree that 75TBW seems like a lot, if I were to simplify my life down to one machine, I'd easily hit 20TBW a year, or 8TBW a year even without the compile machine.

    That all said, you're still ignoring that many Micron and Samsung drives have been shown to go way beyond their rated lifespan whereas Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them. Since the Optane is acting as a persistent cache, what happens to these drives when the Optane dies? At the very least performance will tank. At the worst the drive is hosed.
  • IntelUser2000 - Monday, April 22, 2019

    Something is very wrong with your drive or you are not really a "light user".

    1300GB in 3 months works out to about 14GB written per day. That means if you use your computer 7 hours a day, you'd be writing about 2GB per hour. The computer that SSD was in, I used for 8-12 hours every day for those two years, and it was my primary gaming PC at that.

    Perhaps the X25-M drive I had is particularly good in this respect, but the differences seem too large.

    Anyway, moving to denser cells just means consumer-level workloads don't need the write endurance that MLC provides, and lower prices are preferred.

    "Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."

    Maybe you are referring to the few faulty units in the beginning? Any device can fail in the first 30 days. That's completely unrelated to *write endurance*. The first-gen modules are rated for 190TBW. If they had burned through that in a year (which is unrealistic, since it's for a benchmark), they would have been writing roughly 500GB per day. Maybe you want to verify your claims yourself.
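
For what it's worth, the write-rate arithmetic argued over in this thread is easy to sanity-check. The sketch below simply restates the commenters' own figures (1.3TBW in 3 months, a 150TBW rating on the 512GB model, 190TBW on the first-gen Optane Memory modules) as daily rates; none of the inputs are our own measurements.

```python
# Quick back-of-the-envelope check of the write-rate figures cited above,
# using decimal units (1TB = 1000GB) as endurance ratings are usually quoted.
GB_PER_TB = 1000

# 1.3TBW in roughly 3 months (~90 days), spread over a 7-hour usage day:
per_day = 1.3 * GB_PER_TB / 90
print(f"{per_day:.1f} GB/day, {per_day / 7:.1f} GB/hour")   # ~14.4 GB/day, ~2 GB/hour

# A 150TBW rating on a 512GB drive spread over a 5-year warranty:
per_day = 150 * GB_PER_TB / (5 * 365)
print(f"{per_day:.1f} GB/day = {per_day / 512:.2f} DWPD")   # ~82 GB/day, ~0.16 DWPD

# Burning through a 190TBW rating in a single year of benchmarking:
print(f"{190 * GB_PER_TB / 365:.0f} GB/day")                # ~520 GB/day
```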
