Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
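As a rough illustration (my own sketch, not the actual test harness used for the review), here is how the burst pacing works out in Python; the constants mirror the stated test parameters, and the idle-time formula simply restates the 20% duty cycle:

    # Hypothetical sketch of the burst read test's pacing, not the real benchmark code.
    # Eight 128MB bursts of 128kB reads at QD1, each followed by enough idle time
    # to keep the busy fraction (duty cycle) at 20%.
    BURST_BYTES = 128 * 1024 * 1024      # 128MB per burst
    OP_BYTES = 128 * 1024                # 128kB per read operation
    BURSTS = 8                           # 8 bursts = 1GB transferred in total

    def idle_after_burst(burst_seconds, duty_cycle=0.20):
        # Idle long enough that busy / (busy + idle) equals the duty cycle.
        return burst_seconds * (1.0 - duty_cycle) / duty_cycle

    # Example: a burst that completes in 0.1s would be followed by 0.4s of idle time,
    # since 0.1 / (0.1 + 0.4) = 20%.
    ops_per_burst = BURST_BYTES // OP_BYTES   # 1024 operations per burst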

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Optane Memory H10 is much lower than what the high-end TLC-based drives provide, but it is competitive with the other low-end NVMe drives that are limited to PCIe 3 x2 links. The Optane Memory caching is only responsible for about a 10% speed increase over the raw QLC speed, so this is obviously not one of the scenarios where the caching drivers can effectively stripe access between the Optane and NAND.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
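For illustration only, a minimal Python sketch of a single timed pass under the limits described above; a synchronous loop like this only approximates the QD1 case, and the file path and scoring note are placeholders rather than the review's actual tooling:

    # Illustrative sketch: one timed sequential read pass, capped at one minute
    # or 32GB transferred, whichever comes first.
    import time

    OP_BYTES = 128 * 1024          # 128kB reads
    MAX_SECONDS = 60               # up to one minute per queue depth
    MAX_BYTES = 32 * 1024**3       # or 32GB transferred

    def timed_sequential_read(path):
        # Read the file sequentially in 128kB chunks and return throughput in MB/s.
        done = 0
        start = time.monotonic()
        with open(path, "rb", buffering=0) as f:
            while done < MAX_BYTES and time.monotonic() - start < MAX_SECONDS:
                chunk = f.read(OP_BYTES)
                if not chunk:
                    break
                done += len(chunk)
        return done / (time.monotonic() - start) / 1e6

    # The reported score averages the QD1, QD2, and QD4 results; reaching higher
    # queue depths needs asynchronous I/O (e.g. io_uring or libaio) rather than
    # a simple loop like this one.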

Sustained 128kB Sequential Read

On the longer sequential read test, the Optane caching is still not effectively combining the performance of the Optane and NAND halves of the H10. However, when reading back data that was not written sequentially, the Optane cache is a significant help.

The Optane cache is a bit of a hindrance to sequential reads at low queue depths on this test, but at QD8 and higher it provides some benefit over using just the QLC.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write speed of 32GB of Optane on its own is quite poor, so this is a case where the QLC NAND is significantly helping the Optane on the H10. The SLC write cache on the H10's QLC side is competitive with those on the TLC-based drives, but when the caching software gets in the way the H10 ends up with SATA-like performance.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

The story is pretty much the same on the longer sequential write test, though some of the other low-end NVMe drives have fallen far enough that the Optane Memory H10's score isn't a complete embarrassment. However, the QLC portion on its own is still doing a better job of handling sustained sequential writes than the caching configuration.

There's no clear trend in performance for the H10 during the sustained sequential write test. It is mostly performing between the levels of the QLC and Optane portions, which means the caching software is getting in the way rather than allowing the two halves to work together and deliver better performance than either one individually. It's possible that with more idle time to clear out the Optane and SLC caches we would see drastically different behavior here.

Comments

  • yankeeDDL - Monday, April 22, 2019 - link

    Is it me or, generally speaking, is it noticeably slower than the 970 Evo?
  • DanNeely - Monday, April 22, 2019 - link

    The 970 can make use of 4 lanes; with only 2 effective lanes in most scenarios, any good x4 drive is going to be able to smoke the H10.
  • yankeeDDL - Monday, April 22, 2019 - link

    I still remember that Optane should be 1000x faster and 1000x cheaper. It seems that it is faster, albeit by a much lower factor ... then why hamper it with a slower bus? I mean, I came to read the review thinking that it could be a nice upgrade, and then I see it beaten handily by the 970 Evo. What's the point of such device? It is clearly more complex, so I doubt it'll be cheaper than the 970 Evo...
  • Alexvrb - Monday, April 22, 2019 - link

    Wait, did they say it would be cheaper? I don't remember that. I know they thought it would be a lot faster than it is... to be fair they seemed to be making projections like NAND based solutions wouldn't speed up at all in years LOL.

    It can be a lot faster in certain configs (the high end PCIe add-on cards, for example) but it's insanely expensive. Even then it's mainly faster for low QDs...
  • kgardas - Tuesday, April 23, 2019 - link

    Yes, but just in comparison with DRAM prices. E.g. a large NVDIMM is cheaper than a DRAM DIMM of the same capacity.
  • Irata - Tuesday, April 23, 2019 - link

    It was supposed to be 1000x faster and have 1000x the endurance of NAND as per Intel's official 2016 slides.

    It may be slightly off on those promises - would have loved for the article to include the slide with Intel's original claims.

    Price wasn't mentioned.
  • yankeeDDL - Tuesday, April 23, 2019 - link

    You're right. They said 1000x faster, 1000x endurance and 10x denser, but they did not say cheaper, although the 10x denser somewhat implies it (https://www.micron.com/~/media/documents/products/...). Still, this drive is not faster, nor does it have significantly higher endurance. Let's see if it is any cheaper.
  • Valantar - Tuesday, April 23, 2019 - link

    Denser than DRAM, not NAND. Speed claims are against NAND, price/density claims against DRAM - where they might not be 1/10th the price, but definitely cheaper. The entire argument for 3D Xpoint is "faster than NAND, cheaper than DRAM (while persistent and closer to the former than the latter in capacity)", after all.
  • CheapSushi - Wednesday, April 24, 2019 - link

    I think this is why there's still negative impressions around 3D Xpoint. Too many people still don't understand it or confuse the information given.
  • cb88 - Friday, May 17, 2019 - link

    Optane itself is *vastly* faster than this... on an NVDIMM it rivals DDR4 with latencies in hundreds of ns instead of micro or milliseconds. And bandwidth basically on par with DDR4.

    I think it's some marketing BS that they don't use 4x PCIe on their M.2 cards... perhaps trying to avoid server guys buying them up cheap and putting them on quad M.2-to-PCIe adapters.
