Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
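
For a concrete picture of what this workload looks like, here is a rough sketch (not the actual test harness behind these results) that drives fio from Python: 128kB sequential reads at queue depth 1, 128MB per burst, with enough idle time after each burst to hold the duty cycle near 20%. The device path, the use of fio/libaio, and the helper name are illustrative assumptions; the same helper is reused for the burst write test later on.

```python
# Illustrative approximation of the burst test, not the harness used for the
# published results. Assumes fio and libaio are installed; TEST_FILE is a
# placeholder and must point at a file or device that is safe to overwrite.
import subprocess
import time

TEST_FILE = "/dev/nvme0n1"   # hypothetical target, change before running
DUTY_CYCLE = 0.20            # drive is busy for ~20% of each burst+idle window

def run_bursts(rw: str, bursts: int = 8) -> None:
    """Issue `bursts` x 128MB of sequential 128kB I/O at QD1, with idle gaps."""
    for i in range(bursts):
        start = time.monotonic()
        subprocess.run([
            "fio", f"--name=burst-seq-{rw}",
            f"--filename={TEST_FILE}",
            f"--rw={rw}", "--bs=128k", "--iodepth=1",
            "--size=128M", "--direct=1", "--ioengine=libaio",
            f"--offset={i * 128 * 1024**2}",   # march sequentially through the data
        ], check=True)
        busy = time.monotonic() - start
        # Sleep long enough that busy time is ~20% of the burst+idle window.
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)

run_bursts("read")   # 8 x 128MB = 1GB of sequential reads in total
```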

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Optane Memory H10 is much lower than what the high-end TLC-based drives provide, but it is competitive with the other low-end NVMe drives that are limited to PCIe 3 x2 links. The Optane Memory caching is only responsible for about a 10% speed increase over the raw QLC speed, so this is obviously not one of the scenarios where the caching drivers can effectively stripe access between the Optane and NAND.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
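
The queue-depth sweep for the sustained tests can be sketched the same way. The code below assumes (an assumption on our part, not a published detail) that "1 to 32" means the powers of two from 1 through 32, pulls bandwidth from fio's JSON output, and averages the low queue depths into a single score as described above; the idle_between parameter exists so the write version of the test can add its cool-off time.

```python
# Illustrative sketch of the sustained-test queue-depth sweep, not the actual
# harness. Assumes fio is installed, "1 to 32" means powers of two, and
# TEST_FILE covers the region of the drive the test is confined to.
import json
import subprocess
import time

TEST_FILE = "/dev/nvme0n1"   # hypothetical target, change before running

def qd_sweep(rw: str, idle_between: float = 0.0) -> dict:
    """Run each queue depth for up to one minute or 32GB; return MiB/s per QD."""
    results = {}
    for qd in (1, 2, 4, 8, 16, 32):
        out = subprocess.run([
            "fio", f"--name=sustained-seq-{rw}", "--output-format=json",
            f"--filename={TEST_FILE}",
            f"--rw={rw}", "--bs=128k", f"--iodepth={qd}",
            "--ioengine=libaio", "--direct=1",
            "--size=32G", "--runtime=60",   # stops at whichever limit hits first
        ], capture_output=True, check=True).stdout
        results[qd] = json.loads(out)["jobs"][0][rw]["bw"] / 1024  # KiB/s -> MiB/s
        time.sleep(idle_between)   # cool-off gap (used by the write test below)
    return results

read_mbps = qd_sweep("read")
# Reported score: the average of QD1, QD2 and QD4, as described above.
print(f"{sum(read_mbps[qd] for qd in (1, 2, 4)) / 3:.0f} MiB/s")
```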

Sustained 128kB Sequential Read

On the longer sequential read test, the Optane caching is still not effectively combining the performance of the Optane and NAND halves of the H10. However, when reading back data that was not written sequentially, the Optane cache is a significant help.

The Optane cache is a bit of a hindrance to sequential reads at low queue depths on this test, but at QD8 and higher it provides some benefit over using just the QLC.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.
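
Under the same assumptions as the burst read sketch earlier, the only change for the write version is the transfer direction; because it writes, it will destroy data on the target device.

```python
run_bursts("write")   # 8 x 128MB = 1GB of sequential writes; destructive
```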

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write speed of 32GB of Optane on its own is quite poor, so this is a case where the QLC NAND significantly helps the Optane side of the H10. The SLC write cache on the H10's QLC side is competitive with those on the TLC-based drives, but when the caching software gets in the way, the H10 ends up with SATA-like performance.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
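
Reusing the hypothetical sweep helper from the sustained read section, the write version switches direction and passes the cool-off idle time between queue depths:

```python
write_mbps = qd_sweep("write", idle_between=60.0)   # up to a minute of idle per QD
print(f"{sum(write_mbps[qd] for qd in (1, 2, 4)) / 3:.0f} MiB/s")
```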

Sustained 128kB Sequential Write

The story is pretty much the same on the longer sequential write test, though some of the other low-end NVMe drives have fallen far enough that the Optane Memory H10's score isn't a complete embarrassment. However, the QLC portion on its own is still doing a better job of handling sustained sequential writes than the caching configuration.

There's no clear trend in performance for the H10 during the sustained sequential write test. It is mostly performing between the levels of the QLC and Optane portions, which means the caching software is getting in the way rather than allowing the two halves to work together and deliver better performance than either one individually. It's possible that with more idle time to clear out the Optane and SLC caches we would see drastically different behavior here.


Comments

  • The_Assimilator - Tuesday, April 23, 2019 - link

    > I don't understand the purpose of this product.

    It's Intel still trying, and still failing, to make Optane relevant in the consumer space.
  • tacitust - Tuesday, April 23, 2019 - link

    It works in the sense that the OEMs who use this drive will be able to advertise that customers are getting cutting-edge Optane storage. As the review says, this is a low-effort solution, so it likely didn't cost much to develop and they won't need too many design wins to recoup their costs. It also gets Optane into many more consumer devices, which helps in the long run in terms of perception, if nothing else.

    Note: most users won't know or even care that the drive itself doesn't provide faster performance than other solutions, so it doesn't really matter to Intel either. If they get the design win, Optane does gain relevance in the consumer space, just not with the small segment of power users who read AnandTech for the reviews.
  • ironargonaut - Monday, April 29, 2019 - link

    Seems it does provide faster performance in some usage cases.
    https://www.pcworld.com/article/3389742/intel-opta...
  • CheapSushi - Wednesday, April 24, 2019 - link

    I can't stand these dumb posts where people shut down the usage for consumers. I use it all the time for OS and other programs/files. I use it as cache. I use it for different reasons. Even the cheap early x2 laned variants. I'm not in IT or anything enterprise.
  • name99 - Thursday, April 25, 2019 - link

    It's worse than that.
    The OPTANE team clearly want to sell as many Optanes as they can.
    But INTC management has decided that they can extract maximal money from enterprise by limiting the actually sensible Optane uses (in the memory system, either as persistent memory for enterprise, or as a good place to swap to for consumers).

    And so we have this ridiculous situation where the Optane team keeps trying to sell Optane in ways that make ZERO sense because the way that makes by far the most sense (sell a 16 or 32 GB or 64GB DIMM that acts as the swap space) is prevented by Intel high management (who presumably are scared that if cheap CPUs can talk to Optane DIMMs, then someone somewhere will figure out how to use them in bulk rather than super expensive special Xeons).
    Corporate dysfunction at its finest...
  • Billy Tallis - Friday, April 26, 2019 - link

    I think it's too soon to say that Intel's artificially holding back Optane DIMMs from market segments where they might have a chance. They had initially planned to have Optane DIMM support in Skylake-SP but couldn't get it working until Cascade Lake, which has only been shipping in volume for a few months. Now that they have got one working Optane-compatible memory controller out the door, they can consider bringing those memory controller features down to other product segments. But we've seen that they have given up on updating the memory controllers on their 14nm consumer parts even to provide LPDDR4 support, which certainly is a more compelling and widely-demanded feature than Optane support. I wouldn't expect Intel to be able to introduce Optane support to their consumer CPUs until their second generation of 10nm (not counting CNL) processors at the earliest. Trying to squeeze it into their first mass-market 10nm would be unreasonable since they should be trying at all costs to avoid feature creep on those parts and just ship something that works and isn't still Skylake.
  • ironargonaut - Monday, April 29, 2019 - link

    Read here for an actual real-world usage test: two systems differing only in memory, given the same input, sometimes produced significantly different results.
    https://www.pcworld.com/article/3389742/intel-opta...
    3X speed-up for some tasks. I don't know about y'all, but I multitask a lot at work, so I will let background stuff go while I do something else that is in front of me.
  • weevilone - Monday, April 22, 2019 - link

    That's too bad. I tried to tinker with the Optane caching when it launched and it was a software disaster. I wrote it off to early days stuff and put it in my kids' PC when they began to allow non-boot drives to be cached. It was another disaster and Intel's techs couldn't figure it out.

    I wound up re-installing Windows the first time and I had to redo the kids' game drive the second time. No thanks.
  • CheapSushi - Wednesday, April 24, 2019 - link

    The problem is that you were using the proprietary HDD caching software Intel marketed. There are plenty of ways to do drive caching on Windows that don't involve that Intel software, and they're way better and smoother, even if still software-based. Software RAID and caching are superior to hardware caching unless you're using $1K+ add-on cards.
