Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
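The exact scripts behind our test suite aren't published, but readers who want to approximate this workload can get close with fio. The Python sketch below drives fio with roughly the parameters described above; the device path, job name and libaio I/O engine are placeholder assumptions rather than the review's actual configuration.

```python
import subprocess
import time

# Rough approximation of the burst sequential read test described above.
# Assumes fio is installed and /dev/nvme0n1 (placeholder path) already
# holds the 16GB of test data.
DEVICE = "/dev/nvme0n1"
BURSTS = 8            # 8 x 128MB = 1GB read in total

for i in range(BURSTS):
    start = time.time()
    subprocess.run([
        "fio",
        "--name=burst-seq-read",     # placeholder job name
        f"--filename={DEVICE}",
        "--rw=read",                 # sequential reads
        "--bs=128k",                 # 128kB operations
        "--iodepth=1",               # no queuing (QD1)
        "--ioengine=libaio",
        "--direct=1",
        "--io_size=128M",            # 128MB per burst
        f"--offset={i * 128}m",      # step through the data set
        "--output-format=json",
    ], check=True)
    busy = time.time() - start
    # Idle four times as long as the burst took, so the burst is
    # roughly 20% of each cycle (the stated duty cycle).
    time.sleep(busy * 4)
```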

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Optane Memory H10 is much lower than what the high-end TLC-based drives provide, but it is competitive with the other low-end NVMe drives that are limited to PCIe 3 x2 links. The Optane Memory caching is only responsible for about a 10% speed increase over the raw QLC speed, so this is obviously not one of the scenarios where the caching drivers can effectively stripe access between the Optane and NAND.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
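The queue depth sweep and the scoring can be approximated the same way. The sketch below (again using a placeholder device path and fio's JSON output) runs each queue depth for up to one minute or 32GB, then averages the QD1, QD2 and QD4 results, mirroring how the reported score is computed.

```python
import json
import subprocess

# Approximation of the sustained sequential read sweep: queue depths
# 1-32, each capped at one minute or 32GB, over a 64GB span of data.
DEVICE = "/dev/nvme0n1"  # placeholder path
results = {}

for qd in [1, 2, 4, 8, 16, 32]:
    out = subprocess.run([
        "fio",
        f"--name=seq-read-qd{qd}",
        f"--filename={DEVICE}",
        "--rw=read",
        "--bs=128k",
        f"--iodepth={qd}",
        "--ioengine=libaio",
        "--direct=1",
        "--size=64G",        # drive contains 64GB of test data
        "--io_size=32G",     # stop after 32GB ...
        "--runtime=60",      # ... or one minute, whichever comes first
        "--output-format=json",
    ], capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    results[qd] = job["read"]["bw_bytes"]  # bytes/s (recent fio JSON)

# Reported score: average of the QD1, QD2 and QD4 results.
score = sum(results[qd] for qd in (1, 2, 4)) / 3
print(f"Sustained sequential read score: {score / 1e6:.0f} MB/s")
```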

Sustained 128kB Sequential Read

On the longer sequential read test, the Optane caching is still not effectively combining the performance of the Optane and NAND halves of the H10. However, when reading back data that was not written sequentially, the Optane cache is a significant help.

The Optane cache is a bit of a hindrance to sequential reads at low queue depths on this test, but at QD8 and higher it provides some benefit over using just the QLC.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write speed of 32GB of Optane on its own is quite poor, so this is a case where the QLC NAND is significantly helping the Optane on the H10. The SLC write cache on the H10's QLC side is competitive with those on the TLC-based drives, but when the caching software gets in the way the H10 ends up with SATA-like performance.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
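The write sweep is the same sketch as the read version above with the direction flipped and an idle pause after each queue depth; as before, this is only an approximation of the described test, not the exact configuration.

```python
import subprocess
import time

# Same sweep as the sustained read sketch, but writing, with up to a
# minute of idle time after each queue depth for cache flushing and
# garbage collection. Device path is a placeholder.
DEVICE = "/dev/nvme0n1"

for qd in [1, 2, 4, 8, 16, 32]:
    subprocess.run([
        "fio",
        f"--name=seq-write-qd{qd}",
        f"--filename={DEVICE}",
        "--rw=write",          # sequential writes this time
        "--bs=128k",
        f"--iodepth={qd}",
        "--ioengine=libaio",
        "--direct=1",
        "--size=64G",          # confined to a 64GB span
        "--io_size=32G",
        "--runtime=60",
        "--output-format=json",
    ], check=True)
    time.sleep(60)  # idle period between queue depths
```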

Sustained 128kB Sequential Write

The story is pretty much the same on the longer sequential write test, though some of the other low-end NVMe drives have fallen far enough that the Optane Memory H10's score isn't a complete embarrassment. However, the QLC portion on its own is still doing a better job of handling sustained sequential writes than the caching configuration.

There's no clear trend in performance for the H10 during the sustained sequential write test. It is mostly performing between the levels of the QLC and Optane portions, which means the caching software is getting in the way rather than allowing the two halves to work together and deliver better performance than either one individually. It's possible that with more idle time to clear out the Optane and SLC caches we would see drastically different behavior here.

Comments

  • Valantar - Tuesday, April 23, 2019

    "Why hamper it with a slower bus?": cost. This is a low-end product, not a high-end one. The 970 EVO can at best be called "midrange" (though it keeps up with the high end for performance in a lot of cases). Intel doesn't yet have a monolithic controller that can work with both NAND and Optane, so this is (as the review clearly states) two devices on one PCB. The use case is making a cheap but fast OEM drive, where caching to the Optane part _can_ result in noticeable performance increases for everyday consumer workloads, but is unlikely to matter in any kind of stress test. The problem is that adding Optane drives up prices, meaning that this doesn't compete against QLC drives (which it would beat in terms of user experience) but also TLC drives which would likely be faster in all but the most cache-friendly, bursty workloads.

    I see this kind of concept as the "killer app" for Optane outside of datacenters and high-end workstations, but this implementation is nonsense due to the lack of a suitable controller. If the drive had a single controller with an x4 interface, replaced the DRAM buffer with a sizeable Optane cache, and came in QLC-like capacities, it would be _amazing_. Great capacity, great low-QD speeds (for anything cached), great price. As it stands, it's ... meh.
  • cb88 - Friday, May 17, 2019

    Therein lies the BS... Optane cannot compete as a low-end product as it is too expensive... so they should have settled for being the best premium product with 4x PCIe... probably even maxing out PCIe 4.0 easily once it launches.
  • CheapSushi - Wednesday, April 24, 2019

    I think you're mixing up why it would be faster. The lanes are the easier part. It's inherently faster. But you can't magically make x2 PCIe lanes push more bandwidth than x4 PCIe lanes on the same standard (3.0 for example).
  • twotwotwo - Monday, April 22, 2019

    Prices not announced, so they can still make it cheaper.

    Seems like a tricky situation unless it's priced way below anything that performs similarly though. Faster options on one side and really cheap drives that are plenty for mainstream use on the other.
  • CaedenV - Monday, April 22, 2019

    lol cheaper? All of the parts of a traditional SSD, *plus* all of the added R&D, parts, and software for the Optane half of the drive?
    I will be impressed if this is only 2x the price of a Sammy... and still slower.
  • DanNeely - Monday, April 22, 2019

    Ultimately, to scale this I think Intel is going to have to add an on-card PCIe switch. With the company currently dominating that market setting prices to fleece enterprise customers, I suspect that means they'll need to design something in-house. PCIe 4 will help some, but normal drives will get faster too.
  • kpb321 - Monday, April 22, 2019

    I don't think that would end up working out well. As the article mentions, PCIe switches tend to be power hungry, which wouldn't work well here, and it would add yet another part to the drive and push the BOM up even higher. For this to work you'd need to deliver TLC-level performance or better, but at a lower cost. Ultimately the only way I can see that working would be moving to a single integrated controller. From a cost perspective, eliminating the DRAM buffer by using a combination of the Optane memory and HMB (host memory buffer) should probably work. This would probably push it into a largely or completely hardware-managed solution, and would improve compatibility and eliminate the issues with PCIe bifurcation and bottlenecks.
  • ksec - Monday, April 22, 2019

    Yes, I think we will need a Single Controller to see its true potential and if it has a market fit.

    Cause right now I am not seeing any real benefits or advantages of using this compared to a decent M.2 SSD.
  • Kevin G - Monday, April 22, 2019

    What Intel needs to do for this to really take off is to have a combo NAND + Optane controller capable of handling both types natively. This would eliminate the need for a PCIe switch and free up board space on the small M.2 sticks. A win-win scenario if Intel puts forward the development investment.
  • e1jones - Monday, April 22, 2019

    A solution in search of a problem. And, typical Intel, clearly incompatible with a lot of modern systems, much less older systems. Why do they keep trying to limit the usability of Optane!?

    In a world where each half was actually accessible, it might be useful for ZFS/NAS apps, where the Optane could be the log or cache and the QLC could be a WORM storage tier.
