Our suite of Linux-based synthetic tests cannot directly measure how the Intel Optane Memory H20 or Enmotus FuzeDrive SSD behave when used with their respective caching or tiering software, but we can investigate how the individual components of those storage systems perform in isolation. The tests on this page are all conducted on our usual AMD Ryzen testbed.

Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
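To give a concrete idea of what a queue depth 1 burst test boils down to, here is a minimal Python sketch of the random read case. The device path, block size, burst count, and idle interval are illustrative assumptions, not our suite's actual settings.

```python
# Minimal sketch of a QD1 "burst" random read test. Assumptions: the drive
# under test is at TEST_PATH, each burst is 64MB of 4kB reads, and O_DIRECT
# is used to bypass the page cache.
import mmap
import os
import random
import time

TEST_PATH = "/dev/nvme0n1"      # hypothetical device under test
BLOCK_SIZE = 4096               # 4kB random reads
BURST_BYTES = 64 * 1024 * 1024  # up to 64MB per burst
BURSTS = 32                     # 32 bursts, as in the random read/write tests
IDLE_SECONDS = 2.0              # idle time between bursts (assumed)

def qd1_random_read_burst(fd: int, span: int) -> float:
    """Issue one burst of QD1 random reads and return throughput in MB/s."""
    buf = mmap.mmap(-1, BLOCK_SIZE)          # page-aligned buffer for O_DIRECT
    blocks = BURST_BYTES // BLOCK_SIZE
    offsets = [random.randrange(span // BLOCK_SIZE) * BLOCK_SIZE
               for _ in range(blocks)]
    start = time.perf_counter()
    for off in offsets:
        os.preadv(fd, [buf], off)            # queue depth 1: one IO at a time
    elapsed = time.perf_counter() - start
    return BURST_BYTES / elapsed / 1e6

fd = os.open(TEST_PATH, os.O_RDONLY | os.O_DIRECT)
span = os.lseek(fd, 0, os.SEEK_END)          # capacity to spread reads across
results = []
for _ in range(BURSTS):
    results.append(qd1_random_read_burst(fd, span))
    time.sleep(IDLE_SECONDS)                 # idle time between bursts
os.close(fd)
print(f"average burst throughput: {sum(results) / len(results):.1f} MB/s")
```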

[Charts: QD1 Burst IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

The Intel Optane cache devices still have a huge lead over other storage technologies for random read performance, but the most notable improvements the Optane Memory H20 makes over the H10 are for random and sequential writes, where the little Optane devices most drastically underperform NAND SSDs. The small 32GB cache is now almost as fast as the 118GB Optane 800P was, but that's still not enough to match the performance of a decent-sized NAND SSD.

The SLC portion of the Enmotus FuzeDrive SSD doesn't particularly stand out, since these burst IO tests are mostly operating out of the SLC caches even on the regular NAND SSDs. However, the static SLC does clearly retain full performance on these tests even when full, in stark contrast to most of the drives that rely on drive-managed SLC caching.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
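As a rough illustration of how a queue depth sweep can be structured, the Python sketch below approximates each queue depth with that many synchronous reader threads (blocking reads release the GIL, so the IOs genuinely overlap). A production test suite would normally use an asynchronous IO engine instead, and the device path, block size, and per-depth runtime here are assumptions.

```python
# Sketch of a sustained random read test swept across queue depths, with
# queue depth approximated by one synchronous worker thread per outstanding IO.
import mmap
import os
import random
import threading
import time

TEST_PATH = "/dev/nvme0n1"   # hypothetical device under test
BLOCK_SIZE = 4096            # 4kB random reads
RUNTIME = 10.0               # seconds spent at each queue depth (assumed)

def worker(span: int, stop: threading.Event, counts: list) -> None:
    """Issue random reads back-to-back until told to stop."""
    fd = os.open(TEST_PATH, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK_SIZE)          # page-aligned buffer for O_DIRECT
    done = 0
    while not stop.is_set():
        off = random.randrange(span // BLOCK_SIZE) * BLOCK_SIZE
        os.preadv(fd, [buf], off)
        done += 1
    counts.append(done)
    os.close(fd)

fd = os.open(TEST_PATH, os.O_RDONLY)
span = os.lseek(fd, 0, os.SEEK_END)          # capacity to spread reads across
os.close(fd)

for qd in (1, 2, 4, 8, 16, 32):
    stop, counts = threading.Event(), []
    threads = [threading.Thread(target=worker, args=(span, stop, counts))
               for _ in range(qd)]
    for t in threads:
        t.start()
    time.sleep(RUNTIME)
    stop.set()
    for t in threads:
        t.join()
    mbps = sum(counts) * BLOCK_SIZE / RUNTIME / 1e6
    print(f"QD{qd}: {mbps:.1f} MB/s")
```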

[Charts: Sustained IO Performance and Power Efficiency (Random Read, Random Write, Sequential Read, Sequential Write)]

There are no big changes to note for the Optane cache device on these longer-running tests that bring in some higher queue depths. The main advantages of Optane devices are at low queue depths, especially the QD1 that was already covered by the burst IO tests. The SLC portion of the Enmotus FuzeDrive does start to show some performance loss on the write tests when it is mostly full, illustrating that SLC NAND is not immune to the performance impact of background garbage collection and its time-consuming block erase operations.

[Charts: Random Read, Random Write, Sequential Read, Sequential Write performance across queue depths]

Digging deeper into the performance results, the Optane portion of the H20 shows the results we expect, reaching its full performance at fairly low queue depths and showing high consistency.

The SLC portion of the Enmotus FuzeDrive SSD shows many of the characteristics we're used to seeing from NAND-based SSDs, including the common Phison trait of poor sequential read performance at low queue depths.

Random Read Latency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
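For a simplified picture of how latency and QoS numbers of this kind are derived, the Python sketch below times each individual read at queue depth 1 and reports the mean and 99th percentile. The device path and sample count are assumptions, and the real test also pushes into higher queue depths to find the point where latency degrades.

```python
# Sketch of a random read latency/QoS measurement: record per-IO completion
# times at queue depth 1 and report mean and 99th-percentile latency.
import mmap
import os
import random
import statistics
import time

TEST_PATH = "/dev/nvme0n1"   # hypothetical device under test
BLOCK_SIZE = 4096            # 4kB random reads
SAMPLES = 100_000            # number of timed IOs (assumed)

fd = os.open(TEST_PATH, os.O_RDONLY | os.O_DIRECT)
span = os.lseek(fd, 0, os.SEEK_END)          # capacity to spread reads across
buf = mmap.mmap(-1, BLOCK_SIZE)              # page-aligned buffer for O_DIRECT

latencies_us = []
for _ in range(SAMPLES):
    off = random.randrange(span // BLOCK_SIZE) * BLOCK_SIZE
    start = time.perf_counter()
    os.preadv(fd, [buf], off)
    latencies_us.append((time.perf_counter() - start) * 1e6)
os.close(fd)

mean = statistics.fmean(latencies_us)
p99 = statistics.quantiles(latencies_us, n=100)[98]   # 99th percentile
print(f"mean: {mean:.1f} us, 99th percentile: {p99:.1f} us")
```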

The Optane cache on the H20 can't beat the maximum random read throughput available from a mainstream or high-end TLC SSD, but until the Optane hits its throughput limit it has unbeatable latency. The SLC portion of the Enmotus FuzeDrive SSD isn't really any faster on this test than a good TLC drive.

Comments

  • deil - Wednesday, May 19, 2021 - link

    I still feel this is a lazy solution.
    QLC for data storage, Optane for file metadata storage is the way.
    Instant search and big size: the best of both worlds.
  • Wereweeb - Wednesday, May 19, 2021 - link

    What you're describing is inferior to current QLC SSDs. Optane is still orders of magnitude slower than RAM, and I bet it would still be slower than just using system RAM like many DRAMless drives do. Plus, it's expensive for a consumer product.

    Optane's main use is to add terabytes of low-cost low-latency storage to workstations (that's how Intel uses it, to sell said workstations), and today both RAM and SLC drives are hot on its heels.
  • jabber - Wednesday, May 19, 2021 - link

    All I want is an OS file system that can handle microfiles without grinding down to KBps all the time. There's nothing I love more than seeing my super fast storage grind to a halt when I do large user data file copies.
  • Tomatotech - Wednesday, May 19, 2021 - link

    Pay for a 100% Optane SSD then. Or review your SSD / OS choices if this aspect is key to your income.
  • haukionkannel - Wednesday, May 19, 2021 - link

    If only there were a pure Optane M.2 SSD of around 500GB to 1TB… and I know, it would cost at least $1000 to $2000, but that would be quite useful in high-end NAS storage or even as a main PC system drive.
  • Fedor - Sunday, May 23, 2021 - link

    There are, and have been for quite a few years. See the 900p, 905p (discontinued) and enterprise equivalents like 4800X and now the new 5800X.
  • jabber - Wednesday, May 19, 2021 - link

    They ALL grind to a halt when they hit thousands of microfiles.
  • ABR - Wednesday, May 19, 2021 - link

    As can be seen from the actual application benchmarks, these caching drives add almost nothing to (and sometimes take away from) performance. This matches my experience with a hybrid SSD/hard drive a few years ago on Windows, which also had 16 or 32 GB for the fast part: it was indistinguishable from a regular hard drive in performance. Upgrading the same machine to a full SSD, on the other hand, was night and day. Basically, software doesn't seem to be able to do a good job of determining what to cache.
  • lightningz71 - Wednesday, May 19, 2021 - link

    I see a lot of people bagging on Optane in general, both here and at other forums. I admit to not being a fan of it for many reasons, however, when it works, and when it's implemented with very specific goals, it does make a big difference. The organization I work at got a whole bunch (thousands) of PCs a few years ago that had mechanical hard drives. Over the last few years, different security and auditing software has been installed on them that has seriously impacted their performance. The organization was able to bulk buy a ton of the early 32GB Optane drives and we've been installing them in the machines as workload has permitted. The performance difference when you get the configuration right is drastically better for ordinary day to day office workers. This is NOT a solution for power users. This is a solution for machines that will be doing only a few, specific tasks that are heavily access latency bound and don't change a lot from day to day. The caching algorithms figure out the access patterns relatively quickly and it's largely indistinguishable from the newer PCs that were purchased with SSDs from the start.

    As for the H20, I understand where Intel was going with this, and as a "minimum effort" refresh on an existing product, it achieves its goals. However, I feel that Intel has seriously missed the mark with this product in furthering the product itself.

    I suggest that Intel should have invested in their own combined NVME/Optane controller chip that would do the following:
    1) Use PCIe 4.0 on the bus interface with a unified 4x setup.
    2) Instead of using regular DRAM for caching, use the Optane modules themselves in that role. Tier the caching with host-based caching like the DRAMless controller models do, then tier that down to the Optane modules. They can continue to use the same strategies that regular Optane uses for caching, but have it implemented on the on-card controller instead of the host operating system. A lot of the features that were the reason the Optane device needed to be its own PCIe device separate from the SSD were addressed in NVMe Spec 1.4 (a and b), meaning that a lot of those things can be done through the unified controller. A competent controller chip should have been achievable that would have realized all of the features of the existing design, but with much better I/O capabilities.

    Maybe that's coming in the next generation, if that ever happens. This... this was a minimum effort to keep a barely relevant product... barely relevant.
  • zodiacfml - Thursday, May 20, 2021 - link

    I did not get the charts. I did not see any advantage except when the workload fits in the Optane; is that correct?
