Our suite of Linux-based synthetic tests cannot directly measure how the Intel Optane Memory H20 and Enmotus FuzeDrive SSD behave when used with their respective caching or tiering software, but we can investigate how the individual components of those storage systems perform in isolation. The tests on this page are all conducted on our usual AMD Ryzen testbed.

Advanced Synthetic Tests

Our benchmark suite includes a variety of tests that are less about replicating any real-world IO patterns, and more about exposing the inner workings of a drive with narrowly-focused tests. Many of these tests will show exaggerated differences between drives, and for the most part that should not be taken as a sign that one drive will be drastically faster for real-world usage. These tests are about satisfying curiosity, and are not good measures of overall drive performance. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Whole-Drive Fill

[Charts: whole-drive fill throughput over time - Pass 1 and Pass 2]

As expected, we see fairly steady write performance from the Intel Optane devices, including on the second write pass—but the total throughput is still quite low compared to NAND SSDs. The SLC portion of the Enmotus FuzeDrive SSD has similar performance consistency, but competitive throughput. The QLC portion of that drive does have the typical drive-managed SLC cache that starts out with a capacity of about 333 GB when this section of the drive is empty, and shrinks down to about 19 GB for the second pass when the drive is full.
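
For readers curious how a test like this is put together, below is a minimal sketch of a two-pass whole-drive fill driven by fio from Python. It is not our actual test harness; the device path, queue depth, and logging options are illustrative assumptions, and running it destroys all data on the target drive.

```python
#!/usr/bin/env python3
"""Hypothetical two-pass whole-drive fill sketch (not the article's harness).

Writing the raw device end to end twice with large sequential blocks, while
logging bandwidth once per second, makes the SLC-cache hand-off point and any
second-pass slowdown visible in the resulting bandwidth logs.
"""
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # assumption: the drive under test; this is destructive


def fill_pass(pass_name: str) -> float:
    """Run one sequential fill over the whole device and return average MiB/s."""
    cmd = [
        "fio",
        f"--name={pass_name}",
        f"--filename={DEVICE}",
        "--rw=write",                    # sequential writes
        "--bs=128k",                     # 128kB blocks, matching the chart above
        "--ioengine=libaio",
        "--iodepth=32",
        "--direct=1",                    # bypass the page cache
        f"--write_bw_log={pass_name}",   # per-interval bandwidth log files
        "--log_avg_msec=1000",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]
    return job["write"]["bw"] / 1024     # fio reports bandwidth in KiB/s


if __name__ == "__main__":
    for name in ("pass1", "pass2"):      # pass 2 starts with the drive already full
        print(f"{name}: {fill_pass(name):.0f} MiB/s average")
```

Plotting the bandwidth log files fio produces is what reveals where the SLC cache runs out on each pass.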

[Charts: Sustained 128kB Sequential Write - average throughput for the last 16 GB, overall average throughput, and power efficiency]

The SLC portion of the Enmotus FuzeDrive SSD is naturally far faster for the overall fill process than the drives that run out of SLC and slow down part way through. The Optane devices aren't quite in a "slow and steady wins the race" situation against the traditional NAND SSDs, but the Optane Memory H20's cache device is at least faster than the post-cache performance of the QLC drives and the DRAMless Samsung 980.

Working Set Size

There's a bit of variability in the random read latency of the Optane cache on the Optane Memory H20, but it's so much faster than the NAND devices that a bit of inconsistency hardly matters. We're just seeing noise that only shows up at this scale and is insignificant at NAND speeds. The SLC slice of the FuzeDrive SSD is faster than any of the other NAND drives, but it's a narrow lead. These fast devices naturally do not show any of the performance drop-off that comes from having insufficient DRAM: the Optane caches don't need it in the first place, and the SLC portion of the FuzeDrive SSD is small enough to be managed with a fraction of the drive's DRAM.
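
As a rough illustration of how a working-set-size sweep can be run, here is a hedged Python-plus-fio sketch: it measures QD1 4kB random read latency while confining reads to progressively larger slices of the drive. The region sizes, runtime, and device path are assumptions rather than our exact test parameters.

```python
"""Hypothetical working-set-size sweep (illustrative, not the article's exact test)."""
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # assumption: drive under test


def random_read_latency_us(region_gb: int) -> float:
    """Mean 4kB random read latency (µs) when reads are confined to region_gb GB."""
    cmd = [
        "fio",
        "--name=wss",
        f"--filename={DEVICE}",
        "--rw=randread",
        "--bs=4k",
        "--ioengine=libaio",
        "--iodepth=1",                   # QD1 exposes raw access latency
        "--direct=1",
        f"--size={region_gb}G",          # confine IO to this working set
        "--time_based",
        "--runtime=60",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    job = json.loads(out)["jobs"][0]
    return job["read"]["lat_ns"]["mean"] / 1000   # fio 3.x reports latency in ns


if __name__ == "__main__":
    for gb in (1, 2, 4, 8, 16, 32, 64):
        print(f"{gb:>3} GB working set: {random_read_latency_us(gb):.1f} µs mean")
```

A drive whose mapping table no longer fits in its DRAM (or HMB allocation) shows latency climbing as the working set grows; the Optane cache and the small SLC slice stay flat, for the reasons described above.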

Performance vs Block Size

[Charts: performance vs block size - random read, random write, sequential read, sequential write]

The Optane cache on the H20 behaves almost exactly like previous Optane Memory devices, except that it has acquired a strong dislike for sequential writes one 512B sector at a time. We often see sub-4kB sequential writes performing badly on NAND devices because the flash translation layer operates with 4kB granularity, so perhaps Intel has switched something in this Optane module to operate with 1kB granularity and it now needs to perform a read-modify-write cycle to handle this case. (The lack of a similar performance drop for random writes is a bit of a puzzle.)
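
To make the read-modify-write argument concrete, here is a small toy model (our assumption, not anything confirmed by Intel) that counts the internal traffic generated when a host write is smaller than a hypothetical mapping granularity.

```python
"""Toy model of the read-modify-write penalty described above. It assumes a
hypothetical translation layer with a fixed mapping granularity and counts the
internal traffic generated by host writes smaller than that granularity."""


def rmw_amplification(host_write_bytes: int, granularity: int) -> tuple[int, int]:
    """Return (bytes_read_internally, bytes_written_internally) for one host write."""
    if host_write_bytes % granularity == 0:
        return 0, host_write_bytes           # full mapping units: no read-modify-write
    # Partial unit: the device must read the whole unit, merge, and write it back.
    units_touched = (host_write_bytes + granularity - 1) // granularity
    internal_io = units_touched * granularity
    return internal_io, internal_io


if __name__ == "__main__":
    for gran in (1024, 4096):                # hypothetical 1kB vs 4kB granularity
        reads, writes = rmw_amplification(512, gran)   # one 512B sequential write
        factor = writes / 512
        print(f"granularity {gran:>4} B: read {reads} B, wrote {writes} B "
              f"-> {factor:.0f}x write amplification for 512B host writes")
```

With a 1kB mapping unit, every 512B sequential write forces the device to read and rewrite a full 1kB unit, which would be consistent with the throughput penalty seen in the sequential write results.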

The SLC portion of the Enmotus FuzeDrive SSD shows similar overall behavior to other Phison-based NAND drives, albeit with generally higher performance.

Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

[Charts: mixed random IO and mixed sequential IO - throughput and power efficiency]

The Intel Optane devices handle the mixed random IO test better than any of the NAND drives, and the Optane Memory H20's speed improvements over the H10 bring it up to the level of the larger 118GB Optane SSD 800P. The SLC portion of the Enmotus FuzeDrive SSD performs comparably to a decent 1TB TLC drive with SLC cache.

On the mixed sequential IO test, the Optane devices are far slower than the NAND devices, and the FuzeDrive's SLC portion has no real advantage either. A fast controller and lots of NAND are the best recipe for high performance on this test.

[Charts: performance vs read/write mix - mixed random IO and mixed sequential IO]

The Optane devices show completely different performance trends from the NAND devices on the mixed random IO test: the Optane drives speed up significantly as the workload gets more write-heavy, while the NAND drives have flat or declining performance. On the mixed sequential IO test, the Optane behavior is a bit more normal, albeit with very low-end performance.
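
For reference, a read/write-mix sweep of this sort can be approximated with a short fio-driven script like the hedged sketch below; the queue depth, runtime, and block size are illustrative assumptions and not our exact mixed IO test configuration.

```python
"""Hypothetical mixed random IO sweep (illustrative assumptions, not the
article's exact test): step the read/write mix from pure reads to pure writes
and record combined throughput at each point."""
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # assumption: drive under test


def mixed_randrw_mbps(read_pct: int) -> float:
    """Combined read+write throughput (MiB/s) at the given read percentage."""
    cmd = [
        "fio",
        "--name=mixed",
        f"--filename={DEVICE}",
        "--rw=randrw",
        "--bs=4k",
        f"--rwmixread={read_pct}",       # percentage of IOs that are reads
        "--ioengine=libaio",
        "--iodepth=32",
        "--direct=1",
        "--time_based",
        "--runtime=30",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    job = json.loads(out)["jobs"][0]
    total_kib_s = job["read"]["bw"] + job["write"]["bw"]   # fio reports KiB/s
    return total_kib_s / 1024


if __name__ == "__main__":
    for read_pct in range(100, -1, -10):   # 100% reads down to 100% writes
        print(f"{read_pct:>3}% reads: {mixed_randrw_mbps(read_pct):.0f} MiB/s combined")
```

On an Optane device the resulting curve rises toward the write-heavy end, while most NAND drives stay flat or decline, matching the trends in the charts above.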

Comments

  • deil - Wednesday, May 19, 2021

    I still feel this is a lazy solution.
    QLC for data storage, Optane for file metadata storage is the way.
    Instant search and big size, best of both worlds.
  • Wereweeb - Wednesday, May 19, 2021

    What you're describing is inferior to current QLC SSDs. Optane is still orders of magnitude slower than RAM, and I bet it would still be slower than just using system RAM like many DRAMless drives do. Plus, expensive for a consumer product.

    Optane's main use is to add terabytes of low-cost, low-latency storage to workstations (that's how Intel uses it, to sell said workstations), and today both RAM and SLC drives are hot on its heels.
  • jabber - Wednesday, May 19, 2021

    All I want is an OS file system that can handle microfiles without grinding down to KB/s all the time. Nothing I love more than seeing my super-fast storage grind to a halt when I do large user data file copies.
  • Tomatotech - Wednesday, May 19, 2021

    Pay for a 100% Optane SSD then. Or review your SSD / OS choices if this aspect is key to your income.
  • haukionkannel - Wednesday, May 19, 2021

    If only there were a pure Optane M.2 SSD of about 500 GB to 1 TB… and I know… it would cost at least $1000 to $2000, but that would be quite useful in high-end NAS storage or even as a main PC system drive.
  • Fedor - Sunday, May 23, 2021

    There are, and have been for quite a few years. See the 900P, 905P (discontinued) and enterprise equivalents like the P4800X and now the new P5800X.
  • jabber - Wednesday, May 19, 2021

    They ALL grind to a halt when they hit thousands of microfiles.
  • ABR - Wednesday, May 19, 2021

    As can be seen from the actual application benchmarks, these caching drives add almost nothing to (and sometimes take away from) performance. This matches my experience a few years ago with a hybrid SSD/hard drive on Windows, which also had 16 or 32 GB for the fast part: it was indistinguishable from a regular hard drive in performance. Upgrading the same machine to a full SSD, on the other hand, was night and day. Basically, software doesn't seem to be able to do a good job of determining what to cache.
  • lightningz71 - Wednesday, May 19, 2021

    I see a lot of people bagging on Optane in general, both here and at other forums. I admit to not being a fan of it for many reasons, however, when it works, and when it's implemented with very specific goals, it does make a big difference. The organization I work at got a whole bunch (thousands) of PCs a few years ago that had mechanical hard drives. Over the last few years, different security and auditing software has been installed on them that has seriously impacted their performance. The organization was able to bulk buy a ton of the early 32GB Optane drives and we've been installing them in the machines as workload has permitted. The performance difference when you get the configuration right is drastically better for ordinary day to day office workers. This is NOT a solution for power users. This is a solution for machines that will be doing only a few, specific tasks that are heavily access latency bound and don't change a lot from day to day. The caching algorithms figure out the access patterns relatively quickly and it's largely indistinguishable from the newer PCs that were purchased with SSDs from the start.

    As for the H20, I understand where Intel was going with this, and as a "minimum effort" refresh of an existing product, it achieves its goals. However, I feel that Intel has seriously missed the mark in furthering the product itself.

    I suggest that Intel should have invested in their own combined NVMe/Optane controller chip that would do the following:
    1) Use PCIe 4.0 on the bus interface with a unified x4 setup.
    2) Instead of using regular DRAM for caching, use the Optane modules themselves in that role. Tier the caching with host-based caching like the DRAMless controller models do, then tier that down to the Optane modules. They can continue to use the same strategies that regular Optane uses for caching, but have it implemented on the on-card controller instead of in the host operating system. A lot of the features that were the reason the Optane device needed to be its own PCIe device separate from the SSD were addressed in NVMe Spec 1.4 (a and b), meaning that a lot of those things can be done through the unified controller. A competent controller chip should have been achievable that would have realized all of the features of the existing design, but with much better I/O capabilities.

    Maybe that's coming in the next generation, if that ever happens. This... this was a minimum effort to keep a barely relevant product... barely relevant.
  • zodiacfml - Thursday, May 20, 2021

    I did not get the charts. I did not see any advantage except if the workload fits in Optane, is that correct?
