Our suite of Linux-based synthetic tests cannot directly measure how the Intel Optane Memory H20 or Enmotus FuzeDrive SSD behave when used with their respective caching or tiering software, but we can investigate how the individual components of those storage systems perform in isolation. The tests on this page are all conducted on our usual AMD Ryzen testbed.

Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
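As a rough illustration of how such a test works (a simplified sketch, not our actual harness: the device path, block size, and idle time are assumptions, and a real run would bypass the page cache with O_DIRECT and aligned buffers), a QD1 burst random read test boils down to short synchronous bursts separated by idle periods:

# Simplified sketch of a QD1 burst random read test: short synchronous
# bursts separated by idle time. Not the actual harness; the device path
# and parameters are assumptions. Needs root to read a raw block device.
import os
import random
import time

DEV = "/dev/nvme0n1"              # hypothetical device under test
BURSTS = 32                       # 32 bursts, as in the random read test
BURST_BYTES = 64 * 1024 * 1024    # up to 64MB per burst
BLOCK = 4096                      # 4KB random reads
IDLE_SECS = 1.0                   # idle time between bursts (assumed)

fd = os.open(DEV, os.O_RDONLY)
dev_size = os.lseek(fd, 0, os.SEEK_END)

for burst in range(BURSTS):
    start = time.perf_counter()
    done = 0
    while done < BURST_BYTES:
        # queue depth 1: each read must complete before the next is issued
        offset = random.randrange(dev_size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        done += BLOCK
    elapsed = time.perf_counter() - start
    print(f"burst {burst}: {done / elapsed / 1e6:.1f} MB/s")
    time.sleep(IDLE_SECS)         # let the drive return to idle

os.close(fd)

The write and sequential variants follow the same structure; only the access pattern and burst sizes change.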

[Charts: QD1 Burst IO Performance - Random Read, Random Write, Sequential Read, Sequential Write]

The Intel Optane cache devices still have a huge lead over other storage technologies for random read performance, but the most notable improvements the Optane Memory H20 makes over the H10 are for random and sequential writes, where the little Optane devices most drastically underperform NAND SSDs. The small 32GB cache is now almost as fast as the 118GB Optane 800P was, but that's still not enough to match the performance of a decent-sized NAND SSD.

The SLC portion of the Enmotus FuzeDrive SSD doesn't particularly stand out, since these burst IO tests are mostly operating out of the SLC caches even on the regular NAND SSDs. However, the static SLC does clearly retain full performance on these tests even when full, in stark contrast to most of the drives that rely on drive-managed SLC caching.
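The distinction is easier to see with a toy model: drive-managed SLC caching borrows empty TLC/QLC blocks, so the usable cache shrinks as the drive fills, while a static SLC pool keeps its full size. All capacities below are invented for illustration:

# Toy model of SLC cache capacity vs. drive fill level. The 2TB drive,
# 24GB static pool, and 4 bits per QLC cell are invented numbers.
def dynamic_slc_gb(drive_gb, used_gb, bits_per_cell=4):
    """Drive-managed SLC borrows empty QLC blocks: each GB of SLC cache
    consumes bits_per_cell GB of unused native capacity."""
    return max((drive_gb - used_gb) / bits_per_cell, 0)

def static_slc_gb(pool_gb=24):
    """A static SLC pool keeps its full size regardless of fill level."""
    return pool_gb

for used in (0, 500, 1000, 1900):
    print(f"{used:>4} GB used: dynamic SLC ~{dynamic_slc_gb(2000, used):.0f} GB, "
          f"static SLC {static_slc_gb()} GB")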

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
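A simplified sketch of a queue depth sweep follows (not the actual suite: it approximates queue depth with worker threads, and the device path, durations, and score weighting are assumptions):

# Simplified queue depth sweep: approximate QD with N worker threads,
# each issuing synchronous 4KB random reads. Illustrative only.
import os
import random
import threading
import time

DEV = "/dev/nvme0n1"   # hypothetical device under test
BLOCK = 4096
SECONDS = 5            # short duration, for illustration

def worker(fd, dev_size, stop, counts, idx):
    while not stop.is_set():
        offset = random.randrange(dev_size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        counts[idx] += 1   # each thread owns its own slot: no race

def run_at_qd(qd):
    fd = os.open(DEV, os.O_RDONLY)
    dev_size = os.lseek(fd, 0, os.SEEK_END)
    stop, counts = threading.Event(), [0] * qd
    threads = [threading.Thread(target=worker, args=(fd, dev_size, stop, counts, i))
               for i in range(qd)]
    for t in threads:
        t.start()
    time.sleep(SECONDS)
    stop.set()
    for t in threads:
        t.join()
    os.close(fd)
    return sum(counts) * BLOCK / SECONDS / 1e6   # MB/s

results = {qd: run_at_qd(qd) for qd in (1, 2, 4, 8, 16, 32)}
# Weight the overall score toward the low queue depths that dominate
# consumer workloads (these weights are arbitrary, for illustration).
weights = {1: 4, 2: 3, 4: 2, 8: 1, 16: 1, 32: 1}
score = sum(results[q] * w for q, w in weights.items()) / sum(weights.values())
print(results)
print(f"low-QD weighted score: {score:.0f} MB/s")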

[Charts: Sustained IO Performance - Random Read, Random Write, Sequential Read, and Sequential Write, each with throughput and power efficiency results]

There are no big changes to note for the Optane cache device on these longer-running tests that bring in some higher queue depths. The main advantages of Optane are at low queue depths, especially the QD1 already covered by the burst IO tests. The SLC portion of the Enmotus FuzeDrive does start to show some performance loss on the write tests when it is mostly full, illustrating that SLC NAND is not immune to the performance impact of background garbage collection and its time-consuming block erase operations.
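The mechanism behind that slowdown can be shown with a minimal garbage collection model. All geometry and timing numbers below are invented, but the shape of the result matches the write tests: the fuller the drive, the more valid data each GC cycle must relocate before it can erase a block.

# Minimal model of why writes slow down when a NAND drive is nearly full:
# garbage collection must relocate valid pages and erase blocks inline.
# All timings and geometry are invented for illustration.
import random

PAGES_PER_BLOCK = 256
PROGRAM_US, ERASE_US = 50, 2000   # page program vs. block erase cost

def write_cost_us(fill_fraction, writes=100_000):
    """Average per-write cost: with more of the drive full, each GC
    cycle finds more valid pages to copy before it can erase."""
    total_us = 0
    for _ in range(writes):
        total_us += PROGRAM_US                 # the host write itself
        # The chance that a write triggers a GC cycle grows sharply as
        # free blocks become scarce (simplified trigger model).
        if random.random() < fill_fraction ** 4:
            valid_pages = int(PAGES_PER_BLOCK * fill_fraction)
            total_us += valid_pages * PROGRAM_US + ERASE_US
    return total_us / writes

for fill in (0.25, 0.50, 0.90, 0.98):
    print(f"{fill:.0%} full: ~{write_cost_us(fill):.0f} us per write")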

[Charts: throughput vs. queue depth - Random Read, Random Write, Sequential Read, Sequential Write]

Digging deeper into the performance results, the Optane portion of the H20 shows the results we expect, reaching its full performance at fairly low queue depths and showing high consistency.

The SLC portion of the Enmotus FuzeDrive SSD shows many of the characteristics we're used to seeing from NAND-based SSDs, including the common Phison trait of poor sequential read performance at low queue depths.

Random Read Latency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
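As a sketch of the post-processing involved, QoS comes down to looking at tail percentiles of per-IO latency rather than just the mean. The percentile choices and the synthetic sample distribution below are ours, for illustration:

# Computing mean and tail (QoS) latency from per-IO samples. The
# latencies here are synthetic; a real run would record them from the
# random read test at each load level.
import random
import statistics

def qos_summary(latencies_us):
    """Mean plus 99th and 99.9th percentile latency in microseconds."""
    xs = sorted(latencies_us)
    pct = lambda p: xs[min(int(p * len(xs)), len(xs) - 1)]
    return {
        "mean": statistics.fmean(xs),
        "p99": pct(0.99),
        "p99.9": pct(0.999),
    }

# Synthetic example: mostly fast IOs with a rare slow tail, the shape
# that makes QoS diverge from average latency on a heavily loaded drive.
samples = [random.gauss(90, 10) for _ in range(99_000)] + \
          [random.gauss(2_000, 500) for _ in range(1_000)]
print(qos_summary(samples))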

The Optane cache on the H20 can't beat the maximum random read throughput available from a mainstream or high-end TLC SSD, but until the Optane hits its throughput limit it has unbeatable latency. The SLC portion of the Enmotus FuzeDrive SSD is not really any faster on this test than a good TLC drive.

Comments

  • Billy Tallis - Thursday, May 20, 2021

    It's a general property of caching that if your workload doesn't actually fit in the cache, then it will run at about the same speed as if that cache didn't exist. This is as true of storage caches as it is of a CPU's caches for RAM. Of course, defining whether your workload "fits" in a cache is a bit fuzzy, and depends on details of the workload's spatial and temporal locality, and the cache replacement policy.
  • scan80269 - Thursday, May 20, 2021

    That Intel Optane Memory H20 stick may be the source of the "coil whine". Don't be so sure the noise always comes from the main board. A colleague was bothered by a periodic high-pitched noise from her laptop until the installed Optane Memory H10 stick was replaced with a regular M.2 NAND SSD. The noise can come from a capacitor or inductor in the switching regulator circuit on the M.2 stick.
  • scan80269 - Thursday, May 20, 2021

    Oh, and the Intel Optane Memory H20 is spec'ed at PCIe 3.0 x4 for the M.2 interface. I have the same HP Spectre x360 15.6" laptop with a Tiger Lake CPU, and it happily runs an M.2 NVMe SSD at PCIe Gen4 speed, with a sequential read speed of over 6000 MB/s as measured by winsat disk. So it's the H20 not supporting PCIe Gen4 speed, rather than the HP laptop lacking support for that speed.
  • Billy Tallis - Thursday, May 20, 2021

    I tested the laptop with 10 different SSDs. The coil whine is not from the SSD.

    I tested the laptop with a PCIe gen4 SSD, and it did not operate at gen4 speed. I checked the lspci output in Linux and the host side of that link did not list 16 GT/s capability.

    Give me a little credit here, instead of accusing me of being wildly wrong about stuff that's trivially verifiable.
  • Polaris19832145 - Wednesday, September 22, 2021

    What about using an Intel 660p Series M.2 2280 2TB PCIe NVMe 3.0 x4 3D2 QLC SSD (SSDPEKNW020T8X1) as extra CPU L2 or even L3 cache, at 1-8TB going forward in a PCIe 4.0 slot, if Intel and AMD will allow it, to get rid of GPU and HDD bottlenecking in the PCH and CPU lanes on the motherboard? Is it even possible for the CPU to access that sort of additional cache by formatting the SSD for L2/L3 duty, to speed up an APU or a CPU using its iGPU, or even GPUs running in mGPU on AMD or SLI on Nvidia, if the second M.2 PCIe 4.0 slot could be modded to serve additional CPU cache needs?
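Billy Tallis's first comment above can be made concrete with a small LRU cache simulation (all sizes hypothetical): the hit rate, and with it any caching benefit, falls off once the working set outgrows the cache.

# Small LRU cache simulation illustrating the "fits in cache" effect
# described above: hit rate drops once the working set exceeds cache
# capacity. Sizes are arbitrary, for illustration only.
import random
from collections import OrderedDict

def lru_hit_rate(cache_blocks, working_set_blocks, accesses=200_000):
    cache = OrderedDict()
    hits = 0
    for _ in range(accesses):
        block = random.randrange(working_set_blocks)  # uniform accesses
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # refresh LRU position
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)     # evict least recently used
    return hits / accesses

CACHE = 8192  # e.g. a 32GB cache managed as 4MB blocks (hypothetical)
for ws in (4096, 8192, 16384, 65536):
    print(f"working set {ws:>6} blocks: {lru_hit_rate(CACHE, ws):.1%} hit rate")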
