Our suite of Linux-based synthetic tests cannot directly measure how the Intel Optane Memory H20 or Enmotus FuzeDrive SSD behave when used with their respective caching or tiering software, but we can investigate how the individual components of those storage systems perform in isolation. The tests on this page are all conducted on our usual AMD Ryzen testbed.

Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
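
For readers who want to reproduce the general shape of this workload, the short Python sketch below issues queue depth 1 random reads in fixed-size bursts separated by idle time. It is a rough illustration rather than our actual test tool: the device path, burst sizing, and idle interval are placeholder values, and O_DIRECT is used so the drive rather than the OS page cache is being measured.

    import mmap, os, random, time

    DEV = "/dev/nvme0n1"     # hypothetical test device; use a dedicated drive
    BLOCK = 4096             # 4kB transfer size for the random read test
    BURST_BYTES = 64 << 20   # up to 64MB per burst, as described above
    BURSTS = 32              # 32 bursts for the random read/write tests
    IDLE_S = 2.0             # illustrative idle time between bursts

    # O_DIRECT requires sector-aligned buffers; mmap allocations are page-aligned.
    buf = mmap.mmap(-1, BLOCK)
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    dev_bytes = os.lseek(fd, 0, os.SEEK_END)

    for burst in range(BURSTS):
        start = time.perf_counter()
        done = 0
        while done < BURST_BYTES:
            # Queue depth 1: issue one read, wait for completion, then the next.
            off = random.randrange(dev_bytes // BLOCK) * BLOCK
            os.preadv(fd, [buf], off)
            done += BLOCK
        mbps = done / (time.perf_counter() - start) / 1e6
        print(f"burst {burst}: {mbps:.1f} MB/s")
        time.sleep(IDLE_S)   # idle gap between bursts, as in the real test

    os.close(fd)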

[Charts: QD1 Burst IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

The Intel Optane cache devices still have a huge lead over other storage technologies for random read performance, but the most notable improvements the Optane Memory H20 makes over the H10 are for random and sequential writes, where the little Optane devices most drastically underperform NAND SSDs. The small 32GB cache is now almost as fast as the 118GB Optane 800P was, but that's still not enough to match the performance of a decent-sized NAND SSD.

The SLC portion of the Enmotus FuzeDrive SSD doesn't particularly stand out, since these burst IO tests are mostly operating out of the SLC caches even on the regular NAND SSDs. However, the static SLC does clearly retain full performance on these tests even when full, in stark contrast to most of the drives that rely on drive-managed SLC caching.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
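
As a rough sketch of how a queue depth sweep can be driven, the Python snippet below approximates queue depth N by running N synchronous reader threads in parallel. A real benchmark harness would use native asynchronous IO (io_uring or libaio) for precise queue depth control; the device path and per-step duration here are placeholders.

    import mmap, os, random, threading, time

    DEV = "/dev/nvme0n1"   # hypothetical test device
    BLOCK = 4096
    DURATION_S = 10.0      # illustrative per-queue-depth time limit

    def reader(fd, dev_bytes, stop, counts, idx):
        buf = mmap.mmap(-1, BLOCK)        # private aligned buffer per thread
        n = 0
        while not stop.is_set():
            off = random.randrange(dev_bytes // BLOCK) * BLOCK
            os.preadv(fd, [buf], off)     # pread releases the GIL, so IO overlaps
            n += 1
        counts[idx] = n

    for qd in (1, 2, 4, 8, 16, 32):
        fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
        dev_bytes = os.lseek(fd, 0, os.SEEK_END)
        stop = threading.Event()
        counts = [0] * qd
        threads = [threading.Thread(target=reader, args=(fd, dev_bytes, stop, counts, i))
                   for i in range(qd)]
        for t in threads:
            t.start()
        time.sleep(DURATION_S)
        stop.set()
        for t in threads:
            t.join()
        iops = sum(counts) / DURATION_S
        print(f"QD{qd}: {iops:,.0f} IOPS")
        os.close(fd)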

[Charts: Sustained IO Performance, throughput and power efficiency (Random Read, Random Write, Sequential Read, Sequential Write)]

There are no big changes to note for the Optane cache device on these longer-running tests that bring in some higher queue depths. The main advantages of Optane devices are at low queue depths, especially the QD1 that was already covered by the burst IO tests. The SLC portion of the Enmotus FuzeDrive does start to show some performance loss on the write tests when it is mostly full, illustrating that SLC NAND is not immune to the performance impact of background garbage collection and its time-consuming block erase operations.

[Charts: performance vs. queue depth (Random Read, Random Write, Sequential Read, Sequential Write)]

Digging deeper into the performance results, the Optane portion of the H20 shows the results we expect, reaching its full performance at fairly low queue depths and showing high consistency.

The SLC portion of the Enmotus FuzeDrive SSD shows many of the characteristics we're used to seeing from NAND-based SSDs, including the common Phison trait of poor sequential read performance at low queue depths.

Random Read Latency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
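
Because QoS lives in the tail of the latency distribution rather than the average, a latency test has to record individual IO completion times and report high percentiles. The Python sketch below shows the basic idea at queue depth 1; the device path and sample count are placeholders, not our suite's parameters.

    import mmap, os, random, time
    from statistics import mean, quantiles

    DEV = "/dev/nvme0n1"   # hypothetical test device
    BLOCK = 4096
    SAMPLES = 100_000      # illustrative sample count

    buf = mmap.mmap(-1, BLOCK)
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    dev_bytes = os.lseek(fd, 0, os.SEEK_END)

    lat_us = []
    for _ in range(SAMPLES):
        off = random.randrange(dev_bytes // BLOCK) * BLOCK
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)
        lat_us.append((time.perf_counter() - t0) * 1e6)
    os.close(fd)

    # QoS is about the tail of the distribution, not the average.
    p = quantiles(lat_us, n=100)   # p[98] is the 99th percentile
    print(f"mean {mean(lat_us):.1f} us, p99 {p[98]:.1f} us, max {max(lat_us):.1f} us")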

The Optane cache on the H20 can't beat the maximum random read throughput available from a mainstream or high-end TLC SSD, but until the Optane hits its throughput limit it has unbeatable latency. The SLC portion of the Enmotus FuzeDrive SSD isn't really any faster on this test than a good TLC drive.

Comments

  • haukionkannel - Wednesday, May 19, 2021 - link

    Most likely PCIe 5.0 or 6.0 in reality… and a bigger Optane part. Much bigger!
  • tuxRoller - Friday, May 21, 2021 - link

    You made me curious regarding the history of HSM (hierarchical storage management).
    The earliest one seems to be the IBM 3850 in the 70s.
    So. Yeah. It's not exactly new tech :-|
  • Monstieur - Tuesday, May 18, 2021 - link

    VMD changes the PID & VID so the NVMe drive will not be detected with generic drivers. This is the same behavior on X299, but those boards let you enable / disable VMD per PCIe slot. There is yet another feature called "CPU Attached RAID" which lets you use RST RAID or Optane Memory acceleration with non-VMD drives attached to the CPU lanes and not chipset lanes.
  • Monstieur - Tuesday, May 18, 2021 - link

    500 Series:
    VMD (CPU) > RST VMD driver / RST Optane Memory Acceleration with H10 / H20
    Non-VMD (CPU) > Generic driver
    CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    AHCI (PCH) > Generic driver

    X299:
    VMD (CPU) > VROC VMD driver / VROC RAID
    Non-VMD (CPU) > Generic driver
    CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    AHCI (PCH) > Generic driver
  • dwillmore - Tuesday, May 18, 2021 - link

    This really looks like a piece of hardware to avoid unless you run Windows on the most recent generation of Intel hardware. So, that's a double "nope" from me. Thanks for the warning!
  • Billy Tallis - Tuesday, May 18, 2021 - link

    VMD has been an important feature of Intel server platforms for years. As a result, Linux has supported VMD for years. You may not be able to do a clean install of Windows onto this Tiger Lake laptop without loading extra drivers, but Linux has no problem.

    I had a multi-boot setup on a drive that was in the Whiskey Lake laptop. When I moved it over to the Tiger Lake laptop, grub tried to load its config from the wrong partition. But once I got past that issue, Linux booted with no trouble. Windows could only boot into its recovery environment. From there, I had to put RST drivers on a USB drive, load them in the recovery environment so it could detect the NVMe drive, then install them into the Windows image on the NVMe drive so it could boot on its own.
  • dsplover - Tuesday, May 18, 2021 - link

    Great read, thanks. Love the combinations benefits being explained so well.
  • CaptainChaos - Tuesday, May 18, 2021 - link

    The phrase "putting lipstick on a pig" comes to mind for Intel here!
  • Tomatotech - Wednesday, May 19, 2021 - link

    Other way round. Optane is stunning but Intel has persistently shot it in the foot for almost all their non-server releases.

    In Intel’s defence, getting it right requires full-stack cooperation between Intel, Microsoft, and motherboard makers. You’d think they should be able to do it, given that cooperation is the basis of their existence, but in Optane’s case it hasn’t been achievable.

    Only Apple seems to be achieving this full stack integration with their M1 chip & unified memory & their OS, and it took them a long time to get to this point.
  • CaptainChaos - Wednesday, May 19, 2021 - link

    Yes... I meant that Optane is the lipstick & QLC is the pig, Tomatotech dude! I use several Optane drives but see no advantage at this point for QLC! It's just not priced properly to provide a tempting alternative to TLC.
