Our suite of Linux-based synthetic tests cannot directly measure how the Intel Optane Memory H20 or Enmotus FuzeDrive SSD behave when used with their respective caching or tiering software, but we can investigate how the individual components of those storage systems perform in isolation. The tests on this page are all conducted on our usual AMD Ryzen testbed.

Advanced Synthetic Tests

Our benchmark suite includes a variety of tests that are less about replicating any real-world IO patterns, and more about exposing the inner workings of a drive with narrowly-focused tests. Many of these tests will show exaggerated differences between drives, and for the most part that should not be taken as a sign that one drive will be drastically faster for real-world usage. These tests are about satisfying curiosity, and are not good measures of overall drive performance. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Whole-Drive Fill

[Charts: Whole-Drive Fill, Pass 1 and Pass 2]

As expected, we see fairly steady write performance from the Intel Optane devices, including on the second write pass—but the total throughput is still quite low compared to NAND SSDs. The SLC portion of the Enmotus FuzeDrive SSD has similar performance consistency, but competitive throughput. The QLC portion of that drive does have the typical drive-managed SLC cache that starts out with a capacity of about 333 GB when this section of the drive is empty, and shrinks down to about 19 GB for the second pass when the drive is full.
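
For readers who want to reproduce the general shape of this test, the sketch below shows the basic idea of a whole-drive fill under Linux: sequential 1 MiB direct writes from the start of the block device to the end, with throughput reported per gigabyte so the SLC cache exhaustion point shows up as a drop. The device path, block size, and reporting interval are illustrative assumptions rather than our actual test configuration (our real testing is fio-based), and running it will destroy all data on the target device.

    import mmap, os, time

    DEV = "/dev/nvme0n1"      # hypothetical target; all data on it will be overwritten
    BLOCK = 1 << 20           # 1 MiB sequential writes
    SEGMENT = 1 << 30         # report throughput once per GiB written

    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)   # bypass the page cache
    buf = mmap.mmap(-1, BLOCK)                     # page-aligned buffer, required by O_DIRECT
    buf.write(os.urandom(BLOCK))                   # incompressible data

    written = 0
    t0 = time.monotonic()
    try:
        while True:
            os.write(fd, buf)                      # file offset advances, so writes stay sequential
            written += BLOCK
            if written % SEGMENT == 0:
                now = time.monotonic()
                print(f"{written >> 30:5d} GiB: {SEGMENT / (now - t0) / 1e6:7.0f} MB/s")
                t0 = now
    except OSError:            # ENOSPC once the end of the device is reached
        pass
    finally:
        os.close(fd)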

[Charts: Sustained 128kB Sequential Write Power Efficiency; Average Throughput for Last 16 GB; Overall Average Throughput]

The SLC portion of the Enmotus FuzeDrive SSD is naturally far faster for the overall fill process than the drives that run out of SLC and slow down partway through. The Optane devices aren't quite in a "slow and steady wins the race" situation against the traditional NAND SSDs, but the Optane Memory H20's cache device is at least faster than the post-cache performance of the QLC drives and the DRAMless Samsung 980.

Working Set Size

There's a bit of variability in the random read latency of the Optane cache on the Optane Memory H20, but it's so much faster than the NAND devices that a bit of inconsistency hardly matters. We're just seeing noise that only shows up at this scale and is insignificant at NAND speeds. The SLC slice of the FuzeDrive SSD is faster than any of the other NAND drives, but it's a narrow lead. These fast devices naturally do not show any of the performance drop-off that comes from having insufficient DRAM: the Optane caches don't need it in the first place, and the SLC portion of the FuzeDrive SSD is small enough to be managed with a fraction of the drive's DRAM.
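
The working set size test boils down to 4kB random reads confined to a region that grows from a small slice of the drive toward its full capacity; drives with too little DRAM for their mapping tables slow down once the region outgrows what they can keep cached. A minimal illustration of that idea is below; the device path is a placeholder, the working set sizes assume a sufficiently large drive, and the IO count is kept small for brevity rather than statistical rigor.

    import mmap, os, random, time

    DEV = "/dev/nvme0n1"            # hypothetical device path
    BS = 4096                       # 4 kB random reads
    IOS = 20000                     # reads per working-set size (small, for illustration)

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BS)         # page-aligned buffer for O_DIRECT

    for ws_gib in (1, 4, 16, 64, 256):
        span = ws_gib << 30         # confine reads to the first ws_gib GiB of the drive
        t0 = time.monotonic()
        for _ in range(IOS):
            off = random.randrange(span // BS) * BS    # random 4 kB-aligned offset
            os.preadv(fd, [buf], off)
        dt = time.monotonic() - t0
        print(f"{ws_gib:4d} GiB working set: {dt / IOS * 1e6:6.1f} us mean read latency")

    os.close(fd)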

Performance vs Block Size

[Charts: Performance vs Block Size for Random Read, Random Write, Sequential Read, and Sequential Write]

The Optane cache on the H20 behaves almost exactly like previous Optane Memory devices, except that it has acquired a strong dislike for sequential writes one 512B sector at a time. We often see sub-4kB sequential writes performing badly on NAND devices because the flash translation layer operates with 4kB granularity, so perhaps Intel has switched something in this Optane module to operate with 1kB granularity and it now needs to perform a read-modify-write cycle to handle this case. (The lack of a similar performance drop for random writes is a bit of a puzzle.)
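
To see this kind of penalty for yourself, a block-size sweep of sequential writes like the sketch below is enough. The device path and per-size transfer total are placeholder assumptions, the drive must expose 512B logical sectors for the smallest sizes to be accepted with O_DIRECT, and all data on the target is overwritten. On a drive whose internal mapping granularity is larger than the write size, the sub-granularity sizes should show a clear throughput dip from the extra read-modify-write work.

    import mmap, os, time

    DEV = "/dev/nvme0n1"              # hypothetical target; assumes 512B logical sectors
    TOTAL = 256 << 20                 # write 256 MiB at each block size

    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    base = mmap.mmap(-1, 64 << 10)    # one page-aligned buffer, sliced per block size
    base.write(os.urandom(64 << 10))

    for bs in (512, 1024, 2048, 4096, 8192, 65536):
        chunk = memoryview(base)[:bs] # aligned start address, sector-multiple length
        off = 0
        t0 = time.monotonic()
        while off < TOTAL:
            os.pwritev(fd, [chunk], off)   # strictly sequential offsets, one IO at a time (QD1)
            off += bs
        mbps = TOTAL / (time.monotonic() - t0) / 1e6
        print(f"{bs:6d} B sequential writes: {mbps:7.0f} MB/s")

    os.close(fd)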

The SLC portion of the Enmotus FuzeDrive SSD shows similar overall behavior to other Phison-based NAND drives, albeit with generally higher performance.

Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

[Charts: Mixed Random IO Throughput and Power Efficiency; Mixed Sequential IO Throughput and Power Efficiency]

The Intel Optane devices handle the mixed random IO test better than any of the NAND drives, and the Optane Memory H20's speed improvements over the H10 bring it up to the level of the larger 118GB Optane SSD 800P. The SLC portion of the Enmotus FuzeDrive SSD performs comparably to a decent 1TB TLC drive with SLC cache.

On the mixed sequential IO test, the Optane devices are far slower than the NAND devices, and the FuzeDrive's SLC portion has no real advantage either. A fast controller and lots of NAND is the best recipe for high performance on this test.

[Charts: Performance vs Read/Write Mix for Mixed Random IO and Mixed Sequential IO]

The Optane devices show completely different performance trends from the NAND devices on the mixed random IO test: the Optane drives speed up significantly as the workload gets more write-heavy, while the NAND drives have flat or declining performance. On the mixed sequential IO test, the Optane behavior is a bit more normal, albeit with very low-end performance.
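
The curves behind this discussion come from sweeping the read/write ratio from pure reads to pure writes while holding everything else constant. A stripped-down version of that sweep for 4kB random IO might look like the sketch below; the device path, region size, and IO count are assumptions chosen for illustration, not our actual test parameters, and the write phases will overwrite data on the target.

    import mmap, os, random, time

    DEV = "/dev/nvme0n1"                  # hypothetical device path
    BS = 4096
    SPAN = 32 << 30                       # confine IOs to a 32 GiB region
    IOS = 20000                           # IOs per mix ratio (small, for illustration)

    fd = os.open(DEV, os.O_RDWR | os.O_DIRECT)
    buf = mmap.mmap(-1, BS)               # page-aligned buffer reused for reads and writes
    buf.write(os.urandom(BS))

    for read_pct in range(100, -1, -10):  # 100% reads down to 100% writes in 10% steps
        t0 = time.monotonic()
        for _ in range(IOS):
            off = random.randrange(SPAN // BS) * BS
            if random.randrange(100) < read_pct:
                os.preadv(fd, [buf], off)     # 4 kB random read
            else:
                os.pwritev(fd, [buf], off)    # 4 kB random write
        iops = IOS / (time.monotonic() - t0)
        print(f"{read_pct:3d}% reads: {iops:10,.0f} IOPS")

    os.close(fd)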

Comments

  • haukionkannel - Wednesday, May 19, 2021 - link

    Most likely PCIe 5.0 or 6.0 in reality… and a bigger Optane part. Much bigger!
  • tuxRoller - Friday, May 21, 2021 - link

    You made me curious regarding the history of HSM.
    The earliest one seems to be the IBM 3850 in the 70s.
    So. Yeah. It's not exactly new tech :-|
  • Monstieur - Tuesday, May 18, 2021 - link

    VMD changes the PID & VID so the NVMe drive will not be detected with generic drivers. This is the same behavior on X299, but those boards let you enable / disable VMD per PCIe slot. There is yet another feature called "CPU Attached RAID" which lets you use RST RAID or Optane Memory acceleration with non-VMD drives attached to the CPU lanes and not chipset lanes.
  • Monstieur - Tuesday, May 18, 2021 - link

    500 Series:
    VMD (CPU) > RST VMD driver / RST Optane Memory Acceleration with H10 / H20
    Non-VMD (CPU) > Generic driver
    CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    AHCI (PCH) > Generic driver

    X299:
    VMD (CPU) > VROC VMD driver / VROC RAID
    Non-VMD (CPU) > Generic driver
    CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
    AHCI (PCH) > Generic driver
  • dwillmore - Tuesday, May 18, 2021 - link

    This really looks like a piece of hardware to avoid unless you run Windows on the most recent generation of Intel hardware. So, that's a double "nope" from me. Thanks for the warning!
  • Billy Tallis - Tuesday, May 18, 2021 - link

    VMD has been an important feature of Intel server platforms for years. As a result, Linux has supported VMD for years. You may not be able to do a clean install of Windows onto this Tiger Lake laptop without loading extra drivers, but Linux has no problem.

    I had a multi-boot setup on a drive that was in the Whiskey Lake laptop. When I moved it over to the Tiger Lake laptop, grub tried to load its config from the wrong partition. But once I got past that issue, Linux booted with no trouble. Windows could only boot into its recovery environment. From there, I had to put RST drivers on a USB drive, load them in the recovery environment so it could detect the NVMe drive, then install them into the Windows image on the NVMe drive so it could boot on its own.
  • dsplover - Tuesday, May 18, 2021 - link

    Great read, thanks. Love the combinations benefits being explained so well.
  • CaptainChaos - Tuesday, May 18, 2021 - link

    The phrase "putting lipstick on a pig" comes to mind for Intel here!
  • Tomatotech - Wednesday, May 19, 2021 - link

    Other way round. Optane is stunning but Intel has persistently shot it in the foot for almost all their non-server releases.

    In Intel's defence, getting it right requires full-stack cooperation between Intel, Microsoft, and motherboard makers. You'd think they should be able to manage it, given that cooperation is the basis of their existence, but in Optane's case it hasn't been achievable.

    Only Apple seems to be achieving this full stack integration with their M1 chip & unified memory & their OS, and it took them a long time to get to this point.
  • CaptainChaos - Wednesday, May 19, 2021 - link

    Yes... I meant that Optane is the lipstick & QLC is the pig, Tomatotech dude! I use several Optane drives but see no advantage at this point for QLC! It's just not priced properly to provide a tempting alternative to TLC.
