Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads, from a 16GB span of the disk. The total data read is 1GB.
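The sketch below is a minimal Python approximation of this access pattern, not the harness used for the results here; the device path is a placeholder, and it uses ordinary buffered reads where a real benchmark would open the target with O_DIRECT to bypass the page cache:

    import os, random, time

    PATH       = "/dev/nvme0n1"        # placeholder target
    BLOCK      = 4 * 1024              # 4kB reads
    SPAN       = 16 * 1024**3          # 16GB span
    BURST      = 32 * 1024**2          # 32MB per burst
    TOTAL      = 1 * 1024**3           # 1GB read in total
    DUTY_CYCLE = 0.20

    fd = os.open(PATH, os.O_RDONLY)    # real tools would add O_DIRECT here
    latencies = []
    for _ in range(TOTAL // BURST):
        t0 = time.perf_counter()
        for _ in range(BURST // BLOCK):
            t1 = time.perf_counter()
            os.pread(fd, BLOCK, random.randrange(0, SPAN // BLOCK) * BLOCK)
            latencies.append(time.perf_counter() - t1)
        busy = time.perf_counter() - t0
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)   # idle for a ~20% duty cycle
    os.close(fd)
    print(f"mean QD1 read latency: {sum(latencies) / len(latencies) * 1e6:.1f} µs")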

Burst 4kB Random Read (Queue Depth 1)

The burst random read test easily fits within the Optane cache on the Optane Memory H10, so it outperforms all of the flash-based SSDs, but is substantially slower than the pure Optane storage devices.

Our sustained random read performance test is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
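A rough outline of that sweep structure in Python follows; it is only a sketch rather than the actual test tooling. Higher queue depths are approximated with parallel QD1 worker threads instead of true asynchronous I/O, and the target path and buffered reads are placeholder assumptions:

    import os, random, threading, time

    PATH     = "/dev/nvme0n1"         # placeholder target
    BLOCK    = 4 * 1024               # 4kB reads
    SPAN     = 64 * 1024**3           # 64GB span
    TIME_CAP = 60.0                   # at most one minute per queue depth
    DATA_CAP = 32 * 1024**3           # or 32GB transferred, whichever comes first

    def worker(fd, deadline, counter, lock):
        # One QD1 stream; the sweep launches `qd` of these in parallel.
        while time.perf_counter() < deadline:
            with lock:
                if counter[0] >= DATA_CAP:
                    return
                counter[0] += BLOCK
            os.pread(fd, BLOCK, random.randrange(0, SPAN // BLOCK) * BLOCK)

    for qd in (1, 2, 4, 8, 16, 32):
        fd = os.open(PATH, os.O_RDONLY)
        counter, lock = [0], threading.Lock()
        start = time.perf_counter()
        threads = [threading.Thread(target=worker, args=(fd, start + TIME_CAP, counter, lock))
                   for _ in range(qd)]
        for t in threads: t.start()
        for t in threads: t.join()
        elapsed = time.perf_counter() - start
        os.close(fd)
        print(f"QD{qd}: {counter[0] / elapsed / 1024**2:.0f} MB/s read")
        time.sleep(60)                # cool-off idle before the next queue depth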

Sustained 4kB Random Read

On the longer random read test that covers a wider span of the disk than the Optane cache can manage, the H10's performance is on par with the TLC-based SSDs.

The Optane cache provides little benefit over pure QLC storage at lower queue depths, but at the higher queue depths the H10 with caching enabled starts to develop a real lead over the QLC portion on its own. Unfortunately, by the time queue depths are this high, the flash-based SSDs have all surpassed the H10's random read throughput.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
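The same kind of sketch applies to the write burst pattern; the scratch-file path is a placeholder, the writes here are buffered rather than O_DIRECT, and the idle ratio between bursts is assumed to match the read burst test:

    import os, random, time

    PATH  = "/mnt/scratch/testfile"    # placeholder scratch file, >= 16GB
    BLOCK = 4 * 1024                   # 4kB writes
    SPAN  = 16 * 1024**3               # 16GB span
    BURST = 4 * 1024**2                # 4MB per burst
    TOTAL = 128 * 1024**2              # 128MB written in total
    buf   = os.urandom(BLOCK)

    fd = os.open(PATH, os.O_WRONLY)    # real tools would add O_DIRECT here
    for _ in range(TOTAL // BURST):
        t0 = time.perf_counter()
        for _ in range(BURST // BLOCK):
            os.pwrite(fd, buf, random.randrange(0, SPAN // BLOCK) * BLOCK)
        busy = time.perf_counter() - t0
        time.sleep(busy * 4)           # idle between bursts, as in the read burst test
    os.close(fd)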

Burst 4kB Random Write (Queue Depth 1)

The burst random write performance of the H10 with caching enabled is better than either half of the drive can manage on its own, but far less than the sum of its parts. A good SLC write cache on a TLC drive is still better than the Optane caching on top of QLC.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.
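Because the structure mirrors the read sweep above, the sketch below shows only what changes on the write side (assumed details, not the exact methodology): pwrite over a 64GB span, followed by an fsync and an idle period after each queue depth so write caches can flush and the drive can cool down:

    import os, random, time

    PATH  = "/mnt/scratch/testfile"    # placeholder scratch file, >= 64GB
    BLOCK = 4 * 1024                   # 4kB writes
    SPAN  = 64 * 1024**3               # 64GB span
    buf   = os.urandom(BLOCK)

    def qd1_writer(fd, deadline):
        # One QD1 write stream; run `qd` of these in parallel threads as in the read sweep.
        while time.perf_counter() < deadline:
            os.pwrite(fd, buf, random.randrange(0, SPAN // BLOCK) * BLOCK)

    fd = os.open(PATH, os.O_WRONLY)
    qd1_writer(fd, time.perf_counter() + 60)   # up to one minute (a 32GB data cap also applies)
    os.fsync(fd)                               # flush write caches before idling
    os.close(fd)
    time.sleep(60)                             # idle so the drive can cool down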

Sustained 4kB Random Write

On the longer random write test that covers a much wider span than the Optane cache can handle, the Optane Memory H10 falls behind all of the flash-based competition. The caching software ends up creating more work that drags performance down far below what the QLC portion can manage with just its SLC cache.

Random write performance on the Optane Memory H10 is unsteady but generally trending downward as the test progresses. Two layers of caching getting in each other's way is not a good recipe for consistent sustained performance.

Comments

  • yankeeDDL - Monday, April 22, 2019 - link

    Is it me or, generally speaking, it is noticeably slower than the 970 Evo?
  • DanNeely - Monday, April 22, 2019 - link

    The 970 can make use of 4 lanes; with only 2 effective lanes in most scenarios, any good x4 drive is going to be able to smoke the H10.
  • yankeeDDL - Monday, April 22, 2019 - link

    I still remember that Optane should be 1000x faster and 1000x cheaper. It seems that it is faster, albeit by a much lower factor ... then why hamper it with a slower bus? I mean, I came to read the review thinking that it could be a nice upgrade, and then I see it beaten handily by the 970 Evo. What's the point of such a device? It is clearly more complex, so I doubt it'll be cheaper than the 970 Evo...
  • Alexvrb - Monday, April 22, 2019 - link

    Wait, did they say it would be cheaper? I don't remember that. I know they thought it would be a lot faster than it is... to be fair they seemed to be making projections like NAND based solutions wouldn't speed up at all in years LOL.

    It can be a lot faster in certain configs (the high end PCIe add-on cards, for example) but it's insanely expensive. Even then it's mainly faster for low QDs...
  • kgardas - Tuesday, April 23, 2019 - link

    Yes, but just in comparison with DRAM prices. E.g. NVDIMM of big size cheaper than DIMM of big size.
  • Irata - Tuesday, April 23, 2019 - link

    It was supposed to be 1000x faster and have 1000x the endurance of NAND as per Intel's official 2016 slides.

    It may be slightly off on those promises - would have loved for the article to include the slide with Intel's original claims.

    Price wasn't mentioned.
  • yankeeDDL - Tuesday, April 23, 2019 - link

    You're right. They said 1000x faster, 1000x endurance and 10x denser, but they did not say cheaper, although the 10x denser somewhat implies it (https://www.micron.com/~/media/documents/products/...). Still, this drive is not faster, nor does it have significantly higher endurance. Let's see if it is any cheaper.
  • Valantar - Tuesday, April 23, 2019 - link

    Denser than DRAM, not NAND. Speed claims are against NAND, price/density claims against DRAM - where they might not be 1/10th the price, but definitely cheaper. The entire argument for 3D Xpoint is "faster than NAND, cheaper than DRAM (while persistent and closer to the former than the latter in capacity)", after all.
  • CheapSushi - Wednesday, April 24, 2019 - link

    I think this is why there's still negative impressions around 3D Xpoint. Too many people still don't understand it or confuse the information given.
  • cb88 - Friday, May 17, 2019 - link

    Optane itself is *vastly* faster than this... on an NVDIMM it rivals DDR4 with latencies in hundreds of ns instead of micro or milliseconds. And bandwidth basically on par with DDR4.

    I think it's some marketing BS that they don't use 4x PCIe on their M.2 cards .... perhaps trying to avoid server guys buying them up cheap and putting them on quad M.2 to PCIe adapters.
