Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
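In practical terms, a 20% duty cycle means each burst is followed by four times as much idle time as the burst itself took to complete. A minimal Python sketch of that bookkeeping (the throughput figure is a placeholder, not a measured result):

# Hypothetical sketch of the idle-time calculation for the 20% duty cycle
# described above. The throughput value is a placeholder, not a measurement.
BURST_MB = 128          # data read per burst
DUTY_CYCLE = 0.20       # fraction of wall-clock time the drive is busy

def idle_time_after_burst(throughput_mb_s: float) -> float:
    """Return seconds of idle time to insert after one burst."""
    busy = BURST_MB / throughput_mb_s      # seconds spent transferring 128MB
    total = busy / DUTY_CYCLE              # wall-clock time needed for a 20% duty cycle
    return total - busy                    # the remainder is idle time

# Example: a drive that reads the burst at 1000 MB/s is busy for 0.128 s,
# so it gets four times that (0.512 s) of idle time before the next burst.
print(idle_time_after_burst(1000.0))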

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read results are bizarre, with the 32GB caching configuration coming in second only to the Optane SSD 900P while the large Optane M.2 modules perform much worse as cache devices than as standalone drives. The caching performance from the 64GB Optane Memory M10 is especially disappointing, with less than a third of the performance that the drive delivers as a standalone device. Some SSD caching software attempts to have sequential I/O bypass the cache to leave the SSD ready to handle random I/O, but this test is not a situation where such a strategy would make sense. Without more documentation from Intel about their proprietary caching algorithms and with no way to query the Optane Memory drivers about the cache status, it's hard to figure out what's going on here. Aside from the one particularly bad result from the M10 as a cache, all of the Optane configurations at least score far above the SATA SSD.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
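To make the scoring concrete, the headline number is simply the mean of the three lowest queue depths, even though the test runs all the way up to QD32. A short Python sketch with placeholder throughput figures:

# Hypothetical sketch of the scoring described above: the reported performance
# number is the average of the QD1, QD2 and QD4 results, even though queue
# depths up to 32 are tested. The MB/s figures below are placeholders.
results_mb_s = {1: 900.0, 2: 1200.0, 4: 1400.0, 8: 1450.0, 16: 1460.0, 32: 1465.0}

score = sum(results_mb_s[qd] for qd in (1, 2, 4)) / 3
print(f"Reported sustained sequential read score: {score:.0f} MB/s")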

Sustained 128kB Sequential Read

The sustained sequential read test results make more sense. The 32GB cache configuration isn't anywhere near large enough for this test's 64GB dataset, but the larger Optane M.2 modules offer good performance as standalone drives or as cache devices. The 64GB Optane Memory M10 scores worse as a cache drive, which is to be expected since the test's dataset doesn't quite fit in the cache.

Using a 118GB Optane M.2 module as a cache seems to help with sequential reads at QD1, likely due to some prefetching in the caching software. The 64GB cache handles the sustained sequential read workload better than either of the sustained random I/O tests, but it is still slower than the SSD alone at low queue depths. Performance from the 32GB cache is inconsistent but usually still substantially better than the hard drive alone.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

As with the random write tests, the cache configurations show higher burst sequential write performance than the Optane M.2 modules deliver as standalone SSDs. This points to driver improvements that may include mild cheating through the use of a RAM cache, but the performance gap is small enough that there doesn't appear to be much, if any, data put at risk. The 64GB and 118GB caches have similar performance with the 64GB slightly ahead, but the 32GB cache barely keeps up with a SATA SSD.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
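As a rough illustration of the workload's shape, a sweep like this could be scripted around fio, a widely used I/O benchmark. The sketch below is illustrative only; the device path, the runtimes, and the choice of fio itself are assumptions rather than a description of our actual test harness.

# Illustrative only: a queue depth sweep shaped like the test described above,
# scripted around the fio benchmark. The device path and the use of fio itself
# are assumptions, not details taken from this article.
import subprocess
import time

DEVICE = "/dev/nvme0n1"   # hypothetical target device

for qd in (1, 2, 4, 8, 16, 32):
    subprocess.run([
        "fio", "--name=seqwrite",
        f"--filename={DEVICE}",
        "--rw=write", "--bs=128k",   # 128kB sequential writes
        "--ioengine=libaio", "--direct=1",
        f"--iodepth={qd}",
        "--size=64g",                # confine the test to a 64GB span
        "--io_size=32g",             # stop after 32GB written...
        "--runtime=60",              # ...or after one minute, whichever comes first
    ], check=True)
    time.sleep(60)                   # idle time to cool off and allow garbage collection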

Sustained 128kB Sequential Write

The rankings on the sustained sequential write test are quite similar, but this time the 118GB Optane SSD 800P has the lead over the 64GB Optane Memory M10. The performance advantage of the caching configurations over the standalone drives is smaller than on the burst sequential write test, because this test writes far more data than could be cached in RAM.

Aside from some differences at QD1, the Optane M.2 modules offer basically the same performance when used as caches or as standalone drives. Since this test writes no more than 32GB at a time without a break and all of the caches tested are that size or larger, the caching software can always stream all of the writes to just the Optane module without having to stop and flush dirty data to the slower hard drive. If this test were lengthened to write more than 32GB at a time or if it were run on the 16GB Optane Memory, performance would plummet partway through each phase of the test.
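A back-of-the-envelope model with made-up throughput numbers illustrates why: once a burst outgrows the cache, the overflow has to go to the hard drive and the average write speed collapses.

# Made-up throughput numbers, for illustration only: writes that fit in the
# Optane cache proceed at cache speed, while anything beyond its capacity
# spills to the hard drive and drags the average down.
def effective_write_mb_s(burst_gb, cache_gb, cache_mb_s=1400.0, hdd_mb_s=150.0):
    cached_gb = min(burst_gb, cache_gb)
    spilled_gb = max(0.0, burst_gb - cache_gb)
    seconds = cached_gb * 1024 / cache_mb_s + spilled_gb * 1024 / hdd_mb_s
    return burst_gb * 1024 / seconds

print(effective_write_mb_s(32, 32))   # 32GB burst into a 32GB cache: full cache speed
print(effective_write_mb_s(32, 16))   # same burst into a 16GB cache: roughly 270 MB/s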

Comments

  • Samus - Wednesday, May 16, 2018 - link

    For $160-$170 (<$150 on sale, basically the price of 64GB of Optane) you can get the WD Black 512GB M.2 NVMe PCIe SSD that does 2000MB+/sec read for all 512GB.

    Why the hell is Optane so expensive? 5-7x the price of traditional NAND?
  • Arnulf - Wednesday, May 16, 2018 - link

    Because it is crap which nobody would buy if it was priced close to SSDs of similar performance and capacity:

    "It costs 5-7 times more than SSDs, must be something magical about it, let's buy one honey!"

    Much like $1000 mobile phones, bait for the stupid.
  • CheapSushi - Wednesday, May 16, 2018 - link

    Because it uses phase change instead of NAND and it's new tech. They're trying to recoup R&D cost.
  • FunBunny2 - Wednesday, May 16, 2018 - link

    "hey're trying to recoup R&D cost. "

    PCM is decades-old tech. Look it up. Throwing good money after bad, just like pharma.
  • deil - Wednesday, May 16, 2018 - link

    I have an 8 TB drive AND I would enjoy some speedup, as a full run currently takes ~5h. With that 32 GB joke of a drive, even if it wouldn't double the speed, a 20% speedup is a lot in my case. AND I don't have to redesign anything to use another drive or build an 8 TB SSD RAID.
  • Spunjji - Wednesday, May 16, 2018 - link

    On what basis do you think you'll achieve any speed-up, though?
  • tipoo - Wednesday, May 16, 2018 - link

    Yeah, I can't see why 5x the NAND for the cost wouldn't almost always be preferable for budget systems.

    I can only see this making sense for datacenter use.
  • 0ldman79 - Thursday, May 17, 2018 - link

    Primocache does the same thing.

    I've got an 80gig in my desktop, a 60 in an Asus laptop that has two 2.5 bays and a 16gig M.2 in my Inspiron 7559.

    I don't use RAM as a buffer, just the SSD. Works great, unless you have an unstable system. Any time you lose power or don't shut down cleanly the cache resets. With the cache, however, my main box boots in about 20-30 seconds, all apps loaded, whereas with just the mechanical drive a reboot is nearly a 4-minute affair.
  • lefty2 - Tuesday, May 15, 2018 - link

    Ironically, these drives work better with AMD motherboards than Intel:
    https://fudzilla.com/news/pc-hardware/46145-amd-st...
  • CajunArson - Tuesday, May 15, 2018 - link

    Where does Idiot-Zilla prove that Optane works "better" with AMD motherboards than Intel?

    But for a site that starts with "Fud" I will give them credit for dispelling the completely wrong "FUD" that is actually spread by AMD fanboys that Optane is a proprietary technology that only works with Intel products. Never has been proprietary.
