AnandTech Storage Bench - Light

Our Light storage test has relatively more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is a test more of application launch times and file load times. This test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run with the drive both freshly erased and empty, and after filling the drive with sequential writes.
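The idle-trimming described above can be sketched as a simple trace-replay loop. This is a hypothetical illustration only (the actual ATSB replay harness is not public); `trace` and `do_io` are assumed names:

```python
import time

MAX_IDLE = 0.025  # idle gaps in the trace are capped at 25 ms

def replay(trace, do_io):
    """Replay (timestamp, operation) pairs in order, sleeping through
    the recorded idle gaps but never for longer than MAX_IDLE."""
    prev_ts = None
    for ts, op in trace:
        if prev_ts is not None:
            time.sleep(min(ts - prev_ts, MAX_IDLE))  # trim the idle time
        do_io(op)
        prev_ts = ts
```

Trimming turns hours of recorded idle time into milliseconds while keeping the ordering and burstiness of the original I/O, which is why the whole test finishes in under half an hour.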

ATSB - Light (Data Rate)

The Light test shows much greater differences between full-drive and empty-drive performance, both for flash SSDs and for the rather variable 280GB Optane SSD 900p. The 480GB model shows less variation in its average data rate between the full and empty runs. Overall, the Optane SSDs outperform a full flash-based SSD but are unimpressive next to a fresh-out-of-the-box flash-based SSD.

ATSB - Light (Average Latency)
ATSB - Light (99th Percentile Latency)

Aside from the different behavior when full versus empty, the average and 99th percentile latency scores of the Optane SSDs are not too interesting. The best-case performance is not quite as fast as the best from a flash-based SSD, but once the flash drive is slowed down by being full, the Optane SSD shows a meaningful latency advantage.

ATSB - Light (Average Read Latency)
ATSB - Light (Average Write Latency)

The average read latency of the Optane SSDs on the Light test is not hurt by filling the drive, giving it much better latency in the worst case scenario than any flash-based SSD. When the Light test is run on freshly-erased drives, the Optane SSD's average read latency is about the same as the best flash-based drives. Neither Optane SSD sets a record for average write latency, and Samsung's fastest NVMe drives have a clear advantage.

ATSB - Light (99th Percentile Read Latency)
ATSB - Light (99th Percentile Write Latency)

As with the average read latency, the 99th percentile read latency of the Optane SSDs on the Light test only impresses when compared to the performance of flash-based SSDs in unfavorable conditions like being completely full. Otherwise, the Samsung PM981 performs just as well, and the 960 PRO isn't far behind. The 99th percentile write latency of the Optane SSDs is clearly worse than Samsung's top NVMe SSDs.

ATSB - Light (Power)

The Optane SSD 900p again draws much more power than most NVMe SSDs, and the larger model draws the most of all: three times as much as the most efficient NVMe SSD we've tested.


69 Comments


  • Notmyusualid - Sunday, December 17, 2017 - link

    So, when you are at gun point, in a corner, you finally concede defeat?

    I think you need professional help.
  • tuxRoller - Friday, December 15, 2017 - link

If you are staying with a single-threaded submission model, Windows may well have a decent-sized advantage with both IOCP and RIO. Linux kernel AIO is just such a crapshoot that it's really only useful if you run big databases and you set it up properly.
  • IntelUser2000 - Friday, December 15, 2017 - link

    "Lower power consumption will require serious performance compromises.

    Don't hold your breath for a M.2 version of the 900p, or anything with performance close to the 900p. Future Optane products will require different controllers in order to offer significantly different performance characteristics"

Not necessarily. Optane Memory devices show that random performance is on par with the 900P. It's the sequential throughput that limits top-end performance.

While it's plausible that load power consumption might be tied to performance, the same isn't always true for idle. Idle power consumption can be cut significantly (to tens of mW) with a new controller. It's reasonable to assume the 900P uses a controller derived from the SSD 750's, which is also power hungry.
  • p1esk - Friday, December 15, 2017 - link

    Wait, I don't get it: the operation is much simpler than flash (no garbage collection, no caching, etc), so the controller should be simpler. Then why does it consume more power?
  • IntelUser2000 - Friday, December 15, 2017 - link

You are still confusing load power consumption with idle power consumption. What you said makes sense for load, when it's active, not for idle.

Optane Memory devices having one third the idle power demonstrates it's due to the controller. They likely wanted something with a short time to market, so they took whatever controller they had on hand and retrofitted it.
  • rahvin - Friday, December 15, 2017 - link

Optane's very nature as a heat-based phase-change material means it is always going to use more power than NAND, because it will always take more energy to heat a material up than to create a magnetic or electric field.
  • tuxRoller - Saturday, December 16, 2017 - link

That same nature also means that it will require less energy per reset as the process node shrinks (roughly E ~ 1/F).
In general, PCM is much more amenable to process scaling than NAND.
  • CheapSushi - Friday, December 15, 2017 - link

Keep in mind that a big part of the sequential throughput limit is that the Optane Memory M.2s use only two PCIe lanes (x2). This add-in card is x4, as are most NAND M.2 sticks.
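The lane-count difference translates directly into a raw bandwidth ceiling. A back-of-the-envelope check (PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding; this ignores protocol overhead, so real-world throughput is somewhat lower):

```python
# Usable PCIe 3.0 link bandwidth by lane count, before protocol overhead.
def pcie3_gb_per_s(lanes):
    gt_per_s = 8e9        # 8 GT/s per lane
    encoding = 128 / 130  # 128b/130b line coding
    return gt_per_s * encoding * lanes / 8 / 1e9  # bits/s -> GB/s

for lanes in (2, 4):
    print(f"x{lanes}: {pcie3_gb_per_s(lanes):.2f} GB/s")
```

An x2 link tops out just under 2 GB/s, roughly half of what an x4 add-in card has available.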
  • twotwotwo - Friday, December 15, 2017 - link

    I'm curious whether it's possible to get more IOPS doing random 512B reads, since that's the sector size this advertises.

    When the description of the memory tech itself came out, bit addressability--not having to read any minimum block size--was a selling point. But it may be that the controller isn't actually capable of reading any more 512B blocks/s than 4KB ones, even if the memory and the bus could handle it.

    I don't think any additional IOPS you get from smaller reads would help most existing apps, but if you were, say, writing a database you wanted to run well on this stuff, it'd be interesting to know that small reads help.
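One way to probe this is simply to time random reads at both sizes. A minimal sketch follows; it uses buffered reads on an ordinary scratch file, so it mostly measures per-syscall overhead rather than the device. A real drive test would open with O_DIRECT and aligned buffers (e.g. via fio), and the sizes and durations here are arbitrary:

```python
import os
import random
import tempfile
import time

def measure_read_rate(path, block_size, duration=0.2, file_size=1 << 20):
    """Issue random reads of block_size for ~duration seconds; return reads/sec."""
    blocks = file_size // block_size
    fd = os.open(path, os.O_RDONLY)
    count = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        os.pread(fd, block_size, random.randrange(blocks) * block_size)
        count += 1
    os.close(fd)
    return count / duration

# Scratch file standing in for the drive under test.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(1 << 20))
    scratch = f.name

for bs in (512, 4096):
    print(f"{bs:>4}B random reads: {measure_read_rate(scratch, bs):,.0f}/s")
os.unlink(scratch)
```

If 512B reads complete at the same rate as 4KB ones rather than several times faster, the bottleneck is per-command overhead in the controller or host stack, not the media.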
  • tuxRoller - Friday, December 15, 2017 - link

Those latencies seem pretty high. Was this with Linux or Windows? The table on page one indicates both were used.
Can you run a few of these tests against a loop-mounted RAM block device? I'm curious to see what the min, average, and standard deviation of latency look like when the block layer is involved.
