AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice: once on a freshly erased drive, and once after filling the drive with sequential writes.
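
The two-pass structure is straightforward to reproduce with standard tools. Below is a minimal sketch, assuming a Linux system with fio installed and a dedicated test drive at /dev/nvme0n1; the device path, fio parameters, and trace-replay stub are illustrative assumptions rather than AnandTech's actual harness, and the secure-erase step before the first pass is omitted.

```python
#!/usr/bin/env python3
"""Sketch of the two-pass test flow described above: run the workload on an
empty drive, then sequentially fill the drive and run it again. Device path
and fio options are assumptions; the trace replay itself is stubbed out."""
import subprocess

DEVICE = "/dev/nvme0n1"  # assumed test drive -- the fill pass is destructive


def sequential_fill(device: str) -> None:
    """Write the entire device once with large sequential blocks using fio."""
    subprocess.run([
        "fio",
        "--name=seq-fill",
        f"--filename={device}",
        "--rw=write",        # sequential writes
        "--bs=128k",
        "--direct=1",        # bypass the page cache
        "--ioengine=libaio",
        "--iodepth=32",
        "--size=100%",       # one full pass over the device
    ], check=True)


def run_heavy_trace(label: str) -> None:
    # Placeholder for replaying the recorded Heavy I/O trace against the drive.
    print(f"Replaying Heavy trace ({label}) ...")


if __name__ == "__main__":
    run_heavy_trace("freshly erased drive")  # first pass: empty drive
    sequential_fill(DEVICE)                  # precondition: fill with sequential writes
    run_heavy_trace("full drive")            # second pass: full drive
```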

ATSB - Heavy (Data Rate)

As with The Destroyer, the average data rate of the Intel Optane SSD 800p puts it near the top of the rankings, but behind the fastest flash-based SSDs and the Optane 900p. Intel's VROC again adds overhead that isn't worthwhile without the high queue depths of synthetic benchmarks.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The average and 99th percentile latencies of the Optane SSD 800p on the Heavy test are better than any of the low-end NVMe SSDs, but it is only in RAID that the latency drops down to the level of the best flash-based SSDs and the 900p.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average read latency of the Optane SSD 800p ranks second behind the 900p. VROC adds enough overhead that the RAID configurations end up having slightly higher average read latencies than the Samsung 960 PRO. For the average write latencies, VROC is far more useful, and helps the 800p make up for the lack of a write cache.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read and write latencies of the 800p RAID configurations are on par with the 900p, but the individual drives have slightly worse QoS than the Samsung 960 PRO.

ATSB - Heavy (Power)

The 800p again takes the lead with the lowest energy usage, thanks to its high overall performance without the high baseline power consumption of the 900p. The budget NVMe SSDs all use at least twice as much energy over the course of the test, and the Samsung 960 PRO is closer to the budget drives than to the 800p.

116 Comments

  • MrSpadge - Friday, March 9, 2018 - link

    Have you ever had an SSD run out of write cycles? I've personally only witnessed one such case (an old 60 GB drive from 2010, old controller, almost full all the time), but numerous other SSD deaths from other causes (controller, SandForce, or whatever).
  • name99 - Friday, March 9, 2018 - link

    I have an SSD that SMART claims is at 42%. I'm curious to see how this plays out over the next three years or so.

    But yeah, I'd agree with your point. I've had two SSDs so far fail (many fewer than HDs, but of course I've owned many more HDs and for longer) and both those failures were inexplicable randomness (controller? RAM?) but they certainly didn't reflect the SSD running out of write cycles.

    I do have some very old (heavily used) devices that are flash based (iPod nano 3rd gen) and they are "failing" in the expected SSD fashion --- getting slower and slower, though they can be goosed back up to speed for another year by giving them a bulk erase. Meaning that it does seem that SSD "wear-out" failure (when everything else is reliable) happens as claimed --- the device gets so slow that at some point you're better off just moving to a new one --- but it takes YEARS to get there, and you get plenty of warning, not unexpected medium failure.
  • MonkeyPaw - Monday, March 12, 2018 - link

    The original Nexus 7 had this problem, I believe. Those things aged very poorly.
  • 80-wattHamster - Monday, March 12, 2018 - link

    Was that the issue? I'd read/heard that Lollipop introduced a change to the cache system that didn't play nicely with Tegra chips.
  • sharath.naik - Sunday, March 11, 2018 - link

    The endurance listed here is barely better than MLC. It is nowhere close to even SLC.
  • Reflex - Thursday, March 8, 2018 - link

    https://www.theregister.co.uk/2016/02/01/xpoint_ex...

    I know ddriver can't resist continuing to use 'hypetane', but seriously, looking at this article, Optane appears to be a win nearly across the board. In some cases quite significantly. And this is with a product that is constrained in a number of ways. Prices also are starting at a much better place than early SSDs did vs HDDs.

    Really fantastic early results.
  • iter - Thursday, March 8, 2018 - link

    You need to lay off whatever you are abusing.

    Fantastic results? None of the people who can actually benefit from its few strong points are rushing to buy. And for everyone else Intel is desperately flogging it to, it is a pointless waste of money.

    Due to its failure to deliver on expectations and promises, it is doubtful Intel will any time soon allocate the manufacturing capacity it would take to make it competitive with NAND, especially given its awful density. At this time Intel is merely trying to recoup the money they put into making it. Nobody denies the strong low-queue-depth reads, but that ain't enough to make it into a money maker. Especially not when a more performant alternative has been available since before Intel announced XPoint.
  • Alexvrb - Thursday, March 8, 2018 - link

    Most people ignore or gloss over the strong low QD results, actually. Which is ironic given that most of the people crapping all over them for having the "same" performance (read: bars in extreme benchmarks) would likely benefit from improved performance at low QD.

    With that being said, capacity and price are terrible. They'll never make any significant inroads against NAND until they can quadruple their current best capacity.
  • Reflex - Thursday, March 8, 2018 - link

    Alex - I'm sure they are aware of that. I just remember how consumer NAND drives launched: the price/perf compared to HDDs was far worse than this, and those drives still lost in some types of performance (random read/write, for instance) despite the high prices. For a new tech, being less than 3x the price while providing better characteristics across the board is pretty promising.
  • Calin - Friday, March 9, 2018 - link

    SSDs never had a random R/W problem compared to magnetic disks, not even if you compared them by price to RAIDs and/or SCSI server drives. What problem they might have had at the beginning was with sequential read (and especially write) speed. Current sequential write speeds for hard drives are limited by the rpm of the drive, and they reach around 150MB/s for a 7200 rpm 1TB desktop drive. Meanwhile, the Samsung 480 EVO SSD at 120GB (a good second or third generation SSD) reaches some 170MB/s sequential write.
    Where magnetic rotational disk drives suffer a 100-times reduction in performance is random write, while SSDs hardly care. This is due to the awful access time of hard drives (move the heads and wait for the rotation of the disks to bring the data below the read/write heads) - that's 5-10 milliseconds of wait time for each new operation.
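
To put rough numbers on the access-time argument in the comment above, here is a quick back-of-the-envelope calculation in Python; the ~9 ms average seek time is an assumed figure for a 7200 rpm desktop drive, not taken from the comment.

```python
# Rough estimate of random 4K throughput for a 7200 rpm hard drive,
# based on seek time plus average rotational latency.
RPM = 7200
AVG_SEEK_MS = 9.0        # assumed average seek time for a desktop drive
BLOCK_KB = 4             # small random writes

rotational_latency_ms = (60_000 / RPM) / 2   # half a revolution on average (~4.2 ms)
access_time_ms = AVG_SEEK_MS + rotational_latency_ms
iops = 1000 / access_time_ms                 # one random I/O per access time
random_mb_s = iops * BLOCK_KB / 1024

print(f"avg rotational latency: {rotational_latency_ms:.2f} ms")
print(f"avg access time:        {access_time_ms:.2f} ms  (~{iops:.0f} IOPS)")
print(f"random 4K throughput:   ~{random_mb_s:.2f} MB/s vs ~150 MB/s sequential")
```

With those assumptions the drive manages on the order of 75 random IOPS, i.e. well under 1 MB/s at 4K, which is why random writes are orders of magnitude slower than the ~150 MB/s sequential figure while SSDs are barely affected.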
