AnandTech Storage Bench - Light

Our Light storage test has relatively more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is a test more of application launch times and file load times. This test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run with the drive both freshly erased and empty, and after filling the drive with sequential writes.

ATSB - Light (Data Rate)

The Light test allows the flash-based SSDs to make the most of their write caching throughout the test, so even the 900p is surpassed by the Samsung NVMe SSDs while the 800p ranks with the budget NVMe drives. When the drives are full and the flash-based SSDs get bogged down with garbage collection, the 900p comes out ahead, but the 800p still trails behind the Samsung 960 PRO. The VROC configurations look especially poor in terms of average data rate, and the RAID-5 performance is surprisingly low.

ATSB - Light (Average Latency)
ATSB - Light (99th Percentile Latency)

The Optane SSD 800p scores in the middle tier of SSDs for average latency on the Light test, and VROC RAID is no help here. VROC does help with the 99th percentile latencies, but without it the 800p looks like a low-end drive that merely doesn't have garbage collection problems.

ATSB - Light (Average Read Latency)
ATSB - Light (Average Write Latency)

The average read latencies from the 800p are almost twice as high as those from the 900p, and VROC's overhead only makes things worse. The average write latencies of the 900p aren't as low as those of the best flash-based SSDs, and the write latency of the 800p is well over twice that of the 900p.

ATSB - Light (99th Percentile Read Latency)
ATSB - Light (99th Percentile Write Latency)

The 99th percentile latency scores from the 800p are not great, but at least the drives perform just as well when full. Small flash-based drives are the most heavily affected when constant garbage collection becomes necessary.

ATSB - Light (Power)

The Light test is a fairly short run with any of these drives, but the 800p still manages to return extremely good power usage numbers that are well ahead of any flash-based NVMe SSD.

116 Comments

  • Reflex - Saturday, March 10, 2018 - link

    I also think people forget how crappy & expensive gen1 and gen2 SSDs were.
  • Drazick - Friday, March 9, 2018 - link

    We really need those in U.2 / SATA Express form.
    Desktop users shouldn't use M.2 with all its thermal limitations.
  • jabber - Friday, March 9, 2018 - link

    Whichever connector you use or whatever the thermals, once you go above 600MB/s the real-world performance difference is very hard to tell in most cases. We just need SATA4 and we can dump all these U.2/SATA Express sockets. M.2 for compactness and SATA4 for everything else non-Enterprise. Done.
  • Reflex - Friday, March 9, 2018 - link

    U.2 essentially is next-gen SATA. There is no SATA4 on the way. SATA is at this point an 18-year-old specification ripe for retirement. There is also nothing wrong with M.2 even in desktops. Heat spreaders aren't a big deal in that scenario. All that's inside a SATA drive is more or less the same board you'd see in the M.2 form factor.
  • leexgx - Saturday, March 10, 2018 - link

    apart from that, you're limited to 0-2 slots per board (most come with 6 SATA ports)

    I agree that a newer SATA that supports NVMe would be nice, but U.2 would be nice too if anyone would adopt it, make the ports standard, and ship U.2 SSDs
  • jabber - Friday, March 9, 2018 - link

    I am amazed that no one has decided to just do the logical thing and slap a 64GB Flash cache in a 4TB+ HDD and be done with it. One unit and done.
  • iter - Friday, March 9, 2018 - link

    They have; Seagate has a hybrid drive, and it's not all that great really.

    The reason is that caching algorithms suck. They are usually FIFO - first in, first out - and don't take into account actual usage patterns. Meaning you get good performance only if your working set doesn't exceed the cache. If you exceed it, it starts bringing in garbage, wearing down the flash for nothing. Go watch a movie that you're only going to watch once - it will cache it, because you accessed it. And now you have gigabytes of pointless writes to the cache, displacing data that actually made sense to cache.

    Which is why I personally prefer to have separate drives rather than cache. Because I know what can benefit from flash and what makes no sense there. Automatic tiering is pathetic, even in crazy expensive enterprise software.
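The FIFO-versus-usage-aware contrast described above is easy to see in a toy simulation. The sketch below is illustrative only - the cache sizes, the workload, and the simple LFU policy are made-up stand-ins, not any vendor's actual firmware - but it shows how a one-time sequential scan (the "movie") flushes a FIFO cache while a frequency-aware cache keeps the hot blocks:

```python
from collections import OrderedDict, Counter

class FIFOCache:
    """Evicts in insertion order, ignoring how often blocks are read."""
    def __init__(self, size):
        self.size, self.store = size, OrderedDict()
    def access(self, block):
        hit = block in self.store
        if not hit:
            if len(self.store) >= self.size:
                self.store.popitem(last=False)  # oldest insertion goes first
            self.store[block] = True
        return hit

class LFUCache:
    """Evicts the least-frequently-read block, so hot data survives a scan."""
    def __init__(self, size):
        self.size, self.store, self.freq = size, set(), Counter()
    def access(self, block):
        self.freq[block] += 1
        hit = block in self.store
        if not hit:
            if len(self.store) >= self.size:
                # Victim is the coldest cached block by lifetime read count.
                victim = min(self.store, key=lambda b: self.freq[b])
                self.store.discard(victim)
            self.store.add(block)
        return hit

# Workload: a hot set reread often, then a one-shot "movie" scan, then the hot set again.
workload = [b for _ in range(20) for b in range(8)]      # hot set: blocks 0-7
workload += list(range(100, 200))                        # one-pass sequential scan
workload += [b for _ in range(20) for b in range(8)]     # hot set again

for cache in (FIFOCache(16), LFUCache(16)):
    hits = sum(cache.access(b) for b in workload)
    print(type(cache).__name__, hits)
```

On this workload the scan evicts the entire hot set from the FIFO cache, so the second hot phase starts with misses; the LFU cache evicts only the freq-1 scan blocks and keeps every hot block resident.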
  • jabber - Friday, March 9, 2018 - link

    Yeah, I was using SSHD drives when they first came out, but 8GB of flash doesn't really cut it. I'm sure after all this time 64GB costs the same as 8GB did back then (plus it would be enough space for several apps and data sets to be retained), and the algorithms will have improved. If Intel thinks caches for HDDs have legs, then why not just combine them in one simple package?
  • wumpus - Friday, March 9, 2018 - link

    Presumably, there's no market. People who buy spinning rust are either buying capacity (for media, and using SSD for the rest) or cheaping out and not buying SSDs.

    What surprises me is that drives still include 64MB of DRAM; you would think that companies who bothered to make these drives would have switched to TLC (and pseudo-SLC) for their buffers/caches (writing on power off must be a pain). Good luck finding someone who would pay for the difference.

    Intel managed to shove this tech into the chipsets (presumably a software driver that looked for the hardware flag, similar to RAID) in 2011-2012, but apparently dropped that soon afterward. Too bad; reserving 64GB of flash to cache a hard drive (no idea if you could do this with a RAID array) sounds like something that is still useful (not that you need the performance, just that the flash is so cheap). Just make sure the cache is set to "write through" [if this kills performance it shouldn't be on rust] to avoid doubling your chances of drive loss. Apparently the support costs weren't worth the bother.
  • leexgx - Saturday, March 10, 2018 - link

    8GB should be plenty for an SSHD, and their current generation has cache eviction protection (which I think is 3rd gen), so say an LBA block is read 10 times, it will assume that's something you open often or it's a system file or a startup item, so 2-3GB of data will not get evicted easily (so Windows, Office, browsers and other startup items will always be in the NAND cache); the rest of the caching is dynamic - if it's had more than 2-4 reads it caches it to the NAND

    the current generation of SSHDs by Seagate (don't know how others do it) is split into 3 sections, so the read cache has easy, bit-harder, and very-hard-to-evict tiers; with the first-gen SSHDs from Seagate, just defragmenting the drive would end up evicting your normally used stuff, as any 2 reads would be cached right away - that doesn't happen any more

    if you expect it to make your games load faster you need to look elsewhere, as they are meant to boost commonly used applications, the OS, and startup programs while still leaving the space for storage

    that said, I really dislike HDDs as boot drives - if 250GB SSDs didn't cost £55 I'd put them in for free
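The tiered, read-count-based promotion described in that comment can be modeled in a few lines. This is a toy sketch only - the `PROMOTE_AFTER` and `PIN_AFTER` thresholds are invented for illustration and are not Seagate's actual firmware values:

```python
from collections import Counter

class TieredReadCache:
    """Toy model of read-count-based promotion: a block enters the flash
    cache only after repeated reads, and very frequently read blocks are
    pinned so a one-pass scan (e.g. a defrag) can't evict them."""
    PROMOTE_AFTER = 3   # reads before a block enters the flash cache
    PIN_AFTER = 10      # reads before a block is protected from eviction

    def __init__(self, capacity):
        self.capacity = capacity
        self.reads = Counter()   # lifetime read count per LBA
        self.cached = set()      # LBAs currently in the flash cache

    def read(self, lba):
        self.reads[lba] += 1
        hit = lba in self.cached
        if not hit and self.reads[lba] >= self.PROMOTE_AFTER:
            if len(self.cached) >= self.capacity:
                # Evict only from the unpinned tier, least-read first.
                evictable = [b for b in self.cached
                             if self.reads[b] < self.PIN_AFTER]
                if not evictable:
                    return hit  # everything is pinned: skip caching this block
                self.cached.discard(min(evictable, key=lambda b: self.reads[b]))
            self.cached.add(lba)
        return hit

cache = TieredReadCache(capacity=4)
for _ in range(12):          # boot files read every day: promoted, then pinned
    for lba in (0, 1):
        cache.read(lba)
for lba in range(100, 120):  # defrag-style single-pass reads: never promoted
    cache.read(lba)
print(sorted(cache.cached))  # → [0, 1]
```

Because single-pass reads never reach the promotion threshold, the scan generates no cache writes at all - which is exactly the wear and eviction problem the newer generations were meant to fix.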
