AnandTech Storage Bench - Light

Our Light storage test has more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is more a test of application launch times and file load times. The test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run with the drive freshly erased and empty, and again after filling the drive with sequential writes.

The Patriot Hellfire, in blue, is highlighted as an example of a last-generation Phison E7 drive. Although we didn't test it at the time, the MP500 was based on the same controller and memory.

ATSB - Light (Data Rate)

The average data rates from the Corsair Force MP510 show a much larger performance hit for running the Light test on a full drive than we saw with the Heavy test, but in either case the MP510 maintains competitive performance.

ATSB - Light (Average Latency)

ATSB - Light (99th Percentile Latency)

The average and 99th percentile latencies from the MP510 are very low, though most of the NVMe drives listed here have sub-millisecond 99th percentile latencies, and the differences are generally imperceptible for workloads this light.

ATSB - Light (Average Read Latency)

ATSB - Light (Average Write Latency)

The MP510 continues to have some of the best write latencies and competitive read latencies, with the write latency in particular showing almost no penalty for running the test on a full drive.

ATSB - Light (99th Percentile Read Latency)

ATSB - Light (99th Percentile Write Latency)

The 99th percentile write latency of the Corsair MP510 on the Light test is minimal, showing that the entire test operates within the very fast SLC write cache. The read latencies are very good, but don't set any records.

ATSB - Light (Power)

The energy usage of the MP510 during the Light test is a bit better than average, but still about 30% higher than that of the Crucial MX500 mainstream SATA SSD.


42 Comments


  • imaheadcase - Thursday, October 18, 2018 - link

    Wow, I had no idea how cheap SSDs have become. You know, it's getting to price points where home servers could easily use SSDs instead of mechanical drives.
  • bill.rookard - Thursday, October 18, 2018 - link

    If a 4TB drive becomes somewhat more affordable, then yes, they can. I guess it depends on how big of a server array you have. Personally, I have about 30TB in a 2U server using 4×4TB ZFS + 4×3TB ZFS for 20TB effective. Even a bargain-basement setup of a similar size using the cheapest Micron 1100 2TB SSDs you could find - you'd need 11 of them @ $280 each.

    Or - just a stitch over $3,000.00. Meanwhile, the drives I used were factory-refurbed enterprise drives, and all 8 of them cost around $500.00.
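    The cost comparison above can be sketched in a few lines of Python, using only the prices quoted in this thread (drive counts and capacities are taken from the comment; the per-TB figures are derived arithmetic, not quoted):

    ```python
    # Hypothetical cost-per-TB comparison based on the figures in this thread.
    ssd_price, ssd_tb, ssd_count = 280.0, 2, 11   # Micron 1100 2TB @ $280, 11 drives
    hdd_total, hdd_tb_effective = 500.0, 20       # refurbed enterprise HDDs, ~20TB usable

    ssd_total = ssd_price * ssd_count             # ~$3080 for 22TB raw
    print(f"SSD array: ${ssd_total:.0f} (${ssd_total / (ssd_tb * ssd_count):.0f}/TB raw)")
    print(f"HDD array: ${hdd_total:.0f} (${hdd_total / hdd_tb_effective:.0f}/TB effective)")
    ```

    On these numbers the refurbished spinning-rust array lands around $25/TB effective versus roughly $140/TB raw for the SSD build, which is the gap the commenter is pointing at.
    
    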
  • nathanddrews - Thursday, October 18, 2018 - link

    I'm definitely waiting for larger SSDs to come down. I think if we ever get to $100/TB, I'll start to swap out more drives. 2TB for $199 would be great.

    I only recently started to experiment with "hybrid" storage on my home server. I've got about 40TB of rust alongside about 800GB of SSDs (older SSDs that didn't have a home anymore), using software to manage which folders/files are stored/backed up on which drives. UHD Blu-ray and other disc backups go on the slow hard drives (still fast enough to saturate 1GbE), and documents/photos, etc. go on the SSD array. My server doesn't have anything faster than SATA 6Gbps, but the SSDs are still much quicker for smaller files/random access.
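    The tiering idea described above - big sequential media on spinning disks, small random-access files on flash - could be sketched like this. The extensions, size threshold, and pool names here are purely illustrative assumptions, not details from the comment:

    ```python
    from pathlib import Path

    # Hypothetical tiering rule: large/sequential media goes to the HDD pool
    # (GbE is the bottleneck anyway); small, random-access files go to flash.
    HDD_EXTENSIONS = {".mkv", ".iso", ".m2ts"}          # illustrative media types
    SSD_SIZE_LIMIT = 100 * 1024 * 1024                  # files under ~100MB -> SSD

    def choose_pool(path: Path, size_bytes: int) -> str:
        """Pick a storage pool for a file based on type and size."""
        if path.suffix.lower() in HDD_EXTENSIONS or size_bytes >= SSD_SIZE_LIMIT:
            return "hdd_pool"   # sequential reads, saturates 1GbE from rust
        return "ssd_pool"       # small/random access benefits from flash

    print(choose_pool(Path("movie.mkv"), 40 * 10**9))   # -> hdd_pool
    print(choose_pool(Path("photo.jpg"), 4 * 10**6))    # -> ssd_pool
    ```

    Real tiering software makes this decision with access-frequency heuristics rather than a static rule, but the routing logic is the same shape.
    
    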
  • Lolimaster - Thursday, October 18, 2018 - link

    I would upgrade to a cheap 2.5-5Gbit NIC.
  • nathanddrews - Thursday, October 18, 2018 - link

    I've already got a couple of 10GbE NICs, just waiting on an affordable switch...
  • leexgx - Thursday, October 18, 2018 - link

    Use a PC :) There's a YouTube video of someone doing it. You do need to make sure you have the right mobo so it can handle 10Gb speeds between the PCIe 10GbE cards, or you'll be getting low speeds between the cards (still far cheaper than an actual 10Gb switch).

    https://www.youtube.com/watch?v=p39mFz7ORco
  • Valantar - Friday, October 19, 2018 - link

    You're recommending running a PC 24/7 as a switch to provide >GbE speeds from a NAS? Really?
  • nathanddrews - Friday, October 19, 2018 - link

    LOL, that's a good joke! I mean, it's creative, but there's no way I'm doing that. I can wait a little longer to get a proper switch(es).
  • rrinker - Thursday, October 18, 2018 - link

    I'm at the point of contemplating a new server for home, and hybrid was the way I was going to go, since 16TB or so of all-SSD storage is still just too expensive. But 1-2TB of SSD as a fast cache for a bunch of 4TB spinny drives would be relatively inexpensive and offer most of the benefits. And an SSD for the OS drive, of course.
  • DominionSeraph - Monday, October 22, 2018 - link

    Yup, I picked up 24TB for $240. SSDs really can't compete.