Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
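The sweep described above can be sketched as a small script driving fio. This is a hypothetical harness for illustration only (the review uses its own tooling); the device path and the exact fio options chosen are assumptions:

```python
# Sketch of the mixed random I/O sweep using fio (hypothetical harness;
# the drive path and option choices are assumptions, not the article's tooling).
import subprocess
import time

def build_jobs(device="/dev/nvme0n1"):
    """One fio command per read/write mix, from pure reads to pure writes."""
    jobs = []
    for read_pct in range(100, -1, -10):  # 100, 90, ..., 0 (10% increments)
        jobs.append([
            "fio", "--name=mixed-random",
            f"--filename={device}",
            "--rw=randrw",                  # random mixed reads and writes
            f"--rwmixread={read_pct}",      # percentage of reads in the mix
            "--bs=4k", "--iodepth=4",       # 4kB accesses at queue depth 4
            "--ioengine=libaio", "--direct=1",
            "--size=64g",                   # limit the test to a 64GB span
            "--io_size=32g",                # stop after 32GB transferred...
            "--runtime=60",                 # ...or after one minute
        ])
    return jobs

def run_sweep(device="/dev/nvme0n1"):
    for cmd in build_jobs(device):
        start = time.time()
        subprocess.run(cmd, check=True)
        # idle for as long as the mix ran (capped at one minute),
        # giving an overall duty cycle of roughly 50%
        time.sleep(min(time.time() - start, 60))
```

Separating command construction from execution makes the schedule easy to inspect without touching a drive.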

Mixed 4kB Random Read/Write

The Corsair Force MP510 demonstrated above-average random read performance and top-tier random writes, but its mixed-workload results are average at best. The WD Black averages about 18% higher performance with the same NAND, and the Samsung 970 EVO is about 40% faster.

Sustained 4kB Mixed Random Read/Write (Power Efficiency)
[Interactive chart: power efficiency in MB/s/W; average power in W]

As with performance, the power efficiency score from the MP510 is acceptable but nothing special compared to the Toshiba XG6 and WD Black.

The unimpressive overall score from the MP510 on the mixed random I/O test stems primarily from its behavior during the more write-heavy half of the test. As the proportion of writes grows, the MP510's performance increases relatively slowly until the very end, when it spikes on the fully cacheable 100% random write phase of the test.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
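In fio terms (a hypothetical mapping, not the review's actual tooling; the device path and option choices are assumptions), only the access pattern, block size, and queue depth change for one step of this sequential sweep:

```python
# Hypothetical fio invocation for one step of the mixed *sequential* sweep
# (device path and option choices are assumptions, not the article's tooling).
def sequential_job(read_pct, device="/dev/nvme0n1"):
    return [
        "fio", "--name=mixed-seq",
        f"--filename={device}",
        "--rw=rw",                  # sequential mixed reads/writes (vs. randrw)
        f"--rwmixread={read_pct}",  # same 0-100% mix range as the random test
        "--bs=128k",                # 128kB accesses (vs. 4kB)
        "--iodepth=1",              # queue depth 1 (vs. 4)
        "--ioengine=libaio", "--direct=1",
        "--size=64g", "--io_size=32g", "--runtime=60",  # same limits as above
    ]
```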

Mixed 128kB Sequential Read/Write

The Corsair Force MP510 provides excellent performance on our mixed sequential I/O test, but for once appears to be slightly slower than the Phison E12 engineering sample we tested earlier this year.

Sustained 128kB Mixed Sequential Read/Write (Power Efficiency)
[Interactive chart: power efficiency in MB/s/W; average power in W]

The MP510 turns in an extremely good power efficiency score for the mixed sequential I/O test: only about 4% behind the class-leading WD Black, and 50% better than the HP EX920.
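The efficiency metric itself is just average throughput divided by average power, and the percentage comparisons follow directly. The figures below are made-up placeholders to show the arithmetic, not the measured results:

```python
def efficiency(throughput_mbps, power_w):
    """Power efficiency in MB/s per watt."""
    return throughput_mbps / power_w

def percent_diff(a, b):
    """How far a is from b, as a percentage of b."""
    return (a - b) / b * 100

# Hypothetical illustration only -- these are not the review's measurements.
mp510 = efficiency(480.0, 4.0)     # 120.0 MB/s/W
wd_black = efficiency(500.0, 4.0)  # 125.0 MB/s/W
print(percent_diff(mp510, wd_black))  # -4.0, i.e. about 4% behind
```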

The performance curve of the Corsair Force MP510 on the mixed sequential I/O test looks rather unusual, with performance jumping around a bit and peaking during the more write-heavy half of the test. Most drives show a more typical bathtub curve, with the best performance at either end of the test, but the WD Black's behavior does show similarities to the MP510's.

Comments

  • imaheadcase - Thursday, October 18, 2018 - link

    Wow, I had no idea how cheap SSDs have become. You know, it's getting to price points soon where home servers could easily use SSDs vs mechanical drives.
  • bill.rookard - Thursday, October 18, 2018 - link

    If a 4TB drive becomes somewhat more affordable, then yes, they can. I guess it depends on how big of a server array you have. Personally, I have about 30TB in a 2U server using 4x4TB ZFS + 4x3TB ZFS for 20TB effective. Even a bargain-basement setup of a similar size using the cheapest Micron 1100 2TB SSDs you could find - you'd need 11 of them @ $280 each.

    Or - just a stitch over $3000.00. Meanwhile, the drives I used were factory-refurbed enterprise drives, and all 8 of them cost around $500.00.
  • nathanddrews - Thursday, October 18, 2018 - link

    I'm definitely waiting for larger SSDs to come down. I think if we ever get to $100/TB, I'll start to swap out more drives. 2TB for $199 would be great.

    I only recently started to experiment with "hybrid" storage on my home server. I've got about 40TB of rust with about 800GB of SSDs (older SSDs that didn't have a home anymore), using software to manage what folders/files are stored/backed up on which drives. UHD Blu-ray and other disc backups on the slow hard drives (still fast enough to saturate 1GbE) and documents/photos, etc. on the SSD array. My server doesn't have anything faster than SATA6Gbps, but the SSDs are still much quicker for smaller files/random access.
  • Lolimaster - Thursday, October 18, 2018 - link

    I would upgrade to a cheap 2.5-5Gbit NIC
  • nathanddrews - Thursday, October 18, 2018 - link

    I've already got a couple 10GbE NICs, just waiting on an affordable switch...
  • leexgx - Thursday, October 18, 2018 - link

    Use a PC :) There's a YouTube video of a person doing it. You do need to make sure you have the right mobo so it can handle 10Gb speeds between the PCIe 10GbE cards, or you'll be getting low speeds between cards (still far cheaper than an actual 10Gb switch).

    https://www.youtube.com/watch?v=p39mFz7ORco
  • Valantar - Friday, October 19, 2018 - link

    You're recommending running a PC 24/7 as a switch to provide >GbE speeds from a NAS? Really?
  • nathanddrews - Friday, October 19, 2018 - link

    LOL that's a good joke! I mean, it's creative, but there's no way I'm doing that. I can wait a little longer to get a proper switch(es).
  • rrinker - Thursday, October 18, 2018 - link

    I'm at the point of contemplating a new server for home, and hybrid was the way I was going to go, since 16TB or so of all SSD is just too expensive still. But 1-2TB of SSD as fast cache for a bunch of 4TB spinny drives would be relatively inexpensive and offer most of the benefits. And SSD for the OS drive of course.
  • DominionSeraph - Monday, October 22, 2018 - link

    Yup, I picked up 24TB for $240. SSDs really can't compete.
