Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to demonstrate it. The reason we don't see consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speed. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can deliver higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests, but enough to give me a good look at drive behavior once all spare area filled up.

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. All graphs within a set share the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
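If you want to approximate this test at home, the sketch below shows one way to do it on Linux in Python. To be clear, this is a simplified stand-in rather than our actual setup: it issues synchronous writes at queue depth 1 instead of 32, so absolute numbers will be lower, and /dev/sdX is a placeholder for a drive whose contents you can afford to destroy.

```python
# Simplified IOPS-over-time logger: 4KB random writes of incompressible data
# across the whole device, instantaneous IOPS sampled once per second.
# Run as root on Linux. WARNING: destroys all data on DEV.
import mmap
import os
import random
import time

DEV = "/dev/sdX"       # placeholder: point this at an expendable test drive
BLOCK = 4096           # 4KB transfers, matching the workload described above
DURATION = 2000        # seconds, the same window the graphs cover

buf = mmap.mmap(-1, BLOCK)        # page-aligned buffer, required for O_DIRECT
buf.write(os.urandom(BLOCK))      # random bytes = incompressible data

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
lbas = os.lseek(fd, 0, os.SEEK_END) // BLOCK   # number of 4KB sectors

log = []                           # (elapsed seconds, IOPS) samples
start = last = time.monotonic()
ops = 0
while True:
    now = time.monotonic()
    if now - start >= DURATION:
        break
    if now - last >= 1.0:          # record instantaneous IOPS each second
        log.append((round(now - start), ops / (now - last)))
        ops, last = 0, now
    os.pwrite(fd, buf, random.randrange(lbas) * BLOCK)
    ops += 1
os.close(fd)

for t, iops in log:                # dump samples for plotting elsewhere
    print(t, iops)
```

Sustaining queue depth 32 requires asynchronous IO (libaio/io_uring, or a dedicated tool), so numbers from this loop won't match the charts, but the shape of the curve, including the cliff once spare area runs out, should still be recognizable.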

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity each drive would have advertised had its vendor set aside that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't strictly necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's best to do this from the start (e.g. secure erase, partition, then install Windows), but if you're working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
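The description above is Windows-centric; on Linux the same trick can be scripted in a couple of commands. The sketch below is my own illustration of the approach, with a placeholder device name, assuming the stock parted and blkdiscard utilities: it TRIMs the entire drive, then partitions only 75% of it and leaves the rest untouched.

```python
# Reserve ~25% of a drive as extra spare area (a Linux equivalent of the
# Windows workflow described above). WARNING: wipes the drive.
import subprocess

DEV = "/dev/sdX"       # placeholder: an expendable drive
USER_PCT = 75          # expose 75% to the OS, leave 25% as spare area

# TRIM every LBA so the controller knows the whole drive is free
subprocess.run(["blkdiscard", DEV], check=True)

# Partition only the first 75%; the controller can treat the untouched
# (and already TRIMmed) tail as additional spare area
subprocess.run(["parted", "-s", DEV, "mklabel", "gpt",
                "mkpart", "primary", "0%", f"{USER_PCT}%"], check=True)
```

On Windows, creating a temporary partition over the leftover space, formatting it to TRIM it, and then deleting it achieves the same effect, as described in the working-backwards case above.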

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for every subsequent write (write amplification goes up, performance goes down).
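That relationship between spare area and write amplification is easy to see in a toy model. The simulation below is purely illustrative (my own greedy garbage collection sketch, not anything Crucial or Marvell has disclosed about their firmware): it hammers a small simulated drive with random page writes and counts how many extra internal writes the cleanup work generates at two spare area levels.

```python
# Toy flash translation layer with greedy garbage collection. Illustrative
# only; real firmware is far more complex. More spare area means GC victims
# carry fewer valid pages, so fewer extra copies per host write.
import random

def write_amp(spare_frac, blocks=256, ppb=128, host_writes=500_000):
    """Return flash programs / host writes after random-write hammering."""
    user_lbas = int(blocks * ppb * (1 - spare_frac))  # LBAs the host can see
    live = [set() for _ in range(blocks)]  # valid LBAs stored in each block
    where = {}                             # LBA -> block holding its valid copy
    free = list(range(1, blocks))          # erased blocks; block 0 starts open
    cur, used, programs = 0, 0, 0

    def program(lba):                      # write one page into the open block
        nonlocal cur, used, programs
        if used == ppb:                    # open block full: take an erased one
            cur, used = free.pop(), 0
        live[cur].add(lba)
        where[lba] = cur
        used += 1
        programs += 1

    for _ in range(host_writes):
        lba = random.randrange(user_lbas)
        if lba in where:                   # overwrite: old copy becomes stale
            live[where[lba]].discard(lba)
        if used == ppb and not free:       # nowhere left to write: GC first
            victim = min((b for b in range(blocks) if b != cur),
                         key=lambda b: len(live[b]))   # fewest valid pages
            movers = list(live[victim])
            live[victim].clear()
            free.append(victim)            # "erase" it (data saved in movers)
            for m in movers:               # relocate its valid pages
                program(m)
        program(lba)
    return programs / host_writes

for spare in (0.07, 0.25):                 # roughly the two scenarios tested
    print(f"{spare:.0%} spare -> write amplification ~{write_amp(spare):.2f}")
```

In this model, write amplification drops substantially as spare area grows from the ~7% default to 25%, which is exactly the lever the spare area buttons below are pulling.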

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive chart: IOPS vs. time over the full 2000-second run, log scale. Drives: Corsair Neutron 240GB, Crucial m4 256GB, Crucial M500 960GB, Plextor M5 Pro Xtreme 256GB, Samsung SSD 840 Pro 256GB. Buttons: Default / 25% Spare Area]

Like most consumer drives, the M500 exhibits the same pattern: awesome performance for a short while, followed by substantial degradation. The improvement over the m4 is just insane though. Whereas the M500 sees its floor at roughly 2600 IOPS, the m4 will drop as low as 28 IOPS. That's slower than mechanical hard drive performance and around the speed of random IO on a mainstream ARM-based tablet. To say that Crucial has significantly improved IO consistency from the m4 to the M500 would be an understatement.

Plextor's M5 Pro is an interesting comparison because it uses the same Marvell 9187 controller as the M500. While both drives attempt to be as consistent as possible, the differences in their firmware/GC routines show up clearly in these charts. Plextor's performance is both higher and more consistent than the M500's.

The 840 Pro comparison is interesting because Samsung manages better average performance, but considerably worse consistency than the M500. The 840 Pro does an amazing job with 25% additional spare area however, something that can't be said for the M500. Although the M500's performance definitely improves with 25% spare area, the gains aren't as dramatic as they are with Samsung. Although I didn't have time to run through additional spare area points, I do wonder whether we might see bigger improvements with even more spare area, considering that ~7% of the 25% spare area is already reserved for RAIN.

[Interactive chart: IOPS vs. time at the start of steady state operation (t=1400s onward), log scale. Same drives and spare area buttons as above]

I am relatively pleased by the M500's IO consistency without any additional over-provisioning. I suspect that anyone investing in a 960GB SSD would want to use as much of it as possible. At least in the out-of-box scenario, the M500 does better than the 840 Pro from a consistency standpoint. None of these drives, however, holds a candle to Corsair's Neutron. The Neutron's LAMD controller shows its enterprise roots and delivers remarkably high and consistent performance out of the box.

[Interactive chart: IOPS vs. time at the start of steady state operation, linear scale capped at 40K IOPS. Same drives and spare area buttons as above]

Comments

  • gochichi - Wednesday, April 10, 2013 - link

    Crucial is in a unique position. I don't think people care about performance numbers. What we know is that SSDs are either a nightmare or a dream. What we want is a dream. People want simple, understandable marketing.

    My favorite SSD so far is a Monster Digital 240GB Daytona. It has been absolutely flawless. But the 120GB version is a lemon. Reselling the drive would never happen. Monster Digital is probably not going to be a player in the SSD market going forward.

    My point is, what's at stake here is who's the next Seagate? The next Western Digital? Of SSDs. Samsung can do no wrong, much like Apple. And yet this weird little company called Crucial has enjoyed tremendous on-the-street notoriety with their M4 series.

    As far as I can tell the M4 is a little outdated. My question is why not release an M5? Why 500? Why waste so much consumer goodwill? Is it just that this drive isn't good? Or not good enough for proper successorship?

    I don't know why I've purchased Crucial drives before; it started with a little 64GB m4. Then I just trusted the m4 line. My point is why does Crucial carry bad models and why so many confusing numbers? The m4 is a golden opportunity. Where's that trusty m5 follow-up? Samsung has understandable generations and model lines. They're making sure they're the Western Digital of SSDs. Why isn't Crucial doing similar?
  • Solid State Brain - Wednesday, April 10, 2013 - link

    The reason why it's M500 and not M5 is probably because of Plextor:
    http://www.plextor-digital.com/index.php/en/M5-Pro...
  • Kristian Vättö - Thursday, April 11, 2013 - link

    Crucial isn't exactly small, they are a subsidiary of Micron. As said above, Plextor has M5S and M5 Pro SSDs so M5 would have been very confusing, hence the M500. The OEM version of M4 was C400, so it's actually not that confusing.
  • FunBunny2 - Thursday, April 11, 2013 - link

    -- My point is, what's at stake here is who's the next Seagate? The next Western Digital? Of SSDs.

    Getting harder to say. The three well-known public companies doing SSDs (mostly) as such, STEC, OCZ, Fusion-io, have been missing all targets for at least a couple of quarters. Violin may or may not IPO in the next few months.

    The reasonable answer is that there won't be a Seagate or WDC for SSD. It's well understood how to take commodity HDD to Enterprise Drive, using tighter QA and some incrementally better parts at modest cost. With SSD, as this review shows, "progress" in feature shrink isn't improving any of the factors at lower cost. It is quite perverse. The NAND suppliers will come to dominate consumer SSD, with performance asymptotically approaching a bit better than current HDD, with a price premium. Look for TLC, with huge erase blocks, long latencies, slowing controllers (having to do all that much more work to get around the NAND).

    Enterprise SSD will likely fade away, to be replaced by NAND arrays, along the line of the Sun/Oracle device, which has been around for a few years.
  • dilidolo - Wednesday, April 10, 2013 - link

    Everyone else mentioned Super Cap in M500 but not here. I just want to confirm if it's true.
  • Tjalve - Friday, April 12, 2013 - link

    There seem to be capacitors on the drive. But I would like to know as well.
  • klmccaughey - Thursday, April 11, 2013 - link

    The pricing is WAY off. £274 ($420) for the 240GB one in the UK!!! They must be mad.
  • philipma1957 - Friday, April 12, 2013 - link

    The 960GB was $570 at Amazon. At your price x4 it would be $1680. That is a lot of value-added tax.
  • philipma1957 - Friday, April 12, 2013 - link

    I just went on Amazon UK: the 240GB is £168 and the 480GB is £321. Somewhat better than the price you found.
  • Karol Bulova - Saturday, April 13, 2013 - link

    I own a Samsung 840 Pro (it had a cash-back offer recently) so I welcome these comments from articles on AnandTech.

    'The 840 Pro does an amazing job with 25% additional spare area however, something that can't be said for the M500. '

    'if you simply set aside 25% of the total NAND capacity as spare area, performance improves'

    I am running Win8 64bit with TRIM enabled - what is unclear for me though, is:

    1. is spare area just free, unoccupied space on the drive (e.g. when it is not full)
    2. or is it an unformatted partition (without a filesystem, thus no files except for the header)
    3. or should there be no partition at all, with the drive somehow figuring out that I just magically allocated spare area

    Or is there some utility from Samsung to set up spare area? Please advise: from what I understand I should reinstall Windows and choose 192GB as my main drive capacity instead of the full amount!
