Performance Consistency

In our Intel SSD DC S3700 review, Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to demonstrate it. The reason we don't get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can deliver higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below, we take a freshly secure-erased SSD and fill it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, far shorter than our steady-state tests but long enough to give a good look at drive behavior once all the spare area fills up.
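A workload along these lines can be reproduced with fio. The article doesn't name the tool or exact parameters AnandTech used, so the job file below is an approximation; `/dev/sdX` is a placeholder for the target drive, and running this destroys all data on it:

```ini
; Precondition pass: fill every LBA sequentially, then hammer the drive
; with 4KB random writes at QD32 for ~2000 seconds.
[global]
ioengine=libaio
direct=1
filename=/dev/sdX
randrepeat=0

[sequential-fill]
rw=write
bs=128k
stonewall

[random-write-torture]
rw=randwrite
bs=4k
iodepth=32
refill_buffers
time_based
runtime=2000
stonewall
```

`refill_buffers` keeps the write buffers random (incompressible), which matters for compressing controllers like SandForce; `stonewall` makes the torture phase wait for the fill pass to finish.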

We record instantaneous IOPS every second for the duration of the test, then plot IOPS vs. time to generate the scatter plots below. The graphs within each set share the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
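The logging and plotting side of this is straightforward. A minimal sketch follows; the two-column "seconds,iops" log format is an assumption for illustration, not AnandTech's actual tooling:

```python
# Parse a per-second IOPS log and prepare the data behind the scatter plots.
# Assumed log format: one "elapsed_seconds,iops" pair per line.
import csv
import io

def parse_iops_log(text):
    """Parse 'seconds,iops' lines into a list of (t, iops) tuples."""
    points = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) != 2:
            continue
        points.append((int(row[0]), float(row[1])))
    return points

def steady_state_window(points, start=1400):
    """Keep only samples at or after t=start, matching the zoomed-in graphs."""
    return [(t, iops) for t, iops in points if t >= start]

if __name__ == "__main__":
    log = "1,90000\n2,88000\n1400,4500\n1500,4700\n"
    pts = parse_iops_log(log)
    print(len(pts), len(steady_state_window(pts)))
    # Plotting (optional, requires matplotlib):
    # import matplotlib.pyplot as plt
    # t, iops = zip(*pts)
    # plt.scatter(t, iops, s=2); plt.yscale("log"); plt.show()
```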

The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the user capacity the vendor would have advertised had it shipped the drive with that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare-area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
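The arithmetic behind those capacity buttons is simple. In the sketch below, the 256GiB raw NAND figure is an assumption typical for "240GB"-class drives, not a measured value:

```python
# Toy arithmetic behind the over-provisioning buttons: given raw NAND
# capacity and the partition size you actually write to, what fraction
# of the flash is effectively spare area?

GIB = 1024**3   # NAND is manufactured in binary-sized capacities
GB = 1000**3    # advertised user capacities are decimal

def effective_op(raw_nand_bytes, partition_bytes):
    """Fraction of raw NAND left as spare area when only
    `partition_bytes` of the drive is ever written."""
    return 1.0 - partition_bytes / raw_nand_bytes

if __name__ == "__main__":
    raw = 256 * GIB        # assumed raw NAND for a "240GB" drive
    print(f"default (240GB partition): {effective_op(raw, 240 * GB):.1%}")
    print(f"with a 192GB partition:    {effective_op(raw, 192 * GB):.1%}")
```

Shrinking the partition from 240GB to 192GB roughly doubles the effective spare area, which is why the steady-state graphs change so much between buttons on most drives.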

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp drop-off. What you're seeing there is the drive allocating new blocks from its spare area, then eventually using up all of its free blocks and having to perform a read-modify-write for every subsequent write (write amplification goes up, performance goes down).
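That free-block-exhaustion cliff can be illustrated with a toy page-mapped FTL model. This is a deliberate simplification with made-up constants and a greedy garbage-collection policy; real controllers use far more sophisticated mapping and GC than this:

```python
# Toy FTL model: writes are cheap while free blocks last; once they run out,
# every reclaimed block's still-valid pages must be rewritten first, so the
# ratio of flash page programs to user writes (write amplification) rises.
import random

PAGES_PER_BLOCK = 64
TOTAL_BLOCKS = 256
USER_PAGES = int(TOTAL_BLOCKS * PAGES_PER_BLOCK * 0.90)  # ~10% spare area

def simulate(num_user_writes, seed=0):
    """Return write amplification after `num_user_writes` random 1-page writes."""
    rng = random.Random(seed)
    loc = {}                                        # lpn -> block holding its copy
    block_lpns = [set() for _ in range(TOTAL_BLOCKS)]
    free = list(range(1, TOTAL_BLOCKS))
    active, fill = 0, 0
    flash_writes = 0
    for _ in range(num_user_writes):
        lpn = rng.randrange(USER_PAGES)
        if lpn in loc:                              # overwrite: invalidate old copy
            block_lpns[loc[lpn]].discard(lpn)
        if fill == PAGES_PER_BLOCK:                 # active block is full
            if free:
                active, fill = free.pop(), 0        # cheap: grab a free block
            else:
                # Greedy GC: erase the block with the fewest valid pages,
                # paying to rewrite those pages (the read-modify-write cost).
                victim = min(range(TOTAL_BLOCKS), key=lambda b: len(block_lpns[b]))
                flash_writes += len(block_lpns[victim])
                active, fill = victim, len(block_lpns[victim])
        loc[lpn] = active
        block_lpns[active].add(lpn)
        fill += 1
        flash_writes += 1
    return flash_writes / num_user_writes

if __name__ == "__main__":
    print("fresh drive WA: ", simulate(5_000))     # free blocks remain: WA = 1.0
    print("steady-state WA:", simulate(200_000))   # GC kicks in: WA > 1
```

While free blocks remain, write amplification is exactly 1.0; once they're gone, every block reclaim drags valid pages along, so physical writes outnumber user writes and per-IO latency climbs accordingly.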

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Graphs: 4KB random write IOPS vs. time over the full 2000s run, log scale. Drives: Corsair Force LS 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Kingston SSDNow V300 240GB, Samsung SSD 840 Pro 256GB; default and 25% over-provisioning.]

Performance consistency with the Force LS is okay for a budget drive. It's clearly nowhere near the Neutron, but compared to the Samsung SSD 840 EVO and Pro it's not bad (though keep in mind that the Force LS has more over-provisioning by default). What's a bit of a letdown is that increasing the over-provisioning doesn't really improve IO consistency: the drive takes longer to enter steady state, but the actual steady-state performance is essentially unchanged.

[Graphs: steady-state zoom (t=1400s onward), log scale. Drives: Corsair Force LS 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Kingston SSDNow V300 240GB, Samsung SSD 840 Pro 256GB; default and 25% over-provisioning.]


[Graphs: steady-state zoom (t=1400s onward), linear scale topping out at 40K IOPS. Drives: Corsair Force LS 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Kingston SSDNow V300 240GB, Samsung SSD 840 Pro 256GB; default and 25% over-provisioning.]

TRIM Validation

Our performance consistency tests take a deeper dive into worst-case performance than our old TRIM/garbage collection tests did, but the HD Tach method is still handy for checking TRIM functionality. As before, I filled the drive with sequential data and then tortured it with 4KB random writes (QD=32, 100% LBA span) for 30 minutes:

Worst-case performance is pretty low, as we saw in the IO consistency tests. However, Phison's garbage collection is rather aggressive: during a single HD Tach pass, performance climbs from ~30MB/s to a peak of 350MB/s.

A single TRIM pass recovers performance back to the original ~360MB/s.

Comments (27)

  • Runamok81 - Wednesday, September 25, 2013 - link

    Looks like we have a shrinking middle class with SSDs as well. Does this mean manufacturers should focus on one extreme of the performance/value slider or else risk consumers purchasing leftover stock from last year's tech?
  • MrSpadge - Wednesday, September 25, 2013 - link

    The problem with budget and middle-class SSDs is that the bulk of the cost goes into the flash - which you have to buy anyway. The controller does cost a bit, but you can't save much by making an SSD slower. That's why it's not really worth it for customers to spend a little less for a significantly slower SSD. Exception: the Samsung 840/840 Evo. They've still got the excellent controller and at least decent performance, yet they mostly cost significantly less than others.
  • ericbentley - Monday, September 30, 2013 - link

    The Samsung 840/840 Evo can afford to use a good controller yet still be budget-oriented because they use TLC flash, while the Corsair LS here still uses MLC. While MLC is better in terms of longevity, most people still want the benefits of speed from the controller, and MLC vs. TLC is a back-burner issue for them.

    I'm wondering if Corsair had tried TLC before and had some reason for not using it in a drive like this; it seems like a no-brainer to me, unless they couldn't secure a large enough supply.
  • Kristian Vättö - Wednesday, September 25, 2013 - link

    I think we are starting to get to a similar point as where DRAM is now. For an average user, the difference between low-end and high-end SSDs is becoming negligible because even the low-end SSDs are pretty good now (e.g. the Force LS). That means the middle class no longer serves a purpose: average users will mostly go with the cheaper options, and enthusiasts only want the fastest.
  • vol7ron - Wednesday, September 25, 2013 - link

    The big thing still out there that hasn't been answered is lots of flash for cheap. When buying an SSD, many customers look for the fastest, since the cost difference between what's available is marginal. However, the one area of the market that is still "expensive" is mass flash at 1TB+ (or even 512GB+). Even with a slower controller/NAND, I think Corsair could slide into this space. Look at Apple and what they're shipping their new Macs with - what's it called? Fusion? - essentially a hybrid drive, rebranded. I'm sure a company that focuses on the best balance of quality vs. cost wouldn't do that if SSD costs were lower. There's still a niche market for high-capacity flash, the ultimate HDD terminator.
  • Spoony - Sunday, September 29, 2013 - link

    Apple is shipping a 128GB SSD alongside a normal platter drive in 1TB or 2TB configurations. Fusion Drive is just the name for a logical volume manager with block migration. They are not re-branding a hybrid drive like a Momentus XT. It is a custom solution which merges two discrete devices in software, for better or worse.

    I definitely agree with you. 1-2TB SSDs at $0.40/GB rather than the current $0.95/GB would be very compelling. I would buy one if it was reliable, even if it wasn't blazing fast.
  • Spoony - Sunday, September 29, 2013 - link

    1TB or 3TB configs. Not 2TB. Sorry.

    Also, edit functionality would be convenient.
  • Cumulus7 - Wednesday, September 25, 2013 - link

    Since you recommend the Samsung EVO over the Crucial M500: aren't you concerned that the EVO may not last as long as the M500?
    I prefer the M500 at the moment since I expect its NAND to last a lot longer. But I may be wrong...
  • fokka - Wednesday, September 25, 2013 - link

    Of course MLC should theoretically last longer, but that doesn't mean TLC doesn't last more than long enough, as you can read here:

    http://www.anandtech.com/show/7173/samsung-ssd-840...
  • MrSpadge - Wednesday, September 25, 2013 - link

    I'd choose and recommend the Evo as well, for all typical users. People write much less on average than they fear they might. It's only a different story for power users, servers, etc.
