Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to demonstrate it. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour; that's nowhere near as long as our steady-state tests, but it's enough to give a good look at drive behavior once all of the spare area fills up.

We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time and generate the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
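As a rough illustration of how those scatter plots are built, here is a minimal Python sketch (the one-sample-per-line log format and the threshold value are our own assumptions for illustration, not the actual test tooling) that turns per-second IOPS samples into (time, IOPS) points and finds the moment performance first falls off a cliff:

```python
# Sketch only: reduce a per-second IOPS log to (second, IOPS) points,
# then locate the first sample below a chosen "cliff" threshold.
# The one-integer-per-line log format is a hypothetical example.

def parse_iops_log(lines):
    """Turn per-second IOPS samples into (second, iops) pairs."""
    return [(t, int(line)) for t, line in enumerate(lines)]

def first_dropoff(samples, threshold):
    """Return the first second at which IOPS falls below threshold,
    or None if performance never drops that far."""
    for t, iops in samples:
        if iops < threshold:
            return t
    return None
```

Plotting those (second, IOPS) pairs, with IOPS on a log or linear axis, reproduces the style of the scatter graphs below.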

The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
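The partition arithmetic behind this is simple. A hedged sketch of the sizing calculation (the function name and the worked example are ours, purely for illustration):

```python
def partition_size_for_op(user_capacity_gb, op_fraction):
    """Partition size that leaves op_fraction of the drive's user
    capacity unallocated as extra spare area.

    Example: a 240GB drive set up for an extra 25% of
    over-provisioning gets a 180GB partition; the remaining 60GB
    is never written and stays available to the controller."""
    return user_capacity_gb * (1.0 - op_fraction)
```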

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
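That performance drop tracks write amplification. A simple, idealized garbage-collection model (our own illustration, not a description of any specific controller's algorithm) shows why copying valid pages inflates the number of NAND writes per host write:

```python
def gc_write_amplification(pages_per_block, valid_pages):
    """Idealized write amplification under garbage collection.

    Reclaiming a block of pages_per_block pages that still holds
    valid_pages live pages frees (pages_per_block - valid_pages)
    pages for host data, but costs pages_per_block page programs
    in total: the valid-page copies plus the new host writes."""
    free_pages = pages_per_block - valid_pages
    return pages_per_block / free_pages
```

With an empty block the ratio is 1.0 (no copying at all); if 75% of each reclaimed block is still valid, every host write costs four NAND writes, which is roughly the kind of cliff these graphs show once spare area runs out.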

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: IOPS vs. time over the full test, log scale. Drives: Intel SSD 530 240GB, Intel SSD 335 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Samsung SSD 840 Pro 256GB. Views: Default, 25% OP]

Even though the SF-2281 is over two and a half years old, its performance consistency is still impressive. Compared to the SSD 335 there has been a significant improvement, as the SSD 530 takes nearly twice as long to enter steady state. Increasing the over-provisioning doesn't seem to have a major impact on performance, which is odd. On one hand that's a good thing, as you can fill the SSD 530 without worrying that its performance will degrade; on the other hand, the steady-state performance could be better. The Corsair Neutron, for example, beats the SSD 530 by a fairly big margin with 25% over-provisioning.

[Interactive graph: IOPS vs. time at the start of steady-state operation, log scale; same drive and over-provisioning selections as above]

[Interactive graph: IOPS vs. time at the start of steady-state operation, linear scale topping out at 40K IOPS; same drive and over-provisioning selections as above]

TRIM Validation

To test TRIM, I filled the drive with incompressible sequential data and proceeded with 60 minutes of incompressible 4KB random writes at a queue depth of 32. I measured performance after the torture as well as after a single TRIM pass, using AS-SSD since it uses incompressible data and hence suits this purpose.

Intel SSD 530 Resiliency - AS-SSD Incompressible Sequential Write
                        Clean        After Torture (60 min)    After TRIM
Intel SSD 530 240GB     315.1 MB/s   183.3 MB/s                193.3 MB/s

SandForce's TRIM has never been fully functional when the drive is pushed into a corner with incompressible writes, and the SSD 530 doesn't change that. This is a real problem with SandForce drives if you're going to store lots of incompressible data (such as MP3s, H.264 videos and other highly compressed formats), because sequential speeds may suffer even more in the long run. As an OS drive the SSD 530 will do just fine since it won't be full of incompressible data, but I would recommend buying something non-SandForce if the main use will be storage of incompressible data. Hopefully SandForce's third-generation controller will bring a fix for this.
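To put the table's numbers in perspective, a quick back-of-the-envelope calculation (the helper function is our own, for illustration) shows how little of the lost speed the TRIM pass actually wins back:

```python
def trim_recovery_fraction(clean_mbps, dirty_mbps, post_trim_mbps):
    """Fraction of the performance lost to torture that a TRIM
    pass recovers (1.0 would mean a full return to clean speed)."""
    return (post_trim_mbps - dirty_mbps) / (clean_mbps - dirty_mbps)

# With the SSD 530's results above:
# (193.3 - 183.3) / (315.1 - 183.3) ≈ 0.076, i.e. TRIM recovers
# under 8% of the lost sequential write performance.
```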

Comments (60)

  • Duncan Macdonald - Saturday, November 16, 2013

    Would it be possible for you to do an additional SSD test: how much does the write performance recover after a 30 minute idle period? Most consumer PCs (and even many servers) tend to have idle periods every day, and if the garbage collection and free space erasure algorithms on the drive can get it back to a near-new condition then this would be significant.
  • 'nar - Monday, November 18, 2013

    I agree, GC and idle time make TRIM unnecessary, and even work better anyway. These benchmarks are a gross exaggeration of anything done in real-world usage. On the one hand everyone recognizes that fact, but on the other they keep hammering consistency and incompressible data like everyone does video editing all day, every day.
  • thefoodaddy - Saturday, November 16, 2013

    The prices in that table for the Seagate and Crucial 240GB are, sadly, not $150 ($220 and $180, respectively)--way to get my hopes up, Dyntamitedata.com!
  • purerice - Sunday, November 17, 2013

    On Google Shopping I just typed in "Seagate SSD 600" and selected 240GB. 50+ stores had them, and one store has them for $149.99 with $0 tax and $0 shipping.
  • Kristian Vättö - Sunday, November 17, 2013

    The prices were taken on November 12th and both drives were $150 back then (probably a temporary sale).
  • slickr - Saturday, November 16, 2013

    Man, these SSD's seem like a lot of hard work to me. I mean with all the firmware updates that need to be flashed, with all the failures that seem to be happening, with the inconsistent performance, with the prices still fairly high even after 4 years of SSD drives.

    I mean 4 years ago I thought we would have at least 250GB for $150 by around this time, by around year 2014, but we are still way off. I thought 500GB SSD's would have started becoming more mainstream in 4 years, but now that we are here, now that I'm in the future, it hasn't happened.

    In fact some of the drives are still plagued by the same problems some of the first SSD's had. I agree that the average SSD is more reliable and generally faster, but not by much, and the prices have been slow to come down.

    So I hope to see $150 250GB SSD's and more in the next several months; maybe 2014 will be the year. But I think if you just want reliability and security it's best to go with normal hard drives that have huge capacity at cheap prices; I can get 1TB for $70, which is super cheap.
  • 'nar - Monday, November 18, 2013

    I think you have been misled by the benchmarks. They do not compare SSD's to hard drives, so you have no perspective. I recommend SSD's for everyone. They are faster and more reliable. Get a hard drive if you want your 1TB of storage, which will be pictures, music, and video anyway, all things that would not benefit from SSD speeds.

    The only concern is that many people who complain about reliability fail to mention the model of SSD that failed on them. I use Intel/SandForce drives for systems I build for others, and OCZ/SandForce/Bigfoot on all of my own, and have never had a problem. I suspect that those looking for cheap get cheap. If you want reliability, don't look for the cheapest drive. As in all things, you get what you pay for. Find yourself a good drive, THEN look for a good price on it. Don't assume that any SSD made by a particular manufacturer is good.
  • name99 - Saturday, November 16, 2013

    "The problem now is that every significant segment from a performance angle has been covered."

    Unfortunately no. If *I* were an SSD manufacturer, I'd try to differentiate myself by putting together a hybrid drive that isn't crap. It is insane that, with 2013 almost over, there is, as far as I can tell, precisely one HD available that is a hybrid drive --- and that HD is available in one form factor+size, only as a bare drive, and with a minuscule pool of flash.
    Complain about Apple all you like, but at least they have done (within the scope of what they control) something about this --- unlike freaking WD, Seagate, SanDisk and everyone else.

    Why haven't SanDisk (or SandForce, or Samsung, or Toshiba, or ...) done something about this? Put together a decent package of some RAM, some flash, a controller, and firmware that does the caching properly, and sell it to WD or Seagate to glue onto 1TB+ size drives. Apple's solution is expensive, probably too expensive, because it's using pretty good quality flash and a lot of it. Cut down to 48 or 32GB of flash that's slightly slower and I think you could still give a heck of a kick to a drive at an additional cost of $30 to $50. I'd certainly be willing to pay this.

    I do not understand WD and Seagate. You go to Best Buy or Frys today, and they're each trying to reach out at you with a huge collection of basically identical drives --- they'll sell you a 2TB 2.5" in a green version, a black version, a red version, a blue version. (And those are not case colors, they are supposedly different models.)
    The one thing they won't sell is the thing that would actually make a difference, that I'd be willing to pay for: a freaking HYBRID version that consists of more than 8GB of crappy bargain-bin flash and lame caching software that won't even capture writes.
  • Bob Todd - Sunday, November 17, 2013

    Indeed. While it would be great if every laptop with a 2.5" drive had an mSATA or M.2 slot available, they are still the minority. I have SSDs as the boot drives of every machine sans one laptop, which still has one of the 7200rpm 750GB Seagate SSHDs. I want at least 500GB of capacity for that machine, but I don't really want to drop the money for an SSD that big. A 7mm 500+ GB drive with 32+ GB of NAND needs to happen.
  • emvonline - Sunday, November 17, 2013

    100% agree. A 32GB SSD + 1TB HDD would cover all storage needs and be very fast for 90% of all work. On the rare occasion (once a month, say) that you load a seldom-accessed 1GB video, it would take 3 seconds longer than an SSD. All this assumes the cache software works correctly :-)
