Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour; that's nowhere near as long as our steady state tests, but it's enough to give a good look at drive behavior once all spare area fills up.

We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time in the scatter plots below. Every graph within a set uses the same scale: the first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
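The per-second logging can be sketched roughly as follows (a minimal illustration, not our actual test harness; the cumulative-op trace and the numbers in it are made up):

```python
# Sketch: derive instantaneous IOPS from a per-second log of cumulative
# completed operations, producing the (time, IOPS) points behind the
# scatter plots. The log format and figures here are hypothetical.

def iops_series(completed_ops):
    """completed_ops[i] = cumulative ops completed by second i."""
    return [(t, completed_ops[t] - completed_ops[t - 1])
            for t in range(1, len(completed_ops))]

if __name__ == "__main__":
    # Fake trace: high performance early, steady state once spare area fills.
    trace = [0]
    for sec in range(1, 2001):
        rate = 35000 if sec < 300 else 5000  # ops/s, made-up numbers
        trace.append(trace[-1] + rate)
    points = iops_series(trace)
    # The scatter plots would then come from something like:
    #   import matplotlib.pyplot as plt
    #   plt.scatter(*zip(*points), s=1); plt.yscale("log"); plt.show()
```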

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. Each data set is labeled with the advertised user capacity the drive would have had if the SSD vendor had decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
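To put numbers on the partitioning trick, here's a small sketch (the raw NAND capacity and the definition of OP as spare area divided by raw capacity are assumptions for illustration, not vendor figures):

```python
def partition_size_for_op(raw_capacity_gb, spare_fraction):
    """User partition size (in GB) that leaves spare_fraction of the
    raw NAND untouched as effective spare area."""
    return raw_capacity_gb * (1 - spare_fraction)

# Hypothetical drive with 256 GiB of raw NAND, targeting 25% spare area:
raw_gb = 256 * 2**30 / 1e9                            # ~274.9 GB of raw NAND
print(round(partition_size_for_op(raw_gb, 0.25), 1))  # partition at ~206.2 GB
```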

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
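That dropoff can be captured with a toy garbage-collection model (a deliberate simplification with assumed page counts; real firmware behavior is far more complex): once no clean blocks remain, every user write also pays for relocating the still-valid pages of a victim block.

```python
def write_amplification(pages_per_block, valid_pages_in_victim):
    """Simple greedy-GC model: erasing a victim block frees
    (pages_per_block - valid_pages_in_victim) slots for user data,
    at the cost of rewriting the valid pages it still held."""
    freed = pages_per_block - valid_pages_in_victim
    return pages_per_block / freed

# Fresh drive: victim blocks are empty, so WA = 1.0 and IOPS is at its peak.
print(write_amplification(256, 0))    # 1.0
# Steady state with victims ~75% valid: 4x the NAND writes per user write,
# which is roughly why IOPS collapses once the spare area is used up.
print(write_amplification(256, 192))  # 4.0
```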

The second set of graphs zooms in to the beginning of steady-state operation for the drive (t=1400s). The third set also looks at the beginning of steady-state operation, but on a linear performance scale.

[Interactive graph: IOPS over the full test period. Drives: Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 1TB. Data sets: Default, 25% OP]

As expected, IO consistency is mostly similar to the regular EVO. The only change appears to be in steady-state behavior: the 2.5" EVO exhibits more up-and-down behavior, whereas the EVO mSATA is more consistent. This may be due to the latest firmware update, which changed some TurboWrite algorithms; it seems that TurboWrite kicks in every once in a while on the 2.5" EVO to boost performance (our EVO mSATA has the latest firmware, but the 2.5" EVO was tested with the original firmware).

Increasing the OP in the EVO mSATA results in noticeably better performance but also causes some odd behavior. After about 300 seconds, IOPS repeatedly drops to 1,000 until it evens out after 800 seconds. I am not sure exactly what is happening here, but I have asked Samsung whether this is normal and whether they can provide an explanation. My educated guess would again be TurboWrite: the drive appears to be reorganizing blocks to bring performance back to peak level, and if the controller focuses on reorganizing existing blocks of data, the latency for incoming writes will increase (and IOPS will drop).

[Interactive graph: IOPS at the beginning of steady-state operation, log scale. Drives: Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 1TB. Data sets: Default, 25% OP]


[Interactive graph: IOPS at the beginning of steady-state operation, linear scale. Drives: Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 1TB. Data sets: Default, 25% OP]


TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data, then tortured the drive with 4KB random writes (100% LBA space, QD=32) for 60 minutes. After the torture I TRIM'ed the drive (a quick format in Windows 7/8) and ran HD Tach to verify that TRIM is functional.

Surprisingly, it's not. Based on our Iometer test the write speed should be around 300MB/s for the 250GB model, but here performance is only 100-150MB/s for the earliest LBAs. Sequential writes do restore performance slowly, but even after writing the full drive, performance has not fully recovered.

Samsung SSD 840 EVO mSATA Resiliency - Iometer Sequential Write

                                   Clean       Dirty (40min torture)   After TRIM
Samsung SSD 840 EVO mSATA 120GB    180.4MB/s   69.3MB/s                126.2MB/s

At first I thought this was an error in our testing, but I was able to duplicate the issue with our 120GB sample using Iometer (i.e. a 60-second sequential write run in Iometer instead of HD Tach). Unfortunately I ran out of time to test this issue more thoroughly (e.g. whether a short period of idling helps), but I'll be sure to run more tests once I get back to my testbed.

Comments

  • Chubblez - Thursday, January 9, 2014 - link

    I have 8 five-shelf SANs (EMC AX4-5f), all full of 1 and 2TB drives, that would disagree with you.

    I also have 4 256G EVOs that get hammered pretty hard in an HP server, but I can't speak to the very long term effects. Initial data tells me you're wrong on that as well.
  • Death666Angel - Sunday, January 12, 2014 - link

    I have had 8 2TB drives running since October 2011 in 2 RAID 5 arrays in my file server (consumer grade Samsung, WD, Hitachi). I have had 2 1TB 2.5" drives running for 2 years in my desktop. I have had a 1TB and a 1.5TB 3.5" drive in external enclosures for well over 4 years. Not one has failed. So let's pit anecdotal evidence against anecdotal evidence, shall we?
  • dgingeri - Thursday, January 9, 2014 - link

    I'd like to see Samsung put out a 500GB or 1TB SSD with an M.2 PCIe connection. That with an adapter for two M.2 PCIe drives on a PCIe x4 card would make my year. I'd finally be able to get all the drives out of my system and bring it down to just motherboard, cards, H100, and power supply.
  • romrunning - Thursday, January 9, 2014 - link

    Why does the Samsung 840 Pro score really badly in the "Destroyer" tests, yet sit at the top of pretty much every other benchmark shown here?
  • Kristian Vättö - Thursday, January 9, 2014 - link

    Because it only has 7% OP by default, whereas most drives have 12% nowadays. In steady-state even small differences in OP can play a major role.
  • blanarahul - Thursday, January 9, 2014 - link

    Do you think it would be beneficial to 'not' have TurboWrite at or above 500GB and focus more on parallelism? I ask this because a 250GB 840 EVO achieves about 260MB/s after it runs out of TW cache. So I would expect the 500GB or above EVO to at least reach 500MB/s.
  • Solid State Brain - Friday, January 10, 2014 - link

    Question: do Samsung 840 EVO SSDs have less OP than the previous 840 ones, since a percentage of the reserved space is now used for the fixed SLC cache area/TurboWrite?
    With TRIM enabled, that should not be a problem (as it makes the drive have "dynamic OP" with free space), but under intense workloads it might be, if that's the case.
  • Kristian Vättö - Friday, January 10, 2014 - link

    Technically yes. The 840 actually has slightly better IO consistency, which the higher OP explains.

    https://dl.dropboxusercontent.com/u/128928769/IO%2...

    (The title reads Intel DC S3700, but that's because I use its graph as the basis for the other graphs to guarantee that they're all the same size)
  • romrunning - Friday, January 10, 2014 - link

    That's interesting. I guess that means if you manually over-provision the 840 Pro, then it should dominate the Destroyer benchmark as well. It would be nice to test that scenario out.
  • Kristian Vättö - Friday, January 10, 2014 - link

    Due to the nature of the Destroyer, it's unfortunately not possible to restrict the test to a certain LBA range (i.e. add OP).
