Performance Consistency

In our Intel SSD DC S3700 review, Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, which called for additional testing to demonstrate it. The reason SSDs don't deliver consistent IO latency is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speed. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while deferring it can yield higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below, we take a freshly secure-erased SSD and fill it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour; that's nowhere near as long as our steady-state tests, but it's enough to give a good look at drive behavior once all spare area fills up.

We record instantaneous IOPS every second for the duration of the test, plot IOPS vs. time, and generate the scatter plots below. The graphs within each set share the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS for better visualization of the differences between drives.
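As a rough illustration of how a per-second IOPS log can be summarized (this is a sketch for the curious reader, not our actual tooling), the snippet below reports the mean, minimum, and standard deviation of the samples. A small spread relative to the mean indicates consistent performance, while a low minimum exposes the worst-case stalls that an average alone would hide:

```python
import statistics

def summarize_iops(samples):
    """Summarize a list of per-second IOPS samples.

    Returns (mean, minimum, stdev). The minimum captures worst-case
    stalls; the stdev captures overall consistency.
    """
    return (statistics.mean(samples),
            min(samples),
            statistics.stdev(samples))

# Example: a drive that mostly holds ~30K IOPS but stalls occasionally.
samples = [30000, 29500, 30200, 4000, 29800, 30100]
mean, worst, spread = summarize_iops(samples)
print(f"mean={mean:.0f} IOPS, worst={worst} IOPS, stdev={spread:.0f}")
```

A scatter plot of the raw samples, as used in the review, reveals even more than these summary numbers because it shows *when* the stalls happen.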

The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. Each button is labeled with the user capacity the vendor would have advertised had it shipped the SSD with that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you're working backwards you can always create the spare area partition, format it to TRIM it, and then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
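The arithmetic behind the partitioning trick is simple. As a hedged sketch (the helper name is hypothetical, and it works in round decimal gigabytes for illustration), this computes the partition size to create so that a chosen percentage of the user capacity is left unpartitioned as extra spare area:

```python
def partition_size_gb(user_capacity_gb, spare_pct):
    """Size of the partition to create so that `spare_pct` percent of
    the drive's user capacity is left unpartitioned, matching the
    approach in the text of leaving the remainder of the drive unused.
    """
    if not 0 <= spare_pct < 100:
        raise ValueError("spare_pct must be in [0, 100)")
    return user_capacity_gb * (1 - spare_pct / 100)

# Running a 1000GB drive with 25% extra over-provisioning:
print(partition_size_gb(1000, 25))  # partition 750GB, leave 250GB unused
```

The unpartitioned (and TRIM'ed) space is never written by the OS, so the controller is free to treat it as additional spare area.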

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
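The connection between spare area and that performance cliff can be sketched with a toy model (an idealized approximation for illustration, not a description of any specific controller): once free blocks run out, the garbage collector must reclaim blocks whose pages are, on average, some fraction u still valid, and those valid pages have to be copied before new host writes can land. Write amplification then grows sharply as u approaches 1:

```python
def write_amplification(valid_fraction):
    """Toy greedy-GC model: freeing one block whose pages are a
    fraction `valid_fraction` (u) still valid requires rewriting the
    valid pages elsewhere, yielding only (1 - u) pages for host data.
    Each block of flash programs thus serves (1 - u) host pages,
    so WA = 1 / (1 - u).
    """
    if not 0 <= valid_fraction < 1:
        raise ValueError("valid_fraction must be in [0, 1)")
    return 1 / (1 - valid_fraction)

# More spare area lets GC pick emptier victim blocks (lower u):
print(write_amplification(0.5))  # 2.0: half the flash writes are GC copies
print(write_amplification(0.9))  # ~10: performance collapses
```

This is why the 25% OP runs below hold up so much better: extra spare area keeps the average victim block emptier, so less time is spent on read-modify-write housekeeping.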

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: 4KB random write IOPS over time (log scale), full 2000-second run. Drives: Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 1TB. Views: Default, 25% OP.]

As expected, IO consistency is mostly similar to the regular EVO's. The only difference appears in steady-state behavior: the 2.5" EVO exhibits more up-and-down movement, whereas the EVO mSATA is more consistent. This might be due to the latest firmware update, which changed some TurboWrite algorithms; it seems TurboWrite kicks in on the 2.5" EVO every once in a while to boost performance. (Our EVO mSATA has the latest firmware, but the 2.5" EVO was tested with the original firmware.)

Increasing the OP on the EVO mSATA results in noticeably better performance but also causes some odd behavior. After about 300 seconds, IOPS repeatedly drops to 1,000 until the drive evens out after 800 seconds. I'm not sure what exactly is happening here, but I've asked Samsung to check whether this is normal and to provide an explanation if they can. My educated guess would again be TurboWrite: the drive seems to be reorganizing blocks to bring performance back to its peak level. If the controller focuses too heavily on reorganizing existing blocks of data, the latency of incoming writes will increase (and IOPS will drop).

[Interactive graph: 4KB random write IOPS over time (log scale), zoomed to the start of steady-state operation (t=1400s). Drives: Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 1TB. Views: Default, 25% OP.]


[Interactive graph: 4KB random write IOPS over time (linear scale, capped at 40K IOPS), zoomed to the start of steady-state operation. Drives: Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 1TB. Views: Default, 25% OP.]


TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data, then tortured the drive with 4KB random writes (100% LBA span, QD=32) for 60 minutes. After the torture, I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.

Surprisingly, it's not. Based on our Iometer tests, the write speed should be around 300MB/s for the 250GB model, but here performance is only 100-150MB/s across the earliest LBAs. Sequential writes do restore performance slowly, but even after a full drive's worth of writes, performance has not fully recovered.

Samsung SSD 840 EVO mSATA Resiliency - Iometer Sequential Write

| Drive | Clean | Dirty (40min torture) | After TRIM |
|-------|-------|-----------------------|------------|
| Samsung SSD 840 EVO mSATA 120GB | 180.4MB/s | 69.3MB/s | 126.2MB/s |
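To put those numbers in perspective, a quick back-of-the-envelope check (using the figures from the table above) shows how much of the clean-drive performance TRIM actually restored:

```python
# Sequential write speeds in MB/s, from the table above.
clean, dirty, after_trim = 180.4, 69.3, 126.2

recovered = after_trim / clean * 100
print(f"After TRIM the drive runs at about {recovered:.0f}% of its clean speed")
```

In other words, TRIM leaves roughly 30% of the clean-drive sequential write performance on the table, which is what makes this result so unusual.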

At first I thought this was an error in our testing, but I was able to duplicate the issue with our 120GB sample using Iometer (i.e. a 60-second sequential write run in Iometer instead of HD Tach). Unfortunately, I ran out of time to test this issue more thoroughly (e.g. does a short period of idling help?), but I'll be sure to run more tests once I get back to my testbed.

65 Comments

  • wingless - Thursday, January 09, 2014 - link

    1TB in that small package?! 2014 is really the FUTURE! This will turn my laptop into a monster.
  • Samus - Thursday, January 09, 2014 - link

    What's interesting is it isn't even worth considering these models UNLESS you go with 1TB, because all the other capacities aren't nearly price competitive with the competition. Fortunately for Samsung, there is no competition at the flagship capacity, so they could charge whatever they want.
  • Kristian Vättö - Friday, January 10, 2014 - link

    Like I said, those are MSRPs, not street prices. The MSRPs of the 2.5" EVO are only $10 less but as you can see, the street prices are significantly lower.
  • TheSlamma - Wednesday, January 15, 2014 - link

    it's the present
  • jaydee - Thursday, January 09, 2014 - link

    Hard for me to justify the 48-55% price premium of the 840 EVO over the Crucial M500 (250 GB and 500 GB versions). At some point "faster" SSD's hits diminishing returns in "real life" scenario's...
  • fokka - Thursday, January 09, 2014 - link

    "I wasn't able to find the EVO mSATA on sale anywhere yet, hence the prices in the table are the MSRPs provided by Samsung. For the record, the MSRPs for EVO mSATA are only $10 higher than 2.5" EVO's, so I fully expect the prices to end up being close to what the 2.5" EVO currently retails for."

    meaning: the prices will go down, once broadly available.
  • emn13 - Thursday, January 09, 2014 - link

    On the desktop? Given the lack of power-loss protection, the 840 EVO is probably a worse choice even at comparable prices.

    But on mobile? Sudden power loss is less likely (though background GC complicates that picture), and the 840 EVO's lower power draw, particularly in idle, extends battery life.

    I'm pretty sure I'd opt for the 840 EVO on a battery-powered device, assuming the price difference isn't too great.
  • nathanddrews - Thursday, January 09, 2014 - link

    If it helps your decision at all, I just upgraded my wife's notebook (Lenovo Y580) from a 2.5" 250GB Samsung 840 (not pro) to an mSATA 240GB Crucial M500 (and then put the stock 750GB HDD back in) and it's phenomenal. The M500 feels snappier, but that could just be due to restoring the existing Windows image onto a clean drive. Either way, it was a great $130 upgrade.

    If you have a free mSATA port on your notebook, it's a no-brainer to get an SSD for it.
  • Solid State Brain - Thursday, January 09, 2014 - link

    The trim behavior might be something introduced with one of the latest firmwares. I have a Samsung 840 250GB and I recently tried doing some steady state tests. After hammering it with writes, trim does not restore performance immediately. However, with normal usage/light workloads, or keeping the drive idle, it will eventually (in a matter of hours) return to its initial performance.

    I guess this is some kind of strategy to improve long term wear/stability/write endurance. Maybe some sustained write protection kicks in to avoid writing immediately at full speed after trimming the free space.
  • Solid State Brain - Thursday, January 09, 2014 - link

    PS: where's the edit button to fix typos/errors, etc, when needed?? :(
