Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason SSDs do not deliver consistent IO latency is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.
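The trade-off between servicing host writes and running cleanup can be illustrated with a toy model of greedy garbage collection (my own simplified sketch, not any vendor's actual firmware algorithm):

```python
# Toy model of SSD garbage collection (illustrative only -- real
# controller firmware is far more complex).

def pick_victim(blocks):
    """Greedy GC: reclaim the block with the fewest valid pages,
    since fewer valid pages means less data to copy elsewhere."""
    return min(blocks, key=lambda b: b["valid_pages"])

def gc_cost(block):
    """Pages that must be copied out before the block can be erased."""
    return block["valid_pages"]

blocks = [
    {"id": 0, "valid_pages": 200},
    {"id": 1, "valid_pages": 30},   # mostly invalidated by random writes
    {"id": 2, "valid_pages": 256},  # completely full of valid data
]

victim = pick_victim(blocks)
print(victim["id"], gc_cost(victim))  # -> 1 30
```

A real controller juggles wear leveling, read disturb, and power-loss protection on top of this, but the core tension is the same: every valid page in a victim block must be rewritten before the block can be erased and reused, and that work competes with host IO.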

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
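The recording side of the test is simple bookkeeping: bucket IO completion timestamps into one-second bins. A minimal sketch of that bookkeeping (mine, not the actual test harness, which runs against a raw device):

```python
from collections import Counter

def iops_per_second(completion_times):
    """Given IO completion timestamps (seconds since test start),
    return a list of instantaneous IOPS, one entry per second."""
    bins = Counter(int(t) for t in completion_times)
    duration = int(max(completion_times)) + 1
    return [bins.get(s, 0) for s in range(duration)]

# e.g. five IOs complete in the first second, none in the second, two in the third
times = [0.1, 0.2, 0.5, 0.7, 0.9, 2.1, 2.8]
print(iops_per_second(times))  # -> [5, 0, 2]
```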

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
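For reference, the effective spare area gained by limiting the LBA range works out with simple arithmetic (the capacities below are illustrative, not measured figures from this review):

```python
def over_provisioning(raw_capacity, usable_capacity):
    """Spare area as a fraction of the usable (LBA-addressable) space."""
    return (raw_capacity - usable_capacity) / usable_capacity

# Factory OP on a typical drive with 256GiB of NAND sold as a 240GB model:
raw = 256 * 2**30          # bytes of NAND
user = 240 * 1000**3       # bytes exposed to the host
print(round(over_provisioning(raw, user) * 100, 1))  # -> 14.5

# Limiting the test's LBA range adds spare area on top:
limited = user * 0.80      # only write to 80% of the LBAs
print(round(over_provisioning(raw, limited) * 100, 1))  # -> 43.2
```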

Each of the three graphs serves its own purpose. The first covers the whole duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a logarithmic scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Interactive graph: 4KB random write IO consistency, full test duration, log scale. Drives: ADATA SP610, ADATA SP920, JMicron JMF667H (Toshiba NAND), Samsung SSD 840 EVO mSATA, Crucial MX100. Views: Default, 25% OP]

Ouch, this doesn't look too promising. The SMI controller seems to be very aggressive when it comes to steady-state performance, meaning that as soon as there is an empty block it prioritizes host writes over internal garbage collection. The result is fairly inconsistent performance because for a second the drive is pushing over 50K IOPS but then it must do garbage collection to free up blocks, which results in the IOPS dropping to ~2,000. Even with added over-provisioning, the behavior continues, although now more IOs happen at a higher speed because the drive has to do less internal garbage collection to free up blocks.
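One way to put a number on this behavior is the coefficient of variation of the IOPS trace, a common consistency metric (the traces below are hypothetical, merely mimicking the shape of the graph):

```python
from statistics import mean, stdev

def consistency(iops_trace):
    """Coefficient of variation: lower means steadier performance."""
    return stdev(iops_trace) / mean(iops_trace)

# Hypothetical traces: bursty (50K IOPS spikes with ~2K GC valleys) vs steady
bursty = [50000, 2000, 2000, 50000, 2000, 2000]
steady = [18000, 17500, 18200, 17800, 18100, 17900]
print(consistency(bursty) > consistency(steady))  # -> True
```

Note that both traces can average out to roughly the same throughput; the point of the metric is that the bursty drive spends most seconds far below its average, which is what users feel as stutter.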

This may have something to do with the fact that the SM2246EN controller only has a single core. Most controllers today are at least dual-core, which means that, in the simplest scenario, one core can be dedicated to host operations while the other handles internal routines. Of course, the utilization of cores is likely much more complex and manufacturers are not usually willing to share this information, but it would explain why the SP610 has such a large variance in performance.

As we are dealing with a budget mainstream drive, I am not going to be too harsh about the IO consistency. Most users are unlikely to put the drive under a heavy 4KB random write load anyway, so for light and moderate usage the drive should do just fine; ~2,000 IOPS at its lowest is not even that bad -- and it's still a large step ahead of any HDD.

[Interactive graph: 4KB random write IO consistency, zoomed to steady state (t=1400s onward), log scale. Drives: ADATA SP610, ADATA SP920, JMicron JMF667H (Toshiba NAND), Samsung SSD 840 EVO mSATA, Crucial MX100. Views: Default, 25% OP]

Just to put things in perspective, however, even the over-provisioned "384GB" SP610 ends up offering worse consistency than the 128GB JMicron JMF667H SSD. Pricing will need to be very compelling if this drive is going to stand up against drives like the Crucial MX100.

[Interactive graph: 4KB random write IO consistency, zoomed to steady state (t=1400s onward), linear scale. Drives: ADATA SP610, ADATA SP920, JMicron JMF667H (Toshiba NAND), Samsung SSD 840 EVO mSATA, Crucial MX100. Views: Default, 25% OP]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA span, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.

And it is.
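The pass/fail judgment boils down to comparing the post-TRIM sequential pass against the fresh-drive speed. A sketch of that logic, with an illustrative threshold and made-up speeds (not measured values from this review):

```python
def trim_restored(fresh_mbps, post_trim_mbps, tolerance=0.90):
    """TRIM is judged functional if the post-TRIM sequential pass
    recovers at least `tolerance` of the fresh-drive speed."""
    return post_trim_mbps >= tolerance * fresh_mbps

# Illustrative numbers only:
print(trim_restored(fresh_mbps=430.0, post_trim_mbps=425.0))  # -> True
print(trim_restored(fresh_mbps=430.0, post_trim_mbps=180.0))  # -> False
```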


24 Comments


  • nicolapeluchetti - Friday, June 27, 2014 - link

    Has anyone any idea why the Samsung SSD 840 Pro is so bad in AnandTech Bench 2013 and so good in 2011? Here is the link: it did 142 in 2013 http://www.anandtech.com/show/8170/sandisk-extreme... But in 2011 it's number 1 http://www.anandtech.com/show/8170/sandisk-extreme...

    How is this possible? I mean, are the workloads so different? Did Samsung optimize the controller for the test?
    Reply
  • WithoutWeakness - Friday, June 27, 2014 - link

    The 2013 Bench is definitely different enough to have different results for a given drive. More detailed info on the differences between the 2011 and 2013 benches can be found here: http://www.anandtech.com/show/6884/crucial-micron-... Reply
  • Muyoso - Friday, June 27, 2014 - link

    Yea, I bought the 840 Pro on the basis of that 2011 test bench, and now every time I see an SSD review I am sad to see how ravaged it gets vs the competition. Reply
  • CrystalBay - Friday, June 27, 2014 - link

    I wouldn't worry about the 840P; it's still a top drive with excellent support. Come this September, Samsung is going to bring out some new drives. I'm very curious about what's next from them. Reply
  • Kristian Vättö - Friday, June 27, 2014 - link

    Maybe September is coming sooner than you think ;-) Reply
  • CrystalBay - Friday, June 27, 2014 - link

    Oh, what a nice surprise! Can't wait.... Reply
  • Galatian - Saturday, June 28, 2014 - link

    Which answers my question whether I should get the XP941 now for my ASRock Extreme9 or wait ;-) Reply
  • Khenglish - Friday, June 27, 2014 - link

    It has to do with how the 840Pro handles garbage collection. Basically the way the 2013 test is structured the 840Pro delays far longer than it should before reorganizing itself, but the 2011 test is less stressful in this regard. This means that the 840Pro is a very fast drive if you don't have it running at 100% at all times, but if you are then other drives are likely preferable. Reply
  • althaz - Sunday, June 29, 2014 - link

    The 2013 test is more enterprisey. The 2011 test is a better indicator of performance if you half-fill your SSD and use it for your OS plus a few core apps. If you fill it up and use it for everything, the 2013 test is more useful. Reply
  • nitro912gr - Friday, June 27, 2014 - link

    I can find the 840 EVO 250GB at the same price as that ADATA SP610; should I go with the latter since it is bundled with the 3.5" case?
    I can't see much more difference aside from that.
    Reply
