Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. SSDs cannot deliver perfectly consistent IO latency because every controller inevitably has to perform some amount of defragmentation or garbage collection to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
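The per-second IOPS samples lend themselves to a simple consistency summary. The sketch below is a hypothetical helper (not our actual test harness) that computes the average, worst-case, and coefficient of variation from such samples; the sample data is illustrative only:

```python
# Sketch: summarize per-second IOPS samples from a sustained 4KB QD32
# random-write run. The sample list here is illustrative, not real data.
from statistics import mean, stdev

def consistency_summary(iops_samples):
    """Return average IOPS, worst-case IOPS, and the coefficient of
    variation (stdev/mean) -- a lower CV means more consistent IO."""
    avg = mean(iops_samples)
    return {
        "avg_iops": avg,
        "min_iops": min(iops_samples),
        "cv": stdev(iops_samples) / avg,
    }

# Example: a drive that starts fast, then drops into steady state.
samples = [80000] * 60 + [12000] * 120 + [11000, 13000] * 60
print(consistency_summary(samples))
```

Two drives with the same average can have very different worst-case and CV figures, which is exactly what the steady-state graphs below expose.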

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
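As a rough illustration of why limiting the LBA range adds spare area, the sketch below sums the drive's inherent spare (raw NAND minus user capacity) with any user LBAs the workload never touches. The capacity figures are assumptions for a generic 256GB-class drive, not measured values:

```python
# Sketch: effective spare area when the test limits the LBA range.

def effective_spare_gib(user_gib, raw_gib, written_fraction):
    """Spare area the controller can use for garbage collection:
    inherent spare (raw minus user capacity) plus any user LBAs
    the workload never touches."""
    inherent = raw_gib - user_gib
    untouched = user_gib * (1 - written_fraction)
    return inherent + untouched

# Assumed 256GB-class drive: 256 GiB of raw NAND, ~238.4 GiB user
# capacity. The default run writes all LBAs; the 25% over-provisioning
# run writes only 75% of them.
print(effective_spare_gib(238.4, 256, 1.0))   # inherent spare only
print(effective_spare_gib(238.4, 256, 0.75))  # with added OP
```

More spare area means the controller has to do less on-the-fly cleanup, which is why the added over-provisioning runs are consistently faster.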

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

Micron M600 256GB
Default
25% Over-Provisioning

The 1TB M600 actually performs significantly worse than the 256GB model, most likely due to the tracking overhead that the increased capacity causes (more pages to track). Overall IO consistency has not really changed from the MX100, as Dynamic Write Acceleration only affects burst performance. I suspect the firmware architectures for sustained performance are similar between the MX100 and M600, although with added over-provisioning the M600 is a bit more consistent.


TRIM Validation

To test TRIM, I filled the 128GB M600 with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

It appears that TRIM does not fully recover the SLC cache as the acceleration capacity seems to be only ~7GB. I suspect that giving the drive some idle time would do the trick because it might take a couple of minutes (or more) for the internal garbage collection to finish after issuing a TRIM command.
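The ~7GB figure is read off the HD Tach trace by finding where write speed falls from cached to native levels. A hypothetical sketch of automating that follows; the trace and the 250MB/s threshold are made up for illustration:

```python
# Sketch: estimate the accelerated (SLC-cached) capacity from a write
# speed trace like HD Tach's by finding where the sustained speed
# first drops below a threshold between cache and native NAND speed.

def accelerated_capacity_gb(trace, threshold_mbps):
    """trace: list of (position_gb, write_mbps) samples in LBA order.
    Returns the position where speed first falls below the threshold."""
    for position_gb, mbps in trace:
        if mbps < threshold_mbps:
            return position_gb
    return trace[-1][0]  # never dropped: the whole span was accelerated

# Illustrative trace: fast writes for the first ~7GB, slower after.
trace = [(i * 0.5, 380 if i * 0.5 < 7 else 140) for i in range(40)]
print(accelerated_capacity_gb(trace, 250))
```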


56 Comments


  • milli - Monday, September 29, 2014 - link

    The MX100 already had terrible service time. The M600 is even worse.
    I mean if it's even worse than this showing the MX100 delivered (http://techreport.com/r.x/adata-sp610/db2-100-writ... then forget about it.
  • milli - Monday, September 29, 2014 - link

    http://techreport.com/r.x/adata-sp610/db2-100-writ...
    Link got messed.
  • BedfordTim - Monday, September 29, 2014 - link

    If service times are such an issue, why did Tech Report give the MX100 an Editor's Choice award?
  • milli - Monday, September 29, 2014 - link

    Because everybody is a sucker for low prices.
  • menting - Monday, September 29, 2014 - link

    I guess you go out and buy the fastest, regardless of price then?
  • milli - Monday, September 29, 2014 - link

    Obviously not. I'm just giving one of the main reasons why the MX100 wins so many awards.
  • Samus - Monday, September 29, 2014 - link

    It's still a better drive than competing products in its price segment. The only other drive that comes close is the 840 Evo (which apparently has some huge performance bugs on static data - and support is terrible...the bug has existed for over a year.)

    You could consider spending more money on an Intel drive or something from Sandisk, but most consumers need something "reliable-enough" and price is always the driving factor in consumer purchases. If that weren't true, you wouldn't see so many Chevy Cobalts and Acer PCs.

    The irony is, for price and reliability, the best route is a used Intel SSD 320 (or even an X25-M) off eBay for $60. They never fail and have a 15-year lifespan under typical consumer workloads. They're still SATA 3Gbps, but many people won't notice the difference if coming from a hard disk. Considering the write performance of many cheap SSDs anyway (such as the M500), the performance of a 4-year-old Intel SSD might even still be superior.
  • Cellar Door - Monday, September 29, 2014 - link

    My X-25M failed after 2 years of use, so please don't use the word 'never'. Intel sent me a 320 as a replacement, due to the 3-year warranty. Performance-wise it's ancient, but still an SSD.
  • Samus - Monday, September 29, 2014 - link

    Like many SSD's, they are prone to failure from overfapping.
  • Lerianis - Friday, October 03, 2014 - link

    Eh? Overwriting, I think you mean. That said, all of these drives should be able to handle at least 20GB of writes per day for years without issues.
