Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. SSDs cannot deliver perfectly consistent IO latency because every controller must eventually perform some amount of defragmentation or garbage collection to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance translates into application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that every user-accessible LBA has data associated with it. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
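
The workload can be pictured with a short sketch. This is not our actual test harness (the real test runs at QD32 via asynchronous IO against the raw device); it is a simplified synchronous illustration against an ordinary file, with a hypothetical function name:

```python
import os, random, time

def random_write_iops(path, span_bytes, seconds=5, bs=4096):
    """Issue 4KB random writes across the first span_bytes of
    `path` and record instantaneous IOPS once per second.
    Synchronous sketch (QD1); the real test runs at QD32 with
    async IO against the raw device, not a file."""
    buf = os.urandom(bs)                  # incompressible data
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    samples, done = [], 0
    tick = time.monotonic()
    deadline = tick + seconds
    try:
        while time.monotonic() < deadline:
            # pick a random 4KB-aligned offset within the span
            off = random.randrange(span_bytes // bs) * bs
            os.pwrite(fd, buf, off)
            done += 1
            now = time.monotonic()
            if now - tick >= 1.0:         # one IOPS sample per second
                samples.append(done / (now - tick))
                tick, done = now, 0
    finally:
        os.close(fd)
    return samples
```

Plotting the returned samples over the full run is essentially what the graphs below show.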

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying amounts of empty space, which is frankly a more realistic approach for client workloads.
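
The relationship between the LBA limit and the resulting over-provisioning is simple arithmetic; here is a minimal sketch (function names are illustrative, and over-provisioning is expressed relative to the used area, as is conventional):

```python
def lba_fraction_for_op(op):
    """Fraction of the drive's LBA range to touch so that the
    untouched remainder acts as over-provisioning of `op`."""
    return 1 / (1 + op)

def effective_op(total_lbas, used_lbas):
    """Over-provisioning created by leaving LBAs untouched."""
    return (total_lbas - used_lbas) / used_lbas

# To add 25% over-provisioning, restrict the test to 80% of the LBAs:
fraction = lba_fraction_for_op(0.25)   # 1 / 1.25 = 0.8
```

In other words, the "25% over-provisioning" runs simply confine all writes to the first 80% of the drive.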

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a logarithmic scale for easy comparison between drives, whereas the third uses a linear scale to better visualize the differences. Click the dropdown selections below each graph to switch the source data.
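
Deriving a single steady-state figure from the per-second log is straightforward; a sketch of how such an average could be computed (the t=1400s cutoff mirrors the graphs, and the helper name is our own):

```python
def steady_state_mean(iops_log, cutoff_s=1400):
    """Average instantaneous IOPS once the drive has reached
    steady state. `iops_log` holds one sample per second, so
    the list index doubles as the timestamp in seconds."""
    tail = iops_log[cutoff_s:]
    if not tail:
        raise ValueError("log ends before steady state")
    return sum(tail) / len(tail)
```
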

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency, full test duration (log scale) — SanDisk Ultra II 240GB; Default / 25% Over-Provisioning]

The IO consistency of the Ultra II is not too good. At steady state it averages about 2,500 IOPS, whereas the MX100 and 840 EVO manage around 4,000-5,000. On the positive side, it takes about 200 seconds before performance starts to drop, though that is mostly because the Ultra II does not provide as many IOPS in the first place.

Since we are dealing with a value client drive, I would not consider the IO consistency a big issue: it is very unlikely that the drive will ever face a workload even remotely comparable to our performance consistency benchmark. Nevertheless, it is always interesting to dive into the architecture of the drive. While the Ultra II is not the fastest SSD, it is still relatively consistent, which is ultimately the key to a smooth user experience.

[Graph: steady-state zoom, log scale — SanDisk Ultra II 240GB; Default / 25% Over-Provisioning]

[Graph: steady-state zoom, linear scale — SanDisk Ultra II 240GB; Default / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I filled the Ultra II with sequential 128KB data and then ran a 30-minute 4KB random write (QD32) workload to put the drive into steady state. After that I TRIMed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as it should.
