Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason SSDs do not deliver consistent IO latency is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
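For reference, a workload of this shape can be expressed as a job file for the open-source fio tool. This is only a sketch of the parameters described above (it covers the random-write phase, not the initial sequential fill), not the exact harness used to produce the graphs:

```ini
; Approximation of the consistency test in fio (not the exact tool used here):
; sustained 4KB random writes at QD32 with incompressible data,
; logging one IOPS sample per second.
[consistency]
filename=/dev/sdX        ; drive under test -- this destroys all data on it
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32
refill_buffers=1         ; fresh random buffers, i.e. incompressible data
norandommap=1            ; keep hammering the full LBA range
time_based=1
runtime=2000             ; just over half an hour
write_iops_log=consistency
log_avg_msec=1000        ; average the IOPS log into per-second samples
```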

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
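The arithmetic behind "added over-provisioning by limiting the LBA range" can be sketched as follows; the helper name is ours, and note that OP is sometimes quoted relative to the tested range rather than total capacity (we use spare/total here to match the 25% configurations on this page):

```python
# Hedged sketch: "added over-provisioning" means restricting the test to a
# subset of the drive's LBA range so the controller always has spare area.
def added_op(total_capacity_gb, tested_range_gb):
    """Spare fraction created by leaving part of the LBA range untouched,
    expressed relative to total capacity (spare / total)."""
    return (total_capacity_gb - tested_range_gb) / total_capacity_gb

# Restricting a 240GB drive to 180GB of LBAs gives the 25% OP configuration
# (on top of whatever factory spare area the drive already has):
print(f"{added_op(240, 180):.0%}")  # 25%
```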

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
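As a rough illustration of how a steady-state average is derived from the raw log, here is a minimal sketch (the data layout and the t=1400s cutoff are our assumptions) that averages the per-second IOPS samples once the drive has settled:

```python
# Minimal sketch (assumed data layout): average the per-second IOPS samples
# after the drive has reached steady-state (here taken as t >= 1400s).
def steady_state_iops(samples, t_steady=1400):
    """samples: iterable of (elapsed_seconds, iops) pairs, one per second."""
    tail = [iops for t, iops in samples if t >= t_steady]
    return sum(tail) / len(tail)

# Synthetic log shaped like the behavior described below: fast for the
# first ~200 seconds, then settling at around 2,500 IOPS.
log = [(t, 30_000 if t < 200 else 2_500) for t in range(2000)]
print(steady_state_iops(log))  # 2500.0
```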

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

SanDisk Ultra II 240GB
25% Over-Provisioning

The IO consistency of the Ultra II is not too good. At steady-state it averages about 2,500 IOPS, whereas the MX100 and 840 EVO manage around 4,000-5,000. On the positive side, it takes about 200 seconds before performance starts to drop, but that is mostly because the Ultra II does not push as many IOPS in the first place, so it consumes its spare area more slowly.

Since we are dealing with a value client drive, I would not consider the IO consistency a big issue: it is very unlikely that the drive will be used in a workload even remotely comparable to our performance consistency benchmark. Nevertheless, it is always interesting to dive into the architecture of a drive. While the Ultra II is not the fastest SSD, it is still relatively consistent, which is ultimately the key to a smooth user experience.

TRIM Validation

To test TRIM, I filled the Ultra II with sequential 128KB data and then ran a 30-minute 4KB random write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as it should.



Comments

  • coder111 - Thursday, September 18, 2014 - link

    I think we can safely assume that any off-the-shelf, factory-based hardware encryption has backdoors from the NSA/GCHQ or Chinese intelligence or whoever. So it doesn't protect against governments or government-sponsored industrial espionage. In the best case it protects against information leaks from casual theft. At worst, it gives a false sense of security.

    I would never rely on anything except open-source encryption where the source is continuously being reviewed. And even that can be compromised by introducing hard-to-detect bugs.

    So I don't think hardware encryption support adds much value, if any.
  • jabber - Wednesday, September 17, 2014 - link

    Can we just cut the reviews for SATA-based SSDs down purely to "What makes this SSD radically different from the 99% of similar-performing SSDs out there"? Would just save a bit of time, that's all. SSDs are now like RAM, all much of a muchness, and we just need to know the bells and whistles, if any.
  • Kristian Vättö - Wednesday, September 17, 2014 - link

    SSDs are very different from RAM. RAM is managed by a controller integrated into the CPU, so the DRAM you buy is basically a bunch of dummy silicon with no logic of its own. If SSDs were just sticks of NAND, then you would be correct, but the truth is that SSDs have a controller and firmware, which make one SSD different from another. In addition, NAND is much more complex to manage than DRAM because of all sorts of limitations (a page is the minimum write unit, erases happen only at block granularity, etc.), so there is much more to it than with RAM, which is basically just differently binned pieces of DRAM silicon.

    We could drop all component reviews if we took your mindset because current computers can do pretty much everything that average users want them to do. However, the fact is that there are people who are interested in the details and how things work, and there are people who do more than just web browsing and email, so that is where detailed reviews are needed.
  • Powerlurker - Sunday, September 21, 2014 - link

    StorageReview basically stopped reviewing consumer-grade drives for most second and third tier manufacturers because they got tired of writing reviews that said, "Yep, it's another Sandforce reference design, and it pretty much performs the same as all the others."
  • milli - Wednesday, September 17, 2014 - link

    I feel that Anand is praising the MX100 just a little bit too much. I have first-hand experience that it's not that good (even for a budget drive). Those bad service times you measure on the MX100 are really noticeable in real-life usage.
    In my case: five identical i5 machines, four with 256GB MX100 drives, one with a 240GB PNY Optima (which was even cheaper than the MX100). During heavy work, the machine with the Optima was clearly snappier. And I don't mean I measured higher MB/s, but snappier. Everything you open just starts to happen/load a little sooner.
    This shows that your reviews for low-end consumer SSDs are too workstation/server minded. The fact that the MX100 starts being faster than the Ultra II after a barrage of 10GB of random data means basically nothing for a casual desktop user. Typical desktop usage is small bursts of data with lots of idle time. You want those bursts to start fast.
    You keep pushing encryption as a pro (and it is one), but keep in mind that probably less than 1% of people actually use it.
    My advice? Both the M500 and MX100 are laggy. Drives with Phison controllers are laggy. The Transcend SSD340 (JMicron) felt as laggy as the M500.
    The PNY Optima didn't feel laggy. The SanDisk Ultra Plus didn't feel too laggy either (not perfect though).
    I'm eager to test the Ultra II and the Hynix SH910A (because the Neutron felt so smooth).
    I don't know how you're going to measure this through benchmarks (well, the service time one seems to show it), but my brain surely is sensitive to it.
  • hojnikb - Wednesday, September 17, 2014 - link

    I smell bullshit or defective drives. There is no way the MX100, or the M500 for that matter, would feel as laggy as a Phison or JMicron drive. Both of those controllers are far inferior to Marvell's offerings. In fact, some older JMicrons and Phisons are so bad that it doesn't even take a heavy user to notice the difference or lag.
  • milli - Wednesday, September 17, 2014 - link

    Yes, true. I've got dozens of broken drives. ;) Seriously, no bullshit.
    Obviously I'm talking about drives with JMicron and Phison controllers from 2014, not the older ones.
    Also, by laggy I don't mean that I'm waiting seconds! As an enthusiast and professional of 20-plus years, a couple of fractions of a second make them feel laggier to me.
  • hojnikb - Wednesday, September 17, 2014 - link

    Well, considering the workload you have, why didn't you go with more expensive drives in the first place?

    I mean, those drives are not exactly meant for the heavy work you're describing.
  • MrSpadge - Wednesday, September 17, 2014 - link

    If one of them does it significantly better than the others, that's something worth noting. Especially if it doesn't show in the benchmarks we're used to looking at.
  • milli - Wednesday, September 17, 2014 - link

    Very true, but the clients these machines are meant for use them for light work. It's only during the preparation of these machines that I notice the differences. The only reason I used the MX100 is Anand's recommendation. At first I was skeptical about it because of the M500, and it seems that skepticism was justified. There are better cheap drives for client usage out there.
