Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying amounts of empty space, which is a more realistic scenario for client workloads.
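On Linux, the workload above can be approximated with fio. The commands below are an illustrative sketch, not the article's actual tooling (AnandTech uses Iometer); fio, the placeholder device `/dev/sdX`, and the exact runtime are my assumptions.

```shell
# Illustrative sketch only: fio and /dev/sdX are assumptions (the article
# uses Iometer). Both commands destroy all data on the target device.

# Step 1: fill the secure-erased drive sequentially so every user LBA holds data.
sudo fio --name=fill --filename=/dev/sdX --rw=write --bs=128k \
    --direct=1 --ioengine=libaio

# Step 2: 4KB random writes across all LBAs at QD32 with incompressible data,
# logging average IOPS once per second for just over half an hour.
sudo fio --name=consistency --filename=/dev/sdX --rw=randwrite --bs=4k \
    --iodepth=32 --direct=1 --ioengine=libaio --refill_buffers \
    --time_based --runtime=2000 --log_avg_msec=1000 \
    --write_iops_log=consistency
```

For the added over-provisioning runs, appending `--size=75%` to the second command restricts the random writes to 75% of the LBA range, leaving roughly 25% of the drive as extra spare area.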

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom in on the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives.
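The per-second IOPS log these graphs are drawn from can also be summarized numerically. A minimal sketch, assuming the trace is just a list of (second, IOPS) pairs; the format and the helper are hypothetical, not AnandTech's actual tooling:

```python
from statistics import mean, stdev

def steady_state_summary(samples, start_s=1400):
    """Return (mean, stdev, min) IOPS over the steady-state window.

    samples: iterable of (time_in_seconds, iops) pairs, one per second.
    start_s: where steady state begins (t=1400s in this test).
    """
    iops = [v for t, v in samples if t >= start_s]
    return mean(iops), stdev(iops), min(iops)

# Hypothetical trace: fast ramp at 9,000 IOPS, then steady state near
# 5,000 IOPS with periodic drops to 800, the shape the graphs visualize.
trace = [(t, 9000) for t in range(0, 1400)]
trace += [(t, 5000 if t % 100 else 800) for t in range(1400, 2000)]

avg, sd, worst = steady_state_summary(trace)
```

The minimum and standard deviation capture exactly what the graphs show visually: a drive can have a healthy average while still dropping to unusable IOPS for individual seconds.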

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: full test duration (log scale); WD Black2 120GB vs. Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525 and Plextor M5M; Default / 25% OP]

The area where low-cost designs usually fall behind is performance consistency, and the JMF667H in the Black2 is no exception. I was actually expecting far worse results, although the JMF667H is certainly one of the worst SATA 6Gbps controllers we've tested lately. The biggest issue is the inability to sustain performance: while the thickest line sits at ~5,000 IOPS, performance constantly drops below 1,000 IOPS and occasionally even to zero. Increasing the over-provisioning helps a bit, although no amount of over-provisioning can fix a design issue this deep.

[Graph: start of steady-state operation (log scale); WD Black2 120GB vs. Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525 and Plextor M5M; Default / 25% OP]

[Graph: start of steady-state operation (linear scale); WD Black2 120GB vs. Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525 and Plextor M5M; Default / 25% OP]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA space, QD32) for 30 minutes. After the torture I TRIM'ed the drive (a quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.

Based on our sequential Iometer write test, write performance should be around 150MB/s after a secure erase. It seems that TRIM doesn't work perfectly, but performance would likely recover further after some idle time.
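For reference, a rough Linux analogue of this check can be sketched with stock tools. The device name is a placeholder and the commands are my assumption; the article's actual method is a Windows quick format followed by HD Tach.

```shell
# Illustrative sketch only; destroys all data on /dev/sdX.

# Discard every LBA, roughly what the TRIM pass of a Windows quick format does:
sudo blkdiscard /dev/sdX

# Re-measure sequential write speed; with a working TRIM implementation it
# should return to near fresh-drive levels:
sudo fio --name=trimcheck --filename=/dev/sdX --rw=write --bs=128k \
    --direct=1 --ioengine=libaio --time_based --runtime=60
```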

Comments

  • piroroadkill - Thursday, January 30, 2014 - link

    I don't know what WD was thinking with this product; I read other reviews before.
    A terrible SSD and a normal HDD with no caching...
    ... For a price that's equal to buying a Samsung 840 Evo 500GB. This product has no purpose.
  • RealBeast - Thursday, January 30, 2014 - link

    "(the first generation 80GB Intel X-25M cost $595) and performance wasn't much better than what hard drives offered"

    Nonsense, the X-25M was a huge improvement over HDDs and it got the whole SSD thing going. I replaced 4 RAID 0 Raptors for my OS with an X-25M at around $450 and never looked back.

    I still use my original three X-25M drives as Adobe scratch drives and they are going strong well beyond 150GB of writes to each. I doubt that my current 250/256-480/500GB OS drives will outlive them.

    Black 2 makes sense for laptops with only one slot; there's no real place for it in desktops unless the price gets competitive with two separate drives.
  • Kristian Vättö - Friday, January 31, 2014 - link

    I didn't specifically mean the X-25M, I just used it as a pricing example. It was one of the first SSDs that didn't suck but some of the SSDs before it were truly horrible and could barely compete with hard drives.
  • xrror - Monday, February 3, 2014 - link

    The irony is guess who made the controllers on many of those early drives that sucked? ;)
  • Frallan - Friday, January 31, 2014 - link

    Too little, too late

    This is just 2 bad drives in one package - combining the bad of both sides - and expensive to boot.

    Just my 0.02€
  • name99 - Friday, January 31, 2014 - link

    Of course on a Mac the smart thing to do would be to immediately run Core Storage to fuse the two "partitions" together, giving a genuine hybrid drive with genuine hybrid performance.

    If WD had the slightest intelligence, they would cobble together some basic program that could do all this automatically --- set up the appropriate partition table, set the partition types, then run diskutil cs to perform the fuse operation. Mac users may be less numerous than Windows users, but they also tend to have more money to spend on peripherals... But they're not going to spend all that glorious money that has made Apple so rich on companies that treat them like second-class citizens...
  • stratum - Friday, January 31, 2014 - link

    Does this work under Linux?
  • jeffbd - Friday, January 31, 2014 - link

    Doesn't work on Linux without access to a Windows OS? Pass. I was going to buy this too. Oh well. I'll stick with my dual-drive setup using separate components for now.
  • Horsepower - Saturday, February 1, 2014 - link

    My desktop system has no internal hard drive, just a removable rack which I use for booting different drives. My most recent refresh included a Seagate SSHD with only a slight performance increase over my previous Velociraptors. This could be useful for my setup.
  • 0ldman79 - Tuesday, February 4, 2014 - link

    I just keep thinking about data recovery on the mechanical drive.

    If a driver is required to access the 1TB spinner, then exactly how are we supposed to use various low-level data recovery tools?

    I can't see recommending this to my customers. I'm a bit nervous about using one for anything other than a gaming rig.
