Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't get consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
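The per-second logging loop can be sketched in Python. This is a minimal stand-in under stated assumptions: it writes to a small pre-allocated file rather than the raw device, and it issues writes one at a time (the real test uses Iometer at QD32 against the full LBA range for ~2,000 seconds). The file path and span size are illustrative.

```python
import os
import random
import time

BLOCK = 4096                 # 4KB transfer size
SPAN = 64 * 1024 * 1024      # 64MB test span (the real test spans all LBAs)
DURATION = 2                 # seconds (the real test runs just over 2000s)

def run_consistency_test(path):
    """Issue random 4KB writes and log instantaneous IOPS once per second."""
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.truncate(SPAN)             # pre-allocate the test span
    buf = os.urandom(BLOCK)              # incompressible data
    iops_log = []
    with open(path, "r+b") as f:
        end = time.monotonic() + DURATION
        while time.monotonic() < end:
            tick = time.monotonic() + 1.0
            ops = 0
            while time.monotonic() < tick:
                # pick a random block-aligned offset within the span
                f.seek(random.randrange(SPAN // BLOCK) * BLOCK)
                f.write(buf)
                ops += 1
            iops_log.append(ops)         # instantaneous IOPS for this second
    return iops_log
```

Plotting the resulting log against time is what produces the consistency graphs below: a well-behaved drive shows a tight band of IOPS, while a poorly behaved one shows deep dips whenever garbage collection kicks in.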

We also test the drives with added over-provisioning by limiting the LBA range. This gives us a look at each drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
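The arithmetic behind the "25% OP" data set can be illustrated as follows; the capacity figure is the Black2's 120GB user capacity, and the 75% factor is how the limited LBA range is derived:

```python
USER_BYTES = 120 * 1000**3        # 120GB of user capacity (decimal GB)

# "25% OP" means the workload only touches 75% of the user-accessible LBAs;
# the untouched quarter acts as extra spare area for the controller, on top
# of the drive's inherent spare area.
tested_bytes = int(USER_BYTES * 0.75)
added_op = (USER_BYTES - tested_bytes) / USER_BYTES
print(f"Tested range: {tested_bytes / 1000**3:.0f}GB, added OP: {added_op:.0%}")
```

The controller can use that untouched quarter of the LBA space as scratch area for garbage collection, which is why the 25% OP runs are consistently better behaved than the default runs.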

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph 1: 4KB random write consistency, full test duration, log scale. Drives: WD Black2 120GB, Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M. Data sets: Default, 25% OP]

The area where low-cost designs usually fall behind is performance consistency, and the JMF667H in the Black2 is no exception. I was actually expecting far worse results, although the JMF667H is certainly one of the worst SATA 6Gbps controllers we've tested lately. The biggest issue is the inability to sustain performance: while the thickest line sits at ~5,000 IOPS, performance constantly drops below 1,000 IOPS and even to zero on occasion. Increasing the over-provisioning helps a bit, although no amount of over-provisioning can fix a design issue this deep.

[Graph 2: 4KB random write consistency, steady-state zoom (from t=1400s), log scale. Drives: WD Black2 120GB, Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M. Data sets: Default, 25% OP]

[Graph 3: 4KB random write consistency, steady-state zoom (from t=1400s), linear scale. Drives: WD Black2 120GB, Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525, Plextor M5M. Data sets: Default, 25% OP]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 30 minutes. After the torture, I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.

Based on our sequential Iometer write test, write performance should be around 150MB/s after a secure erase. It seems that TRIM doesn't restore performance perfectly here, but performance would likely recover further after some idle time.
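The sequential pass that HD Tach performs can be approximated with a short Python sketch; it is a crude stand-in under stated assumptions (a small file span instead of the whole device, and an illustrative file name), but the principle of measuring MB/s before and after TRIM is the same:

```python
import os
import time

def seq_write_mbps(path, span=64 * 1024 * 1024, block=1024 * 1024):
    """Write `span` bytes sequentially in `block`-sized chunks, return MB/s."""
    buf = b"\x00" * block
    with open(path, "wb") as f:
        start = time.monotonic()
        for _ in range(span // block):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # make sure the data actually hits the disk
        elapsed = time.monotonic() - start
    return (span / 1e6) / elapsed
```

Running a measurement like this after the torture and again after the quick format shows whether the TRIMed drive climbs back toward its fresh-out-of-box sequential speed.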

Comments

  • Kristian Vättö - Thursday, January 30, 2014 - link

    There is no official Mac support, so maybe. I just tried using the drive in OS X and added my findings, but unfortunately I don't have the tools to properly test it in a Mac.
  • Maltz - Thursday, January 30, 2014 - link

    To create a Fusion Drive, you have to have two separate devices, not partitions. This drive shows up as a single device.
  • name99 - Friday, January 31, 2014 - link

    I don't think that's true. diskutil (on the command line) is crazy tolerant of gluing all sorts of weird shit together. The GUI is much more strict.
    What you MIGHT have to do is first run
    diskutil ar create concat ...
    to convert each partition to an AR2 partition. You can then run diskutil cs to fuse these partitions together. As a general principle, whenever diskutil refuses to do something with a raw partition, wrapping it in an AR2 partition will get the job done.
  • Calinou__ - Thursday, January 30, 2014 - link

    This drive is the first Windrive.
  • Panzerknacker - Thursday, January 30, 2014 - link

    Very smart and interesting how they designed this. Considering the separation between SSD and HDD happens at the LBA level and is based on the partition table, I don't understand why a driver is required, though. This should work out of the box. Could you test this a bit more?
  • Xajel - Thursday, January 30, 2014 - link

    As they stated, it's a limitation of the SATA protocol. SATA is a point-to-point protocol, meaning you can't connect more than one device to the same SATA port. There is an optional extension called a SATA port multiplier: a chip at the drive end of the SATA cable (not in the cable itself) that takes a single SATA connection and fans it out to multiple drives. It isn't available in all chipsets, though, precisely because the extension is optional.

    I still hope WD will come out with another version that supports it, or who knows, maybe a small switch to select which mode you want, though that would require more space for the switching logic. My main concern with these approaches is what happens when one of the drives fails (after the warranty expires): you lose a hell of a lot!

    In the meantime, I feel the best solution is to find a laptop with both mSATA and 2.5" SATA slots, so you don't have to worry about one drive failing and losing both.
  • Panzerknacker - Thursday, January 30, 2014 - link

    No, this is not true. As they stated, separation between the SSD and HDD happens INSIDE the drive, by means of a system which sends I/O targeted at the first 120GB of the drive's sectors to the SSD, and I/O targeted at the rest of the sectors to the HDD. The system in which the Black2 is installed should see only one 1120GB disk with 2 partitions (default). I don't understand why this would require a driver, because the system does NOT know or need to know what's going on inside the drive as long as you don't mess with the default partition layout.
  • DanNeely - Thursday, January 30, 2014 - link

    The 2-partitions-in-one-drive thing only happens after the drivers are installed. It's implemented using port multiplication, which means it has 2 different drives hanging off a single SATA port. Some older computers didn't come with support for it, so WD used the "looks like only a 120GB drive by default" kludge for wider compatibility.

    While I understand the need for backward compatibility, and wouldn't have a problem if they defaulted to legacy support, I do wish they provided an option, either in a configuration app or via a jumper, that would let it expose itself as a pair of port-multiplied drives on systems that do support the feature, without having to go through WD's indirection layer.
  • extide - Thursday, January 30, 2014 - link

    Yeah, but that's not the case. Panzerknacker is right. If it truly were port multiplication, the computer would see two individual drives. Instead it sees one big one. Theoretically it should be able to work without a driver and show up as a single drive; it's kind of like a JBOD of the SSD + HDD built into the drive. That shouldn't need any drivers.

    Also, is there Linux support for this?
  • Kristian Vättö - Thursday, January 30, 2014 - link

    Not officially, but once the partitions have been set in Windows, they show up fine in Linux and OS X.
