Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. SSDs do not deliver consistent IO latency because every controller must inevitably perform some amount of defragmentation or garbage collection in order to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
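For readers who want to approximate this methodology themselves, it maps roughly onto an fio job file like the sketch below. This is our illustration, not the actual test harness used here; the target device, job names, and logging options are assumptions, and running it will destroy all data on the target device.

```ini
; Sketch of the consistency test: sequential fill, then sustained
; 4KB random writes at QD32 with incompressible data.
; /dev/sdX is a placeholder for a secure-erased drive -- DESTRUCTIVE.
[precondition-seq-fill]
filename=/dev/sdX
rw=write
bs=128k
direct=1
ioengine=libaio

[consistency-4k-randwrite]
stonewall                       ; start only after the fill completes
filename=/dev/sdX
rw=randwrite
bs=4k
iodepth=32
direct=1
ioengine=libaio
refill_buffers
buffer_compress_percentage=0    ; incompressible data
time_based
runtime=2000                    ; "just over half an hour"
log_avg_msec=1000               ; one IOPS sample per second
write_iops_log=consistency
```

The per-second IOPS log written by `write_iops_log` is what the graphs below are built from.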

We also test the drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
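Limiting the LBA range increases the effective spare area because the NAND behind the excluded LBAs becomes extra scratch space for the controller. A minimal sketch of the arithmetic (the 256 GiB raw / 240 GB user figures are illustrative assumptions for a drive in this class, and over-provisioning conventions vary between vendors):

```python
# Effective over-provisioning as a fraction of the addressed capacity,
# when only part of the LBA range is written. Illustrative numbers only.
def effective_op(raw_gib: float, user_gb: float, lba_fraction: float = 1.0) -> float:
    """Spare NAND divided by the capacity actually being addressed."""
    user_gib = user_gb * 1e9 / 2**30      # decimal GB -> binary GiB
    addressed = user_gib * lba_fraction   # portion of user space in use
    return (raw_gib - addressed) / addressed

print(f"full LBA range: {effective_op(256, 240):.1%}")
print(f"75% LBA range:  {effective_op(256, 240, 0.75):.1%}")
```

Restricting writes to 75% of the LBAs more than triples the spare area available to the controller, which is why the over-provisioned runs look so much steadier.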

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a logarithmic scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: OCZ ARC 100 240GB (Default / 25% Over-Provisioning)]

The performance consistency takes a small hit compared to the Vector 150 and Vertex 460, but against the other value drives the ARC 100 offers amazing consistency. Most of the advantage is due to the higher default over-provisioning (12% vs 7% in other value SSDs), although the Barefoot 3 platform has always done well when it comes to consistency. While there is quite a bit of variation in IOPS, the average still sits somewhere between 15K and 20K IOPS, whereas the MX100, for example, only manages about 5K IOPS at steady state.
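The "variation vs. average" distinction can be made concrete by summarizing the steady-state portion of the per-second IOPS log. A minimal sketch in Python, using synthetic numbers in place of a real log (the generated values are illustrative assumptions, not measured data):

```python
import random
import statistics

# Synthetic stand-in for a per-second IOPS log; a real analysis would
# load the values recorded during the test instead.
random.seed(0)
iops_log = [random.gauss(17500, 4000) for _ in range(2000)]

# Steady state is assumed to begin at t = 1400 s, as in the graphs.
steady = iops_log[1400:]
mean = statistics.mean(steady)
stdev = statistics.pstdev(steady)
cv = stdev / mean   # coefficient of variation: lower = steadier drive

print(f"steady-state mean: {mean:,.0f} IOPS")
print(f"coefficient of variation: {cv:.2f}")
```

Comparing the coefficient of variation across drives separates a drive that is merely fast on average from one that is actually consistent.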


54 Comments

  • Anato - Thursday, August 28, 2014 - link

    Those pull down menus are pain to use. Please use old buttons instead. Thanks!
  • MHz Tweaker - Sunday, August 31, 2014 - link

Of the 12 SSDs I have purchased in the last 5 years:

    qty 4 Vertex 2's
    qty 1 HyperX 3K
    qty 2 Vertex 4's
    qty 3 Samsung EVO's
    qty 2 Samsung 840 Pro's

    I have had 2 failures, both OCZ drives (one Vertex 2 and one Vertex 4)
    The Vertex 2 died within a few months of purchase
    The Vertex 4 died in just under a year

My top choice would be Samsung, then Crucial second, and maybe another HyperX 3K drive third
  • danwat1234 - Wednesday, January 21, 2015 - link

In the article, I don't really see how the Arc 100, Vector 150, and Vertex 460 aren't OK for the laptop crowd. They only draw about half a watt at idle, less than the roughly 1 watt of a typical 5400 RPM laptop drive. It is unfortunate that DIPM isn't supported, but no big deal.

In the article, don't the Arc 100, Vector 150, and Vertex 460 all use the same 19nm flash, even though you say the Arc 100 uses slower flash? I know the controller in the Vertex 460 and Arc 100 is clocked lower than in the Vector 150 (350 vs 400 MHz or so), and I think the DRAM cache is slightly slower too.

    Thanks
  • danwat1234 - Wednesday, January 21, 2015 - link

Also, how much lower is the performance of the 120GB Arc 100 versus the 240GB version? Fewer dies mean less performance.
