Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance leads to application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
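A workload along these lines can be expressed as an fio job file. This is only a sketch of the described methodology, not the actual test script used for the article (the device path, runtime, and log name are assumptions):

```ini
; 4KB random writes at QD32 across all LBAs, incompressible data
[io-consistency]
filename=/dev/sdX       ; target drive -- destructive, overwrites everything
rw=randwrite
bs=4k
iodepth=32
ioengine=libaio
direct=1                ; bypass the page cache
refill_buffers          ; fresh random (incompressible) data for every write
time_based
runtime=2000            ; just over half an hour
write_iops_log=arc100   ; per-second IOPS samples
log_avg_msec=1000
```

Limiting the tested LBA range (e.g. `size=75%`) is one way to emulate the added over-provisioning configurations.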

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
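The steady-state averages discussed below can be derived from the per-second IOPS log by discarding everything before t=1400s. A minimal sketch, with synthetic data standing in for a real log (the 80K/17.5K IOPS figures are illustrative assumptions, not measurements):

```python
import random
import statistics

random.seed(0)
# Synthetic stand-in for a real per-second IOPS log: high fresh-out-of-box
# performance that collapses once the drive reaches steady state at ~t=1400s.
iops_log = [random.gauss(80_000, 2_000) for _ in range(1400)] + \
           [random.gauss(17_500, 4_000) for _ in range(600)]

steady = iops_log[1400:]          # keep only the steady-state window
mean = statistics.mean(steady)
stdev = statistics.stdev(steady)
print(f"steady-state: {mean:,.0f} IOPS avg, {stdev:,.0f} IOPS stdev")
```

The standard deviation matters as much as the average here: two drives with the same mean IOPS can feel very different if one swings wildly between its garbage-collection bursts.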

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency over the full test duration, log scale — OCZ ARC 100 240GB (dropdown: Default / 25% Over-Provisioning)]

The performance consistency takes a small hit compared to the Vector 150 and Vertex 460, but compared to the other value drives the ARC 100 offers amazing consistency. Most of the performance gain is due to the higher default over-provisioning (12% vs 7% in other value SSDs), although the Barefoot 3 platform has always done well when it comes to consistency. While there is quite a bit of variation in IOPS, the average is still somewhere between 15K and 20K IOPS, whereas for example the MX100 only provides about 5K IOPS at steady-state.
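The over-provisioning figures above follow directly from the gap between raw NAND capacity and user-visible capacity. A quick back-of-the-envelope check, assuming 256GiB of raw NAND in both drive classes and counting spare area as a fraction of raw capacity (definitions vary between vendors):

```python
def op_percent(raw_bytes: int, usable_bytes: int) -> float:
    """Spare area as a percentage of raw NAND capacity."""
    return 100 * (raw_bytes - usable_bytes) / raw_bytes

RAW = 256 * 2**30  # 256GiB of raw NAND

# A 240GB drive reserves noticeably more spare area than a 256GB one.
print(f"240GB drive: {op_percent(RAW, 240 * 10**9):.1f}%")  # ~12.7%
print(f"256GB drive: {op_percent(RAW, 256 * 10**9):.1f}%")  # ~6.9%
```

That extra ~6 points of spare area gives the controller more room to shuffle blocks during garbage collection, which is exactly what the steady-state graphs reward.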

[Graph: steady-state zoom, log scale — OCZ ARC 100 240GB (dropdown: Default / 25% Over-Provisioning)]
[Graph: steady-state zoom, linear scale — OCZ ARC 100 240GB (dropdown: Default / 25% Over-Provisioning)]


50 Comments


  • blackmagnum - Tuesday, August 26, 2014 - link

    I'd rather have the cheaper/faster SanDisk Extreme II and its 10-year warranty. Reply
  • Lonyo - Tuesday, August 26, 2014 - link

    10 year warranty doesn't mean much. In 5 years, if it dies you would probably just replace it with something far faster and cheaper. The warranty doesn't get you your data back, it just gets you a replacement drive.
    After 3 years you might like a replacement drive, but much more than that and it becomes pretty meaningless to get a replacement slow/old drive considering how quickly SSDs have advanced. A 10 year warranty is pretty meaningless for this type of product, which is probably why Sandisk are happy to offer it. It gives false peace of mind and they know most people wouldn't take them up on it.
    Reply
  • FunBunny2 - Tuesday, August 26, 2014 - link

    -- but much more than that and it becomes pretty meaningless to get a replacement slow/old drive considering how quickly SSDs have advanced

    We got the Great Recession because Banksters asserted that home prices, which had been exponentially rising, would do so forever. We're near, if not at, the asymptotic limit of node scaling, esp. for NAND. Controller logic & error correction can do only so much.

    If the industry can invent a smaller, non-destructively written piece of memory (which responds to current semiconductor logic), then maybe. But were I to be betting, I'd bet that consumer SSDs will be only marginally better in 3 years.
    Reply
  • Kristian Vättö - Tuesday, August 26, 2014 - link

    That is what 3D NAND is for. Reply
  • xenol - Tuesday, August 26, 2014 - link

    I'd argue raw bandwidth isn't really that important, considering that RAM disks, which are comfortably over 10 times faster than an SSD in bandwidth for sequential and smaller reads, still don't offer anywhere near the performance improvement over an SSD that an SSD provides over an HDD (and even then, it's not linear). "Loading" at this point is initialization, and that depends entirely on software.

    I suppose it'd be nice if we had universal memory, but SSDs are not a good candidate for that.
    Reply
  • TheWrongChristian - Tuesday, August 26, 2014 - link

    I tell you what, I'd take that bet. With SATA being replaced as the primary interface and 3D NAND, I can easily see peak performance doubling in the next 3 years, and significant improvements in steady state performance. Reply
  • FunBunny2 - Tuesday, August 26, 2014 - link

    And what, exactly, will the normal SSD consumer (not an uber-gamer) do with that performance? Would s/he even notice? Will 3D NAND lower the price/byte to that of spinning rust? I wouldn't bet on that, either. My bet: in 3 years consumer SSDs will still be performing "good enough" for Joe Sixpack at about the same price as "good enough" does today. What may be different: PC makers, if there are any left, will ship with an SSD rather than an HDD by default, and those with the need for mass storage will buy one with both. Reply
  • oynaz - Wednesday, August 27, 2014 - link

    Performance is not the point - space is. You are correct that SSDs are already so fast that even a doubling or tripling of performance will not make much of a difference.
    However, they are still too small. 256 GB just doesn't cut it.
    ... I am not exactly sure where I am going with this ;-)
    Reply
  • Kibbles - Saturday, August 30, 2014 - link

    Typical usage patterns change with the technology available.
    Imagine when VR becomes commonplace. You'd want things to load almost instantaneously, because once the computer experience becomes truly interactive, fast response times are so much more important.
    Reply
  • leminlyme - Tuesday, September 02, 2014 - link

    I LIKE YOU, AND THE WAY YOU THINK. I must add, however, that storage access is not a bottleneck for 3D environments at the moment. If you had environments so immense and immersive that current storage read times were the bottleneck, I think 780 Tis would be worth $100 comparatively. Reply
