Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result warranted some additional testing to demonstrate it. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while deferring it can deliver higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but long enough to get a good look at drive behavior once all spare area has been filled.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
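To illustrate how a per-second trace like this can be reduced to a few consistency numbers, here is a minimal Python sketch (not the actual tooling behind these charts; the t=1400s cutoff follows the steady state point used below, and the example trace values are made up):

```python
import statistics

def consistency_summary(iops_per_second, steady_from=1400):
    """Summarize the steady-state tail of a per-second IOPS trace.

    iops_per_second: one instantaneous IOPS sample per second,
    as recorded over a 2000-second run.
    """
    tail = sorted(iops_per_second[steady_from:])
    return {
        "mean": statistics.mean(tail),   # average steady-state throughput
        "min": tail[0],                  # single worst second
        "p01": tail[len(tail) // 100],   # ~1st percentile: worst-case feel
    }

# Hypothetical drive: ~30K IOPS while fresh, ~5K IOPS at steady state
trace = [30000] * 1400 + [5000] * 600
summary = consistency_summary(trace)
```

A tight gap between the mean and the 1st percentile is exactly what the scatter plots below make visible at a glance.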

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
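That cliff can be sketched with a toy model (all numbers hypothetical, and assuming a fixed read-modify-write penalty; real controllers blend garbage collection in far more gradually):

```python
def simulate_write_cliff(spare_blocks=70_000, n_writes=200_000,
                         fresh_iops=90_000, amplification=3.0):
    """Toy model of spare-area exhaustion under sustained random writes.

    Returns the effective IOPS seen by each write: full speed while
    fresh blocks remain, then fresh_iops / amplification once every
    write incurs a read-modify-write. Numbers are illustrative only.
    """
    iops = []
    free = spare_blocks
    for _ in range(n_writes):
        if free > 0:
            free -= 1                 # a clean block is still available
            iops.append(fresh_iops)
        else:                         # spare area exhausted: pay for GC
            iops.append(fresh_iops / amplification)
    return iops

curve = simulate_write_cliff()
```

In this model the drop is a hard step; in the real scatter plots the transition is noisier because controllers reclaim blocks continuously rather than all at once.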

The second set of graphs zooms in on the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation, but on a linear performance scale.

[Graph: 4KB random write IOPS vs. time, full 2000-second test period, log scale]

Here we see a lot of code re-use between the Vector and Vertex 4 firmware. The Vector performs like a faster Vertex 4, with all of its data points shifted up in the graph. The distribution of performance is a bit tighter than on the Vertex 4, and performance is definitely more consistent than on the 840 Pro. The S3700 is obviously in a league of its own here, but I do hope that over time we'll see similarly consistent drives from other vendors.

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we get better visibility into how each drive will perform over the long run.

[Graph: 4KB random write IOPS vs. time, steady state portion (t=1400s onward), log scale]

The source data is the same; we're just focusing on a different part of the graph. Here the Vector actually looks pretty good compared to all of the non-S3700 drives. In this case the Vector's performance distribution looks a lot like SandForce's. There's a clear advantage again over the 840 Pro and Vertex 4.

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 40K IOPS. We're also only looking at steady state (or close to it) performance here:

[Graph: 4KB random write IOPS vs. time, steady state portion, linear scale to 40K IOPS]

If we look at the tail end of the graph with a linear scale, we get a taste of just how varied IO latency can be with most of these drives. The Vector looks much more spread out than the Vertex 4, but that's largely because its performance is so much higher without an equivalent increase in the aggressiveness of its defrag/GC routines. The 840 Pro generally manages lower performance in this worst-case scenario. The SandForce based Intel SSD 330 shows a wide range of IO latencies, but its overall performance is much better. Had SandForce not been plagued by so many poorly handled reliability issues, it might be a better received option today.

From an IO consistency perspective, the Vector looks a lot like a better Vertex 4 or 840 Pro. Architecturally I wouldn't be too surprised if OCZ's approach to NAND mapping and flash management were quite similar to Samsung's, which isn't a bad thing at all. I would like to see more emphasis placed on S3700-style IO consistency though. I firmly believe that the first company to deliver that level of IO consistency in the client space will reap serious rewards.

151 Comments

  • dj christian - Thursday, November 29, 2012 - link

    "Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.

    If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows. "

    Uuh what?
  • sully213 - Wednesday, November 28, 2012 - link

    I'm pretty sure he's referring to the amount of NAND on the drive minus the 6.8% set aside as spare area, not the old mechanical meaning where you "lost" disk space when a drive was formatted because of base 10 to base 2 conversion.
  • JellyRoll - Tuesday, November 27, 2012 - link

How long does the heavy test take? The longest recorded busy time was 967 seconds from the Crucial M4. This is only 16 minutes of activity. Does the trace replay in real time, or does it run compressed? 16 minutes surely doesn't seem to be that much of a long test.
  • DerPuppy - Tuesday, November 27, 2012 - link

    Quote from text "Note that disk busy time excludes any and all idles, this is just how long the SSD was busy doing something:"
  • JellyRoll - Tuesday, November 27, 2012 - link

Yes, I took note of that :). That is the reason for the question though: if we had an idea of how long the idle periods were, we could take into account how long the GC on each drive runs, and how well it works.
  • Anand Lal Shimpi - Wednesday, November 28, 2012 - link

    I truncate idles longer than 25 seconds during playback. The total runtime on the fastest drives ends up being around 1.5 hours.

    Take care,
    Anand
  • Kristian Vättö - Wednesday, November 28, 2012 - link

    And on Crucial v4 it took 7 hours...
  • JellyRoll - Wednesday, November 28, 2012 - link

Wouldn't this compress the QD during the test period? If the SSD's recorded activity is QD2 for an hour and the trace is replayed quickly, this creates a high QD situation. QD2 for an hour compressed to 5 minutes is going to play back at a much higher QD.
  • dj christian - Thursday, November 29, 2012 - link

    What is QD?
  • doylecc - Tuesday, December 4, 2012 - link

Queue Depth
