Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result it needed some additional testing to demonstrate that. The reason we don't see consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speed. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can deliver higher peak performance at the expense of much worse worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but long enough to give me a good look at drive behavior once all of the spare area has been used up.
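If you want to approximate this workload yourself, here's a minimal sketch of how it could be driven with fio from Python. To be clear, this is not the exact tooling used to generate the charts in this review; the device path is a placeholder, and running it will destroy all data on the target drive.

```python
# Rough approximation of the consistency test workload (illustrative only).
# DEVICE is a placeholder; this will erase the target drive.
import subprocess

DEVICE = "/dev/sdX"  # the freshly secure erased SSD under test (placeholder)

# Step 1: fill every user accessible LBA with sequential data.
subprocess.run([
    "fio", "--name=seq-fill", f"--filename={DEVICE}",
    "--ioengine=libaio", "--direct=1",
    "--rw=write", "--bs=128k", "--iodepth=32",
], check=True)

# Step 2: 4KB random writes at QD32 with incompressible buffers for ~2000
# seconds, logging average IOPS once per second.
subprocess.run([
    "fio", "--name=rand-write", f"--filename={DEVICE}",
    "--ioengine=libaio", "--direct=1", "--refill_buffers",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--time_based", "--runtime=2000",
    "--write_iops_log=consistency", "--log_avg_msec=1000",
], check=True)
```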

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. All of the graphs within a given set share the same scale: the first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
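For illustration, here's a minimal Python sketch of that post-processing step. It assumes a per-second IOPS log in fio's --write_iops_log format; the filename carries over from the hypothetical job above.

```python
# Minimal sketch of how the scatter plots could be reproduced from a
# per-second IOPS log ("time_ms, iops, direction, blocksize, ..." per line).
import matplotlib.pyplot as plt

times_s, iops = [], []
with open("consistency_iops.1.log") as f:      # hypothetical log file
    for line in f:
        fields = [x.strip() for x in line.split(",")]
        times_s.append(int(fields[0]) / 1000.0)  # ms -> s
        iops.append(int(fields[1]))

fig, (log_ax, lin_ax) = plt.subplots(1, 2, figsize=(12, 4))

# Full run on a log scale, as in the first two sets of graphs.
log_ax.scatter(times_s, iops, s=4)
log_ax.set_yscale("log")
log_ax.set_xlabel("Time (s)")
log_ax.set_ylabel("IOPS")

# Steady state zoom on a linear scale capped at 40K IOPS, as in the last set.
lin_ax.scatter(times_s, iops, s=4)
lin_ax.set_xlim(1400, 2000)
lin_ax.set_ylim(0, 40_000)
lin_ax.set_xlabel("Time (s)")

plt.tight_layout()
plt.show()
```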

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all of its free blocks and having to perform a read-modify-write for every subsequent write (write amplification goes up, performance goes down).
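To put rough numbers on that relationship, here's a toy model. The IOPS and write amplification figures below are made up purely for illustration, not measured from any of these drives.

```python
# Toy illustration of the performance cliff: host-visible IOPS falls roughly
# in proportion to write amplification once the spare area is exhausted.
# All numbers are made-up placeholders, not measurements.

def host_iops(nand_program_iops: float, write_amplification: float) -> float:
    """Host-visible IOPS when each host write costs WA NAND page writes."""
    return nand_program_iops / write_amplification

fresh  = host_iops(90_000, 1.0)  # empty blocks still available: WA ~= 1
steady = host_iops(90_000, 4.0)  # GC relocating valid pages: WA >> 1

print(f"fresh out of box: ~{fresh:,.0f} IOPS")
print(f"steady state:     ~{steady:,.0f} IOPS")
```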

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Scatter plots: 4KB random write IOPS vs. time over the full 2000 second run, log scale, one chart per drive]

Here we see a lot of the code re-use between the Vector and Vertex 4 firmware. The Vector performs like a faster Vertex 4, with all of its data points shifted up in the graph. The distribution of performance is a bit tighter than on the Vertex 4, and performance is definitely more consistent than on the 840 Pro. The S3700 is obviously in a league of its own here, but I do hope that over time we'll see similarly consistent drives from other vendors.

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we'll get some better visibility into how these drives will perform over the long run.

[Scatter plots: IOPS vs. time from t=1400s onward (beginning of steady state), log scale, one chart per drive]

The source data is the same; we're just focusing on a different part of the graph. Here the Vector actually looks pretty good compared to all of the non-S3700 drives. In this case the Vector's performance distribution looks a lot like SandForce's. There's a clear advantage once again over the 840 Pro and Vertex 4.

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 40K IOPS. We're also only looking at steady state (or close to it) performance here:

[Scatter plots: IOPS vs. time from t=1400s onward (steady state), linear scale capped at 40K IOPS, one chart per drive]

If we look at the tail end of the graph on a linear scale, we get a taste of just how varied IO latency can be with most of these drives. The Vector looks much more spread out than the Vertex 4, but that's largely a function of the fact that its performance is so much higher without an equivalent increase in the aggressiveness of its defrag/GC routines. The 840 Pro generally manages lower performance in this worst case scenario. The SandForce based Intel SSD 330 shows a wide range of IO latencies, but its overall performance is much better. Had SandForce not been plagued by so many poorly handled reliability issues, it might have been a better received option today.

From an IO consistency perspective, the Vector looks a lot like a better Vertex 4 or 840 Pro. Architecturally, I wouldn't be too surprised if OCZ's method of NAND mapping and flash management were very similar to Samsung's, which isn't a bad thing at all. I would like to see more emphasis placed on S3700-style IO consistency though. I firmly believe that the first company to deliver that kind of IO consistency to the client space will reap serious rewards.

Comments

  • kmmatney - Tuesday, November 27, 2012 - link

    I don't see anything wrong with stating that. My 256GB Samsung 830 also appears as a 238GB drive in Windows...
  • jwilliams4200 - Tuesday, November 27, 2012 - link

    The problem is that "formatting" a drive does not change the capacity.

    Windows is displaying the capacity in GiB, not GB. It is just a Windows bug that they label their units incorrectly.
  • Gigaplex - Tuesday, November 27, 2012 - link

    Yes and no. There is some overhead in formatting which reduces usable capacity, but the GiB/GB distinction is a much larger factor in the discrepancy.
  • jwilliams4200 - Wednesday, November 28, 2012 - link

    The GiB/GB bug in Windows accounts for almost all of the difference. It is not worth mentioning that partitioning usually leaves 1MiB of space at the beginning of the drive. 256GB = 238.4186GiB. If you subtract 1MiB from that, it is 238.4176GiB. So why bother to split hairs?
  • Anand Lal Shimpi - Wednesday, November 28, 2012 - link

    This is correct. I changed the wording to usable vs. formatted space; I was using the two interchangeably. The GiB/GB conversion is what gives us the spare area.

    Take care,
    Anand
  • suprem1ty - Thursday, November 29, 2012 - link

    It's not a bug. Just a different way of looking at digital capacity.
  • suprem1ty - Thursday, November 29, 2012 - link

    Oh wait sorry I see what you mean now. Disregard previous post
  • flyingpants1 - Wednesday, November 28, 2012 - link

    I think I might know what his problem is.

    When people see that their 1TB-labelled drive displays only 931GB in Windows, they assume it's because formatting a drive with NTFS magically causes it to lose 8% of its space, which is totally false. Here's a short explanation for newbie readers. A gigabyte (GB) as displayed in Windows is actually a gibibyte (GiB).

    1 gibibyte = 1073741824 bytes = 1024 mebibytes
    1 gigabyte = 1000000000 bytes = 1000 megabytes = 0.931 gibibytes
    1000 gigabytes = 931 gibibytes

    Windows says GB but actually means GiB.

    SSDs and HDDs are labelled differently in terms of space. Let's say they made a spinning hard disk with exactly 256GB (238GiB) of space. It would appear as 238GB in Windows, even after formatting. You didn't lose anything, because the other 18 gigs were never there in the first place.

    Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.

    If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows.
  • flyingpants1 - Wednesday, November 28, 2012 - link

    IMO drive manufacturers should stop messing around and put 256GiB of USABLE space on each 256GiB drive, and start marking them as such.
  • Holly - Wednesday, November 28, 2012 - link

    Tbh imho using base 10 units in a binary environment is just asking for a facepalm. Everything underneath runs on 2^n anyway, and this new "GB" vs "GiB" is just commercial bullshit so storage devices can be sold with flashier stickers. Your average RAID controller BIOS will show a 1TB drive as 931GB as well (at least the few ICHxR and one server Adaptec I have access to right now all do).
