When we reviewed Intel's SSD DC S3700 I started looking at the consistency of IO latency, an area that Intel's latest controller specifically targeted for significant improvement. In our review of OCZ's Vector I applied the same methodology to the current crop of high-end consumer drives. I believe improving IO consistency is a worthwhile optimization for all SSDs, and the hope is that we'll see improvements in this area on both client and enterprise focused drives.

In the comments thread for the Vector review, jwilliams4200 posted some very interesting data. The S3700 has 264GiB of NAND on-board but only exposes 186GiB of it (200GB advertised capacity) as user accessible storage; the rest is used as spare area to improve performance, consistency and endurance. Most client drives, on the other hand, set aside only about 7% of their total NAND capacity as spare area (256GiB of NAND, 238GiB of user storage). The obvious hypothesis is that a big part (if not all) of the S3700's advantage in performance consistency is due to its large amount of spare area.
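As a quick back-of-the-envelope check on those numbers (a sketch only, using the capacities quoted above), the difference in raw over-provisioning is substantial:

    # Spare area as a fraction of total NAND, using the capacities quoted above
    def spare_fraction(total_nand_gib, user_gib):
        return (total_nand_gib - user_gib) / total_nand_gib

    print(f"Intel SSD DC S3700 200GB: {spare_fraction(264, 186):.0%} of NAND is spare")  # ~30%
    print(f"Typical 256GB client SSD: {spare_fraction(256, 238):.0%} of NAND is spare")  # ~7%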

We've understood the relationship between spare area and write amplification for quite some time now. I borrowed the chart below from a paper published by IBM's Zurich Research Laboratory showing the relationship between write amplification and spare area for various workloads:


Write Amplification vs. Spare Area, courtesy of IBM Zurich Research Laboratory

With more spare area, write amplification is lower. The controller has more room, and thus more flexibility, in running its defragmentation and garbage collection algorithms. The dreaded read-modify-write scenario is triggered less frequently because there's less space pressure thanks to the additional spare area. If you're testing in a full-drive scenario, there's also simply less data to keep track of when you've got more NAND set aside as spare area, which keeps internal data structure footprints smaller by default and further improves performance.
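To get a feel for why the curve bends the way it does, here is a toy simulation of a flash translation layer doing greedy garbage collection under a uniform random write workload. This is a rough sketch under simplified assumptions (a single open block, uniform single-page writes, made-up block counts), not the model behind the IBM chart or any real controller's firmware, but the trend of write amplification falling as spare area grows comes through clearly:

    import random

    def write_amplification(spare_frac, user_pages=16_384,
                            pages_per_block=128, host_writes=300_000):
        # Toy FTL with greedy garbage collection. The host issues uniform random
        # single-page writes; returns total NAND page writes / host page writes.
        total_pages = int(round(user_pages / (1.0 - spare_frac)))
        n_blocks = total_pages // pages_per_block

        valid = [set() for _ in range(n_blocks)]   # logical pages held per block
        loc = {}                                   # logical page -> physical block
        free = set(range(n_blocks))
        open_blk, open_used = free.pop(), 0
        nand_writes = 0

        def append(lpn):
            nonlocal open_blk, open_used, nand_writes
            if open_used == pages_per_block:       # open block full: grab a free one
                open_blk, open_used = free.pop(), 0
            valid[open_blk].add(lpn)
            loc[lpn] = open_blk
            open_used += 1
            nand_writes += 1

        def host_write(lpn):
            if lpn in loc:
                valid[loc[lpn]].discard(lpn)       # invalidate the stale copy
            append(lpn)
            while len(free) < 2:                   # GC: clean the emptiest block
                victim = min((b for b in range(n_blocks)
                              if b != open_blk and b not in free),
                             key=lambda b: len(valid[b]))
                survivors = list(valid[victim])
                valid[victim].clear()
                free.add(victim)
                for s in survivors:                # relocations also wear the NAND
                    append(s)

        for lpn in range(user_pages):              # precondition: fill every LBA
            host_write(lpn)
        nand_writes = 0                            # measure only the random phase
        for _ in range(host_writes):
            host_write(random.randrange(user_pages))
        return nand_writes / host_writes

    # Rough illustration only -- absolute numbers depend heavily on the model
    for sf in (0.07, 0.12, 0.25, 0.50):
        print(f"{sf:.0%} spare area -> write amplification ~{write_amplification(sf):.1f}")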

It's because of this relationship between write amplification and spare area that we've always recommended setting aside 10 - 20% of your SSD and not filling it up entirely. Most modern controllers will do just fine if you partition the drive and leave the last 10 - 20% untouched. With TRIM support even the partitioning step isn't really necessary, but it does help from a data management standpoint. The real question is: what is the relationship between spare area and IO latency/performance consistency?

To find out, I repeated jwilliams4200's tests. I took a bunch of 240/256GB drives and filled them to various percentages of their capacity, and performed our IO consistency test with an identical span of LBAs. The goal was to simulate worst case IO consistency while taking into account greater percentages of spare area.

The Tests

For this investigation I picked a few of the most interesting modern SSD controllers: Samsung's MDX controller from the SSD 840 Pro, Link A Media Devices' LM87800 controller from Corsair's Neutron and OCZ's Barefoot 3 from the Vector. For all of these drives I used 256GB capacities. With the exception of the Neutron, all of the drives have ~7% of their NAND capacity set aside as spare area; the Neutron ships as a 240GB drive thanks to its ~12% spare area.

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity that would have been advertised had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives I've tested here, but not all controllers may behave the same way.
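If you want to work out the partition size yourself, the arithmetic is simple. The sketch below treats a 256GB-class drive as 256GiB of NAND exposing 238GiB to the user and glosses over the GB/GiB rounding in the capacity labels, so treat the outputs as approximations:

    def partition_size_gib(total_nand_gib, exposed_gib, target_spare_frac):
        # Partition size such that the factory spare area plus the space you
        # leave unpartitioned adds up to the target fraction of total NAND
        target_user = total_nand_gib * (1 - target_spare_frac)
        return min(target_user, exposed_gib)

    TOTAL_NAND, EXPOSED = 256, 238  # GiB, typical 256GB-class client drive
    for frac in (0.12, 0.25, 0.50):
        part = partition_size_gib(TOTAL_NAND, EXPOSED, frac)
        print(f"{frac:.0%} spare area -> partition ~{part:.0f} GiB, "
              f"leave ~{EXPOSED - part:.0f} GiB unpartitioned")

Those figures line up with the simulated capacity points used throughout this article (e.g. the 840 Pro at 225GB).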

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't see consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying cleanup can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but enough to give me a good look at drive behavior once all spare area is used up.
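Purely to illustrate the shape of the workload (this is not the tool we actually use), here is a much-simplified Python stand-in: synchronous 4KB random writes at a queue depth of 1 against an ordinary file rather than the raw device, with per-second IOPS sampling. The path and parameters are hypothetical:

    import os, random, time

    def random_write_iops(path, span_bytes, seconds=60, block=4096):
        # Simplified stand-in: synchronous 4KB random writes over a fixed span,
        # sampling IOPS once per second. The real test ran at queue depth 32
        # across the drive's full LBA span for roughly 2000 seconds.
        fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_DSYNC)  # assumes Linux
        os.ftruncate(fd, span_bytes)
        buf = os.urandom(block)                    # incompressible data
        samples, done = [], 0
        start = last = time.monotonic()
        while (now := time.monotonic()) - start < seconds:
            off = random.randrange(span_bytes // block) * block
            os.pwrite(fd, buf, off)
            done += 1
            if now - last >= 1.0:                  # one IOPS sample per second
                samples.append(done / (now - last))
                done, last = 0, now
        os.close(fd)
        return samples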

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. Within each set, the graphs share the same scale so the drives can be compared directly. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of the differences between drives.
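Recreating this style of chart from a per-second IOPS log is straightforward. A minimal matplotlib sketch, assuming samples is a list of per-second IOPS readings:

    import matplotlib.pyplot as plt

    def plot_iops(samples, title):
        # Scatter per-second IOPS vs. time: once on a log scale and once on a
        # linear scale clipped at 40K IOPS, mirroring the charts in this article
        t = range(len(samples))
        fig, (ax_log, ax_lin) = plt.subplots(1, 2, figsize=(12, 4))
        for ax in (ax_log, ax_lin):
            ax.scatter(t, samples, s=4)
            ax.set_xlabel("Time (s)")
            ax.set_ylabel("IOPS")
            ax.set_title(title)
        ax_log.set_yscale("log")
        ax_lin.set_ylim(0, 40_000)
        fig.tight_layout()
        plt.show()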

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all of its free blocks and having to perform a read-modify-write for subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

Impact of Spare Area
[Interactive chart: Intel SSD DC S3700 200GB, Corsair Neutron 240GB, OCZ Vector 256GB and Samsung SSD 840 Pro 256GB, selectable at Default, 12%, 25% and 50% spare area; not every combination was tested]

I gathered the most data on Samsung's SSD 840 Pro as it's likely to be the most popular of the drives here. If you click through the various simulated capacities/spare area points you'll see the drive's performance consistency improves tremendously once you hit 25% spare area. The drive's IO consistency goes from a spread out mess to something that resembles a thickly painted line across the graph. At 50% spare area, the 840 Pro begins to lose that initial performance cliff and instead develops a much flatter curve. Compared to the S3700 there's still some obvious variation in IO latency (and thus performance), but it's far more controlled than in the default configuration.

OCZ's Vector behaves similarly. The individual IOs on the Vector with 25% spare area are more tightly grouped than on the 840 Pro in general, but you can see OCZ's periodic defrag routine kick in with regular blips that deviate below the curve.

The LAMD based Neutron is actually remarkably consistent even in its default configuration. I originally didn't include the Neutron but Kristian theorized that LAMD's enterprise experience could make for an interesting datapoint here. He was right. The Neutron remains one of the most consistent SSDs out of the box without dedicating any additional spare area to the controller. Even compared to Samsung's SSD 840 Pro at 225GB, the Neutron manages to deliver more consistent performance. The Neutron's curve tightens up at 25% spare area but there's still a good amount of variance. The LAMD defrag routines don't look as predictable/periodic as we've seen on other drives, including the S3700. I wonder if the controller is doing something a bit more dynamic here.

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we get better visibility into how each drive will perform over the long run.

Impact of Spare Area
[Interactive chart: Intel SSD DC S3700 200GB, Corsair Neutron 240GB, OCZ Vector 256GB and Samsung SSD 840 Pro 256GB, selectable at Default, 12%, 25% and 50% spare area; not every combination was tested]

The scale isn't different here, but we do get a better look at steady state consistency. The Neutron continues to impress, looking almost identical to the S3700 if you give up 25% of the drive's NAND to spare area. Samsung also does very well at 25% spare area; the improvement there is huge.

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 50K IOPS. We're also only looking at steady state (or close to it) performance here:

Impact of Spare Area
[Interactive chart: Intel SSD DC S3700 200GB, Corsair Neutron 240GB, OCZ Vector 256GB and Samsung SSD 840 Pro 256GB, selectable at Default, 12%, 25% and 50% spare area; not every combination was tested]

Zooming in and looking at a linear scale does show us that additional spare area alone isn't enough to deliver the degree of consistency that Intel's S3700 offers. The solution really has to be a combination of controller and spare area. It's also entirely possible that a controller that prioritizes IO consistency would be able to offset the need for significant spare area. That's definitely the case with Corsair's Neutron, and I'd assume that future derivatives of Intel's S3700 with standard MLC and less spare area will behave similarly. For most consumers this likely won't matter, as minimum performance improves substantially once you hit 25% spare area on these drives, but enterprise customers with several if not dozens of SSDs in a RAID array will still benefit from the S3700's architecture.

Final Words

Delivering consistent performance is an important part of improving the user experience. As Intel's SSD DC S3700 and Corsair's Neutron have shown, consistent performance is possible with a controller that prioritizes efficient and aggressive defragmentation. I do believe it's important for controller vendors to make IO consistency a priority in their controller and firmware designs.

For drives on the market today that don't already prioritize consistent IO, it is possible to deliver significant improvements in IO consistency through an increase in spare area. OCZ's Vector and Samsung's SSD 840 Pro both deliver much better IO consistency if you simply set aside 25% of the total NAND capacity as spare area. Consumer SSD prices are finally low enough that we're no longer forced to buy the minimum capacity for our needs. Whatever drive you end up buying, plan on using only about 75% of its capacity if you want a good balance between performance consistency and capacity.

Comments

  • chrone - Friday, December 7, 2012 - link

    Great article as always, Anand!

    Quick question: will eMMC on smartphones or any other flash storage chipset out there benefit from this?
  • jameskatt - Sunday, December 9, 2012 - link

    The biggest problem I have with SSDs is that if you keep it nearly full, it wears out FASTER, then FAILS.

    SSDs have a limited lifespan. Each storage cell on an SSD can only be written to a few thousand times.

    The problem is that if the SSD is nearly full, then the cells in the space that's left have a higher likelihood of reaching their write limit. When this happens, the entire SSD fails!

    Thus, to keep the SSD from wearing out, it is best to keep 25% of it empty.

    I backup to a regular hard drive on the same computer, ready to go, in case the SSD fails.

    When SSDs fail, it's catastrophically sudden.
  • Impulses - Sunday, December 9, 2012 - link

    None of that is in any way accurate... Like at all. First off, all SSDs employ wear leveling, so even if you constantly keep it 80% or 90% full, the write operations are still distributed across all cells and stuff is regularly moved around the drive to account for this.

    You seem to be vastly overstating how much of a limited lifespan SSDs have... Feel free to refer to Anand's previous articles on the matter.

    SSDs don't generally fail suddenly and in catastrophic fashion when you're talking about cell decay either... They have SMART attributes you can monitor like on any hard drive, and even once the write limit for a cell has been reached you should still be able to read from it.
  • skroh - Tuesday, December 11, 2012 - link

    Read my post on the previous page of this thread. I have an SSD that I've been running as my main system volume for more than 2 years. According to the provided tools, I have used about 2% of its predicted life for read/write cycles. According to Anand, when these counters reach zero, they have a built-in safety margin of about 10% beyond that. Unless you are writing gigabytes per day, every day, to your drive, it will likely last longer than the motherboard you have it plugged into or the traditional hard drive you are backing it up to.

    As for the drive dying much faster if you keep it full, Google "spare area." The drive manufacturers already reserve space on the drive to reduce the dreaded write amplification. Keeping a little extra room out of the space visible to you as a user is just common sense for several reasons, including performance, but thanks to the hidden spare area it doesn't have to be anywhere near 25%. In any case, under normal non-server workloads, both wear and performance problems are much smaller than you imagine.
  • scottkarch - Tuesday, December 11, 2012 - link

    None of the RAID controllers on our servers support TRIM. Since the OS can't talk to each drive individually, I was wondering if you either know or could give me an educated guess about the following two scenarios.

    1) Make a logical RAID volume but when partitioning in the OS, set aside 25% of space as unpartitioned

    2) I vaguely recall seeing the option when making the RAID set to use a % of the available space... This was in the RAID controller, prior to OS install/config.

    I've already done #1 on a test Citrix server with SSDs. I will have to see if #2 is an option on any rebuilds we do.

    I'm wondering if this can be made to work with SSDs behind an older RAID controller... Thanks. Great article
  • scottkarch - Tuesday, December 18, 2012 - link

    Sorry to bump. I'll try to ask a different way.

    If putting SSDs behind a raid controller, does just making it part of a raid set somehow use all the available storage? Or, if you THEN only make a partition with 70% of the available space, would that unused 30% get treated as free by the disk and give the free blocks needed for garbage collection?

    2 x 512GB SSDs in a RAID1 but only make a partition of 385GB

    or

    6 x 512GB SSDs in a RAID5 = 2.56TB logical volume, and make a single 1.7TB partition.

    Would this accomplish the same thing as free space on bare drives?
  • jonjonjonj - Saturday, December 15, 2012 - link

    i would have loved to see SandForce and Marvell controllers included. i'd be willing to bet more people have a SandForce or Marvell drive than the 4 above combined. either way it's an interesting article.
  • ssd_distri - Sunday, December 16, 2012 - link

    You put the spotlight on an interesting subject, but used a scenario that is pretty useless for 99.99% of readers - the scenario is totally crazy from a desktop user's standpoint.

    How about:
    - disclose the GB of data that the SSD is filled with for the Heavy and Light Anand SSD benches
    - rerun the benches for each SSD with additional data loaded onto the SSD, so it is filled with default load / 50% / 70% / 80% / 90% (100% is pretty uninteresting imo)
    - plot a chart for the benches like always, only make it bigger and on each result bar mark respective performance losses at higher SSD capacity usage.
    - That way we can see how different SSD controllers manage workload vs increased capacity used in realistic user workloads.... ?

    And should include SF2x and V4 for reference and maybe the TLC 840
  • olee22 - Tuesday, December 25, 2012 - link

    I have this drive as system-boot drive for Windows 7.
    Kingston SSDNow V Series SNV125-S2-64GB
    http://media.kingston.com/support/downloads/MKD_10...

    What is the good way to set it up regarding free space?
    1. Format full free space, leave 20% empty all the time.
    2. Partition into two areas (80% = 52GB, 20%=13GB), and only format the bigger space, and fill it as much as wanted, and leave the smaller area unformatted.

    This SSD has no native TRIM, so I use Diskeeper to run garbage collection about every week.
