The Cleaning Lady and Write Amplification

Imagine you’re running a cafeteria. This is the real world, so you have a finite number of plates: say, 200 for the entire cafeteria. You’re open for dinner, and over the course of the night you may serve a total of 1,000 people. The guests outnumber the plates 5-to-1; thankfully, they don’t all eat at once.

You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.

Pretty basic, right? That’s how an SSD works.

Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (pages whose contents have been overwritten at the file system level, for example), it must be erased before it can be written to again.
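To make those rules concrete, here's a minimal Python sketch of the page/block asymmetry. It's purely illustrative: the page and block sizes are typical values rather than numbers from any particular drive, and a real controller does far more than this.

```python
PAGE_SIZE = 4096         # bytes per page (typical value, assumed)
PAGES_PER_BLOCK = 128    # pages per erase block (typical value, assumed)

class Block:
    def __init__(self):
        # None means the page is erased and therefore writable
        self.pages = [None] * PAGES_PER_BLOCK

    def write_page(self, index, data):
        # NAND can only program an erased page; there is no rewriting in place
        if self.pages[index] is not None:
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[index] = data

    def erase(self):
        # Erases happen at block granularity: every page is wiped together
        self.pages = [None] * PAGES_PER_BLOCK

block = Block()
block.write_page(0, b"x" * PAGE_SIZE)  # works: the page was erased
block.erase()                          # the only way to make page 0 writable again
block.write_page(0, b"y" * PAGE_SIZE)  # works again after the erase
```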

All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.

Remember this picture?

It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.

In actuality, the write happens more like this: a new block is allocated, the valid data is copied to it (along with the data you wish to write), and the old block is sent for cleaning, from which it emerges completely wiped and joins the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and then recycled back into it.
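Here's that flow spelled out as a hypothetical sketch, with blocks modeled as plain lists where None marks an invalid page. The function and variable names are mine for illustration, not anything from a real controller's firmware.

```python
PAGES_PER_BLOCK = 128

def rewrite_block(old_block, incoming_pages, free_pool, used_blocks):
    """Merge a block's valid pages plus incoming data into a fresh block."""
    new_block = free_pool.pop()                      # allocate an empty block
    valid = [p for p in old_block if p is not None]  # keep only the valid data
    merged = (valid + incoming_pages)[:PAGES_PER_BLOCK]
    for i, page in enumerate(merged):
        new_block[i] = page
    for i in range(PAGES_PER_BLOCK):                 # send the old block for cleaning:
        old_block[i] = None                          # it emerges completely wiped...
    free_pool.append(old_block)                      # ...and rejoins the empty pool
    used_blocks.append(new_block)
    return new_block

free_pool = [[None] * PAGES_PER_BLOCK for _ in range(4)]
used_blocks = []
old = [None] * PAGES_PER_BLOCK
old[0] = b"still-valid page"
rewrite_block(old, [b"the page you wanted to write"], free_pool, used_blocks)
```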

IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today, so I've remade the diagram and simplified it a bit:

The diagram explains what I just outlined above. A write request comes in, a new block is allocated and used, then added to the list of used blocks. The blocks with the least amount of valid data (or the most invalid data) are scheduled for garbage collection, cleaned, and added to the free block pool.
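The one policy decision hiding in that sentence is which used block to clean. A simple greedy rule, picking the block with the least valid data, minimizes the number of pages that have to be copied. Here's a sketch of just that selection step; real controllers also weigh things like wear leveling, which I'm ignoring here.

```python
def pick_gc_victim(used_blocks):
    """Greedy garbage collection: clean the block costing the fewest page copies."""
    def valid_pages(block):
        return sum(1 for page in block if page is not None)
    return min(used_blocks, key=valid_pages)

blocks = [
    [b"a", None, None, None],  # 1 valid page: cheapest to reclaim
    [b"a", b"b", b"c", None],  # 3 valid pages
]
assert pick_gc_victim(blocks) is blocks[0]
```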

We can actually see this in action if we look at write latencies:

Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:

While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.
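You can get a rough look at this on your own drive by timing a stream of small synchronous writes and comparing the average latency to the worst case. The sketch below is deliberately crude (the filename is a placeholder, and filesystem caching, queue depth, and alignment all muddy the numbers), but GC pauses tend to show up as outliers in the max.

```python
import os
import statistics
import time

def write_latencies(path, count=2000, size=4096):
    buf = os.urandom(size)
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(count):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # force each write out to the drive
            samples.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return samples

samples = write_latencies("latency_probe.bin")  # placeholder path on the SSD under test
print(f"avg: {statistics.mean(samples) * 1000:.3f} ms")
print(f"max: {max(samples) * 1000:.3f} ms")
```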

And this is where write amplification comes in.

In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens however, eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its data is copied to another block, after which the previous block is erased and added to the free block pool. In the diagram above you'll see the size of our write request on the left, but on the very right you'll see how much data was actually written when you take into account garbage collection. This inequality is called write amplification.


Intel claims very low write amplification on its drives, although over the lifespan of your drive a < 1.1 factor seems highly unlikely

The write amplification factor is the amount of data the SSD controller has to write relative to the amount of data the host controller wants to write. A write amplification factor of 1 is perfect: it means you wanted to write 1MB and the SSD’s controller wrote 1MB. A write amplification factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification, bad.
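The math is simple enough to do on a napkin, but here's a worked example anyway. Every number below is made up for illustration; only the formula itself follows from the definition above.

```python
host_writes_mb = 1.0    # what the host asked the drive to write (assumed)
gc_copies_mb = 0.6      # valid data the controller moved during cleaning (assumed)

nand_writes_mb = host_writes_mb + gc_copies_mb
wa_factor = nand_writes_mb / host_writes_mb
print(f"write amplification factor: {wa_factor:.2f}")  # 1.60

# Endurance shrinks proportionally: if the flash were rated for a
# hypothetical 10,000 program/erase cycles, amplification eats into it.
rated_cycles = 10_000
print(f"effective cycles: {rated_cycles / wa_factor:,.0f}")  # 6,250
```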

Comments

  • sunbear - Monday, August 31, 2009

    Even though most laptops are now SATA-300 compatible, the majority are not able to actually exceed SATA-150 transfer speeds, according to some people who have tried. I would imagine that sequential read/write performance would be important for swap, but SATA-150 will be the limiting factor for any of the SSDs mentioned in Anand's article in this case.


    Here's the situation with ThinkPads:
    http://blogs.technet.com/keithcombs/archive/2008/1...

    The new MacBook Pro is also limited to SATA-150.
  • smartins - Tuesday, September 1, 2009

    Actually, the ThinkPad T500/T400/W500 are fully SATA-300 compatible; it's only the drives that ship with the machines that are SATA-150 capped.
    I have a Corsair P64 in my T500 and get an average of 180MB/s reads, which is consistent with all the reviews of this drive.
  • mczak - Monday, August 31, 2009

    The article says you shouldn't expect it soon, but I don't think so. Several dealers already list it, though not exactly in stock (http://ht4u.net/preisvergleich/a444071.html). The price tag, to say it nicely, is a bit steep though.
  • Seramics - Monday, August 31, 2009

    Another great article from AnandTech. Kudos, guys at AT, you're my no. 1 hardware site! Anyway, it's really great that we have a truly viable competitor to Intel: Indilinx. They really deserve the praise. Now we can buy a non-Intel SSD and have no nonsensical stuttering issues! Overall, Intel is still the leader, but it's completely nonsensical how bad their sequential write speed is! I mean, it's even slower than a mechanical hard disk! That's just not acceptable given how large the performance gap is, and Intel SSDs can actually suffer significantly worse real-world performance when sequential write speed matters. Intel, fix your sequential write speed nonsense, please!
  • Shadowmaster625 - Monday, August 31, 2009

    Subtle. Very subtle. Good article though.

    3 questions:

    1. Is there any way to read the individual page history off the SSD device so I can construct a WinDirStat style graphical representation of the remaining expected life of the flash? Or better yet is there already a program that does this?

    2. Suppose I had a 2GB movie file on my 60GB Vertex drive, and suppose I had 40GB of free space. If I were to make 20 copies of that movie file, then delete them all, would that be the same as running Wiper?

    3. Any guesses as to which of these drives will perform best when we make the move to SATA-III?

    4. (Bonus) What is stopping Intel from buying Indilinx (and pulling their plug)? (Or just pulling their plug without buying them...)

  • SRSpod - Thursday, September 3, 2009

    3. These drives will perform just as they do now when connected to a 6Gbps SATA controller. In order to communicate at the higher speed, both the drive and the controller need to support it. So you'll need new 6Gbps drives to connect to your 6Gbps controller before you'll see any benefit from the new interface.
  • heulenwolf - Monday, August 31, 2009

    Yeah, once the technology matures a little more and drives become more commoditized, I'd like to see more features in terms of feedback on drive life, reliability, etc. When I got my refurb Samsung drives from Dell, for example, they could have been on the verge of dying or they could have been almost new. There's no telling. The controller could know exactly where the drive stands, however. Some kind of controller-tracked indication of drive life left would be a feature that might distinguish comparable drives from one another in a crowded marketplace.

    While they're at it, a tool that let you adjust values such as the amount of space not reported to the OS, with output in terms of write amplification and predicted drive life, would be really nifty.

    Sure, it's over the top, but we can always hope.
  • nemitech - Monday, August 31, 2009

    I picked up an Agility 120GB for $234 last week from eBay ($270 list price, minus 6% Bing cashback and a $20 PayPal discount). I'm sure there will be similar deals around Black Friday. $2 per GB is possible for a good SSD.
