The Cleaning Lady and Write Amplification

Imagine you're running a cafeteria. This is the real world, and your cafeteria has a finite number of plates: say, 200 in total. Your cafeteria is open for dinner, and over the course of the night you may serve a total of 1,000 people. The guests outnumber the plates 5-to-1; thankfully, they don't all eat at once.

You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.

Pretty basic, right? That’s how an SSD works.

Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (pages whose files have been overwritten at the file system level, for example), it must be erased before it can be written to.
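Here's a minimal sketch of that rule in Python; the geometry (128 pages per block) and the Block model are hypothetical, not any particular drive's layout:

```python
PAGES_PER_BLOCK = 128  # hypothetical geometry; real drives vary

class Block:
    """A NAND block: pages are programmed one at a time,
    but erasure happens only to the block as a whole."""
    def __init__(self):
        # None = erased page; otherwise a ("valid" | "invalid", data) tuple
        self.pages = [None] * PAGES_PER_BLOCK

    def write_page(self, index, data):
        # A page can only be programmed if it's in the erased state.
        if self.pages[index] is not None:
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[index] = ("valid", data)

    def invalidate_page(self, index):
        # Overwriting a file at the file system level doesn't touch the NAND;
        # the old page is simply marked stale (invalid).
        _, data = self.pages[index]
        self.pages[index] = ("invalid", data)

    def erase(self):
        # Erase operates on the entire block, never a single page.
        self.pages = [None] * PAGES_PER_BLOCK
```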

All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.

Remember this picture?

It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.

In actuality, the write happens more like this: a new block is allocated, valid data is copied to the new block (including the data you wish to write), and the old block is sent for cleaning, from which it emerges completely wiped. The old block is then added to the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and recycled back into it.
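As a rough sketch of that flow, reusing the hypothetical Block model from the sketch above (a deque stands in for the pool of empty blocks):

```python
from collections import deque

free_blocks = deque(Block() for _ in range(4))  # pool of erased blocks

def rewrite_block(old_block, new_data_by_index):
    """Handle a write into a block holding both valid and invalid pages."""
    new_block = free_blocks.popleft()            # 1. allocate a fresh block
    dest = 0
    for i, page in enumerate(old_block.pages):   # 2. copy surviving valid data...
        if page is not None and page[0] == "valid" and i not in new_data_by_index:
            new_block.write_page(dest, page[1])
            dest += 1
    for data in new_data_by_index.values():      # ...including the incoming write
        new_block.write_page(dest, data)
        dest += 1
    old_block.erase()                            # 3. old block emerges wiped
    free_blocks.append(old_block)                # 4. recycled into the pool
    return new_block
```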

IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today, so I've remade the diagram and simplified it a bit:

The diagram explains what I just outlined above. A write request comes in; a new block is allocated, used, then added to the list of used blocks. The blocks with the least amount of valid data (or the most invalid data) are scheduled for garbage collection, cleaned, and added to the free block pool.
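Continuing the same toy model, the scheduling step might look something like this: the victim is simply the used block with the fewest valid pages, and whatever is still valid gets relocated before the erase:

```python
def valid_page_count(block):
    return sum(1 for p in block.pages if p is not None and p[0] == "valid")

def collect_garbage(used_blocks, free_blocks):
    # Victim = the used block with the least valid (most invalid) data.
    victim = min(used_blocks, key=valid_page_count)
    survivors = [p[1] for p in victim.pages if p is not None and p[0] == "valid"]

    dest = free_blocks.popleft()
    for i, data in enumerate(survivors):  # relocate the surviving data
        dest.write_page(i, data)

    used_blocks.remove(victim)
    victim.erase()                        # clean the victim block...
    free_blocks.append(victim)            # ...and return it to the free pool
    used_blocks.append(dest)
    return len(survivors)                 # extra pages written by this GC pass
```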

We can actually see this in action if we look at write latencies:

Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:

While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.
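If you want to see the effect for yourself, one crude way (this isn't the benchmark used here; the file name and sizes are arbitrary) is to time a long series of small synchronous writes on a POSIX system and compare the average to the worst case:

```python
import os
import time

PAGE = 4096  # hypothetical 4KB write size
latencies = []

# O_SYNC forces each write to actually reach the drive before returning.
fd = os.open("latency_test.bin", os.O_WRONLY | os.O_CREAT | os.O_SYNC)
try:
    for _ in range(10_000):
        start = time.perf_counter()
        os.write(fd, os.urandom(PAGE))  # random data for each write
        latencies.append(time.perf_counter() - start)
finally:
    os.close(fd)

avg = sum(latencies) / len(latencies)
print(f"avg: {avg * 1000:.3f} ms, max: {max(latencies) * 1000:.3f} ms")
```

On a drive that's mid-cleaning, a handful of those samples will spike well above the average, much like the max latencies in the chart above.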

And this is where write amplification comes in.

In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens, however; eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its valid data is copied to another block, after which the old block is erased and added to the free block pool. In the diagram above you'll see the size of our write request on the left, and on the very right you'll see how much data was actually written once you take garbage collection into account. This inequality is called write amplification.


Intel claims very low write amplification on its drives, although over the lifespan of your drive a < 1.1 factor seems highly unlikely

The write amplification factor is the amount of data the SSD controller has to write in relation to the amount of data the host wants to write. A write amplification factor of 1 is perfect: it means you wanted to write 1MB and the SSD's controller wrote 1MB. A write amplification factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification: bad.
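The arithmetic is trivial, but for completeness, here it is as a sketch with hypothetical numbers (the host asks for 1MB; garbage collection forces another 0.5MB of relocation):

```python
def write_amplification(host_bytes, nand_bytes):
    # Data the controller physically wrote, divided by data the host asked for.
    return nand_bytes / host_bytes

print(write_amplification(1_000_000, 1_000_000))  # 1.0 -- the perfect case
print(write_amplification(1_000_000, 1_500_000))  # 1.5 -- GC relocated extra data
```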

Comments

  • zodiacfml - Wednesday, September 02, 2009 - link

    Very informative; it answered more than anything on my mind. Hope to see this again in the future when these drive capacities are around $100.
  • mgrmgr - Wednesday, September 02, 2009 - link

    Any idea if the (mid-Sept release?) OCZ Colossus's internal RAID setup will handle the problem of RAID controllers not being able to pass Windows 7's TRIM command to the SSD array? I'm intent on getting a new Photoshop machine with two SSDs in RAID-0 as soon as Win7 releases, but the word here and elsewhere so far is that RAID will block the TRIM function.
  • kunedog - Wednesday, September 02, 2009 - link

    All the Gen2 X-25M 80GB drives are apparently gone from Newegg . . . so they've marked up the Gen1 drives to $360 (from $230):
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Unbelievable.
  • gfody - Wednesday, September 02, 2009 - link

    What happened to the Gen2 160GB on Newegg? For a month the ETA was 9/2 (today), and now it's as if they never had it in the first place. The product page has been removed.

    It's like Newegg is holding the Gen2 drives hostage until we buy out their remaining stock of Gen1 drives.
  • iwodo - Tuesday, September 01, 2009 - link

    I think it acts as a good summary. However, someone wrote last time about the Intel drive handling random reads/writes extremely poorly during sequential reads/writes.

    Has Anand investigated this yet?

    I am hoping the next-gen Intel SSD coming in Q2 '10 will bring some substantial improvement.
  • statik213 - Tuesday, September 01, 2009 - link

    Does the RAID controller propagate TRIM commands to the SSD? Or will having RAID negate TRIM?
  • justaviking - Tuesday, September 01, 2009 - link

    Another great article, Anand! Thanks, and keep them coming.

    If this has already been discussed, I apologize. I'm still exhausted from reading the wonderful article, and have not read all 17 pages of comments.

    On PAGE 3, it talks about the trade-off of larger vs. smaller pages.

    I wonder if it would be feasible to make a hybrid drive, with a portion of the drive using small pages for faster performance when writing small files, and the majority of it being larger pages to keep the management of the drive reasonable.

    Any file could be written anywhere, but the controller would bias small writes to the small pages, and large writes to the large pages.

    Externally it would appear as a single drive, of course, but deep down in the internals, it would essentially be two drives. Each of the two portions would be tuned for maximum performance in different areas, but able to serve as backup or overflow if the other portion became full or ever got written to too many times.

    Interesting concept? Or a harebrained idea by an ignorant amateur?
  • CList - Tuesday, September 01, 2009 - link

    Great article, wonderful to see insightful, in depth analysis.

    I'd be curious to hear anyone's thoughts on the implications of running virtual hard disk files on SSDs. I do a lot of work these days on virtual machines, and I'd love to get them feeling more snappy, especially on my laptop, which is limited to 4GB of RAM.

    For example;
    What would the constant updates of those vmdk (or "vhd") files do to the disk's lifespan?

    If the OS hosting the VM is Windows 7, but the virtual machine is WinServer2003, will the TRIM command be used properly?

    Cheers,
    CList
  • pcfxer - Tuesday, September 01, 2009 - link

    Great article!

    "It seems that building Pidgin is more CPU than IO bound.."

    Obviously, Mr. Anand doesn't understand how compilers work ;). Compilers will always be CPU and memory bound; reduce the memory in your computer to, say, 256MB (or lower) and you'll see what I mean. The levels of recursion necessary to follow the productions (the grammars that define the language) use up memory but would rarely use the drive unless the OS had terrible resource management. :0.
  • CMGuy - Wednesday, September 02, 2009 - link

    While I can't comment on the specifics of software compilers, I know that faster disk IO makes a big difference when you're performing a full build (compilation and packaging) of software.
    IDEs these days spend a lot of their time reading/writing small files (that's a lot of small, random disk IO), and a good SSD can make a huge difference here.
