The Cleaning Lady and Write Amplification

Imagine you’re running a cafeteria. This is the real world and your cafeteria has a finite number of plates, say 200 for the entire cafeteria. Your cafeteria is open for dinner and over the course of the night you may serve a total of 1000 people. The number of guests outnumbers the total number of plates 5-to-1; thankfully, they don’t all eat at once.

You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.

Pretty basic, right? That’s how an SSD works.

Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (files that have been overwritten at the file system level for example), it must be erased before it can be written to.
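To make the rule concrete, here’s a minimal sketch of that constraint in Python. It’s purely illustrative: the Block class, the PAGES_PER_BLOCK figure and the method names are made up for this example, not pulled from any real controller’s firmware.

```python
PAGES_PER_BLOCK = 128   # hypothetical figure; real geometry varies by NAND part

class Block:
    """Toy model of a NAND block: pages are programmed individually,
    but erasure only happens a whole block at a time."""

    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = erased and writable

    def read_page(self, index):
        return self.pages[index]

    def write_page(self, index, data):
        if self.pages[index] is not None:
            # NAND can't overwrite a programmed page in place
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[index] = data

    def erase(self):
        # erasure is all-or-nothing: every page in the block is wiped
        self.pages = [None] * PAGES_PER_BLOCK
```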

All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.

Remember this picture?

It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.

In actuality, the write happens more like this: a new block is allocated, the valid data is copied to the new block (along with the data you wish to write), and the old block is sent for cleaning, emerging completely wiped. The old block is then added to the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and eventually recycled back into it.
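If you prefer it spelled out, here’s a rough sketch of that sequence, reusing the toy Block class above plus a made-up STALE marker for pages whose contents have been invalidated. It’s illustrative only; no real controller is this simple.

```python
STALE = object()   # hypothetical marker for a page whose data has been invalidated

def service_write(old_block, page_index, data, free_pool):
    """Toy version of the sequence described above: allocate a clean block,
    copy the still-valid pages plus the incoming write into it, then erase
    the old block and recycle it into the pool of empty blocks."""
    new_block = free_pool.pop(0)                 # pull a clean block from the pool

    for i, page in enumerate(old_block.pages):
        if i == page_index:
            new_block.write_page(i, data)        # the write we were asked to perform
        elif page is not None and page is not STALE:
            new_block.write_page(i, page)        # carry the valid data along

    old_block.erase()                            # "cleaning": the whole block is wiped
    free_pool.append(old_block)                  # it rejoins the pool of empty blocks
    return new_block
```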

IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today so I've remade the diagram and simplified it a bit:

The diagram explains what I just outlined above. A write request comes in, a new block is allocated and used, then added to the list of used blocks. The blocks with the least amount of valid data (or the most invalid data) are scheduled for garbage collection, cleaned, and added to the free block pool.
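A single cleaning pass in that spirit might look something like the sketch below, again using the toy Block and STALE marker from above. The victim-selection policy here is the simplest possible one; real controllers also weigh wear leveling and other factors.

```python
def garbage_collect_once(used_blocks, free_pool):
    """Toy cleaning pass: pick the used block with the least valid (most
    invalid) data, relocate whatever is still valid, erase the block and
    return it to the free pool."""
    def valid_pages(block):
        return [(i, p) for i, p in enumerate(block.pages)
                if p is not None and p is not STALE]

    # schedule the block with the most invalid data for cleaning
    victim = min(used_blocks, key=lambda b: len(valid_pages(b)))
    used_blocks.remove(victim)

    survivors = valid_pages(victim)
    if survivors:
        # assumes at least one spare clean block is reserved for cleaning,
        # which is roughly what a real drive's spare area is for
        target = free_pool.pop(0)
        for i, page in survivors:
            target.write_page(i, page)
        used_blocks.append(target)

    victim.erase()
    free_pool.append(victim)        # the cleaned block rejoins the free pool
    return len(survivors)           # extra pages written: the seed of write amplification
```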

We can actually see this in action if we look at write latencies:

Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:

While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.

And this is where write amplification comes in.

In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens, however; eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its valid data is copied to another block, after which the old block is erased and added to the free block pool. On the left of the diagram you'll see the size of our write request, but on the very right you'll see how much data was actually written once you take garbage collection into account. This inequality is called write amplification.


Intel claims very low write amplification on its drives, although over the lifespan of your drive a < 1.1 factor seems highly unlikely

The write amplification factor is the amount of data the SSD controller has to write relative to the amount of data the host controller wants to write. A write amplification factor of 1 is perfect: you wanted to write 1MB and the SSD’s controller wrote exactly 1MB. A factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification, bad.
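A quick worked example, with entirely made-up numbers, shows how the factor is computed:

```python
def write_amplification(host_bytes, nand_bytes):
    """Write amplification factor: data actually written to NAND divided by
    the data the host asked to write. 1.0 is the ideal."""
    return nand_bytes / host_bytes

# Hypothetical scenario: the host writes one 4KB page, but garbage collection
# has to relocate seven still-valid 4KB pages to free up a block first.
host = 4 * 1024
nand = host + 7 * 4 * 1024
print(write_amplification(host, nand))   # 8.0 -- eight times the wear of the ideal case
```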

Comments

  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    wow I misspelled my own name :) Time to sleep for real this time :)

    Take care,
    Anand

  • IntelUser2000 - Monday, August 31, 2009 - link

    Looking at pure max TDP and idle power numbers and drawing conclusions about power consumption from those figures alone is wrong.

    Look here: http://www.anandtech.com/cpuchipsets...px?i=3403&a...

    Modern drives quickly reach idle even between operations the user doesn't notice, and even at "load". Faster drives reach a lower average power because they finish the work faster and get back to idle sooner. This is why initial battery life tests showed the X25-M, with much higher active/idle power figures, getting better battery life than Samsung drives with lower active/idle power.

    Max power is important, but unless you are running that app 24/7 it's not realistic at all, especially since max power benchmarks are designed to get as close to TDP as possible.
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    I agree, it's more than just max power consumption. I tried to point that out with the last paragraph on the page:

    "As I alluded to before, the much higher performance of these drives than a traditional hard drive means that they spend much more time at an idle power state. The Seagate Momentus 5400.6 has roughly the same power characteristics of these two drives, but they outperform the Seagate by a factor of at least 16x. In other words, a good SSD delivers an order of magnitude better performance per watt than even a very efficient hard drive."

    I didn't have time to run through some notebook tests to look at impact on battery life but it's something I plan to do in the future.

    Take care,
    Anand
  • IntelUser2000 - Monday, August 31, 2009 - link

    Thanks, people pay too much attention to just the max TDP and idle power alone. Properly done, no real app should ever sit at max TDP for 100% of the time it's running.
  • cristis - Monday, August 31, 2009 - link

    page 6: "So we’re at approximately 36 days before I exhaust one out of my ~10,000 write cycles. Multiply that out and it would take 36,000 days" --- wait, isn't that 360,000 days = 986 years?
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    woops, you're right :) Either way your flash will give out in about 10 years and perfectly wear leveled drives with no write amplification aren't possible regardless.

    Take care,
    Anand
  • cdillon - Monday, August 31, 2009 - link

    I gather that you're saying it'll give out after 10 years because a flash cell will lose its stored charge after about 10 years, not because the write-life will be surpassed after 10 years, which doesn't seem to be the case. The 10-year charge life doesn't mean they become useless after 10 years, just that you need to refresh the data before the charge is lost. This makes flash less useful for data archival purposes, but for regular use, who doesn't re-format their system (and thus re-write 100% of the data) at least once every 10 years? :-)
  • Zheos - Monday, August 31, 2009 - link

    "This makes flash less useful for data archival purposes, but for regular use, who doesn't re-format their system (and thus re-write 100% of the data) at least once every 10 years? :-)"

    I would like some input on that too, because that's a bit confusing.
  • GourdFreeMan - Tuesday, September 1, 2009 - link

    Thermal energy (i.e. heat) allows the electrons trapped in the floating gate to overcome the potential well and escape, causing zeros (represented by a larger concentration of electrons in the floating gate) to eventually become ones (represented by a smaller concentration of electrons in the floating gate). Most SLC flash is rated at about 10 years of data retention at either 20C (68F) or 25C (77F). What Anand doesn't mention is that as a rule of thumb for every 9 degrees C (~16F) that the temperature is raised above that point, data retention lifespan is halved. (This rule of thumb only holds for human habitable temperatures... the exact relation is governed by the Arrhenius equation.)

    Wear leveling and error correction codes can be employed to mitigate this problem, which only gets worse as you try to store more bits per cell or use a smaller lithography process without changing materials or design.
  • Zheos - Tuesday, September 1, 2009 - link

    Thank you GourdFreeMan for the additional input,

    But if we format every year or so, doesn't the countdown on data retention restart from 0? Or after ~10 years (seems to be less if, like you said, temperature affects it) will the SSD not only fail at times but become unusable? Or, if we come to that point, would a format/reinstall resolve the problem?

    I don't care about losing data stored after 10 years; what I do care about is whether the drive assuredly becomes unusable after 10 years at most. For drives that come at a premium price, I don't like this if it's the case.
