The Cleaning Lady and Write Amplification

Imagine you’re running a cafeteria. This is the real world, so your cafeteria has a finite number of plates, say 200 for the entire place. You’re open for dinner, and over the course of the night you may serve a total of 1000 people. The guests outnumber the plates 5-to-1; thankfully, they don’t all eat at once.

You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.

Pretty basic, right? That’s how an SSD works.

Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (files that have been overwritten at the file system level for example), it must be erased before it can be written to.
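
If it helps to see those rules spelled out, here's a minimal sketch of them in code. The 128-pages-per-block geometry below is just a placeholder for illustration, not the layout of any particular drive:

    PAGES_PER_BLOCK = 128   # illustrative geometry, not any specific NAND part

    class Block:
        def __init__(self):
            self.pages = [None] * PAGES_PER_BLOCK   # None = erased and writable

        def write_page(self, index, data):
            # Pages are programmed one at a time, but never overwritten in place
            if self.pages[index] is not None:
                raise RuntimeError("page already programmed; erase the whole block first")
            self.pages[index] = data

        def erase(self):
            # Erasing works only at block granularity: every page is wiped at once
            self.pages = [None] * PAGES_PER_BLOCK

The important bit is that erase() is the only way to make a written page writable again, and it takes every other page in the block with it.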

All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.

Remember this picture?

It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.

In actuality the write happens more like this: a new block is allocated, valid data is copied to the new block (including the data you wish to write), and the old block is sent for cleaning, from which it emerges completely wiped. The wiped block is added to the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and eventually recycled back into it.

IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today, so I've remade the diagram and simplified it a bit:

The diagram explains what I just outlined above. A write request comes in, a new block is allocated and used, then added to the list of used blocks. The blocks with the least amount of valid data (or the most invalid data) are scheduled for garbage collection, cleaned and added to the free block pool.
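
Here's roughly what that loop looks like in code. This is a toy model, not any vendor's actual firmware: blocks are reduced to the set of valid pages they hold, and everything else a real controller juggles (wear leveling, mapping tables, power-loss safety) is left out:

    PAGES_PER_BLOCK = 128                    # illustrative geometry

    free_pool = [set() for _ in range(4)]    # erased blocks, ready for new writes
    used_blocks = []                         # blocks that have already been filled

    def garbage_collect():
        # 1. Schedule the used block with the least valid (most invalid) data.
        victim = min(used_blocks, key=len)
        used_blocks.remove(victim)

        # 2. Copy its surviving pages into a freshly erased block. These copies
        #    are NAND writes the host never asked for.
        destination = free_pool.pop()
        destination.update(victim)
        used_blocks.append(destination)
        copied = len(victim)

        # 3. Erase the victim (all of its pages are wiped at once) and return it
        #    to the free block pool.
        victim.clear()
        free_pool.append(victim)
        return copied

    # Example: cleaning a block with 28 valid and 100 invalid pages costs 28
    # extra page writes before the block can be reused.
    used_blocks.append(set(range(28)))
    print(garbage_collect())                 # -> 28

Keep an eye on step 2: those copies are writes the host never asked for, and they come back in a moment as write amplification.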

We can actually see this in action if we look at write latencies:

Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:

While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.

And this is where write amplification comes in.

In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens, however; eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its data is copied to another block, after which the previous block is erased and added to the free block pool. In the diagram above you'll see the size of our write request on the left, but on the very right you'll see how much data was actually written when you take into account garbage collection. This inequality is called write amplification.


Intel claims very low write amplification on its drives, although over the lifespan of your drive a < 1.1 factor seems highly unlikely

The write amplification factor is the amount of data the SSD controller has to write in relation to the amount of data that the host controller wants to write. A write amplification factor of 1 is perfect: it means you wanted to write 1MB and the SSD’s controller wrote 1MB to NAND. A write amplification factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification, bad.
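
If you want that factor as plain arithmetic, here's a hypothetical back-of-the-envelope version; the numbers are made up for illustration, not measurements from any drive:

    # Write amplification factor = data written to NAND / data the host asked to write.
    # Hypothetical interval: the host wrote 1000 pages while garbage collection
    # quietly copied another 100 pages of still-valid data behind the scenes.
    host_pages = 1000
    gc_copies = 100

    waf = (host_pages + gc_copies) / host_pages
    print(f"write amplification factor: {waf:.2f}")   # 1.10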

Comments

  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Maybe I should compile these things into a book? :)

    Here are my answers about some stuff:

    1) There's a spec for how hard drive makers report capacity. They define 1GB as 1 billion bytes. This is technically correct (base 10 SI prefix as you correctly pointed out). The HDDs also physically have this much storage on them, they are made up of sequentially numbered sectors that are easily counted in a decimal number system.

    All other aspects of PC storage (e.g. cache, DRAM, NAND flash) however work in base 2 (like the rest of the PC). In these respects 1GB is defined as 1024^3 because we're dealing with a base 2 number system. There are reasons for this but it goes beyond the scope of what I'm posting :)

    Intel adheres to the same spec that the HDD makers use. But the X25-M is made up of flash, which as I just mentioned is addressed in a base 2 number system. There's more flash than user space on the drive, it's used as spare area, woohoo. I think we're both on the same page here, just saying things differently :)

    2) We'll see a 320GB drive, just not this year. I don't know that the demand is there especially given the weak economy.

    Dreams do sometimes come true... ;)

    3) Perhaps, but I don't like the idea of a drive doing anything but idling when it's supposed to be...idle. This does funny things to notebook battery life I'd think.

    4) This is true. There's also another thing you can do with the jumper (and perhaps some additional software): flash any indilinx drive with any firmware regardless of vendor :)

    5) I had to throw out a lot of data because of variations between runs. It ended up being a combination of immature drivers, immature benchmarks and some OS trickery. The setup I have now is very reliable and provides very repeatable results with very little variation. While I run everything three times, the runs are so close that you could technically do only one run per drive and still be fine.

    6) I wouldn't count WD and Seagate out just yet. It may take them a while but they won't go quietly...

    7) Samsung makes a ton of money from SSD sales to OEMs, they don't seem to care about the end user market as much. If end users start protesting Samsung drives however, things will change.

    In my opinion? Once Apple falls, the rest will follow. If Apple migrates to Intel (possible) or Indilinx (less likely), we'll see the same from the other OEMs and Samsung will be forced to change.

    Or I could be too pessimistic and we'll see better performance from Samsung before then.

    8) Agreed :)

    I'll finish here too :)

    Take care,
    Anand
  • Reven - Monday, August 31, 2009 - link

    Anand, don't listen to the guys like blyndy who diss on the anthologies, I love them. You can find a basic review anywhere; it's the in-depth yet simple to understand stuff like these anthologies that makes me visit Anandtech all the time.

    Keep it up, dude!
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Thank you :)
  • EasterEEL - Monday, August 31, 2009 - link

    I have a couple of questions regarding the Intel® SATA SSD Firmware Update Tool (2832KB) v1.3 8/24/2009.

    Does this firmware enable TRIM within the SSD to work with Windows 7?

    If AHCI is enabled in the BIOS (but not RAID) does Windows 7 use its own drivers with TRIM? Or does it load Intel’s Matrix Storage Manager driver, which does not support TRIM as per the article note below?

    "Unfortunately if you’re running an Intel controller in RAID mode (whether non-member RAID or not), Windows 7 loads Intel’s Matrix Storage Manager driver, which presently does not pass the TRIM command. Intel is working on a solution to this and I'd expect that it'll get fixed after the release of Intel's 34nm TRIM firmware in Q4 of this year."

  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    That update does not enable TRIM. The TRIM firmware is in testing now and it will be out sometime in Q4 of this year (October - December).

    If AHCI is enabled in the BIOS and you haven't loaded Intel's MSM drivers then it will use the Windows 7 driver and TRIM will be supported.

    Take care,
    Anand
  • uberowo - Monday, August 31, 2009 - link

    I do have a question however. :D

    I am building a gaming PC, and I am buying SSD disk(s). Would I benefit from getting 2 x 80GB Intel Gen2s and using RAID-0? Or should I stick with a single 160GB?
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    While I haven't tested 2 x 80GB drives in RAID-0, my feeling is that a single SSD is going to be better than two in RAID going forward. As of now I don't know that anyone's TRIM firmware is going to work if you've got two drives in RAID-0.

    The perceived performance gains in RAID-0 also aren't that great on SSDs from what I've seen.

    Take care,
    Anand
  • Ardax - Monday, August 31, 2009 - link

    A naive guess would be that it depends on the workload. For lots of sequential transfers a RAID-0 should shine -- particularly on reads -- because you're spreading the transfers out over multiple SATA channels.

    Losing TRIM is a problem. Finding a controller that can handle the performance is entirely likely to be another.
  • uberowo - Monday, August 31, 2009 - link

    Thanks a lot for taking the time to answer. Not to mention making this awesome site. :)
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    You guys take the time to read it and make some truly wonderful comments, it's the least I can do :)

    -A
