The Cleaning Lady and Write Amplification

Imagine you’re running a cafeteria. This is the real world, so you have a finite number of plates, say 200 for the entire cafeteria. Your cafeteria is open for dinner, and over the course of the night you may serve a total of 1000 people. The guests outnumber the plates 5-to-1; thankfully, they don’t all eat at once.

You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.

Pretty basic, right? That’s how an SSD works.

Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (files that have been overwritten at the file system level for example), it must be erased before it can be written to.
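
To make those rules concrete, here's a minimal Python sketch of a toy flash block; the class names, page count, and state strings are my own illustration rather than any controller's real data structures:

    PAGES_PER_BLOCK = 128  # chosen for illustration; real NAND geometries vary

    class Page:
        def __init__(self):
            self.data = None
            self.state = "empty"   # "empty" -> "valid" -> "invalid"

    class Block:
        def __init__(self):
            self.pages = [Page() for _ in range(PAGES_PER_BLOCK)]

        def write_page(self, index, data):
            page = self.pages[index]
            if page.state != "empty":
                # NAND can't overwrite a programmed page in place;
                # the whole block has to be erased first.
                raise RuntimeError("page already programmed; erase the block first")
            page.data, page.state = data, "valid"

        def erase(self):
            # Erasing works only on the entire block, never a single page.
            self.pages = [Page() for _ in range(PAGES_PER_BLOCK)]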

All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.

Remember this picture?

It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.

In actuality, the write happens more like this: a new block is allocated, the valid data is copied to it (along with the data you wish to write), and the old block is sent for cleaning, emerging completely wiped. The wiped block is then added to the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and eventually recycled back into it.
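
Sticking with the toy Block class from the sketch above, that copy-then-erase dance looks roughly like this; the free-block pool and the function name are illustrative assumptions, not how any particular controller is implemented:

    from collections import deque

    free_blocks = deque(Block() for _ in range(8))   # illustrative pool size

    def rewrite_block(old_block, new_page_index, new_data):
        """Write new_data into a block that holds a mix of valid and invalid pages."""
        fresh = free_blocks.popleft()                # 1. allocate a clean block
        dest = 0
        for i, page in enumerate(old_block.pages):   # 2. copy the surviving data across,
            if i == new_page_index:
                fresh.write_page(dest, new_data)     #    folding in the incoming write
                dest += 1
            elif page.state == "valid":
                fresh.write_page(dest, page.data)
                dest += 1
        old_block.erase()                            # 3. wipe the old block...
        free_blocks.append(old_block)                # 4. ...and recycle it into the pool
        return fresh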

IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today, so I've remade the diagram and simplified it a bit:

The diagram explains what I just outlined above. A write request comes in, a new block is allocated and used, then added to the list of used blocks. The blocks with the least valid data (or the most invalid data) are scheduled for garbage collection, cleaned, and added to the free block pool.
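
A rough sketch of that victim-selection step, again in terms of the toy classes above; the greedy least-valid-data policy follows the diagram, while the pool handling and the two-victims-per-pass consolidation are simplifying assumptions (real firmware also has to weigh wear leveling):

    def valid_page_count(block):
        return sum(1 for p in block.pages if p.state == "valid")

    def collect_garbage(used_blocks, free_blocks, victims_per_pass=2):
        """Greedy GC sketch: fold the least-valid used blocks into one fresh block."""
        victims = sorted(used_blocks, key=valid_page_count)[:victims_per_pass]
        survivors = [p.data for b in victims for p in b.pages if p.state == "valid"]
        fresh = free_blocks.popleft()                # consume one clean block
        for i, data in enumerate(survivors):         # assumes the survivors all fit in it
            fresh.write_page(i, data)
        used_blocks.append(fresh)
        for victim in victims:
            used_blocks.remove(victim)
            victim.erase()
            free_blocks.append(victim)               # erased victims rejoin the free pool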

We can actually see this in action if we look at write latencies:

Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:

While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.
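
If you want to see that average-versus-worst-case gap on your own drive, a crude measurement along these lines will show it; the file name, sizes and iteration count are placeholders, caching effects aren't fully controlled, and the absolute numbers will vary wildly by drive and OS:

    import os, random, time

    def sample_write_latencies(path="latency_test.bin", writes=2000,
                               block=4096, span=256 * 2**20):
        """Time small random writes and report the average and the worst case."""
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.truncate(span)                     # sparse preallocation; fine for a sketch
        buf = os.urandom(block)
        latencies = []
        with open(path, "r+b", buffering=0) as f:
            for _ in range(writes):
                f.seek(random.randrange(0, span - block, block))
                t0 = time.perf_counter()
                f.write(buf)
                os.fsync(f.fileno())                 # push the write down to the device
                latencies.append(time.perf_counter() - t0)
        return sum(latencies) / len(latencies), max(latencies)

    # avg, worst = sample_write_latencies()
    # print(f"average {avg * 1000:.2f} ms, max {worst * 1000:.2f} ms")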

And this is where write amplification comes in.

In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens however, eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its data is copied to another block, after which the previous block is erased and added to the free block pool. In the diagram above you'll see the size of our write request on the left, but on the very right you'll see how much data was actually written when you take into account garbage collection. This inequality is called write amplification.


Intel claims very low write amplification on its drives, although over the lifespan of your drive a factor below 1.1 seems highly unlikely

The write amplification factor is the amount of data the SSD controller has to write in relation to the amount of data the host wants to write. A write amplification factor of 1 is perfect: it means you wanted to write 1MB and the SSD’s controller wrote 1MB. A write amplification factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification, bad.
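
As a quick worked example of that definition (the byte counts below are invented for illustration, not measured from any drive):

    def write_amplification(nand_bytes_written, host_bytes_written):
        """WA factor = data the controller physically wrote / data the host asked it to write."""
        return nand_bytes_written / host_bytes_written

    # The host asks to write 1MB, but garbage collection forces the controller to
    # relocate another 3MB of still-valid pages along the way:
    print(write_amplification(nand_bytes_written=4 * 2**20,
                              host_bytes_written=1 * 2**20))   # -> 4.0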

Comments

  • mtoma - Monday, August 31, 2009 - link

    Here is an issue I think deserves to be addressed: could a conventional HDD (with 2, 3 or 4 platters) slow down the performance of a PC, even if that PC boots from an excellent SSD drive, like an Intel X-25M? Let's say that on the SSD lies only the operating system, and that on the conventional HDD lies the movie and music archive. But both drives run at the same time, and it is a well known fact that the PC runs at the speed of the slowest component (in our case the conventional HDD).
    I did not find ANYWHERE on the Web a review, or even an opinion, regarding this issue.
    I would appreciate a competent answer.
    Thanks a lot!
  • gstrickler - Monday, August 31, 2009 - link

    That's a good question, and I too would like to see a report from someone who has done it.

    Some of your assertions/assumptions are not quite accurate. A PC doesn't "run at the speed of the slowest component", but rather its performance is limited by the slowest component. Depending upon your usage patterns, a slow component may have very little effect on performance or it may make the machine nearly unusable. I think that's probably what you meant; I'm just clarifying it.

    As for putting the OS on an SSD and user files on a HD, you would want to have not only the OS, but also your applications (at least your frequently used ones) installed on the SSD. Put user data (especially large files such as .jpg, music, video, etc.), and less frequently used applications and data on the HD. Typical user documents (.doc, .xls, .pdf) can be on either drive, but access might be better with them on the SSD so that you don't have to wait for the HD to spin-up. In that case, the HD might stay spun-down (low power idle) most of the time, which might improve battery life a bit.

    Databases are a bit trickier. It depends upon how large the database is, how much space you have available on the SSD, how complex the data relations are, how complex the queries are, how important performance is, how much RAM is available, how well indexes are used, and how well the database program can take advantage of caching. Performance should be as good or better with the database on the SSD, but the difference may be so small that it's not noticeable, or it might be dramatically faster. That one is basically "try it and see".

    Where to put the paging file/swap space? That's a tough one to answer. Putting it on the SSD might be slightly faster if your SSD has high write speeds; however, that will increase the amount of writing to the SSD and could potentially shorten its usable life. It also seems like a waste to use expensive SSD storage for swap space. You should be able to minimize those issues by using a permanent swap space of the smallest practical size for your environment.

    However, putting the swap space on a less costly HD means the HD will be spun-up (active idle) and/or active more often, possibly costing you some battery life. Also, while the HD may have very good streaming write speeds, its streaming read speed and random access (read or write) speed will be slower than most SSDs, so you're likely to have slightly slower overall response and slightly shorter battery life than you would by putting the swap space on the SSD.

    On a desktop machine with a very fast HD, it might make sense to put the paging file on the HD (or to put a small swap space on the SSD and some more on the HD), but on a machine where battery life is an important consideration, it might be better to have the swap space on the SSD, even though it's "expensive".
  • Pirks - Monday, August 31, 2009 - link

    just turn the page file off, and get yourself 4 or 8 gigs of RAM
  • gstrickler - Monday, August 31, 2009 - link

    Windows doesn't like to operate without a page file.
  • smartins - Tuesday, September 1, 2009 - link

    Actually, I've been running without a page file for a while and never had any problems. Windows feels much more responsive. You do have to have plenty of RAM, though; I have 6GB on this machine.
  • mtoma - Thursday, September 3, 2009 - link

    In my case, it's not a problem of RAM (I have 12 GB RAM and a Core i7 920), it's a problem of whether or not to throw 300 dollars out the window (on an Intel SSD drive). Currently I have a 1.5 TB Seagate Barracuda 11th generation, on which I store ONLY movies, music and photos. My primary drive (OS plus programs) is a 300 GB Velociraptor.
    Do you think different versions of Windows behave differently if you remove the page file? It seems to me that if I remove this page file, I'm walking onto a minefield, and I don't want to do that.
    Besides that, my real problem is whether to use (when I purchase the Intel drive) the Seagate Barracuda in an external HDD enclosure OR internally, and thus possibly slow down my PC.
  • SRSpod - Thursday, September 3, 2009 - link

    Adding a slow hard drive to your system will not slow your system down (well, apart from a slight delay at POST when it detects the drive). The only difference in speed will be that when you access something on the HDD instead of the SSD, it will be slower than if you were accessing it on the SSD. You won't notice any difference until you access data from the HDD, and if it's only music, movies and photos, and you're not doing complex editing of those files, then a regular HDD will be fast enough to view and play those files without issues.
    If you don't plan to remove it from your system, then attach it internally. Introducing a USB connection between the HDD and your system will only slow things down compared to using SATA.

    Removing the pagefile can cause problems in certain situations and with certain programs (Photoshop, for example). If you have enough RAM, then you shouldn't be hitting the pagefile much anyway, so where it's stored won't make so much of a difference. Personally, I'd put it on the SSD, so that when you do need it, it's fast.
  • samssf - Friday, September 18, 2009 - link

    Won't Windows write to the page file regardless of how much RAM you have? I was under the impression Windows will swap out memory that it determines isn't being used / needed at the moment.

    If you absolutely need to have a page file, I would use available RAM to create a RAM disk, and place your page file on this virtual disk. That way you're setting aside RAM you know you don't need for the page file, since Windows will write to that file anyway.

    If you can, just turn it off.
  • minime - Monday, August 31, 2009 - link

    Would someone please have the courtesy to test those things in a business environment? I'm talking about servers: database, web application, Java, etc. Reliability? Maybe even enrich the article with a PCI-E SSD (Fusion-IO)?
  • ciukacz - Monday, August 31, 2009 - link

    http://it.anandtech.com/IT/showdoc.aspx?i=3532
