Putting Theory to Practice: Understanding the SSD Performance Degradation Problem

Let’s look at the problem in the real world. You, I, and our best friend have decided to start making SSDs. We buy some NAND flash and build a controller. The table below summarizes our drive’s characteristics:

  Our Hypothetical SSD

  Page Size     4KB
  Block Size    5 pages (20KB)
  Drive Size    1 block (20KB)
  Read Speed    2KB/s
  Write Speed   1KB/s


Through impressive marketing and your incredibly good looks we sell our first drive. Our customer goes to save a 4KB text file to his brand new SSD. The request comes down to our controller, which finds that all pages are empty and allocates the first page to the text file.


[Figure: Our SSD; the yellow boxes are empty pages.]

The user then saves an 8KB JPEG. The request, once again, comes down to our controller, which fills the next two pages with the image.


[Figure: The picture is 8KB and thus occupies two pages, which are thankfully empty.]

The OS reports that 60% of our drive is now full, which it is: three of the five pages are occupied with data and the remaining two are empty.

Now let’s say the user goes back and deletes that original text file. This request never reaches our controller; the OS simply marks the file’s LBAs as available in its own filesystem metadata. As far as our controller is concerned, we’ve still got three valid pages and two empty ones.
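To make the disconnect concrete, here’s a minimal Python sketch (the classes and method names are invented for illustration, not a real driver or firmware API): deleting a file updates the OS’s free list, but no command ever reaches the drive.

```python
# Hypothetical sketch of the OS/drive disconnect. Without TRIM, a delete
# never generates a drive command; only the filesystem metadata changes.

class SimpleSSD:
    """Drive-side view: it only ever sees reads and writes, never deletes."""
    def __init__(self, pages=5):
        self.pages = [None] * pages   # None = empty page

    def write_page(self, page, data):
        self.pages[page] = data

class SimpleOS:
    """OS-side view: the filesystem's free list of LBAs."""
    def __init__(self, drive):
        self.drive = drive
        self.free_lbas = set(range(len(drive.pages)))

    def save(self, name, lbas):
        for lba in lbas:
            self.free_lbas.discard(lba)
            self.drive.write_page(lba, name)

    def delete(self, lbas):
        # The free list is updated, but NO command is sent to the drive.
        self.free_lbas.update(lbas)

ssd = SimpleSSD()
os_view = SimpleOS(ssd)
os_view.save("text.txt", [0])        # 4KB text file -> page 0
os_view.save("photo.jpg", [1, 2])    # 8KB JPEG -> pages 1 and 2
os_view.delete([0])                  # delete the text file

print(os_view.free_lbas)   # {0, 3, 4} -- the OS sees 12KB free
print(ssd.pages)           # ['text.txt', 'photo.jpg', 'photo.jpg', None, None]
                           # the drive still holds three "valid" pages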

For our final write, the user wants to save a 12KB JPEG, which requires three 4KB pages to store. The OS knows that the first LBA, the one allocated to the 4KB text file, can be overwritten, so it tells our controller to overwrite that LBA and to store the remaining 8KB of the image in our last two available LBAs.

Now we have a problem once these requests get to our SSD controller: we’ve got three pages’ worth of write requests incoming, but only two pages free. Remember that the OS thinks we have 12KB free, but on the drive only 8KB is actually free; the other 4KB is occupied by an invalid page. We need to erase that page in order to complete the write request.


[Figure: Uh oh, problem. We don't have enough empty pages.]

Remember back to Flash 101: even though we only need to erase a single page, we can’t. Erases happen at block granularity; you can’t erase pages, only blocks. We have to erase all of our data just to get rid of the one invalid page, then write it all back again.

To do so, we first read the entire block back into memory somewhere; if we’ve got a good controller we’ll just read it into an on-die cache (steps 1 and 2 below), and if not, hopefully there’s some off-die memory we can use as a scratch pad. With the block read, we can modify it, removing the invalid page and replacing it with good data (steps 3 and 4). But we’ve only done that in memory somewhere; now we need to write it to flash. Since we’ve got all of our data in memory, we can erase the entire block in flash and write the new block in its place (step 5).
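Here is a simplified Python sketch of that read-modify-write cycle, following the five steps above (illustrative only; a real controller operates on raw NAND commands, not Python lists):

```python
# Simplified read-modify-write cycle for our 5-page block.

def read_modify_write(block, new_pages, invalid_pages):
    """
    block         -- list of 5 pages as stored in flash (None = empty)
    new_pages     -- incoming data, one entry per page
    invalid_pages -- page indexes the OS says may be overwritten
    """
    # Steps 1-2: read the whole block into a scratch buffer (cache or DRAM).
    buffer = list(block)

    # Step 3: drop the invalid page(s) from the buffer.
    for idx in invalid_pages:
        buffer[idx] = None

    # Step 4: merge the incoming data into the free slots.
    free = [i for i, p in enumerate(buffer) if p is None]
    for slot, data in zip(free, new_pages):
        buffer[slot] = data

    # Step 5: erase the entire block in flash, then program the new contents.
    block[:] = [None] * len(block)   # block erase (the expensive part)
    block[:] = buffer                # program the merged block back
    return block

flash = ["text.txt", "photo.jpg", "photo.jpg", None, None]
read_modify_write(flash, ["big.jpg"] * 3, invalid_pages=[0])
print(flash)   # ['big.jpg', 'photo.jpg', 'photo.jpg', 'big.jpg', 'big.jpg']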

Now let’s think about what’s just happened. As far as the OS is concerned, it needed 12KB of data written and 12KB got written. Our SSD controller knows what really transpired, however: in order to write that 12KB of data, we had to first read 12KB and then write an entire block, or 20KB.

Our SSD is quite slow: it can only write at 1KB/s and read at 2KB/s. Writing 12KB should have taken 12 seconds, but since we had to read 12KB (6 seconds) and then write 20KB (20 seconds), the whole operation took 26 seconds.

To the end user it would look like our write speed dropped from 1KB/s to 0.46KB/s, since it took us 26 seconds to write 12KB.
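Spelling the arithmetic out with our made-up drive speeds:

```python
# Effective write speed for the 12KB request, using our hypothetical
# drive's speeds (read 2 KB/s, write 1 KB/s).
read_time  = 12 / 2                   # read back 12KB of existing data: 6 s
write_time = 20 / 1                   # erase + program the full 20KB block: 20 s
total      = read_time + write_time   # 26 s

effective_write_speed = 12 / total    # KB/s the user actually sees
print(f"{effective_write_speed:.2f} KB/s")   # 0.46 KB/s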

Are things starting to make sense now? This is why the Intel X25-M and other SSDs get slower the more you use them, and it’s also why write speeds drop the most while read speeds stay about the same. When writing to an empty page the SSD can write very quickly, but writing to a page that already has data in it incurs the read-modify-write overhead we just walked through, and write speed drops accordingly.

Comments

  • tirez321 - Wednesday, March 18, 2009 - link

    I can kinda see that it wouldn't now.
    Because there would still be states there regardless.
    But if you could inform the drive that it is deleted somehow, hmm.

  • strikeback03 - Wednesday, March 18, 2009 - link

    The subjective experiences with stuttering are more important to me than most of the test numbers. Other tests I have found of the G.Skill Titan and similar have looked pretty good, but left out mention of stuttering in use.

    Too bad, as the 80GB Intel is too small and the ~$300 for a 120GB is about the most I am willing to pay. Maybe sometime this year the OCZ Vertex or similar will get there.
  • strikeback03 - Tuesday, March 24, 2009 - link

    When I wrote that, the Newegg price for the 120GB Vertex was near $400. Now they have it for $339 with a $30 MIR. Now that's progress.
  • kamikaz1k - Wednesday, March 18, 2009 - link

    the latency times are switched... in case you wanted to know.
    also, first post ^^ hallo!
  • GourdFreeMan - Wednesday, March 18, 2009 - link

    It seems rather premature to assume the ATA TRIM command will significantly improve the SSD experience on the desktop. If you were to use TRIM to rewrite a nonempty physical block, you do not avoid the 2ms erase penalty when more data is written to that block later on and instead simply add the wear of another erase cycle. TRIM, then, is only useful for performance purposes when an entire 512 KiB physical block is free.

    A well designed operating system would have to keep track of both the physical and logical maps of used space on an SSD, and only issue TRIM when deletion of a logical cluster coincides with the freeing of an entire physical block. Issuing TRIMs at any other time would only hurt performance. This means the OS will have significantly fewer opportunities to issue TRIMs than you assume. Moreover, after significant usage the physical blocks will become fragmented and fewer and fewer TRIMs will be able to be issued.

    TRIM works great as long as you only deal with large files, or batches of small files contiguously created and deleted with significant temporal locality. It would greatly aid SSDs in the "used" state Anand artificially creates in this article, but on a real system where months of web browsing, Windows updates and software installing/uninstalling have occurred the effect would be less striking.

    TRIM could be mated with periodic internal (not filesystem) defragmentation to mitigate these issues, but that would significantly reduce the lifespan of the SSD...

    It seems the real solution to the SSD performance problem would be to decrease the size of the physical block... ideally to 4 KiB, as that is the most common cluster size on modern filesystems. (This assumes, of course, that the erase, read and write latencies could be scaled down linearly.)
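A small sketch of the constraint GourdFreeMan describes, assuming his 512 KiB physical blocks and 4 KiB filesystem clusters (the helper function is hypothetical):

```python
# Sketch of the constraint described above: a TRIM only avoids a future
# read-modify-write if the freed logical clusters cover an entire
# physical block. Sizes assumed: 512 KiB block, 4 KiB cluster.

BLOCK_SIZE   = 512 * 1024
CLUSTER_SIZE = 4 * 1024
CLUSTERS_PER_BLOCK = BLOCK_SIZE // CLUSTER_SIZE   # 128

def trim_frees_whole_block(freed_clusters, block_index):
    """True only if every cluster of the given physical block is freed."""
    start = block_index * CLUSTERS_PER_BLOCK
    needed = set(range(start, start + CLUSTERS_PER_BLOCK))
    return needed <= set(freed_clusters)

# Freeing 128 contiguous clusters starting at block 0 frees the block...
print(trim_frees_whole_block(range(0, 128), 0))   # True
# ...but freeing 127 of them does not, so TRIM alone buys nothing here.
print(trim_frees_whole_block(range(0, 127), 0))   # False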
  • Kary - Thursday, March 19, 2009 - link

    Why use TRIM at all?!?!?

    If you have extra blocks on the drive (NOT PAGES, FULL BLOCKS) then there is no need for a TRIM command.

    1) The currently in-use block is half full
    2) More than half a block needs to be written
    3) An extra block is mapped into the system
    4) The original, half-full block is mapped out of the system and can be erased during idle time (sketched below)

    You could even bind multiple contiguous blocks this way. (I assume it is possible to simultaneously erase any of the internal groupings of pages, from blocks on up... they probably share address lines. E.g., erase 0000200 -> just erase block #200; erase 00002*0 -> erase blocks 200 to 290. BTW, I did the addressing in base ten instead of binary just to simplify for some :)
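A rough sketch of the remapping Kary outlines (all names invented; real firmware tracks this in its flash translation layer):

```python
# Rough sketch of spare-block remapping: instead of erasing in the write
# path, the controller merges data into a pre-erased spare block and
# queues the old block for an idle-time erase.

class RemappingController:
    def __init__(self):
        self.active = ["text.txt", "photo.jpg", "photo.jpg", None, None]
        self.spare  = [None] * 5      # pre-erased spare block
        self.erase_queue = []         # blocks to erase during idle time

    def rewrite(self, valid_pages, new_pages):
        """Merge surviving data + new data into the spare block."""
        merged = (valid_pages + new_pages)[:5]
        merged += [None] * (5 - len(merged))
        self.spare[:] = merged
        # Swap mappings: the spare becomes active; the old block waits.
        self.active, self.spare = self.spare, self.active
        self.erase_queue.append("old block")  # erased later, off the
                                              # critical write path

ctl = RemappingController()
ctl.rewrite(valid_pages=["photo.jpg", "photo.jpg"], new_pages=["big.jpg"] * 3)
print(ctl.active)       # the 12KB JPEG plus the surviving 8KB JPEG
print(ctl.erase_queue)  # ['old block'] -- deferred to idle time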
  • korbendallas - Wednesday, March 18, 2009 - link

    Actually I think that the TRIM command is merely used for marking blocks as free. The OS doesn't know how the data is placed on the SSD, so it can't make informed decisions about when to forcefully erase pages. In the same way, the SSD doesn't know anything about which files are in which blocks, so you can't defrag files internally in the drive.

    So while you can't defrag files, you CAN now defrag free space, and you can improve the wear leveling because deleted data can be ignored.

    So let's say you have 10 blocks where 50% of the pages were marked deleted using the TRIM command. That means you can move the data into 5 other blocks, and erase the 10 blocks. The more deleted pages there are in a block, the better a candidate it is for this procedure. And there isn't really a problem with doing this while the drive is idle, since you're just doing now something that you would have to do anyway when a write command comes.
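A toy version of the idle-time compaction korbendallas describes, using the article's terminology (blocks contain pages; all names invented):

```python
# Toy idle-time compaction: blocks whose pages are mostly trimmed get
# their live data copied out so the whole block can be erased.

def compact(blocks):
    """blocks: list of blocks; each page is 'live' or 'trimmed'."""
    survivors = []
    freed = 0
    for block in blocks:
        live = [p for p in block if p == "live"]
        survivors.extend(live)   # copy live pages elsewhere
        freed += 1               # the whole block can now be erased
    return survivors, freed

# Ten half-trimmed blocks of 4 pages: 50% live data each.
blocks = [["live", "trimmed"] * 2 for _ in range(10)]
survivors, freed = compact(blocks)
print(len(survivors))   # 20 live pages -> they refit into just 5 blocks
print(freed)            # 10 blocks erased and returned to the free pool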
  • GourdFreeMan - Wednesday, March 18, 2009 - link

    This is basically what I am arguing both for and against in the fourth paragraph of my original post, though I assumed it would be the OS's responsibility, not the drive's.

    Do SSDs track dirty pages, or only dirty blocks? I don't think there is enough RAM on the controller to do the former...
  • korbendallas - Wednesday, March 18, 2009 - link

    Well, let's take a look at how much storage we actually need. A page can be erased, contain data, or be marked as trimmed/deallocated.

    That's three different states, or two bits of information. Since each page is 4KB, a 64GB drive would have 16,777,216 pages. So that's 4MB of information.

    So yeah, saving the page information is totally feasible.
  • GourdFreeMan - Thursday, March 19, 2009 - link

    Actually the drive only needs to know whether a page is in use or not, so you can cut that number in half. It can determine that a partially full block is a candidate for defragmentation by looking at whether neighboring pages are in use. By your calculation, that would then be 2 MiB.

    That assumes the controller only needs to support drives of up to 64 GiB capacity, that pages are 4 KiB in size, and that the controller doesn't need to use RAM for any other purpose.

    Most consumer SSD lines go up to 256 GiB in capacity, which would bring the total RAM needed up to 8 MiB using your assumption of a 4 KiB page size.

    However, both hard drives and SSDs present 512-byte sectors. This does not necessarily mean that internal pages are 512 bytes in size, but lacking any other data about internal page sizes, let's run the numbers on that assumption. To support a 256 GiB drive with 512-byte pages, you would need 64 MiB of RAM dedicated solely to this purpose -- more than any SSD line except Intel's has.

    As I said before there are ways of getting around this RAM limitation (e.g. storing page allocation data per block, keeping only part of the page allocation table in RAM, etc.), so I don't think the technical challenge here is insurmountable. There still remains the issue of wear, however...
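Collecting the thread's bookkeeping arithmetic in one place (the page sizes and bit counts are the commenters' assumptions, not known controller internals):

```python
# RAM needed to track per-page state, per the figures discussed above.

def tracking_ram(drive_bytes, page_bytes, bits_per_page):
    pages = drive_bytes // page_bytes
    return pages * bits_per_page / 8 / 2**20   # MiB of tracking RAM

GiB = 2**30
print(tracking_ram(64  * GiB, 4096, 2))   # 4.0  MiB (2 bits per 4 KiB page)
print(tracking_ram(64  * GiB, 4096, 1))   # 2.0  MiB (1 bit per page)
print(tracking_ram(256 * GiB, 4096, 1))   # 8.0  MiB (256 GiB drive)
print(tracking_ram(256 * GiB, 512,  1))   # 64.0 MiB (512-byte pages)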
