The Anatomy of an SSD

Let’s meet Mr. N-channel MOSFET again:

Say Hello

This is the building block of NAND-flash; one transistor is required per cell. A single NAND-flash cell can store either one or two bits of data. If it stores one, it's called Single Level Cell (SLC) flash; if it stores two, it's Multi Level Cell (MLC) flash. Both are physically made the same way; in fact, nothing physically separates MLC from SLC flash. It's just a matter of how the data is stored in and read from the cell.


SLC flash (left) vs. MLC flash (right)

Flash is read from and written to in a guess-and-test fashion. You apply a voltage to the cell and check to see how it responds. You keep increasing the voltage until you get a result.
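Conceptually, the cell's stored charge shifts the transistor's threshold voltage, and the controller compares that threshold against reference points: one reference (two levels) for SLC, three references (four levels) for MLC. Here's a toy sketch of the idea in Python; the voltage numbers are invented for illustration, not taken from a real part:

# Toy model of a NAND read: find which voltage band the cell's
# threshold voltage falls into. All voltages are made up.

SLC_REFS = [2.0]              # 1 reference -> 2 levels -> 1 bit
MLC_REFS = [1.0, 2.0, 3.0]    # 3 references -> 4 levels -> 2 bits

def read_cell(threshold_v, refs):
    """Return the index of the voltage band the cell sits in."""
    band = 0
    for ref in refs:
        if threshold_v < ref:
            break
        band += 1
    return band

# One comparison suffices for SLC; MLC may need up to three, which is
# roughly why its random reads take twice as long (50us vs. 25us).
print(read_cell(2.4, SLC_REFS))   # -> 1 (the single stored bit)
print(read_cell(2.4, MLC_REFS))   # -> 2 (band 2 of 4 encodes two bits)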

              SLC NAND flash    MLC NAND flash
Random Read   25 µs             50 µs
Erase         2 ms per block    2 ms per block
Programming   250 µs            900 µs

 

With four voltage levels to distinguish, MLC flash takes around 3x longer to write to than SLC. On the flip side, you get twice the capacity at the same cost. Because of this tradeoff, and because even MLC flash is more than fast enough for an SSD, you'll see MLC used in desktop SSDs while SLC is reserved for enterprise-level server SSDs.


Cells are strung together in arrays

So a single cell stores either one or two bits of data, but where do we go from there? Groups of cells are organized into pages, the smallest structure that's readable/writable in an SSD. Today 4KB pages are standard on SSDs.

Pages are grouped together into blocks; today it's common to have 128 pages in a block (512KB per block). A block is the smallest structure that can be erased in a NAND-flash device. So while you can read from and write to a page, you can only erase a block (128 pages at a time). This is where many of the SSD's problems stem from; I'll repeat it later because it's one of the most important parts of understanding SSDs.
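To see why this matters, walk through what a simple controller must do to change a single 4KB page in place. A sketch of that worst case, using the 128-pages-per-block figure above (real controllers mitigate this with spare blocks and remapping):

PAGE_KB = 4
PAGES_PER_BLOCK = 128   # 512KB per block, as above

def rewrite_one_page_naive():
    """Update one page when erases only happen a whole block at a time."""
    pages_read = PAGES_PER_BLOCK        # buffer the entire block
    blocks_erased = 1                   # ~2ms erase of all 128 pages
    pages_written = PAGES_PER_BLOCK     # reprogram everything, with the
                                        # one changed page swapped in
    return pages_read, blocks_erased, pages_written

r, e, w = rewrite_one_page_naive()
print(f"{PAGE_KB}KB update cost: read {r} pages, erase {e} block, "
      f"program {w} pages ({w * PAGE_KB}KB written for 4KB of new data)")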


Arrays of cells are grouped into a page, arrays of pages are grouped into blocks

Blocks are then grouped into planes, and you’ll find multiple planes on a single NAND-flash die.

The combining doesn't stop there; you'll usually find one, two, or four die per package. So while you may see a single NAND-flash IC on a board, there may actually be two or four die inside that package. Multiple ICs can also be stacked on top of one another to minimize board real estate usage.
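Multiplying the hierarchy out gives the capacity of a die, and packages scale from there. The geometry below is purely illustrative, not a specific part:

# Hypothetical NAND geometry -- illustrative numbers, not a real chip.
PAGE_KB          = 4
PAGES_PER_BLOCK  = 128    # 512KB blocks
BLOCKS_PER_PLANE = 2048
PLANES_PER_DIE   = 2
DIE_PER_PACKAGE  = 4

block_kb   = PAGE_KB * PAGES_PER_BLOCK
plane_mb   = block_kb * BLOCKS_PER_PLANE // 1024
die_gb     = plane_mb * PLANES_PER_DIE // 1024
package_gb = die_gb * DIE_PER_PACKAGE

print(f"block {block_kb}KB, plane {plane_mb}MB, "
      f"die {die_gb}GB, package {package_gb}GB")
# -> block 512KB, plane 1024MB, die 2GB, package 8GB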

 

Comments

  • tirez321 - Wednesday, March 18, 2009

    I can kinda see that it wouldn't now.
    Because there would still be states there regardless.
    But if you could inform the drive that it is deleted somehow, hmm.

  • strikeback03 - Wednesday, March 18, 2009

    The subjective experiences with stuttering are more important to me than most of the test numbers. Other tests I have found of the G.Skill Titan and similar have looked pretty good, but left out mention of stuttering in use.

    Too bad, as the 80GB Intel is too small and the ~$300 for a 120GB is about the most I am willing to pay. Maybe sometime this year the OCZ Vertex or similar will get there.
  • strikeback03 - Tuesday, March 24, 2009

    When I wrote that, the Newegg price for the 120GB Vertex was near $400. Now they have it for $339 with a $30 MIR. Now that's progress.
  • kamikaz1k - Wednesday, March 18, 2009

    The latency times are switched... in case you wanted to know.
    Also, first post ^^ hello!
  • GourdFreeMan - Wednesday, March 18, 2009

    It seems rather premature to assume the ATA TRIM command will significantly improve the SSD experience on the desktop. If you were to use TRIM to rewrite a nonempty physical block, you do not avoid the 2ms erase penalty when more data is written to that block later on and instead simply add the wear of another erase cycle. TRIM, then, is only useful for performance purposes when an entire 512 KiB physical block is free.

    A well designed operating system would have to keep track of both the physical and logical maps of used space on an SSD, and only issue TRIM when deletion of a logical cluster coincides with the freeing of an entire physical block. Issuing TRIMs at any other time would only hurt performance. This means the OS will have significantly fewer opportunities to issue TRIMs than you assume. Moreover, after significant usage the physical blocks will become fragmented and fewer and fewer TRIMs will be able to be issued.

    TRIM works great as long as you only deal with large files, or batches of small files contiguously created and deleted with significant temporal locality. It would greatly aid SSDs in the "used" state Anand artificially creates in this article, but on a real system where months of web browsing, Windows updates and software installing/uninstalling have occurred the effect would be less striking.

    TRIM could be mated with periodic internal (not filesystem) defragmentation to mitigate these issues, but that would significantly reduce the lifespan of the SSD...

    It seems the real solution to the SSD performance problem would be to decrease the size of the physical block... ideally to 4 KiB, as that is the most common cluster size on modern filesystems. (This assumes, of course, that the erase, read and write latencies could be scaled down linearly.)
  • Kary - Thursday, March 19, 2009

    Why use TRIM at all?!?!?

    If you have extra blocks on the drive (NOT PAGES, FULL BLOCKS) then there is no need for the TRIM command.

    1) The currently in-use BLOCK is half full
    2) More than half a block needs to be written
    3) An extra BLOCK is mapped into the system
    4) The original, half-full block is mapped out of the system... it can be erased during idle time.

    You could even bind multiple contiguous blocks this way. (I assume it's possible to simultaneously erase any of the internal groupings, from blocks on up... they probably share address lines. E.g., erase 0000200 -> just erase block #200; erase 00002*0 -> erase blocks 200 to 290. BTW, I did the addressing in base ten instead of binary just to keep it simple :)
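    A rough sketch of the remapping scheme being described, with a spare-block pool and deferred erases (the structure is invented for illustration, not any real firmware):

    # Write-by-remapping: point the logical block at a spare physical
    # block and queue the old one for erasure during idle time.
    class RemappingFTL:
        def __init__(self, total_blocks, spares):
            self.map = {i: i for i in range(total_blocks - spares)}
            self.free = list(range(total_blocks - spares, total_blocks))
            self.dirty = []                       # blocks awaiting idle erase

        def write_block(self, logical, data):
            new = self.free.pop()                 # 3) map an extra block in
            self.dirty.append(self.map[logical])  # 4) map the old one out
            self.map[logical] = new
            # ... program `data` into physical block `new` here ...

        def idle_erase(self):
            while self.dirty:
                self.free.append(self.dirty.pop())  # ~2ms erase each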
  • korbendallas - Wednesday, March 18, 2009

    Actually I think the Trim command is merely used for marking blocks as free. The OS doesn't know how the data is placed on the SSD, so it can't make an informed decision about when to forcefully erase pages. In the same way, the SSD doesn't know anything about which files are in which blocks, so you can't defrag files internally in the drive.

    So while you can't defrag files, you CAN now defrag free space, and you can improve the wear leveling because deleted data can be ignored.

    So let's say you have 10 blocks where 50% of the pages were marked deleted using the Trim command. That means you can move the data into 5 other blocks, and erase the 10 blocks. The more deleted pages there are in a block, the better a candidate it is for this procedure. And there isn't really a problem with doing this while the drive is idle, since you're just doing now something that you would have to do anyway when a write command comes.
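    A sketch of that idle-time collection pass (not any particular controller's algorithm; a trimmed/dead page is represented as None):

    def erase(block):
        """Whole-block erase -- the ~2ms operation, done while idle."""
        block[:] = [None] * len(block)

    def collect(blocks, threshold=0.5):
        """Copy live pages out of mostly-dead blocks, then erase them."""
        live = []
        for block in blocks:
            if block.count(None) / len(block) >= threshold:
                live += [p for p in block if p is not None]
                erase(block)        # reclaim now, not during a write
        return live                 # to be packed into fresh blocks

    blocks = [["a", None, "b", None], ["c", "d", "e", "f"]]
    print(collect(blocks))   # ['a', 'b'] -- the first block was reclaimed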
  • GourdFreeMan - Wednesday, March 18, 2009

    This is basically what I am arguing both for and against in the fourth paragraph of my original post, though I assumed it would be the OS'es responsibility, not the drive's.

    Do SSDs track dirty pages, or only dirty blocks? I don't think there is enough RAM on the controller to do the former...
  • korbendallas - Wednesday, March 18, 2009

    Well, let's take a look at how much storage we actually need. A page can be erased, contain data, or be marked as trimmed or deallocated.

    That's three different states, or two bits of information. Since each page is 4kB, a 64GB drive would have 16,777,216 pages. So that's 4MB of information.

    So yeah, saving the per-page state is totally feasible.
  • GourdFreeMan - Thursday, March 19, 2009

    Actually the drive only needs to know if the page is in use or not, so you can cut that number in half. It can determine a partially full block that is a candidate for defragmentation by looking at whether neighboring pages are in use. By your calculation that would then be 2 MiB.

    That assumes the controller only needs to support drives of up to 64 GiB capacity, that pages are 4 KiB in size, and that the controller doesn't need to use RAM for any other purpose.

    Most consumer SSD lines go up to 256 GiB in capacity, which would bring the total RAM needed up to 8 MiB using your assumption of a 4 KiB page size.

    However, both hard drives and SSDs use 512 byte sectors. This does not necessarily mean that internal pages are therefore 512 bytes in size, but lacking any other data about internal page sizes, let's run the numbers on that assumption. To support a 256 GiB drive with 512 byte pages, you would need 64 MiB of RAM (more than any SSD line except Intel's carries) dedicated solely to this purpose.

    As I said before there are ways of getting around this RAM limitation (e.g. storing page allocation data per block, keeping only part of the page allocation table in RAM, etc.), so I don't think the technical challenge here is insurmountable. There still remains the issue of wear, however...
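    For concreteness, here is the arithmetic from this thread as one short script (same assumptions as above: 2 bits per unit for the three-state scheme, 1 bit for a plain in-use bitmap):

    def bitmap_mib(capacity_gib, page_bytes, bits_per_page):
        """RAM needed to keep per-page state, in MiB."""
        pages = capacity_gib * 2**30 // page_bytes
        return pages * bits_per_page / 8 / 2**20

    print(bitmap_mib(64, 4096, 2))    # 4.0  (two bits per 4KiB page)
    print(bitmap_mib(64, 4096, 1))    # 2.0  (in-use bit only)
    print(bitmap_mib(256, 4096, 1))   # 8.0
    print(bitmap_mib(256, 512, 1))    # 64.0 (512 byte pages)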
