Live Long and Prosper: The Logical Page

Computers are all about abstraction. In the early days of computing you had to write assembly code to get your hardware to do anything. Programming languages like C and C++ created a layer of abstraction between the programmer and the hardware, simplifying the development process. The key word there is simplification. You can be more efficient writing directly for the hardware, but it’s far simpler (and much more manageable) to write high level code and let a compiler optimize it.

The same principles apply within SSDs.

The smallest writable location in NAND flash is a page, but that doesn't mean a page is the smallest size a controller can choose to write. Today I'd like to introduce the concept of a logical page: an abstraction of a physical page in NAND flash.

Confused? Let's start with a hopefully helpful diagram (I'm no artist):

On one side of the fence we have how software views storage: as a long list of logical block addresses (LBAs). It's a bit more complicated than that, since a traditional hard drive is faster at certain LBAs than others, but to keep things simple we'll ignore that.

On the other side we have how NAND flash stores data, in groups of cells called pages. These days a 4KB page size is common.

In reality there's no fence separating the two; instead there's a lot of logic, several buses and, eventually, the SSD controller. The controller determines how LBAs map to NAND flash pages.

The most straightforward way for the controller to write to flash is by writing in pages. In that case the logical page size would equal the physical page size.

Unfortunately, there's a huge downside to this approach: tracking overhead. If your logical page size is 4KB, then an 80GB drive has no fewer than twenty million logical pages to keep track of (20,971,520, to be exact). You need a fast controller to sort through and deal with that many pages, a lot of storage to hold the mapping tables, and larger caches/buffers.
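
To make that concrete, here's a minimal sketch in plain C (not any vendor's actual firmware) of a flat page-level mapping table: one 32-bit entry per 4KB logical page. The entry count, and the roughly 80MB of table it implies, fall straight out of the arithmetic above.

```c
#include <stdint.h>
#include <stdio.h>

#define LOGICAL_PAGE_SIZE 4096ull                 /* 4KB logical page           */
#define DRIVE_CAPACITY    (80ull << 30)           /* 80GB (binary) of NAND      */
#define NUM_PAGES (DRIVE_CAPACITY / LOGICAL_PAGE_SIZE) /* 20,971,520 entries    */

/* One entry per logical page: the physical NAND page currently holding it. */
static uint32_t page_map[NUM_PAGES];

int main(void)
{
    /* Writing logical page 5 to a new location is just a table update... */
    page_map[5] = 123456;

    /* ...and a read looks the physical location back up. */
    printf("logical page 5 -> physical page %u\n", page_map[5]);

    printf("entries: %llu, table size: %llu MB\n",
           (unsigned long long)NUM_PAGES,
           (unsigned long long)(NUM_PAGES * sizeof(uint32_t)) / (1024 * 1024));
    return 0;
}
```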

The benefit of this approach, however, is very high 4KB write performance. If the majority of your writes are 4KB in size, this approach will yield the best performance.

If you don't have the expertise, time or support structure to build a big honkin' controller that can handle page-level mapping, you move to a larger logical page size. One example would be making your logical page equal to an erase block (128 x 4KB pages). This significantly reduces the number of pages you need to track and optimize around; instead of 20,971,520 entries, you now have only 163,840. All of your controller's internal structures shrink in size and you don't need as powerful a microprocessor inside the controller.
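
Here's the same sketch reworked for block-level mapping (again hypothetical, same 80GB drive as above): the table shrinks by the 128:1 ratio, and a logical page address becomes a table lookup plus a fixed offset within the block.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_BLOCK 128u         /* 128 x 4KB pages = one 512KB erase block */
#define NUM_PAGES       20971520ull  /* from the 80GB example above             */
#define NUM_BLOCKS (NUM_PAGES / PAGES_PER_BLOCK)        /* 163,840 entries      */

static uint32_t block_map[NUM_BLOCKS];  /* one entry per 512KB logical page */

int main(void)
{
    /* Only the block's location is tracked; the page's position inside
     * the block is implied by the address itself. */
    uint32_t logical_page = 1000;
    uint32_t phys_block   = block_map[logical_page / PAGES_PER_BLOCK];
    uint32_t offset       = logical_page % PAGES_PER_BLOCK;

    printf("logical page %u -> block %u, page %u within it\n",
           logical_page, phys_block, offset);
    printf("entries to track: %llu (vs. 20,971,520)\n",
           (unsigned long long)NUM_BLOCKS);
    return 0;
}
```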

The benefit of this approach is very high sequential write performance on large files. If you're streaming large chunks of data, big logical pages are optimal. You'll find that most flash controllers that come from the digital camera space are optimized for this sort of access pattern, where you're writing 2MB to 12MB images all the time.

Unfortunately, the sequential write performance comes at the expense of poor small-file write speed. Remember that writing to MLC NAND flash already takes roughly 3x as long as reading; forcing small writes through a controller that only deals in large logical pages makes the penalty far worse. If you want to write an 8KB file, the controller will have to write 512KB (in this case), since that's the smallest write it knows how to perform. Write amplification goes up considerably.
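
To put a number on it, here's a back-of-the-envelope sketch in C: if every host write is rounded up to whole logical pages, write amplification is simply flash bytes written divided by host bytes requested.

```c
#include <stdio.h>

/* Write amplification when the controller rounds every host write up to
 * whole logical pages: flash bytes written / host bytes requested. */
static double write_amp(unsigned page_kb, unsigned write_kb)
{
    unsigned pages = (write_kb + page_kb - 1) / page_kb;  /* round up */
    return (double)(pages * page_kb) / (double)write_kb;
}

int main(void)
{
    printf("512KB logical page, 8KB write: %gx\n", write_amp(512, 8)); /* 64x */
    printf("  4KB logical page, 8KB write: %gx\n", write_amp(4, 8));   /*  1x */
    return 0;
}
```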

Remember the first OCZ Vertex drive, based on the Indilinx Barefoot controller? Its logical page size was equal to a 512KB block. OCZ asked for a firmware revision that enabled page-level mapping, and Indilinx responded. The result was much improved 4KB write performance:

Iometer 4KB Random Writes, IOqueue=1, 8GB sector space

                         Logical Block Size = 128 Pages   Logical Block Size = 1 Page
Pre-Release OCZ Vertex   0.08 MB/s                        8.2 MB/s

Comments

  • nemitech - Monday, August 31, 2009 - link

    Oops - not eBay - it was Newegg.
  • Loki726 - Monday, August 31, 2009 - link

    Thanks a ton for including the pidgin compiler benchmarks. I didn't think that HD performance would make much of a difference (linking large builds might be a different story), but it is great to have numbers to back up that intuition. Keep it up.
  • torsteinowich - Monday, August 31, 2009 - link

    Hi

    You write that the Indilinx wiper tool collects a free page list from the OS, then wipes the pages. This sounds like a dangerous operation to me since the OS might allocate some of these blocks after the tool collects the list, but before they are wiped.

    Have you received a good explanation from Indilinx about how they ensure file system integrity? As far as I know, Windows cannot temporarily switch to read-only mode on an active file system (at least not the system drive). The only way I could see this tool working safely would be by booting off a different media and accessing the file system to be trimmed offline, with a tool that correctly identifies the unused pages for the particular file system being used. I could be wrong of course; maybe Windows 7 has a system call to temporarily freeze FS writes, but I doubt it.
  • has407 - Monday, August 31, 2009 - link

    It: (1) creates a large temporary file (wiper.dat) which gobbles up all (or most) of the free space; (2) determines the LBAs occupied by that file; (3) tells the SSD to TRIM those LBAs; and then (4) deletes the temporary file (wiper.dat).

    From the OS/filesystem perspective, it's just another app and another file. (A similar technique is used by, e.g., the Sysinternals SDelete app on Windows to zero free space. For Windows you could also probably use the hooks used by defrag utilities to accomplish it, but that would be a lot more work.)
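
For the curious, the flow has407 describes sketches out roughly like this in C. All three helpers are hypothetical stand-ins for platform-specific calls (on Windows, roughly defrag-style FSCTL ioctls plus an ATA TRIM pass-through); this is not a real API.

```c
/* Hypothetical sketch of the wiper-tool flow described above. The three
 * helpers are stand-ins for platform-specific calls; not a real API. */
#include <stdio.h>

typedef struct { unsigned long long start_lba, count; } lba_range_t;

extern void   fill_free_space(FILE *f);  /* hypothetical: grow file until near-full */
extern size_t get_file_lbas(const char *path, lba_range_t *out, size_t max);
extern void   trim_lbas(const lba_range_t *ranges, size_t n);

int main(void)
{
    lba_range_t ranges[4096];

    /* 1. Create a temporary file that gobbles up (most of) the free space. */
    FILE *f = fopen("wiper.dat", "wb");
    if (!f) return 1;
    fill_free_space(f);

    /* 2. Ask the filesystem which LBAs that file occupies. */
    size_t n = get_file_lbas("wiper.dat", ranges, 4096);

    /* 3. Tell the SSD those LBAs no longer hold valid data (TRIM). */
    trim_lbas(ranges, n);

    /* 4. Delete the file; to the OS it was just another app and file. */
    fclose(f);
    remove("wiper.dat");
    return 0;
}
```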
  • cghebert - Monday, August 31, 2009 - link

    Anand,

    Great article. Once again you have outclassed pretty much every other site out there with the depth of content in this review. You should start marketing t-shirts that say "Everything I learned about SSDs I learned from AnandTech"

    I did have a question about gaming benchmarks, since you made this statement:

    " but as you'll see later on in my gaming tests the benefits of an SSD really vary depending on the game"

    But I never saw any gaming benchmarks. Did I miss something?
  • nafhan - Monday, August 31, 2009 - link

    Just wanted to say awesome review.
    I've been reading Anandtech since 2000, and while other sites have gone downhill or (apparently) succumbed to pressure from advertisers, you guys have continued to give in depth, critical reviews.
    I also appreciate that you do some real analysis instead of just throwing 10 pages of charts online.
    Thanks, and keep up the good work!
  • zysurge - Monday, August 31, 2009 - link

    Awesome amazing article. So much information, presented clearly.

    Question, though: I have an Intel G2 160GB drive coming in the next few days for my Dell D830 laptop, which will be running Windows 7 x64.

    Do I set the controller to ATA and use the Intel Matrix driver, or set it to AHCI and use Microsoft's driver? Will either provide an advantage? I realize neither will provide TRIM until Q4, but after the firmware update, both should, right?

    Thanks in advance!
  • ggathagan - Wednesday, September 16, 2009 - link

    From page 15 (Early Trim support...):
    Under Windows 7 that means you have to use a Microsoft made IDE or AHCI driver (you can't install chipset drivers from anyone else).
  • Mumrik - Monday, August 31, 2009 - link

    but I can't live with less than 300GB on that drive, and SSDs in usable sizes still cost more than high end video cards :-(

    I really hope I'll be able to pick up a 300GB drive for 100-200 bucks in a year or so, but it is probably a bit too optimistic.
  • Simen1 - Monday, August 31, 2009 - link

    This is simply wrong. Ask anyone over 10 years old if they think this mathematical statement is true or false. 80 can never equal 74.5.

    Now, someone claims that 1 GB = 10^9 B and others claim that 1 GB is 2^30 B. Who is really right? What does the G and the B mean? Who defines that?

    The answer is easy to find and document. B means byte. G stands for giga and means 10^9, not 2^30. Giga is defined in the International System of Units, SI.

    No standardization organization has _ever_ defined giga to be 2^30. But the IEC, the International Electrotechnical Commission, has defined "Gi" as 2^30. This is supposed to be used for digital storage so people won't be confused by all the misunderstandings around this. Misunderstandings that mainly come from Microsoft and quite a few other big software vendors, companies that ignore the mathematical errors in their software when they claim that 80 GB = 74.5 GB, and ignore the international standards on how to abbreviate large numbers.
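
Simen1's arithmetic is easy to verify; the same byte count expressed per SI and per IEC gives exactly the familiar 80/74.5 pair:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long bytes = 80000000000ull; /* "80 GB" as drive makers count: 80 x 10^9 */
    double gib = (double)bytes / (1ull << 30); /* the same bytes in GiB (2^30)             */
    printf("%llu bytes = 80.0 GB = %.1f GiB\n", bytes, gib);  /* prints 74.5 */
    return 0;
}
```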
