Live Long and Prosper: The Logical Page

Computers are all about abstraction. In the early days of computing you had to write assembly code to get your hardware to do anything. Programming languages like C and C++ created a layer of abstraction between the programmer and the hardware, simplifying the development process. The key word there is simplification. You can be more efficient writing directly for the hardware, but it’s far simpler (and much more manageable) to write high level code and let a compiler optimize it.

The same principles apply within SSDs.

The smallest writable location in NAND flash is a page, but a page isn’t necessarily the granularity at which a controller chooses to write. Today I’d like to introduce the concept of a logical page, an abstraction of a physical page in NAND flash.

Confused? Let’s start with a (hopefully) helpful diagram; I'm no artist:

On one side of the fence we have how the software views storage: as a long list of logical block addresses. It’s a bit more complicated than that, since a traditional hard drive is faster at certain LBAs than others, but to keep things simple we’ll ignore that.

On the other side we have how NAND flash stores data, in groups of cells called pages. These days a 4KB page size is common.

In reality there’s no fence separating the two; instead there’s a lot of logic, several buses and eventually the SSD controller. The latter determines how the LBAs map to the NAND flash pages.

The most straightforward way for the controller to write to flash is by writing in pages. In that case the logical page size would equal the physical page size.
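
To make the idea concrete, here's a minimal sketch of page-level mapping (Python, purely illustrative; the class and variable names are my own, not anything from a real controller's firmware). Each 4KB logical page gets its own table entry pointing at a physical NAND page:

```python
PAGE_SIZE = 4096  # bytes; a common physical NAND page size

class PageLevelMap:
    """Toy model of page-level mapping: one entry per 4KB logical page."""

    def __init__(self, capacity_bytes):
        self.num_pages = capacity_bytes // PAGE_SIZE
        # logical page number -> physical page number (None = never written)
        self.table = [None] * self.num_pages

    def write(self, logical_page, physical_page):
        # Any single 4KB logical page can be redirected to any free
        # physical page, which is what keeps small random writes cheap.
        self.table[logical_page] = physical_page

    def read(self, logical_page):
        return self.table[logical_page]
```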

Unfortunately, there’s a huge downside to this approach: tracking overhead. If your logical page size is 4KB, then an 80GB drive has no fewer than twenty million logical pages to keep track of (20,971,520 to be exact). You need a fast controller to sort through and deal with that many pages, a lot of storage to hold the mapping tables, and larger caches/buffers.
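
Some back-of-the-envelope math on that overhead (mine, not a vendor figure; the 4-byte entry size is an assumption purely for illustration):

```python
# Rough tracking overhead for page-level mapping on an 80GB drive.
CAPACITY_BYTES = 80 * 1024**3          # 80GB in binary gigabytes
PAGE_SIZE = 4096                       # 4KB logical page
ENTRY_SIZE = 4                         # assumed bytes per mapping entry

entries = CAPACITY_BYTES // PAGE_SIZE  # 20,971,520 logical pages
table_bytes = entries * ENTRY_SIZE     # 83,886,080 bytes of mapping tables

print(f"{entries:,} entries, {table_bytes // 1024**2} MB of mapping tables")
```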

The benefit of this approach, however, is very high 4KB write performance. If the majority of your writes are 4KB in size, this approach will yield the best performance.

If you don’t have the expertise, time or support structure to make a big honkin’ controller that can handle page-level mapping, you go to a larger logical page size. One such example would involve making your logical page equal to an erase block (128 x 4KB pages). This significantly reduces the number of pages you need to track and optimize around; instead of 20.9 million entries, you now have roughly 164 thousand (163,840 to be exact). All of your controller’s internal structures shrink in size and you don’t need as powerful a microprocessor inside the controller.
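
The same arithmetic shows how much the bookkeeping shrinks with block-sized logical pages (again just a sketch; the 128-pages-per-block figure comes from the article, the rest is my own math):

```python
# Entry count for block-level (512KB logical page) mapping on the same drive.
PAGES_PER_BLOCK = 128                 # 128 x 4KB pages = one 512KB erase block
CAPACITY_BYTES = 80 * 1024**3
PAGE_SIZE = 4096

page_entries = CAPACITY_BYTES // PAGE_SIZE         # 20,971,520 at page level
block_entries = page_entries // PAGES_PER_BLOCK    # 163,840 at block level

print(f"page-level:  {page_entries:,} entries")
print(f"block-level: {block_entries:,} entries")
```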

The benefit of this approach is very high large file sequential write performance. If you’re streaming large chunks of data, having big logical pages will be optimal. You’ll find that most flash controllers that come from the digital camera space are optimized for this sort of access pattern where you’re writing 2MB - 12MB images all the time.

Unfortunately, the sequential write performance comes at the expense of poor small-file write speed. Remember that writing to MLC NAND flash already takes 3x as long as reading; writing small files when your controller is built around large ones makes the penalty even worse. If you want to write an 8KB file, the controller will need to write 512KB (in this case) of data, since that’s the smallest unit it knows how to write. Write amplification goes up considerably.
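
That penalty is easy to quantify. A simplified sketch (ignoring the additional read-modify-write work a real controller would also do):

```python
# Simplified write amplification for a block-mapped (512KB logical page) drive.
LOGICAL_PAGE = 512 * 1024   # smallest unit this controller knows how to write

def write_amplification(host_write_bytes):
    # The controller rewrites whole logical pages, so round up to a full page.
    logical_pages = -(-host_write_bytes // LOGICAL_PAGE)  # ceiling division
    return (logical_pages * LOGICAL_PAGE) / host_write_bytes

print(write_amplification(8 * 1024))     # 8KB host write  -> 64.0x
print(write_amplification(2 * 1024**2))  # 2MB host write  -> 1.0x
```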

Remember the first OCZ Vertex drive based on the Indilinx Barefoot controller? Its logical page size was equal to a 512KB block. OCZ asked for firmware that enabled page-level mapping and Indilinx responded. The result was much improved 4KB write performance:

| Iometer 4KB Random Writes, IOqueue=1, 8GB sector space | Logical Block Size = 128 pages | Logical Block Size = 1 Page |
|---|---|---|
| Pre-Release OCZ Vertex | 0.08 MB/s | 8.2 MB/s |


  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Maybe I should compile these things into a book? :)

    Here are my answers about some stuff:

    1) There's a spec for how hard drive makers report capacity. They define 1GB as 1 billion bytes. This is technically correct (base 10 SI prefix as you correctly pointed out). The HDDs also physically have this much storage on them; they are made up of sequentially numbered sectors that are easily counted in a decimal number system.

    All other aspects of PC storage (e.g. cache, DRAM, NAND flash) however work in base 2 (like the rest of the PC). In these respects 1GB is defined as 1024^3 because we're dealing with a base 2 number system. There are reasons for this but it goes beyond the scope of what I'm posting :)

    Intel adheres to the same spec that the HDD makers use. But the X25-M is made up of flash, which as I just mentioned is addressed in a base 2 number system. There's more flash than user space on the drive, it's used as spare area, woohoo. I think we're both on the same page here, just saying things differently :)

    2) We'll see a 320GB drive, just not this year. I don't know that the demand is there especially given the weak economy.

    Dreams do sometimes come true... ;)

    3) Perhaps, but I don't like the idea of a drive doing anything but idling when it's supposed to be...idle. This does funny things to notebook battery life I'd think.

    4) This is true. There's also another thing you can do with the jumper (and perhaps some additional software): flash any indilinx drive with any firmware regardless of vendor :)

    5) I had to throw out a lot of data because of variations between runs. It ended up being a combination of immature drivers, immature benchmarks and some OS trickery. The setup I have now is very reliable and provides very repeatable results with very little variation. While I run everything three times, the runs are so close that you could technically do only one run per drive and still be fine.

    6) I wouldn't count WD and Seagate out just yet. It may take them a while but they won't go quietly...

    7) Samsung makes a ton of money from SSD sales to OEMs, they don't seem to care about the end user market as much. If end users start protesting Samsung drives however, things will change.

    In my opinion? Once Apple falls, the rest will follow. If Apple migrates to Intel (possible) or Indilinx (less likely), we'll see the same from the other OEMs and Samsung will be forced to change.

    Or I could be too pessimistic and we'll see better performance from Samsung before then.

    8) Agreed :)

    I'll finish here too :)

    Take care,
    Anand
  • Reven - Monday, August 31, 2009 - link

    Anand, don't listen to the guys like blyndy who diss the anthologies; I love them. You can find a basic review anywhere; it's the in-depth yet simple-to-understand stuff like these anthologies that makes me visit Anandtech all the time.

    Keep it up, dude!
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Thank you :)
  • EasterEEL - Monday, August 31, 2009 - link

    I have a couple of questions regarding the Intel® SATA SSD Firmware Update Tool (2832KB) v1.3 8/24/2009.

    Does this firmware enable TRIM within the SSD to work with Windows 7?

    If AHCI is enabled in the BIOS (but not RAID), does Windows 7 use its own drivers with TRIM? Or does it load Intel’s Matrix Storage Manager driver, which does not support TRIM as per the article note below?

    "Unfortunately if you’re running an Intel controller in RAID mode (whether non-member RAID or not), Windows 7 loads Intel’s Matrix Storage Manager driver, which presently does not pass the TRIM command. Intel is working on a solution to this and I'd expect that it'll get fixed after the release of Intel's 34nm TRIM firmware in Q4 of this year."

  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    That update does not enable TRIM. The TRIM firmware is in testing now and it will be out sometime in Q4 of this year (October - December).

    If AHCI is enabled in the BIOS and you haven't loaded Intel's MSM drivers then it will use the Windows 7 driver and TRIM will be supported.

    Take care,
    Anand
  • uberowo - Monday, August 31, 2009 - link

    I do have a question however. :D

    I am building a gaming pc, and I am buying ssd disk/s. Would I benefit from getting 2x80gb intel gen2s and using raid0? Or should I stick with a single 160gb?
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    While I haven't tested 2 x 80GB drives in RAID-0, my feeling is that a single SSD is going to be better than two in RAID going forward. As of now I don't know that anyone's TRIM firmware is going to work if you've got two drives in RAID-0.

    The perceived performance gains in RAID-0 also aren't that great on SSDs from what I've seen.

    Take care,
    Anand
  • Ardax - Monday, August 31, 2009 - link

    A naive guess would be that it depends on the workload. For lots of sequential transfers a RAID-0 should shine -- particularly on reads -- because you're spreading the transfers out over multiple SATA channels.

    Losing TRIM is a problem. Finding a controller that can handle the performance is entirely likely to be another.
  • uberowo - Monday, August 31, 2009 - link

    Thanks a lot for taking the time to answer. Not to mention making this awesome site. :)
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    You guys take the time to read it and make some truly wonderful comments, it's the least I can do :)

    -A
