A Quick Flash Refresher

DRAM is very fast. Writes happen in nanoseconds, as do CPU clock cycles, so the two get along very well. The problem with DRAM is that it's volatile storage; if the charge stored in each DRAM cell isn't periodically refreshed, it's lost. Pull the plug and whatever you stored in DRAM will eventually disappear (and unlike most uses of "eventually," this one happens in fractions of a second).

Magnetic storage, on the other hand, is not very fast. It's faster than writing trillions of numbers down on paper, but compared to DRAM it plain sucks. For starters, magnetic disk storage is mechanical - things have to physically move to read and write. Now it's impressive how fast these things can move, and how accurate and relatively reliable they are given their complexity, but to a CPU, they are slow.

The fastest consumer hard drives take about 7 milliseconds to read data off a platter. The fastest consumer CPUs can do something with that data in one hundred-thousandth of that time.
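To put that gap in perspective, here's a back-of-the-envelope calculation of how many CPU clock cycles pass while a hard drive completes a single access. The 3GHz clock is an assumption for illustration, not a figure from the article:

```python
HDD_ACCESS_S = 7e-3          # ~7 ms per random read, per the text
CPU_CLOCK_HZ = 3.0e9         # assumed ~3 GHz consumer CPU

cpu_cycle_s = 1 / CPU_CLOCK_HZ
cycles_per_access = HDD_ACCESS_S / cpu_cycle_s

print(f"{cycles_per_access:,.0f} cycles per disk access")  # ~21 million
```

In other words, the CPU could tick through roughly 21 million clock cycles in the time it takes the disk to return a single piece of data.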

The only reason we put up with mechanical storage (HDDs) is that drives are cheap, store tons of data and are non-volatile: the data is still there even when you turn them off.

NAND flash gives us the best of both worlds. It's effectively non-volatile (flash cells can lose their charge, but only after about a decade) and relatively fast (data accesses take microseconds, not milliseconds). Through electron tunneling, a charge is placed on the gate of an N-channel MOSFET. Once the charge is in there, it's there for good - no refreshing necessary.


N-Channel MOSFET. One per bit in a NAND flash chip.

One MOSFET is good for one bit. Group billions of these MOSFETs together, in silicon, and you've got a multi-gigabyte NAND flash chip.

The MOSFETs are organized into lines, and the lines into groups called pages. These days a page is usually 4KB in size. NAND flash can't be written one bit at a time; it's written at the page level, so 4KB at a time. Once you write the data, though, it's there for good. Erasing is a bit more complicated.
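Because programming happens in whole 4KB pages, even a one-byte write costs at least one full page. A minimal sketch (the helper function is mine, for illustration) of how many pages a given write must program:

```python
PAGE = 4096  # bytes per NAND page, per the text

def pages_touched(offset, nbytes):
    """Count the 4KB pages a write of nbytes at a byte offset must program."""
    first = offset // PAGE
    last = (offset + nbytes - 1) // PAGE
    return last - first + 1

print(pages_touched(0, 1))      # 1 - a single byte still costs a whole page
print(pages_touched(4000, 200)) # 2 - the write straddles a page boundary
```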

Coaxing the charge back out of the MOSFETs requires a bit more effort, and the way NAND flash works, you can't discharge a single MOSFET; you have to erase in larger groups called blocks. NAND blocks are commonly 128 pages, which means that to re-write a single page of flash you first have to erase it along with the 127 other pages in its block. And allow me to repeat myself: if you want to overwrite 4KB of data in a full block, you need to erase and re-write 512KB of data.
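That read-erase-reprogram dance can be sketched in a few lines. This is an illustrative model, not real controller firmware; the 4KB-page, 128-page-block geometry comes straight from the text:

```python
PAGE = 4096
PAGES_PER_BLOCK = 128
BLOCK = PAGE * PAGES_PER_BLOCK  # 524,288 bytes = 512KB

def rewrite_page(block_data, page_index, new_page):
    """Overwrite one 4KB page: read the whole block, erase it, reprogram it."""
    assert len(block_data) == BLOCK and len(new_page) == PAGE
    # 1. read all 128 pages into RAM
    pages = [block_data[i * PAGE:(i + 1) * PAGE] for i in range(PAGES_PER_BLOCK)]
    # 2. modify the one page we actually care about
    pages[page_index] = new_page
    # 3. (the physical block is erased here - all cells reset at once)
    # 4. reprogram all 128 pages, 512KB of writes for a 4KB change
    return b"".join(pages)
```

The punchline is in step 4: a 4KB logical update turns into 512KB of physical programming, a 128x write amplification in the worst case.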

To make matters worse, every time you program and erase a flash page you reduce its lifespan. The JEDEC spec for MLC (multi-level cell) flash is 10,000 program/erase cycles before the flash can start to fail.
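That 10,000-cycle figure sounds scary, but some quick arithmetic shows it's less dire than it seems - provided the controller spreads writes evenly. The drive capacity, daily write volume, and write amplification below are all assumed values for illustration, not numbers from the article:

```python
PE_CYCLES = 10_000            # JEDEC MLC endurance, per the text
capacity_gb = 80              # assumed drive size
host_writes_gb_per_day = 20   # assumed (heavy) daily workload
write_amplification = 1.5     # assumed controller overhead

# total data the flash can absorb if wear is spread perfectly evenly
total_endurance_gb = capacity_gb * PE_CYCLES
days = total_endurance_gb / (host_writes_gb_per_day * write_amplification)

print(f"{days / 365:.0f} years")  # ~73 years under these assumptions
```

The flip side: with no wear leveling at all, hammering the same handful of blocks could burn through those 10,000 cycles in days - which is exactly why the controller matters.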

Dealing with all of these issues requires that controllers get very crafty with how they manage writes. A good controller must split writes up among as many flash channels as possible, while avoiding writing to the same pages over and over again. It must also deal with the fact that some data is going to be updated frequently while other data will remain stagnant for days, weeks, months or even years. It has to detect all of this and organize the drive in real time, without knowing anything about how you're using your computer.
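One piece of that craftiness is wear leveling: remapping logical writes onto whichever physical block has been erased the least. This toy sketch is not any real controller's algorithm - just an illustration of the least-worn-block idea:

```python
class SimpleWearLeveler:
    """Toy wear leveler: map each logical block to the least-erased free block."""

    def __init__(self, nblocks):
        self.erase_counts = [0] * nblocks
        self.mapping = {}                  # logical block -> physical block
        self.free = set(range(nblocks))

    def write(self, logical):
        # Retire the old physical block back into the free pool
        if logical in self.mapping:
            self.free.add(self.mapping[logical])
        # Pick the free block with the fewest erases, erase and program it
        phys = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(phys)
        self.erase_counts[phys] += 1
        self.mapping[logical] = phys
        return phys
```

Rewriting the same logical block four times on a four-block device touches four different physical blocks, so each one wears by exactly one cycle instead of one block wearing by four.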

It's a tough job.

But not impossible.


295 Comments


  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    I believe OCZ cut prices to distributors that day, but the retail prices will take time to fall. Once you see X25-M G2s in stock then I'd expect to see the Indilinx drives fall in price. Resellers won't give you a break unless they have to :)

    Take care,
    Anand
  • bobjones32 - Monday, August 31, 2009 - link

    Another great AnandTech article, thanks for the read.

    Just a heads-up on the 80GB X-25m Gen2 - A day before Newegg finally had them on sale, they bumped their price listing from $230 to $250. They sold at $250 for about 2 hours last Friday, went back out of stock until next week, and bumped the price again from $250 to $280.

    So....plain supply vs. demand is driving the price of the G2 roughly $50 higher than it was listed at a week ago. I have a feeling that if you wait a week or two, or shop around a bit, you'll easily find them selling elsewhere for the $230 price they were originally going for.
  • AbRASiON - Monday, August 31, 2009 - link

    Correct, Newegg has gouged the 80gb from 229 to 279 and the 160gb from 449 to 499 :(

  • Stan Zaske - Monday, August 31, 2009 - link

    Absolutely first rate article Anand and I thoroughly enjoyed reading it. Get some rest dude! LOL
  • Jaramin - Monday, August 31, 2009 - link

    I'm wondering, if I were to use a low capacity SSD to install my OS on, but install my programs to a HDD for space reasons, just how much would that spoil the SSD advantage? All OS reads and writes would still be on the SSD, and the paging file would also be there. I'm very curious about the amount of degradation one would see relative to different use routines and apps.
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Putting all of your apps (especially frequently used ones) off of your SSD would defeat the purpose of an SSD. You'd be missing out on the ultra-fast app launch times.

    Pick a good SSD and you won't have to worry too much about performance degradation. As long as you don't stick it into a database server :)

    Take care,
    Anand
  • swedishchef - Tuesday, September 1, 2009 - link

    What if you just put your photoshop cache on a pair of Velociraptors? Would it be the same loss of benefit?

    I have the same question regarding uncompressed HD video work, where I need write speeds well over the Intel x25-m (over 240Mb/s). My assumption would be that I could enjoy the fast IO and app launch of an SSD and increase CPU performance with the SSD while keeping the files on a fast external or internal raid configuration.


    Thank you again for a brilliant article Anand.
    I have been waiting for it for a long time. Yours are the only calm words out on the net.

    Grateful Geek /Also professional image creator.
  • creathir - Monday, August 31, 2009 - link

    Great article Anand. I've been waiting for it...

    My only thoughts are, why can't Intel get their act together with the sequential business? Why can the others handle it, but they can't? To have such an awesome piece of hardware have such a nasty blemish is strange to me, especially on a Gen-2 product.

    I suppose there is some technical reason as to why, but it needs to be addressed.

    - Creathir
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    If Intel would only let me do a deep dive on their controller I'd be able to tell you :) There's more I'd like to say but I can't yet unfortunately.

    Take care,
    Anand
  • shotage - Monday, August 31, 2009 - link

    Awesome article!

    I'm intrigued with the cap on the sequential reads that Intel has on the G2 drives as well. I always thought it was strange to see even on their first gen stuff.

    I'm assuming that this cap might be in place to somehow ensure the excellent performance they are giving with random read/writes. All until TRIM finally shows up and you'll have to write up another full on review (which I eagerly await!).

    I can't wait to see what 2010 brings to the table. What with the next version of SATA and TRIM just over the horizon, I could finally get the kind of performance out of my PC that I want!!
