Why SSDs Care About What You Write: Fragmentation & Write Combining

PC Perspective's Allyn Malventano is a smart dude; read one of his articles and you'll see that. On his own, he pieced together a major aspect of how the X25-M works, and with it a key to improving SSD performance.

You'll remember from the Anthology that SSDs get their high performance by being able to write to multiple flash die across multiple channels in parallel. This works very well for very large files since you can easily split the reads and writes across multiple die/channels.

Here we go to write a 128KB file. It's split up and written across multiple channels in our tiny mock SSD:

When we go to read the file, it's read back across multiple channels and performance is, once again, excellent.
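If you want to play with the idea, here's a minimal Python sketch of striping a large write across channels. The 8-channel, 4KB-page geometry is an illustrative assumption for this toy model, not the layout of the X25-M or any real drive:

```python
# Toy model of striping a write across an SSD's channels.
# Channel count and page size are illustrative assumptions.
PAGE_SIZE_KB = 4
NUM_CHANNELS = 8

def stripe_write(file_size_kb):
    """Split a write into 4KB pages assigned round-robin to channels."""
    num_pages = file_size_kb // PAGE_SIZE_KB
    layout = [[] for _ in range(NUM_CHANNELS)]
    for page in range(num_pages):
        layout[page % NUM_CHANNELS].append(page)
    return layout

for channel, pages in enumerate(stripe_write(128)):
    print(f"channel {channel}: pages {pages}")
# Every channel gets 4 pages, so the write (and the later read)
# proceeds 8 pages at a time instead of one.
```

Run the same function on a 4KB file and only one channel has any work to do, which is exactly the problem described next.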

Remember what we talked about before, however: small-file random read/write performance is what ends up being slowest on hard drives, and it's also the sort of IO that happens most often on a PC. That's where we run into a problem. Here we go to write a 4KB file. Since 4KB is the smallest size we can write, the file isn't split up at all; it can only be written to a single channel:

As Allyn discovered, Intel and other manufacturers get around this issue by combining small writes into larger groups. Random writes rarely arrive one at a time; they come in bursts, many at once. A write-combining controller takes a group of 4KB writes, arranges them in parallel, and then writes them all at the same time.
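Here's a hedged sketch of the concept, continuing the toy model above (made-up names and an assumed 8-wide stripe; real firmware is far more involved):

```python
# Toy write-combining: pack a burst of unrelated 4KB writes into
# full stripes that can be committed across all channels at once.
NUM_CHANNELS = 8

def combine_writes(pending_lbas):
    """Group queued 4KB writes (identified by LBA) into stripes."""
    return [pending_lbas[i:i + NUM_CHANNELS]
            for i in range(0, len(pending_lbas), NUM_CHANNELS)]

# A burst of unrelated 4KB writes:
burst = [9001, 17, 523, 88, 4096, 12, 777, 3141, 64, 2048]
for stripe in combine_writes(burst):
    print("written in parallel:", stripe)
# The first 8 writes complete in roughly the time of one page write,
# but 8 unrelated LBAs now live together in the same flash blocks.
```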

This does wonders for random small-file write performance, since the burst completes as quickly as a larger sequential write would. Where it hurts is when you later overwrite that data.

In the first example where we wrote a 128KB file, look what happens if we delete the file:

Entire blocks are invalidated. Every single LBA in these blocks is now invalid, so they can quickly be cleaned.

Now look at what happens in the second example. These 4KB fragments are unrelated, so when one is overwritten the rest aren't. A few deletes later and we're left with this sort of situation:

Ugh. These fragmented blocks are a pain to deal with. Try to write to one now and you have to do a read-modify-write: read the still-valid pages out, erase the block, and write everything back along with the new data. Without TRIM support, nearly every write to these blocks requires a read-modify-write, sending write amplification through the roof. This is the downside of write combining.
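Some back-of-the-envelope arithmetic shows how bad it gets. Assume an illustrative erase block of 128 x 4KB pages (512KB); real geometries vary:

```python
# Why fragmented blocks inflate write amplification. Geometry is
# an illustrative assumption: 128 x 4KB pages per erase block.
def write_amplification(valid_pages, new_pages):
    """Pages physically written per page the host asked to write.
    Overwriting data in a block that still holds valid pages forces
    a read-modify-write: copy the valid pages out, erase, rewrite."""
    return (valid_pages + new_pages) / new_pages

# Block from the 128KB example after the delete: nothing valid left.
print(write_amplification(valid_pages=0, new_pages=1))    # 1.0

# Fragmented block: 127 unrelated pages still valid, overwrite one.
print(write_amplification(valid_pages=127, new_pages=1))  # 128.0
```

A 4KB write that costs 128 pages of flash writes is the "through the roof" case; TRIM helps by telling the controller that deleted pages aren't valid, so they never need to be copied.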

Intel's controller does its best to recover from these situations, which is why its used-state random write performance remains very good. Samsung's controller isn't nearly as good at recovering from them.

Now you can see why performing a sequential write over the span of the drive fixes a fragmented drive: it turns the heavily fragmented case into one that's easy to deal with. You can also see why SSD performance degrades over time. You don't spend all day writing large sequential files to your disk; you write a mix of random and sequential, large and small files.
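Under the same toy geometry, a quick sketch of why the sequential pass helps: once LBAs are laid down in order, consecutive LBAs share erase blocks, so later invalidations line up on block boundaries:

```python
# After a drive-spanning sequential write, LBA -> block is simple
# division (toy model, 128 x 4KB pages per block).
PAGES_PER_BLOCK = 128

def blocks_touched(start_lba, num_pages):
    """Erase blocks holding a run of consecutive LBAs."""
    return sorted({(start_lba + i) // PAGES_PER_BLOCK
                   for i in range(num_pages)})

# Deleting a sequentially-written 1MB file (256 x 4KB pages) frees
# two whole blocks -- no read-modify-write needed to reuse them.
print(blocks_touched(0, 256))  # [0, 1]
```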

Comments

  • zodiacfml - Wednesday, September 2, 2009 - link

    Very informative; it answered everything on my mind and more. Hope to see this again in the future when these drive capacities are around $100.
  • mgrmgr - Wednesday, September 2, 2009 - link

    Any idea if the (mid-Sept release?) OCZ Colossus's internal RAID setup will handle the problem of RAID controllers not being able to pass Windows 7's TRIM command to the SSD array? I'm intent on getting a new Photoshop machine with two SSDs in RAID-0 as soon as Win7 releases, but the word here and elsewhere so far is that RAID will block the TRIM function.
  • kunedog - Wednesday, September 2, 2009 - link

    All the Gen2 X25-M 80GB drives are apparently gone from Newegg . . . so they've marked up the Gen1 drives to $360 (from $230):
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Unbelievable.
  • gfody - Wednesday, September 2, 2009 - link

    What happened to the Gen2 160GB on Newegg? For a month the ETA was 9/2 (today) and now it's as if they never had it in the first place. The product page has been removed.

    It's like Newegg is holding the Gen2 drives hostage until we buy out their remaining stock of Gen1 drives.
  • iwodo - Tuesday, September 1, 2009 - link

    I think it acts as a good summary. However, someone wrote last time about the Intel drive handling random read/write extremely poorly during sequential read/write.

    Has Anand investigated yet?

    I am hoping next Gen Intel SSD coming in Q2 10 will bring some substantial improvement.
  • statik213 - Tuesday, September 1, 2009 - link

    Does the RAID controller propagate TRIM commands to the SSD? Or will having RAID negate TRIM?
  • justaviking - Tuesday, September 1, 2009 - link

    Another great article, Anand! Thanks, and keep them coming.

    If this has already been discussed, I apologize. I'm still exhausted from reading the wonderful article, and have not read all 17 pages of comments.

    On PAGE 3, it talks about the trade-off of larger vs. smaller pages.

    I wonder if it would be feasible to make a hybrid drive, with a portion of the drive using small pages for faster performance when writing small files, and the majority of it being larger pages to keep the management of the drive reasonable.

    Any file could be written anywhere, but the controller would bias small writes to the small pages, and large writes to the large pages.

    Externally it would appear as a single drive, of course, but deep down in the internals, it would essentially be two drives. Each of the two portions would be tuned for maximum performance in different areas, but able to serve as backup or overflow if the other portion became full or ever got written to too many times.

    Interesting concept? Or a hare-brained idea by an ignorant amateur?
  • CList - Tuesday, September 1, 2009 - link

    Great article, wonderful to see insightful, in depth analysis.

    I'd be curious to hear anyone's thoughts on the implications of running virtual hard disk files on SSDs. I do a lot of work these days on virtual machines, and I'd love to get them feeling snappier - especially on my laptop, which is limited to 4GB of RAM.

    For example;
    What would the constant updates of those vmdk (or "vhd") files do to the disk's lifespan?

    If the OS hosting the VM is Windows 7 but the virtual machine runs WinServer2003, will the TRIM command be used properly?

    Cheers,
    CList
  • pcfxer - Tuesday, September 1, 2009 - link

    Great article!

    "It seems that building Pidgin is more CPU than IO bound.."

    Obviously, Mr. Anand doesn't understand how compilers work ;). Compilers will always be CPU- and memory-bound; reduce your computer's memory to, say, 256MB (or lower) and you'll see what I mean. The levels of recursion necessary to follow the productions (the grammars that define the language) use up memory but would rarely touch the drive unless the OS had terrible resource management. :0
  • CMGuy - Wednesday, September 2, 2009 - link

    While I can't comment on the specifics of software compilers, I know that faster disk IO makes a big difference when you're performing a full build (compilation and packaging) of software.
    IDEs these days spend a lot of their time reading/writing small files (that's a lot of small, random disk IO), and a good SSD can make a huge difference.
