One Tough Act to Follow

What have I gotten myself into? The SSD Anthology I wrote back in March was read over 2 million times. Microsoft linked it, Wikipedia linked it, my esteemed colleagues in the press linked it, Linus freakin Torvalds linked it.

The Anthology took me six months to piece together; I wrote and re-wrote parts of that article more times than I'd care to admit. And today I'm charged with the task of producing its successor. I can't do it.

The article that started all of this was the Intel X25-M review. Intel gave me gold with that drive; the article wrote itself, the X25-M was awesome, everything else in the market was crap.


Intel's X25-M SSDs: The drives that started a revolution

The Anthology all began with a spark: the SSD performance degradation issue. It took a while to put together, but the concept and the article were handed to me on a silver platter: just use an SSD for a while and you’ll spot the issue. I just had to do the testing and writing.


OCZ's Vertex: The first Indilinx drive I reviewed, the drive that gave us hope there might be another.

But today, as I write this, the words just aren't coming to me. The material is all there, but it just seems so mature and, at the same time, so clouded and so done. We've found the undiscovered country, we've left no stone unturned, everyone knows how these things work - now SSD reviews join the rest as a bunch of graphs and analysis, hopefully with witty commentary in between.

It's a daunting, no, deflating task to write what I view as the third part in this trilogy of articles. JMicron is all but gone from the market for now, Indilinx came and improved (a lot) and TRIM is nearly upon us. Plus, we all know how trilogies turn out. Here's hoping that this one doesn't have Ewoks in it.

What Goes Around, Comes Around

No, we're not going back to the stuttering crap that shipped for months before Intel released the X25-M last year, but we are going back in terms of how we have to look at SSD performance.

In my X25-M review the focus was on why the mainstream drives at the time stuttered and why the X25-M didn't. Performance degradation over time didn't matter because all of the SSDs on the market were slow out of the box; as I later showed, the pre-Intel MLC SSDs didn't perform worse over time - they sucked all of the time.

Samsung and Indilinx emerged with high-performance, non-stuttering alternatives, and then we once again had to thin the herd. Simply not stuttering wasn't enough; a good SSD had to maintain a reasonable amount of performance over the life of the drive.

The falling performance was actually a side effect of the way NAND flash works. You write in pages (4KB) but you can only erase in blocks (128 pages, or 512KB); thus SSDs don't erase data when you delete it, only when they run out of space to write to internally. When that time comes, you run into a nasty situation called a read-modify-write. Here, even to write just 4KB, the controller must read an entire block (512KB), update the single page, and write the entire block back out. Instead of writing 4KB, the controller actually has to write 512KB - a much slower operation.
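
To put rough numbers on that penalty, here's a minimal, hypothetical sketch - just the arithmetic from the paragraph above, not anything resembling actual controller firmware:

```python
# Illustrative only: 4KB pages, 128 pages (512KB) per erase block, as described above.
PAGE_KB = 4
PAGES_PER_BLOCK = 128
BLOCK_KB = PAGE_KB * PAGES_PER_BLOCK  # 512

def kb_programmed(host_write_kb: int, free_page_available: bool) -> int:
    """KB of NAND the controller actually programs to satisfy a small host write."""
    if free_page_available:
        # New drive: an empty page is waiting, so just program the 4KB.
        return host_write_kb
    # "Used" drive: read the full 512KB block, merge in the new 4KB page,
    # erase the block, then program all 512KB back out.
    return BLOCK_KB

print(kb_programmed(PAGE_KB, True))   # 4   -> fresh drive
print(kb_programmed(PAGE_KB, False))  # 512 -> read-modify-write, 128x the work
```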

I simulated this worst-case scenario by writing to every single page on the SSDs before running any benchmarks. The performance degradation ranged from negligible to significant:

PCMark Vantage HDD Score           New      "Used"
Corsair P256 (Samsung MLC)         26607    18786
OCZ Vertex Turbo (Indilinx MLC)    26157    25035
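
For reference, producing that "used" state is conceptually as simple as it sounds: touch every LBA on the drive before benchmarking. A rough sketch of the idea, assuming a Linux box and a placeholder device node (/dev/sdX) - this isn't the actual tooling behind these numbers, and running it will wipe whatever is on that drive:

```python
import os

DEVICE = "/dev/sdX"        # placeholder: the SSD under test (all data will be lost)
CHUNK = 4 * 1024 * 1024    # write 4MB at a time

fd = os.open(DEVICE, os.O_WRONLY)
try:
    size = os.lseek(fd, 0, os.SEEK_END)   # a block device reports its capacity here
    os.lseek(fd, 0, os.SEEK_SET)
    pattern = b"\xA5" * CHUNK
    written = 0
    while written < size:                 # fill every page so none are left empty
        n = min(CHUNK, size - written)
        written += os.write(fd, pattern[:n])
finally:
    os.close(fd)
```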

 

So that's how I approached today's article: filling the latest generations of Indilinx, Intel and Samsung drives before testing them. But, my friends, things have changed.

The table below shows the performance of the same drives showcased above, but after running the TRIM command (or a close equivalent) against their contents:

PCMark Vantage HDD Score           New      "Used"   After TRIM/Idle GC   % of New Perf
Corsair P256 (Samsung MLC)         26607    18786    24317                91%
OCZ Vertex Turbo (Indilinx MLC)    26157    25035    26038                99.5%
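
The last column is simply the post-TRIM (or idle garbage collected) score divided by the brand-new score; a quick sanity check with the numbers from the table above:

```python
# Figures from the table above: (new score, score after TRIM/idle GC)
scores = {
    "Corsair P256 (Samsung MLC)":      (26607, 24317),
    "OCZ Vertex Turbo (Indilinx MLC)": (26157, 26038),
}
for drive, (new, recovered) in scores.items():
    print(f"{drive}: {recovered / new:.1%} of new performance")
# -> roughly 91% and 99.5%, as reported above
```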

 

Oh boy. I need a new way to test.

A Quick Flash Refresher
Comments

  • shabby - Monday, August 31, 2009 - link

    The 80gig g2 is $399 now!
  • gfody - Tuesday, September 1, 2009 - link

    The gen2 80gb is at $499 as of 12:00AM PST
  • maxfisher05 - Monday, August 31, 2009 - link

    As of right now (8/31) newegg has the 160GB Intel G2 listed at $899!!!!!!!!!!!!!!!!!!! To quote Anand "lolqtfbbq!"
  • siliq - Monday, August 31, 2009 - link

    Great article! Love reading this. Thanks Anand.

    We gather from this article that all the pain-in-@$$ with SSDs comes from the inconsistency between the size of the read-write page and the erase block. When SSDs are reading/writing a page it's 4K, but the minimum size of an erase operation is 512K. Just wondering: is there any possibility that manufacturers can come up with NAND chips that allow controllers to directly erase a 4K page without all the extra hassles? What are the obstacles that prevent manufacturers from achieving this today?
  • bji - Tuesday, September 1, 2009 - link

    It is my understanding that flash memory has already been pushed to its limit of efficiency in terms of silicon usage in order to allow for the lowest possible per-GB price. It is much cheaper to implement sophisticated controllers that hide the erase penalty as much as possible than it is to "fix" the issue in the flash memory itself.

    It is absolutely possible to make flash memory that has the characteristics you describe - 4K erase blocks - but it would require a very large number of extra gates in silicon and this would push the cost up per GB quite a bit. Just pulling numbers out of the air, let's say it would cost 2x as much per GB for flash with 4K erase blocks. People already complain about the high cost per GB of SSD drives (well I don't - because I don't steal software/music/movies so I have trouble filling even a 60 GB drive), I can't imagine that it would make market sense for any company to release an SSD based on flash memory that costs $7 per GB, especially when incredible performance can be achieved using standard flash, which is already highly optimized for price/performance/size as much as possible, as long as a sufficiently smart controller is used.

    Also - you should read up on NOR flash. This is a different technology that already exists, that has small erase blocks and is probably just what you're asking for. However, it uses 66% more silicon area than equivalent NAND flash (the flash used in SSD drives), so it is at least 66% more expensive. And no one uses it in SSDs (or other types of flash drives AFAIK) for this reason.
  • bji - Tuesday, September 1, 2009 - link

    Oh, I just noticed in the Wikipedia article about NOR flash that typical NOR flash erase block sizes are also 64, 128, or 256 KB. So the erase blocks are just as problematic there as in NAND flash. However, NOR flash is more easily bit-addressable, so it would avoid some of the other penalties associated with NAND that the smart controllers have to work around.

    So making a NAND or NOR flash with 4K erase blocks would probably make them both 2X - 4X more expensive. No one is going to do that - it would push the price back out to where SSDs were not viable, just as they were a few years ago.
  • siliq - Tuesday, September 1, 2009 - link

    Amazing answers! Thank you very much
  • morrie - Monday, August 31, 2009 - link

    My laptop is limited to 4 GB swap. While that's enough for 99% of Linux users, I don't shut down my laptop, it's used as a desktop with dozens of apps running and hundreds of browser tabs. Therefore, after a few months of uptime, memory usage climbs above 4 GB. I have two hard drives in the laptop, and set up a software raid0 1GB swap partition, but I went with software raid1 for the other swap partition. So once the ram is used up for swap, the laptop slows noticeably, but after the raid0 swap partition fills up, the raid1 partition really slows it down. Once that fills up, it hits swap files (non raid) which slow it down more. But thanks to the kernel and the way swappiness works, once about 4 GB of Ram plus about 3 GB of physical swap is used, it really slows. I can gain a bit of speed by adding some physical swap files to increase the ratio of physical swap to ram swap (thus changing swappiness through other means), but this only works for another 1 GB of ram.

    No lectures or advice please, on how I'm using up memory or about how 4GB is more than sufficient, my uptimes are in the hundreds of days on this laptop and thanks to ADD/limited attention span, intermittent printer availability for printing out saved browser tabs and other reasons (old habits dying hard being one), my memory usage is what it is.

    So, the big question is, since the laptop has an eSATA port, can I install one of these SSD drives in an external SATA tray, connected via eSATA to the laptop, and move the physical swap partitions to the SSD? I believe that swap on the SSD would be a lot faster, even over the eSATA wire, than swap on the drives in the laptop (they're 7200 RPM drives btw). I'm aware that using the SSD for swap would shorten its life, but if it lasts a year until faster laptops with more memory are available (and I get used to virtual machines and saving state so I can limit open browser windows), I'll be happy.

    Buying two of the drives and using them raided in the laptop is too costly right now; when prices drop, that'll be a solution for this current laptop.

    External SSD over eSATA for Linux swap on a laptop? Faster than my current setup?
  • hpr - Monday, August 31, 2009 - link

    Sounds like you have some very small memory leak going on there.

    Have you tried that Firefox plugin that lets you keep your tabs around without actually holding each tab open in memory?


    TooManyTabs
    https://addons.mozilla.org/en-US/firefox/addon/942...

    Have fun filling up thousands of tabs and having low memory usage.
  • gstrickler - Monday, August 31, 2009 - link

    You should be able to use an SSD in an eSATA case, and yes, it should be faster than using your internal 7200 RPM drives. You probably want to use an Intel SSD for that (see page 19 of the article and note that the Intel drives don't drop off dramatically with usage).

    If you don't need the storage of your two internal 7200 RPM drives (or if you can get a sufficiently large SSD), you might be better off replacing one of them with an SSD and reconsidering how you're allocating all your storage.

    As for printer availability, seems to me it would make more sense to use a CUPS based setup to create PDFs rather than having jobs sit in a print queue indefinitely. Then, print the PDFs at your convenience when you have a printer available. I don't know how your printing setup currently works, but it sounds like doing so would reduce your swap space usage.
