A Wear Leveling Refresher: How Long Will My SSD Last?

As if everything I’ve talked about thus far wasn’t enough to deal with, there’s one more major issue that directly impacts the performance of these drives: wear leveling.

Each MLC NAND cell can be erased ~10,000 times before it stops reliably holding a charge. You can switch to SLC flash and raise that figure to 100,000, but your cost just went up 2x. For these drives to succeed in the consumer space, and to do so quickly, they must use MLC flash.


SLC (left) vs. MLC (right) flash

Ten thousand erase/write cycles isn’t much, yet SSD makers are guaranteeing their drives for anywhere from 1 to 10 years. On top of that, SSD makers across the board are calling their drives more reliable than conventional hard drives.

The only way any of this is possible is through some clever algorithms and by banking on the fact that desktop users don’t do a whole lot of writing to their drives.

Think about your primary hard drive. How often do you fill it to capacity, erase it, and start over again? Intel estimates that even if you wrote 20GB of data to your drive per day, its X25-M would last you at least five years. Realistically, you’ll write far less than that on a consistent basis.
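That estimate is easy to sanity check. Here’s a minimal sketch, assuming an 80GB X25-M and the ~10,000-cycle MLC endurance figure from above; the capacity and cycle count are my assumptions, not Intel’s published math:

    # Rough sanity check of the 20GB/day-for-5-years estimate.
    # Assumed: an 80GB X25-M and ~10,000 erase cycles per MLC cell.
    daily_writes_gb = 20
    years = 5
    host_writes_gb = daily_writes_gb * 365 * years   # 36,500 GB from the OS
    raw_endurance_gb = 80 * 10_000                   # 800,000 GB of NAND writes
    print(f"Headroom: {raw_endurance_gb / host_writes_gb:.0f}x")  # ~22x

Even with substantial write amplification, the drive stays comfortably inside its rated endurance over those five years.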

My personal desktop saw about 100GB worth of writes (whether from the OS or elsewhere) across my SSD and my data drive over the past 14 days. That’s a bit over 7GB of writes per day. Let’s do some basic math:

My SSD
  NAND Flash Capacity                 256 GB
  Formatted Capacity in the OS        238.15 GB
  Available Space After OS and Apps   185.55 GB
  Spare Area                          17.85 GB

If I never install another application and just go about my business, my drive has 203.4GB of space to spread out those 7GB of daily writes. That means that in roughly 29 days, assuming perfect wear leveling, I will have written to every single available flash block on my drive. Tack on another 7 days if the drive is smart enough to move my static data around and wear level even more evenly. So we’re at approximately 36 days before I exhaust one of my ~10,000 write cycles. Multiply that out and it would take 360,000 days of using my machine the way I have for the past two weeks for all of my NAND to wear out; once again, assuming perfect wear leveling. That’s 986 years. Your NAND flash cells will actually lose their charge well before then, in about 10 years.
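Here is that arithmetic as a quick sketch, using only the figures above (the ~52.6GB of static data is the formatted capacity minus my available space):

    # The lifespan estimate above, using the article's own numbers.
    writes_per_day_gb = 7.0               # observed average daily writes
    dynamic_gb = 185.55 + 17.85           # available space + spare area = 203.4 GB
    static_gb = 238.15 - 185.55           # OS and apps, ~52.6 GB
    erase_cycles = 10_000                 # MLC endurance from earlier

    days_per_cycle = (dynamic_gb + static_gb) / writes_per_day_gb   # ~36.6 days
    years = days_per_cycle * erase_cycles / 365                     # ~1,000 years
    print(f"~{days_per_cycle:.0f} days per drive-wide cycle, ~{years:.0f} years")

The paragraph rounds down to 36 days per cycle, hence the 986-year figure; either way, charge retention (~10 years) gives out long before endurance does.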

This assumes a perfectly wear leveled drive, but as you can probably guess, that’s not exactly possible.

Write amplification ensures that while my OS may be writing 7GB per day to my drive, the drive itself is writing more than 7GB to its flash. Remember, writing to a full block requires a read-modify-write. Worst case scenario, I go to write 4KB and my SSD controller has to read 512KB, modify 4KB, write 512KB, and erase a whole block. While I should have consumed one write cycle across the 16,384 MLC NAND cells that hold 4KB (at two bits per cell), I will instead have knocked a cycle off all 2,097,152 cells in the block: 128 times as many.
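To put numbers on that worst case, here’s a minimal sketch; the 4KB write and 512KB block come from the example above, and the two-bits-per-cell figure is standard for MLC:

    # Worst-case write amplification for the read-modify-write above.
    host_write_bytes = 4 * 1024        # the 4KB the OS asked to write
    block_bytes = 512 * 1024           # the 512KB block the controller rewrites
    bits_per_cell = 2                  # MLC stores two bits per cell

    cells_needed = host_write_bytes * 8 // bits_per_cell   # 16,384 cells
    cells_worn = block_bytes * 8 // bits_per_cell          # 2,097,152 cells
    print(f"{cells_needed:,} cells should wear, {cells_worn:,} actually do")
    print(f"write amplification: {block_bytes // host_write_bytes}x")      # 128x

At 128x, my 7GB of daily host writes could, in the absolute worst case, cost the flash nearly 900GB of wear per day.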

You can optimize strictly for wear leveling, but that comes at the expense of performance.

Comments

  • Anand Lal Shimpi - Monday, August 31, 2009

    Maybe I should compile these things into a book? :)

    Here are my answers about some stuff:

    1) There's a spec for how hard drive makers report capacity: they define 1GB as 1 billion bytes. This is technically correct (a base 10 SI prefix, as you correctly pointed out). The HDDs also physically have this much storage on them; they are made up of sequentially numbered sectors that are easily counted in a decimal number system.

    All other aspects of PC storage (e.g. cache, DRAM, NAND flash), however, work in base 2 (like the rest of the PC). In these respects 1GB is defined as 1024^3 bytes because we're dealing with a base 2 number system. There are reasons for this but it goes beyond the scope of what I'm posting :)

    Intel adheres to the same spec that the HDD makers use. But the X25-M is made up of flash, which as I just mentioned is addressed in a base 2 number system. There's more flash than user space on the drive; the difference is used as spare area, woohoo. I think we're both on the same page here, just saying things differently :) (See the quick sketch after this comment for the math.)

    2) We'll see a 320GB drive, just not this year. I don't know that the demand is there, especially given the weak economy.

    Dreams do sometimes come true... ;)

    3) Perhaps, but I don't like the idea of a drive doing anything but idling when it's supposed to be...idle. That would do funny things to notebook battery life, I'd think.

    4) This is true. There's also another thing you can do with the jumper (and perhaps some additional software): flash any Indilinx drive with any firmware regardless of vendor :)

    5) I had to throw out a lot of data because of variations between runs. It ended up being a combination of immature drivers, immature benchmarks and some OS trickery. The setup I have now is very reliable and provides very repeatable results with very little variation. While I run everything three times, the runs are so close that you could technically do only one run per drive and still be fine.

    6) I wouldn't count WD and Seagate out just yet. It may take them a while but they won't go quietly...

    7) Samsung makes a ton of money from SSD sales to OEMs; they don't seem to care about the end user market as much. If end users start protesting Samsung drives, however, things will change.

    In my opinion? Once Apple falls, the rest will follow. If Apple migrates to Intel (possible) or Indilinx (less likely), we'll see the same from the other OEMs, and Samsung will be forced to change.

    Or I could be too pessimistic and we'll see better performance from Samsung before then.

    8) Agreed :)

    I'll finish here too :)

    Take care,
    Anand
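
    A quick worked illustration of the decimal-vs-binary gap in point 1 above, using an 80GB drive as the (assumed) example:

        # Decimal (HDD spec) vs. binary (OS) gigabytes for an '80GB' drive.
        advertised_bytes = 80 * 10**9              # spec: 1GB = 1 billion bytes
        os_reported_gb = advertised_bytes / 2**30  # the OS counts 1GB as 1024^3 bytes
        print(f"{os_reported_gb:.1f} GB")          # -> 74.5 GB

    Same number of bytes, two counting conventions: that is the whole discrepancy.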
  • Reven - Monday, August 31, 2009

    Anand, don't listen to the guys like blyndy who diss the anthologies; I love them. You can find a basic review anywhere; it's the in-depth yet simple-to-understand stuff like these anthologies that makes me visit AnandTech all the time.

    Keep it up, dude!
  • Anand Lal Shimpi - Monday, August 31, 2009

    Thank you :)
  • EasterEEL - Monday, August 31, 2009

    I have a couple of questions regarding the Intel® SATA SSD Firmware Update Tool (2832KB) v1.3 8/24/2009.

    Does this firmware enable TRIM within the SSD to work with Windows 7?

    If AHCI is enabled in the BIOS (but not RAID), does Windows 7 use its own driver with TRIM? Or does it load Intel’s Matrix Storage Manager driver, which does not support TRIM, as per the article note below?

    "Unfortunately if you’re running an Intel controller in RAID mode (whether non-member RAID or not), Windows 7 loads Intel’s Matrix Storage Manager driver, which presently does not pass the TRIM command. Intel is working on a solution to this and I'd expect that it'll get fixed after the release of Intel's 34nm TRIM firmware in Q4 of this year."

  • Anand Lal Shimpi - Monday, August 31, 2009

    That update does not enable TRIM. The TRIM firmware is in testing now and it will be out sometime in Q4 of this year (October - December).

    If AHCI is enabled in the BIOS and you haven't loaded Intel's MSM drivers, then it will use the Windows 7 driver and TRIM will be supported.

    Take care,
    Anand
  • uberowo - Monday, August 31, 2009

    I do have a question however. :D

    I am building a gaming PC and buying an SSD (or two). Would I benefit from getting 2 x 80GB Intel Gen2 drives in RAID-0? Or should I stick with a single 160GB?
  • Anand Lal Shimpi - Monday, August 31, 2009

    While I haven't tested 2 x 80GB drives in RAID-0, my feeling is that a single SSD is going to be better than two in RAID going forward. As of now I don't know that anyone's TRIM firmware is going to work if you've got two drives in RAID-0.

    The perceived performance gains in RAID-0 also aren't that great on SSDs from what I've seen.

    Take care,
    Anand
  • Ardax - Monday, August 31, 2009

    A naive guess would be that it depends on the workload. For lots of sequential transfers, RAID-0 should shine, particularly on reads, because you're spreading the transfers out over multiple SATA channels.

    Losing TRIM is a problem. Finding a controller that can handle the performance is entirely likely to be another.
  • uberowo - Monday, August 31, 2009

    Thanks a lot for taking the time to answer. Not to mention making this awesome site. :)
  • Anand Lal Shimpi - Monday, August 31, 2009

    You guys take the time to read it and make some truly wonderful comments, it's the least I can do :)

    -A
